The Problem
KEIBIDROP mounts a virtual filesystem on your machine using FUSE. You drag files in, they get encrypted and sent to the peer. During development on an Intel Mac, the filesystem would occasionally freeze. There was no crash, no panic, and no error message; the Finder would try to list the mount point and just hang. `ls` in a terminal would block forever. The process was alive, CPU near zero, and completely unresponsive. The only way out was `kill -9` followed by `umount -f`.
Later, when Andrei started testing on Linux, the same class of bug appeared with different symptoms. Linux FUSE3 has its own threading model and its own ways of exposing lock contention. The underlying cause was the same: we were holding mutexes across slow operations.
What FUSE Does
FUSE (Filesystem in Userspace) lets you implement a filesystem as a regular program instead of a kernel module. The kernel intercepts filesystem calls (`open`, `read`, `write`, `getattr`) and forwards them to your userspace process via `/dev/fuse`.
This is powerful but comes with a critical constraint: if your userspace handler blocks, the entire filesystem blocks. The kernel is waiting for your response. Every application that touches that mount point (Finder, Spotlight, antivirus, your own code) will hang until you reply.
On macOS, macFUSE uses a limited thread pool for dispatching FUSE operations. Block a few threads and you have blocked them all. On Linux, FUSE3 can use multithreaded dispatch, but the same lock contention problems apply.
The Deadlock Pattern
After hours of staring at frozen processes, we identified a three-goroutine deadlock. The lock declarations live in `pkg/filesystem/types.go`:
- Goroutine A holds `AfmLock` (the main file metadata mutex) and is making a network call to fetch remote file metadata. The network call takes longer than expected.
- Goroutine B is a FUSE `Open()` handler fired because Finder (or the Linux VFS) is opening a file. It needs `AfmLock` to look up the file entry. Blocked, waiting for Goroutine A.
- Goroutine C is a FUSE `Read()` handler for a different file. It needs the result from Goroutine B's open to get the file handle. Blocked, waiting for Goroutine B.
The FUSE dispatch threads are now blocked. New FUSE operations queue up behind them. The mount is frozen.
```go
// The deadlock chain:
//
// Goroutine 42 [sync.Mutex.Lock]: -- holds AfmLock
//   pkg/logic/common.(*Logic).syncRemoteFiles()
//     -> network.FetchMetadata()  // SLOW
//     -> holding AfmLock the entire time
//
// Goroutine 87 [sync.Mutex.Lock]: -- waiting for AfmLock
//   pkg/filesystem.(*FS).Open()
//     -> logic.GetFileEntry()
//     -> waiting on AfmLock
//
// Goroutine 91 [chan receive]: -- waiting for Open result
//   pkg/filesystem.(*FS).Read()
//     -> waiting for file handle from Open
```
Finding It
The breakthrough came from Go's built-in profiling. We added a pprof HTTP endpoint to the debug build:
```go
import (
	"net/http"
	_ "net/http/pprof" // registers the /debug/pprof handlers on http.DefaultServeMux
)

func init() {
	go func() { _ = http.ListenAndServe("localhost:6060", nil) }()
}
```
When the filesystem froze, we could still hit `http://localhost:6060/debug/pprof/goroutine?debug=2` to get a full goroutine dump. Every blocked goroutine, every mutex, every stack trace.
The goroutine dump showed exactly the pattern described above: dozens of goroutines stuck on `sync.Mutex.Lock`, all waiting for `AfmLock`, which was held by a goroutine deep inside a network call.
We also added structured logging around every lock acquisition:
```go
log.Debug("acquiring AfmLock",
	"goroutine", runtime.NumGoroutine(),
	"caller", caller(),
	"operation", "Open",
)
fs.AfmLock.Lock()
log.Debug("acquired AfmLock", "caller", caller())
```
This let us reconstruct the lock acquisition timeline and confirm the exact ordering that led to the deadlock.
The Fix: Brief Locking
The root cause was holding `AfmLock` while performing network I/O. The fix is a pattern we call "brief locking": hold the lock only long enough to read or write shared state, then release it before doing any slow work.
You can see this pattern in the gRPC `Read` handler, which explicitly documents why it no longer holds `AfmLock` for the entire stream:
Wrong: Hold Lock During I/O
```go
func (l *Logic) syncRemoteFiles() {
	l.AfmLock.Lock()
	defer l.AfmLock.Unlock()

	// BAD: network call while holding the lock
	metadata, err := l.network.FetchMetadata()
	if err != nil {
		return
	}

	// update local state
	for _, m := range metadata {
		l.remoteFiles[m.Name] = m
	}
}
```
Right: Lock Briefly, Copy, Unlock, Then Network
```go
func (l *Logic) syncRemoteFiles() {
	// Grab only what we need under the lock
	l.AfmLock.Lock()
	currentNames := make([]string, 0, len(l.remoteFiles))
	for name := range l.remoteFiles {
		currentNames = append(currentNames, name)
	}
	l.AfmLock.Unlock()

	// Network call WITHOUT holding any lock
	metadata, err := l.network.FetchMetadata()
	if err != nil {
		return
	}

	// Re-acquire lock only to update state
	l.AfmLock.Lock()
	for _, m := range metadata {
		l.remoteFiles[m.Name] = m
	}
	l.AfmLock.Unlock()
}
```
Another example is the FUSE `Release` handler, which clears stream references under the lock, then closes the network stream outside the lock to avoid holding `OpenMapLock` during I/O.
Lock Ordering
Brief locking solves the "hold lock during slow work" problem, but you can still deadlock if two goroutines acquire multiple locks in different orders. We established a strict lock hierarchy, documented in the code:
- Level 1: `RemoteFilesLock` protects the remote file cache
- Level 2: `AfmLock` protects the core file metadata map
- Level 3: `OpenMapLock` protects the open file handle map
The rule: you may acquire a higher-numbered lock while holding a lower-numbered one, but never the reverse. If you need `RemoteFilesLock` (level 1) and you already hold `AfmLock` (level 2), you must release `AfmLock` first. This is enforced in comments at critical sites.
```go
// ALLOWED: acquire in order (1 -> 2)
l.RemoteFilesLock.Lock()
l.AfmLock.Lock()
// ... work ...
l.AfmLock.Unlock()
l.RemoteFilesLock.Unlock()

// FORBIDDEN: acquire in reverse order (2 -> 1)
l.AfmLock.Lock()
l.RemoteFilesLock.Lock() // DEADLOCK RISK
// ...
```
We documented this hierarchy in a comment at the top of the file containing the lock declarations, so that every developer working on the codebase sees it immediately.
Platform Differences
The code was logically wrong on all platforms. But the deadlock manifested differently depending on the OS and hardware.
On macOS (Intel), the freeze was total and immediate: macFUSE's small thread pool meant that two or three blocked handlers were enough to lock out the entire mount. On Linux, FUSE3's multithreaded dispatch gave more headroom, but under load the same contention patterns appeared. Andrei's Linux testing surfaced cases where the gRPC stream read handler held locks that conflicted with incoming FUSE operations; that led to PR #70, which serialized concurrent stream reads to prevent corruption on the gRPC layer.
The takeaway is straightforward: concurrency bugs exist in the code regardless of the platform. Some platforms expose them faster than others. Testing on a single OS is not enough.
Lessons Learned
- Never hold locks during network calls. This seems obvious in retrospect, but when you are deep in a codebase with complex state management, it is easy to let a mutex scope creep outward. Use `defer Unlock()` sparingly; it encourages holding locks for the entire function scope.
- Test on multiple platforms. If you only test on one OS, you will miss timing-dependent bugs. macOS and Linux have different FUSE threading models; both surfaced real issues.
- pprof is essential. The goroutine dump endpoint is the single most valuable debugging tool for Go concurrency issues. Add it to every non-trivial Go program.
- Document lock ordering. If you have more than one mutex, write down the acquisition order. Future you will thank present you when a deadlock report comes in at 11 PM.
- FUSE amplifies concurrency bugs. Because the kernel dispatches operations on multiple threads simultaneously, any lock contention in your handlers becomes a potential system freeze. Design your FUSE handlers to be as lock-free as possible.
- Structured logging around locks helps. When a deadlock does happen, having a log trail of "acquired X, waiting for Y" makes reconstruction possible instead of guesswork.
The fix took about 30 lines of code changes. Finding the bug took three days. That ratio is normal for concurrency issues, and it is exactly why defensive patterns like brief locking and lock hierarchies exist: they prevent the three-day debugging sessions.
For context: three days is fast. Before KEIBIDROP, I spent two years on a similar project where individual debugging sessions stretched across weeks, sometimes months. Concurrency bugs in filesystem code do not announce themselves. They hide behind timing windows and platform-specific thread scheduling. You stare at goroutine dumps until the pattern clicks, and sometimes it does not click for a very long time. The lock hierarchy and brief locking patterns in KEIBIDROP exist because I never want to repeat those months of waking up thinking about mutex acquisition order; every defensive pattern in the codebase is scar tissue from that experience.