Troubleshooting
Before you start
Replace `media` with your mount name.
FUSE permission issues (apps can't access the mount)
Symptoms:
- Plex/Jellyfin (or another app user) cannot read `/mnt/pfs/<mount>`.
Checklist:
- Ensure `fuse3` is installed.
- If you use `allow_other`, ensure `/etc/fuse.conf` contains:
  user_allow_other
- Ensure your config enables it:
fuse:
allow_other: true
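A quick way to verify the `/etc/fuse.conf` side of this checklist is to grep for an uncommented `user_allow_other` line. The sketch below runs against a sample file so it is safe to copy-paste; on a real host, point `conf` at `/etc/fuse.conf`.

```shell
# Check that user_allow_other is enabled (demo uses a sample file;
# set conf=/etc/fuse.conf on a real host).
conf=$(mktemp)
printf '# /etc/fuse.conf\nuser_allow_other\n' > "$conf"
if grep -qE '^[[:space:]]*user_allow_other[[:space:]]*$' "$conf"; then
  status=enabled
else
  status=missing
fi
echo "user_allow_other: $status"
rm -f "$conf"
```

The anchored regex ignores commented-out `#user_allow_other` lines, which are a common source of confusion.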
systemd service won't start
- Check status:
sudo systemctl status pfs@media.service
- Check logs:
sudo journalctl -u pfs@media.service -n 200 --no-pager
- Validate config:
sudo pfs doctor media
- Ensure paths exist (common first-run failure):
sudo ls -ld /mnt/pfs/media
sudo ls -ld /mnt/ssd1/media /mnt/hdd1/media
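The path checks above can be rolled into one loop that flags anything missing. This is a sketch: the demo uses temp directories so it runs anywhere; substitute your real mountpoint and storage paths (e.g. `/mnt/pfs/media`, `/mnt/ssd1/media`, `/mnt/hdd1/media`).

```shell
# First-run path check (sketch): flag any configured path that does
# not exist yet. Demo paths are temp dirs; use your real paths.
base=$(mktemp -d)
mkdir -p "$base/ssd1/media"            # this one exists
missing=0
for d in "$base/ssd1/media" "$base/hdd1/media"; do
  if [ -d "$d" ]; then
    echo "ok:      $d"
  else
    echo "missing: $d"
    missing=$((missing + 1))
  fi
done
rm -rf "$base"
```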
Free space looks wrong (df -h, SMB clients, Finder)
Symptoms:
- `df -h /mnt/pfs/<mount>` shows a capacity that does not match "sum of all disks".
- Different directories under the same mount show different free space.
- After a disk is unplugged (or permissions changed), free space looks suspiciously small or keeps changing.
What's going on:
- PolicyFS is backed by multiple storage paths and can route writes differently depending on the path.
- That means "free space for this mount" is a policy choice, not a single objective number.
Checklist:
- Start with the default (recommended) mount-wide reporting:
mounts:
media:
statfs:
reporting: mount_pooled_targets
on_error: ignore_failed
- If different directories show different numbers, check if you enabled path-aware reporting:
mounts:
media:
statfs:
reporting: path_pooled_targets
This mode is useful for some applications, but it can confuse humans because the mount root and a subdirectory may legitimately report different totals.
- If a storage path is missing/unavailable, the default `ignore_failed` will pool what it can. If you would rather fail fast (so monitoring catches it), set:
mounts:
media:
statfs:
on_error: fail_eio
For details and trade-offs, see Disk space reporting.
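As a mental model, `mount_pooled_targets` behaves roughly like "statfs every storage path and sum the results". The sketch below pools size and available bytes with `df`; the two paths are stand-ins that exist on any Linux host, so substitute your real storage paths (e.g. `/mnt/ssd1/media /mnt/hdd1/media`). This is an illustration of the pooling idea, not PolicyFS's actual implementation.

```shell
# Rough model of mount_pooled_targets: pool size/avail bytes across
# all storage paths. Paths below are placeholders for the demo.
total=0
avail=0
for d in / /tmp; do
  # df -B1 --output is GNU coreutils; last line is "SIZE AVAIL".
  set -- $(df -B1 --output=size,avail "$d" | tail -n 1)
  total=$((total + $1))
  avail=$((avail + $2))
done
echo "pooled: total=$total avail=$avail"
```

Note that if two "storage paths" live on the same filesystem (as `/` and `/tmp` may here), naive summing double-counts, which is one reason pooled numbers can look surprising.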
Maintenance jobs do nothing
Many maintenance commands intentionally exit 3 (no changes) when there is nothing to do. systemd timers treat exit code 3 as success.
Useful checks:
pfs doctor media
sudo journalctl -u pfs-move@media.service -n 200 --no-pager
sudo journalctl -u pfs-prune@media.service -n 200 --no-pager
sudo journalctl -u pfs-index@media.service -n 200 --no-pager
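Because "nothing to do" exits with code 3, a naive `&&` chain or `set -e` wrapper script will misread it as failure. The sketch below shows one way to interpret the codes; `run_job` is a stand-in for a real command such as `pfs move media`.

```shell
# Interpret maintenance exit codes: 0 = changes applied,
# 3 = nothing to do (still success). run_job simulates a no-op run.
run_job() { return 3; }
rc=0
run_job || rc=$?
case $rc in
  0) msg="changes applied" ;;
  3) msg="no changes needed (success)" ;;
  *) msg="failed with exit code $rc" ;;
esac
echo "$msg"
```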
Files still appear after deletion
Symptom: You deleted a file (or folder) through the mount but it still shows up in directory listings or still occupies space on the physical disk.
This is expected behavior for storage paths with indexed: true. Deletes on indexed storage are recorded in the event log (events.ndjson) and are not applied to the physical disk immediately - they are deferred to the next pfs prune run.
Run prune manually to apply them now:
sudo systemctl start pfs-prune@media.service
Or check how many events are pending:
sudo pfs doctor media
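To illustrate how deferred deletes accumulate, the sketch below counts delete events in a sample event log. The NDJSON field names (`op`, `path`) are assumptions for the demo, not the documented schema; `pfs doctor` is the supported way to see pending events.

```shell
# Count pending delete events in a sample events.ndjson.
# Field names are illustrative assumptions, not the real schema.
log=$(mktemp)
cat > "$log" <<'EOF'
{"op":"delete","path":"movies/old.mkv"}
{"op":"delete","path":"movies/dupe.mkv"}
{"op":"rename","path":"shows/s01"}
EOF
pending=$(grep -c '"op":"delete"' "$log")
echo "pending deletes: $pending"
rm -f "$log"
```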
Archive disks still spinning up after enabling indexed storage
Symptom: You set indexed: true on your HDDs but they still spin up during scans.
Two common reasons:
- The index hasn't been populated yet. Metadata can't be served from SQLite until `pfs index` has run. Run it once first:
sudo systemctl start pfs-index@media.service
- File data reads always touch the physical disk. `indexed: true` only helps with metadata operations (directory listings, file attributes). Whenever an app reads actual file content, the disk spins up - this is unavoidable.
Use pfs doctor <mount> to confirm the index is populated and check how many files are indexed per storage path.
High memory peak on the daemon
Symptom: systemctl status pfs@<mount> reports a Memory Peak in the multi-GB range while current Memory is much lower.
Memory Peak is a cgroup high-watermark and captures transient spikes (large readdir, SQLite WAL mmap, FUSE request buffers), not a steady-state leak. To confirm, enable the built-in pprof endpoint and capture a live heap profile.
Enable pprof (per mount)
sudo systemctl edit pfs@media
Add a drop-in:
[Service]
Environment=PFS_PPROF_ADDR=127.0.0.1:6060
Then restart:
sudo systemctl restart pfs@media
The daemon will log pprof listening addr=127.0.0.1:6060.
Equivalent CLI flag (for manual runs): pfs mount <mount> --pprof-addr=127.0.0.1:6060.
Capture profiles
After the daemon has been running long enough to show the peak:
curl -sS http://127.0.0.1:6060/debug/pprof/heap -o heap.prof
curl -sS http://127.0.0.1:6060/debug/pprof/allocs -o allocs.prof
curl -sS http://127.0.0.1:6060/debug/pprof/goroutine -o goroutine.prof
Analyze
go tool pprof -top heap.prof
go tool pprof -top -cum allocs.prof
go tool pprof -http=:8080 heap.prof # flamegraph UI
If current heap (inuse_space) stays low and only alloc_space is large, the peak is transient allocation (expected - not a leak). To cap process RSS, add MemoryHigh= or MemoryMax= to the same drop-in.
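Both settings can live in the same drop-in. The sketch below writes it under a temp root so it is safe to run as-is; on a real host the directory is `/etc/systemd/system/pfs@media.service.d/` (the location `systemctl edit pfs@media` manages), followed by `systemctl daemon-reload` and a restart. The `2G` value is an example, not a recommendation.

```shell
# Combined drop-in: pprof endpoint plus a soft memory cap.
# Demo writes under a temp root; the real path is
# /etc/systemd/system/pfs@media.service.d/override.conf.
root=$(mktemp -d)
mkdir -p "$root/pfs@media.service.d"
cat > "$root/pfs@media.service.d/override.conf" <<'EOF'
[Service]
Environment=PFS_PPROF_ADDR=127.0.0.1:6060
MemoryHigh=2G
EOF
content=$(cat "$root/pfs@media.service.d/override.conf")
printf '%s\n' "$content"
rm -rf "$root"
```

`MemoryHigh=` throttles the service when it crosses the threshold, while `MemoryMax=` is a hard cap that can OOM-kill it; start with `MemoryHigh=` for a daemon you want to keep alive.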
Bind pprof to 127.0.0.1 only; the endpoint exposes runtime state and should not be reachable from other hosts.
Lock held (exit code 75)
Exit code 75 means another PolicyFS process is holding a lock (usually because another job is running).
Checklist:
- Check for running units:
systemctl list-units 'pfs-*' --state=running
- Check the daemon:
sudo systemctl status pfs@media.service
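For scripts that wrap maintenance jobs, exit code 75 is worth handling explicitly rather than treating it as a hard failure. A sketch, where `job` is a stand-in for a real command such as `pfs move media`:

```shell
# Treat exit 75 (lock held) as "busy, retry later", not failure.
job() { return 75; }   # simulate another PolicyFS process holding the lock
rc=0
job || rc=$?
if [ "$rc" -eq 75 ]; then
  msg="lock held by another PolicyFS process; retry later"
elif [ "$rc" -eq 0 ]; then
  msg="job completed"
else
  msg="job failed with exit code $rc"
fi
echo "$msg"
```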