There are options. Use a WebDAV target directly or via Nextcloud and you will have what you want - though by its very nature it’s slower than using the Joplin server in its more conventional role.
I think most people would be scratching their heads as to why you need a filesystem sync when you’re using the Joplin server.
Scratching my head… Seems you might not understand what the filesystem storage is? I don't want to sync my filesystem, I want to use the filesystem storage option of Joplin Server.
It appears that others have tried and had access problems (see this example) so it may be wise to search this forum for STORAGE_DRIVER=Type=Filesystem to see what issues others have encountered.
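For reference, the filesystem driver is normally enabled through an environment variable roughly like the one below; double-check the Joplin Server documentation for the exact syntax and substitute your own storage path, as this is written from memory:

```
STORAGE_DRIVER=Type=Filesystem; Path=/path/to/storage
```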
Anything filesystem-related is, in my experience, poorly supported. Joplin should do what many other programs do and split folders by the first letters of the filename to reduce the number of files in a folder.
When storing many small files, putting them all in a single directory is a serious performance trap. Even though modern filesystems like ext4 and XFS can technically handle millions of entries, operations like directory scans, globbing (*.md), backups, and indexing become painfully slow once you’re dealing with hundreds of thousands of files in one place. Tools that need to iterate over the directory tree—whether that’s Node.js, Python, or shell utilities—must enumerate every entry, and memory consumption balloons.
That’s why the common and time-tested solution is to shard the files into subdirectories based on the first two characters of a UUID or hash. For example, instead of dumping everything into resources/, you store them under resources/ab/uuid.md. Using two hex characters gives you 256 subdirectories, and a second level gives 65,536. This reduces the per-directory entry count from hundreds of thousands down to a few hundred or a few thousand, which keeps filesystem lookups fast and predictable. Backup tools, globbing libraries, and language runtimes all perform dramatically better under this scheme.
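To make that concrete, here is a minimal TypeScript/Node.js sketch of what such a sharded layout could look like. The function names, the `.md` extension, and the base directory layout are illustrative assumptions, not Joplin's actual implementation:

```typescript
import * as path from 'path';
import * as fs from 'fs/promises';

// Map an item id such as "ab12cd34..." to <baseDir>/ab/ab12cd34....md
// The first two hex characters of the id pick one of 256 shard directories.
function shardedPath(baseDir: string, id: string): string {
  const shard = id.slice(0, 2).toLowerCase();
  return path.join(baseDir, shard, `${id}.md`);
}

// Write an item, creating its shard directory on demand.
async function writeItem(baseDir: string, id: string, body: string): Promise<void> {
  const target = shardedPath(baseDir, id);
  await fs.mkdir(path.dirname(target), { recursive: true });
  await fs.writeFile(target, body, 'utf8');
}
```

With uniformly distributed ids, a store of 300,000 items averages roughly 1,200 files per shard directory instead of 300,000 files in a single one.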
It’s a simple, widely adopted best practice used in systems like Git, image stores, and caches. The benefit isn’t theoretical—users see the difference between “instant” and “unusable.” By adopting a two-character directory sharding strategy, Joplin’s filesystem backend would remain efficient even with very large note collections.
I’m not sure why it hasn’t been done, but I think it would solve a lot of woes around this.
Another important factor influencing performance is what kind of storage device you are using. Modern high-capacity, low-cost SSDs are sometimes slower than branded drives from a couple of years ago. That's because, in order to drive cost down, they use cheap and slow QLC flash. The lack of a DRAM cache does not help either.
Raspberry Pi 4B with 5 GB RAM, running Raspbian OS from a microSD card, on the home network
Datacenter vServer, 4 Cores, 8 GB RAM.
On a first test of uploading everything fresh to the sync target (using filesystem storage), the sync to the Pi on the home network was dramatically faster. I don't know about the performance of then syncing all of my devices, which I will hardly want to test because of the 35,000 items to sync.
Come on man, are you really serious? Using an RPi with an SD card and then complaining about performance? SD cards aren't really designed to handle a high volume of random writes, and they can also fail at any time without warning.
Even my cheap Qotom box with a six-year-old Celeron J4125 CPU is going to be way faster, because it's equipped with 16 GB RAM, a real SSD, and fast 2.5 GbE network cards.
I addressed this very point in my write-up, so please take your time and read what I wrote. It is the tools around the file system, not the file system itself, that make things slow.
Can you elaborate on why tools should make the filesystem slow? Is Joplin using tools to access files instead of the usual way via the OS API? I don't see a reason why tools should make the filesystem slow. Filtering or scanning a list of files is an O(n) operation; that is, it depends on the number of files, and the time it takes scales linearly with it.
“Tools that need to iterate over the directory tree—whether that’s Node.js, Python, or shell utilities—must enumerate every entry, and memory consumption balloons.”
Joplin is a tool. It is written with the Electron framework, which uses Node.js.
What makes you think that I'm here to convince you of anything? How about you spend some time with ChatGPT and do some research yourself?
This isn’t about “having enough RAM”—it’s about filesystem lookups and directory traversal overhead that gets worse the bigger the dir grows.
That’s why well-engineered tools don’t do this. They shard files by the first two characters of a UUID, giving you 256 subdirectories, or two levels deep for 65k. That way each folder only has hundreds of files, and suddenly lookups, syncs, and indexing are fast and reliable again.
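Purely as an illustration of how little code this takes, here is a rough sketch of a one-off reshuffle from a flat directory into two-character shards. The paths are hypothetical, and whether such a migration would be safe to run against a live Joplin Server store is an assumption I have not verified:

```typescript
import * as path from 'path';
import * as fs from 'fs/promises';

// Move every file in flatDir into a subdirectory named after its
// first two characters, e.g. flatDir/ab12cd.md -> flatDir/ab/ab12cd.md.
async function shardExistingFiles(flatDir: string): Promise<void> {
  const entries = await fs.readdir(flatDir, { withFileTypes: true });
  for (const entry of entries) {
    if (!entry.isFile()) continue; // skip shard directories already created
    const shard = entry.name.slice(0, 2).toLowerCase();
    const targetDir = path.join(flatDir, shard);
    await fs.mkdir(targetDir, { recursive: true });
    await fs.rename(path.join(flatDir, entry.name), path.join(targetDir, entry.name));
  }
}
```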