Joplin Server performance comparison

Good morning,
I just want to share some results I've got from comparing nginx WebDAV vs Joplin Server 2.0.6.
Both are running on an RPi4 with 8GB RAM, Ubuntu 20.04 64-bit, on an HDD.
I'm also using Docker in both cases.
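For reference, a Joplin Server + Postgres setup under Docker usually looks something like the compose file below. This is only a sketch: the values, paths and URL are placeholders, and only the environment variable names follow the standard Joplin Server Docker example.

    # Minimal sketch of a typical Joplin Server + Postgres compose file.
    # Values are placeholders; only the variable names follow the official
    # Joplin Server Docker example.
    version: '3'
    services:
      db:
        image: postgres:13
        restart: unless-stopped
        volumes:
          - ./data/postgres:/var/lib/postgresql/data
        environment:
          - POSTGRES_PASSWORD=changeme
          - POSTGRES_USER=joplin
          - POSTGRES_DB=joplin
      app:
        image: joplin/server:latest
        restart: unless-stopped
        depends_on:
          - db
        ports:
          - "22300:22300"
        environment:
          - APP_PORT=22300
          - APP_BASE_URL=http://rpi4.local:22300
          - DB_CLIENT=pg
          - POSTGRES_PASSWORD=changeme
          - POSTGRES_DATABASE=joplin
          - POSTGRES_USER=joplin
          - POSTGRES_PORT=5432
          - POSTGRES_HOST=db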

The test was a full initial sync of 3558 notes totalling 266MB.

Results:

  • nginx WebDAV: 5:20 minutes
  • Joplin Server 2.0.6 with Postgres: 11:00 minutes

I thought the results would be closer. What's happening here?

Thanks!


Good question. I've been wondering if I've introduced a performance regression in v2, maybe because the server now parses the Joplin items before saving them (unlike Nginx, which just stores the raw data). I'll see if I can find a way to analyse the performance and find the bottleneck.

I should be releasing a new pre-release soon with optimisations on the apps and Joplin Server. In my tests I went from 8:46 min to 1:45 min for 8031 items running locally. I'll need to do more tests running from Docker but that's promising so far!


Spectacular!

If you have the opportunity, could you try running the tests again with Joplin Server 2.1 and desktop app 2.1? (You'll need to enable the optimisations as described here.) I assume it's faster now, but I'd be curious to know whether it really improves things on the RPi4, which has fewer resources than what I've tried on.

Hello again,
I've tested the 2.1.1 server on the same RPi4, using the 2.1.3 client (it's Windows 10, I didn't mention that before), same client hardware, same JEX collection. Beforehand I uninstalled the client and erased all user data.

I added the following to settings.json (client):

"featureFlag.syncAccurateTimestamps": true,
"featureFlag.syncMultiPut": true,

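(For context, assuming no other custom keys, the whole settings.json would look roughly like this:)

    {
      "featureFlag.syncAccurateTimestamps": true,
      "featureFlag.syncMultiPut": true
    }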
It took...
It's not working :joy: it throws this error:
Last error: Error. Not allowed: PUT

Right, I should have mentioned that you actually need 2.1.3-beta, which is where the changes are. Your desktop client version should be fine.

I've just run the test and the result is 12:30 minutes. It takes more time... wtf?¿

Hmm, normally there should definitely be some improvement, if only because it skips a request per item on the first sync, unless the slow hardware is somehow the bottleneck.
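To illustrate the idea, here's a rough sketch of the difference. The endpoint paths and response shapes below are placeholders for illustration only, not the actual Joplin Server API:

    // Hypothetical illustration of why skipping one request per item matters on
    // a first sync. Endpoint paths and response shapes are placeholders, not the
    // real Joplin Server API.
    interface Item { id: string; body: string; }

    // Old flow: one PUT to upload, plus one extra GET to read back the server
    // timestamp -> two round trips per item.
    async function uploadOldFlow(baseUrl: string, item: Item): Promise<number> {
        await fetch(`${baseUrl}/items/${item.id}`, { method: 'PUT', body: item.body });
        const meta = await (await fetch(`${baseUrl}/items/${item.id}/metadata`)).json();
        return meta.updated_time;
    }

    // New flow: the PUT response already carries an accurate timestamp
    // -> one round trip per item, i.e. roughly half the requests on first sync.
    async function uploadNewFlow(baseUrl: string, item: Item): Promise<number> {
        const res = await fetch(`${baseUrl}/items/${item.id}`, { method: 'PUT', body: item.body });
        const meta = await res.json();
        return meta.updated_time;
    }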

Any chance you could post the server log to see if it's really performing the optimisations?

Is this enough, or do you need a higher log level?
server.log (354.7 KB)

It's not clear from the log what's happening. If I add up the time spent server-side, I count 267,000 ms, so about 4:30, which means there's more than 5 minutes unaccounted for.
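(In case it's useful, the per-request durations can be added up with something like the sketch below. It assumes each request line contains its duration as "(<n>ms)", so adjust the regex if the actual log format differs.)

    // Rough sketch: add up per-request durations in the server log.
    // Assumes each request line contains its duration as "(<n>ms)"; adjust the
    // regex if the actual log format differs.
    import { readFileSync } from 'fs';

    const log = readFileSync('server.log', 'utf8');
    let totalMs = 0;
    for (const match of log.matchAll(/\((\d+)ms\)/g)) {
        totalMs += parseInt(match[1], 10);
    }
    console.log(`Total server-side time: ${totalMs} ms (~${(totalMs / 60000).toFixed(1)} min)`);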

Is it possible that the client or network is slow for some reason? Also, did you enable gzip compression on nginx? That could possibly make a small difference.
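(As a sketch, enabling it is just a few standard gzip directives in the nginx http or server block; the types and values below are only illustrative:)

    # Sketch: enable gzip for proxied responses (standard nginx directives;
    # the types and values are illustrative).
    gzip on;
    gzip_proxied any;
    gzip_types application/json text/plain;
    gzip_min_length 1024;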

By the way, are you still getting the same results with nginx WebDAV?

Do the settings.json changes apply to the WebDAV case?

No, they don't. But I'm wondering whether your server became slower for whatever reason, maybe due to a lack of space or something else, which would perhaps affect nginx too.

WebDAV keeps running fast, HDD space is not a problem, and the network is also fast. The client is an i5 2500 with 8GB RAM and an SSD, so it's fine. I'm using nginx as a proxy.

OK, I'm not sure what the issue is then, as it's quite different from my own tests. If I can think of what could cause this, I'll let you know.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.