Discussions about Sync

Yes, there’s no automatic resolution, but there’s at least conflict detection.
I think in your scenario you should have seen a conflict.

There are enough reports of broken sync here on the forum and on GitHub.

The problem is that when your WebDAV server doesn’t work, Joplin is the one that reports the problem. So people naturally come here to ask why the app doesn’t work, when it’s not actually the app but the server.

You’ll find many posts in this forum, for example, about a Nextcloud bug that locks user files, others about Jianguoyun capping the number of daily requests, Dropbox blocking files for copyright infringement, other posts about HTTP 500 status codes (server errors), others about custom SSL certificates, proxies, wrong passwords, wrong WebDAV folders, and on and on.

We can make many things simpler on Joplin’s side, but anything server-side is more complicated, and all we can do is try to report errors as accurately as possible.


Sure, Joplin cannot prevent server-side errors but I’d argue that it should be able to handle them gracefully.
At this point the main advice for any sync issue is: make a backup, wipe your data, and start from scratch. There must be a better way.

Yes, absolutely.
And often these threads have rather misleading titles, because the user cannot tell any better.
This very thread was titled “Sync issues”, when the “issue” was that the user had deleted notes on one device, synced the other, and then wondered why the notes “disappeared” there.

Now that is a very different discussion. I would suggest you open a new thread on that to keep all the relevant conversations in one place. :smiley:

Yes, it’s more or less planned to have tools to manage the sync target. There are already tools to upgrade a master key and to re-encrypt all the notes. There should also be one to upload local data to the target, and vice versa.

It’s been low priority because if Joplin is used in a normal way, it works. But because users have full access to their sync target, they move files around, delete them, rename folders, etc., and that’s when problems happen.

In fact, if the data in the sync target is broken, all you need to do is use Joplin the normal way: import your last JEX backup, sync, and that’s it. Because the notes are newly imported, they won’t be deleted, and they’ll be uploaded to the target.

We’ll probably have some tools to manage the sync target at some point, like one-way sync from/to the sync target, but to be honest I expect we’ll see similar issues, as powerful tools like these mean more people accidentally wiping out their sync target or local data.

I think many of these issues are unavoidable, due to the fact that users fully control their data. There are advantages to that, but it also means you need to be more careful. It’s not like Evernote, where the server is controlled by the company, so they can probably have your back if you delete everything.

I think a reasonable interim action may be to document very prominently (like a warning) on the website that users should not fiddle with the data in the sync target or local directories unless they know what they are doing, and even then they should take a proper backup first.

It's just a suspicion, but I think these issues are sometimes caused not by users accessing data outside of Joplin, but by the sync process not handling various errors correctly - things like network timeouts and so on.

For instance, for attachments there's the main resources table but also note_resources - shouldn't these be updated inside a transaction?
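What I mean, roughly, is wrapping the two writes in a single transaction so the tables cannot get out of step. A minimal sketch with a hypothetical simplified schema (Python/sqlite3 purely for illustration - these are not Joplin's actual tables or code, and `attach_resource` is a made-up name):

```python
import sqlite3

# Illustrative schema only, NOT Joplin's real one.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE resources (id TEXT PRIMARY KEY, title TEXT);
    CREATE TABLE note_resources (note_id TEXT, resource_id TEXT);
""")

def attach_resource(conn, note_id, resource_id, title):
    # Using the connection as a context manager wraps both statements
    # in one transaction: either both rows are written or neither is.
    with conn:
        conn.execute(
            "INSERT INTO resources (id, title) VALUES (?, ?)",
            (resource_id, title))
        conn.execute(
            "INSERT INTO note_resources (note_id, resource_id) VALUES (?, ?)",
            (note_id, resource_id))

attach_resource(conn, "note1", "res1", "diagram.png")

# A failing call (duplicate primary key here) is rolled back completely,
# so the two tables stay consistent with each other.
try:
    attach_resource(conn, "note2", "res1", "dup.png")
except sqlite3.IntegrityError:
    pass
```

After the failed second call, both tables still contain exactly one row each.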

There's a service to handle the note_resources table, and it's there for a reason. If it were as simple as "why don't you do a transaction", that's what I would have done :slight_smile:

Then again, it's possible that it can be improved, simplified, or made more robust. But that probably starts with analysing why it's currently done the way it is.

With something complex like sync, we need to be very specific: which errors are not handled correctly? I always look at the sync issues that are reported, and I only know of one error that could be reported in a better way (there's a GitHub issue about it); other than that, I see no reason to think that errors are not handled correctly.

Very bad connections with long response times would make sync difficult and might cause timeout issues, but they should not corrupt data.
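To illustrate why a timeout need not corrupt anything: a failed request can simply be retried, and as long as each item is transferred whole, an aborted attempt leaves nothing half-written. A generic retry-with-backoff sketch (illustrative Python, not Joplin's actual code; `with_retries` and `flaky_request` are made-up names):

```python
import time

def with_retries(op, max_attempts=3, base_delay=0.01):
    # Generic pattern: retry a timed-out operation with exponential
    # backoff; re-raise only after the last attempt fails.
    for attempt in range(max_attempts):
        try:
            return op()
        except TimeoutError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated unreliable request: times out twice, then succeeds.
calls = {"n": 0}
def flaky_request():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("simulated slow server")
    return "ok"

print(with_retries(flaky_request))  # prints "ok" after two timed-out attempts
```

The point is only that a timeout surfaces as a retried (or cleanly failed) operation, not as partially written data.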

But again sync is complex and it's possible that I'm missing something.

I was planning to test the sync logic, and maybe write some unit tests for it too. I just need to find time for this.
Hopefully I'll have a better answer then.
