I imported a large collection of website dumps (a ‘read it later’-style archive) and am running into problems. The import itself went fine (via the web-clipper curl API), but sync now takes forever…
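For context, the import was one API call per archived page, roughly like the sketch below (a minimal illustration, not my exact script: the port is the clipper service's default, and `TOKEN` and the note contents are placeholders):

```python
import json
import urllib.request

API = "http://127.0.0.1:41184"   # default port of the clipper service
TOKEN = "YOUR_API_TOKEN"         # placeholder for the real clipper token

def make_note_request(title, body_html):
    """Build the POST request that creates one note from an HTML dump."""
    payload = json.dumps({"title": title, "body_html": body_html}).encode()
    return urllib.request.Request(
        f"{API}/notes?token={TOKEN}",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# One request like this per archived page; thousands of them produced
# the large collection that now syncs so slowly.
req = make_note_request("Example page", "<p>archived content</p>")
```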
Are you aware of any size limitations?
Is anyone else having problems with a large number of notes?
What could be the bottleneck?
I use WebDAV sync and the macOS client.