Potential breaking change in next Joplin Server update (2.5.10)

Just a heads-up that the next Joplin Server update could potentially include a breaking change, depending on your data.

One of the database migrations is going to add an "owner_id" column to the "items" table (where all notes, notebooks, etc. are stored) and automatically populate it. Normally this shouldn't take too long, but you might want to make sure you won't need the server right away when you run this upgrade.

The second database migration will add a unique constraint on items.name and items.owner_id, and that's where the breaking change might be. Normally this data is already unique because the application enforces it, but in some rare cases, due to a race condition, there could be duplicate data in there. If that happens, the migration will fail and the server will not start.
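To illustrate what that migration does, here is a minimal sketch using sqlite as a stand-in for Postgres (the real "items" schema has more columns; the table and rows here are made up):

```python
import sqlite3

# Toy stand-in for the real Postgres "items" table (columns simplified).
db = sqlite3.connect(":memory:")
db.execute("create table items (id text, name text, owner_id text)")
db.execute("insert into items values ('a1', 'note.md', 'user1')")
db.execute("insert into items values ('a2', 'note.md', 'user1')")  # duplicate pair

# The migration effectively does this; with duplicates present, index
# creation fails and the server refuses to start.
migration_failed = False
try:
    db.execute(
        "create unique index items_name_owner_id_unique on items (name, owner_id)"
    )
except sqlite3.IntegrityError as e:
    migration_failed = True
    print("migration would fail:", e)
```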

If that happens, you'll need to decide what to do with the data, as it's not possible to decide automatically. You can find all duplicates using this query:

select count(*), name, owner_id
from items group by name, owner_id
having count(*) > 1;
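On a toy table (sqlite standing in for Postgres, rows made up), this group-by/having pattern finds the duplicate pairs:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table items (id text, name text, owner_id text)")
db.executemany("insert into items values (?, ?, ?)", [
    ("a1", "note.md", "user1"),
    ("a2", "note.md", "user1"),  # same (name, owner_id): a duplicate
    ("b1", "other.md", "user1"),
])

# Group rows by (name, owner_id) and keep only groups with more than one row.
dups = db.execute(
    "select count(*), name, owner_id from items "
    "group by name, owner_id having count(*) > 1"
).fetchall()
print(dups)  # → [(2, 'note.md', 'user1')]
```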

Once you have the list of duplicates, you have a few options:

  • Find the corresponding item in Joplin (it can unfortunately be anything - a note, resource, folder, etc.), then delete it and sync.

  • Or, just delete the data directly in the database. You'll want to delete the corresponding item_id from the user_items table too.
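The second option can be sketched like this; sqlite stands in for Postgres here, and the user_items columns are guesses at the real schema rather than taken from it:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    create table items (id text primary key, name text, owner_id text);
    create table user_items (user_id text, item_id text);
    insert into items values ('keep', 'note.md', 'user1');
    insert into items values ('dup',  'note.md', 'user1');
    insert into user_items values ('user1', 'keep');
    insert into user_items values ('user1', 'dup');
""")

# Delete one of the two duplicates along with its user_items row,
# inside a transaction so both tables stay consistent.
with db:
    db.execute("delete from user_items where item_id = ?", ("dup",))
    db.execute("delete from items where id = ?", ("dup",))

remaining = db.execute("select count(*) from items").fetchone()[0]
print(remaining)  # → 1
```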

But really, in most cases you should be fine. Especially if you don't have that many notes, it's unlikely you have duplicates.

Are users of Joplin Cloud impacted by this? Will duplicated notes show up as conflict notes that could be addressed by the conflict plugin?

The migration has already been applied on Joplin Cloud, so you won't need to do anything. There was one user affected by this - I contacted him, and since he wasn't using the account anymore he told me I could delete the duplicate notes.


Hi, I tried the query using psql on Joplin Server 2.5.9 (before upgrading, to check whether I will have a problem), and I get:

ERROR:  column "items.id" must appear in the GROUP BY clause or be used in an aggregate function
LINE 1: select id, count(*), name, owner_id

Should I upgrade first and then try the query?

Right, I should have tested the query first. You actually need to remove the "id" part from it (as in my now-updated post). If it returns nothing, you're good to go.
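For context, Postgres only allows select-list columns that are either grouped or aggregated. If you do want an id for each duplicate group, one option (a sketch, not from the post above) is to wrap it in an aggregate such as min():

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table items (id text, name text, owner_id text)")
db.executemany("insert into items values (?, ?, ?)", [
    ("a1", "note.md", "user1"),
    ("a2", "note.md", "user1"),
])

# min(id) is an aggregate, so it is legal alongside group by in Postgres too,
# and it returns one representative id per duplicate group.
rows = db.execute(
    "select count(*), name, owner_id, min(id) from items "
    "group by name, owner_id having count(*) > 1"
).fetchall()
print(rows)  # → [(2, 'note.md', 'user1', 'a1')]
```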


Excellent, thank you.
Now I get:

 count | name | owner_id 
-------+------+----------
(0 rows)

It should be OK then

OK, it seems like my server is affected. What do I need to do? I don't know how to use database tools and I am not a developer.

You need to open a database client (a skill that's always useful when you self-host):

psql -h localhost -U joplin joplin

(change the credentials based on your env file)

Then run this query:

select count(*), name, owner_id
from items group by name, owner_id
having count(*) > 1;

And then you can follow the info mentioned in the top post.


From your log it seems that it's some timeCheck files that are duplicated, and you can safely delete those:

delete from items where name='NAME OF FILE';

Thanks for the reply. Every app out there uses a different database; it's not easy to get accustomed to every piece of backend.

I am running this inside the db Docker container:

psql -h localhost -U admin joplin
psql (13.1 (Debian 13.1-1.pgdg100+1))
Type "help" for help.

joplin=# select count(*), name, owner_id
joplin-# from items group by name, owner_id
joplin-# having count(*) > 1;

ERROR:  column "owner_id" does not exist
LINE 1: select count(*), name, owner_id

This output does not seem to be helpful.

joplin-# delete from items where name='NAME OF FILE';
ERROR:  syntax error at or near "ERROR"
LINE 1: ERROR:  column "owner_id" does not exist

Then you need to try the queries without owner_id


This is where things are

joplin=# select count(*), name
joplin-# from items group by name
joplin-# having count(*) > 1;

 count |           name           
-------+--------------------------
     2 | info.json
     2 | .sync/readme.txt
     2 | .sync/version.txt
     2 | temp/timeCheck207743.txt
     2 | temp/timeCheck299477.txt
(5 rows)

Do you have multiple users on the server? The first three files are tricky, but for the last two you can safely run:

delete from items where name in ("temp/timeCheck207743.txt", "temp/timeCheck299477.txt")'

Then try to start the server again to see if it works this time.


Same problem unfortunately. I restarted the app, not the db, though; not sure if that would make a difference.

Processing items 10000 / 11211
Processing items 11000 / 11211
migration file "20211030103016_item_owner_name_unique.js" failed
migration failed with error: alter table "items" add constraint "items_name_owner_id_unique" unique ("name", "owner_id") - could not create unique index "items_name_owner_id_unique"
error: alter table "items" add constraint "items_name_owner_id_unique" unique ("name", "owner_id") - could not create unique index "items_name_owner_id_unique"
    at Parser.parseErrorMessage (/home/joplin/packages/server/node_modules/pg-protocol/src/parser.ts:357:11)
    at Parser.handlePacket (/home/joplin/packages/server/node_modules/pg-protocol/src/parser.ts:186:21)
    at Parser.parse (/home/joplin/packages/server/node_modules/pg-protocol/src/parser.ts:101:30)
    at Socket.<anonymous> (/home/joplin/packages/server/node_modules/pg-protocol/src/index.ts:7:48)
    at Socket.emit (node:events:390:28)
    at addChunk (node:internal/streams/readable:315:12)
    at readableAddChunk (node:internal/streams/readable:289:9)
    at Socket.Readable.push (node:internal/streams/readable:228:10)
    at TCP.onStreamRead (node:internal/stream_base_commons:199:23) {
  length: 270,
  severity: 'ERROR',
  code: '23505',
  detail: 'Key (name, owner_id)=(temp/timeCheck207743.txt, l16aNcKWxJMS4SfaZdBrHBPukzL4bdBp) is duplicated.',
  hint: undefined,
  position: undefined,
  internalPosition: undefined,
  internalQuery: undefined,
  where: undefined,
  schema: 'public',
  table: 'items',
  column: undefined,
  dataType: undefined,
  constraint: 'items_name_owner_id_unique',
  file: 'tuplesort.c',
  line: '4226',
  routine: 'comparetup_index_btree'
}

It doesn't look like that timeCheck file was deleted. Maybe try with single quotes?

delete from items where name in ('temp/timeCheck207743.txt', 'temp/timeCheck299477.txt');

Also run the first select command after that to check that they've indeed been deleted.
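The quoting distinction here is standard SQL: single quotes delimit string literals, while double quotes delimit identifiers (column or table names), which is why the double-quoted version failed in Postgres. When running such deletes from code, parameter placeholders avoid quoting issues altogether (sqlite sketch with made-up rows):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("create table items (id text, name text)")
db.executemany("insert into items values (?, ?)", [
    ("a1", "temp/timeCheck207743.txt"),
    ("a2", "temp/timeCheck299477.txt"),
    ("b1", "note.md"),
])

# Placeholders pass the file names as values, so no manual quoting is needed.
names = ("temp/timeCheck207743.txt", "temp/timeCheck299477.txt")
db.execute("delete from items where name in (?, ?)", names)

left = [r[0] for r in db.execute("select name from items").fetchall()]
print(left)  # → ['note.md']
```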


Thanks for the help; as far as I can tell this worked.

app_1  | 2021-11-12 20:05:11: App: Auto-migrating database...                                                                                                                                                                                                                   
app_1  | Processing items 0 / 11207                                                                                                                                                                                                                                             
app_1  | Processing items 1000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 2000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 3000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 4000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 5000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 6000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 7000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 8000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 9000 / 11207                                                                                                                                                                                                                                          
app_1  | Processing items 10000 / 11207                                                                                                                                                                                                                                         
app_1  | Processing items 11000 / 11207                                                                                                                                                                                                                                         
app_1  | 2021-11-12 20:08:53: App: Latest migration: { name: '20211105183559_storage.js', done: true }                                                                                                                                                                          
app_1  | 2021-11-12 20:08:53: App: Starting services...
app_1  | 2021-11-12 20:08:54: ShareService: Starting maintenance...
app_1  | 2021-11-12 20:08:54: EmailService: Service will be disabled because mailer config is not set or is explicitly disabled
app_1  | 2021-11-12 20:08:54: TaskService: Scheduling #1 (Delete expired tokens): 0 */6 * * *

Yes, it all looks normal now.

Thanks. I also see this in the log, not sure if it is important:

app_1  | 2021-11-12 20:44:46: [error] App: ::ffff: {
app_1  |   code: '23505',
app_1  |   message: 'insert into "items" ("content_size", "content_storage_id", "created_time", "id", "mime_type", "name", "owner_id", "updated_time") values ($1, $2, $3, $4, $5, $6, $7, $8) - duplicate key value violates unique constraint "items_name_owner_id_unique"'
app_1  | }
app_1  | 2021-11-12 20:44:46: App: PUT /api/items/root:/.resource/7cd320e4fdc0410fb44f6c565529a34c:/content (500) (38948ms)

If it syncs, it should be fine, but I don't understand how this duplicate can happen.


I am also seeing this; maybe it is an important error:

2021-11-13 04:13:41: [error] BaseModel: 
2021-11-13 04:13:41: [error] App: ::ffff: Error: 1 row(s) should have been deleted but 0 row(s) were deleted. ID: zoQf1NDkA42KUAmfO8klftyOgjU97sE0
    at ItemModel.<anonymous> (/home/joplin/packages/server/src/models/BaseModel.ts:355:65)
    at Generator.next (<anonymous>)
    at fulfilled (/home/joplin/packages/server/dist/models/BaseModel.js:5:58)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (node:internal/process/task_queues:96:5)
2021-11-13 04:13:43: App: DELETE /api/items/root:/temp/timeCheck606767.txt: (500) (13543ms)
2021-11-13 04:13:43: App: PUT /api/items/root:/locks/sync_desktop_476f54c00d1f40858a4990cb2124cbf8.json:/content (200) (13545ms)