Joplin Server Container always dies after a few months

Operating system

Linux

Joplin version

3.2.12

What issue do you have?

I use the Joplin Server Docker container, and I have followed the setup guide listed here:

Every few weeks or months it dies. I assume there's an out-of-memory issue somewhere, but I'm not sure what's happening.
Has anyone else had this problem? If so, are there some steps I can take to diagnose it?

Here is the log output before it died (it's always the same error message)

03:43:00 0|app    | 2025-02-13 03:43:00: TaskService: Running #12 (Process emails) (scheduled)...
03:43:00 0|app    | 2025-02-13 03:43:00: TaskService: Running #13 (Log heartbeat message) (scheduled)...
03:43:00 0|app    | 2025-02-13 03:43:00: TaskService: Completed #12 (Process emails) in 98ms
03:43:01 0|app    | 2025-02-13 03:43:01: metrics: Cpu: 1.01%; Mem: 932.95 / 1983.4 MB (47%); Req: 0 / min; Active req: 0
03:43:01 0|app    | 2025-02-13 03:43:01: TaskService: Completed #13 (Log heartbeat message) in 1127ms
03:43:10 0|app    | 2025-02-13 03:43:10: TaskService: Running #11 (Process shared items) (scheduled)...
03:43:10 0|app    | 2025-02-13 03:43:10: TaskService: Completed #11 (Process shared items) in 36ms
time="2025-02-13T03:43:36Z" level=error msg="error waiting for container: unexpected EOF"

Should the memory usage be that high?

I'd appreciate any help.
Thanks

I'd be interested to know why as well. We've had this issue a few times on Joplin Cloud too, although we have it set up to reboot automatically, so it's not a massive problem. It would still be nice to understand why it happens, though.

Your memory usage at 930MB seems fine, and it's not over the limit anyway, so it should not crash the container. I've found that logs in that situation tell you pretty much nothing, whether it's the system log or the Docker one, but if you find anything please share it back.

I've done some digging. I looked into my syslog around the time the server stopped working (I got the timestamp from docker ps -a).

I found this:

Feb 13 03:43:12 systemd[1]: Stopping Service for snap application docker.dockerd...
Feb 13 03:43:12 systemd[1]: Stopping Service for snap application docker.dockerd...
Feb 13 03:43:15 docker.dockerd[2529731]: time="2025-02-13T03:43:15.411023614Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=moby
Feb 13 03:43:15 docker.dockerd[2529731]: time="2025-02-13T03:43:15.330924211Z" level=error msg="Failed to get event" error="rpc error: code = Unavailable desc = error reading from server: EOF" module=libcontainerd namespace=plugins.moby
Feb 13 03:43:18 docker.dockerd[2529731]: time="2025-02-13T03:43:18.647782543Z" level=error msg="Error sending stop (signal 15) to container" container=fb37297f030da25bfa79a754d624a49d78e7b0c395a1419e7e8a70219afb8a1b error="Cannot kill container fb37297f030da25bfa79a754d624a49d78e7b0c395a1419e7e8a70219afb8a1b: Unavailable: connection error: desc = \"transport: Error while dialing: dial unix:///run/snap.docker/containerd/containerd.sock: timeout\": unavailable"
Feb 13 03:43:27 docker.dockerd[3849447]: time="2025-02-13T03:43:27.007688988Z" level=error msg="copy shim log after reload" error="read /proc/self/fd/8: file already closed" namespace=moby
Feb 13 03:43:29 docker.dockerd[2529731]: time="2025-02-13T03:43:29.445099648Z" level=error msg="Force shutdown daemon"
Feb 13 03:43:34 docker.dockerd[2529731]: time="2025-02-13T03:43:34.553198252Z" level=error msg="Error shutting down http server" error="context canceled"
Feb 13 03:43:35 systemd[1]: snap.docker.dockerd.service: Succeeded.
Feb 13 03:43:35 systemd[1]: Stopped Service for snap application docker.dockerd.
Feb 13 03:43:36 snapd[3222788]: services.go:1152: RemoveSnapServices - disabling snap.docker.nvidia-container-toolkit.service
Feb 13 03:43:36 snapd[3222788]: services.go:1152: RemoveSnapServices - disabling snap.docker.dockerd.service
Feb 13 03:43:36 systemd[1]: snap.docker.docker-521555bd-b699-451e-b955-a6f8e7a2e4e0.scope: Succeeded.
Feb 13 03:43:36 systemd[1]: Reloading.
Feb 13 03:43:36 systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.

Looks interesting: it seems snap tries to update Docker but fails to bring the container back up, or something along those lines.

Some are saying this could be related to snap, so installing Docker through something other than snap could help.

Or turning off snap updates and seeing if it still happens.

I imagine it's the Docker snap in your case, yeah. The system attempts to restart services, but the high-level service for Docker is the Docker daemon itself, not whatever you're using Docker for (Joplin). The updater believes it has successfully done a full restart of the environment, but it simply hasn't. You'd likely need to augment it somehow so the Docker service has a concept of default containers to run when it boots, assuming such a thing exists.

(sudo snap refresh --hold=forever docker, if you want to disable the updates)
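For completeness, a minimal sketch of managing that hold, assuming a snapd version recent enough to support --hold (2.58+):

```shell
# Hold automatic refreshes of the docker snap indefinitely
sudo snap refresh --hold=forever docker

# Explicit manual refreshes still work while the hold is in place
sudo snap refresh docker

# Remove the hold later to re-enable automatic updates
sudo snap refresh --unhold docker
```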

Coincidentally, I was just talking to a friend about an older project I never finished: putting Joplin Server itself into a snap. It was pretty functional at the point I remember working on it. With that setup, snap would actually see the high-level services as Joplin (and Caddy) themselves, and if snapd were to reboot a Joplin Server snap, it would explicitly reboot the Joplin Server environment too, because it can see that's the real target.

(GitHub - JGCarroll/joplin-server-snap - but this would need updating since it's now 4 years old and the sync stuff will have all changed)

The Joplin Cloud environment might be running Docker as a snap, but for now I'd assume that isn't the case. Either way, the snap update system does seem to be your specific problem here.

Edit: Start containers automatically | Docker Docs looks relevant if you want to keep Docker updating automatically while minimising the downtime. You could also tell snapd to only apply updates overnight when you're not using the system, with refresh timers, e.g. sudo snap set system refresh.timer=2:00-5:00. Conceptually you'd still be "randomly rebooting" (for Docker patches) at the same rate, but you'd shift that downtime to when you won't notice it and make the restart automatic. In theory you'd end up with about thirty seconds of downtime a month, with zero involvement, at a time you won't even notice.
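If you go the refresh-timer route, a sketch of setting and verifying the window (the timer syntax follows snapd's refresh settings):

```shell
# Only allow automatic snap refreshes between 02:00 and 05:00
sudo snap set system refresh.timer=2:00-5:00

# Confirm the window was applied
sudo snap get system refresh.timer

# Show when the next automatic refresh is scheduled
snap refresh --time
```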

Thanks for the info

So I've removed the snap version of Docker and reinstalled Docker using the apt-get method. I'll see how that goes and report back here if there are still issues, but fingers crossed that was it.

You still run the risk of it doing the same thing; it'll just be when unattended-upgrades upgrades Docker rather than snap. The underlying problem is Docker (or systemd, alternatively) not being told to restart the containers, which is the same in either scenario. Fewer updates would reduce the frequency but not avoid it entirely.

But for sure, I imagine you'll get significantly fewer updates, so you should notice it less. You'd still hit it from time to time unless you set up the automatic restarts at the container level, because when apt eventually does upgrade Docker, it won't restart Joplin either.

(Of course that assumes you have unattended upgrades on, but it usually is by default. If it isn't, updates are all manual and you'd avoid the problem.)
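To check whether unattended upgrades are actually enabled on a Debian/Ubuntu system, something like this should work (the paths are the stock Ubuntu defaults):

```shell
# Is the unattended-upgrades service present and running?
systemctl status unattended-upgrades --no-pager

# A value of "1" for Unattended-Upgrade means automatic upgrades are on
cat /etc/apt/apt.conf.d/20auto-upgrades
```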

You're right. I guess there are two things here to try:

Stop Docker from having unattended upgrades

sudo apt-mark hold docker containerd
from ubuntu - How to make sure docker service will start after containerd upgrade? - Server Fault

The downside to this is that Docker is no longer going to get updates until I manually do it.

Set Container to always restart

With docker run I could set --restart=always, which should bring the container back up if Docker itself is restarted.
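A sketch of both ways to set the policy; joplin-server is a placeholder for the real container name (check docker ps), and the port/image here assume the standard joplin/server setup:

```shell
# Add the restart policy to the existing container without recreating it
sudo docker update --restart=always joplin-server

# Or set it when the container is first created
sudo docker run -d \
    --name joplin-server \
    --restart=always \
    -p 22300:22300 \
    joplin/server:latest
```

Note that --restart=unless-stopped behaves the same except it won't resurrect a container you stopped deliberately, which is often what you actually want.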

I may try the latter and see if that helps.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.