Update: local LLM compatibility
Although I usually use Claude, I've been testing joplin-mcp with Jan (a friendly local LLM chat app and server), and it works reasonably well: search, tagging, and note creation all function as expected, even with a small model like Jan-v3-4b-base-instruct. Here it is, for example, finding and cleaning up unused tags at 43 tokens/sec:
Setup instructions are in the README.
I noticed Laurent ran into issues using local models with Belsar-ai's more flexible MCP server, where models would get stuck in loops or fail to present results. I suspect the difference comes down to architecture: joplin-mcp exposes dedicated MCP tools with typed parameters, so the model only needs to pick a tool and fill in fields. Servers that use a script execution approach require the model to write code on the fly, which is a much harder task for smaller or quantised models.
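To make the architectural difference concrete, here is an illustrative Python sketch (not joplin-mcp's actual code; the tool name, parameter fields, and `joplin` object are hypothetical) contrasting what the model has to produce in each case:

```python
# Illustrative sketch only -- names and fields are made up for this example.
from dataclasses import dataclass

@dataclass
class SearchNotesParams:
    query: str       # full-text search string
    limit: int = 10  # maximum number of results

def handle_tool_call(name: str, args: dict) -> str:
    """Dedicated-tool style: the model only picks a tool name and fills
    in typed fields; the server validates them on construction."""
    tools = {"search_notes": SearchNotesParams}
    params = tools[name](**args)  # typed validation happens here
    return f"searching for {params.query!r} (limit={params.limit})"

# A small model only has to emit structured output like this:
result = handle_tool_call("search_notes", {"query": "unused tags"})
print(result)

# A script-execution server instead asks the model to *write code*,
# i.e. generate a string like the one below correctly in one shot,
# which the server would then execute -- far more ways to go wrong:
script = 'notes = joplin.search("unused tags"); print(notes[:10])'
```

The dedicated-tool path constrains the model's output to a small, checkable vocabulary, which is plausibly why smaller or quantised models cope with it better.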
If anyone else is running local models with Joplin MCP, I'd be curious to hear which models and configurations work for you.
