Hey all,
We've overhauled the belsar-ai joplin-mcp server over the past couple months — it's faster, uses far fewer tokens, and gives you much finer control over your notes.
Here's what changed:
- Script execution instead of 1-1 endpoint mapping. We moved to a guided, semi-isolated JS VM execution environment, inspired by Anthropic's research on code execution with MCP. The result: better performance with significantly lower token usage.
- Human-readable tool output. Output is now pretty-printed with emoji formatting, so you can read it directly in the tool window. The LLM no longer re-prints everything, which cuts clutter and context bloat.
- Surgical note editing. You can now request a table of contents for any note, generated server-side, so only the TOC itself hits your context window. From there, you can read or edit specific sections of a note without pulling the whole thing. The MCP server handles tool selection automatically.
- Per-project notebook scoping (optional). Drop an `mcp-config` directory in the root of any project to set a default notebook and limit which notebooks the MCP server can see. If you're working across multiple repos or jobs, each one gets its own scope: better focus, better privacy.
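A scoping setup might look something like the fragment below. The file name and keys are illustrative guesses, not the documented schema, so check the project's README for the real format:

```json
{
  "defaultNotebook": "work-project-x",
  "allowedNotebooks": ["work-project-x", "shared-specs"]
}
```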
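To give a feel for the script-execution model: instead of the model issuing one tool call per endpoint (search, then fetch each result), it submits a single script that the server runs in the sandboxed VM and returns only the final value. The `api` object below is a stand-in for illustration, not joplin-mcp's actual interface:

```javascript
// Illustrative sketch only: `api` is a hypothetical stand-in for the
// bridge the VM would expose, not joplin-mcp's real API surface.
const api = {
  // Pretend search endpoint: returns matching note stubs.
  searchNotes: (query) => [
    { id: "n1", title: "Roadmap" },
    { id: "n2", title: "Ideas" },
  ],
};

// One round trip instead of N tool calls: search, then reduce the
// results to just what the model needs (titles), all server-side.
const titles = api.searchNotes("project").map((note) => note.title);
console.log(titles);
```

Only `titles` would travel back into the model's context, which is where the token savings come from.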
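The server-side TOC generation can be sketched roughly like this: a single pass over a markdown note that extracts only the headings, so the full note body never enters the context window. The function name and heading regex here are illustrative, not the server's actual implementation:

```javascript
// Hypothetical sketch: build a table of contents from a markdown note
// on the server, so only this small summary reaches the model.
function buildToc(markdown) {
  const toc = [];
  markdown.split("\n").forEach((line, lineNumber) => {
    // Match ATX-style headings: 1-6 '#' characters, a space, then the title.
    const match = /^(#{1,6})\s+(.*)$/.exec(line);
    if (match) {
      toc.push({ level: match[1].length, title: match[2], line: lineNumber });
    }
  });
  return toc;
}

const note = "# Plan\nintro text\n## Tasks\n- item\n## Notes\nmore text";
console.log(buildToc(note));
```

The model can then ask to read or edit just one section by heading, and the server resolves that back to the corresponding line range.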
TLDR: joplin-mcp is a lot faster, leaner on context, and now supports fine-grained editing and per-project privacy scoping.
Our team uses this every day for collaboration, so we're going to keep supporting and refining it. Give it a try, and if you run into any bugs, open an issue on GitHub. We're nice!