I've been working recently on a toolbox, joplin-mcp, to enable LLM agents to interact with Joplin. The goal is to be able to perform actions seamlessly on Joplin's database from within common LLM chat interfaces. This package wraps Joplin's Data API (using @Marph's Joppy) with an API that many LLMs support today (the Model Context Protocol, MCP; see the diagram below). It is worth mentioning a similar package by @pikao that was released recently (providing a read-only toolbox). This way you may instruct an LLM to find notes, edit them, tag them, remove unnecessary ones, and so on. For example, the first thing that I asked the LLM to do was to find forgotten tags that are no longer in use and delete them.
The installation is fairly simple. I included examples for how to connect joplin-mcp with a commercial model (Claude Desktop), which takes about 5 minutes to set up, and a simple offline / open solution based on Ollama.
I tried to provide nearly complete Data API support (excluding resource management for the time being). That said, I'm still thinking of ways to improve these tools. During the installation you'll be able to enable / disable tools, or limit their permissions to view or edit your notes, based on your privacy and security preferences. Furthermore, most chat interfaces will ask you to approve each tool execution in the UI. The complete tool set includes:
I'm trying to think how RAG can be integrated with this MCP. One option is to test how external RAG tools can interact with joplin-mcp. Another option is to support Jarvis embeddings, perhaps by loading and searching its note indexing database. We could also try to design tools with more advanced search methods.
That's very impressive, thanks for sharing this. The kinds of queries it supports go beyond the built-in search engine and its features:
Once configured, you can ask your AI assistant:
"List all my notebooks" - See your Joplin organization
"Find notes about Python programming" - Search your knowledge base
"Create a meeting note for today's standup" - Quick note creation
"Tag my recent AI notes as 'important'" - Organize with tags
"Show me my todos" - Find task items
I think the drawback of these AI-based solutions, at least for non-technical users, is the need to install and set up Python scripts. Even for technical users it's sometimes a bit of a challenge to get Python apps running.
Thanks @laurent! I agree, it's a downside that I haven't been able to overcome yet. This is probably a solution for power users, even though I tried to minimise the setup to 2 lines:
```
pip install joplin-mcp
joplin-mcp-install
```
After these two commands, a user can open the chat app and start typing.
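For reference, Claude Desktop registers MCP servers in its `claude_desktop_config.json` file, so an install step typically just writes an entry like the one below. This is a generic sketch of that config format: the command and env names (`joplin-mcp-server`, `JOPLIN_TOKEN`) are illustrative assumptions, not necessarily what joplin-mcp's installer actually writes.

```json
{
  "mcpServers": {
    "joplin": {
      "command": "joplin-mcp-server",
      "args": [],
      "env": { "JOPLIN_TOKEN": "<your Joplin Web Clipper API token>" }
    }
  }
}
```

The token comes from Joplin's Web Clipper settings, which is how the Data API authorizes external clients.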
Maybe this (or something similar giving MCP capabilities) could be either part of the default Joplin install / app, or installable in one click through a plugin?
A few thoughts:
Is there / should there be a cloud-native MCP for Joplin Cloud subscribers? That way any AI agent could be coupled to my Joplin data through Joplin Cloud, even if Joplin is not installed on the device. This would be an additional selling point for Joplin Cloud vs., for example, DAP.
Is there / should there be some security in place, like an MCP API token or similar, even locally, so that not every random process running on my device behind my back (looking at you, Microsoft and co.) can access my Joplin without me knowing?
Regarding security, I'm not an expert, but the default config uses STDIO to communicate with the chat client. So while it may not be airtight and hardened, random stuff running on your device has no (easy) access to it.
Release v0.4.0 adds a new tool to import notes into Joplin (supporting Markdown, HTML, CSV, TXT, JEX, directories and attachments), displays full notebook paths in tools output, and adds Docker support.
Over the past few months multiple MCP servers for Joplin have been released. With the recent addition of belsar-ai/joplin-mcp, which looks quite impressive, I co-wrote with ChatGPT an overview of the existing MCPs (sorry if I missed any). In order to stay balanced, I asked ChatGPT to crawl over the GitHub repos of these MCPs and extract their features. In my opinion, there is no single “ultimate” MCP, and we probably would have benefited from combining our efforts into a single project (if we could bridge the TS / Python split). But since it’s not very hard to build one, each developer (myself included) ended up building their own server from scratch, tailored to their own taste.
It could be interesting at some point to create a benchmark for personal knowledge management in order to evaluate these servers and their tools, and to understand better what patterns work best with LLMs and Joplin.
Disclosure: I’m the maintainer of alondmnt/joplin-mcp. I’ve tried to describe all projects as neutrally as possible; corrections from other maintainers are very welcome.
There are now several Joplin MCP servers; they differ in scope (notes vs tags/notebooks vs attachments/revisions) and in how they handle context size / filtered content.
Some focus on simple CRUD or read-only search, others try to expose more of Joplin’s structure (tags, links, resources, revisions).
There is no single best one: depending on your use-case you might care more about attachments, privacy / context control, read-only safety, or just simplicity.
High-level comparison
“Filtered/context features” = anything that helps the AI not pull huge amounts of text unnecessarily (previews, metadata-only listings, pagination, line ranges, etc).
Higher-level note tools: find_notes_*, find_in_note (regex in-note), get_links (note links + backlinks), get_note with sections, TOC, and sequential reading; per-tool permissions; multiple transports (STDIO / HTTP / SSE). import_from_file can import files or directories (md/html/csv/jex/generic), supports CSV modes/options, rewrites note links to internal Joplin links, and uploads resources/attachments as part of the import.
Global content-exposure policy: search/listings/individual notes can be set to none / preview / full, with a configurable max_preview_length. get_note can return TOC only, metadata only, or specific line ranges, which helps keep long notes manageable.
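The none / preview / full policy can be sketched in a few lines of Python. This is my own illustration of the documented behaviour (the `max_preview_length` name follows the docs), not the project's actual code:

```python
def apply_exposure_policy(body: str, mode: str, max_preview_length: int = 200) -> str:
    """Return a note body according to an exposure policy:
    'none' hides content, 'preview' truncates, 'full' returns everything."""
    if mode == "none":
        return ""
    if mode == "preview":
        if len(body) <= max_preview_length:
            return body
        return body[:max_preview_length].rstrip() + "…"
    return body  # mode == "full"
```

Applying such a policy at the server rather than leaving it to the model guarantees that, say, search results never inject full note bodies into the context window.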
Broad Joplin API coverage: tools for attachments/resources (list, metadata, download/upload, update, delete) and revision history; many “filtered retrieval” tools (by notebook, tag, resource, etc.). Server description includes explicit search/discovery strategy for the LLM.
Many list/filtered tools accept fields, limit and sorting, so the client can request metadata-only (e.g. id,title,updated_time) or smaller subsets instead of full note bodies. No global preview policy; it’s per-call and driven by how the client uses fields/limit.
One of the earlier servers. Tools: list_notebooks, search_notes, read_notebook, read_note, read_multinote. Designed around a “search → then read” workflow, with good logging and clear examples.
search_notes returns snippets plus note + notebook IDs; the recommended pattern is search_notes → feed selected IDs into read_multinote, which is context-friendlier than reading everything. Still returns full note bodies when reading; no explicit preview/length knobs.
Straightforward server with read_multinote and import_markdown. Encourages a similar staged “search → pick note IDs → batch read” workflow, with nice docs and logging.
Staged retrieval (search → read_multinote) plus numeric limits. As with jakubfrieb’s server, context control comes from how you chain the tools, not from preview/TOC features.
Adds scan_unchecked_items to find unchecked todos across notebooks; includes create/delete/move tools for notes and notebooks. Geared toward “review my tasks in this notebook tree” type use.
List/search tools support basic limit. scan_unchecked_items gives a condensed view of incomplete tasks; when you fetch a note, you typically get full content. No explicit previews/sections.
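The idea behind a tool like scan_unchecked_items is easy to see in a sketch: scan Markdown checkboxes across notes and return only the open ones. This is a hypothetical illustration of the pattern, not that server's implementation:

```python
import re

def scan_unchecked_items(notes: dict[str, str]) -> dict[str, list[str]]:
    """Collect unchecked Markdown checkboxes ('- [ ] item') per note title.

    A condensed task view like this lets the model review open todos
    without pulling full note bodies into its context."""
    pattern = re.compile(r"^\s*[-*]\s\[ \]\s+(.*)$", re.MULTILINE)
    results = {}
    for title, body in notes.items():
        items = pattern.findall(body)
        if items:
            results[title] = items
    return results
```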
Notes-only server: full-text search, read, create/update, delete (trash or permanent), plus import_markdown. Aimed at simple Claude/AI integration with uv-based setup.
Search/list tools take a limit to cap result count; individual notes are returned in full. Context control is mostly “search first, then read a small number of specific notes.”
Read-only by design with rate limiting and tests. Good if you want an AI to search and navigate your notes but never modify them.
Safety is primarily “no write tools” + rate limiting. Retrieval returns full note content for selected notes; there’s no extra preview/TOC layer, but you can’t accidentally edit anything.
This is intentionally simplified; several of these repos have more tools than fit in one row.
Which one might you want?
Very roughly:
Privacy / explicit context management are important → start with alondmnt/joplin-mcp.
You care about attachments/resources and revision history → look at belsar-ai/joplin-mcp.
You want read-only search + navigation, with staged retrieval → jakubfrieb/mcp-joplin.
You want todo-style summaries across notebooks → happyeric77/mcp-joplin (scan_unchecked_items).
Zoom-in: filtered content & note-level design
(alondmnt/joplin-mcp vs belsar-ai/joplin-mcp)
Two extensive MCPs, with different design choices around note reading, filtered content, and resources. This section isn’t about “better/worse”, just surfacing the differences.
“Top-level” search tools in the docstring: find_notes (text or list all, paginated), find_notes_with_tag, find_notes_in_notebook – explicitly marked as the main functions for text, tag, and notebook searches.
Multiple listing/filtered tools: notes by notebook, notes by tag, notes attached to a resource, notes in a revision chain, etc., generally with limit and sorting. Very API-like filtered retrieval.
Smart note reading
get_note has “smart display”: short notes → full content, long notes → TOC by default. Supports section= (by heading text/slug/number), toc_only, metadata_only, force_full, plus sequential reading via start_line + line_count.
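The short-notes-in-full, long-notes-as-TOC rule can be sketched as follows. The threshold value and the function itself are my own illustration of the documented behaviour, not the project's code:

```python
import re

def smart_display(body: str, toc_threshold: int = 2000) -> str:
    """Sketch of a 'smart display' rule: short notes come back in full,
    long notes come back as a table of contents built from their headings."""
    if len(body) <= toc_threshold:
        return body
    headings = re.findall(r"^(#{1,6})\s+(.*)$", body, re.MULTILINE)
    toc = [f"{'  ' * (len(hashes) - 1)}- {title}" for hashes, title in headings]
    return "\n".join(toc) if toc else body[:toc_threshold]
```

The point of such a default is that the model sees the note's structure first and can then request a specific section or line range, rather than ingesting the whole body.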
get_note just returns the note as stored in Joplin; logic like “only show part of this” or “treat long notes differently” is left to the client/model.
In-note search
find_in_note runs regex search inside a single note with limit + offset and flags (case_sensitive, multiline, dotall), returning paginated matches with context.
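Paginated in-note regex search follows a common pattern: collect all matches with line context, then slice out one page. A minimal sketch (again illustrative, not the project's code; flags beyond case sensitivity are omitted):

```python
import re

def find_in_note(body: str, pattern: str, limit: int = 5, offset: int = 0,
                 case_sensitive: bool = False) -> list[dict]:
    """Paginated regex search inside a single note body.

    Returns one page of matches with line numbers and line context,
    so a client can step through a long note without reading all of it."""
    flags = 0 if case_sensitive else re.IGNORECASE
    regex = re.compile(pattern, flags)
    matches = []
    for lineno, line in enumerate(body.splitlines(), start=1):
        for m in regex.finditer(line):
            matches.append({"line": lineno, "match": m.group(0), "context": line})
    return matches[offset:offset + limit]
```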
No dedicated in-note search tool; you’d usually get_note and then search inside it with the LLM.
Links / graph features
get_links parses note links of the form [text](:/noteId[#section]) and returns outgoing links and backlinks, including section slugs and line context, effectively exposing the note graph.
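Extracting the outgoing half of this link graph is a small regex job, since Joplin note IDs are 32 hex characters. A sketch of the parsing side (backlinks would require scanning the other notes in the vault; this is my illustration, not the project's code):

```python
import re

# Joplin internal links look like [text](:/<32-hex-char note id>),
# optionally followed by a #section slug.
LINK_RE = re.compile(r"\[([^\]]*)\]\(:/([0-9a-f]{32})(?:#([\w-]+))?\)")

def get_outgoing_links(body: str) -> list[dict]:
    """Extract outgoing Joplin note links (note id + optional section slug)."""
    return [
        {"text": m.group(1), "note_id": m.group(2), "section": m.group(3)}
        for m in LINK_RE.finditer(body)
    ]
```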
No note link-graph tools, but strong attachment graph support via tools like get_note_attachments and get_resource_notes (notes ↔ resources).
Attachments & revisions
Focus is notes/notebooks/tags plus import. There is no general “resource CRUD” surface, but import_from_file will import attachments/resources and convert note links to internal Joplin links when bringing in files/directories (mixed formats and RAW exports).
Strong ongoing support: list/get/update/delete resources, download/upload attachments, plus list/get revisions. Good if you want to programmatically inspect and manage files and history in an existing vault.
Global settings for search_results, individual_notes and listings: each can be none, preview, or full, plus a max_preview_length. This acts as a policy layer over what tools are allowed to return (e.g. “search results = previews only”).
Many list/“filtered” tools support a fields parameter so the client can request metadata-only (id,title,updated_time, etc.) or include/exclude body, and limit to cap counts. There’s no single global exposure flag; it’s per-call via fields/limit.
Pagination / incremental reading
Search tools use limit + offset; find_in_note is paginated; get_note supports line-range reading with start_line + line_count. That makes it easier to step through long notes gradually.
List tools use limit and sorting (order_by, order_dir); pagination is mainly “top N by some ordering”. No built-in line-range reading for individual notes.
Tool permissions & safety
Configurable per-tool permissions (e.g. enable/disable create_note, delete_note, get_all_notes, import_from_file). Some “heavy” tools like get_all_notes and import_from_file are disabled by default.
Some destructive operations (e.g. deleting resources) perform runtime checks before acting (for example, listing notes that still use the resource and warning instead of immediately deleting). All tools are generally available once the server is configured.
Caveats
This is just a snapshot based on the public repos at the time of writing; things will definitely evolve.
I tried to stick to documented behaviour (and obvious code paths) rather than guessing about internals.
If you maintain any of these projects (or have one I missed), please feel free to jump in with corrections, clarifications, or additional features—especially around context/filtered-content behaviour or new tools.
shikuz - hello! Belsar-ai guy here. I think you were too modest about alondmnt/joplin-mcp. When we first reviewed the ecosystem, your project stood out for code quality and release cadence. It’s clear a lot of care went into it.
We built belsar-ai/joplin-mcp mainly for architectural reasons. The alondmnt project is a comprehensive Python application with a bundled MCP server; Belsar needed a vanilla MCP server without the associated application. For our use, that's the cleanest long-term design. That's also why the belsar-ai project is ~3,000 LOC compared to the ~10,000 LOC of application code in alondmnt.
Belsar doesn’t need to hold this package forever. We wrote it in TypeScript under an Apache 2.0 license so that if the Joplin project ever wants to take ownership, it’s in the language their team uses and under a license that gives them full authority over it. If that ever comes to pass, we’d work with our Google friends in the UX group to get it listed as an official MCP extension for broader distribution: https://geminicli.com/extensions/