I am applying to Project 1: AI-supported search for notes
Questions
The question: who provides the API key?
For the AI search feature to work, we need an embedding model (to convert notes and queries into vectors for semantic similarity search). This requires calling an AI API. So the question is:
- **Does Joplin provide a central API key?** Probably not viable for an open-source project: it would mean Joplin paying for every user's API calls.
- **Do we let users Bring Their Own Key?** The user enters their API key once in a new Settings → AI tab, picks their preferred provider (OpenAI, Anthropic, Gemini), and all AI features in Joplin use that config.
- **What about privacy-first / offline users?** Support Ollama as a local provider option (no API key needed); it runs on the user's machine. This aligns well with Joplin's offline-first philosophy.
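To make the BYOK idea concrete, here is a minimal sketch of what the settings shape could look like. The type and function names are illustrative assumptions, not Joplin's actual settings keys:

```typescript
// Hypothetical shape for the Settings → AI tab config described above.
type AiProvider = 'openai' | 'anthropic' | 'gemini' | 'ollama';

interface AiSettings {
  provider: AiProvider;
  apiKey?: string;  // not needed for Ollama (runs locally)
  baseUrl?: string; // e.g. a local Ollama endpoint
}

// Only cloud providers need a key; Ollama is local.
function requiresApiKey(settings: AiSettings): boolean {
  return settings.provider !== 'ollama';
}
```

All AI features would read this one config instead of each asking for its own key.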
Rather than hardcoding one provider into the search feature, I am thinking we build a shared `joplin/ai-provider` abstraction layer (using the AI SDK by Vercel) that sits between all Joplin AI features and the actual providers.
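A rough sketch of what that abstraction layer could look like. This is a hedged illustration: the interface and class names are my assumptions, and a real implementation might delegate to the Vercel AI SDK's uniform embedding calls rather than a hand-rolled stub:

```typescript
// Features depend only on this interface, never on a concrete provider.
interface EmbeddingProvider {
  readonly name: string;
  embed(texts: string[]): Promise<number[][]>;
}

// Stand-in for a real provider (OpenAI, Ollama, etc.) for illustration only:
// it returns a trivial deterministic "embedding" instead of calling a model.
class FakeLocalProvider implements EmbeddingProvider {
  readonly name = 'local-stub';
  async embed(texts: string[]): Promise<number[][]> {
    return texts.map((t) => [t.length, (t.match(/\s/g) ?? []).length]);
  }
}

// The search feature only sees the interface, so swapping providers
// (cloud vs. local) never touches feature code.
async function embedNotes(provider: EmbeddingProvider, notes: string[]): Promise<number[][]> {
  return provider.embed(notes);
}
```

Swapping OpenAI for Ollama then becomes a config change, not a code change.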
So let's answer your questions one by one:
- No, Joplin doesn't provide a central API key.
- Yes, most of the available Joplin AI plugins let users enter their own API key.
- I don't know about every plugin, but Jarvis definitely supports this. It also allows users to choose custom offline models, so users who prefer a privacy-first setup can run models locally instead of using cloud APIs.
Regarding the Vercel AI SDK, I can't say; I've never used it.

Laurent said he would prefer for it to be part of the core application, not a plugin. Take a look.
My bad, I thought you were asking about implementing it as a plugin. Building it as part of the core application would be quite challenging.
My approach for this idea is to avoid making external APIs a requirement.
The core semantic search would run locally using lightweight embedding models, so no API key is needed for basic functionality.
External providers can be supported optionally (user-configured), but the default design is local-first to align with Joplin’s privacy and offline-first philosophy.
From an architecture perspective, this would be implemented as a hybrid retrieval layer on top of the existing search rather than relying on an external AI service.
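The hybrid retrieval idea can be sketched as follows. This is a minimal illustration under stated assumptions: the blending weight, the normalised keyword score, and the `ScoredNote` shape are all my inventions, not Joplin's actual search internals:

```typescript
// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

interface ScoredNote {
  id: string;
  keywordScore: number; // from the existing keyword search, normalised to [0, 1]
  embedding: number[];  // from the local embedding model
}

// Blend keyword and semantic relevance; alpha = 0.5 is an arbitrary default.
function hybridRank(queryEmbedding: number[], notes: ScoredNote[], alpha = 0.5): string[] {
  return notes
    .map((n) => ({
      id: n.id,
      score: alpha * n.keywordScore + (1 - alpha) * cosine(queryEmbedding, n.embedding),
    }))
    .sort((a, b) => b.score - a.score)
    .map((n) => n.id);
}
```

The point of the design is that the semantic score is purely additive: with no embeddings available, results degrade gracefully to the existing keyword ranking.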