Need advice for integration with my local LLM

Hi everyone, first of all, thanks for your time and this amazing project. I have been using it for years, but now I need to integrate my Joplin notes with a personal project, and I would love to hear your opinion on the best approach.


I need to find a way to upload my notes (which are encrypted) as .md files to a local LLM as context, but I don't know what the best approach would be. For more info, please read below.


I'm running a local LLM on my PC, and I want to feed it my personal notes from Joplin as context. I'm already using PrivateGPT (which supports importing .md files) to parse and process the notes; however, I still need to implement a few features on my side so that it works as expected:

  1. I need to be able to retrieve the content of all the notes. Since they are encrypted, I can't just upload them from disk. What would be the best approach here? I saw that they are stored as clear text inside the database.sqlite file. Should I export them from there? Or is there an easier way to write all my notes to disk unencrypted?
  2. I need to re-upload each note every time it gets modified or a new note is created. I was thinking of creating a plugin that, whenever either of those two events happens, exports the note to a folder so I can upload it again to the local LLM (or uploads it directly).
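On point 1, one way to get the plaintext without touching database.sqlite is Joplin's Data API, which is served by the Web Clipper service (default port 41184) and returns notes already decrypted. A rough TypeScript sketch, assuming Node 18+ (for the global `fetch`) and a token copied from the Web Clipper options:

```typescript
// Sketch: pull every note as plaintext via Joplin's Data API.
// Assumes the Web Clipper service is enabled and `token` holds the
// authorization token shown in the Web Clipper options.

const BASE = "http://localhost:41184";

// The API paginates results: each response has `items` and `has_more`.
function notesUrl(token: string, page: number): string {
  const fields = "id,title,body,updated_time";
  return `${BASE}/notes?token=${token}&fields=${fields}&page=${page}`;
}

async function fetchAllNotes(token: string) {
  const notes: any[] = [];
  for (let page = 1; ; page++) {
    const res = await fetch(notesUrl(token, page));
    const data = await res.json();
    notes.push(...data.items);
    if (!data.has_more) break;
  }
  return notes; // each note's `body` is the decrypted Markdown
}
```

From there you could write each `body` to a `.md` file, or POST it straight to your LLM stack.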

Is there any good way of implementing this? Maybe I can do both things in a plugin running inside Joplin that tracks each note and pushes it to the LLM if it has never been processed or its content has changed. What do you think?

I have never developed a plugin before, so if you could point me in the right direction I would really appreciate it. For example:

  1. Is there any onCreate or onUpdate callback that I can subscribe to (or a webhook) to track those events?
  2. How should I store the state of each note to decide whether I need to re-upload it or not?
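For reference, the plugin API does expose a change event, `joplin.workspace.onNoteChange`, which fires on note creation, update, and deletion. A minimal sketch of what such a plugin could look like (the ingestion URL and the in-memory state map are my assumptions, not anything PrivateGPT-specific):

```typescript
// Sketch of a Joplin plugin that pushes notes to an external endpoint
// whenever they are created or updated. The `joplin` global is provided
// by the plugin runtime; the guard below keeps the pure logic testable
// outside Joplin.
declare const joplin: any;

// Last updated_time we pushed, keyed by note id (kept in memory here;
// a real plugin could persist this, e.g. with the settings API).
const pushed = new Map<string, number>();

// A note needs pushing if we have never seen it, or it changed since.
function needsPush(state: Map<string, number>, id: string, updatedTime: number): boolean {
  const last = state.get(id);
  return last === undefined || updatedTime > last;
}

async function onChange(id: string) {
  const note = await joplin.data.get(["notes", id], {
    fields: ["id", "title", "body", "updated_time"],
  });
  if (!needsPush(pushed, note.id, note.updated_time)) return;
  // Hypothetical ingestion endpoint of the local LLM stack:
  await fetch("http://localhost:8000/v1/ingest/text", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ file_name: `${note.title}.md`, text: note.body }),
  });
  pushed.set(note.id, note.updated_time);
}

if (typeof joplin !== "undefined") {
  joplin.plugins.register({
    onStart: async () => {
      await joplin.workspace.onNoteChange((e: any) => onChange(e.id));
    },
  });
}
```

Comparing `updated_time` against the last pushed value also answers question 2: the note's own timestamp is the only state you need per note.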

Thank you again for your time!

Try setting up PrivateGPT to work with Jarvis, an AI assistant plugin for Joplin.

If you succeed, you can add instructions to the guide, if it doesn't already have applicable instructions for PrivateGPT.

If you have questions, you can ask them in the plugin forum thread.


Looks like PrivateGPT has an endpoint at port 8000, so setting it up is likely going to be similar to Ollama/LiteLLM in the Jarvis guide.

If you want to do it the other way around (manage it externally instead of inside Joplin), take a look at the LangChain / LlamaIndex APIs for Joplin. These extensions can be used to upload all your notes to the local LLM. They do not include logic for updates (on creation / modification) yet.


Thank you for all the insights. I think I may replace PrivateGPT with Jarvis; I will be comparing both options in the coming days. At the moment I'm just doing a RAW export of all the .md files and uploading them as context to the vector DB of PrivateGPT.

I will compare the searches using the same LLM model with both PrivateGPT and Jarvis. I'm still learning about LLMs, but being able to use the best possible context on each query is my main priority, as the questions are going to be related to the info stored in my notes. For example: "Give me a list of all the notes where I have reported a syntax error in the code" or "Give me a list of hostnames that appear in my notes where they use Laravel". Do you think these types of questions are viable?

Thanks again for your time!

I'll be interested to read the results of this comparison.

I'm not familiar with how PrivateGPT works and how it handles queries such as the ones that you're interested in. In any case, what Jarvis does is calculate the embedding of your query, and then search for the most similar text sections in your Joplin database. There are a few additional tricks in the guide and on the forum for steering the response in the right direction (it's worth reading these), but that's the basic mechanism (similar to this). This means that Jarvis will not use the LLM to iteratively construct text search queries until the results match your request, nor run other iterative processes. (That could be an interesting project though.)
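That embedding-based retrieval step can be illustrated with a toy sketch (real embeddings come from a model such as USE; the tiny vectors below are just placeholders):

```typescript
// Illustrative sketch of embedding retrieval: rank note chunks by
// cosine similarity to the query embedding, then keep the top k.

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface Chunk { noteId: string; text: string; embedding: number[]; }

function topK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, k);
}
```

The retrieved chunks are then pasted into the prompt as context, which is why questions that aggregate across many notes ("list all notes where...") are harder: only the top-k most similar sections make it into the prompt.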

Note that Jarvis works with external models, and you can define which model to use for generating the response (default: OpenAI, not the one preferred by privacy-conscious users) and which model to use for generating note embeddings (default: the offline USE). Also note that if you wish to process code sections, you will need to enable that in the settings.
