Plugin: Jarvis (AI assistant) - also on mobile [v0.12.0, 2025-12-15]

This is great.

You got it exactly right. Once you write a prompt, the plugin tries to find note excerpts that are semantically related to your question, and lets the LLM generate an answer based on them.

Now please open a new note, write “summarise all issues I have had with my aquarium filter”, and try to run the command Chat with your notes. If you’re unsure why you received a specific response, I suggest bringing the cursor back to the end of your prompt (or deleting the LLM output) and running Preview chat notes context. A dialog will pop up that shows your prompt (or your entire conversation with the model), as well as the text from your notes that was sent to the LLM. You can check whether it’s relevant to your question, and probably see how it’s related to the output.

BTW, you may increase the length of the text that is extracted from notes using the setting Notes: Context tokens in “Jarvis: Related Notes”.
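For intuition, the context-tokens budget works roughly like this sketch (an illustration only; the 4-characters-per-token approximation and the function name are mine, not the plugin's code):

```python
def trim_to_token_budget(chunks, budget_tokens, chars_per_token=4):
    # Keep whole note chunks until the token budget is spent.
    # Assumption: tokens approximated as ~4 characters each; the real
    # tokenizer used by the model will count differently.
    kept, used = [], 0
    for chunk in chunks:
        cost = len(chunk) // chars_per_token + 1
        if used + cost > budget_tokens:
            break
        kept.append(chunk)
        used += cost
    return kept
```

Raising the setting simply allows more (or longer) chunks through before the budget is exhausted.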

Hmmm,

I think there is an issue.

If I use "Preview chat notes context", I can see the prompt and the text from my notes, no issue.

But if I use "Chat with your notes" I am now getting an error popup:

and in Jan API logs I see:

[13:53:05] DEBUG Received request: POST /v1/chat/completions Some("127.0.0.1:1337") None None
[13:53:05] DEBUG Processing path: /v1/chat/completions, removing prefix: /v1
[13:53:05] DEBUG Proxying request to: http://127.0.0.1:39291/v1/chat/completions
[13:53:05] DEBUG Received response with status: 400 Bad Request
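One way to narrow down a 400 like this is to send a minimal request to the same endpoint yourself; if this bare-bones payload succeeds, the problem likely lies in the extra fields of the request Jarvis builds. A sketch in Python, assuming Jan's default port from the log above (the model name is a placeholder):

```python
import json
import urllib.request

def build_chat_payload(model: str, prompt: str) -> dict:
    # Minimal OpenAI-compatible chat payload; a malformed "messages"
    # list or an unknown model name are common causes of a 400.
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def post_chat(url: str, payload: dict) -> dict:
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:  # raises HTTPError on 400
        return json.loads(resp.read())

# Example (run against a local Jan server; the model name is whatever
# Jan currently has loaded):
# post_chat("http://127.0.0.1:1337/v1/chat/completions",
#           build_chat_payload("my-local-model", "hello"))
```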

I think the issue may be with my Jarvis plugin settings; when I click Apply after setting options under “Jarvis: Chat” I get this error:

How can I determine what I have done wrong? Is there a plugin debug mode or log I can examine?

Is it possible that you have text entered in one of the Prompts: fields in the advanced settings section of “Jarvis: Chat”? I suggest clearing them for now (or formatting them as valid JSON).
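One quick way to check a value before pasting it into one of those fields is to run it through a JSON parser (the example value here is hypothetical, not the field's actual schema):

```python
import json

# Hypothetical value pasted into a Prompts: field
candidate = '{"note": "hello"}'

try:
    json.loads(candidate)
    print("valid JSON")
except json.JSONDecodeError as err:
    # An unparsable value would likely make Jarvis fail similarly
    print(f"invalid JSON: {err}")
```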

I’ll try to reproduce this with Jan. I assume that this also happens when running Chat with Jarvis (not with notes)?

Try this: filter/search for log messages containing “jarvis”.

v0.10.5

  • added: 'None' model to disable chat / generation features
  • improved: error handling for JSON responses in OpenAI model queries

Thanks for this new version. I had already tried Jarvis but never succeeded in keeping it because of some issues. Now I really want to test it, and I have some questions and requests:

  • Is it possible to completely disable Related Notes? I have tried to use a local model (LM Studio + Gemma, which works with Chat) but it doesn’t seem to succeed, and I don’t want to send all my notes to online services. I would prefer stopping it completely. Or maybe I should install a specific text embedding model in LM Studio?
  • Did you manage to make Chat work with LM Studio and Mistral? I have errors about a Jinja template. I can provide more details if needed.

Thanks for your work!

LM Studio embeddings

I updated the guide with instructions for using LM Studio for related notes.
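As a quick sanity check outside of Jarvis, LM Studio serves an OpenAI-compatible /v1/embeddings endpoint; a minimal probe, assuming LM Studio's default port 1234 and the model name mentioned later in this thread:

```python
import json
import urllib.request

def build_embedding_request(model: str, text: str) -> bytes:
    # OpenAI-compatible embeddings request body
    return json.dumps({"model": model, "input": text}).encode("utf-8")

def get_embedding(base_url: str, model: str, text: str) -> list:
    req = urllib.request.Request(
        f"{base_url}/v1/embeddings",
        data=build_embedding_request(model, text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return body["data"][0]["embedding"]

# Example (port and model name depend on your LM Studio setup):
# vec = get_embedding("http://127.0.0.1:1234",
#                     "text-embedding-nomic-embed-text-v2-moe",
#                     "aquarium filter maintenance")
```

If this returns a vector, the endpoint and model are fine, and the problem lies in the Jarvis configuration instead.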

LM Studio with Mistral

I just tested mistralai/mistral-small-3.2 on LM Studio v0.3.24 and Jarvis v0.10.5 and it seems to work with the following configuration:

Disabling related notes

If you do choose to disable this feature, follow these steps:

  1. Select a local model such as Ollama for related notes, and leave the advanced settings (model, endpoint) empty.
  2. Hide the related notes panel (Tools -> Jarvis -> Toggle related notes panel).
  3. When Joplin starts, select Cancel when the first model error pops up.

No notes will be processed, and you won't get additional errors until the next time Joplin starts.

Hello,

thanks for your detailed answer. I think I will keep Related Notes with an offline service because it’s quite interesting :slight_smile:

About embeddings, I have downloaded nomic-embed-text-v2-moe in LM Studio and loaded it,

then configured Jarvis with Ollama for Related Notes,

and these settings:

Here are the logs I can see in my LM Studio.

If I delete all Jarvis data and restart, it shows the first-time warning and says “Model could not be loaded” in the Related notes tab.

What can I do?

***

About Mistral, I have tried again and I get this message:

I’m going to try the same model as you.

Sorry, I have updated LM Studio to the latest version, and the embedding now works with

text-embedding-nomic-embed-text-v2-moe

and I’m now downloading other text embedding models that have a better context length limit, thanks.

About Mistral, I have the same error with the old 7B Mistral model, and Small 3.2 is too big for my computer, so I will try others or online ones. Thanks for your help!


Still working with Jarvis and I really like it, thanks for this plugin.

Would it be possible, in the advanced options, to fine-tune the Auto-complete prompt? I love it, but I would like it to enable Markdown and give a bit longer answers. I have seen the prompt, and I would be happy to modify it a bit.

In fact, most of my usage of Jarvis is to complete an existing note: I don’t want the Chat GUI, I just want Jarvis to add new lines generated based on the beginning of the note, and that’s exactly what Auto-complete does. I would just want it to be less restrictive and to enable Markdown.

Thanks

Yes, I'll add it to the next release.


Hey there, since the latest version of Jarvis (or maybe even the one before that, not quite sure) I keep getting this error when using the gemini embedding-004 model when generating the notes db:

Error: Gemini embedding failed: [GoogleGenerativeAI Error]: Error fetching from https://generativelanguage.googleapis.com/v1beta/models/text-embedding-004:embedContent: [400 ] Request payload size exceeds the limit: 36000 bytes.

The notes db has about 2500 notes, and it used to work before, so I’m not sure whether it’s a change in the API or in the plugin. I have found no way to adjust the payload size, which apparently is different from the token size, so I’m not sure what to do. The error occurs during db generation around 1700 notes, if that’s any help.

Thanks!

Hi @yodahome!

I was able to generate a gemini embedding-004 database with about 1000 notes smoothly, but my Notes: Max tokens is set to 128 instead of the default 512 (smaller chunks), and my chunks include only Notes: Embed last heading in chunk (the rest of the checkboxes, like title, leading headings, and tags, are disabled). Perhaps you have a note with many tags and very long headings, and the aggregated chunk exceeds the payload size limit.
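Since the limit in that error is on request bytes rather than tokens, chunk byte size is what matters; a rough way to spot oversized chunks (the 36000-byte limit is taken from the error message above, and the helper names are mine):

```python
LIMIT_BYTES = 36000  # from the Gemini 400 error quoted above

def chunk_bytes(chunk: str) -> int:
    # The payload limit counts bytes, not tokens or characters;
    # multi-byte characters (German umlauts, for example) make the
    # byte count larger than the character count.
    return len(chunk.encode("utf-8"))

def oversized(chunks):
    # Return the chunks that would exceed the payload limit on their own
    return [c for c in chunks if chunk_bytes(c) > LIMIT_BYTES]
```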

Hey @shikuz, thanks for the hints. I tried to tinker with those settings, but without success. Lowering the max tokens leads to fewer notes being processed (as changing the setting always prompts a DB rebuild), and so does increasing the value, for some odd reason (I tried up to 1024). Between 1200 and 1500 notes are processed before the same error message appears. I also disabled all the checkboxes, but that didn’t change anything afaics. I used the offline Universal Sentence Encoder and that went through (unsurprisingly), but my notes are mostly in German. Furthermore, I tried switching to multilingual-001, where I get to 560 notes at 128 max tokens, but the error message is truncated, and I can’t see if it’s the same. Any ideas what else I could check?

Thanks for checking, @yodahome.

  1. I'm working to improve the error handling in Jarvis. I prepared this experimental pre-release that you can try. On errors, the plugin will: (a) report which note failed (so you can exclude it by tagging it with exclude.from.jarvis; you may also try to figure out what's special about it, or share it with me for testing); (b) give you the option to skip this note and continue processing the rest.

  2. BTW, I discovered that gemini-text-embedding-004 is no longer recommended (see here), while gemini-embedding-001 is the go-to model. These things change rather often (especially with Gemini, it feels). I should probably deprecate 004 in the next release, even though, as I wrote earlier, it still functioned when I tested it.
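The skip-and-continue behavior described in item 1 can be sketched roughly as follows (hypothetical helper names, not the plugin's actual code):

```python
def embed_all(notes, embed, ask_user):
    # notes: list of {"id": ..., "body": ...} dicts
    # embed: function mapping text -> embedding vector (may raise)
    # ask_user: callback returning "skip" or anything else to abort
    results, failed = [], []
    for note in notes:
        try:
            results.append((note["id"], embed(note["body"])))
        except Exception as exc:
            if ask_user(note, exc) == "skip":
                # Report the failing note so it can be tagged
                # exclude.from.jarvis, then continue with the rest
                failed.append(note["id"])
                continue
            raise
    return results, failed
```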


Thanks @shikuz the experimental version was of great help. There are two extensive notes that apparently caused a problem because they were web pages added via the web clipper (which I had already forgotten about), one with actual HTML code, the other in Markdown. I can send you those as jex if that helps, I tagged them not to be scanned, but I had to restart Joplin before Jarvis actually stopped trying to add the notes. I also switched to the 001 model and so far, it seems to work fine.

Great to hear @yodahome! Yes, please share them if you can (DM).

v0.11.0

  • added: OCR text indexing
    • you can now chat with your note attachments too
  • added: doc/query conditioning (embeddings v3)
    • this is expected to improve semantic search
    • you will be prompted to rebuild your Jarvis database
  • added: keep response text selected for accept/reject/regenerate
  • added: research: pubmed database paper search
  • added: setting: autocomplete prompt (for @fredv)
  • added: setting: notes embeddings timeout
  • improved: upgraded models list
    • gpt-5
    • claude 4.1
    • gemini 2.5
  • improved: html note processing
  • improved: embeddings error handling: report note ID / title, retry / skip note
  • improved: openai error message handling
  • improved: research: paper ranking with new settings
  • improved: research: prompts and output
  • improved: decrease default temperature setting
  • fixed: claude-opus support
  • fixed: claude max_tokens setting
  • chore: move most logs to debug log

I am impressed with your work on Joplin and Jarvis. I would appreciate information as to the role the database plays in chatting with the notes and Jarvis. Is there any Jarvis function that allows the Jarvis model to have information about all the notes/notebooks in Joplin? Thank you.


Welcome @DW413!

The database enables Jarvis to attach relevant chunks of information from your notes to the current chat in order to answer your questions more accurately. Whenever you execute Chat with your notes, Jarvis processes the current conversation and searches for relevant notes in its database. This step is skipped when commands such as Chat with Jarvis or Ask Jarvis are used. By default, Jarvis' database contains all your notes, so chunks from any note may potentially be added to the conversation. However, you may exclude specific notes from the database (tag them with exclude.from.jarvis) or entire notebooks (execute the command Exclude notebook from note DB). You may also guide Jarvis to select or ignore certain notes with advanced chat features.
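For intuition, the retrieval step can be sketched as a cosine-similarity search over embedded chunks (a simplification; Jarvis's actual chunking and scoring may differ):

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_chunks(query_vec, chunk_db, k=3):
    # chunk_db: list of (chunk_text, embedding) pairs from the note database;
    # return the k chunks most similar to the embedded query
    ranked = sorted(chunk_db, key=lambda c: cosine(query_vec, c[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The selected chunks are then prepended to the conversation that is sent to the LLM.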

Perhaps the table at the bottom here may also help.
