Plugin: Jarvis (AI assistant) - also on mobile [v0.12.0, 2025-12-15]

Jarvis uses embeddings to search the content of your notes: based on the context and the prompt that you provide, it looks for the most similar text chunks. You can see exactly what content is sent to the model by placing your cursor after the prompt and running the command Preview chat notes context. I expect to see there (in purple) relevant chunks from the note you provided. If they are not relevant, or not long enough, you can do a few things, depending on the capabilities of the models that you have access to:

  1. Send more excerpts to the model.
    • Increase Chat: Memory tokens in the Jarvis: Chat settings.
    • You need a model with a large context window for that.
  2. Switch to a better embedding model than the default one.
  3. Send a larger fraction (perhaps even all) of the content of each note it finds (or each note you provide).
    • Increase the two settings Notes: Number of leading / trailing blocks to add in Jarvis: Related notes.
  4. Chunk your notes into smaller blocks (for a more specific representation), or into larger ones to return longer consecutive blocks each time.
    • Decrease or increase Notes: Max tokens in Jarvis: Related notes.
    • Note that this will require reprocessing your entire note database!
  5. If all else fails (although it shouldn't, if the settings are correct), use "Ask Jarvis" and copy-paste the content of your specific note into the prompt window, or chat with Jarvis at the bottom of the notes you would like to summarise.
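To make the retrieval step above concrete, here is a minimal sketch (not Jarvis's actual source) of how embedding-based chunk selection typically works: each note chunk gets an embedding vector, chunks are ranked by cosine similarity to the prompt's embedding, and the top chunks are added to the chat context until the memory-token budget (the Chat: Memory tokens setting) is spent. The `Chunk` type, the token estimate, and all names here are illustrative assumptions.

```typescript
// Illustrative sketch: rank note chunks by cosine similarity to the
// prompt embedding, then fill the context up to a token budget.
interface Chunk {
  text: string;
  embedding: number[]; // precomputed by the embedding model
}

// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Pick the most similar chunks until the memory-token budget is exhausted.
function selectContext(
  promptEmbedding: number[],
  chunks: Chunk[],
  memoryTokens: number,
): Chunk[] {
  const ranked = [...chunks].sort(
    (x, y) =>
      cosine(promptEmbedding, y.embedding) -
      cosine(promptEmbedding, x.embedding),
  );
  const selected: Chunk[] = [];
  let used = 0;
  for (const chunk of ranked) {
    const tokens = Math.ceil(chunk.text.length / 4); // rough token estimate
    if (used + tokens > memoryTokens) break;
    selected.push(chunk);
    used += tokens;
  }
  return selected;
}
```

Under this picture, raising the memory-token budget (item 1) lets more chunks through the loop, while re-chunking (item 4) changes what each candidate in `chunks` looks like, which is why it requires re-indexing the whole database.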