Jarvis uses embeddings to search the content of your notes: based on the context and prompt that you provide, it looks for the most similar text chunks. You can see exactly what content is sent to the model by placing your cursor after the prompt and running the command `Preview chat notes context`. I expect to see there (in purple) relevant chunks from the note you provided. If they are not relevant, or not lengthy enough, you can do a few things, depending on the capabilities of the models that you have access to:
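As a rough illustration of what "most similar text chunks" means (this is a toy sketch, not Jarvis's actual code, and `embed` here is a stand-in for a real embedding model):

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    # Stand-in for a real embedding model: a toy bag-of-characters
    # vector, so the example runs without external dependencies.
    vec = np.zeros(256)
    for ch in text.lower():
        vec[ord(ch) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def top_chunks(prompt: str, chunks: list[str], k: int = 3) -> list[str]:
    # Rank note chunks by cosine similarity to the prompt embedding
    # (vectors are normalised, so the dot product is the cosine).
    q = embed(prompt)
    return sorted(chunks,
                  key=lambda c: float(np.dot(q, embed(c))),
                  reverse=True)[:k]

chunks = ["meeting notes about the budget",
          "recipe for lentil soup",
          "budget forecast for next quarter"]
print(top_chunks("what is our budget plan?", chunks, k=2))
```

With a real embedding model the similarity reflects meaning rather than character counts, but the retrieval step (embed everything, rank by cosine similarity, keep the top k) is the same idea.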
- Send more excerpts to the model:
  - Increase `Chat: Memory tokens` in the settings in `Jarvis: Chat`. You need a model with a large context window for that.
- Switch to a better embedding model than the default one.
- Send a larger fraction (perhaps even all) of the content of each note it finds (or is given):
  - Increase the 2 settings `Notes: Number of leading / trailing blocks to add` in `Jarvis: Related notes`.
- Chunk your notes into smaller blocks (a more specific representation), or larger ones to return longer consecutive blocks each time:
  - Decrease or increase `Notes: Max tokens` in `Jarvis: Related notes`. Note that this will require processing your entire note database again!
- If all else fails (although I don't think it should, if all settings are correct), use "Ask Jarvis" and copy-paste the content of your specific single note into the prompt window, or chat with Jarvis at the bottom of the notes you would like to summarise.
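To see how the chunking-related settings above interact, here is a hypothetical sketch (function names and the whitespace "tokenizer" are mine, not Jarvis's): `max_tokens` bounds the size of each block, and the leading/trailing setting pulls in neighbouring blocks around a matched one.

```python
def chunk_note(text: str, max_tokens: int = 128) -> list[str]:
    # Naive whitespace "tokens"; real tokenizers differ per model.
    words = text.split()
    return [" ".join(words[i:i + max_tokens])
            for i in range(0, len(words), max_tokens)]

def with_context(chunks: list[str], idx: int,
                 leading: int = 1, trailing: int = 1) -> str:
    # Mimics "leading / trailing blocks to add": return the matched
    # chunk together with its neighbours on each side.
    start = max(0, idx - leading)
    end = min(len(chunks), idx + trailing + 1)
    return " ".join(chunks[start:end])

blocks = chunk_note(" ".join(str(i) for i in range(10)), max_tokens=3)
print(blocks)                                   # four blocks of up to 3 "tokens"
print(with_context(blocks, 1, leading=1, trailing=1))
```

Smaller blocks give more specific matches; larger blocks (or more leading/trailing neighbours) give the model longer consecutive passages per match, at the cost of context-window budget.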