Plugin: Jarvis (AI assistant) [v0.10.2, 2025-05-24]

Thanks for all the suggestions @davadev, it's great to hear that you're continuously experimenting and striving to make Jarvis better. :slight_smile: I am still using and maintaining Jarvis, and I plan to release a small (long overdue) update soon, but I'll admit that my resources are currently limited, especially for new features. I'll try to see if I can squeeze some of these features into the next release.

Detailed preview / Prompt preview: There is actually a bug that I noticed only recently: scrolling to the exact chunk in the note depends on the Bundle plugin being installed (it works even if you hide that plugin's panel). Until it's fixed (next release), you may install that plugin and jump to the exact chunk by clicking on the context preview. The entire prompt, including the context, is printed to a log, but it appears that this log is only available in dev mode. I can add a preview dialog that shows the entire prompt: the system message, chat history, note chunks, and user prompt. In the meantime, you may also use a self-hosted LLM server in order to view what's being sent to the AI in the server log / terminal, as in the sketch below.
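If you want to see the raw request right now, here's a rough sketch of a local logging stub (my own illustration, not part of Jarvis): point the plugin's self-hosted model URL at it and it prints every prompt it receives. The port, path, and response shape are assumptions for an OpenAI-compatible setup; adjust them to whatever your configuration expects.

```typescript
// Minimal Node/TypeScript stub that logs whatever the plugin sends.
import * as http from "http";

http
  .createServer((req, res) => {
    let body = "";
    req.on("data", (chunk) => (body += chunk));
    req.on("end", () => {
      // The body holds the full prompt: system message, chat history,
      // and the note chunks selected as context.
      console.log(`${req.method} ${req.url}\n${body}\n---`);
      res.writeHead(200, { "Content-Type": "application/json" });
      // Return a dummy completion so the plugin doesn't error out.
      res.end(
        JSON.stringify({
          choices: [{ message: { role: "assistant", content: "(stub reply)" } }],
        })
      );
    });
  })
  .listen(8080, () => console.log("Logging stub on http://localhost:8080"));
```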

Semantic similarity: I can add an option to display the similarity score.

Dynamic note similarity: If I understand your comment, this is already implemented in a way. The setting Notes: Minimal note similarity sets the bottom threshold. For the note similarity panel, the setting Notes: Maximal number of notes to display is exactly the number of desired notes; I can apply this setting to the chat context as well. For chat with your notes, the setting Chat: Memory tokens determines both the total length of the chat history to be included and the length of the context that will be extracted from notes: the chat context will include X tokens from your previous conversation and X tokens from the newly extracted chunks. (This also answers your first question.) So, for example, if you set it to a low number, only the top chunks above the bottom threshold will be included. Given this, you can roughly estimate the number of chunks that will be included (on average) for a given number of memory tokens, as in the sketch below.
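To make the estimate concrete, here's a back-of-the-envelope sketch (the function name and numbers are my own illustration, not plugin internals):

```typescript
// Rough estimate of how many chunks fit into the note-context side of
// the budget: Chat: Memory tokens reserves X tokens for past conversation
// and another X tokens for newly extracted chunks.
function estimateContextChunks(memoryTokens: number, avgChunkTokens: number): number {
  return Math.floor(memoryTokens / avgChunkTokens);
}

// e.g. a 512-token memory with ~128-token chunks leaves room for ~4 chunks
console.log(estimateContextChunks(512, 128)); // 4
```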

Other chunking methods: I'll be happy to read references if you have any to share. EDIT: I almost completely forgot about it, but in the Advanced Settings section you'll find 3 settings related to your suggestion. Notes: Number of nearest blocks to add groups X similar chunks together in the context: it starts from the top similar chunk, and then selects additional chunks that are similar to it to be bundled in the context. Notes: Number of leading blocks to add and Notes: Number of trailing blocks to add will add X chunks that appear in the same note before / after the selected chunk, creating a continuous context that extends beyond the default chunk size. (See the sketch below for how these could combine.)
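As an illustration of how the three settings could interact (a simplified sketch, not the plugin's actual implementation; in particular, I approximate "similar to the top chunk" by taking the next chunks in the global ranking):

```typescript
// Illustrative context assembly: bundle the top chunk with its nearest
// blocks, then pad each selected chunk with leading / trailing blocks
// from the same note.
interface Chunk {
  noteId: string;
  index: number;       // position of the chunk within its note
  similarity: number;  // similarity score for the current query
}

function buildContext(
  ranked: Chunk[],        // all chunks, sorted by descending similarity
  nearestBlocks: number,  // Notes: Number of nearest blocks to add
  leadingBlocks: number,  // Notes: Number of leading blocks to add
  trailingBlocks: number, // Notes: Number of trailing blocks to add
): Chunk[] {
  // Start from the top chunk and bundle the next most similar ones.
  const selected = ranked.slice(0, 1 + nearestBlocks);
  const context: Chunk[] = [];
  for (const chunk of selected) {
    // Extend each chunk with its neighbours by position in the same note.
    for (let i = chunk.index - leadingBlocks; i <= chunk.index + trailingBlocks; i++) {
      const neighbour = ranked.find((c) => c.noteId === chunk.noteId && c.index === i);
      if (neighbour && !context.includes(neighbour)) context.push(neighbour);
    }
  }
  return context;
}
```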

Bigger field "Chat: System message": Unfortunately I'm unable to change that (a Joplin limitation).

Notes: Aggregation method for note similarity: This affects only the note similarity panel (it is ignored for the notes context). It means that when sorting the notes in the panel (highest similarity first), we consider either the maximally similar chunk in each note, or the average over the chunks in that note, as sketched below.
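A tiny sketch of the two choices (illustrative, not the plugin's own code):

```typescript
// Score a note either by its single most similar chunk ("max") or by
// the average similarity across all of its chunks ("avg").
function noteScore(chunkSimilarities: number[], method: "max" | "avg"): number {
  return method === "max"
    ? Math.max(...chunkSimilarities)
    : chunkSimilarities.reduce((sum, s) => sum + s, 0) / chunkSimilarities.length;
}

// A note with one excellent chunk among weaker ones ranks high under
// "max" but lower under "avg":
console.log(noteScore([0.9, 0.2, 0.1], "max")); // 0.9
console.log(noteScore([0.9, 0.2, 0.1], "avg")); // ~0.4
```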
