You got it exactly right. Once you write a prompt, the plugin searches for note excerpts that are semantically related to your question, and lets the LLM generate an answer based on them.
Now please open a new note, write “summarise all issues I have had with my aquarium filter”, and try to run the command Chat with your notes. If you’re unsure why you received a specific response, I suggest bringing the cursor back to the end of your prompt (or deleting the LLM output) and running Preview chat notes context. A dialog will pop up showing your prompt (or your entire conversation with the model), as well as the text from your notes that was sent to the LLM. You can check whether it’s relevant to your question, and probably see how it relates to the output.
BTW, you may increase the length of the text that is extracted from notes using the setting Notes: Context tokens in “Jarvis: Related Notes”.
Is it possible that you have text entered in one of the Prompts: fields in the advanced settings section of “Jarvis: Chat”? I suggest clearing them for now (or formatting them as valid JSON).
I’ll try to reproduce this with Jan. I assume that this also happens when running Chat with Jarvis (not with notes)?
Try this: filter/search for log messages containing “jarvis”.
Thanks for this new version. I had already tried Jarvis but never managed to keep it because of some issues. Now I really want to test it, and I have some questions and requests:
Is it possible to completely disable Related Notes? I have tried to use a local model (LM Studio + Gemma, which works with Chat), but it doesn’t seem to work, and I don’t want to send all my notes to online services. I would prefer to stop it completely. Or maybe I should install a specific text embedding model in LM Studio?
Did you manage to make Chat work with LM Studio and Mistral? I get errors about the Jinja template. I can provide more details if needed.
Sorry, I have updated LM Studio to the latest version and embedding now works with
text-embedding-nomic-embed-text-v2-moe
and I’m now downloading other text embedding models that have a better context length limit. Thanks.
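As a side note, LM Studio serves models through an OpenAI-compatible local API (by default at `http://localhost:1234/v1`), so you can verify that an embedding model responds correctly before pointing Jarvis at it. A minimal sketch (the port is LM Studio's default and the model name is the one mentioned above; adjust both to your setup):

```python
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/embeddings"  # default LM Studio port
MODEL = "text-embedding-nomic-embed-text-v2-moe"      # model loaded in LM Studio

def build_request(text: str, model: str = MODEL) -> bytes:
    """Build an OpenAI-style embeddings request body."""
    return json.dumps({"model": model, "input": text}).encode("utf-8")

def embed(text: str) -> list:
    """Send the text to the local server and return its embedding vector."""
    req = urllib.request.Request(
        LMSTUDIO_URL,
        data=build_request(text),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible servers return {"data": [{"embedding": [...]}, ...]}
    return body["data"][0]["embedding"]

# Example (requires a running LM Studio server with the model loaded):
# vec = embed("aquarium filter maintenance")
```

If the request succeeds here but Jarvis still fails, the problem is likely in the plugin's model settings rather than in LM Studio.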
About Mistral, I get the same error with the old Mistral 7B model, and Small 3.2 is too big for my computer, so I will try other models or online ones. Thanks for your help.
Still working with Jarvis and I really like it, thanks for this plugin.
Would it be possible, in the advanced options, to fine-tune the Auto-complete prompt? I love it, but I would like it to enable markdown and give somewhat longer answers. I have seen the prompt and would be happy to modify it a bit.
In fact, most of my usage of Jarvis is to complete an existing note: I don’t want the Chat GUI, I just want Jarvis to add new lines generated based on the beginning of the note, and that’s exactly what Auto-complete does. I would just like it to be less restrictive and to enable markdown.
Hey there, since the latest version of Jarvis (or maybe even the one before that, not quite sure), I keep getting this error when using the gemini embedding-004 model while generating the notes DB:
The notes DB has about 2500 notes, and it used to work before, so I’m not sure whether it’s a change in the API or in the plugin. I have found no way to adjust the payload size, which is apparently different from the token size, so I’m not sure what to do. The error occurs during DB generation at around 1700 notes, if that’s any help.
I was able to generate a gemini embedding-004 database with about 1000 notes smoothly, but my Notes: Max tokens is set to 128 instead of the default 512 (smaller chunks), and my chunks include only Notes: Embed last heading in chunk (the rest of the checkboxes, like title, leading headings and tags, are disabled). Perhaps you have a note with many tags and very long headings, and the aggregated chunk exceeds the payload size limit.
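To illustrate why an aggregated chunk can exceed a payload limit even with a small Notes: Max tokens: the text embedded per chunk is roughly the chunk body plus whatever metadata prefixes are enabled (title, headings, tags), while the API limit applies to the request size in bytes. A rough sketch of that effect (the part names here are illustrative, not Jarvis' actual implementation):

```python
def build_chunk(body, title="", headings=None, tags=None):
    """Assemble the text that would be embedded for one chunk:
    optional metadata prefixes followed by the chunk body."""
    parts = []
    if title:
        parts.append(title)
    parts.extend(headings or [])
    if tags:
        parts.append(" ".join(tags))
    parts.append(body)
    return "\n".join(parts)

def payload_bytes(chunk):
    """A payload limit counts request bytes, not body tokens."""
    return len(chunk.encode("utf-8"))

# Even a tiny body can produce a large request when the note
# carries long headings and many tags:
chunk = build_chunk("short body", title="T" * 200,
                    headings=["H" * 300], tags=["tag"] * 50)
```

This is why disabling the title/headings/tags checkboxes can matter more than lowering the max-tokens value.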
Hey @shikuz, thanks for the hints. I tried to tinker with those settings, but unsuccessfully. Lowering the max tokens leads to fewer notes being processed (as changing the setting always prompts a DB rebuild) and so does increasing the value for some odd reason (I tried up to 1024). Between 1200 and 1500 notes are processed before the same error message appears. I also disabled all the checkboxes, that didn’t change anything afaics. I used the offline universal sentence encoder and that went through (unsurprisingly), but my notes are mostly in German. Furthermore, I tried switching to multilingual-001, where I get to 560 notes at 128 max tokens, but the error message is truncated, and I can’t see if it’s the same. Any ideas what else I could check for?
I'm working on improving the error handling in Jarvis. I prepared this experimental pre-release that you can try. On errors, the plugin will: (a) report which note failed (so you can exclude it by tagging it with exclude.from.jarvis, and perhaps try to figure out what's special about it, or share it with me for testing); (b) give you the option to skip this note and continue processing the rest.
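The skip-and-continue behaviour amounts to catching per-note failures instead of aborting the whole rebuild. A sketch of the idea (function and variable names here are illustrative, not the plugin's actual code):

```python
def build_database(notes, embed, on_error=None):
    """Embed each note; on failure, record which note failed and
    continue with the rest instead of aborting the whole build."""
    db, failed = {}, []
    for note_id, text in notes.items():
        try:
            db[note_id] = embed(text)
        except Exception as exc:
            failed.append(note_id)
            if on_error:
                on_error(note_id, exc)  # e.g. report the failing note to the user
    return db, failed

def fake_embed(text):
    """Stand-in embedder that rejects HTML-heavy notes, mimicking
    a payload-size failure."""
    if "<html>" in text:
        raise ValueError("oversized payload")
    return [float(len(text))]
```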
BTW, I discovered that gemini-text-embedding-004 is no longer recommended (see here), while gemini-embedding-001 is the go-to model. These things change rather often (especially with Gemini, it feels). I should probably deprecate 004 in the next release, even though, as I wrote earlier, it still functioned when I tested it.
Thanks @shikuz, the experimental version was of great help. There are two extensive notes that apparently caused the problem because they were web pages added via the web clipper (which I had already forgotten about), one with actual HTML code, the other in Markdown. I can send you those as JEX files if that helps. I tagged them not to be scanned, but I had to restart Joplin before Jarvis actually stopped trying to add the notes. I also switched to the 001 model and, so far, it seems to work fine.
I am impressed with your work on Joplin and Jarvis. I would appreciate information about the role the database plays when chatting with your notes and with Jarvis. Is there any Jarvis function that allows the Jarvis model to have information about all the notes/notebooks in Joplin? Thank you.
The database enables Jarvis to attach relevant chunks of information from your notes to the current chat in order to answer your questions more accurately. Whenever you execute Chat with your notes, Jarvis processes the current conversation and searches for relevant notes in its database. This step is skipped when commands such as Chat with Jarvis or Ask Jarvis are used. By default, Jarvis' database contains all your notes, so chunks from any note may potentially be added to the conversation. However, you may exclude specific notes from the database (tag them with exclude.from.jarvis) or entire notebooks (execute the command Exclude notebook from note DB). You may also guide Jarvis to select or ignore certain notes with advanced chat features.
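Conceptually, the note DB is a table of (chunk, embedding vector) pairs, and the search step ranks chunks by similarity to the embedded conversation. A minimal sketch of that retrieval step using cosine similarity (illustrative only, not Jarvis' actual code):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def top_chunks(query_vec, db, k=3):
    """Return the k chunk texts most similar to the query embedding."""
    ranked = sorted(db, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return [c["text"] for c in ranked[:k]]

# Toy database of pre-embedded chunks (vectors are made up):
db = [
    {"text": "filter clogged again", "vec": [1.0, 0.0]},
    {"text": "holiday photos",       "vec": [0.0, 1.0]},
]
```

The selected chunk texts are what gets attached to the conversation before it is sent to the LLM.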