Plugin: Jarvis (AI assistant) [v0.8.5, 2024-06-04]

Thanks @davadev for the feedback. Interesting comments and suggestions.

I'm processing this, and trying to think along the lines of simple, human-readable syntax that may help steer Jarvis in the right direction.

One option, as you suggested, could be to follow links when searching for related notes. In fact, this might also be helpful when updating the related notes panel. This way, any links that appear in the chat note will be taken into account in the search for related notes (excluding the links that Jarvis generates to cite notes).

Another option could be to add a Joplin search to your prompt. For example, a line in the prompt such as: "search: DeLorean notebook:Automobiles". Jarvis will use the top hits from Joplin's search results and take only the most relevant excerpts from these notes (there's still a need to select excerpts because the notes may be too long to fit within the maximal allowed prompt length). The search prompt itself will not be sent to GPT.

A third option, as you suggested, could be to manually add notes. For example, a line in the prompt such as: "related: [some note](:/6ef951a41d64484483b7a2f021494d38)" (or even just a list of IDs).

A preview command is doable. It can use the panel to preview the note excerpts before sending them.

I'm not sure yet how to optimize user queries, but I'll give it more thought. BTW, Jarvis already does something similar in a different context (in the research / literature review feature).

@shikuz I tried this plugin and experimented a bit with the "chat with notes" feature. I am writing a fiction book, and I was intrigued by the idea of asking questions about my creation. This produced some very impressive and funny results. Thank you for this great plugin.

I did have some challenges though. For example, for some chapters of my book, it didn't find any related notes, so I couldn't chat with the opened chapter. I tried updating the Jarvis database, but it didn't change anything. This also made me concerned whether Jarvis was taking into account the whole context of my book when it responded to other notes that had found "related notes".

I mentioned that this was a funny experience. This was because I told Jarvis to take on the role of one of my book characters so that I could talk to it. This "chat with notes" feature helped me to see what impression my characters made. :sweat_smile:

I think it would be even better if Jarvis took into account the whole context of my book, and if I could give him a role in "chat with notes", as is possible in "ask Jarvis"...:thinking:

Hi @JamesWriterNarry (and welcome to the forum!), very cool and entertaining use case! :smiley: Thanks for sharing this.

Perhaps try to set a lower threshold for "Minimal note similarity" in the settings. v0.4.4 (coming out very soon) will include additional tools to help in this case, such as letting the user specify the notes they wish to chat with.

GPT, the model that Jarvis currently uses, is quite limited in the size of the context that it can process, so it's unlikely to be able to process an entire chapter or an entire book in one go. However, the search for (~300-word-long) relevant note chunks to be included in this context is supposed to cover your entire book, unless some of the notes were accidentally excluded by the user from the database.

In v0.4.4 you'll be able to preview what is actually sent to GPT along with your chat prompt.

While GPT is unlikely to be able to process your entire book, I will be working on ways to increase the total context that is sent to GPT.

You could define the role that you wish Jarvis to play by describing it in the chat. I haven't thought of a smooth way of integrating roles into chat without making the user interface too clunky. But if you think you'll find it useful, I believe it will be possible to add note-based context to "Ask Jarvis".


This release adds a number of experimental features that help you shape the context that Jarvis gets from your notes. Most of these are disabled by default until we gather more feedback, so look them up in the settings.

  • preview command (Tools-->Jarvis-->Preview chat notes context) (following @davadev's feature request)
    • see, in the related notes panel, the exact context that will be sent to the model
    • all of the new features below are reflected in the preview, so you can experiment with these settings and see how they affect the resulting chat context
  • prompt commands
    • Notes: this command, when it appears in the chat prompt (on a new line), specifies the note IDs for the context (following @davadev's feature request)
    • Search: this command, when it appears in the chat prompt (on a new line), uses Joplin search to retrieve relevant notes
    • you may combine both commands in your prompts (see example)
    • hard constraint: this mechanism replaces the usual search for related notes based on chat content and gives you (almost) complete control over the context that is selected. that is, only notes that come up in the search, or are listed in Notes:, will be considered
User: Hi Jarvis, please summarize information on finger spins in the following notes.
Notes: [abcd](:/9b3e075fea954195b79d4238132dbe3b), 456f02ffc4984ef7bc9c117f1589f3dd
Search: yoyo competition tag:2018
  • note similarity and chat context can take links into account (following @davadev's feature request)
    • soft constraint: the content of the chat/note is still taken into account, but any links that appear in the note will be used to refine the search for related notes. that is, it is not guaranteed that the linked notes will be selected to be included in the chat context
    • set the weight given to links in the settings
  • extension of the blocks (note chunks) sent to GPT
    • attaching the previous X blocks in the note to each selected block
    • attaching the next X blocks in the note
    • attaching the X most similar notes to the block in the DB
    • check the advanced settings to set these, and increase the "Memory tokens" setting to squeeze more blocks into the prompt
  • minimal block length
    • set to 100 chars by default (can be adjusted in the settings)
    • shorter blocks will be excluded
  • welcome message before initializing the note DB (following @SteveShank's issue)
    • with explanations and an option to postpone the process
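
To make the block-extension settings above more concrete, here is a rough Python sketch of how attaching neighboring blocks and packing them under the "Memory tokens" budget could work. All function and parameter names here are hypothetical, for illustration only; this is not the plugin's actual API.

```python
def extend_block(blocks, i, n_prev=1, n_next=1):
    """Attach the previous/next blocks of the same note to the selected
    block at index i, mirroring the "previous/following blocks" settings."""
    lo = max(0, i - n_prev)
    hi = min(len(blocks), i + n_next + 1)
    return blocks[lo:hi]

def pack_context(blocks, memory_tokens, count_tokens):
    """Greedily add blocks to the prompt until the "Memory tokens"
    budget is exhausted; later blocks are dropped."""
    picked, used = [], 0
    for block in blocks:
        cost = count_tokens(block)
        if used + cost > memory_tokens:
            break
        picked.append(block)
        used += cost
    return picked
```

The greedy packing step is why increasing "Memory tokens" lets more neighboring blocks squeeze into the prompt.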

I'm a recent Joplin user (I started using it just so I could try Jarvis!). Have you considered an "always on" mode in which Jarvis auto-tags notes (maybe for a given notebook) and possibly periodically updates some form of internal knowledge base with summaries, etc.? The thought is that with the combination of Joplin as storage, tags as metadata, etc., you could craft something more "live" and organic, reducing the cataloging efforts of the user. Essentially I'm thinking of something akin to what each agent has in the Generative Agents: Interactive Simulacra of Human Behavior paper: a memory with derived aggregate impressions of sorts?


Welcome to the community @oaustegard ! Hope you find Joplin useful.

I've been thinking about auto-tags and auto-titles (or 1-line summaries) for some time, yet it's still in the backlog (originally planned for v0.3.0, but it got pushed back as other features demanded all of my attention). One challenge that I had in mind was to try to mimic the style / system of the user, such as the vocabulary of tags that they prefer to use, and the way they use these tags. I'd still like to try to implement it at some point, but there's a chance that this is best done by training a model on the user's notes.

An "organic internal knowledge base" is also something that the recent note DB / embeddings / related notes tries to achieve, albeit through a different approach. Finally, although it's not an automatic background process, you can already start experimenting with quickly-generated auto-summaries and auto-tags using prompt templates (see the comment here on developing your own workflow). I'd be happy to hear about such experiments if you do.

I'll have a look at the paper you cited, sounds interesting.

@shikuz I tried out the new features. I was sometimes able to get Jarvis to impersonate my fictional characters even better by using "chat with notes" with a prompt instructing Jarvis to impersonate my character, along with an embedded search for relevant keywords, but it only seemed to work in some cases (probably depending on the quality of the prompt and search keywords). In other cases it looked like I was getting a random answer that was not very relevant to my question.

I wondered if I could get a more consistent conversation with the impersonated character. I thought that if I wrote some fictional conversations with the character, I could use those notes to feed Jarvis's personality... My idea was to write something like this:


I would then use a new note to instruct Jarvis to pretend to be ELIS and then ask a question for ELIS and also include "Search: ELIS". That way he would only find messages from ELIS and include them if they were relevant to my question. This doesn't seem to work, probably because I don't have enough relevant text. I wanted to ask you if you think this could work if I had written enough fictional conversations...

Or maybe you have a better way of achieving this... Basically, my idea was to be able to adjust Jarvis' personality and "memories", simply by controlling which fake past conversations are fed to Jarvis as a basis for his answers...


@JamesWriterNarry That is an interesting use case.

@shikuz Maybe it would work if Jarvis had some sort of profile settings. Just like Bing has 3 different modes, Jarvis could have different profiles. Each profile could have different notebooks as sources and also different model temperatures, note similarity setting... I find that depending on which notebook I am browsing, I need to use a slightly different Jarvis setting to get the optimal result.

One question - does the syntax

"Search: keyword Notebook: MyNotes"

work? In the chat preview I seem to get different "relevant notes" that, I think, have nothing to do with my search query (at best it is a word match). And if it does work, could you add some settings that would allow me to exclude semantically similar notes and only process the notes and notebooks to which I have narrowed the search?

Thank you both for the feedback.

@JamesWriterNarry I did the following experiment: Downloaded a couple of scripts (removed the title of the film and the credits, but left the character names untouched), each into a single note, and tried to talk to the characters. I only requested that the note containing the script would be the single source for all context sent to the model (see one example below). I think that Jarvis was able to get into character to some extent, refer to events and to interactions with other characters (not all of them, of course, but some).

User: I want you to act as the character Vincent, think like he does, feel the way he does, and talk like he does. The script below contains examples of conversations that Vincent had. What's your favorite food?
Notes: 4a935661db4e443ba2cb85d801a5eac3

Jarvis: As Vincent, my favorite food is definitely a good steak. But speaking of food, do you know what a five-dollar shake tastes like? Mia had one at Jackrabbit Slim's and I gotta say, it was pretty damn good. [Note 1]

The only problem is that it's not a very good experiment, as I could push Jarvis to guess the name of the film in all the cases I checked (even ones much less famous than Pulp Fiction). Therefore, it's possible that the character was also based on the model's training, which isn't applicable in your case (I do imagine that the model can generally detect archetypes easily and impersonate them, based on a few clues / descriptions / examples). However, it's worth noting that without the context of the script - just giving Jarvis the name of the movie and the character to play - I did not get good results.

To conclude, I think that your approach has merit, but I recommend using the Notes: command instead of search if you're creating a special note(s) for the sake of the conversation, or already know in advance which of your notes are highly relevant. It remains to be seen how many examples (or how long of a conversation) the model needs in order to calibrate itself to the character.

@davadev I agree that if there's a need to change the settings frequently, then it would make sense to expose the relevant settings in a more convenient way, maybe in the panel, and perhaps even attached to profiles as you suggested. I'm adding it to the list of feature requests, but there are a few major ones that will come before that.

This looks like the correct syntax to search for notes containing keyword in the notebook MyNotes (unless there's a bug). I'll explain again how this command works, as I'm not sure it was clear in the release notes. Anything that comes after Search: is sent to Joplin to search for notes. It is essentially the same as searching via Joplin's search bar (you can in fact test it this way). If the search query is very specific, it might return a handful of notes, but if it's a single common word it's likely to return a hundred. It is from these search results that Jarvis then selects the most relevant excerpts to send to the model along with your prompt. So it's possible that the 2nd paragraph in the 100th note was selected by Jarvis, because it matched the search query and was semantically similar to the current conversation. Semantic similarity is still used to rank the search results because in most cases there isn't room for all results to fit in (due to context limits).
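
In other words, the Search: command acts as a hard filter, and semantic similarity only orders and trims what survives that filter. A small Python sketch of this two-stage selection (all names here are hypothetical, for illustration only, not the plugin's actual code):

```python
def chat_context(search_hits, all_blocks, similarity, budget, count_tokens):
    """Stage 1: keep only blocks from notes returned by Joplin search.
    Stage 2: rank the survivors by semantic similarity to the chat,
    then pack as many as fit into the prompt's token budget."""
    hit_ids = {note["id"] for note in search_hits}
    candidates = [b for b in all_blocks if b["note_id"] in hit_ids]
    candidates.sort(key=similarity, reverse=True)
    picked, used = [], 0
    for block in candidates:
        cost = count_tokens(block)
        if used + cost <= budget:
            picked.append(block)
            used += cost
    return picked
```

This is why a very broad search query can still produce a focused context: the ranking step discards matches that are not semantically close to the conversation.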

@shikuz Thank you. I have continued my experiments and come to the following conclusions, which I would like you to confirm.

  1. Scripts where the conversation switches between user and Jarvis without any formatting (like in the example I gave you with ELIS) are confusing to Jarvis. Better note formatting is needed.
  2. I need to better understand how semantic search works to make sure the right note (and even the right block of the note) is selected. My next idea (I haven't tested it yet) is to have short notes with subheadings that should be picked up by the semantic search. I imagine that if the subheading is exactly the same as (or very similar to) the question I might ask my character, then it will be selected for answering. The text under the subheading is then the answer given by the character, starting with the character's name and phrased in the first person.
  3. If I have understood "Notes: Number of following blocks to add" correctly, then these are the blocks immediately after the block that was found by the semantic search. Therefore, with the format I proposed in point 2, I reason that I must increase its value to make sure that the answer to the question is actually used.
  4. I think the number of blocks here will depend on my "Notes: Minimum block length (chars) to be included". If my blocks are larger, I will need fewer of them.
  5. I am not sure what "minimum block length" is optimal: I am aware that setting it too high might mean that Jarvis overlooks shorter questions (subheadings) that match the semantic search, but on the other hand it might increase the relevance of those answers whose source notes contain information that cannot be expressed in a short sentence.
  6. If I set the model temperature to a lower number, it will use more phrases from my notes. This means that to simulate the personality of the character more closely, all my notes must be written in the style of this character, in the first person singular, and the model temperature must be as low as possible.
  7. What I am wondering, though, is whether the prompt I use within "chat with your notes" to assign a role to Jarvis affects the semantic search to the extent that irrelevant notes (and blocks) are found. (It not only looks for similarity to the question I ask the character, but also to the prompt that tells Jarvis how to behave, and that prompt is usually much longer than the question to the character itself...)
  8. The next question is how I can get the character to answer in character style even if I don't ask anything and just write something like "Hello ELIS!" or a simple statement like "that's a cool idea". I think the formatting suggested in point 2 might work for this, but I haven't tested it yet...
  9. Last but not least, how can I get the character to have some kind of short memory? With the maximum number of tokens I don't imagine anything impressive, but I would like to extend the context the character considers beyond one or two questions...

@JamesWriterNarry Thanks for testing! Are you trying to create a personalised chatbot without actually training the model? :thinking: I tried your insights, as I had the idea to create a support chatbot to replace myself at work :joy:.

Your suggestion in point 2 seems to work if the question is exactly the same as the subheading... if I change the question slightly, the related note doesn't show up. However, I tried using the "Search:" command to look up the keywords from the question, and that way I get the right note even if the question is phrased slightly differently.

My idea: what if Jarvis could analyse user questions to extract the keywords and do this search for you?

I love seeing the discussion here.

@JamesWriterNarry these are very good points, especially the one regarding model temperature, which I hadn't thought of in this context.

Both of you (thanks @davadev ) raised interesting ideas, and I'm thinking of adding the following feature list.

  • I didn't experience point 1, but I think I can try to improve chat parsing
  • add a prompt command / syntax for text that will be excluded from semantic search, such as for assigning a role
  • (vice versa) add a prompt command to exclusively define the context for semantic search
  • option to automatically search for notes, like Jarvis does for academic papers, although I have my doubts whether it will be very efficient (still a good experiment)
  • extending the context is a fundamental thing that I'm working on, and will affect character simulation as well

Additional comments:

(1) As I said earlier, I didn't run into such issues, but please continue to experiment and share.

(2) How semantic search works: notes are cut into chunks / blocks based on headings (a block is usually the heading + the text that follows) and on block length. The entire text of the block, along with the note title, the headings hierarchy leading up to the block, and the note tags, are all processed to assign an embedding / semantic features to the block. In a similar way, the conversation with Jarvis is processed and assigned an embedding (optionally while taking links into account). Semantic search then selects the note blocks whose embeddings are most similar to the conversation's. When the Notes or Search commands are used, the same mechanism is at work, but it searches for blocks only within the given notes.
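
The mechanism described above (chunking by headings, then ranking blocks by embedding similarity) can be sketched roughly like this. This is a simplified illustration with hypothetical names and a toy `embed` function, not Jarvis' actual code:

```python
import math
import re

def chunk_note(title, body, tags):
    """Split a markdown note into blocks at headings, keeping the
    heading hierarchy that leads up to each block."""
    blocks, path, text_lines = [], [], []

    def flush():
        if text_lines:
            blocks.append({"path": [title] + path[:],
                           "text": "\n".join(text_lines),
                           "tags": tags})
            text_lines.clear()

    for line in body.splitlines():
        m = re.match(r"^(#+)\s+(.*)", line)
        if m:
            flush()  # close the block that precedes this heading
            level = len(m.group(1))
            path[:] = path[:level - 1] + [m.group(2)]
        elif line.strip():
            text_lines.append(line)
    flush()
    return blocks

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_blocks(blocks, embed, query_embedding, k=3):
    """Select the k blocks whose embeddings are closest to the query."""
    scored = [(cosine(embed(b), query_embedding), b) for b in blocks]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [b for _, b in scored[:k]]
```

The key point the sketch illustrates: matching is done on whole-block embeddings, not on keywords, which is why a slightly rephrased question can still (or fail to) land on the right block.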

It's important to note that this way of searching tries to extract the essence of the block, it is not based on specific keywords or on exact or approximate text matching. Also, it will not try to match the heading or subheading, but rather something more holistic (the entire content that goes into the block, as described above). With perfect models (accurate, creative, with lots of memory), it's supposed to be magic. But we're not there yet, from my / our experience. Since a lot of it depends on the power of the model that processes the text (not GPT, but rather the one generating the embedding / semantic features), I want to work on upgrading it, and giving access to more models. I think that this could have the most impact on the results. In parallel, the ad hoc syntax, settings and heuristics that we're developing here in this discussion will probably continue to be relevant (some more than others).

(3) I think you got the "number of following blocks" right. This setting assumes that perhaps an individual block is not enough to get sufficient context, so it adds its neighbors in the note. On the other hand, this can also add irrelevant information, depending on the note.

(4)-(5) I'm not sure either, but in my notes I had very short blocks between headings that did not add much to the conversation, and this was an effective way to get rid of many of them.

(6) Great catch!

(7) It's certainly possible, see the feature list that tries to address this.

(8) Again, good question and something to experiment with.

@JamesWriterNarry I had a lot of fun yesterday watching my colleague's face when I showed him how Jarvis can do my job :joy: It seems I don't even need a lot of notes to create a simple demo. The only problem, as I mentioned, is that sometimes the note is not found and I have to do "Search: keyword" to find the note...

@shikuz since you mentioned that you're going to try to get Jarvis to look up the information, maybe you could get Jarvis to do the search more than once in case he doesn't find anything on the first try...

One more thing... Any chance of adding German to the supported languages for offline semantic search? If I am not mistaken, English is currently supported, right? I like that this part is offline for the same reasons you mentioned in your posts.

@shikuz, thank you for elaborating on my points, especially point two. I am thinking about how to make better use of subheadings, titles, and tags to give Jarvis the best context.

  • Is the notebook structure part of this hierarchy? If not, I will simply add the notebook name to the notes contained in the notebook.
  • How much influence or information do tags have on Jarvis?

I ask because I am thinking about creating a notebook structure along with best practice formatting for the notes to help Jarvis better simulate my characters. My idea:

  • Root notebook with the character's name

  • Subnotebooks dealing with different categories of possible queries to the character. I haven't settled on specific categories yet, but some of the ideas I had were to create the following subnotebooks

    • Memories (subnotebook dealing only with the character's memories of events)
    • Personality (subnotebook containing only notes that affect the character's personality. These notes should contain answers to such casual questions as "How are you?", "Hello")
    • Opinions (subnotebook containing answers to opinion-type questions such as "What do you think of...?")
    • Preferences (subnotebook containing answers to personal preferences of the character, such as "What is your favorite animal?", "How do you feel about person X?",...)
  • Note titles should contain some contextual information.

    • For Memories I thought about including the date of the memory, although I am not sure if GPT-3.5 Turbo is good enough to understand the flow of time and make inferences based on the memory sequence...
    • Personality subnotebook title could contain some keywords summarizing the content (example: greetings)
    • Opinions subnotebook title could contain references to the categories the opinions fall into (example: Technology - Artificial Intelligence).
    • Preferences subnotebook title (Hobbies - Music - Violin)
  • Subheadings then contain more detailed questions to ask the character. (H1 - most general question, H5 - most detailed question)

  • For tags, my use case will depend on how important they are.

    • What I have in mind so far is to tag all notes of the character with the character's name. (I am thinking about simulating more characters and I want to make sure they do not get mixed up).
    • Next idea would be to use tags for character emotions (example: #happy, #optimistic, #motivated).
    • Another idea would be to glue some notes from different notebooks together by simply using a predefined string tag (probably just some random number). I imagine this would help the character glue some ideas together.

I am really excited about this project. I was thinking of training my own model, but like this I can focus more on the writing than the technical part.

@davadev, I like your idea. One question: how do you make your support chatbot reason step by step through a technical problem without getting lost and forgetting what steps have already been done?

Thanks, I tend to agree, this is the approach Jarvis takes when searching for scientific papers. Until I get to testing / implementing it (the list is long), if you feel like experimenting you could already try it now. Perhaps write your regular chat prompt and end with "Please suggest keywords to search for relevant notes" (not sure that chatting with notes is necessary here, or that this is the optimal way of asking), then use Joplin search to see what comes up with Jarvis' suggested search query, or just add it as a Search: prompt-command and see if it improved the conversation with Jarvis. This manual labor may give you / us some clues as to how effective an automatic workflow could be.

I would very much like to add a multilingual offline model. First attempts failed, but my guess is that it will become easier to add one in the near future.

I should probably add an important distinction. Everything that I wrote about blocks is taken into account when generating block features / embedding. However, when the content of the block is fetched in order to send it to GPT, some bits of information are easier to retrieve than others (mostly due to design decisions), and therefore the text that is sent includes: (1) note title; (2) section heading; (3) the text. The rest of the metadata is useful for semantic similarity, but GPT won't be aware of it, at this point.

Regarding the hierarchy I mentioned earlier: for the purpose of generating features, only the hierarchy within the note itself counts (the folder / notebook structure is ignored). A block under a level-3 subheading will look like this just before analysis:

title / h1 / h2 / h3
tags: t1, t2

The implications are that you have 2 types of fields (hidden, public) where you can store information and annotate it, and whatever you need GPT to know about during the discussion, you should include in "public" space. I believe you could convert any idea you had for a tag or a notebook to another bit added to titles and headings, and vice versa.
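
As a sketch, the two representations of a block could be assembled like this (hypothetical field names, for illustration only; the actual plugin may build these strings differently):

```python
def embedding_input(block):
    """The "hidden" space: everything the similarity model sees --
    note title, heading hierarchy, tags, and the block text."""
    header = " / ".join([block["title"]] + block["headings"])  # title / h1 / h2 / h3
    tags = "tags: " + ", ".join(block["tags"])
    return "\n".join([header, tags, block["text"]])

def gpt_context(block):
    """The "public" space actually sent to GPT: note title, section
    heading, and the text; tags and the full hierarchy are omitted."""
    heading = block["headings"][-1] if block["headings"] else ""
    return "\n".join([block["title"], heading, block["text"]])
```

So anything placed in titles and headings reaches both the similarity model and GPT, while tags only steer the similarity search.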

All of these ideas indeed sound very exciting! Looking forward to hearing about your results.

Hello all! I am super excited to have found this plugin. I am just starting my master's degree and have a lot of reading ahead of me. I thought I would kickstart it by using the "Research with Jarvis" feature. However, when I run it (and other Jarvis commands) it throws an error. I can't seem to find a very good "get started" spot or docs; any recommendations?
Thanks again! I am super excited for the potential this has!

Welcome @JustJacob !

This is a bit strange, as I don't remember ever seeing an empty error message displayed. Have you already defined anything (like API keys) in the settings? Do you see anything in the Related Notes panel? What OS and Joplin version are you using? (could you copy Joplin's about window?)

I should probably add more documentation, but the README on github is the best place to start. There's a short installation section, and the usage section also contains relevant information.

Thanks for the quick reply! I figured it out, it was an API key issue (I think I may have miscopied it or something), and my trial credits have expired.
Just in case you still want it here is my about data
Thanks so much!

Joplin 2.10.19 (prod, win32)

Client ID: ***I took this out as it seemed like an identifier?***
Sync Version: 3
Profile Version: 42
Keychain Supported: Yes

Revision: 1100ab0

History Panel: 1.0.1
Inline tags: 1.2.1
Inline TODO: 1.6.0
Jarvis: 0.4.4
Markdown table calculations: 1.0.5
Markdown Table: Colorize: 1.1.4
Math Mode: 0.5.3
Menu items, Shortcuts, Toolbar icons: 1.1.0
Note Link System: 0.8.0
Note list and sidebar toggle buttons: 1.0.3
Note Tabs: 1.4.0
Outline: 1.5.10
PlantUML2: 1.11.0
Rich Markdown: 0.13.2
Search & Replace: 2.2.0
Simple Backup: 1.3.1
Table Formatter Plugin: 1.2.1
Templates: 2.4.0