Plugin: Jarvis (AI assistant) [v0.8.5, 2024-06-04]

"AI is the new electricity" (Andrew Ng)

A few days ago I saw this announcement, and realized it's time to connect Joplin to the grid. So say hello to...


Jarvis (Joplin Assistant Running a Very Intelligent System) is an AI note-taking assistant based on OpenAI's GPT-3. You can ask it to generate text, or edit existing text based on free-text instructions. You will need an OpenAI account for Jarvis to work (at the moment, new users get $18 of credit upon registering, which is equivalent to 900,000 tokens, or more than 600,000 generated words).



  • This plugin sends your queries to OpenAI (and only to it).
  • This plugin uses your OpenAI API key in order to do so (and uses it for this sole purpose).
  • You may incur charges (if you are a paying user) from OpenAI by using this plugin.
  • Therefore, always check your usage statistics on OpenAI periodically.
  • It is also recommended to rotate your API key occasionally.
  • The developer is not affiliated with OpenAI in any way.


  1. Install Jarvis from Joplin's plugin marketplace, or download it from github.
  2. Setup your OpenAI account.
  3. Enter your API key in the plugin settings page.


  • Text generation: Run the command "Ask Jarvis" (from the Tools menu) and write your query in the pop-up window, or select a prompt text in the editor before running the command.
  • Text editing: Select the text you want to edit, run the command "Let Jarvis edit selection" (from the Tools menu) and write your instructions in the pop-up window.


I was impressed by Notion's blog post covering AI features, nice to see Joplin can already do similar things!



  • improved: increased the default creativity level of the model (temperature=9).

This is a nice piece on how to use AI to generate ideas which I find relevant. Although Jarvis does not yet have interactive features the likes of ChatGPT, many of the techniques in the article (and others) can be of use while writing notes and brainstorming in Joplin.


Tested today, it works great with OpenAI and text-davinci-003! Minor UX feedback: the prompt window from Jarvis looks very narrow on the desktop. Amazing, thank you!

Glad to hear that @Plainview!

Thanks for the feedback, I completely agree with you regarding the width of the prompt window. I struggled with this for a bit before the first release with no luck, but in the latest update v0.1.3 this should be fixed.


it's been a couple of months, and since the first release of this plugin AI chatbots have taken over our lives (mine too). however, I still find the good old GPT-3 useful, as it responds with similar quality, and has higher availability at a lower price (I was barely able to burn $3 after 3 months of use). Jarvis didn't stay idle all this time, and has evolved steadily up to this release.


chat with Jarvis

  • new command: Chat with Jarvis (Cmd+Shift+C). this is the homemade equivalent of ChatGPT within a Joplin note. each time you run the command, Jarvis will append its response to the note at the current cursor position, given the previous content that both of you created. if you didn't add anything since its last response, it will try to extend that response, so this essentially serves as a general-purpose autocomplete command. repeat the command a few times to replace the response with a new one until Jarvis gets it just right.
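The chat-in-a-note loop can be sketched roughly as follows; the `User:` / `Jarvis:` line prefixes and the helper names are illustrative assumptions for this sketch, not necessarily the plugin's actual format:

```python
def parse_chat(note_text: str):
    """Split a note into (role, text) turns based on line prefixes."""
    turns = []
    for line in note_text.splitlines():
        if line.startswith("User:"):
            turns.append(("user", line[5:].strip()))
        elif line.startswith("Jarvis:"):
            turns.append(("assistant", line[7:].strip()))
        elif turns and line.strip():
            # a plain line continues the previous turn
            role, text = turns[-1]
            turns[-1] = (role, (text + " " + line.strip()).strip())
    return turns

def append_reply(note_text: str, reply: str) -> str:
    """Append the model's reply so the next run sees it as context."""
    return note_text.rstrip() + "\n\nJarvis: " + reply + "\n"
```

Because the appended reply becomes part of the note, re-running the command naturally continues (or, if nothing was added, extends) the conversation.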


prompt templates

  • new: predefined and customizable prompt templates (check the settings). quickly select a combination of an instruction, a format for the response, a character that Jarvis will play, and perhaps add a nudge towards thinking more analytically than usual. this utilizes some of the techniques from this recommended tutorial, and draws inspiration from @sw-yx's very cool reverse prompting project. the new prompts are a work in progress. help me improve them, and add new useful templates to the database.
  • new: select whether to show or hide the input prompt in the response.
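Conceptually, the template dropdowns just concatenate string fragments into one prompt. A minimal sketch, with hypothetical function and parameter names (not the plugin's actual settings keys):

```python
def build_prompt(query, instruction="", response_format="", character="", analytical=False):
    """Compose a prompt from optional template fragments plus the user query."""
    parts = []
    if character:
        parts.append(f"You are {character}.")
    if instruction:
        parts.append(instruction)
    if response_format:
        parts.append(f"Respond in the following format: {response_format}")
    if analytical:
        parts.append("Let's think step by step.")  # a common analytical nudge
    parts.append(query)
    return "\n".join(parts)
```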



  • improved: failed queries display error messages with info from OpenAI.
  • new: auto-retry sending the query to OpenAI (adjusted if needed) when pressing the OK button in the error dialog.

look and feel

  • improved: dark theme support.
  • new: dedicated Jarvis sub-menu under Tools.

WOW!! Thanks. I've just had a little time to play with it, not nearly enough.


Jarvis is now connected to the web (if you choose to).

  • added: new model set as default
    • the gpt-3.5-turbo model is the one behind ChatGPT. its responses are a lot cheaper and much faster (at least they used to be in the first few days). check the settings to confirm that you have it selected. this is great timing, because the next new feature requires Jarvis to make many queries. personally, the switch was not as smooth as I anticipated, and it took me a couple of days to adjust my prompts in order to get good responses from the model. in the end, though, I'm mostly happy with the new results.


  • added: new command "Research with Jarvis"
    • "Research with Jarvis" generates automatic academic literature reviews. just write what you're interested in as free text, and optionally adjust the search parameters (high max_tokens is recommended). wait 2-3 minutes for all the output to appear in the note (depending on internet traffic). Jarvis will update the content as it finds new information on the web (using Semantic Scholar, Crossref, Elsevier, Springer & Wikipedia databases). in the end you will get a report with the following sections: title, prompt, research questions, queries, references, review and follow-up questions. this is not Bing AI or the cool Elicit project, but even a small DIY tool can do quite a lot with the help of a large language model. You can read more about how it's done in this post.
    • sources of information: Jarvis currently supports 2 search engines (and Wikipedia), and uses various paper/abstract repositories. as a general rule, you're likely to get better results when operating from a university campus or IP address, because institutions usually have access to more papers. the 2 search engines have complementary features, and I recommend trying both.
      • Semantic Scholar: (default) search is usually faster, more flexible (likely to find something related to any prompt), and it requires no API key. however, it has a tendency to prefer obscure, uncited papers.
      • Scopus: search is slower and stricter, but tends to find higher-impact papers. it requires registering for a free API key.
    • Jarvis is a Joplin assistant first and foremost, but this feature was also ported to a VSCode extension.
  • contributions: thanks to @ryanfreckleton for fixing a prompt typo.
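For the Semantic Scholar path, the public Graph API can be searched without an API key. A sketch of building such a request; the chosen `fields` are illustrative, and the plugin's actual query parameters may differ (the fetch itself is left out so the sketch stays offline):

```python
from urllib.parse import urlencode

# Semantic Scholar Graph API paper-search endpoint (no key needed for basic use).
BASE = "https://api.semanticscholar.org/graph/v1/paper/search"

def search_url(query: str, limit: int = 10) -> str:
    """Build a paper-search URL for a free-text query."""
    params = {
        "query": query,
        "limit": limit,
        "fields": "title,abstract,year,citationCount",
    }
    return BASE + "?" + urlencode(params)
```

The resulting URL can then be fetched with any HTTP client, and the returned JSON contains a `data` list of matching papers.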


  • new: added gpt-4 / gpt-4-32k as optional models (#5)
    • note that currently this will work only if you have early access to these APIs, until the models become publicly available. also note the considerably higher pricing for these models (compared to gpt-3.5-turbo).
  • new: handling different max_tokens limits per model
  • new: chat indication that a response is being generated (#4)
  • fix: better token length estimation (#3)
  • changed: AGPLv3 license

Is it possible to use Jarvis to ask questions and get answers based on the content of my notes? I guess this might be quite token-expensive, but if it were possible to narrow the analyzed notes to only those that meet certain search criteria, perhaps it would not be that bad...



Today's new features were requested a number of times, and some may have noticed they have been in the works for a while.

  • Related notes

    • Find notes based on semantic similarity to the currently open note, or to selected text. This is done locally, without sending the content of your notes to a remote server. Notes are displayed in a dedicated panel. To run semantic search based on selected text, click on the Find related notes (toolbar button or context menu option).
    • Coincidentally, this turned out to be semantically similar to a longstanding plugin, but adds some technical improvements.
    • I will add support for multilingual online models in the next release, but I wanted this to be an offline feature first, as some users may feel uncomfortable sending their entire Joplin database to a third party. I believe that in most cases this will also run faster.
    • The current offline model (Google's Universal Sentence Encoder) performs well, but its main drawback is that it only supports English (sorry, I'm not a native either).
  • Chat with your notes

    • To add additional context to your conversation based on your notes, select the command Chat with your notes (from the Tools/Jarvis menu). Relevant short excerpts from your notes will be sent to OpenAI in addition to the usual conversation prompt / context. To exclude certain notes from this feature, add the tag #exclude.from.jarvis to the notes you wish to exclude. The regular chat is still available, and will not send out any of your notes. You may switch between regular chat and note-based chat on the same note.
  • Improved token length estimation.

  • Typo fix by @Wladefant.
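Under the hood, "related notes" style semantic search boils down to comparing embedding vectors, typically with cosine similarity. A toy sketch (3-d vectors for brevity; the Universal Sentence Encoder actually produces 512-d embeddings):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_notes(query_vec, note_vecs):
    """Return note ids sorted by similarity to the query, best first."""
    return sorted(note_vecs, key=lambda nid: cosine(query_vec, note_vecs[nid]), reverse=True)
```

Since only the vectors need to be compared, the whole search can run locally once the embeddings have been computed.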

These are crazy times, and it's practically impossible to keep up with all that is going on. Many of the most exciting models and tools are only days-to-weeks old. JavaScript is gradually picking up in terms of new packages (although it's still far behind Python). Hard to predict what the world will be like by the next release.

  • Help needed
    • If you know how to add the transformers package to a Joplin plugin's webpack your help will be greatly appreciated.
    • If you know a spell to convert recent tensorflow-hub NLP models to tfjs please teach it to me.

This is great news! I have an idea for a feature, if it has not already been added. I think it can be quite tedious to exclude notes from Jarvis using hashtags in an existing database containing hundreds to thousands of notes... Would it be possible to do this on a notebook basis? I have two branches (sub-notebooks within notebooks) that I would like to be able to use Jarvis with, but the rest I would like to leave out. Also, what model do you recommend that is cheapest? I was thinking of trying Babbage or Curie, but I am not sure if their capabilities are sufficient. My main use case is to quickly find information in my old notes without having to read through them, and maybe do some grammar correction. But I don't need to create new text.


I tried different models, and the answers somehow seemed to generate content from my notes, but the relevance to my question was low. I got the greatest relevance with gpt-3.5-turbo (I didn't try GPT-4), but I still can't shake the feeling that the input selection is the reason some of my tests lacked relevance to my questions... When I tried to tweak the number of max tokens as recommended on GitHub, the setting seems to be saved, but when I try to chat about my notes, the setting changes back to the default of 2000. I have tried reverting the setting, but I keep getting this error:

(In plugin: Jarvis)
Error: -147 is less than the minimum of 0 - 'max tokens'
Press OK to retry.

Thanks for the feedback @davadev! This is definitely a work in progress and any feedback helps!

Added notebook exclusion / inclusion to v0.4.2. Once you select a notebook and run "Exclude notebook from note DB", the notebook and all its sub-notebooks will be excluded starting from the next update of the DB.

Regarding the model, the default gpt-3.5-turbo should give a good trade-off.

For some use cases, I believe that just using simple semantic search (related notes) could be pretty effective, because it will try to refer you to a specific section in your notes where relevant information exists. For example: (1) Write your query; (2) Select the query text; (3) Hit "Find related notes". This is also a way of understanding why the chat missed your point (what it essentially does is search for related notes based on your query and chat history). So you can check in advance which notes are going to come up with your query.


I also experienced that some queries worked better than others, and I believe that this is also affected by how my notes are structured and written depending on their subject.

There are a few directions to improve this:

  1. Improving the embeddings, that is the mathematical representation of the notes. The current model has its pros (offline, fast), but there are better (and usually much heavier) ones, and I plan to add support for additional models.
  2. Improving the processing of retrieved notes in order to generate a response.
  3. Improving the chunking of the notes so that their context is optimized.
  4. Very careful prompting.

(Working on it...)

Additional information on tokens: There are two settings for max tokens: (1) "Model: Max Tokens", which is recommended to be maximal (~4000); (2) "Chat: Memory Tokens", which can be large, but should not exceed a value that depends on the use case. I recommend setting it to 1000-1500 at most if you are getting these messages.

(Long explanation follows...)

Because in the chat GPT gets both the history of your conversation and excerpts from your notes, the following defines the limits on the tokens:

[Length of chat up to this point, or Memory Tokens (whichever is smaller)] + [Length of notes extracted from notebooks, or Memory Tokens] + [GPT's response] < [Max Tokens]

So, if the chat is already pretty long (>2000 tokens), and Memory Tokens was set to be very large (e.g., >2000 tokens), you are left with no space for GPT to work with. I'll try to think of ways to either inform the user about such situations or circumvent them. In any case, either keep your chats shorter (open new notes for new topics), or decrease the memory tokens to 1000-1500. Such things will become less of an issue as models with support for as much as 100K tokens already exist, and will probably become widespread sooner or later.
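The budget rule above can be written as simple arithmetic. In this sketch the numbers are illustrative (not taken from a real session), and a negative result corresponds to errors like the "-147 is less than the minimum of 0" message above:

```python
def response_budget(max_tokens, memory_tokens, chat_tokens):
    """Tokens left for the model's response under the constraint above."""
    history = min(chat_tokens, memory_tokens)  # chat history actually sent
    notes = memory_tokens                      # budget for note excerpts
    return max_tokens - history - notes
```

For example, an older 2048-token model with Memory Tokens at 1500 and an 800-token chat leaves a negative budget, while a 4096-token model with the same settings leaves plenty of room.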


Regarding the default of 2000: the following models support up to 2048 tokens at most: text-davinci-002, text-curie-001, text-babbage-001, text-ada-001. When a user selects more than the model's maximum, Jarvis adjusts the setting accordingly. gpt-3.5-turbo, on the other hand, supports up to 4096 tokens. Is there a chance that you had one of the former models selected instead of gpt-3.5-turbo?
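A sketch of this clamping behavior, using the per-model limits given above (the dictionary and function names are illustrative, not the plugin's actual code):

```python
# Per-model completion limits, as listed in the post above.
MODEL_MAX = {
    "gpt-3.5-turbo": 4096,
    "text-davinci-002": 2048,
    "text-curie-001": 2048,
    "text-babbage-001": 2048,
    "text-ada-001": 2048,
}

def clamp_max_tokens(model: str, requested: int, default: int = 2048) -> int:
    """Cap the user's max-tokens setting at the selected model's limit."""
    return min(requested, MODEL_MAX.get(model, default))
```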

I'm adding a warning message on this to the next release.


Wow, really cool features. Thanks for the tip on highlighting text and finding similar notes! I will definitely use it a lot. You are right, I was using models that did not support more than 2000 tokens. This has solved my problem...

I saw that you can add templates to "chat with Jarvis"; is this also possible for editing notes? I have a workflow where I basically scan my handwritten notes with OCR, send them to a specific email inbox, and then import them with the email plugin into Joplin. My challenge is that the imported email contains a lot of standard text that I don't need, and the OCR is not perfect and makes mistakes every now and then, sometimes inserting line breaks in the wrong places. Before I started using your plugin, I used the ChatGPT interface to fix my broken OCR scans, and then I deleted the unnecessary text caused by my workflow. It would be cool if I could create a template that would do this for me in one click.

In case you decide to follow this suggestion: I noticed that ChatGPT (GPT-3.5) got confused when I tried to tell it in the prompt what text to delete from the note along with the instruction to fix the OCR transcript. I think the part where the prefix and suffix are deleted would be better handled by standard logic (selecting the strings that mark the start and end of the note), with just a simple template for fixing the OCR transcript by correcting errors and removing line breaks.


Interesting use case. Always intriguing to hear how people use GPT / LLMs.

It sounds like it's worth a shot. I can suggest the following workflow:

  1. Open your imported OCR / email note.
  2. Select the entire note content (Ctrl+A).
  3. Open the "Ask Jarvis" dialog (Ctrl+Shift+J). The note content will already appear in the dialog.
  4. Select the template "Fix OCR" (first on the leftmost list).
  5. Send the prompt.
  6. The note content will be replaced by the edited, clean version.

If this sounds efficient enough, then you can follow these steps to get there: (If you already know exactly what you want to put in your template, skip to the last part.)

  • Make a copy of one of your imported notes to train on. That is, try to rephrase your instructions until you're satisfied with the results.
  • Select the entire note content (Ctrl+A).
  • Open the "Ask Jarvis" dialog (Ctrl+Shift+J).
  • Add your instructions to the top of the text. For example: "The following text is the output of an OCR that was emailed to me. Remove the first X rows, and the last Y rows. Correct spelling errors and line breaks that could result from bad OCR" and so on... (You probably have a much better version already that you use to instruct ChatGPT with.)
  • You may select "Show prompt in response" if you want to keep your draft of the set of instructions stored in the training notes (until these instructions become perfect). When this option is not selected, Jarvis will act silently and just return the edited note (leaving out the instructions and input).

Now, let's assume that you already have a good working version of the prompt instructions. In order to integrate your template into Jarvis, do the following:

  • Go to Jarvis settings, to the Advanced Settings section.
  • Locate the field "Prompts: Instruction dropdown options".
  • Add the following text:
{"Fix OCR": "Your prompt"}

If you go for it, good luck! Hope it works flawlessly.


I will try it, thank you. I have a few more questions. Do you recommend having longer notes with all the relevant information, or shorter notes? How important is the structure of the notes? Does it affect the number of tokens used? What is your recommendation on how to structure notes to get the most out of this plugin? What are the advantages and disadvantages of both approaches (longer vs. shorter notes), if there are differences? Thank you very much for taking the time to provide such comprehensive answers!

Ideally: it shouldn't matter.
In practice: we (users) still need to figure it out.

So you're welcome to experiment and provide feedback.

I can explain how notes are currently processed:

  • Notes are split into sections by headings. If you're using headings in a long note, each heading (no matter its level) will be processed separately.
  • Each section is split into blocks (made up of sentences) with a maximal length of about 340 words. This length is dependent on the model's capabilities, and larger models may be able to process larger blocks in the future.
    • This is the atomic unit of a note, as far as Jarvis is concerned. When excerpts of notes are given to GPT, these are the blocks that came up as most relevant to the user prompt.
    • This also means that they are shorter than the maximal number of tokens that can be sent to the model, and there's even room to send a few of them to GPT in a single query.
  • Every block includes, for additional context, the title of the note and the heading hierarchy leading up to that section, as well as note tags if they are present.
  • Code blocks are handled separately from text. In fact, the default in v0.4.3 (new version) is to ignore them, but you can turn this option on. If Jarvis supports models that can analyze code better in the future, this may come in handy.
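A simplified sketch of this splitting scheme; it omits the title, heading hierarchy and tag context that Jarvis prepends to each block, and uses a fixed word budget rather than a model-dependent one:

```python
import re

def split_note(text: str, max_words: int = 340):
    """Split a markdown note into heading-delimited blocks of bounded length."""
    # Break on markdown headings of any level at the start of a line.
    sections = re.split(r"(?m)^#{1,6}\s", text)
    blocks = []
    for section in sections:
        words = section.split()
        # Cap each block at max_words, splitting long sections as needed.
        for i in range(0, len(words), max_words):
            chunk = " ".join(words[i:i + max_words])
            if chunk:
                blocks.append(chunk)
    return blocks
```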

This aims to capture the essence of short sections while taking into account the broader context of a note. It has its limitations, and it remains to be seen how effective this approach is. I included all this information because knowing the mechanism can perhaps inform your writing.

BTW, one more thing you could try, is to set the way multi-block notes are ranked. Check out the setting "Notes: Aggregation method for note similarity".
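For intuition, here are two plausible aggregation methods over per-block similarity scores; the plugin's actual method names and options may differ:

```python
def aggregate(block_scores, method="max"):
    """Turn per-block similarity scores into one note-level score."""
    if not block_scores:
        return 0.0
    if method == "max":
        return max(block_scores)  # the single best-matching block wins
    if method == "avg":
        return sum(block_scores) / len(block_scores)  # whole-note average
    raise ValueError(f"unknown method: {method}")
```

"max" favors long notes with one highly relevant section, while "avg" favors notes that are relevant throughout, which is why the choice can change which notes surface.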


@shikuz I was wondering if it would be possible to add a little more control over the selection of notes that are provided for the reply. I have found that to chat effectively with my notes,

  • I need to start the chat with a note that has the best content related to my question.
  • The related notes must contain content that is relevant to my question.

This does not always work perfectly (due to the imperfection of my notes), so I thought I would create a main note that serves as a hub for other notes that contain information relevant to the questions I plan to ask. I added links to the other notes that should contain the relevant information. From my test, I noticed that this does not make a difference...

Would it be possible to add some kind of preference that would allow me to either manually add notes to analyze or follow links within the notes to other notes?

Honestly, I am not sure how to solve this, but I think having more control over what material goes to Jarvis might increase the relevance of the answers... Maybe it would be enough if "note similarity" were evaluated not only on the basis of the open note, but also on the basis of text in other notes that are cross-linked. (In other words, creating one large note just for the purpose of improving relevant note selection.)

Edit: After some testing, it seems that a big note really helps to get more relevant "related notes". I have some ideas that might improve the results...

  • Previewing which parts of notes are sent to Jarvis might also help the user to better design the prompt...
  • Use Jarvis to somehow optimize the user query to produce a more effective search query. This idea comes from Bing's search. I noticed that when I use Bing AI to search for something, it sometimes improves my search query ... Maybe Jarvis could be used to refine the query and then answer it.