Plugin: Jarvis (AI assistant) [v0.7.0, 2023-08-31]

Can you provide support for Azure OpenAI API? Thank you so much.

Thanks for submitting an issue, @mask. I'll look into it when I get a chance and see what it requires. The top priority is to add support for models other than OpenAI's (to increase diversity; there are so many great models out there). But noted, and I'll see what I can do.

v0.5.0

This release includes many improvements under the hood, such as better chat processing, error handling, note chunking, and token counting (for example, no need to set max tokens in the settings in most cases). Most importantly, this version introduces new models for document embedding (related notes) and text generation (ask, research, chat). Adding other models in the future will be easier.

Personally, for the time being, I'm probably going to stay with the offline USE for related notes and the online gpt-3.5-turbo-16k for chat. The latter model, recently introduced by OpenAI, provides a large context window (a high Max tokens setting) that is great for text summarization (including literature reviews / research) and for chatting with your notes (by increasing Memory tokens you can get a lot more context into the conversation).

  • New chat / ask / research models

    • Anything on Hugging Face (default: LaMini-Flan-T5-783M)
      • The great news is that HF has a free inference API (no need to set an API key)
      • Do note that the free tier only supports the smaller (less sophisticated) models, but they're still worth a try. The default model should work fast and reasonably well, all things considered, but it's not state of the art. If you're willing to pay for compute, you can get access to all hosted models
    • OpenAI
      • Extended-context gpt-3.5-turbo-16k
      • Everyone should also have access now to the long-awaited gpt-4
      • Deprecated a few old models of GPT-3
  • New semantic search / related notes models (embeddings)

    • OpenAI (text-embedding-ada-002)
      • As far as I know, this model can process code blocks. In order to enable code processing, check the setting "Notes: Include code blocks in DB"
      • This model is multi-lingual. I could not find official documentation for this, but it is likely to work OK with many European languages. It doesn't seem to support Asian languages that well
    • Anything on Hugging Face (default: paraphrase-multilingual-mpnet-base-v2)
      • I've had a good experience with HF embeddings
      • The default model is multi-lingual (trained on 50+ languages), and works well for a diverse set of languages
    • Some comments
      • These online models may be faster than the default (offline) model if you're using an old machine
      • You can try out a few models and switch between them. Jarvis will index your notes in a separate database for each and load the relevant results
      • You may need to adjust the setting Minimal note similarity when switching to a new model, in order to get the most relevant results displayed in the Related Notes panel
      • I couldn't get an additional offline model to work properly (and specifically transformers.js), perhaps next time (PRs are welcome)
  • Chat with notes features

    • Added a new user guide
    • The note database version has changed, and you will be asked to upgrade the DB
    • Only commands that appear in the last user prompt apply
    • New commands: Context: and Not Context:
    • Improved the Search: command so that selected blocks must contain the search keywords
    • Here is a recap of all available chat commands:
| Command | Description | Content included in Jarvis prompt | Content included in context search |
|---|---|---|---|
| Notes: | The following list of note IDs (or internal links) will be the source for chat context | No | Yes |
| Search: | The following Joplin search query will be used to search for related notes (in addition to semantic search), and search keywords must appear in the selected context | No | Yes |
| Context: | The following text will be used to semantically search for related notes, instead of the entire note | No | Yes |
| Not Context: | The following text will be excluded from semantic search (e.g., it can be used to define Jarvis' role), but the rest of the conversation will still be used | Yes | No |
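
To make this concrete, here is an illustrative prompt combining a few of the commands (the note ID and keywords are made up; this is an example rather than a snippet from a real chat). Each command behaves as per the table above:

```
Notes: 0a1b2c3d4e5f60718293a4b5c6d7e8f9
Search: budget
Not Context: Answer as a terse project manager.
What budget risks came up in my recent meetings?
```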

This is a really feature-packed update! Looking forward to trying it out. Can you elaborate on your personal experience with the new semantic search / related notes models (embeddings)? Could you show some benefits or differences using specific examples you tried?

Also, please correct me if I have misunderstood this. The current offline model needs internet access to load, but once loaded it will update the note database even without internet. The online models will only update the database when there is an internet connection. But regardless of the model, I should be able to use related notes offline even if the database is a bit outdated. Am I right?


Very impressive, thanks for sharing @shikuz! I want to give the plugin a try, although I haven't yet.

Do you think it could also be used to search for notes? Sometimes I'm looking for a note which I know is there, but I can't find it because I don't type the right keywords. Could the AI help with this and return the exact note based on a vague search request?


Thanks @laurent & @davadev.

I'll share an experiment that I did the other day. I opened a new note, wrote a line in French, and selected it ("J'ai eu un rêve la nuit dernière": "I had a dream last night"), then hit "Find related notes" (when selecting text, only the selected text and the title of the note are used in the search; I kept the note title empty). The multi-lingual Hugging Face model was able to find all my English notes related to my dreams, which did not contain any of these words. I then repeated this 4 more times, each time selecting a different line of text:

Letzte Nacht hatte ich einen Traum (German)
كان لدي حلم الليلة الماضية (Arabic)
חלמתי חלום אתמול בלילה (Hebrew)
昨夜夢を見ました (Japanese)

(each meaning "I had a dream last night")

In all cases it was able to find the dream notes. This demonstrates the flexibility of search, and one of the highlights (for me) of the new models.
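
For those curious about what such a query looks like under the hood, here is a rough sketch of a sentence-similarity request to the free HF inference API. The endpoint and payload follow HF's documented sentence-similarity task; the note texts and scores are made up, and this is not Jarvis' actual code:

```typescript
// Sketch: query the free HF inference API (sentence-similarity task).
const MODEL = 'sentence-transformers/paraphrase-multilingual-mpnet-base-v2';

async function similarity(source: string, notes: string[]): Promise<number[]> {
  const res = await fetch(
    `https://api-inference.huggingface.co/models/${MODEL}`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      // no API key needed on the free tier (rate-limited)
      body: JSON.stringify({
        inputs: { source_sentence: source, sentences: notes },
      }),
    },
  );
  return res.json(); // array of similarity scores, one per candidate note
}

// the French query scores high against an English dream note
similarity("J'ai eu un rêve la nuit dernière", [
  'Dream journal: flying over the city',
  'Grocery list',
]).then(console.log); // e.g., [0.62, 0.05] (illustrative values)
```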

I didn't design this as a proper search tool with a text box for a query, so it's a little awkward to use at the moment, but the Related Notes panel displays similar notes as you switch between notes. Anecdotally, what it finds can sometimes be helpful, and remind me of meetings and notes that I had already forgotten about. My general impression, @davadev, is that the new embedding models are slightly more accurate (also in English), but I'm waiting to hear feedback.

Indeed.

Theoretically, the database could be used even if the model is offline (and maybe that's a good feature request). It's also true that if a note's content hasn't changed, the model isn't used (embeddings are loaded from the database). However, since new content, searching based on selected text, and chatting with notes all depend on the model being available, Jarvis tests it when it loads. So if an online (or offline) model is not available when Jarvis loads, it will not load the database, and all related functions will be unavailable. However, once the model has loaded on start-up, even if an online model loses its internet connection, Jarvis will still be able to query the database and keep displaying related notes.
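
To spell out the logic above, a rough sketch (all names are illustrative; this is not Jarvis' actual code):

```typescript
// Illustrative sketch of the start-up behavior; names are made up.
interface EmbeddingModel { test(): Promise<boolean>; }
interface NoteDatabase { load(): Promise<void>; query(text: string): string[]; }

async function initRelatedNotes(
  model: EmbeddingModel,
  db: NoteDatabase,
): Promise<NoteDatabase | null> {
  // Jarvis tests the model once, when it loads
  if (!(await model.test())) {
    // model unavailable at start-up: the database is not loaded
    // and all related-notes functions stay disabled
    return null;
  }
  await db.load();
  // From here on, embeddings of unchanged notes are read from the
  // database, so losing connectivity later only blocks new content,
  // not the display of related notes.
  return db;
}
```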


One more advantage of the new semantic models is that, as far as I know, OpenAI's text-embedding-ada-002 can process code blocks. (There are probably numerous such models on Hugging Face as well.) In order to enable code processing, check the setting "Notes: Include code blocks in DB".

Shikuz, this update has been a game changer for me! Now I can really create a character that I can chat with. Especially the "Not context:" feature and the improved "Search:" make all the difference! ChatGPT 3.5 Turbo 16k makes my characters more coherent too.

I tend to insert the same "not context:" block in every message. Would it be possible to set this globally, either in the Jarvis settings or for the current note, so that I don't have to insert it in every message? I do the same thing with the "Search:" feature. I basically always search for the occurrence of character names in my notes. If I could hide these tricks from chat, it would make the whole experience more magical...

I imagine a setting like this: "Always insert this block of text in the chat with your notes." Within this block I could use all the commands you defined above.

Happy to hear that, @JamesWriterNarry :slight_smile:

I'll add something to the next release. I'm thinking of a block of text that you can insert anywhere (e.g., at the top / bottom of the note) that will set default commands for that note / chat, which will apply to every user prompt unless they are overridden.

Note that currently, since chat memory contains your conversation with Jarvis, it also includes Not context: lines from your previous prompts. So if you asked Jarvis in your first prompt to act as a character, this request would remain part of the conversation when you come to write your next prompt (though it will not be used in semantic search). This holds until chat memory is exhausted (exceeds the allocated memory tokens). So if you use the 16k model and set a high number of memory tokens, the initial Not context: instruction is likely to stay in memory until the end of your chat, and there is no need to repeat the same command every time (at least it's worth checking whether this is an effective tactic). The above does not hold for the Search: command, which still needs to be repeated.
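
For instance, a first prompt along these lines (the character and search keyword are made up) keeps the instruction in chat memory for subsequent prompts:

```
Not context: You are a hard-boiled detective. Stay in character.
Search: Marlowe
What do my notes say about the warehouse case?
```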

v0.5.2

  • new: search box in the related notes panel (for @laurent )
    • use free text to semantically search for related notes
    • notes in the panel are sorted by their relevance to the query in the box
      • within each note, its sections are sorted by their relevance
    • you may hide it in the settings ("Notes: Show search box")

  • new: global commands for chat with notes (for @JamesWriterNarry )
    • any command that appears in a "jarvis" code block will set the default behavior for the current chat / note

    • you may override this default by using the command again within a specific prompt in the chat

    • for example:

      ```jarvis
      Context: This is the default context for each prompt in the chat.
      Search: This is the default search query.
      Not context: This is the default text that will be excluded from semantic search, and appended to every prompt.
      ```
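
As an illustration (my own made-up example, not from the release notes), a later prompt in the same chat can override one of these defaults:

```
Search: project alpha
What changed since our last sync?
```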
      

v0.5.3

  • new: custom OpenAI model IDs
    • select Chat: Model "(online) OpenAI: custom"
    • in the Advanced Settings section, set Chat: OpenAI custom model ID, for example: gpt-4-0314

Hi. Jarvis is shockingly good. Thanks for your hard work!

I recently discovered OpenRouter. I'm not sure how to describe it, as I'm new to all of this (I'm a post-ChatGPT computer scientist; a doctor by training). It allows API access to Claude, GPT-4, and others through one API key. I was (a) sharing that as a fellow AI nerd, since at times I like Claude's writing skills better, and (b) hoping you could consider implementing this API integration at some point.

Many thanks
E


Thanks @ebc000 ! Glad you find it helpful.

I appreciate, and am curious about, this suggestion. OpenRouter sounds in theory like a good idea (one API, many models), and their API looks easy to implement based on the limited documentation available on their website. I still need to test it. My main concern, however, is how well-established, reliable, and privacy-respecting this service is, as everything that users send and receive will go through OpenRouter's servers. I'll keep an eye on it and see how their product develops, and may consider adding their API in the future.


Even sending the notes to OpenAI is probably not an option for many people. I'm quite ignorant about LLMs, but I assume a fully offline solution wouldn't be workable, would it?

Indeed, for some this is not an option. Therefore: (1) By default, note indexing and semantic search are based on an offline model (Universal Sentence Encoder). (2) Regular chat and queries to LLMs do not include excerpts from notes (only explicitly written queries). I hope that this use case suits most users.

There are a number of packages for offline LLM models that may work pretty well on, say, M1 laptops (such as 1, 2, 3). I'm very excited about the possibility of using them, but so far I've run into technical difficulties bundling them with webpack, or loading them at runtime. It all comes down to their use of pre-compiled native addons (namely .node files), and to the fact that I'm not a JS/TS developer (plugins are just so much fun). Do you know, @laurent, if there is any inherent limitation in Electron, Joplin, or specifically the plugin system that prohibits the use of such native addons?
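
For reference, the standard webpack recipe I have been attempting is along these lines (a minimal sketch assuming the node-loader package; not a complete Joplin plugin config):

```typescript
// webpack.config.js (sketch): route pre-compiled .node addons through
// node-loader so webpack doesn't try to parse them as JavaScript.
// Assumes `npm install node-loader`; target and paths are illustrative.
module.exports = {
  target: 'node', // plugins run in a Node-like context
  module: {
    rules: [
      { test: /\.node$/, loader: 'node-loader' },
    ],
  },
};
```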


In the app itself all native modules are compiled separately for the target platform, so I assume that to get this working with plugins they should also bundle a version of the module for each platform.

I don't know of any plugin that includes native modules, but maybe other plugin developers who have tried it could help. Or if you try again with webpack, post the error messages here (or in the dev category) and perhaps someone can help.


v0.6.0

  • Annotations
    • This release introduces the toolbar button / command Annotate note with Jarvis. It can automatically annotate a note based on its content in 4 ways: by setting the title of the note; by adding a summary section; by adding links to related notes; and by adding tags (gpt-4 is recommended for tags). Each of these 4 features can be turned on or off in the settings in order to customize the behavior of the command. In addition, each sub-command can be run separately. For more information, see this guide.
  • System message
    • You may edit it in the settings to inform Jarvis who he is and what his purpose is, and to provide more information about yourself and your interests, in order to customize Jarvis' responses.

v0.7.0

Jarvis can now work completely offline! (Continue reading)
This release adds two new model interfaces.

Google PaLM

  • If you have access to it (it's free), you can use it for chat and for related notes.

Custom OpenAI-like APIs

  • This allows Jarvis to use custom endpoints and models that have an OpenAI-compatible interface (see the request sketch after this list).
  • Example: [tested] OpenRouter (for @ebc000) setup guide
  • Example: [not tested] Azure OpenAI (previously requested)
  • Example: [tested] Locally served GPT4All (for @laurent, and everyone else who showed interest) setup guide
    • This is an open source, offline model (you may in fact choose from several available models), that you can install and run on a laptop. It can be used for chat, and potentially also for related notes (embeddings didn't work for me, probably due to a gpt4all issue, but related notes already support the USE offline model).
    • This solution for an offline model is not ideal, as it may be technically challenging for a user to run their own server, but at the moment this workaround looks like the only viable solution, and doesn't involve a lot of steps.
  • Example: [not tested] LocalAI
    • This is another self-hosted server that supports many models, in case you run into issues with GPT4All.
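
To give a sense of what "OpenAI-compatible" means here: any server that accepts a standard chat-completions request like the sketch below should work. The base URL, key, and model name are placeholders; substitute whatever your server (GPT4All, OpenRouter, Azure, LocalAI) actually exposes:

```typescript
// Generic OpenAI-compatible chat request (sketch); values are placeholders.
const BASE_URL = 'http://localhost:4891/v1'; // e.g., a locally served model

async function chat(prompt: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      // local servers may ignore the key; hosted ones (OpenRouter etc.) require it
      Authorization: 'Bearer YOUR_API_KEY',
    },
    body: JSON.stringify({
      model: 'your-model-name',
      messages: [{ role: 'user', content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```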

Hi @shikuz. I have been experimenting with GPT4All and the "Mini Orca (small)" model. My laptop only has 8GB of RAM, so most of the other options are off the table :frowning: GPT4All also has a plugin for adding documents, but I struggled to formulate a prompt that would make the model actually access my documents. I was wondering if you could add support for this model. My first impression is that it could work well for chatting with notes or summarizing them, if it just managed to retrieve the relevant information, which Jarvis does quite well through semantic search. It also does not take up much space on my laptop and is relatively fast for an offline model.

I mean without the hassle of having to install my locally hosted GPT4All... My tests were with the GPT4All client.

Edit: It just occurred to me that "Mini Orca (small)" might not have enough tokens to process large notes. Maybe I need to look for another offline model. I tried something bigger with the GPT4All client, but since my toaster only has 8GB of RAM, the bigger models struggled with performance.

I also tried to follow the guide for locally served GPT4All, but it was not so easy, as I have no experience with Docker. At one point the app failed to start. I suspect this is because the docker build command downloads a slightly larger model by default (ggml-model-gpt4all-falcon-q4_0), and since my laptop has only 8GB of RAM it fails due to low resources.

Unfortunately, it feels like it will take some time before local LLMs will be useful on average end-user devices.

I wish I had access to Google PaLM, as it seems I am currently running out of options for a local LLM. Has anyone else had success getting Jarvis to work with a local LLM on a low-end device?