Plugin: Jarvis (AI assistant) [v0.8.5, 2024-06-04]

Indeed, for some this is not an option. Therefore: (1) By default, note indexing and semantic search are based on an offline model (Universal Sentence Encoder). (2) Regular chat and queries to LLMs do not include excerpts from notes (only the queries you explicitly write). I hope this covers most users' needs.

There are a number of packages for offline LLM models that may work pretty well on, say, M1 laptops (such as 1, 2, 3). I'm very excited about the possibility of using them, but so far I've run into technical difficulties bundling them with webpack, or loading them at runtime. It all comes down to their use of pre-compiled native addons (namely .node files), and to the fact that I'm not a JS/TS developer (plugins are just so much fun). @laurent, do you know if there is any inherent limitation in Electron, Joplin, or specifically the plugin system that prohibits the use of such native addons?
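For what it's worth, the usual workaround I've seen for the webpack side of the problem is to keep the native module out of the bundle entirely, either by declaring it as an external (so it's `require()`'d at runtime) or by routing `.node` files through `node-loader`. A minimal sketch, assuming a plain webpack setup; `some-native-module` is a placeholder, not one of the actual packages:

```javascript
// webpack.config.js — sketch for handling pre-compiled .node addons
// (hypothetical: 'some-native-module' stands in for the real native package)
module.exports = {
  target: 'node', // plugin code runs in a Node-like context
  externals: {
    // Option 1: exclude the native module from the bundle;
    // it is resolved with a plain require() at runtime instead.
    'some-native-module': 'commonjs some-native-module',
  },
  module: {
    rules: [
      // Option 2: node-loader copies .node files into the output
      // directory and rewires require() calls to point at them.
      { test: /\.node$/, loader: 'node-loader' },
    ],
  },
};
```

Whether the runtime `require()` then actually succeeds inside a Joplin plugin is exactly the open question above, since the addon must be compiled against the ABI of the Electron build that Joplin ships.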