@yugalkaushik Hi! Thank you for your answer! It’s great to see your POC with sigma.js running in a Joplin plugin.
Your point regarding Transformers.js:
- I did GSoC back then and managed to run language models in a plugin with Transformers.js. I wrote about how I got it running in my report: GSoC 2024 (AI summarization plugin).
- A few contributors have also done some benchmarking and measurements of running Transformers.js:
- I think the choice of just using Ollama is fine, but I would like to see more of an argument for how much better Ollama is than Transformers.js:
  - What specific advantages do **Ollama's** embedding models (like `nomic-embed-text`) have over what's available in Transformers.js (like `BGE-small-en-v1.5` or `all-MiniLM-L6-v2`)? Think about context length, embedding quality, etc.
  - How much is the inference speed difference for 1000 notes?