New Joplin MCP Server

Right, I should have mentioned I'm trying to get it working with a local LLM.

In my tests google/gemma-3-27b gives good results now and then, but it's inconsistent and occasionally gets stuck in a loop (executing the same generated code over and over). It's possible I need to tune its sampling parameters somehow. Most other models either didn't work at all or got stuck in loops.
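For what it's worth, repetition loops can often be dampened by raising the repeat penalty and keeping the temperature moderate. A minimal sketch, assuming the model is served with llama.cpp's `llama-server` (the model filename and exact values here are assumptions, not a tested recipe):

```shell
# Hypothetical llama.cpp invocation; adjust the model path to your setup.
# A repeat penalty above 1.0 discourages the model from re-emitting the
# same tool call, and a moderate temperature keeps output from collapsing.
llama-server -m gemma-3-27b-it-Q4_K_M.gguf \
  --temp 0.7 \
  --repeat-penalty 1.15 \
  --top-p 0.9
```

Whether this actually fixes the tool-calling loop will depend on the client as well, since some MCP clients override server-side sampling defaults per request.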

openai/gpt-oss-20b finds the correct result but doesn't know how to present it and prints "done" instead. The actual result is only visible in the log as raw JSON.