Thank you both for the feedback.
@JamesWriterNarry I ran the following experiment: I downloaded a couple of scripts (removing the film title and the credits, but leaving the character names untouched), pasted each into a single note, and tried to talk to the characters. I requested that the note containing the script be the sole source of context sent to the model (see one example below). I think that Jarvis was able to get into character to some extent, and to refer to events and interactions with other characters (not all of them, of course, but some).
User: I want you to act as the character Vincent, think like he does, feel the way he does, and talk like he does. The script below contains examples of conversations that Vincent had. What's your favorite food?
Jarvis: As Vincent, my favorite food is definitely a good steak. But speaking of food, do you know what a five-dollar shake tastes like? Mia had one at Jackrabbit Slim's and I gotta say, it was pretty damn good. [Note 1]
The only problem is that it's not a very good experiment, as in every case I checked I could push Jarvis to guess the name of the film (even for films much less famous than Pulp Fiction). Therefore, it's possible that the characterization was also drawn from the model's training data, which wouldn't apply in your case (though I do imagine the model can generally detect archetypes and impersonate them based on a few clues / descriptions / examples). It's worth noting, however, that without the context of the script - just giving Jarvis the name of the movie and the character to play - I did not get good results.
To conclude, I think that your approach has merit, but I recommend using the `Notes:` command instead of search if you're creating a special note (or notes) for the sake of the conversation, or if you already know in advance which of your notes are highly relevant. It remains to be seen how many examples (or how long a conversation) the model needs in order to calibrate itself to the character.
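For illustration, a chat note for this use case might look roughly like the sketch below (the `Notes:` argument shown here is just an example of referencing the relevant note; adjust it to however you point the command at your notes):

```
Notes: Pulp Fiction script

User: I want you to act as the character Vincent...

Jarvis: As Vincent, ...
```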
@davadev I agree that if there's a need to change the settings frequently, then it would make sense to expose the relevant settings in a more convenient way, maybe in the panel, and perhaps even attached to profiles as you suggested. I'm adding it to the list of feature requests, but there are a few major ones that will come before that.
This looks like the correct syntax to search for notes containing `keyword` in the notebook `MyNotes` (unless there's a bug). I'll explain again how this command works, as I'm not sure it was clear in the release notes. Anything that comes after
`Search:` is sent to Joplin to search for notes. It is essentially the same as searching via Joplin's search bar (you can, in fact, test it this way). If the search query is very specific, it might return only a handful of notes, but if it's a single common word it's likely to return a hundred. From these search results, Jarvis then selects the most relevant excerpts to send to the model along with your prompt. So it's possible that the 2nd paragraph of the 100th note was selected by Jarvis because it matched the search query and was semantically similar to the current conversation. Semantic similarity is still used to rank the search results because in most cases there isn't room for all of them to fit in (due to context-length limits).
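To make the ranking step concrete, here is a minimal Python sketch of the general idea (not Jarvis's actual code): each excerpt from the search results is scored by cosine similarity against an embedding of the conversation, and excerpts are packed greedily until the context budget runs out. The embeddings and the character budget below are toy stand-ins; a real setup would use a sentence encoder and a token budget.

```python
import math

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def select_excerpts(query_vec, excerpts, budget):
    """excerpts: list of (text, embedding); budget: max total characters."""
    # Rank all excerpts by similarity to the conversation, most similar first.
    ranked = sorted(excerpts, key=lambda e: cosine(query_vec, e[1]), reverse=True)
    chosen, used = [], 0
    for text, _ in ranked:
        if used + len(text) > budget:
            continue  # skip excerpts that would overflow the context budget
        chosen.append(text)
        used += len(text)
    return chosen

# Toy demo: with a tight budget, only the most similar excerpt survives.
query = [1.0, 0.0]
excerpts = [("about steak", [0.9, 0.1]), ("about shakes", [0.1, 0.9])]
print(select_excerpts(query, excerpts, budget=15))  # → ['about steak']
```

The point of the demo is only the mechanism: the search query decides *which* notes are candidates, while similarity to the conversation decides *which excerpts* actually make it into the prompt.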