Not 'everyone'... I unlearned that after years of training others in computer skills. But that's just virtue signalling here.
But distributing dictionaries with the executable, or in a separate language pack, is off the table?
just to be sure I understand your comment, do you mean "off the table"? If so, somebody more knowledgeable must answer. Perhaps there are restrictions on distribution of the dictionaries?
If I understand correctly, besides the spell checker, the dictionaries themselves are also open source.
The only thing is infrequent updates for modern words, but I don't think that would be problematic.
I don't know the coding effort though; with Electron, a dictionary is sort of a package deal.
The technical solution itself is not a big issue - it's not hard to host a file or to make an Electron API call. What's difficult (and increasingly so) is maintaining all these applications and infrastructure as they keep growing in complexity.
Here we are talking about updating the infrastructure to maintain and update dictionaries (presumably one per language) and to make new Electron API calls (and those get breaking changes very often). Should we actually do that? Is it that crucial to provide that level of control for each HTTP request? I think that's something that should be discussed, because it's apparently assumed that we have some moral obligation or something to provide that control, but I don't think we do. We have to be pragmatic and weigh each solution against the maintenance cost.
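For context, the kind of Electron API call being discussed looks roughly like this (a minimal sketch, not Joplin's actual code; the mirror URL and language list are made-up placeholders):

```ts
// Minimal sketch (Electron main process): pointing Chromium's spell checker at a
// self-hosted dictionary mirror instead of the default Google CDN.
// The mirror URL is a placeholder; Electron appends the .bdic file name to it,
// and the URL must end with a trailing slash.
import { app, session } from 'electron';

app.whenReady().then(() => {
  const ses = session.defaultSession;
  ses.setSpellCheckerDictionaryDownloadURL('https://dictionaries.example.org/');

  // Only download dictionaries for the languages the user actually enabled.
  ses.setSpellCheckerLanguages(['en-US', 'fr']);
});
```

The call itself is one line; the real cost is hosting and keeping the dictionary files up to date, which is the maintenance burden described above.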
Here the value is low for most users (many don't care, and those who do already have Little Snitch or similar installed), and the maintenance cost is relatively high. Because of this, I'm not convinced it's worth supporting this feature.
I totally understand Laurent.
me too!!!
I use Joplin because it's privacy-focused. I trust that Joplin would not directly transmit my note contents to remote servers without my knowledge, unless a vulnerability has been exploited in my OS or in Joplin. Without its focus on privacy, Joplin will lose its unique value to me and many others. The people arguing that Joplin should not prioritise privacy because it only serves a small user base are making a seemingly sound but destructive argument. While people who use Joplin because they like its editor can use Evernote with slight annoyance, people who want privacy will have very few to no options. Just because you do not want privacy now does not mean you won't want it later on.
Should Joplin focus on 'what the user base wants', with guesstimates of what percentage of users want something, or maintain what it has to offer more uniquely than other projects?
Privacy is not just a bonus; it's a matter of ethics as far as software development is concerned. Should you allow Google to collect risky amounts of data, given that it might threaten our freedom in the long run, irrespective of whether people care or not? Just because people do not care does not mean they should not care, nor that their demands, no matter how small the percentage demanding them, should not be considered valid and worth catering to. For example, ramps for the disabled need to be built, even though the percentage of people benefitting from them is very small. Shouldn't the people who care be ready with options for when the public becomes more cognizant of the importance of privacy?
However, the more important point is whether these calls to Google servers are a privacy risk or not, and if they are, of what magnitude. The privacy risk level corresponds to the sensitivity of the information that can be deduced from the data collected. Here it is important to know what data is being sent, and whether the user can verify what data is being sent by looking at the code. What is the maximum amount of data that could potentially be sent if this cannot be verified from the code? Can it include note content or not?

As for trusting the devs: in this line of business, paranoia is not always unjustified. What is the possibility of an underhand deal to quietly siphon off data from your privacy-focused app? I am not saying anything like that is going on, but is there any way to properly put this beyond the realm of possibility without an analysis of what data is sent and to whom? The data of those seemingly looking for privacy is of great value.

Having said this, I would like to state that I highly value all the work that has been done with Joplin, and I respect and admire the dev(s?) and contributors very much. I raised that possibility not to throw shade on wonderful people, but to achieve clarity about what people might be worried about, so that an adequate solution might be found.
Beyond just the reality of privacy, there is also the mental comfort level. Many people are paranoid because they are not able to calculate the actual risk posed to them by Google's data collection. They just want to feel safe, and sometimes feeling safe can be very important. So, if it is assessed that the Google server calls are not a substantial privacy risk, that assessment should ideally be made available, either as a pinned post or as a section in the privacy policy. This would give people a ready reference each time they feel challenged on the privacy aspects of Joplin. All this panic over Google server calls might just come down to a lack of knowledge about how software works. People like me feel helpless in a software-infused world, with no concrete idea of what reasonable limits might be.
I don't hear people making this argument at all. The original complaint is that Joplin is making a call to a Google server. This may have something to do with secrecy, but it has nothing to do with keeping the user's data private. The argument is simply that if you need that level of control over what your software does to external servers, then you need to be using some other kind of protection.
I think this is very true. It's fine to care about server calls to Google. It's not fine to equate this with the devs being irresponsible/deceitful and sharing user data.
Did we read the same thread?
However, I do agree that if the data being sent to Google is of low value, then it's unreasonable to expect devs and contributors to go to extraordinary lengths to stop it from happening.
Relying on third party firewall tools is a good idea to help maintain privacy in general.
Where did you get the impression that Joplin's focus on privacy has changed? In fact, Laurent basically said the opposite, with what I think a very sensible compromise:
That was exactly the point. No user data is sent to Google servers. A GET request is made to download a dictionary. As with all requests, certain information is most likely logged on the server (IP address, user agent string, URI).
In my tests (on Linux) I just turned off spellchecking and no requests were made.
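For what it's worth, at the Electron level that setting roughly corresponds to disabling the Chromium spell checker, which is also why no dictionary request is made once it's off (a sketch of the underlying API, not necessarily how Joplin wires it up internally):

```ts
// Sketch: with the spell checker disabled, Chromium has no reason to fetch
// dictionaries, so no request to the download URL should be made.
import { app, session } from 'electron';

app.whenReady().then(() => {
  session.defaultSession.setSpellCheckerEnabled(false);
});
```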
I think that the default settings should be changed.
On the other hand I also get Laurent's statement.
I even removed the spellchecker icons from my UI on my macOS machines. But I am most likely not a good reference point anyway, because I use multiple security layers.
This is my view too. I'm not worried about the metadata that Google will capture in a single GET request. I think the talk about what other secret information could be transmitted in SSL is a bit overblown. The software making the request is open-source. It wouldn't be too hard to open up the requests if someone really wanted to.
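For anyone who does want to look: since the app is open source and Electron-based, one low-effort way is to log every outgoing request from the main process (a debugging sketch, assuming you build from source; an external proxy such as mitmproxy works just as well):

```ts
// Debugging sketch: log every outgoing request made through the default session.
import { app, session } from 'electron';

app.whenReady().then(() => {
  session.defaultSession.webRequest.onBeforeRequest((details, callback) => {
    console.log(`[net] ${details.method} ${details.url}`);
    callback({ cancel: false }); // observe only, never block
  });
});
```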
For me, Joplin is about keeping my data private, rather than my existence. I'm sure Google already knows a lot about me just from ads. Adding the single data point that I use an Electron app isn't much more. If the text of my notes was being uploaded for spell-checking, that would be different.
I host LanguageTool myself: GitHub - Erikvl87/docker-languagetool (Dockerfile for LanguageTool server, configurable)
Is there a way to use it in Joplin so that the user does not need to rely on external stuff, given the user hosts her own language tools?
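I don't think Joplin has such an integration today, but for reference the LanguageTool server exposes a plain HTTP API, so a client could in principle talk to a self-hosted instance like this (a hypothetical sketch; the host and port are placeholders for wherever the Docker container runs):

```ts
// Hypothetical sketch, not an existing Joplin feature: querying a self-hosted
// LanguageTool server through its standard HTTP API (POST /v2/check).
type LanguageToolMatch = {
  message: string;
  replacements: { value: string }[];
};

async function checkText(text: string): Promise<void> {
  const body = new URLSearchParams({ text, language: 'en-US' });
  const response = await fetch('http://localhost:8010/v2/check', {
    method: 'POST',
    body, // URLSearchParams is sent as application/x-www-form-urlencoded
  });
  const result = (await response.json()) as { matches: LanguageToolMatch[] };

  // Each match describes a possible spelling or grammar issue with suggestions.
  for (const match of result.matches) {
    console.log(match.message, match.replacements.map(r => r.value));
  }
}

checkText('This sentense has a typo.');
```

That keeps all text on your own machine, but it would still be a separate feature from the Chromium spell checker that triggers the Google CDN request.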
This came up on Hacker News yesterday and it seems relevant to the discussion here: GDPR penalty for passing on of IP address to Google by using Google Fonts | Hacker News
(I don't read German, so this is based on comments on HN.) A company has been fined for a GDPR violation because it used Google Fonts on its website, which caused clients' browsers to send a GET request to Google servers, thus leaking IP addresses.
I think this Google translation of a news article is quite good: LG München: Einbindung von Google Fonts ohne Einwilligung ("Embedding Google Fonts without consent"; look at 5., 6. and 7.)
As @roman_r_m already said, a German website used Google Fonts and has to pay 100€ in compensation to a user. It's important to note that there was no opt-in option: Google Fonts were loaded automatically, and therefore the IP address was transmitted automatically. The GDPR allows providers to claim a "legitimate interest" in transmitting certain data, but it is easy to host fonts locally on a web server, so there is no legitimate interest in using Google Fonts. Therefore the company has to pay compensation to the user.
It's a first-instance ruling, so an appeal to a higher court is likely. Moreover, it is just one district court; only time and more lawsuits will tell whether using Google Fonts is a GDPR violation. The decision deals only with Google Fonts, not with YouTube embeds, Adobe Fonts, Hunspell or any other form of external service, so it does not have a direct impact on Joplin.
But it shows where we are headed in Germany/the EU (and other regions that have introduced similar legislation): IP addresses are personal data and therefore protected by the GDPR. So even "simple" GET requests can be a GDPR violation, though certain exceptions might apply.
I personally think you cannot compare Google Fonts and Google's Electron CDN, because you cannot add every dictionary to a Joplin release. But perhaps requiring users to opt in before Joplin makes a connection to a Google server could be an option.
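Technically, such an opt-in would not be complicated; a rough sketch of the idea (the consent check is hypothetical, not an existing Joplin setting):

```ts
// Hypothetical sketch: keep the Chromium spell checker disabled until the user
// has explicitly consented, since enabling it is what triggers the dictionary
// download from the Google CDN.
import { app, session } from 'electron';

// Stand-in for reading a stored preference; in a real app this would come from
// the settings store after the user accepted a consent dialog.
function userConsentedToDictionaryDownload(): boolean {
  return false; // default: no connection
}

app.whenReady().then(() => {
  session.defaultSession.setSpellCheckerEnabled(userConsentedToDictionaryDownload());
});
```

The legal question of whether such an opt-in is actually required is of course separate from how easy it is to implement.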
(I am not a lawyer, so I am just guessing based on my experience with German courts in the past.)
I wonder if that applies to open source software though. It's under the MIT license and no guarantee of any kind is provided - that's what the user agrees to when they download the app. So if the authors of such apps can be fined, it means the MIT license means nothing, and you can be fined and sued left and right for any little thing.
Likewise, if you run curl https://google.com, should the author of curl be fined because it turns out that Google is keeping the IP in their logs? You might say you ran this command yourself, but with Joplin it's the same: you downloaded and ran the app yourself; we aren't forcing you. And while doing so you've accepted the MIT license.
You're right Laurent, the only thing is that when open source (or any kind of software) isn't GDPR compliant, businesses can't procure it. At least they can't use it in a compliant way.
Mileage may vary of course, but sometimes these things become knock-out criteria in tenders.
But maybe Joplin isn't meant to be procured by businesses.
Side note:
- Something being free doesn't mean it doesn't need to be checked against the same requirements a business has during normal procurement.
- If we were to procure Joplin Cloud for my organisation, all the checks would apply, because it would be the procurement of a service.
This very plausible, wonderful explanation can be demystified rapidly. Either you think that Bruce Schneier is a nut and his blog is useless (that is why Senators invite him to testify before Congress, I assume); in that case your statement on paranoia and panic is a useful call to reason. Or Bruce (not to mention the enlightenment we got from the Snowden files) has many serious points to make; in that case your call to reason is simply... completely useless. Everybody can choose for themselves.