Yes. Email has always been one of the more vulnerable parts of the computer ecosystem, because any stranger can use it to send a (malicious) file into your computer or server for processing.
Simpler email is safer. Every new feature has bugs. Some bugs are vulnerabilities.
Gmail adding learning models is creating a new risk for you.
How large that risk is, has yet to be discovered.
My armchair opinion is that the new risk is minimal, compared to the rest of the risks of using email. But time will tell.
I’m the meantime, if I still used Gmail, I would turn the LLM features off and let someone else discover how risky it is. (Edit: It’s really probably fine, but I’m very risk averse with my email.)
I also agree that it’s time for rational people to leave Gmail, if they can. But my reasons are privacy reasons, rather than security reasons.
I work in ICT. Leaving Gmail is much easier said than done. It has the best spam filtering bar none and integrates with a whole host of other services that I use daily, like the mobile phone I’m writing this on for example, the one that integrates my calendar, tasks, contacts, photos, websites, YouTube channel, spreadsheets and, oh yeah … that other thing … Gmail.
So, if wishing made it so.
What I’d like is a Google Workspace tier that is entirely without AI.
I’ve left gmail and had no real challenges with spam filtering or anything else so far. I lost integration between calendar photos drive etc, which has removed some convenience, but that was also kind of the point.
Yeah I keep hearing this argument, yet in real world deployments with just SPF checking, greylisting, and spamassassin my experience has been that it really isn’t much of an issue.
Proton is pretty good and covers IMO the most critical parts of the Google ecosystem. I made the move a couple of weeks ago and it has been pretty easy, honestly.
As far as prompt injection is concerned, I don’t think it’s a risk unless you’re using some kind of agent to go though emails, which is not a Gmail specific thing.
If we’re taking about Google scraping your data the risk is more one of them having an incorrect profile on you, but running a conversational agent is quite expensive, I don’t they would have that as a large scale part of their pipeline. Embedding and clarification models likely aren’t instruction tuned so prompt injection won’t do anything.
Sure, it’s important to be aware of future potential issues, but there’s a huge difference between I get the wrong answer when I ask a chatbot about my email vs remote code execution.
Also, one is a general security vulnerability with email as a whole, like phishing you can get scammed regardless of your email client, vs improperly implemented features in a specific library. I don’t think this is a reason to leave Gmail.
it’s important to be aware of future potential issues,
New code tends to have flaws.
I agree that there’s no strong reason to expect that the current new implement has a serious flaw. But if I was still using Gmail, I would turn the new feature off.
Anything that can be exploited in a software stack is a higher risk when exposed to the risk cesspool of modern email.
So in summary: chance that this new feature is an injection risk: low.
Risk of harm if there’s any security flaws in it: high.
Mainly that I’ve seen nothing in the terms of sevice that says they won’t sell what they know about me to employers to help employers low-ball me during a salary negotiation.
Yes. Email has always been one of the more vulnerable parts of the computer ecosystem, because any stranger can use it to send a (malicious) file into your computer or server for processing.
Simpler email is safer. Every new feature has bugs. Some bugs are vulnerabilities.
Gmail adding learning models is creating a new risk for you.
How large that risk is, has yet to be discovered.
My armchair opinion is that the new risk is minimal, compared to the rest of the risks of using email. But time will tell.
I’m the meantime, if I still used Gmail, I would turn the LLM features off and let someone else discover how risky it is. (Edit: It’s really probably fine, but I’m very risk averse with my email.)
I also agree that it’s time for rational people to leave Gmail, if they can. But my reasons are privacy reasons, rather than security reasons.
I work in ICT. Leaving Gmail is much easier said than done. It has the best spam filtering bar none and integrates with a whole host of other services that I use daily, like the mobile phone I’m writing this on for example, the one that integrates my calendar, tasks, contacts, photos, websites, YouTube channel, spreadsheets and, oh yeah … that other thing … Gmail.
So, if wishing made it so.
What I’d like is a Google Workspace tier that is entirely without AI.
I’ve left gmail and had no real challenges with spam filtering or anything else so far. I lost integration between calendar photos drive etc, which has removed some convenience, but that was also kind of the point.
Yeah I keep hearing this argument, yet in real world deployments with just SPF checking, greylisting, and spamassassin my experience has been that it really isn’t much of an issue.
Proton is pretty good and covers IMO the most critical parts of the Google ecosystem. I made the move a couple of weeks ago and it has been pretty easy, honestly.
Yea. It is a difficult long process to DeGoogle.
We even have a support group for it, here:
https://lemmy.ml/c/degoogle
We’re all at different stages, but swap tips and tricks.
As far as prompt injection is concerned, I don’t think it’s a risk unless you’re using some kind of agent to go though emails, which is not a Gmail specific thing.
If we’re taking about Google scraping your data the risk is more one of them having an incorrect profile on you, but running a conversational agent is quite expensive, I don’t they would have that as a large scale part of their pipeline. Embedding and clarification models likely aren’t instruction tuned so prompt injection won’t do anything.
Agreed. Architecturally, there’s no reason to have a prompt injection risk, of any kind, here.
But, that was true about Log4J, as well - until we learned otherwise.
I tend toward extra caution in this modern era of libraries stacked on libraries.
Sure, it’s important to be aware of future potential issues, but there’s a huge difference between I get the wrong answer when I ask a chatbot about my email vs remote code execution.
Also, one is a general security vulnerability with email as a whole, like phishing you can get scammed regardless of your email client, vs improperly implemented features in a specific library. I don’t think this is a reason to leave Gmail.
New code tends to have flaws.
I agree that there’s no strong reason to expect that the current new implement has a serious flaw. But if I was still using Gmail, I would turn the new feature off.
Anything that can be exploited in a software stack is a higher risk when exposed to the risk cesspool of modern email.
So in summary: chance that this new feature is an injection risk: low.
Risk of harm if there’s any security flaws in it: high.
I agree. I left Gmail long ago for other reasons.
Mainly that I’ve seen nothing in the terms of sevice that says they won’t sell what they know about me to employers to help employers low-ball me during a salary negotiation.