[I'm a Googler who works in Apps, but not directly on anything relevant here.]
This article mentions Access Transparency
> By default, G Suite Enterprise enables a feature called Access Transparency, which allows administrators to see who has looked at each document within the organization.
But it gets it a bit wrong. Access Transparency is a log of any Google employees who have looked at stuff in your domain. From the official site: "Access Transparency logs provide information about actions of Google staff when they access your data." [1] Which is a nice way of knowing that Google employees aren't randomly snooping on your files.
The point of eavesdropping would largely be defeated if the target could find out, so I get why it's part of the law.
The thing about NSLs is that the target often never finds out, because no public investigation or court is involved.
I remember reading about a particular drug investigation that had over 50 wiretaps, including the smartphones of the targets' mother and sister, because they sometimes used their phones for business, which is pretty common in poorer households. I've always been curious whether they found out afterwards.
In The Wire there was a scene where they would pause the audio if only someone other than the target had been talking for x amount of time. But I highly doubt every single text, picture, and message sent to the person isn't being seen by at least one person.
Only if you 100% completely self-host (machine running in the basement?). Otherwise they could compromise your hosting provider at various levels of the stack.
Come to think of it though, even if you put your own machine in your own basement, they could just come in when you're not home and rootkit your box in numerous ways, and you'd probably never know barring some pretty heavy security.
State level actors are very hard to defend against, especially when it's your own state.
It's a matter of scale. How many experienced covert-ops / TAO teams do you think the FBI has? Are they going to do a covert entry into your basement, deploy a custom software or hardware implant, watch you, retrieve their implant, etc.? They would have to have probable cause, and in any case they don't have the budget or the people to do many.
And you can actually make it incredibly hard quite easily: do everything on an iPad with a strong passphrase and no network connection (except occasionally to get software updates from Apple), keep it in a decent tamper-evident safe (not a money safe) with a painfully loud alarm and PIRs, in a location where people are around.
There is a lot of law associated with residences, often constitutional, that does not protect you when the access is to a provider.
The mere existence of a local alarm can greatly increase the risk of getting caught when going into a residence. State level actors really hate getting caught. They tend to be the sort of people who do not deal very well with uncertainty.
The generic answer to these concerns is usually the following: if a (well-known, very scrutinized) company such as Google writes that kind of promise in public documentation that is part of a binding contract with paying customers, there is a good chance they won't purposefully break that agreement, and risk being caught by an audit, just for the sake of accessing someone's personal data.
That's great, but it's only true until it isn't. The moments when that idea is false (however rare) are the life-altering, permanent moments that result in irrevocable ruin for whosoever dared trust the promises and honor of [faceless corporation].
The truth is twofold.
One: if the barrier can be melted according to magic rules, then it is no real barrier. It is a sweet candy coating that melts in your mouth, not in your hands.
Two: if a corporation is made of many incidental strangers who happen to share an employer for overlapping moments in time, and the system has at least one authorization bypass, then so does the audit trail.
If you don't think corporations implode, suffer from disgruntled criminal employees, sell out to rivals, go completely bankrupt, or land themselves in jail, then bet all of your secrets on the idea that what they tell you is 100% truth.
Yep, a solution exists, though. Here's how you get there (a minimal sketch of the multi-party piece follows the list):
* Strong identity: employees must be strongly identified before acting.
* Multi-party authz: nobody ever acts alone. One person can't be trusted, two people might be, M of N effectively represents the company.
* Noisy security: making a change to security parameters notifies all relevant parties in a way that intentionally avoids notification fatigue. You can't sneak a change through.
* Full auditability: even after the fact you can readily unravel what was done, seeing what the old state was, what was changed, who made the change, and who approved it.
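To make the multi-party point concrete, here's a minimal sketch of M-of-N approval in Python. The names, the AccessRequest record, and the thresholds are all hypothetical; this illustrates the idea, not any particular company's implementation:

    from dataclasses import dataclass, field

    @dataclass
    class AccessRequest:
        """Hypothetical privileged-access request needing M-of-N approval."""
        requester: str
        resource: str
        reason: str
        approvals: set = field(default_factory=set)

        M = 2                              # distinct approvals required
        N = {"alice", "bob", "carol"}      # eligible approvers

        def approve(self, approver: str) -> None:
            if approver == self.requester:
                raise PermissionError("requesters cannot approve their own requests")
            if approver not in self.N:
                raise PermissionError(f"{approver} is not an eligible approver")
            self.approvals.add(approver)

        def granted(self) -> bool:
            # Nobody acts alone: access opens only after M distinct
            # approvers (none of them the requester) sign off.
            return len(self.approvals) >= self.M

    req = AccessRequest("dave", "customer-docs/42", "support ticket 9913")
    req.approve("alice")
    req.approve("bob")
    assert req.granted()

In a real system every one of those calls would also land in an append-only audit log, which is where the noisy-security and full-auditability points come in.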
Get those points, and a few other minor details, right, and this larger problem actually becomes tractable.
You know, we, working at Google, are people, right? We have moral and ethical standards just like everyone else. Many (but not all) of us also aren't locked in to Google and can find employment elsewhere easily but choose not to.
The following isn't about Google as such: the thing with disgruntled criminal employees is that they don't usually come in bunches and don't collude, because they can't easily identify each other. Which means they generally can't commit such acts and then also corrupt a whole 'nother department to cover it up.
Trusting your privacy to the moral and ethical compass of every individual at a giant corporation is incredibly foolish. If this is a widespread belief at Google, it only further erodes my trust in the company.
It's not every employee, but rather any employee. As in: any employee with access to user data can check that their actions are logged correctly.
This doesn't protect against government action, or against Google leadership specifically targeting you. But it does prevent the (rather common) abuse of such access by regular employees.
> You know, we, working at Google, are people, right? We have moral and ethical standards just like everyone else.
Could’ve fooled me. Or maybe your standards are just particularly low. Do you mind explaining where surveillance capitalism fits into your principled worldview?
There's at least one major difference here, which is that corporate entities don't have sovereign immunity. The CIA and NSA are immune from consequences when they systematically abuse our rights.
SOC reports seem to be the gold standard in terms of what enterprises are asking for these days. Not that they address all the concerns discussed here, but they probably start to answer your question.
E&Y does (apparently), and Google is compliant with some ISO standard for software security. See "Does giving Google access to my data create a security risk? How does Google ensure that its employees do not pose a threat?"
Your assumption is that the company will knowingly access your data, but the more likely scenario is that a rogue employee working for your competitor (or simply looking to start their own startup) will access and steal your data/code/client list.
This is a difficult argument to counter; for instance, are you sure that Signal can't decrypt your messages? If so, do you remain sure knowing that they can update the app?
As a security person I really can't think of any service (or piece of hardware) which I think satisfies the threat model where the provider is both clever and truly hostile.
Yes, you're pretty sure about Signal, because it's end-to-end encrypted; you can verify what the binary you're running is actually doing (you have to have the expertise to do so, but then, even if Signal was written entirely in browser Javascript, you'd still need the cryptography expertise to verify it). By design, Signal doesn't depend on its serverside deployment for cryptographic security. That's not true of G Suite.
You can verify what the binary did while you were watching. You can't verify what it did before, or what it will do next. OP said hostile and clever, and part of clever is only being hostile when nobody is watching. Apps that snoop only intermittently, delay transmissions, and hide them in existing, expected communication channels are much harder to catch.
No, I'm saying, you can crack open the binary and see what it's capable of doing. If Signal wanted, it could obfuscate itself in various ways to make that hard, but (1) you'd notice that pretty quickly (that the code was hinky) and (2) Signal does not in fact want to do that.
You personally might not be able to do that (but then, you personally might not be able to spot a defective authenticated key exchange either), but people can. Once someone spots the "Signal Backdoor", that's it for Signal. There's a lot of incentive to do that legwork.
In contrast, G Suite could be comprehensively backdoored, and you'd have no way of knowing, no matter what your level of systems programming competence. I'm not saying they are backdoored; I rather doubt that they are, and I myself trust G Suite more than most other applications I use. But the point is, the trust you have to have in G Suite is different and more demanding than the trust you have to have in Signal.
This assumes that everyone gets the same binary, and the binary doesn't get updated. There is no reason that the binary delivered to your phone by the Google Play Store needs to be the same as the binary delivered to a reporter's phone.
Even if we can trust the binary (and I agree, with Signal as the example we probably can), the application distribution mechanism and the underlying OS and its update mechanisms are still a problem.
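One partial check for the "same binary" worry, where the build is reproducible: hash what you actually received and compare it out of band with what someone else received. A minimal sketch, with hypothetical file paths and digests:

    import hashlib

    def sha256_of(path: str) -> str:
        """Hash a downloaded artifact in chunks so large files are fine."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    mine = sha256_of("signal.apk")             # the binary you were served
    theirs = "<digest obtained out of band>"   # e.g. from another recipient
    print("match" if mine == theirs else "MISMATCH: you got a different binary")

This only catches divergence between recipients, of course; if everyone gets the same backdoored build, the hashes happily agree.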
> There is no reason that the binary delivered to your phone by the Google Play Store needs to be the same as the binary delivered to a reporter's phone.
That's moving the goalposts to individual targeting, though. The individual targeting scenario is not that interesting because, as the oft-quoted line from the technical literature goes, "YOU’RE STILL GONNA BE MOSSAD’ED UPON".
> As a security person I really can't think of any service (or piece of hardware) which I think satisfies the threat model where the provider is both clever and truly hostile.
...I can. Disconnect from the internet.
It's a pain, and it won't be useful advice in many cases, but if you're a newsroom doing sensitive investigations on powerful individuals? I could make a case for it. Although, you'd want to ditch G Suite.
(You can certainly think up clever attacks that work without internet, but disconnecting really does remove most vectors.)
The threat model does not need to be that the provider as a whole is truly hostile. It could be "a rogue employee went snooping" or "the server got hacked" or "there was an access control bug."
Instead of "trust us to keep your data", what if Google said "we don't have your data." That would give me more confidence, since it both makes the hostile actor's job much harder and it's also easier to verify.
"Google is willing"? You make it sound like there's a person making a decision. The system is automated, there's no mechanism for employees to be unwilling.
A lot of effort goes into ensuring that audit trails are non-optional.
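One common way to make a trail non-optional is to put the logging inside the only code path that can reach the data. A hedged sketch (the names and plumbing are made up):

    import functools
    import logging

    audit = logging.getLogger("audit")  # imagine: append-only, shipped off-box

    def audited(fn):
        """Data-access functions are only ever exposed through this wrapper,
        so the log entry is written before the data is touched."""
        @functools.wraps(fn)
        def wrapper(employee, resource, *args, **kwargs):
            audit.info("access employee=%s resource=%s fn=%s",
                       employee, resource, fn.__name__)
            return fn(employee, resource, *args, **kwargs)
        return wrapper

    @audited
    def read_document(employee, resource):
        ...  # the only entry point to the underlying storage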
Read the docs for Access Transparency and follow the instructions. I don't admin a G Suite domain, so I don't actually know the sequence of buttons to click to view AT logs.
> Which is a nice way of knowing that Google employees aren't randomly snooping on your files.
Why would I, as an end user, be given to trust this if I think Google employees are snooping in my files? I have no way to audit how this log is kept, so I'd have to assume that any Googler snooping these files is either doing so through a backchannel that is not audited, or that the log is a no-op.
If your worry is Google, as an organization, is actively trying to steal your stuff, that's one thing. If your worry is a rogue google employee is doing some unsanctioned thing, that's another. This (imo) mostly helps with the second, unless you also assume that Google as an organization is fairly inept and so can't log things reliably.
If the government is interested in something from my mail servers, I'll see the legal request or judge's orders and will know what is going on, and will be able to take appropriate action.
If the government makes appropriate legal threats against Google, I won't necessarily know (National Security Letters) until long after the fact.
If your threat model includes the US government, then I would expect you would self-host anything sensitive. Even then, there's still the possibility that they could exploit some 0day they've been stockpiling, and root your servers without leaving a trail. Certainly harder than sticking Google with a gagged NSL, but possible.
But I don't think most people's threat model includes the US government. Probably not even most news organizations.
> If the government is interested in something from my mail servers, I'll see the legal request or judge's orders and will know what is going on, and will be able to take appropriate action.
Or the mail servers of the person/people you're communicating with. At which point you wouldn't know, because they'd be subject to the same laws, and less well equipped to fight them.
I'd like to add another possible threat model that gets ignored pretty comprehensively and, in some cases, intentionally: your data being fed to automated systems that produce summaries or derivative information based upon it.

People ignoring this is behind much of the NSA's snooping. They believe that until a HUMAN operator views the cleartext of some communication, the communication cannot legally be said to have been 'intercepted' at all. And if you look up any statement ever made about reforms done at the NSA after Snowden's revelations, you will find that all of them, every single one, spoke exclusively about human analysts reading communications directly. They very intentionally avoided addressing analysis, profiling, ML training, summarization, and other automated processing.

The government has dropped a good many cases, serious cases involving child pornography even, to avoid ever testing this idea of theirs in court. We learned about this particular legal opinion of theirs (which would almost certainly never survive a court challenge) before Snowden, back when the AT&T whistleblower came forward.
The likelihood a company like Google is reading your emails directly and trying to scoop your business on a product idea or something like that is slim. The likelihood they are profiling your communications in aggregate and producing derivative information like "how many companies in the space are considering hiring" or "do the employees at this company talk about Chipotle" and using that for advertising or data products is, I would guess, pretty high.
This is just a specific form of my "Google as an organization is out to get you (and willing to lie in their privacy policy)" threat. It may sound more reasonable to you, but it's still the same set of actions.
We just had two incidents where Facebook and Twitter each broke their privacy policy by using phone numbers for ad targeting when they were only supposed to use them for 2FA and account recovery.
I wouldn't trust a company with personal data while its main business model depends on violating your privacy, just like you wouldn't trust an alcoholic to guard a warehouse full of vodka.
The only way to be somewhat sure is to deal with companies that have zero use for your personal data. This won't mitigate the risk of a malicious employee poking around, but it will at least mitigate the risk of large-scale data misuse like ad targeting, because there are simply no ads to target and no infrastructure to do so.
In this context, we're discussing G Suite, which doesn't use any data for ad targeting.
Edit: from the privacy policy:
> No. There are no ads in G Suite Services or Google Cloud Platform, and we have no plans to change this in the future. We do not scan for advertising purposes in Gmail or other G Suite services. Google does not collect or use data in G Suite services for advertising purposes.
We're talking about a company that makes the bulk of its money with ads, running a (supposedly ad-free) product on the same infrastructure that the ad-contaminated products run on.
There's both a risk of accidentally misusing data, given that the two sets of services share infrastructure and code, and a business incentive to commit such "accidents", especially since both Facebook and Twitter set a precedent that there's absolutely no downside to doing so.
At the end of the day, you want to trust that your provider isn't out to get you; otherwise, why are you even a customer? (Oracle gets a free pass, because reasons.) However, you also want to know that they're serious about their claims, and transparency in their tools and processes is a big part of that.
Sure, I was one of them. But if your distrust of Google is high enough to believe that Access Transparency is basically a fake feature (i.e., that there is a way for Google employees to access your files without showing up in the logs, except for legally required reasons), then I don't see how or why you would be giving them your files in the first place. I don't think that level of distrust is unreasonable -- Google has proven to be a bad actor in many scenarios -- but I just don't see how you could be a G Suite customer at that level of distrust.
[1]: https://support.google.com/a/answer/9230979?hl=en