Hey, nobody wants even a single child being harassed. There is lots of harassment in real life too, and it can happen on any street corner. But most people would agree that recording everything on every street corner 24/7 is not acceptable in a free society. As more and more of our lives have an online component, online services likewise cannot prevent 100% of crime and lesser unpleasantness while still remaining powerful enough to fulfill user needs and enable the running of a free society.
There are both unique challenges and unique opportunities online. Child accounts can be more locked down. But children can lie and use parent's id/phone/etc to create adult accounts. Plus government regulation can either discourage a service from offering child accounts in the first place or make these too restrictive to be useful, forcing everyone to lie.
Ultimately parents and perhaps schools have to educate children on realistic dangers they can face both online and on street corners and supervise them until they are ready to responsibly use these spaces alone.
> But most people would agree that recording everything on every street corner 24/7 is not acceptable in a free society.
Do we?
No, this is a serious question. London seems to have 20,873 CCTVs on its streets[1]. If people don't protest against 20k CCTVs, do you really believe they'll protest against 30k, 50k or 70k? 70k is when, on average, every street has one CCTV. That's still not one per corner, but in practice it's more than enough to track everyone's path.
Slippery slope? Yes. Are we going to slide down? Probably.
I've been to many cities around the world that have already left London far behind. Multiple cameras at every house looking in different directions, cameras at every aisle corner in supermarkets, etc. There are more cameras than citizens in those cities.
England has always been the first nation to suffer the designs of the elite. The bankster system, first in Britain. The secret services that have “license to kill”, ditto. A show for parliament and representation, the same. Emergence of a new elite class that wields power behind the throne, England. Political churches passed as churches of God, England. Surveillance and police state, England. 1984, also from England. Somebody in that Island knows where it is all heading.
p.s. and how could I forget. “Journalism” fleet street style, also England.
Good one. Yes, they gifted us with priesthoods. For that matter, banksters learned from Mesopotamians, so. But we’re discussing evil incorporated post enlightenment (where? lol) here.
Even in a world where perfect, indistinguishable-from-reality deepfakes could be created instantly at the push of a button, CCTV cameras would be no less useful in criminal proceedings than they are now.
"Yes your honor, I can testify that I personally pulled this footage from the camera at the corner of 15th street and King Dr."
Or, even better:
"Yes, your honor, I as an expert witness can confirm that this footage was signed by the secure hardware key embedded in the camera at the corner of 15th street and King Dr."
You would have to make realistic, multi-angle deepfake faces before that happens. The face geometry and looks have to perfectly match the other deepfakes and any other camera, plus you'd have to match different types of lighting, camera gradients, lens types, etc…
I dislike the "it can happen on any street corner" comparison. It's apples and oranges.
The cost of harassing someone online, especially anonymously (even on HN) is virtually zero.
In real life, in person, the cost of harassing someone is many times higher. It's way harder to do so anonymously in person, it's way harder to do it without other people noticing in person, and it has to happen synchronously in person compared to asynchronously online. I could go on... it's just not the same thing.
> Ultimately parents and perhaps schools have to educate children on realistic dangers they can face both online and on street corners and supervise them until they are ready to responsibly use these spaces alone.
For the reasons I mentioned above, neither parents nor schools (on average) are equipped to mitigate online harassment because it's a relatively new attack vector on children. The average parent of a teenager (or the average school administrator, age 40-60) didn't grow up experiencing online harassment in their teenage years.
The danger I see in your comment is assuming that online and offline harassment share the same dynamic. The simple fact is online vs. offline harassment is wildly different in nearly every way. If you were harassed in real life, that doesn't automatically mean you understand the dynamics of online harassment.
Edit: Also feel compelled to say I'm not biased in any way. I didn't suffer from much harassment online or offline. I also don't have kids. It's like saying a Zoom date and a coffee date are basically the same. Or saying "dinner with grandma" is the same in person or via FaceTime. It's apples and oranges, let's not pretend they're comparable.
> In real life, in person, the cost of harassing someone is many times higher. It's way harder to do so anonymously ...
Take a ride on the metro in New York, or just in the station. If you can't see 5 women being sexually harassed in 10 minutes, you're blind.
> because it's a relatively new attack vector on children
My grandfather showed me a picture from 1930. It showed "couples", meaning teenagers making out in the grass on the side of the road, there was a sort of slope going to a river. It also showed a girl being pushed down ...
This is nothing new, and we all know that harassment of all kinds happens most often at schools. It has been like that for at least a century and hasn't improved.
How is some random dude exposing themselves to children not similar enough?
Of course it's not the _same_ thing, but they are comparable. Yes, there might be differences that prevent using one in place of the other for certain arguments, but here the reference class is policing vs privacy, no? The concrete issue is almost irrelevant anyway. (I mean I recommend picking privacy in all cases except in some rare freak ones where humanity goes extinct unless we get the mind virus transmitted through Facebook. Oh wait.)
I think what bugs me most is that I've yet to see convincing evidence that any of these surveillance systems even help prevent such things as child abuse and terrorism. There's even pretty good evidence that these systems enable those things. We like to think child abuse comes from strangers, but usually it comes from someone the child knows, and often a relative. How many parents monitor their child's every move? For short-term rewards you have ensured that you fail as a parent: teaching your child to be dependent and ensuring they are unable to navigate the dangers and complexities of the world without you. The road to hell is indeed paved with good intentions, and I completely understand the desire, but sometimes the kid has to burn their hand on the stove (hopefully only a little).
In addition to this, it seems that when we do use these tools to go after people, we end up just going after the low-hanging fruit. It's the same reason the drug war has been a failure. Instead of going after manufacturers and distributors we go after users. It's easier, and we create incentive structures and metrics that make these the best path to optimization. I am absolutely okay with introducing a little friction. I'm far less concerned with someone looking at child abuse material than I am about the people creating and distributing it. Both are bad, but clearly one is much worse and should be prioritized. If the worse group isn't prioritized then it isn't security, it is theater.
I live in the UK, a "free society", we have CCTV everywhere. It didn't stop me being sexually abused as a child, it's not there to catch pedophiles because pedophiles generally aren't like muggers. Encountered pedophiles at all three schools I went to—you put far too much trust in authority figures like teachers. We got educated by the same people who sexually abused us.
Where there's people, there's problems. From fraud to racism to pedophilia, and everything in-between.
Instead of worrying about children lying to access services for adults, worry about the problem in a non-technological way. Maybe we need greater deterrents in terms of legal repercussions? Maybe we can even help these people figure out what makes them this way?
We definitely had background checks at each school, but background checks are largely just theatre as it can only tell you what is currently known about someone, nothing more. Every modern-day undiscovered pedophile teaching kids has passed a background check at some point to end up teaching in the first place.
I was openly groomed in a room full of people for months on end before I was taken advantage of. We knew it wasn't right, we just weren't sure about the why or how, or what to do when we could articulate it to some extent.
Early teens is a good age to be because they're less vulnerable and more aware of themselves and others. Single-digits is a scary place to be. Wishing you and yours excellent health and peace.
> Ultimately parents and perhaps schools have to educate children on realistic dangers they can face both online and on street corners and supervise them until they are ready to responsibly use these spaces alone.
Agreed but I believe this is maybe 50% of the effective solution.
Ideologically I’d like to say 99% but the defense (educating children) is always reacting to the offense (predator tactics).
To affect the offense, the rules must change and that power rests with Meta (and to a lesser extent, one’s ability to ignore Meta products).
> Child accounts can be more locked down. But children can lie and use parent's id/phone/etc to create adult accounts.
Teenagers can also use a fake ID to buy alcohol, but we don't use that as an excuse for not having laws preventing children from buying alcohol.
I'm reminded of patio11's The optimal amount of fraud is non-zero [0]. It is worthwhile to have laws that are designed to substantially reduce the amount of harm done. It is also worthwhile to be aware that there is a threshold beyond which those laws would do more harm than good.
However, just because we can imagine laws beyond that threshold does not mean that our current laws are optimal. There's a delicate balance to be struck, but from the evidence that I've seen so far I'd say our laws regarding social media and children are way too far on the permissive side.
Well, how is the War on Drugs going so far? Is it very hard to get drugs as a teen, and what about the unintended side effects? Also, making a purchase with precisely quantifiable ingredients is not the same thing as having a conversation whose harm or benefit can only be evaluated given full context. Why do you think current laws are too permissive rather than too strict? If personalized advertising to children were not banned, companies would be more incentivized to offer child accounts that are full-featured enough for children to actually want to use.
> we don't use that as an excuse for not having laws preventing children from buying alcohol.
Listen, social media is just disrupting society like Uber disrupted taxis and DoorDash disrupted food delivery.
The downsides are always fine and ignorable because of the financials, and regulation is just an archaic attempt to hamper disruption
We should never disrupt the forward momentum of the disruption economy, whatever the harm. Business owners will make enough money that we can soothe all harms. Later. With initiatives.
> The downsides are always fine and ignorable because of the financials
Do you sincerely believe that the downsides (which are, in the context of the forum in which you said what you just said, child abuse of all flavors) are always fine because of money? How is that even remotely defensible?
Maybe it would be better to teach her how to handle situations when you can't be watching what she's doing, rather than chasing some ideal surveillance system that will become ineffective as soon as she learns how to circumvent it...
Maybe it's about time `bool child` could be cast to an `int`. An 80-year-old grandma could use some child-intended protection against internet malice, as could someone just over 13, 18, 20 or 21; there's no reason they have to be cut off from platform mechanisms just because they boolean-disqualify.
> But most people would agree that recording everything on every street corner 24/7 is not acceptable in a free society.
I admire your optimism. A huge number of cameras just popped up out of nowhere in my city last year. People are actually excited about it. When I tried to say something, they treated me like one of those crazies that escaped the mental hospital.
What's the base rate to compare this against? For other social networks, for other websites, for society as a whole. My general attitude about meta is that it's cool to point out any flaw that might get people mad at them, but I just want to know if there is any reason to think this is a special problem they have. And, even though it's a double standard, I do get my hackles up when they start testing the waters to see how effective it is to imply that end-to-end encryption causes child abuse.
I think the issue is that they could have taken steps to lower their rate, but didn't. There are steps even privacy-conscious companies can take.
Like it doesn't even have to be E2E messaging, they could check the birthday of people, train an LLM to differentiate appropriate vs inappropriate comments on children's Instagram posts, and use that to help surface problematic comments. They could bring up a popup asking any child to report the conversation which would be triggered by client-side javascript based on a hashed word list of words commonly used by groomers. They could warn the kids whenever they upload images to share directly in DMs rather than publicly on their insta or fb.
In a perfect world, the rate of grooming would be zero, but at Meta's scale, just creating even a little more friction that makes it harder to groom kids should be seen as worthwhile.
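To make the hashed-word-list idea concrete, here is a minimal sketch of what that client-side check could look like. This is purely my own illustration, not anything Meta ships; the placeholder terms, the hashing scheme, and the `showReportPopup` hook are all assumptions (written as Node-style TypeScript rather than actual in-page JavaScript):

```typescript
import { createHash } from "node:crypto";

// Hash a word so the plaintext term list never has to ship with the client.
const sha256 = (word: string): string =>
  createHash("sha256").update(word.toLowerCase()).digest("hex");

// Placeholder terms; a real deployment would ship only the precomputed hashes.
const flaggedHashes = new Set(["example-grooming-term"].map(sha256));

// True if any word in an incoming DM matches the hashed list.
function shouldPromptReport(message: string): boolean {
  return message.split(/\s+/).some((word) => flaggedHashes.has(sha256(word)));
}

// On a match, the client could surface the "report this conversation?" popup:
// if (shouldPromptReport(incomingDm)) showReportPopup(); // hypothetical UI hook
```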
> I think the issue is that they could have taken steps to lower their rate, but didn't.
This is where it gets tough really fast. For argument's sake, let's say semantic analysis can properly identify patterns at a 95% level (oooooffff, I can already see problems).
Public/wall comments on any platform are relatively easy to police. But, I argue the vast majority of problematic grooming comes from DMs. Policing DMs seems worse than trying to do hash analysis of private photos.
I don't think there's a win scenario here for anyone involved.
[edit]
> they could check the birthday of people,
yeah because this is answered honestly by predators.
> Policing DMs seems worse than trying to do hash analysis of private photos.
When it comes to DMs between adults and unrelated children, does it really seem worse? Because really what these platforms are enabling is a situation that's never really been common or tolerated before in society, unlimited unsupervised contact and rich (media-wise) communication between unrelated adults and children.
These platforms categorically do know when a user is an abuser, potential abuser or has in general been making inappropriate contacts with children and they could easily do something about it.
> When it comes to DMs between adults and unrelated children, does it really seem worse?
I should clarify: when I say "worse" I mean "significantly more invasive". Checking whether a photo's hash matches another photo's hash is, at the very dumbest level, straightforward: hash == hash, or hash is inside a larger hash set.
But for DMs, matching words isn't good enough because there are so many different ways to say the same things. The system needs to understand when "I really like that top you're wearing" is a cute message between friends vs. a predator, because the "who", as well as the lines right after, really changes the conversation. If you don't know the "who" then now we're asking an LLM does this conversation sound predatory!?
The machine needs a lot of context to get this right versus photo hashing analysis.
That's why, to me, analysis of chats is way "worse" than analysis of photos.
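To illustrate, here's roughly what that "dumbest level" photo check amounts to. A toy sketch only: real systems use perceptual hashes like PhotoDNA rather than exact digests, and the hash list here is hypothetical:

```typescript
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

// Digests of known abuse images, as supplied by a clearinghouse (hypothetical, empty here).
const knownBadHashes = new Set<string>();

// The entire "analysis" is one equality check against the set: hash == hash.
function matchesKnownImage(path: string): boolean {
  const digest = createHash("sha256").update(readFileSync(path)).digest("hex");
  return knownBadHashes.has(digest);
}
```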
> If you don't know the "who" then now we're asking an LLM does this conversation sound predatory!?
But you always will know the 'who': you (the platform) know that it's a conversation between an adult and a child unrelated to them, and yes, at this stage, simply asking an LLM 'hey, is this conversation sus' will give you results that are more than enough to trigger a human-level analysis of the interaction.
Also you have to consider that adults and unrelated children generally do not interact at all. So, if you wanted to legitimately try to screen out abusers, flagging 'adults that initiated more than 5 conversations with children unrelated to them' and feeding all those interactions into an LLM classifier would get you 99% of the way there.
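A rough sketch of that screening heuristic, with all type names and the threshold being my own illustration rather than any platform's real pipeline:

```typescript
interface Conversation {
  initiatorId: string;
  initiatorIsAdult: boolean;
  recipientIsMinor: boolean;
  related: boolean; // e.g. a family link known to the platform
  transcript: string;
}

// Flag adults who initiated more than `threshold` conversations with
// unrelated minors; their transcripts would then go to an LLM classifier
// and, on a hit, be escalated to human review.
function flagSuspiciousAdults(convos: Conversation[], threshold = 5): Set<string> {
  const counts = new Map<string, number>();
  for (const c of convos) {
    if (c.initiatorIsAdult && c.recipientIsMinor && !c.related) {
      counts.set(c.initiatorId, (counts.get(c.initiatorId) ?? 0) + 1);
    }
  }
  return new Set([...counts].filter(([, n]) => n > threshold).map(([id]) => id));
}
```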
I’m not able to read the cited reference at that link, so I’ll try to take the statement at face value.
I could name or describe a number of systems right now that have outcomes we could all agree are contrary to, or misaligned with, their purpose, so unless the author is equivocating on terms like "purpose" or "system", I don't think your premise is well founded, nor do I think your conclusion has merit.
The conclusion is that systems should not be judged deontologically, and instead should be judged consequentially.
The point is that if you could describe systems that are contrary to their purpose and have negative outcomes, one is justified in referring to them as systems whose purpose is that negative outcome. Doing otherwise is arguing from conclusions.
It's naive not to. Believing that Facebook's purpose is what it intends to do is like believing that North Korea is a democratic people's republic.
I can rephrase it in terms of intentionality if you wish:
Meta has a choice every day to exist or not. When the choice to exist results in the sexual harassment of children, then transitively Meta is choosing that consequence.
To address your knife analogy, a knife alone is not a system. A knife plus a human plus another human being stabbed by the knife is. In that scenario, the person creating the system, the knife-wielder, should (like Meta) choose to dissolve the system. I think, at least for the knife+human+human-stabbing system, that's quite easy to agree with.
> They could check the birthday of people, train an LLM to differentiate appropriate vs inappropriate comments on children's Instagram posts, and use that to help surface problematic comments. They could bring up a popup asking any child to report the conversation which would be triggered by client-side javascript based on a hashed word list of words commonly used by groomers. They could warn the kids whenever they upload images to share directly in DMs rather than publicly on their insta or fb.
All this reduces engagement.
That is the bottom line. Facebook can do more, but they won't unless something makes them, because it potentially costs billions.
In fact, they seem to lobby for this kind of thing... But only if someone makes all their competitors do that thing as well.
> train an LLM to differentiate appropriate vs inappropriate comments
> popup asking ... which would be triggered by client-side javascript based on a hashed word
Yeah, this is the feature that makes me think I should build my own social media: owning "the algorithm" and using it to indoctrinate users en masse with my own views. Are you sure you want to insist it's in any way or form even a remotely ethical thing to do?
for me the issue is: why? followed by a few more "why?" and once the answer becomes "because." it becomes time to solve those intermediary questions. maybe that happens and nobody tells me, idk
The question is whether some kind of bulk generator is running up those numbers.
A very small number of people can generate 100K spams per day. Historically, when spammers have been found, they've been small operators making a lot of noise.
Exactly, it's important in these situations to put aside the concerns about children and instead we need to have a lengthy, detailed discussion on the exact nature of the use of the phrase "sexual harassment".
Once we have accomplished that, we can truly acknowledge that we've made the world a better place.
Offensiveness is a quality. And a subjective one, even.
One thing can be both of course.
But an offensive ad (e.g.) or a forum post with some kind of material of a sexual nature is not ipso facto harassment, unless someone took explicit action to harass you with it, I think.
You see the same problem with universities, the military, the peace corps, etc.
People have an agenda to push and they want to construct a simple narrative that supports that agenda. Simple as. Thanks for taking the time to point out that the narrative may be flawed, we need more people to do that.
Why are children even allowed on social media? From a societal standpoint, social network use is correlated with mental health problems(1). Then factor in sexual harassment and bullying.
The kids would hate it, but then, they hate laws against selling tobacco & alcohol to minors and letting them into X-rated movies too. Too bad. It's for your own good, Junior.
"They'd just find ways around it" ?? Then we'll block those, too. We don't need to be 100% effective.
Oddly enough a Daily Mail article on this topic had much more information. [1] What interesting times we live in. Anyhow, one interesting and critical datum they mentioned is that the majority of minors on Instagram already pretend to be adults. I'd assume that's probably to sidestep whatever existing protections currently exist.
If we just keep pushing everything to be unavailable to people until they turn 18, we're just going to have people who don't fully start being adults until their late 20s.
There are plenty of other ways to communicate with friends. My generation used text messaging, the generation before used phone calls. Both are perfectly acceptable methods of communicating that don't expose children to the many problems that social media creates.
Why are children allowed on the computer at all? Why are they allowed outside the house? Why are they allowed to leave their rooms? Why are they allowed to have friends? Why are they allowed to talk to anyone? Why are they allowed to do literally anything at all? Why are they allowed to exist?
It's bizarre to end this by denouncing encryption. "There's been this long lasting problem on Facebook, but a large part of it is something introduced last month."
> Child safety experts, policymakers and law enforcement have argued encryption obstructs efforts to rescue child sex-trafficking victims and the prosecution of predators. Privacy advocates praised the decision for shielding users from surveillance by governments and law enforcement.
Seriously. They're practically mocking "privacy advocates" as criminals trying to evade law enforcement, whereas the other side of the argument is supported by a veritable gathering of paragons. I did not expect this kind of naked emotional appeal from The Guardian, but maybe that's my fault. And note that this is not an op-ed.
Could it possibly be that encryption does legitimately aid predators and it’s also useful for legitimate security needs?
Mentioning the former doesn’t make it an appeal to emotion. The emotional reaction you (and others) have to it is because it’s a legitimately emotional subject. Doesn’t make it less real.
Encryption aids everybody's privacy, in the same way that roads aid predators to steal away with your children, and food aids predators by giving them nourishment. Ergo we need to ban encryption, ban roads and ban food. Anyone who doesn't want to ban food (and let me snoop on all their messages) is an evil predator - https://www.youtube.com/watch?v=LNYZo5yRVNk
Anything in the same sentence as "aids predators" throws context out the window and marks it as a vile and awful thing to anyone listening. Don't use that language, and don't let someone lead you by the nose into using that language.
Instead, talk about how end-to-end-encryption improves the safety of all users, including children, because it gives them control over who sees their messages, and shields them from criminals who would gladly snoop and steal their personal data. Banning or breaking encryption will not make children safer. What will make them safer is parents supervising their young children's internet usage, and teaching them how to protect themselves online when they're older - how they should avoid giving out personal information (e.g. to Facebook or Instagram...), how to lock down access so only their verified friends can talk to them, etc.
I'd find it interesting to ask the opponents of E2E encryption what their feelings are on cash. Cash is also an untraceable form of interaction. Yes, cash is a medium of exchange for drugs and sex crimes, but it is also a bulwark against overreaching local and federal governments.
How much personal choice and freedom should we give up?
Everything is a double-edged sword. Pointing that out isn't arguing in good faith. The question is the ratio of good to bad. Food and roads have an overwhelmingly positive good-to-bad ratio. Detractors are arguing that encryption on social media platforms has a worse ratio than food and roads. See, this is why I hate metaphors. It allows you to completely bypass the argument being made by comparing apples and oranges, muddying the waters of the discussion with this rhetorical sleight of hand. Address the actual argument.
> Everything is a double-edged sword. Pointing that out isn't arguing in good faith.
The GP did much more than just point that out. The GP gave arguments for how encryption helps keep children safer, rather than detracts from that.
> The question is the ratio of good to bad.
But that is not something that can be judged in a vacuum, or by a single authority. Everyone's circumstances are different, and any policy dictated from the top down is going to do the wrong thing for a non-trivial number of people. That is why freedom is a better option: give people the tools they need to make their own individual decisions. That applies to people protecting their children as much as anything else.
> Detractors are arguing that encryption on social media platforms has a worse ratio than food and roads.
No, they're arguing that encryption in general is just bad. They're not recognizing any benefits to encryption (such as the ones the GP described).
The people who are closest to the problem of children’s safety online come to a different assessment as to whether the larger risk is predators or hackers stealing… their identities? Their credit card numbers? Just generically “reading their messages?”
And no, “they” aren’t arguing encryption in general is just bad. For example, I specifically mentioned that that’s not the case. Battling the strawman is your own rhetorical choice.
> The people who are closest to the problem of children’s safety online come to a different assessment as to whether the larger risk is predators or hackers stealing… their identities?
The people who are closest to the problem are the children's parents. Are you saying they would rather not have encryption?
The people who are against encryption, from what I can see, are not parents. They are businesses who don't want to have their access to data curtailed, and politicians who favor those businesses. I have not seen any real arguments against encryption from them, just FUD about "protect the children!".
You think just “parents” are the people with the strongest grasp on the issue of CSAM?
I mean, maybe you’re just looking for weak versions of the argument. There are a ton of people who dedicate their careers to monitoring and combatting child abuse imagery. As far as I can tell, they’re unanimous in the view that E2E encryption is a huge, huge help for predators. I’m sure they’re not unanimous on how to level that fact with another distinct fact that E2E encryption is also useful for lots of other things.
Calling a real problem FUD doesn’t make it moot. That’s just a way for you to avoid addressing the argument.
> You think just “parents” are the people with the strongest grasp on the issue of CSAM?
That's not what I said. I said parents are closest to the issue of how to protect their children from predators.
> There are a ton of people who dedicate their careers to monitoring and combatting child abuse imagery. As far as I can tell, they’re unanimous in the view that E2E encryption is a huge, huge help for predators.
E2E is a huge help for predators to hide from Big Brother surveillance, yes, of course, that's obvious.
What is not obvious is whether Big Brother surveillance actually helps protect children from predators. Asking the people whose salaries depend on Big Brother surveillance whether that surveillance actually helps protect children does not strike me as a good way to evaluate that question.
Child predator gets caught. Authorities see metadata or have other reason to believe he’s been exchanging material with dozens of other predators. Unfortunately they cannot see who any of those people are due to strong encryption.
Is this a hard to imagine case? Seems completely obvious what the claim is here and that it has a basis in reality, and it’s obvious why someone dealing with this problem up close might be extremely highly motivated to solve it.
Child predator gets caught. Authorities tell child predator that the best way to minimize the number of years he spends in prison is to tell them what other predators he has been in contact with. Child predator gives them the information.
Sure, this is an imagined case, just as yours is. Is it any less plausible? I don't think so. And so we have two plausible imagined cases that give opposite answers to how useful banning encryption would actually be to law enforcement. In other words, imagined cases are of no help whatsoever in actually evaluating this issue.
What we, as members of the public (or for that matter concerned parents, for those who are), need in order to evaluate whether we should agree to banning encryption is open and transparent data on how well law enforcement does this job with vs. without encryption. Also data on how many child predators and other nefarious actors actually use encryption, given that it is easily available now. Child predators existed before encryption was available on consumer devices, and indeed before the Internet itself existed, so there should be plenty of historical data to use. Has anyone done such an analysis?
Firstly, do you think a child predator is going to comply with "encryption's illegal you know?"
Secondly, if your imagined case only succeeds because of poor predator opsec, your police are shit, and it's not a good argument for wrecking the security of everyone else in the world.
Allow me to remind you of how Ross Ulbricht, operator of the drugs website The Silk Road, was taken down. Law enforcers infiltrated the site, got chummy with admins and learned more about his operations. They seized servers. They did good investigative work to narrow down who the operator was, and when they found him, they surveilled him, and grabbed him in a way that stopped him from shutting his laptop, which would have immediately encrypted the contents.
If you hate child predators (and you should), then call for competent police to handle the case and catch them red-handed, don't give me the old "gosh darn it, if only everyone had to go around naked and write postcards, no letters, strip them completely of their privacy. I couldn't solve any cases otherwise!"
> it’s obvious why someone dealing with this problem up close might be extremely highly motivated to solve it.
It's obvious why law enforcement wants to ban encryption, sure--it means they have an excuse for not doing the actual hard work of gathering human intelligence about child predators and other nefarious actors, and shutting them down the old-fashioned way. Which says nothing at all about whether they actually can do the job better using Big Brother surveillance than they could the old-fashioned way.
Hypothetically, any privacy of any sort whatsoever aids wrongdoers. The issues are that (1) governments sometimes go la Terreur, (2) there seems to be some sort of ethnic cleansing program every century or so without necessarily any warning and (3) governments regularly go off-the-rails and bring the hammer down on random small time people doing nothing much wrong all the time.
It is baffling to me that some people can, in three breaths, condemn police violence, opine that the current/next US president is attempting to bring down democracy, and then conclude that we just need to give government agencies control over one more aspect of life to make things better. The compartmentalising people are capable of is something to behold.
We need a system of protections in place to slow down government overreach. There are things in the world much scarier than small-scale harm to children; they are likely to come out of the government, and privacy is a key plank of defense. Besides, we all read through the Jeff Epstein thing. Enforcement of this stuff is already a bit corrupt; I expect the people responsible for this system would be abusers themselves. The most systematic of abusers are almost certainly going to be involved in these spying programs. Some authoritarians are probably good people. A lot just get off on abusing their own power against weaker people. These are not the sort of people we should want going through our mailboxes.
> to give government agencies control over one more aspect of life to make things better.
Often but not always, it's not a tradeoff between government having control, and nobody having control.
It's a tradeoff between a government that is in many ways democratically accountable having this control, and some rich guy who's not accountable to anyone having this control.
> ...that is in many ways democratically accountable...
But in this case, the plan is to remove encryption so that officials can monitor everyone's messaging. And it has to be all mail, otherwise fairly obviously the horrible people will go use secure communication.
So them being democratically accountable appears to mean they can just poke their nose into whatever they like and there isn't anything much voters can do about it. Using taxpayer money to enforce everything I might add.
Some random rich guy couldn't do this. He can't force anyone to divulge their mail, and he can't force the people being targeted to fund it.
And this idea of democratic accountability is a bit sus. The Nazis were the plurality party when they took over (as far as I recall, anyway), and the Communists were popular enough to win a war when they took over China. Popular support for a terrible idea doesn't change the nature of the idea; we're much better off with actual technical protections that give everyone time to figure out they're making terrible mistakes.
You're ignoring the lengthy article before the part they quoted. They spent several paragraphs describing how there's a real problem that's hurting children.
They then used the emotion that built up to attack something tangentially connected to that problem. That's what makes it an appeal to emotion.
Children also have legitimate security needs. Just wait for a massive service hack where all the private information and racy texts/photos teens have been voluntarily sending each other are made public for all to see. If the server can't read messages, it can't leak them.
An infinite number of things are possible. This article is about Facebook and Instagram, where people make public posts, can be found by searches, and absolutely nothing is end-to-end encrypted. Law enforcement has complete and total access to every single message ever sent over those platforms, which even provide them with an API to query, on essentially an honor system trusting that the warrant document LEO submits is legitimate.
It would be equally irrelevant if the Guardian, instead of encryption, started talking in this article about how criminals are aided by guns and explosives.
The Guardian lost any credibility from me when they started boogeyman'ing aspartame[0] and put out a slanted scare article about how folks in the food industry are responsible for developing nutritional guidelines[1].
It's a propaganda piece using children as political weapons in an effort to weaken encryption worldwide. Would be bizarre if encryption denouncement was not present. If you see anyone using children as an argument, they're arguing in bad faith. It's worse than Godwin's law.
If only people were just as serious about denouncing social media platforms. Connecting everyone has done much more harm than encrypted comms. But for some reason, only encryption falls under scrutiny.
> an internal 2017 email describes executive opposition to scanning Facebook Messenger for “harmful content” because it would place the service “at a competitive disadvantage vs other apps who might offer more privacy”, the lawsuit states.
Amusing. Facebook trying to protect users' privacy. Newspapers killing them for supporting E2EE without CSAM scanning on the server. Boy if that isn't the collision of two hot-button issues. I wonder who will win.
I don't understand why they claim that scanning messages erodes privacy: it can be done on the recipient's side, totally client-side. It could be as easy as an image classifier: "hey, this photo may contain unwanted content (we may be wrong), do you want to see it? Yes, No, Report, Report & Block".
Unfortunately, Facebook does an awful job of blocking harmful content: I have reported posts with obvious phishing, hate speech or even murder threats and... they do nothing. But if you write "tinta negra HP" (HP black ink) they ban you immediately.
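For what it's worth, the recipient-side gate described above could look something like this. A sketch under stated assumptions: `classifyImage` is a placeholder for an on-device model, and the UI flow is hypothetical:

```typescript
// A verdict type for the on-device check.
type Verdict = "ok" | "maybe-unwanted";

// Stand-in for a local image classifier; a real deployment would run a small
// model on-device so the photo never has to leave it. Placeholder logic only.
async function classifyImage(bytes: Uint8Array): Promise<Verdict> {
  return bytes.length > 0 ? "maybe-unwanted" : "ok";
}

// Gate display of an incoming photo behind the classifier's verdict.
async function displayIncomingPhoto(bytes: Uint8Array): Promise<void> {
  if ((await classifyImage(bytes)) === "maybe-unwanted") {
    // Ask first: "This photo may contain unwanted content (we may be wrong).
    // Do you want to see it? Yes / No / Report / Report & Block"
    return; // wait for the user's choice (hypothetical UI flow)
  }
  // render(bytes); // show immediately when the classifier sees nothing amiss
}
```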
In China we feel safer, strangely. Social media is heavily policed; reported messages are auto-scored and disappear quickly, even if that reduces offensive-yet-legal discussion (but is that so bad?). Social media companies are actually terrified of being in the news because they know actual consequences will hit them.
I know it's contentious to even defend this, and this message would be disappeared if HN had the same scoring system as Red (https://en.m.wikipedia.org/wiki/Xiaohongshu) in China, but you're right: it can be done, social media can actually be optimistic and positive. But do you want this?
They're understaffed because they cut headcount assuming that more data will help them to catch more 'bad guys' with fewer resources.
Part of the problem is that they don't employ enough people smart enough to use the data in mathematically and statistically accurate ways.
They only look at 'bad', and don't seem to have any realisation that there could be an offset for 'good', or at least 'not bad'. I'm strongly biased in this opinion having personally been on the end of police misinterpretation of data.
I was specifically told the following (by a Lead Investigator, not just a plod):
- Use of Mega is suspicious
- Having virtual machines is suspicious
- Having tor on your computer (their wording) is suspicious
Pattern matching your lead investigator's thought process yields the following generalization:
> Anything that evades oversight and investigation or makes police's job harder is suspicious
They don't actually consider privacy as legitimate or justified. They think protecting oneself is "suspicious", criminal behavior. It's just like the guy who hires a lawyer instead of talking to police -- obviously guilty. In their minds, upstanding people just expose everything for all to see without a care in the world and let the chips fall where they may. They see themselves as people who are weighed down by checks and balances and basic civil rights and other such forms of worthless, meaningless red tape. Just think how many criminals this guy might arrest if he had limitless power like the NSA. He totally wouldn't get caught spying on his wife like all the others, no sir.
Because whoever works there can use that data to more effectively and efficiently carry out their work. How is this even a serious question? Not everything is about headcount.
It's all about headcount, actually. Society should maximize the number of people necessary for authorities to do anything at all. If they want to investigate someone, they should have to send men out there to physically compromise the computers instead of having a button they can push to reveal that person's entire life history on a monitor. That puts hard limits on the scale of their operations, ensures their powers are checked and limits the damage done by any eventual abuse.
Society is going in the opposite direction instead: it's minimizing the number of people required and maximizing the scale of operations. Naturally, societies all over the globe are trending towards totalitarian surveillance police states where power and authority are concentrated in the hands of few. The panopticon was envisioned as a way to allow a single guard to keep watch over limitless prisoners. People are always ready to accept that because they think they would never be imprisoned themselves.
So this is a change in topic, from "will surveillance help them catch more criminals", to "is surveillance a bad thing".
Basically, your view is that surveillance is bad because it can be abused by an authoritarian and eventually lead to the erosion of liberal democracy and individual freedoms. Which has validity.
What about terrorism, though? If we completely eliminated the surveillance apparatus, how many more terrorist attacks would there be, and what would the consequences of that be on the survival of liberal democracy? I ask these questions in earnest. Maybe the answer is "not many terrorist attacks would have been prevented". But then I read these news articles of multiple terrorist plotters being arrested before committing the act. What percentage of those arrests are attributable to the surveillance apparatus?
And if that apparatus was removed, and these people successfully committed those acts (say, they blew up a bunch of people on NYE), what impact would this have on people's voting patterns? Well, people would be more inclined to vote for a strongman authoritarian to fix the terrorism issue, which then creates the very problem that you are concerned about with the surveillance state in the first place: the erosion of liberalism and democracy and freedoms. The AfD will get elected. Forget any progress on climate change. Then we get climate refugees, leading to further destabilization. Etc.
What I'm getting at is that there is no solution that doesn't involve trade-offs when it comes to protecting freedoms and democracy.
Maybe the answer in the modern world isn't that surveillance is always bad, it's that surveillance needs to be heavily constrained with newly designed checks and balances.
What about the principles the US was founded upon, though?
Will you maintain those principles even though terrorists are flying aircraft into your buildings? Or will you break and start stripping your citizens of their rights, surveilling them without warrant in a desperate bid to stop future terrorists?
The US made its choice. The price of freedom is high and paid in blood. They no longer want to pay it. The consequences will come.
> I ask these questions in earnest.
I hold that these questions are ultimately irrelevant. It seems like these terrorists won either way. America was destroyed, even if only spiritually. Principles it once stood for, stand no more.
> America was destroyed, even if only spiritually.
This is my point. If a constrained surveillance apparatus could have stopped 9/11, we could have prevented all the negative consequences of 9/11, including the growth of the surveillance apparatus itself.
The system that you are advocating for (pre-9/11 lack of surveillance) led to the event (9/11) that destroyed what it is you're advocating for.
This is a flaw I see in the libertarian worldview. There's an under-appreciation of unpredictable spillover consequences. In my view, sometimes rights and freedoms need to be violated in order to protect those rights and freedoms. It's not that I want violations of freedoms, it's that I see it as a pragmatic necessary evil sometimes.
Let's think through the causality step by step. When a terrorist attack happens, people are shocked and angry. Clear-thinkers like yourself have no input into the decision making during this time because you're a small political minority. Nationalism and security paranoia dominate decision making. Then we get the Iraq War and all the other stuff.
This is a funny discussion because you're probably a libertarian and I also consider myself a (left-)libertarian (or at least strong anti-authoritarian). But I'm one of those "paradox of tolerance" guys who wants to protect liberal democracy from some of the failure modes that emerge from the complex social system that democracy is embedded in. That means: keeping inflation low, ensuring housing costs are reasonable, eliminating sectarian violence (terrorism and hate crimes), keeping institutions robust and low corruption, quality public education, making sure people feel physically safe and socially respected, racial equality/harmony (equality of opportunity and no racism).
> This is a flaw I see in the libertarian worldview. There's an under-appreciation of unpredictable spillover consequences. In my view, sometimes rights and freedoms need to be violated in order to protect those rights and freedoms. It's not that I want violations of freedoms, it's that I see it as a pragmatic necessary evil sometimes.
I understand your point. Deep down I agree with it and that causes me immense sadness and disillusionment.
I want a set of principles that are true, universal and moral. A solid bedrock of philosophy to guide my thoughts and actions. If such principles can be invalidated by circumstances, they are worthless. A principle like "people cannot be tortured" cannot be relativized by the fact terrorists flew aircraft into buildings. Even though they flew aircraft into buildings, they cannot be tortured. Obviously CIA guys reject that worldview, but for me these things must be absolute. Otherwise I'm going to start coming up with many more equally valid reasons to torture people.
If I cannot be certain of such fundamental principles, then pretty much anything can be justified based on circumstances and there's no point in wasting time philosophizing about anything. Everything becomes about power, what you can get away with. Civilization breaks down and the law of the jungle dominates. You become desensitized to death and suffering because you internalize the fact "people cannot be unjustly killed or imprisoned" was never a valid universal principle to begin with.
> But I'm one of those "paradox of tolerance" guys
Yeah, and I'm the guy who says democracy should be able to tolerate literal nazism or it's not really tolerant as it claims to be. I'm very sensitive to that argument because I live in a country where nazism is a crime and yet communism is not. Literal communists walk our soil with absolute impunity. Literal, self-admitted communists are in our supreme court. If they will arrest nazis, then I demand they also arrest these communists. If they refuse, I'm going to start drawing some very ugly conclusions about the system they use to justify their actions and the culmination of those conclusions is the complete rejection of their authority.
The reason I said I agree with you deep down is I've already drawn these conclusions. I'm very uncertain about things right now. It's like nothing is true and everything is allowed. I think I make these comments here partly because I'm mourning and partly because I desperately want someone to prove me wrong.
On which platforms does Meta allow children to create accounts? To my understanding, they stopped allowing children on their platforms, since social networks can cause major harm to children through things like bullying, and children can't give consent to the model of data collection that Meta does.
You can add barriers and policies, but I've been astonished at what parents will work around to get their children onto these platforms (I recall a family member using their ID to allow their 11-year-old child on Discord).
They have "Messenger Kids" targeted at the under-13 market with safeguards ostensibly in place, and as far as I know children 13 and over can create FB/IG accounts.
It is mind-boggling that a high-level Apple employee would not use Apple's own parental control abilities to safeguard their child's online activity. They aren't even supposed to be using the IG platform at twelve years old. I hope Meta would throw that back in Apple's face if they kicked up a stink.
Obviously this kind of harassment should not exist in the first place but parents who work in tech should be held more accountable for not protecting their kids imo.
A unique 100k each day or is there overlap? Also are these genuine messages or just wide-cast spam messages? Technically we all get sexually harassed daily if we include our GMail spam folders.
There should be a big banner at the top of every page "Surgeon General's Warning: This app contains a gratuitous amount of dicks." From that point forward it would be caveat emptor.
I was teaching a 9th grade class a few years ago and realized that men from a certain demographic showing pics of their reproductive organs had traumatized the young females in my class to the point that most of them showed signs of PTSD. They gasped, closed their eyes, and/or recoiled at any image suddenly popping up on their computer screens or on the projector.
I pointed out that this was felony child sexual abuse, and that the perpetrators were easy to find. Law enforcement did not care.
As a parent, Discord is the number 1 threat to my children. I gave them the whole "stranger danger" talk, and I regularly check in to see if anyone is harassing them. My kids are honest, and my 14-year-old has gotten multiple people asking for inappropriate photos in one of the Fortnite servers (not sure if it's an official one?).
I don't want to ostracize them, since they use it for playing games with their friends in private servers. Actually another parent I know specifically admins his own kids server and only lets them communicate with their friends via that server.
Seriously though. People always comment about how Call of Duty voice chat was braving the depths of degeneracy, but those people have obviously never done random pick-up groups in gaming Discords.
If a child is on Facebook for 5 years (the max possible without violating age requirements and assuming the age of majority is 18) and if harassment events are independently distributed, the chance of this particular child suffering sexual harassment is (at least) 1 - (1 - 0.00005)^(365 * 5) = ~8.7%.
Moreover, given that the 13-17 (inclusive) age range represents a small portion of FB's total user base, the 0.005% figure is a deep underestimate itself, meaning the 8.7% is as well.
Everyone is free to their opinion, of course, but "only 8.7% of children on FB are sexually harassed" seems a bit cavalier.
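For anyone who wants to check the arithmetic, a few lines reproduce it (the 0.00005 daily rate and the independence assumption come from the comment above, not from measured data):

```typescript
// P(at least one harassment event) = 1 - P(no event on any of 365*5 days)
const dailyRate = 0.00005; // assumed per-child, per-day probability
const days = 365 * 5;      // ages 13 through 17 on the platform
const pAtLeastOnce = 1 - Math.pow(1 - dailyRate, days);
console.log(`${(pAtLeastOnce * 100).toFixed(1)}%`); // prints "8.7%"
```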