> Our societies are not in any way equipped to deal with putting what may well be a sizable majority of working-age people out of work
Our societies were built before there was a technology that could replace their smartest people with machines that never tire.
I don't get why people keep trying to imagine current society + super-intelligent AI: by definition it won't be our current society if we can actually get there, will it?
I mean if we have AI that can even replace the researchers (I wouldn't dream so boldly tbh), imagine how much faster the pace of scientific discovery becomes. Imagine how much more efficient we can make power generation and transmission, discover new treatments for disease, democratize learning at costs never before possible...
I don't like to spend too much time daydreaming about what we could do down that path, because AI replacing SWEs already feels like a bit of a pipedream, so all novel research being automated away is completely in fantasy land... but realistically we're already on a pretty terrible trajectory otherwise.
Our next billion people are about to be born into some of the worst-off parts of the planet. AI becoming good enough to replace researchers would be an infinitely more positive trajectory than some of the others we could end up on otherwise.
Social media could have been utopian, too, yet those apps are algorithmic manipulation hellscapes that threaten to bring down even the most robust democracies. The same people who make it so are poised to be the ones in control of these AIs. I don't think they want the kind of utopia you imagine.
What I described doesn't have to be utopian in an absolute sense, just significantly better than where we're currently headed.
I think a lot of the unchecked pessimism around super-intelligent AI is just people being a bit naive or shut off from the reality of just how terrible things are going to be over the next century.
We're waging 25% tariff wars over planefuls of people; what's going to happen when it's 100 million people trampling over borders trying to escape disease, famine, and temperatures incompatible with human life?
Compared to that, even if these companies abuse their ownership of AI and monopolize the gains, an AI capable of producing novel research and development by itself would still bring us much closer to solving major problems than otherwise.
And you think the tools that are, as we speak, boiling rivers and lakes to power insanely resource-hungry AIs are the solution to any of those problems, such as famine and rising temperatures? If anything, they're accelerating us toward these issues.
There's about to be 500 billion dollars invested in generating even more electricity for these monstrosities, instead of literally anything else actually useful today, like climate research or renewable energy. Nope, we're just going to generate even more spam and bullshit while spinning up nuclear reactors to power it all.
If they don't pan out, we won't even reach 500 billion dollars actually invested and the bubble will pop. And it'll take many, many trillions before AI is even close to the biggest reason climate change is killing people.
People tend to mix together pieces of mutually exclusive end states. My comment is speaking to a fantasy-like outcome where, again, we have AI that puts all researchers out of jobs.
If it can do that (which I don't think it will, but if we're dreaming), let it boil oceans, let them abuse it: you can become unfathomably wealthy by solving world hunger.
Let them force countries to take out loans that practically put them in OpenAI's permanent debt to access the advancements they come up with.
It's still better than the alternative, and it still allows room for them to be as evil as they want.
I mean, either way, countries are going to become indebted to companies eventually if we keep down the current path. Even without AI, I don't see how the ultra wealthy won't use mass upheaval and desperation to secure extremely cheap labor and solidify their power.
At least now, even if it's for their own personal gain and amusement, they can trivially solve those problems.
-
It's like we're talking about someone potentially developing a cure for all cancer, but everyone is worried because the company behind it is evil and will hoard it for themselves while charging $1B per dose: let's still get to that cure if it can be developed.
It'll be a truly miserable and awful world watching people we know die while there's a cure, and watching them die because they're not able to pay this villain... but today there's no cure, no proof one can exist, no hints on how to get one.
It's better to have the cure and no clue how to fix the broken situation it creates than to have no cure and no clue how to cure cancer at all. One is a problem of people, power, and ethics; the other is a problem of unknown proportions, and its guaranteed ongoing suffering is at least as damaging as any inequity in access to a cure.
Though I think you misunderstand or underestimate the self-interested side of human nature. Everyone in control of a superpower like this will abuse it, be it at the presidential level, the CEO level, or as a major shareholder of a foreign NGO. That is why we had democratic separation of powers in the first place. I say "had" because the global trend is toward right-wing autocratic ideas, driven by manipulation of social media.
Human self-interest and egocentric world views are what gave us this mess.
The only thing capable of evening out the odds is a federalist, decentralized approach, which we desperately need for AI. Something like a legislative system of many overfitted mini AI assistants that also gives outliers a chance to set the social trend.
Otherwise we will end up with the Ministry of Truth, which right now is effectively Facebook and TikTok. The younger generations that grew up with social media tend heavily toward populist right-wing ideas because those are easily marketable in 30 seconds: paint the bad guy, say it's established fact, next video. Nobody is interested in the rationale behind it, let alone finding and discussing a compromise like you would in a real debate aimed at finding a solution.
We need to find a way to change beliefs through reason rather than emotion. Ironically, this problem is also reflected in trained LLMs that turn into circlejerks, because they've learned that behavior from a dataset produced by us easily manipulable humans.
I don't think it's that simple. For one thing, AI isn't a democratizing force. If it's as good as you think it will be, it will be less like having a good education and more like having an indentured servant. Some people will have whole fleets of such servants doing their bidding, while others will have none.
For another, research isn't an end unto itself. As you note, for some people an already-unfathomable level of societal knowledge has resulted in nothing but continued poverty. Benefiting from scientific knowledge requires a stable economy full of consumers who can and will purchase high-tech items. Where will that wealth come from once the value of human intellectual labor has been so undercut by cheap AI intellectual labor? Without the capital to make AIs work for us, many if not most people will be left to live their lives as servants to AIs, so that those AIs can have autonomy in the real world.