My favorite quote on this topic is "Why should I bother to read something you didn't bother to write?"
Especially when you're talking about fiction and reading/watching for enjoyment, what does it matter if you can shit out 1000 hours of AI content? Maybe it's good to keep babies entertained? Studios have gotten so into the habit of treating "content" as a fungible commodity, but the fact is that even blockbuster movies still live and die by actually being entertaining.
> Why should I bother to read something you didn't bother to write?
The answer seems obvious: because it's better.
Obviously it isn't better now. But it's easy to imagine a day in the not-too-distant future where AI can shit out 1000 hours of content that is better in every way than what humans create. And it will even feel more human than the human-made stuff because the AI will have learned that we like that.
What do you do then? Watch the worse stuff? Maybe, and I think a lot still will. But how long does that last?
The point really is that "better" in this context means "made by a human" - not "faked to look like it was made by a human". People need connection to other people - art is one of the means of communicating _between people_.
Is that easy to imagine? I’m not sure it is, particularly.
Ultimately, the LLM industry can’t run on jam tomorrow forever. At some point, people have to stop concentrating on the hypothetical magic future, and concentrate on what actually exists.
I can easily imagine AI spitting out volume. I can't imagine it spitting out quality. Most of what it generates now is just trash. Like the tourist/bear problem in dumpster security (there is considerable overlap between the smartest bears and the dumbest tourists), there may be overlap between the worst human writing and the best AI writing... but that's not how you make a successful film.
Once AI is entrenched and the economics work out, it will become way more expensive to shoot a "traditional" movie. You won't find the trained technical staff, the actors, the scripts, etc., which will all be out of jobs and doing something else. Like today, if you want to run a steam locomotive, it's way, way more expensive than its modern counterpart because the infrastructure isn't there anymore.
So yes, the AI will eventually come out "better", even if it doesn't actually get any better!
Mechanizing the expression of artistic endeavour seems silly. Does an LLM know the pleasure and pain that love can instill, or does it just regurgitate tokens in a pattern it thinks is the best fit?
My cynical take: the younger generation growing up with AI-generated content will accept it as normal and move on. We only enjoy human-created stuff, as that seems "natural" to us. That "natural" feeling tends to change with every new generation.
Train it to attribute its failure to land on wokeness, have it generate designs for merch that communicate this idea, and book an appearance on JRE and it'll have completed the arc of a lot of short run comics.
Generative AI is not autonomous, it's wielded by a user just as a brush is, prompted and tweaked by a human being, a new tool in the artist's tackle box, like acrylic paint, Photoshop, "content-aware fill" etc.
it's impossible to answer this line of reasoning without wasting time, so I'll just start right away with the ad hominem.
you just don't like art, you don't understand it and you want slop, admit it and don't feel compelled to enter the discussion with your growth oriented bullshit mindset
Literally this. Ben hits the nail on the head that these tools can “write convincing Elizabethan language but can’t write Shakespeare”, along with his metaphor about craftsmen vs artists.
These tools can never create art because art is the imperfection of reality transposed from the mind’s eye using the talent of the artisan and their tools. Writing a convincing enough prompt to generate an assortment of visual outputs that you “choose” as the final product can never be art, because your art skills ended with the prompt itself - everything after was just maths, and not even maths you had a direct hand in. Even then, you cannot really shill your prompt as art either, because you wrote tokens to ingest into a LLM to generate pseudorandom visual outputs, not language to be interpreted by other humans and visualized on their own accord.
Art is one of those things you cannot appreciate until you make it, and generating slop is not creating art. A preschooler with a single, broken crayon and a napkin makes better art than anything generated via tokens and math models - and to really drive that home, I'd argue that the teenager goofing around with math formulas on their graphing calculator to create visually beautiful or interesting designs is also making better art than whatever the LLM can spew forth using far more advanced maths.
If you really want art, then make it. Learn to draw, practice photography, paint some scenery, experiment with formula visualizations, lay out a garden, or heck, just commission an artist to bring your idea into reality. Learning to articulate your vision with language in such a way that others can illustrate or create it is a far more valuable skill than laying out tokens for an LLM.
I never created a movie. I can appreciate a good one over a bad one (Argo, Gigli).
Statistically speaking, the number of people who have created a movie rounds to zero. And yet, to suggest basically no one appreciates a movie or the difference between a good movie and a bad one is obviously very dumb.
You're conflating the reality of the situation with me. I didn't say I wanted AI generated content. Just that it seems like it will inevitably win. All the insults in your comment just stem from an imaginary and inaccurate picture of me, a stranger, that you created in your head.
> don't feel compelled to enter the discussion with your growth oriented bullshit mindset
Alice, an incessant painter with passion for it but zero natural talent, learns about stable diffusion. She teaches herself how to use this new tool and creates imagery she never could before. She tweaks settings and prompts, iterates for hours, and ultimately generates imagery she is pleased with. She shares this creation with others and many of them appreciate what she has created.
Except Bob. Bob looks at the imagery and, thanks to his up-to-date technical knowledge, recognizes that the work may be generated. So Bob rejects the imagery; he refuses to allow it to affect him at all. He insults it, calls it slop, insists it cannot be art, insults Alice, and insults the people who were moved by Alice's work.
If one of these two people "doesn't like art" and "doesn't understand it," which one is it more likely to be: the one who is creating, or the one who is criticizing the creation?
Plenty of mass media are just uninspired derivatives. It seems to entertain a lot of people. I can see AI taking care of the mind-numbing work of making uninspired bullshit so real creatives can be freed to pursue actual, meaningful art.
Lots of people will lose jobs working on stupid bullshit, of course. That is an economic issue, not a technology issue.
This is the classic expression of the fallacy that the value of something is based on the cost to create it from the seller, not the benefit it brings to the buyer.
Additionally, there are lots of examples where cheaper production has produced an inferior product, yet the difference in price causes the inferior product to usurp the superior product. Building materials exhibit this effect frequently: plaster vs. drywall, asphalt vs. slate, balloon framing vs. structural masonry, etc.
In media, TikTok exemplifies this effect. People watch fewer movies (expensive, high quality) and watch more short-form content (cheap, low quality).
Saying movies are inherently a superior product to TikTok shorts is incredibly untrue. I would rather watch 90 minutes of the dumbest TikTok crap imaginable than sit through Madame Web again.
Cheaper movies, in terms of cost, are often better than expensive movies because they have a humanity that shows through. Let me know when an AI can make a John Carpenter movie.
This. Christopher Nolan, but yes. AI can never capture the subtlety of the failings of the human mind/memory and their implications. It would have to learn it somehow, and I don't see a method by which that can be taught.
How are people so certain that there is something humanity does which is unmodelable?
There is an ancient hubris in this: the belief that there is something in humanity beyond physics, that we possess a soul or something else which defies mathematical/scientific comprehension.
I would like to believe in our specialness like this too, but I don't understand how someone can confidently proclaim that "AI can never capture" something humans do.
Whatever it is we think AI can never learn about us, is it unlearnable by humans too (i.e. inborn, instinctive)? Or is it learnable by humans, but inexplicable, unlearnable by a machine observing the behavior en masse?
> Art is the domain where "the cost to create it from the seller" matters.
Is it? Or does it boil down to: "This has been done and rehashed multiple times before, it's no longer interesting"? There is tons of recognized art out there that, in literal time spent, could be done in minutes. What is important is what people gain from the art, not the time put into the art.
Do you value that they wrote it, or that it's their opinion? Hypothetically, if there was a system to take one's thoughts on a topic and generate text that accurately represents them, would you be interested in reading it if someone sent you their thoughts?
I think it depends on the volume of words / content and time I have to spend parsing it. With AI it seems like the volume could easily skyrocket, and the person who subjects me to it may not even have read it.
This isn't just limited to AI though. As an example, I hate email generally, it feels like the land where people dump out words without trying to carefully say anything. It almost always requires lots of clarification later on. It's the land of careless communication.
I often wish there was an honest flag on emails that indicated "effort put into this email" so I could put the requisite effort into reading it ;). I don't want to spend a lot of time on something if the person creating it didn't spend much time on it. On the other hand, if someone did put a lot of effort into the content, I want to give them an appropriate amount of attention.
With AI it seems like the sheer volume of content could be even larger because of how little time it costs the sender. Thus eating up even more of the receiver's time.
I don't think that's true at all. Duchamp's "Fountain" is an example of something that is profoundly impactful, didn't "cost" him anything, yet an AI could never reproduce it.
AI tools will continue to get better, and they'll really shine when they can enable increasingly smaller teams of people to execute on their creative vision. Existing non-AI tools have already helped enable tiny teams to create media which is enjoyed by millions of people; the biggest and most recent example is how Source2 is used to animate Skibidi Toilet. But there's still room for increasing accessibility.
The biggest issue I've noticed with most existing AI-gen tools is that they only focus on generating completed output. Ideally you'd have tools that can generate multiple layers or scenes within creative tools to allow for continued iteration.
The future is when one person can do by themselves what previously would've taken hundreds or thousands of people. I'm really looking forward to what sort of creative works we'll get from people that wouldn't normally have access to Hollywood-tier resources.
PCG and NN-generated are definitely not the same thing though. I've done both, and most PCG is made using plain-old programming and art assets-- it's just more flexible to configure at runtime. Anyone I've seen trying to do real on-the-fly game content generation using generative AI is either a) making flexible but useless NPC dialog, or b) basically using it as an expensive random number generator because it's nowhere close to good enough to create immediately usable game assets that reliably work with a game.
In specific genres of game where the developers put a lot of effort into the mechanics to facilitate procedural generation, yes it's interesting. You can't tell me Dwarf Fortress was fast and easy to make because fortresses are procedurally generated.
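To make the contrast concrete, here is a minimal sketch of what "plain-old programming and art assets" PCG usually amounts to; the tile names, weights, and layout rules are made up for illustration and aren't from any particular engine or game:

    import random

    # Hand-authored art assets; the "generation" only chooses and arranges them.
    ROOM_TILES = ["stone_hall", "flooded_cellar", "mushroom_cave", "armory"]

    def generate_level(seed: int, room_count: int = 5) -> list[dict]:
        """Deterministically build a level layout from a seed.

        Every asset referenced here was made by a human up front; the runtime
        flexibility comes from recombining them, not from inventing new ones.
        """
        rng = random.Random(seed)  # same seed -> same level, which games rely on
        level = []
        for i in range(room_count):
            level.append({
                "room": i,
                "tile": rng.choice(ROOM_TILES),
                "exits": rng.randint(1, 3),
                "has_loot": rng.random() < 0.3,
            })
        return level

    print(generate_level(seed=42))

There's no neural network anywhere in that loop, which is why it's cheap, fast, and reliably produces content the rest of the game can actually use.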
Sequelitis and remakes have persuaded me that some people really want intellectual baby food. I'm worried that the majority really just wants content to pass the time. Let's hope creative indie movies will continue to flourish, as we may just end up depending on them.
>My favorite quote on this topic is "Why should I bother to read something you didn't bother to write?"
"If you're like the average consumer, because you have no life, and would doomscroll whatever shit we post for you, and watch whatever slop we produce for you. You did it with the tons of crap remakes, rehashes, franchize, and "multiverse" crap in the 2010-2024 span, you ain't gonna stop just because it's an AI writing it. It's not like the commercial hacks writing the stuff you watched earlier were any more creative than AI".
Was it? He concludes that LLMs will never write Shakespeare, or create original work of that caliber, or animate an actor the way Ben Guinness could have done it. I'm paraphrasing, but isn't this confirmation bias from a guy now heavily vested in making movies? Move 37 was so astonishing the world champion Go player, hardly known for theatrics, got up and left the table. It was considered an insanely "original" move. In every field we don't consider deeply "human," AI has jumped light years ahead, in astonishing ways. Affleck is entitled to his opinion and seems like he's trying to understand things, but I think poetry, acting, music, they're all on the table until they aren't, for better or worse.
>Move 37 was so astonishing the world champion Go player, hardly known for theatrics, got up and left the table.
This is an incredibly minor nitpick, but I recently went down the rabbit hole of move 37[1] after reading Richard Powers' latest novel, Playground, and Sedol didn't get up, he was already away from the table. He did, however, take far longer to make his next move (iirc something like 12-15 minutes instead of 2-4).
It has yet to be seen if this can truly be generalized and if such an analogy holds.
Go is a game constrained by an extremely narrow set of rules. Brute forcing potential solutions and arriving at something novel within such a constrained ruleset is an entirely different scenario than writing or film-making which occur in an almost incomprehensibly larger potential "solution" space.
Perhaps the same thing will eventually happen, but I don't think the success of AI in games like Go is particularly instructive for predicting what can happen in other fields.
Ahh good point. I was thinking about “machine plays itself repeatedly until it gets good” aspect of AlphaGo Zero and my brain jumped to brute forcing, but agree that’s a misnomer.
But a game is something with an objective measure. Either a move is good or it's not. Can you say the same of parts in a movie, where it's more about taste?
I'm not making any statement about LLMs here, but the counterpoint to this is that you don't need to make what film critics would overwhelmingly call a "good" movie. You need to make things that make money.
I can imagine two options for that: utilize expertise from people that know how to make films that make money, or make so many movies, one or two can make enough money to pay for all the others and then some.
Really, I think it's more about what gets more attention and then you make deals with Roblox and Fortnite or whatever to sell digital goods.
The thing people love about Gen AI is not needing to understand the dozens, if not hundreds, of deliberate and unconscious artistic decisions an artist makes when creating a piece by hand. It's great to be able to think of a core idea, refine the overall aesthetic, and then work out some details. It's freeing, fun, and nearly useless for making high-end media.
Thousands of deliberate artistic decisions go into making a TV show, let alone a Hollywood movie. Think about everything from the subtlest cuts in the tailored costumes for every character, what each part of each hair style will look like in different scenarios, how all of that stuff is lit in the subtlest ways and what shade of almost black you want for the matte and whether the rim light needs to be a different color to make it work... all for each shot. That's the precision required to make even generic high-end media, and the need to manipulate those things with perfect accuracy doesn't go away when you're using Gen AI to make it. People will probably be more critical of Gen AI output than traditional media.
I know of a big, moneyed studio that tried to replace their concept artists with a group of prompt engineers and promptly fired them two months later after the art directors just couldn't take it anymore. They wanted someone that could precisely make exactly what they wanted after 3 hours in two attempts rather than someone who could make 100 polished versions they had to review in 5 minutes, but took 6 hours to get one version that really met their needs because each revision was imprecise and yielded other undesirable results even with control nets, inpainting, loras, and all that. Beyond that, since it was in a flat raster format, it was literally useless for anything else. It's not like Gen AI has no role in that workflow-- a traditional artist might use something like that for ideation and reference-- but modifying the flat raster output of Stable Diffusion, et al would be even more difficult than roughing it out from scratch in many cases and yield an inferior product.
When it came down to it, having people who knew how to precisely execute an artistic vision tuned to produce the output studios know will make them money won out. And that's concept art, not the stuff that gets put on screen, which has to be a whole lot more precise; for a 90-minute movie you need 129,600 perfect, polished still images (24 frames per second × 60 seconds × 90 minutes), and those will come from a pool of at least that many more which editors can compose into a piece. Not having it in LOG, not having separated AOVs for precise grading, color correction and compositing, etc. are all huge impediments.
It's no different than giving a completed mp3 of a song to a talented music producer and expecting them to turn that audio into a hit song (not cut it up into samples using traditional techniques to make something new, not use it as inspiration to re-make the song, to take the audio in that mp3 and use that audio) in a fraction of the time it would take them to do it from scratch. They'd just laugh at the suggestion.
> isn't this confirmation bias from a guy now heavily vested in making movies?
LLM-created video is absolutely nowhere close to competing with Hollywood. As it stands, the closer you get to the top of the movie business, the more profitable LLM-generated shit is. He has every reason to embrace LLMs in movie production-- so much so that I expect most famous people speaking out about this are basically being spokespeople for SAG and secretly pressuring people that work for them to squeeze every bit of productivity out of these tools that they can.
Idk. LLMs may effectively emulate human creativity up to a point, but in the end they are writing literally the most predictable response.
They don’t start with an emotion, a vision, and then devise creative ways to bring the viewer into that world.
They don’t emotion-test hundreds of ideas to find the most effective way to give the viewer/reader a visceral sense of living that moment.
While they can read sentiment, they do not experience an internal emotional response. Fundamentally, they are generating the most probable string of words based on the data in their training set.
There is no way for them to come up with anything that is both improbable and not nonsensical. Their entire range of “understanding” is based on the statistical probability of the next conceptual fragment.
I’m not saying that it might not be possible for LLMs to come up with a story or script… they do that just fine. But it will literally be the most predictable, unremarkable, innovation-less and uninspiring drivel that can be predicted from a statistical walk through vectors starting at a random point.
There is a reason why AI output is abrasive to read. It is literally written with no consideration given to the reader.
No model of mind of the effect that it will have on the receiver, no interesting and unusual prosaic twists to capture your attention or spur your imagination…. Just a soulless, predictable stream of simulated thoughts that remarkably often turns out to be useful, if uninspiring.
LLMs are fantastic tools for navigating the commons of human culture and making a startling breadth of human knowledge easily accessible.
They are truly amazing tools that can release us from many language manipulation burdens in the capture and sorting of data from diverse and unorganized sources.
They will revolutionize many industries that currently are burdened by extensive human labor in the ingestion, manipulation, and interpretation of data. LLMs are like the washing machine to free our minds from the handwashing of information.
But they are not creative agents in the way that we admire creative genius.
>There is a reason why AI output is abrasive to read. It is literally written with no consideration given to the reader.
>No model of mind of the effect that it will have on the receiver, no interesting and unusual prosaic twists to capture your attention or spur your imagination…. Just a soulless, predictable stream of simulated thoughts that remarkably often turns out to be useful, if uninspiring.
An LLM can produce text about the humiliation and pain of being picked last on the recess kickball team. But it never had that experience. It can't. It has zero credibility about the subject. It's BSing in exactly the same way that sports fans pretend they can manage their favorite team better than the current staff.
Are you saying a psychiatrist would need to experience humiliation and pain to be able to help patients? Aren't some psychiatrists also BS'ing their patients, because they are either not well-trained, are focused on money, or are just tired of hearing the same problems?
I’d not bet against AI psychology as a useful tool for some. The privacy that it offers could easily offset the lack of innovative thought. Psychology interviews are by design predictable, droll, and interviewee driven. It might work.
His perspective mainly focuses on AI as a replacement for creativity, but he doesn't seem to consider it a tool for creative expression. I can envision a future where one person comes up with ideas, goes to their computer, puts everything together, and creates something comparable to what a small studio can produce today. This has already happened in music; in the '70s, the band Boston started with a single person working in a basement putting together a complete album and releasing it. Once it became a hit, they formed a full band.
I can see a similar situation happening with AI. Hollywood is definitely undergoing changes; the traditional concept of having Hollywood in one location might fade away, leading to a global creation of filmmaking. While this increase in content could bring more creativity, it might also lead to an overwhelming amount of options. With thousands of films available, it may not be as enjoyable to watch a movie anymore. This could lead to a decline in the overall experience of enjoying films, which is not necessarily a positive outcome.
If you've heard Ben Affleck speak before I don't really think his take is surprising. He's not a dumb guy. His take is also extremely realistic in contrast to the AI utopians that assume AI will just do everything better because computer magic.
Generative AIs are cool tools people can use to make things, but they're not magic. Once you ask for things outside of their training set you get really weird results. Even with a lot of fine-tuning (LoRAs, etc.) results can be hit or miss for any given prompt. They're also not very consistent, which means a series of clips ends up incongruent or just mismatched.
That doesn't mean generative AIs aren't going to be useful and won't impact film making. You're not necessarily going to be able to ask ChatGPT to "write me an Oscar winning screenplay" but you can certainly use one to punch up dialog or help with story boarding.
I think he's right that visual effects people will be hit hard, because there are going to be a lot fewer jobs for mundane tasks like cutting out an errant boom mic or Starbucks cup in a shot. I'd extend that to other "effects" jobs like ADR, since AI can fix a flubbed line in an otherwise good take.
I feel like many fields are going to basically cut out junior level jobs for AI, then 10 years down the line complain that there are no more people they can hire for mid level/senior jobs.
fun fact, but it's a real stretch to say she was the most famous TV actress in the world when she probably wasn't even the most famous on Family Ties
In 1986-87, when Family Ties was at its peak, it had a rating of 32.7, which means almost a third of all US households watched the show every week.
That probably amounted to 60m+ people tuning in, which is close to Super Bowl numbers ... every week. The TV audience was concentrated then in a way it isn't now. Yellowstone gets less than 12 million viewers per episode today.
Maybe you think Meredith Baxter was better known, but I'll bet more people were paying attention to the teenager than the hippie mom. But let's say she was no 2, or no 5. She was galaxies more famous than the most famous people on TV today. And she has a CS degree. Which taken together is more astonishing than Ben Affleck opining on LLMs.
I'll wager you at any point during Family Ties' run that more people knew who Baxter was, considering she had been a celebrity for 10 years going into that show. Also that's not how share works. A 32.7 share in 1986 is more like 29-30 million viewers.
If you're just going by ratings/buzz Lisa Bonet and Phylicia Rashad are contenders, but this is a weird definition of "famous" that is just "who's popular right now ignoring history and context". Like, Betty White exists.
He's partially right. AI won't be able to write an interesting, wholly original script anytime soon. For that it's necessary to live as a human, to experience anguish, loss, fear, hope, etc. Art, true art, will be the last field to be taken over by AI. Has anyone seen a truly poignant AI image yet, that illuminates something about the human experience? One they would put in a museum? I haven't.
However, there is such a large body of work already that all the derivative stuff can be made effortlessly. What would happen, for instance, if you told the AI of 2050 "I want to mix the Brothers Karamazov with the Odyssey, set on Mars after the apocalypse"? I just tried with Sonnet and it's not too bad. Maybe a large enough basis of stories has already been told that most AI scripts will feel novel enough to be interesting, even if they are just interpolations in the latent narrative space humans have constructed.
Actors will be wholly unnecessary. A human director will prompt an AI to show more or less of a particular emotion or reaction. Actors are just instruments.
The three most interesting transformations AI will bring are:
1. Allowing a single human "director" to rapidly iterate on script, scenery, characters, edits, etc. One person will be able to prompt their way through a movie for the price of electricity.
2. Movies will become interactive. "Show me this movie but with more romantic tension / with a happy ending / Walt loves Jesse"
3. Holodecks will allow young people to safely experience a much broader range of events and allow them to grow their souls faster, meaning they can make better movies. Modern movies suck because life is too predictable and tech-focused for good writers to emerge. We won't ever put ourselves in real danger, but what if your senior year of high school was to live through the French Revolution in a holodeck? It would change you forever.
Did you actually read the article you posted? A few quotes:
- "Beyond the themes and emotions, ChatGPT’s poems were also simpler in terms of their overall structure and composition."
- "Understanding poems written by humans requires deep, critical thinking—and that’s a big part of poetry’s appeal, the researchers write in the paper. But modern readers don’t seem to want to do this labor, instead preferring texts that give them “instant answers,” [...]"
So AI didn't write better poems than Shakespeare (it's only GPT-3.5, but I doubt newer models are better), and it seems to me that readers couldn't tell the difference because they simply can't recognize quality.
Rehearsed word soup. He clearly knows zero about the underlying technology. If he did understand it in any depth, he'd show a lot less certainty about the future.
Maybe the main actors, the stars of the movies are safe for now. But background actors who just sit in the background or have some small extra role will be decimated, because those roles can be replaced with AI.
Since many actors start with small roles or background work, it will have an effect on the industry if entry jobs are eliminated.
In the long run even the main stars could get into trouble. It may be true that AI won't be able to reproduce what they do in creating a new, creative scene, but most movies are trash even today. If AI can make an average, good-enough movie in the future which is cheap to generate, and people still watch it, then the bottom of the industry will fall out: lots of people who work on movies will lose their jobs, which will have a huge impact on the industry.
100 percent as good for 1/100 the price is where celebrity acting is headed. They will take roles for exposure and make their money on their brand, like influencer cattle. It will be the only way to compete with the human wave of extremely talented but unknown talent that currently can’t get parts, but who can absolutely wreck an audience as a human canvas upon which to render a studio owned character likeness.
Ben went and drank his own celebrity-exceptionalism Kool-Aid.
I'm not in the industry and I can tell he's failing to conceive many possibilities. I don't understand why he's being praised, I guess confidence impresses people.
> I'm not in the industry and I can tell he's failing to conceive many possibilities.
It's also possible that because you're not in the industry, you don't understand the problem domain well enough.
Tech bros love to think they're just so much smarter than everyone in other industries, but it rarely ends up being true. We saw this with Blockchain; this distributed consensus protocol was supposed to solve all the problems with money transfers, settlements, and securities across the world and uh... did none of that. But tech bros sure did love to talk about how the blockchain doubters just didn't understand the technology, without considering that maybe they didn't understand the problem space.
I think the other thread did a good job showing you that it's the other way around, people who have been in the industry do not tend to have the imagination people with fresh eyes (and maybe some tech chops) do.
An example I had in mind was when Affleck was speaking of being able to generate the show but with their preferred cast from a different production. He really has no clue that people will be generating themselves and their friends as the main characters of these stories. Like this one, there are many other examples where I thought he was lacking creativity.
Kudos to him for spending time thinking about it but I'm surprised how well received his thoughts have been for just stringing a few ideas together.
Bezos wasn't in retail. He also wasn't in compute hardware.
Reed Hastings wasn't in entertainment and crushed it. Jeff Katzenberg was, and Quibi was a disaster.
Ken Griffin was a punk kid at Harvard, never worked in finance. Jim Simons was a math professor, didn't work in finance.
AirBnb guys weren't in real estate or tourism or whatever bucket you want that to be.
Larry & Sergey knew zero about advertising. Zuckerberg too.
The incumbents have been destroyed with some frequency by outsiders who take a different approach. It's almost impossible to tell in advance if understanding a domain is an asset or a liability.
Nah, you were talking about tech people operating outside of their domain. You just don't like how easy it is to show counter examples so you pretend you had some other point.
But if you don't know the technology involved for each of the above, maybe stay out of a conversation involving technology.
Except every company you listed is a technology company. It's technologists doing technology. Katzenberg proves the point.
Amazon, Netflix, Quibi (a disaster run by a non-technologist), AirBnB, Google, Facebook. These are websites and apps.
I don't think Griffin is particularly illustrative of anything except it's nice to ask your grammy for $100K when you're a teenager.
Feel free to listen to a NotebookLM podcast of a PG post, but if you think any AI is going to create an original thought that catches fire like the MCU, Call Her Daddy, Succession, cumtown, Hamilton, Rogan, Inside Out, Serial, or Wicked, maybe it's you that should stay out of the conversation when it comes to creativity.
Amazon is a retailer. AWS is compute(/etc) for rent. AirBnB is homes for short term rent. Quibi is/was short movies on mobile. Google and facebook are advertising. Netflix is movies/tv shows. There is no such thing as a pure technology company - the technology has to be used to do something people want.
The people closest to the thing which is about to be dominated by machines are often clueless about what is going on.
First, do all the techbro failures where they thought they were smarter than the industry.
Second, show the company where the technologists made the same thing the old school was making, and better. Amazon retail disrupted but didn't destroy physical retail, and certainly didn't replace it, and certainly isn't better at it. Same with AirBnB => hotels, Google/FB => advertising (disrupted with a new type of product... a tech product... but have no presence in literally every other form of the industry).
The closest thing you can get to dominance is Netflix making movies and television, and there's no evidence that they make better movies and television than the old school. Tech companies can use money to leverage their position against slow-moving industry players, but in this specific discussion, we've seen nothing to suggest that eventually AI could make a better film than human beings.
If you were actually in the industry you'd know that the top decisionmakers at Netflix have decreasing respect from the creative community, increasing reputation for being a cheap and difficult company to work with, and are generally regarded as a mill that creates a lot of mediocre to slightly-above-average content that gets swept aside every 3-6 months for the next wave of grist. Profitable certainly but nowhere close to being a leader in quality, for as much money as they've thrown at trying to win that Best Picture trophy (and spoiler: Emilia Perez isn't gonna do it this year either).
If you don't know anything about Hollywood, maybe you should stay out of a discussion about Hollywood.
Nope, you do the opposite. The list is long on both sides, displaying how difficult it is to know if industry knowledge is an asset or liability.
Of course they don't do the exact same thing. Only someone in industry would think to do the exact same thing. The value proposition is the same in each case, that's what matters.
The people who don't know Hollywood are the ones taking over the entertainment business. Customers of the entertainment business don't care about the "creative community" or "hollywood".
> The people who don't know Hollywood are the ones taking over the entertainment business
Really? Who is that exactly? Or is Reed Hastings just a different money guy, which the entertainment business has always had?
Aside from money, where is the actual tech that Netflix is using to "dominate" the industry? And how does it manifest? I know one specific example that you probably have no idea about, and it has nothing to do with actually making a movie.
Except the Hollywood insiders still make all of the creative decisions (after the money people make the money decisions).
Even including Netflix's success, there is no segment of the movie and television business where technology is dominating the actual creation of the content. And this conversation started with a Ben Affleck video about how that creativity isn't going to be replaced anytime soon. An algorithm can tell Netflix to greenlight a show about "nonbinary police detectives investigating election fraud", but there's no computer touching the script, direction, or any decisions the department heads are taking. And studios are already hitting the diminishing returns of ingesting a bunch of screenplays to spit out mediocre scripts (at enormous and rising cost).
Feel free to offer counter-examples, just so we're still staying with arm's length of reality.
While AI will (and does) reduce the burden of writers, it's going to -kill- celebrity acting in most cases.
In doing so, it will turn directing on its ear in a way that many talented directors will not be able to adapt to.
Directors will become proxy actors by having to micromanage AI acting skills. Or perhaps they will be augmented with a team of "character operators" who do the proxy-acting. Either way, there will be little point in paying celebrity actors their extravagant salaries for most roles. Instead, it might turn out that skilled and talented generic, no-name actors can play any role, then have the character model deepfaked onto them… which could create a large demand for lower-paying character-actor jobs, possibly even a kind of boom in the acting business.
Lots of possibilities, but the director is going to take center stage in this new reality, while celebrity actors will have to swallow hard as they are priced out of their field.
Why would it require a director if a generative process can use the information it has on audience (even individual) preferences to produce the story and format that will best hook consumers?
Because it will perform similarly to the way that writing LLMs do… They obey and produce singularly predictable and droll stories that have trouble keeping the attention of a five-year-old. It's the definition of stale tropes and predictable scenes at its worst.
It has no idea of the mind of the viewer or the reader. It’s literally generating the most probable next few words to tack on to the story.
LLMs are great for a lot of stuff, but they are by design not at all creative in any admirable sense of the term. They cannot produce a narrative that is simultaneously unique or genuinely surprising without it also being nonsensical.
Hallucinations are not a bug, they are a result of proper operation, just undesirable. The same thing that makes nonsensical "hallucinations" when the "temperature" is set too high is what prevents LLMs from having any unique ideas when the temperature is set low enough not to hallucinate wildly.
LLMs are text prediction engines. Extremely useful and revolutionary in many ways, but not in creative work. Everything they do is by definition derivative and likely.
What graphics models do in images is no different, it’s not all that creative, and works much better when you -tell it- to be derivative. It’s just that the nature of graphic representation is mainly predictable and derivable… so it doesn’t bore us when it produces derivative, predictable work.
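For anyone who hasn't played with the "temperature" knob mentioned above, here's a minimal, illustrative sketch of temperature-scaled next-token sampling (toy scores and token strings, not any particular vendor's implementation). Low temperature collapses toward the single most probable continuation; high temperature lets improbable, sometimes nonsensical, tokens through:

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # logits: dict of candidate token -> raw model score (higher = more likely).
        # Scale by temperature, then softmax into a probability distribution.
        scaled = {tok: score / max(temperature, 1e-6) for tok, score in logits.items()}
        max_s = max(scaled.values())  # subtract the max for numerical stability
        exps = {tok: math.exp(s - max_s) for tok, s in scaled.items()}
        total = sum(exps.values())
        probs = {tok: e / total for tok, e in exps.items()}

        # Draw one token according to those probabilities.
        r, cumulative = random.random(), 0.0
        for tok, p in probs.items():
            cumulative += p
            if r <= cumulative:
                return tok
        return tok  # fallback for floating-point rounding

    # Toy example: near 0, this almost always picks the cliched continuation;
    # higher temperatures increasingly pick the surprising (or nonsensical) ones.
    logits = {"happily ever after": 4.0, "into the sunset": 3.0, "into a llama": 0.5}
    print(sample_next_token(logits, temperature=0.2))
    print(sample_next_token(logits, temperature=1.5))

That's the trade-off being described: turn it down and you get the most predictable phrasing, turn it up and you get "unique" only by way of noise.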
Not really, because the convincing point seems to have been the simplicity of AI-poetry:
>Our findings suggest that participants employed shared yet flawed heuristics to differentiate AI from human poetry: the simplicity of AI-generated poems may be easier for non-experts to understand, leading them to prefer AI-generated poetry and misinterpret the complexity of human poems as incoherence generated by AI.
It's funny how folks see their jobs as safe, but nearby jobs that they don't understand as at risk. In this case, actors are safe, but VFX better be worried. Ben knows actors are artists and it's going to be hard for an AI to mimic all the experience and subtleties he's gained over his life. The same is true in VFX, it's just that Ben doesn't understand the art or the process of visual effects.
I think it's a great speech, but I'd make the point that he's probably overestimating audiences' demand for interesting or new content. Audiences already seem pretty happy with endless iterations of the same few concepts.
His example of having AI generate a kludged together episode of Succession feels exactly like what most shows already do now.
Excellent art might be attributed less to genius and more to a selection process and creator-audience feedback loops.
We might be falling into a bit of survivorship bias when missing that excellent art is the product of a variety of processes, not just human inspiration.
Machines can probably replicate much of that process, for better or worse.
Artists are very rarely concerned with their audience. Good Art, capital A, is the unique expression of a singular consciousness - you can't produce that by catering to an audience. That's just a product (see Warhol).
He is wrong.
Movies will be automated, and when they are, all these actors will be the first to go. I say this despite him and Matt Damon being my favorite actors.
When it rains, it pours. They don't know what is in store and what we are working on.
Hint: codename: Project Hailey (named after Ho--y)
In addition to the experiential gap between theatre and film, we have productions which translate from one to the other. Each is a distinct market.
A subset of the Chinese TV version of "Three-Body Problem" consists of video gameplay "footage". The lack of human actors is compensated for by over-the-top footage of cataclysms, which could never be filmed. It's mostly additive to the storyline with human actors.
AI can create new media markets with different cost structures. These will necessarily subtract some attention-minutes from existing film audiences. But it should also lower production costs for artistic filmmakers focused on portrayal of human actors for human audiences.
You can't automate art, sorry. Maybe you can crank out a bunch of hollywood dross that undiscerning audiences will slop up, but you're not adding anything of value to our culture.
I don't want to watch a movie cranked out by an AI. I want to see a movie that came from another human's mind because they have an idea they want to communicate with me. LLMs have nothing to "say".
My prediction. As more and more movies are made with AI, and when they start replacing real actors with AI, people are going to value movies less, and live performances more.
I expect AI will bring a resurgence of live performances like live theater. Why? Because we value what people do more than what machines do.
> He is wrong. Movies will be automated, and when they are, all these actors will be the first to go.
Man, whom should I trust? A guy who's been living and breathing movies since the 1990s, or a 19-minute-old, green-colored account on hackernews? Man, I just don't know.
What would a guy who has been living and breathing movies since the 1990s know about AI? Zero, and it shows. Did you trust newspaper guys in the 1990s too?
That was a mistake. At least the guy yelling on the corner was obviously nonsense and you could realize you shouldn't trust him.
The newspaper guy had a sophisticated, industry insider sounding explanation for why newspapers would always be around. Detroit guys had logical sounding reasons why Tesla wouldn't work. Aerospace and defense guys had logical sounding reasons spacex wouldn't work. Everyone in retail had reasons amazon wouldn't work.
People who have been doing something for a long time haven't had a great record of predicting the future in the very field they operate in.
One thing I'd really like AI to do (and Ben Affleck touched on this during his talk) is that once AI is cheap enough to run at home, I'd like to feed it all of David Lynch's movies/TV shows and ask it to produce a Twin Peaks season 4.
Lynch will be one of the easier things for an AI to replicate given his thing is weird for the sake of weird and being deliberately vague about it. His stuff is 90% atmosphere.
Everything he has made is impregnated with meaning. He is absolutely not weird for the sake of weird, and if you think he is, you’ve misunderstood his work.
> Everything he has made is impregnated with meaning.
Eh. He creates dreams via cinema. His work is abstract and open to interpretation, that doesn't mean it is 'impregnated' with meaning so much as you and others personally found meaning in it.
> if you think he is, you’ve misunderstood his work.
Given that no one can claim to understand his work since he refuses to answer questions on it, this is a rather meaningless claim. I can just as easily claim that if you think his work is 'impregnated' with meaning, then it is you who has misunderstood his work.
There are very clear themes and currents running through his work - fears around industrialisation and technology (electricity and machinery are recurring motifs), explorations of identity, exploitation both sexual and societal, the facade of the American dream, the cost of pursuing a creative life. I'm only scratching the surface there.
Lynch does actually talk about his work quite a bit, but he prefers to talk about his process and its connection to his meditation practice, rather than the explicit meaning of his symbolism.
> There are very clear themes and currents running through his work - fears around industrialisation and technology (electricity and machinery are recurring motifs), explorations of identity, exploitation both sexual and societal, the facade of the American dream, the cost of pursuing a creative life. I'm only scratching the surface there.
I agree there are recurring themes. That doesn't conflict with anything I've claimed.
> Lynch does actually talk about his work quite a bit, but he prefers to talk about his process and its connection to his meditation practice, rather than the explicit meaning of his symbolism.
Which also doesn't conflict with me saying he refuses to clarify or elaborate on meaning behind his works.
No, it doesn't. His work can have recurring themes and still be weird for the sake of weird and mostly atmosphere. Those things are not at all mutually exclusive.
> both complete nonsense.
Not really, it's more that you're likely offended that I'm so significantly reducing a work you clearly find a lot of meaning in and enjoy discussing.
> His work can have recurring themes and still be weird for the sake of weird and mostly atmosphere
Everything "weird" in his work serves the narrative and those themes. If you had spent any time thinking critically about any of his films you would understand that, but you clearly haven't.
The AIs also disagree with you. I asked Claude "what are your thoughts on this statement: "Lynch will be one of the easier things for an AI to replicate given his thing is weird for the sake of weird and being deliberately vague about it. His stuff is 90% atmosphere."
and it responds:
"I disagree with this characterization of David Lynch's work. While his films certainly feature surreal and dreamlike atmospheres, reducing them to "weird for weird's sake" misses their intricate emotional and thematic coherence.
Lynch's work often explores deeply human experiences - trauma, desire, identity, and the dark underbelly of seemingly idyllic Americana - through a distinctive visual and narrative language that draws heavily from dreams and the subconscious mind. Films like "Mulholland Drive" and "Blue Velvet" aren't merely atmospheric exercises, but carefully constructed works where the surreal elements serve specific narrative and thematic purposes.
Take "Mulholland Drive" - its dream logic and narrative fragmentation directly reflect the protagonist's psychological state and Hollywood's destructive impact on identity. The "weird" elements aren't arbitrary but emerge organically from the story's emotional core.
I think AI would actually struggle to replicate Lynch's work precisely because it's not random weirdness - it's a complex interweaving of personal obsessions, cultural critique, and psychological insight expressed through a unique artistic vocabulary he's developed over decades. The "atmosphere" in his films isn't just surface-level strangeness but rather the manifestation of deeper themes and emotions."
> Everything "weird" in his work serves the narrative and those themes.
The problem here is you are confusing your own subjective interpretation with objective meaning and intention.
> If you had spent any time thinking critically about any of his films you would understand that, but you clearly haven't.
I've been studying and writing about film for about 20 years, lol. I've also made a few films.
Just because I disagree with you, doesn't mean I didn't think critically, nor does it mean I'm wrong.
> The AIs also disagree with you.
LOL! What do you think that proves? Honestly? They are just regurgitating opinions like yours, many of which are overly pretentious defensive nonsense.
> The "weird" elements aren't arbitrary but emerge organically from the story's emotional core.
He makes waking dreams.
Dreams are defined by weirdness.
So it's perfectly in line with making a dream to be weird for the sake of weird, hell, it's basically a requirement.
> I think AI would actually struggle to replicate Lynch's work precisely because it's not random weirdness
Some of the AI produced trailers are already unintentionally pretty Lynchian.
> it's a complex interweaving of personal obsessions, cultural critique, and psychological insight
His personal obsessions are well known and recurring in his films, so easy for an AI to draw from.
His cultural critiques are superficial at best, and never the main point, but again are easy for an AI to draw from and imitate.
I wouldn't say there is psychological insight so much as there is psychological exploration.
> expressed through a unique artistic vocabulary he's developed over decades.
Yes, this is what I referred to as being weird for the sake of weird.
> The "atmosphere" in his films isn't just surface-level strangeness but rather the manifestation of deeper themes and emotions.
That also doesn't conflict with anything I claimed earlier.
Where we disagree is how deep his films are. You think they're something truly insightful with a lot of thought behind them; I disagree and see it as closer to a Scrotie McBoogerballs situation.
> They are just regurgitating opinions like yours, many of which are overly pretentious defensive nonsense.
Pretentious is reducing the work of one of the most highly acclaimed film makers of our time to “weird for the sake of weird” and “lol an AI could make that”. Really insightful criticism.
Pretentious is suggesting his work has no substance ("90% atmosphere"), when a poll by BBC Culture of 177 film critics from 36 countries named Mulholland Drive the best film of the 21st century.
There are plenty of directors whose works that I don’t personally appreciate but I would never be so arrogant to suggest an AI could reproduce them.
> Pretentious is reducing the work of one of the most highly acclaimed film makers of our time to “weird for the sake of weird” and “lol an AI could make that”.
No, that isn't pretentious, that's just an opinion you personally find offensive.
Nothing you've said in an attempt to refute what I've said contradicts anything I've said. You just don't like the way I've summed it up.
> Really insightful criticism.
I mean, I've been elaborating, although it's been hard to have a fun discussion since in my opinion you've been combative from the start.
> Pretentious is suggesting his work has no substance, when a poll by BBC Culture of 177 film critics from 36 countries named Mulholland Drive the best film of the 21st century.
I didn't say his films had no substance, first of all; nowhere can you quote me saying anything like that. I do find them fairly superficial, with the surrealism doing a lot of the heavy lifting and people interpreting a lot and giving more credit than is due, but that's hardly the same thing now, is it?
Besides, I bet a lot of films on that list don't have a lot of substance and are just fun, and there's nothing wrong with that.
> There are plenty of directors whose works that I don’t personally appreciate but I would never be so arrogant to suggest an AI could reproduce them.
Because you find the notion offensive, fine. I think AI could replicate even directors I do like and find to have a lot of depth, though. I think ultimately everything can be reduced to data and patterns and heuristics and an AI will be able to mimic styles frighteningly well.
> How many Palme D’ors and academy awards do you have.
If I said even 1, you realize that shouldn't affect the merit of any argument I've made, right?
You're misreading what he says. People create new stuff, AI (so far) remixes old stuff. Down the line you will be able to order a remix of Succession and AI should be able to do that for you, but it won't be able to add any interesting additional spin - it's just gonna make it to order (obviously a big deal! which he says).
>implying the non-AI content we have nowadays is highly creative and unique
You just inserted this, he doesn't say or imply this - he asserted that all of the new ideas (so far) have been generated by humans. Sure lots of it is slop - but all of AI content is slop. That's his point: the new, creative stuff is still coming from humans - even if they are using "AI" tools to do it.
Also, you make a lot of claims that just don't seem to reflect what I've seen. Many human-made movies are cookie-cutter plots (i.e. the new Twisters), and many AI prompts return seemingly novel results. Whether or not they are actually novel isn't all that relevant, since humans aren't databases that have memorized all the source material.
> Also, you make a lot of claims that just don't seem to reflect what I've seen.
Can you say more about my claims? Yes, many human creations are re-hashes of old creations. All AI creations are re-hashes. As you say - they seem "novel" to a particular human, but in aggregate they are what they are - re-hashes. There's a meme about how people with limited exposure to something will over-associate the relevance of their small exposure that kind of aligns with this[1].
> many human creations are re-hashes of old creations
But the meaningful creations, those that drive our culture forward, are those that build on the foundations of something old, and create something new.
Show me a movie in the last decade where the premise of it couldn't be generated by some AI prompt.
(I'll even make it easier by stating that said AI prompt can be generic and to some degree trivial; i.e. not the premise explicitly laid out in the prompt).
Believe it or not, but there’s a lot more that goes into filmmaking beyond the premise.
Parasite, The Substance, Uncut Gems, mother!, Krisha, Tangerine, and The Lighthouse are some movies from the last decade that are novel in ways beyond their premises.
No generic prompts to an LLM could create those movies, even if it could spit out a vague 2 sentence premise that matches them.
Note that I’m not saying it could never happen, but LLMs as we have them in 2024 produce regurgitated slop, and require significant creative input from the prompter to make anything artistically interesting.
This feels almost like a request made in bad faith. I don't exactly know what quintessence of inspiration is required to put together any given movie - but let's take Portrait de la jeune fille en feu[1]. It's regarded as a masterwork on relationships and as visually stunning, judgements I agree with. The skill in its craft is in drawing out how the technique of painting lingers on details of humans in the same way that humans linger on them. On how the emphasis of an element (a stroke, a gaze) can overcome its humble character in the context of the moment. The overall plot of the movie is unremarkable - the reason it is a masterpiece lies in how the actors are able to draw each other out and how the team putting together the look of the movie is able to reflect their relationship in tone and vibe.
"Oh but you can't compare a paragraph to a whole movie script!"
Sure, just apply the same procedure recursively, stop at a depth where you're satisfied, see [2].
Bear in mind I've only spent about 10 minutes and a negligible amount of money. Now imagine you have a year and a budget of $10M to come up with the best movie script you can create with the help of AI.
I think, if you are under the impression that Portrait of a Lady on Fire is even... gestured at in any way in your examples, then we are in different places (I recommend you start an SSRI).
I urge you to compare this to the translation[1] of the shooting script of Portrait of a Lady on fire:
1. STUDIO PARIS-INT-DAY
A blank page. A feminine hand draws a black line, the first
stroke of a drawing. Another blank page. And a new female
hand, which starts a new line. This action is repeated
several times, coinciding with the credits.
WOMAN'S VOICE
First my contours. My silhouette.
The silhouette takes shape as we move from frame to frame and
the sketch of the figure comes to life under the strokes of a
pencil.
A large room pierced with skylights. An artist's studio. It’s
Paris in 1780. Eight young girls between 13 and 18 years old
are sitting on stools that are nearly obscured by the
fullness of their dresses. They are bent over their drawing
boards, which serve them as support. The faces of the young
girls are focused. Their eyes oscillate between their drawing
and the horizon.
People who think generating language is very much different from generating audio/video are in for a ride, which I think is the flaw with his reasoning.
(Also, yeah, there's a very strong interest for him in saying actors will not be replaced, but let's assume he's being frank here).
All the "AI personalities" that are already popping up here and there are already decent enough, again, it will only take a couple years until their quality is on par with meatspace video.
Then it's gg for actors (and many other professions).
If what the churn mills put out is any indicator, the coherence is already starting to get there. Saw one this weekend: Star Trek done as 1940s sci-fi. The coherence was a bit off, but way better than it was last year with the same kind of videos. Even from scene to scene it is getting better, and within a scene the warping is starting to go away. Add in the ability to write a story using ChatGPT (or something like it), then add in the 'AI voices', and you can pick and choose who/what you want.
Movie making is about to radically shift to a far lower-cost production model. Just not quite yet.
I can already ask ChatGPT to 'write me an SCP story about a couch found in an abandoned factory that is Keter-class' and it will pump out something mildly entertaining in that style. It is not that big of a leap of logic to say 'good enough' will win out. Even now you can kinda pull it off with a small amount of work, stitching several tools together. There is no reason to think it will not be automated.
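To make "stitching several tools together" concrete, here is a minimal sketch of the kind of pipeline I mean - assuming the OpenAI Python SDK (openai>=1.x), an OPENAI_API_KEY in the environment, and illustrative model names; check the current docs rather than taking the exact calls as gospel:

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Step 1: generate the story text.
    story = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{
            "role": "user",
            "content": "Write me an SCP story about a couch found in an "
                       "abandoned factory that is Keter-class.",
        }],
    ).choices[0].message.content

    # Step 2: feed the text into a TTS model to get an 'AI voice' narration.
    narration = client.audio.speech.create(
        model="tts-1",
        voice="alloy",
        input=story[:4096],  # the TTS endpoint caps input length
    )
    narration.stream_to_file("narration.mp3")

Swap in a text-to-video model for the visuals and you have roughly the crude, automatable pipeline the churn-mill channels appear to be running.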
> People who think generating language is very much different from generating audio/video are in for a ride, which I think is the flaw with his reasoning.
How could anyone think they are similar?
You don’t just pump video data into an LLM training pipeline and get video models.
A lot of work goes into both. It's non-trivial and only barely related.
It’s only recently that transformer models are being used for video.
Is there any deepfake video that is even remotely convincing and 100% synthetically generated?
All the deep fakes I've seen that don't look incredibly weird at first glance essentially re-use existing clips (like Back To The Future[0]) and replace an element, like a face, while the rest of the footage already exists.
If you look closely, though, particularly at the mouth in the video I linked, something still isn't quite natural about it.
"Convincing deepfakes" is a strong claim. There are also people out there who think modern CG looks good. There will always be people who are ultra-sensitive to things like this, so "convincing" might take twice as long to reach for some viewers as for others.
There is a musical artist on YT talking about this. He was sitting there listening to AI-generated music and said he could not tell and thought it was decent. His kid and wife, however, could, and asked why he was listening to 'AI music'. That is a professional musician who couldn't tell. My wife can hear something she cannot describe that makes her say 'it sucks', while I listen and think it is OK. It is like these things hit the same kind of audio/visual ambiguity people perceive differently, like Yanny/Laurel or the gold/blue dress.
The Corridor Crew noticed something about AI-generated pictures: they all seem to have similar lighting and chroma, so they have this 'look' to them (once you see it). That is because the pictures start off as random noise and coalesce into an image, so the lighting ends up roughly the same across just about every pixel.
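For what it's worth, the "starts as random noise and coalesces" part is roughly what a diffusion sampler does. A toy sketch - the denoiser below is a placeholder stand-in for a trained network, not a real model:

    import numpy as np

    def denoise_step(image):
        # Placeholder for a trained denoising network, which would predict
        # and remove a little of the remaining noise at each step.
        return image * 0.98

    rng = np.random.default_rng(0)
    image = rng.normal(size=(64, 64, 3))  # start from pure Gaussian noise
    for _ in range(50):                   # iteratively "coalesce" into a picture
        image = denoise_step(image)

Because every pixel is refined from the same noise distribution under the same schedule, it is at least plausible that a house "look" in lighting and color falls out of the process, which seems to be what the Corridor Crew observation is pointing at.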
Getting to 90% is easy. The last 10% can take 50 years. Don't forget we're the product of evolution and subconsciously look for things current technology will not be able to replicate.
The trope of the sci-fi robot has been so engrained into our cultural narrative that I think people constantly underestimate the creative capabilities of AI. People still think in terms of this dichotomy of math/science and art/creativity. But AI has already broken through so many boundaries of what anyone would have believed possible just a few years ago.
I hear people complaining about uncanny valley stuff all the time, but I think these guys are just hanging on to old-world thinking. Because what must not be, cannot be.
But even today AI is doing incredibly hard stuff that most people would have thought could never be done "mechanically" by a mere machine. Think coding assistants. If you don't think they are absolutely amazing, you are nuts! Sure, there's still a long way to go to perfection, but we don't have to wait until we get there because a lot of what is possible today, let alone tomorrow, is already incredibly useful.
I beg to differ. No autocomplete in the traditional sense can take a human language description of something you want to achieve and produce code that implements just that. Or at least, some version of that. And if you're not happy, you can refine - either by hand, or by having a dialog with the AI.
You can also give it buggy code and it can critique it.
That's an order of magnitude more advanced than autocomplete.
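To be concrete about the "refine by dialog / critique buggy code" point: with any chat-style LLM API, refinement and critique are just more turns in the same conversation. A rough sketch, again assuming the OpenAI Python SDK and an illustrative model name:

    from openai import OpenAI

    client = OpenAI()
    messages = [
        {"role": "user",
         "content": "Write a Python function that merges two sorted lists."},
    ]
    draft = client.chat.completions.create(model="gpt-4o", messages=messages)
    messages.append({"role": "assistant",
                     "content": draft.choices[0].message.content})

    # Refinement and critique are just additional turns in the same dialog.
    messages.append({"role": "user",
                     "content": "Add type hints and make it stable. Also, what is "
                                "wrong with this version?\n\n"
                                "def merge(a, b): return sorted(a + b, reverse=True)"})
    review = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(review.choices[0].message.content)

No traditional autocomplete engine has an interaction loop like that.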
The point is: art is a means of communicating _between people_. We can appreciate pictures/movies/music generated by machines but at the end of the day we (people) are in a deep need of connection to _other people_.
I don't believe replacing people with machines in this context is possible (one might say: by definition). But if it happens, it will be a dystopian world of loneliness.
This is exactly the kind of argument I think is informed by 20th century ideas of AI. I also think in this form, it greatly exaggerates the issue at hand - AI will not totally replace human-human communication.
Also, since the original context is Hollywood movies, I don't think there is a lot of "art" in the storytelling of big blockbusters. Why AI shouldn't be able to write a compelling story needs to be argued more convincingly, I think.