I think this is already happening; the monthly HN "Who's Hiring" threads have felt like attending a funeral since the Microsoft/Google/Twitter mega-layoffs two years ago.
It's definitely bad, although attributing the state of the job market to AI is a mistake. As a result of wider economic conditions, money is expensive and companies can't justify borrowing to carry massive headcount, which drives down demand for software engineers. This created a glut of qualified software developers seeking employment, only to find that the music had stopped and there weren't any chairs left. It coincides with an ever-growing number of new CS graduates seeking their place in the market. In other words, the market is very competitive, and especially so for fresh graduates.
To make matters worse, the big investment opportunity right now is AI/AGI/LLM/Agents and so forth. As money flows toward AI-focused firms, they spend it on GPUs and electricity, or pay NVIDIA/OpenAI/.. to do that for them. Some jobs have opened up in this space, but only a relatively small share of that spending goes to labour.
Regardless of that reality, circadian rhythms have been extensively studied, and there is more evidence than just this study to support the claim they're making. Patterns and routines are generally beneficial as a rule.
That being said, there is a lot of diversity amongst us, and I'm quite sure that when you factor in (epi-)genetic variation - particularly in the short to medium term - there are some unexpected advantage/cost ratios to wildly different strategies.
You are willfully taking more away from that than was stated. Such a fallacy is a favorite trope among science denialists, and those who would distort the objectivity of research for misinformed (and/or dishonest) ends. I don’t know what, if any, specific motivations you have, but I think it’s worth pointing out.
Their point was that there are other confounding variables, and therefore the parent comment doesn't negate the grandparent comment. I agree, and I am absolutely not a science denialist.
What do you mean? The commenter just pointed out (in a joking manner) that, even if some confounding factors were taken into account, the result might still be caused by other confounding factors. That's a serious critique, not a fallacy.
The scientific method explicitly requires peer review and feedback, in part because methodological flaws may have been overlooked by those who designed and/or performed the original study.
Ad-hominem attacks against a person offering good-faith methodological criticism (such as calling them a "science denialist", or accusing them of being "misinformed" and/or "dishonest") are the behavior of someone who seeks to defend the results of a study more strongly than to discover the truth.
If you have a critique of my methodological criticism itself, by all means share it. But the entire scientific community would be better off if we could do away with this kind of emotionally charged, quasi-religious dogma that seeks to suppress legitimate scientific concerns through social ostracism.
---------------
Compare and contrast the above with what follows:
---------------
Your post reminds me of the reaction of the Catholic Church to Copernicus's assertions of a heliocentric solar system.
---------------
Notice how sticking to objective, unemotional, and impersonal language in the first section is more conducive to earnest scientific inquiry than the personal attack in the second section?
I did this in my final days at an agency. Built a db search backend to drive a product query UI for a client. 10 years later I randomly met somebody that worked for the client and they were still using it. Kinda cool, actually.
Sometimes you plant a seed and watch it wither, sometimes you walk away and return to a tree. Moral of the story? Just plant seeds
The lack of releases from OpenAI makes me more bullish on them. Clearly they have something big they're working on; they don't care about competing with current models.
Releasing powerful, novel models like Sora shortly before a major election is just asking for trouble.
I believe they are restraining themselves in order to stay somewhat in control of the narrative. Donald Trump spewing ever more believable video deepfakes on Twitter would backfire in terms of regulation.
Besides, isn't it over-the-top expensive for a few seconds of video? The election is a factor, but even without it I don't know if there's much of a business plan there. What would they have to charge, $20/minute? And how many minutes of experimenting before you get a decent result?
The impressive thing about GPT-4o is how well it performs on most metrics. GPT-3.5 was already very impressive, and most other companies are just catching up now. GPT-4o is a huge step above.
GPT-4o replaced GPT-4, not 3.5, so it’s not “a huge step above” it, at least not subjectively. It is much faster though, so at least it’s got that going.
The voice thing is a potential killer feature though, I can’t wait to try it, and to have my kids use it.
I would argue that the biggest novelty is being able to share your screen or camera feed with the live voice - and there's no announced timeline on that yet at all.
Yeah, but then Claude 3.5 Sonnet came out, so they took the lead.
Tangentially speaking, having no skin in this game, it's extremely fun to watch the model wars. I kinda wish I'd started dabbling in that area, rather than being mostly an infrastructure/backend fella. Feels like I'd already be way behind, though.
I mean, consistent, mediocre releases are exactly what we have gotten out of OpenAI.
But we know they started training Orion in ~May. We know it takes months to train a frontier model. Lack of release isn't promising or worrying, it's just what one should expect. What is promising is the leaks about the high-quality synthetic data that Orion is training on. And the fact that OpenAI seems to be ahead of all the other labs which are only just now beginning training runs on next-gen models. OpenAI seems to have a lead on compute and on algorithmic innovation. A promising combination if there ever was one.
Web designers used to be the combination of what you'd today call UX designers and frontend developers. The technical knowledge and understanding in the UX-only tribe that seems to have become the majority is abysmal; they are nearer to product management than to engineering. The capability of their current tools lets them cheaply spit out high-fidelity prototypes that are good enough to give the technically clueless business completely false impressions about the cost of the promised capabilities, ever increasing the demand for, and sophistication of, the actual frontend engineering work.
I don't see that being automated by language models in the sense that it would enable the technically unsophisticated designers of today to produce even an actually deployable UI, let alone one that is maintainable, so that a change in requirements would not lead back to square one.
I'm willing to be a spectator while businesses attempt to chase that folly, though. Once systems start to degrade, demand for engineering will go up.