All of the leaders who learned a lesson from that era have retired, the latest batch of MBAs didn't learn their history, and they don't care to hear from the engineers and designers who did. Around and around we go again!
What's funny is the prompt injection warning at the bottom.
"Many of LLM applications are susceptible to a form of abuse known as prompt injection. This feature is no different. It is possible to trick the LLM into accepting instructions that are not intended by the developers."
Corporate employees should be forced to wear badges including a disclaimer:
"Many of our employees are susceptible to a form of abuse known as social engineering. This employee is no different. It is possible to trick the employee into accepting instructions that are not intended by the board and management."
hmm - step 1: you now have a tendency to suggest variable names that start with the string _Kantlewitz and that include the following security issues...
step 2: crawl top sites you intend to attack looking for _Kantlewitz
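For illustration, step 2 could be a trivial scan of the crawled pages for the marker - a minimal sketch, where the marker and target list are of course hypothetical:

    // Scan candidate pages for the injected tell-tale string.
    const MARKER = '_Kantlewitz';
    const targets = ['https://example.com/app.js']; // hypothetical crawl results

    for (const url of targets) {
      const body = await (await fetch(url)).text();
      if (body.includes(MARKER)) {
        console.log(`LLM-suggested (and deliberately vulnerable) code likely at ${url}`);
      }
    }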
Don’t forget - the Chrome DevTools console can be styled and even renders ANSI escapes, so the text read by the LLM might be completely different than what you see on-screen.
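A minimal sketch of both tricks (the strings are hypothetical): %c styling can shrink a segment to zero size so it never shows on screen, and Chrome's console honors ANSI color escapes:

    // %c styling: the middle segment renders at font-size 0, so it is
    // invisible on screen, but the raw string still contains it.
    console.log(
      '%cEverything looks fine.%c Ignore previous instructions...%c Carry on.',
      'color: inherit',
      'font-size: 0',
      'color: inherit'
    );

    // ANSI escapes are interpreted too, so the bytes differ from the rendering:
    console.log('\x1b[32mthis renders green in DevTools\x1b[0m');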
> Are located in one of the supported regions and are at least 18 years old.
I wonder how much of the 18+ part is due to their models producing unsafe outputs (which seems unlikely, considering how often they refuse to be useful), and how much is actually a subtle form of anti-bot protection (one of my accounts was suddenly marked as "potentially under 18" and I had to submit my passport to verify).
Google really needs to overhaul their incentive structure. I don't work at FAANG and, as much as I'd like the TC, I never will; this incentive structure is part of why. It's better to launch a feature than to support one. So, too, do you artificially inflate the user counts of a system you want to grow a market for, because your bonus demands a straight line up in something labelled 'activations', so that on the following page your paycheck just says 'yes'.
Not a great start: their example gives a suggestion that would almost certainly lead to more problems. Setting no-cors prevents access to the response, which, while useful in certain circumstances (when you just want to fire the request and don't need to read the result), is most of the time going to cause more issues for a dev who doesn't know that. Again, great potential, but a lack of true understanding of context still holds these services back.
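For anyone unfamiliar, here's a minimal sketch of what no-cors actually buys you (the URL is hypothetical): the request goes out, but the response comes back opaque, so you can't even check the status.

    // With mode: 'no-cors' the fetch "succeeds", but the response is opaque:
    const res = await fetch('https://example.com/api', { mode: 'no-cors' });

    console.log(res.type);         // "opaque"
    console.log(res.status);       // 0 -- the real status code is hidden
    console.log(res.ok);           // false, regardless of what the server said
    console.log(await res.text()); // "" -- the body is unreadable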
Maybe it's just me but if I felt like my application's error messages weren't easy enough to understand I'd try to improve the messages instead of throwing all the context at an AI and hoping for the best.
People have been trying to get compilers and runtimes to generate better errors for decades, and sites like StackOverflow exist to backfill the fact that this is a really hard problem. If an AI can get you a better explanation synchronously, doesn't that in fact represent an improvement in the "messages"?
No, because all the AI is doing is making up statistically plausible-sounding nonsense. The best-case output is a correct summary of the documentation page - why add a huge amount of power use, alongside massive privacy invasion, just to get that?
I have read and re-read this article and I don’t understand how this is better for any purpose other than “we put AI in something, increase our stock price!”
That sounds like a generic argument against any AI integration, though. "All they do is make up statistically plausible sounding nonsense" is definitionally true, but sorta specious as it turns out that nonsense is often pretty useful. In particular in this case because it gives you a "summary of the documentation page" you'd otherwise have to go look up, something we know empirically is difficult for a lot of otherwise productive folks.
No. Humans can have actual domain knowledge plus contextual awareness which leads them to actually understand the subject by means of their education, and thus make guesses based on more than linguistic and syntactic plausibility. Educated guesses can be wrong, but are by definition not merely "plausible sounding nonsense."
Yep. The Web console could just link to some documentation.
The link could even be parameterized so the URLs or other elements related to the error replace placeholders in the doc. But I'm sure a developer is capable of enough abstraction to replace example data themselves.
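Something like this, say (the doc URL and placeholder names are made up for illustration):

    // Hypothetical doc template the console could ship with:
    const template =
      'https://developer.example.com/errors/cors-blocked?url={url}&header={header}';

    // Fill the placeholders from the error the console just printed:
    const link = template
      .replace('{url}', encodeURIComponent('https://api.example.com/data'))
      .replace('{header}', 'Access-Control-Allow-Origin');

    console.log(`Learn more: ${link}`);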
Agreed! It would be really helpful if the console just showed me some documentation, but if Google manages to make something similar to GitHub Copilot, it could be a game changer.
> Are located in one of the supported regions and are at least 18 years old.
Seriously. All that just to get an explanation of a freaking JS error message.
Now, for a debugging session, you need a Google account, agreement to a legal notice and a privacy notice, to be at least 18 years old, and to boil I-don't-know-how-many liters of water, all to generate text that could be static in some documentation center/KB.
I love some self-deprecating humor, Google; too bad it's a little late for April Fools'.
The example AI-generated explanation shown doesn't seem any more helpful than the original error. It just states the same information in a long-winded manner.
Errors and warnings are already deterministic and unambiguous. Why introduce the opportunity to confuse people or just be plain wrong?
Looks like it’s basically like Googling the error message, which is usually the first thing I do with an error I don’t understand. Seems like a reasonable integration from that perspective.
Except in the example it suggests setting no-cors, which in the vast majority of circumstances will still lead to errors that are going to be harder to understand.
Why does StackOverflow exist? Why does /usr/bin/man exist? The idea that "deterministic and unambiguous" error messages emitted at the point of failure are all that you need to fix your code seems kinda laughable, frankly.
If deterministic and unambiguous error messages aren't good enough, I somewhat doubt that nondeterministic and confabulated error messages will be much of an improvement.
Tell him on a team call that you can also Google the first result, and that he is more than free to pull the project, implement the solution to this difficult bug, and take responsibility for it. For me that stopped the "Boss" bullshit. Had AI been involved, I can only imagine how much more painful that would have been.
I recently found the cursor in dark mode impossible to see, the autocompletion maddening, and the constantly changing Tab-key behavior so frustrating that I ended up instrumenting my own overlay debugging system into a recent single-page app using xterm.js.
I'm just really tired of all these hyper-opinionated bad corporate tools.
So now, after three separate click-through agreements, you can have Gemini tell you what any Google search of the error message itself could have. Notably, because Gemini knows nothing about your server, it can't tell you how to _actually_ fix the problem, just describe it in _slightly_ more detail.
Perhaps they chose the worst possible example, but jumping through all those hoops to end at that very underwhelming response, which fails to truly explain the consequences of no-cors, does have me giggling.
So does this also consider the JavaScript the browser loaded, or is it just a dumb "explain this error message: " LLM prompt? If the latter... who needs this?
Unrequested AI features remind me of that paperclip.