Hacker News
Add "fucking" to your Google searches to neutralize AI summaries (gizmodo.com)
726 points by jsheard 17 hours ago | hide | past | favorite | 331 comments





Every time some product or service introduces AI (or more accurately shoves it down our throats) people start looking for a way to get rid of it.

It's so strange how much money and time companies are pouring into "features" that the public continues to reject at every opportunity.

At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work and allow companies to use the massive amounts of data they've collected about us against us more effectively. All the AI being shoehorned into products and services now are mostly to test, improve, and advertise for the AI being used, not to provide any value for users who'd rather have nothing to do with it.


I work on a project where customers need to fill out a form to receive help. We introduced an AI chatbot that helps them fill out the form by just talking through the problem and answering questions. The form is then filled out automatically for the customer to review before submitting.

Personally I find it slower than just doing it manually but it has resulted in the form being correct more often now and has a lot of usage. There is also a big button when the chat opens that you can click to just fill it out manually.

It has its place, that place just isn't everywhere and the only option.


I did a project a while back that created a wizard to fill in a form - I also found it much easier to simply complete the form, but when we demonstrated it to target users they nearly cried with relief. It was a good reminder of the importance of knowing what users actually want.

I should go back to look at that and see if we could incorporate an easy ChatBot as an improvement.


It's great when it works. Yesterday I needed to contact support for a company but all they had was a chatbot. I explained what information I was looking for and it linked me to something completely irrelevant and asked if this solved my problem - with big buttons to reply yes/no. I pressed "no", which simply caused a message with "no" to be sent from me in the chat. The bot replied with "You're welcome!". I wrote a manual clarification that this did not solve my issue. The bot answered "You're welcome". Luckily, I found that ignoring this and asking the question again did work.

This can often be done without any AI (I suppose you mean LLMs here but I'm not sure since the term has been diluted so much).

I'm sure there's a time and place for these things, but this sounds very much like the echo chamber I hear at work all the time.

Someone has a 'friend' who has a totally-not-publicly-visible form where a chat bot interacts with the form and helps the user fill the form in.

...and users love it.

However, when really pressed, I've yet to encounter someone who can actually tell me specifically

1) What form it is (i.e. can I see it?)

2) How much effort it was to build that feature.

...because, the problem with this story is that what you're describing is a pretty hard problem to solve:

- An agent interacts with a user.

- The agent has free rein to fill out the form fields.

- Guided by the user, the agent helps fill out form fields in a way which is both faster and more accurate than users typing into the fields themselves.

- At any time the user can opt to stop interacting with the agent and fill in the fields themselves, and the agent must understand what's happened independently of the chat context, i.e. the form state has to be part of the chat bot's context.

- At the end, the details filled in by the agent are distinguished from user inputs for user review.

It's not a trivial problem. It sounds like a trivial problem: the agent asks 'what sort of user are you?', parses the answer into one of three enum values (Client, Foo, Bar), and sets the field 'user type' to that value via a custom hook.

However, when you try to actually build such a system (as I have), then there are a lot of complicated edge cases, and users HATE it when the bot does the wrong thing, especially when they're primed to click 'that looks good to me' without actually reading what the agent did.

So.

Can you share an example?

What does 'and has a lot of usage' mean in this context? Has it increased the number of people filling in the form, or completing it correctly (or both)?

I'd love to see one that users like, because, oh boy, did they HATE the one we built.

At the end of the day, smart validation hints on form input fields are a lot easier to implement and are well understood by users of all types, in my experience; it's just generally a better, normal way of improving form conversion rates, one that is well documented, understood, and measurable using analytics.

...unless you specifically need to add "uses AI" to your slide deck for your next round of funding.


My partner has dyslexia and finds forms overwhelming. Chatbots break this down and (I suspect) give the same feeling of guidance. As for specific examples, the NHS has some terribly overwhelming forms and processes - image search IAPTUS.

Another example: I was part of a team that created a chatbot which helped call centre operators navigate internal systems. If a customer called in, we would pick up on keywords, which provided quick links for the operator and pre-filled details like accounts etc. The operator could type questions too, which would bring up the relevant docs or links. I did think looking into the UX would've been a better use of time and would have solved more problems, as the system was chaos, but "client wants". What we built in the end did work well and cut onboarding and training by two weeks.


I am totally in the same boat but also I do suspect it is a minority. It’s the same way that some people really want open source bootloaders, but 99.99% of people do not care at all. Maybe AI assistants in random places just aren’t that compatible with people on HN but are possibly useful for a lot of people not on HN?

> It’s the same way that some people really want open source bootloaders, but 99.99% of people do not care at all.

In fairness to the 99.99% they don't even know what a bootloader is and if they understood the situation and the risks many of them would also favor an open option.

I don't think the rejection of AI is primarily a HN thing though. It's my non-tech friends and family who have been most vocal in complaining about it. The folks here are more likely to have browser extensions and other workarounds or know about alternative services that don't force AI on you in the first place.


> In fairness to the 99.99% they don't even know what a bootloader is

True. And awareness and education is very important for useful discourse.

> if they understood the situation and the risks many of them would also favor an open option.

Raising my hand as one of those people who knows what a bootloader is and also doesn't currently care about an open option. Maybe at some time in the future I will again, but for now it is very far down on my list of concerns.

I suspect whether or not AI is useful/high-quality/"good"/etc. is just not important to most people at the moment. If they are laid off from their jobs in the future and replaced with an AI, I suspect they'll start caring more.

But in the general case, I've found "caring ahead-of-time" (for want of a better phrase) is a very hard thing to encourage, despite the fact that it's one of the most effective things you can do if you direct it at the "right" avenues (i.e. those that will affect you directly in the future).


> one of the most effective things you can do if you direct it in the “right” avenues

The people I know who “worry” are terrible about predicting negative events that impact them. I think that’s why it’s uncommon, lots of negative health outcomes and almost zero actual benefits.

Instead simply aiming for reasonable levels of resiliency in health, finances, etc tends to cover a huge range of issues. In that context having a preference for open systems makes a lot of sense, but focusing a lot of effort on it doesn’t.


I agree, but doesn't that basically mean there are two camps: people who dislike it, and people who don't care? I also agree with GP in that there isn't any visible 3rd camp: people who want it. If google themselves thought people wanted this, they wouldn't need to make an un-dismissable popup in all of their products with one button, "yes please enable gemini for me", in order for people to use it.

I'm sure google thinks that people have some sort of bias, and that if they force people to use it they'll come to like it (just like google plus), but this also shows how much google looks down on the free will of its users.


No, I like the AI summaries and I had assumed I was in the silent majority. People like convenience and it usually answers correctly.

You are in the silent majority. It's a costly feature for Google and they aren't the type to take a large risk of pushing out something unpopular to their most profitable product.

AI confidence has been dwindling[0][1] so I don't think that's the biggest contributor.

I do think it's as simple as appealing to stakeholders in whatever way they can, regardless of customer satisfaction. As we've seen as of late, the stock markets are completely antithetical to the improvement of people's lives.

The first point does indeed come into play, because most people often don't throw enough of a fuss against it. But everything has some breaking point; Microsoft's horribly launched Copilot for Office 365 showed one of them.

[0]: https://www.warc.com/content/feed/ai-is-a-turn-off-for-consu...

[1]: https://hbr.org/2025/01/research-consumers-dont-want-ai-to-s...


I agree with this. I'm very surprised when I see someone blindly trust whatever the AI summary says in a google query, because I myself have internalized a long time ago to strongly distrust it.

I’ve seen quite a few posts on Reddit with people asking questions like “Is a Mazda 2 really faster than a Civic Type R?!? ChatGPT told me it is” and it’s some complete nonsense numbers that could’ve been fact checked in about 5 seconds.

I don’t think the little “ChatGPT might be wrong, you should check” disclaimer is doing very much.


It's a good sign that people are even going to reddit and asking for confirmation of something that seemed suspicious to them. Sure, many of them could have googled for those answers themselves, but part of the problem is how unreliable and dishonest Google has become.

Reddit sure isn't an ideal place for fact checks. It's full of PR bots and shills, but at least there are still humans commenting and I can't fault people for doing what they can in the best way they know how.


For every person that asks I imagine there are a bunch that just assume it must be true because the computer told them

Or they could have just used ChatGPT and typed “provide citations…”

https://chatgpt.com/share/679d7f5f-d508-8010-94fa-df9d554b62...

(and then I just remembered that the free version doesn’t have web search)


To me it looks like for most things I search it just verbatim is the top answer from Stack Overflow.

I don't think so. I have many nontechnical friends who are furious at having to deal with bad AI, whether it's a stupid chatbot that they have to talk to instead of a real person or Google "AI overviews" that often get things completely wrong.

> Maybe AI assistants in random places just aren’t that compatible with people on HN but are possibly useful for a lot of people not on HN?

Coincidentally today, I received an automated text from my health care entity along the lines of, "Please recognize this number as from us. Our AI will be calling you to discuss your health."

No. I'm not going to have a personal discussion with an AI.


> Coincidentally today, I received an automated text from my health care entity along the lines of, "Please recognize this number as from us. Our AI will be calling you to discuss your health."

That sets off super strong scam vibes for me... Our banking and medical industries here push phishing warnings at people so hard that they worry even legitimate communication that couldn't possibly be a scam is a scam.

I find that to be better for society, but it definitely clouds my judgement on those kinds of texts. Also, I have absolutely dropped my previous bank because it became impossible to speak to an actual human, and I willingly pay more for a bank where my phone call goes directly to a human.

Do what one of the other commenters mentioned, make the AI an assistant for the human beings that help your customers, let the humans communicate with humans.


I think this is the case. Most of my family and friends use and like the various AI features that are popping up but aren't interested in thinking about how to coax what they want out of ChatGPT or Claude.

When it's integrated into a product people are more likely to use it. Lowering the barrier to entry so to speak.


It's not even remotely a minority. It was widely mocked everywhere when they first debuted it:

https://www.reddit.com/r/google/comments/1czcjze/how_is_ai_o...

Mainstream press have been covering how much people hate it - people's grandparents are getting annoyed by it. Worse, it comes on the heels of four years of Prabhakar Raghavan ruining Google Search for the sake of trying to pump ad revenue.

It's a steaming pile of dogshit that either provides useless information unrelated to what you searched for (just another thing to scroll past) or, even worse, provides completely wrong information half the time, which means even if the response seems to be what you asked for, it's still useless because you can't trust it.


> At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work

It's this part.

Salaries and benefits are expensive. A computer program doesn't need a salary, retirement benefits, or insurance; it doesn't call in sick, doesn't take vacations, works 24/7, etc.


No it's not. It's because management is tone deaf and out of touch. They'll latch onto literally anything put in front of them as a way out of their inability to iterate and innovate on their products.

Throwing "ai" into it is a simple addition, if it works, great, if it doesn't well the market just wasn't ready.

But if they have to actually talk to their users and solve their real problems that's a really hard pill to swallow, extremely hard to solve correctly, and basically impossible to sell to shareholders because you likely have to explain that your last 50 ideas and the tech debt they created are the problem that needs to be excised.


It also looks really appealing for doing tasks you have a very shallow and dismissive opinion of. For example, a lot of managers and C-level sorts seem to think it will replace developers. I think it would be great at summarizing and passing up reports and generating plausible-looking meaningless text, so to me it looks like it could do most management-type jobs.

But, I must try to have a little bit of self awareness here: if we all think it can do the jobs we don’t understand and don’t think it can do the job we’ve got experience in, then maybe that just indicates that it isn’t really very good at anything yet.


Today it is AI. Yesterday it was Blockchain.

Tomorrow it will be Agentic AI Blockchains.

I know what you are thinking: robots are coming for our jobs after that. Don't worry! Those AI robots will run on the Cloud.


But that's where my job is!

It's not totally true: computer programs have downtime (even major cloud platforms), and need maintenance to keep being useful and operational.

Human beings still need all those things whether they are employed or not though...

Who do they think will buy their products if there are no employees anywhere? Businesses, even business facing ones, eventually rely on consumers at some point to buy things. What can be gained by putting everyone out of work?

I remain skeptical of the prediction that AI will simply “take over jobs”. If AI becomes advanced enough to perform tasks traditionally done by skilled professionals, it would instead democratize access to capabilities that are currently gatekept by wealth and resources (since right now you can't have human employees without substantial capital, but you'd be able to have AI workers for cheap).

In this scenario, individuals without substantial capital could leverage AI to achieve outcomes that today require the resources and influence of wealthy founders. It might do the opposite of what CEOs seem to think: challenge existing power structures and create a more level playing field.


Isn’t this position predicated on the assumption that individuals without substantial capital “own” an AI?

When someone uses an AI they do not own, they are (maybe) receiving a benefit in exchange for improving that AI and associated intellectual property / competitive advantage of the person or entity that owns the AI—-and subsequently improving the final position of the AI’s owner.

The better an AI becomes, the more valuable it becomes, and the more likely that the owner of the AI would want to either restrict access to the AI and extract additional value from users (e.g. via paid subscription model) or leverage the AI to develop new or improve existing revenue streams—-even if doing so is to the detriment of AI users. After all… a sufficiently-trained “AGI” AI could (in theory) be capable of outsmarting anyone that uses it, know more about its users than its users consciously know about themselves, and could act faster than any human.

While I share in your hope, I think it is unfortunately far more likely that AIs will widen the gap between the haves and the have-nots and will evolve into some of the most financially and intellectually oppressive technology ever used by humans (willingly or not).


It's interesting how we can frame "potentially automating tasks" in the most sinister conceivable way. The same argument applies to essentially all technology, like a computer.

> The same argument applies to essentially all technology, like a computer.

Why yes, it does.

Even setting aside that most AI hype: Yes, automation is in fact quite sinister if you do not go out of your way to deal with the downsides. Putting people out of a job is bad, actually.

Yes. The industrial revolution was a great boon to humanity that drastically improved quality of living and wealth. It also created horrific torment nexuses like mechanical looms into which we sent small children to get maimed.

And we absolutely could've had the former without the latter; Child labour laws handily proved it was possible, and should have been implemented far sooner.


In addition, the Industrial Revolution led to societal upheaval which took more than a century to sort out, if you agree it's ever been sorted out at all.

So, if it is true we’re on the cusp of an AI Revolution, AGI, the Singularity, or anything like that, then there’s precedent to worry. It could destroy our lives and livelihoods on a timescale of decades, even if the whole world really would be overall improved in a century or two.


It's not really interesting, it's exactly what should be expected. We've seen how corporations act, and their history and our prior experiences go on to shape our perceptions and expectations accordingly.

There will be a crash as with any hype cycle

But this is normal. A new thing is discovered, the market runs lots of tests to discover where it works / doesn’t, there’s a crash, valid use cases are found / scaled and the market matures.

Y’all surely lived thru the mobile app hype cycle, where every startup was “uber for x”.

The amount of money being spent today pales in comparison to the long term money on even one use case that scales. It’s a good bet if you are a VC.


It's a weird cycle though where the order of everything is messed up.

The normal tech innovation model is: 1. User problem identified, 2. Technology advancement achieved, 3. Application built to solve problem

With AI, it's turned into: 1. Technology advancement achieved, 2. Applications haphazardly being built to do anything with Technology, 3. Frantic search for users who might want to use applications.

I don't know how the industry thinks they're going to make any money out of the new model.


> With AI, it's turned into: 1. Technology advancement achieved, 2. Applications haphazardly being built to do anything with Technology, 3. Frantic search for users who might want to use applications.

No, it’s always been 1) Utter the current password in your pitch deck to unlock investor dollars. Recently it’s “AI” and “LLM,” but previously it was “Blockchain,” “Big Data,” etc.


I mean the obvious solution is to just manufacture more problems.

I wish I could be certain that we're not doing that already.


We most certainly are, with this subscription era of software. No one had an issue with Office, Photoshop, or any other product that has no business needing monthly update cycles being "buy once, own until you wanna upgrade". Except executives who wanted to siphon more money out of their products.

In this case, I wouldn't attribute to incompetence what can be explained by malice.

Another view is that the Internet exactly matched this model, and that it's much closer to the norm than not: the Internet became available to normal people in the mid-to-late 90s, depending on where you were, but all that was on the web was dumb personal websites (1). In the late 90s bubble, startups seemed pretty crazy, slapping "...but on the Internet" onto basically any idea and raising money off of it (2), basically what's happening with "...but with AI" today. Most failed to find a market because too few people were using the Internet for everything and the frantic search for users failed (3).

But just as the conservative old-school business people were laughing and patting themselves on the back post-bubble over how stupid all the dotcoms were for thinking they could monetize eyeballs, Google emerges, and 20 years later tech companies drive the stock market rather than following it. Don't dismiss a technology just because the birthing spasms look ugly, it takes some time for markets to develop and for products to settle into niches. At the start a lot of that is due to people not being comfortable, tech sucking, and the market shifting too quickly to precisely target, but that can all change pretty fast.


I just wish the workers didn't have to suffer every time some CEOs decide to experiment with humanity and stakeholders overhype things.

Given the amounts being invested in AI, and the cost of just running an AI service (see how OpenAI loses money even on its $200 subscriber accounts), the "for funsies" services like switching an HTML form over to a chatbot are clearly not going to be a realistic return on this technology. I'd argue that even for code generation the tools can be useful for green-field prototyping, but the fact that a developer still needs to verify the output of a model means they will never yield more than marginal economies in that sector.

The outcome that the large companies are banking on is replacing workers; even employees with rather modest compensation end up costing a significant amount once you consider overhead and training. There is (mostly) no AI feature that Wall Street or investors care about except replacing labor - everything else just seems to exist as a form of marketing.


Ok, but how are companies supposed to replace workers when the tech doesn't have real use cases and is far from accurate?

One prescient comment was made by Eric Yuan (Zoom CEO), who made the claim that the reliability and general efficacy issues from AI products would be solved "down the stack", and I think this is more or less the attitude right now. Firms believe if they can build their LLM/AI products, the underlying LLMs/AI will catch up over time to meet their requirements.

I've also talked to a number of CTOs and CEOs who tell me that they're building their own AI products nominally to replace human workers, but they're not necessarily confident it will be successful in the foreseeable future. However, they want to be in a good place to capitalize on the success of AI if it does happen.


> At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work

That's certainly part of it.

However, at this point I think a lot of it is a kind of emotional sunk-cost. To stop now would require a lot of very wealthy and powerful people to admit they had personally made a very serious mistake.


It is also possible that we're just in the realm of pure speculation now - if you look at Tesla and Nvidia, both their valuations are completely imaginary, with the latter standing to benefit a lot by being a shovel seller (but not that much) and the former seeing an active decline in profitability while still watching the numbers go up.

It may be less that people are unaware of the speculative bubble but are just hoping to get in and out before it pops.


"How do I tell Copilot to go fuck itself?" - My mother during a recent tech support call.

Siri will shut up if you yell,

“Nobody asked you, Siri!” at it.

And that, kids, is how I met your mother.


Adding unwanted features, bolting on an AI assistant, changing to a subscription model, and even automating away employees can all be explained by the following iron rule: C-level leadership lives in abject terror of the numbers not going up anymore. Even if a product is perfect, and everyone who needs it owns it, and it needs no improvement, they must still find a way to make the numbers go up. Always. So, they'll grab hold of any trend which, in their panic, seems like it might be a possible life preserver.

Precisely. From day one this was about doing what industrialization did for manual labor for "white collar" work.

The end goal is a global panopticon followed by a culling of the herd. Read the inscriptions: https://www.georgiaencyclopedia.org/articles/history-archaeo...

Sometimes I worry that the internet has enabled ordinary but gullible people to find exactly the supporting evidence they need to be considered insane by society.

> the internet has enabled ordinary but gullible people to find exactly the supporting evidence they need to be considered insane by society

If by "be considered insane by society", you mean "find each other, attract new members, mobilize, and vote", I'd say you're spot on.

This was all a big mistake. To future generations, all I can say is that we meant well.


Billionaires destroy the working class: "progress".

Working class organizes a defense: "insanity".


The Freemasons which rule over you in the form of police and courts and the legislative branch built the Georgia Guidestones

I genuinely cannot tell if this is a joke, or a serious hypothesis :-/

The best conspiracy theories dance along this line

Billions of dollars are being spent on very useful AI.

You just notice the shitty ones, but people on HN think that's the norm for some reason.


Any useful AI I've seen isn't branded as "AI", it's just a product that doesn't mention how it works.

ChatGPT, Perplexity and Midjourney are all branded as "AI".

Is the "useful AI" technology any different from this slop? If not then I fear that's wasted money as well. Which I think is the reason this stuff keeps getting shoehorned in. They invested money in training and equipment all of which is depreciating far faster than it has returned value.

I strongly doubt this "dichotomy AI" theory.


That's demonstrably false unless you believe millions of people are spending their own money every month for useless tech.

I don't know how a thinking person can use this technology and not see the possibilities it opens up.


>unless you believe millions of people are spending their own money every month for useless tech

that's a hot take! A classic "Eat shit! A million flies can't be wrong.". really made me smile :)

BILLIONS of people are spending their own money on useless tech, simply because they fear missing out.

A thinking person can see it is generating text from the input query – which is useful, of course – but not dramatically useful.


It's dramatically useful for millions of people who are now much more productive than they were 3 years ago, including the programmers who have 10x'd their output.

Your superiority complex is nothing new... anytime new technology emerges, there's an old crotchety class that thinks it's a fad. It's always people arrogant enough to believe they know the world better than everybody else.

And no, billions of people aren't spending money on tech purely because of FOMO. That's just nonsense.


> unless you believe millions of people are spending their own money every month for useless tech.

Your premise is wrong. You assume these people are producing useful output with the money they spend, or would otherwise understand and be able to implement a more efficient means on their own. In the words of David Graeber, a lot of people have bullshit jobs; are you sure the "AI" isn't alleviating some other problem for them?

> and not see the possibilities it opens up.

The current technology has no natural exponential growth curve, which means for a linear increase in spending you get a linear increase in accuracy. Any thinking person should see where this is going. Which is why you should call these things LLMs, so you don't accidentally fool yourself.

I mean, of course, when AGI does arrive and has a reasonable power budget, then we're talking. The current technology will never become this or anything like it. This will almost certainly lead to a new "AI winter" before AGI happens, which will likewise almost certainly not occur during your or my lifetime.

If you do believe that then I have a self driving battery powered semi to sell you that's fully autonomous and will run road cargo trains for you all day and night for huge profits.


I don't know what the point of your rant was. Nobody is talking about AGI. Even if the technology never evolves again, it's still dramatically useful to the tune of billions and billions of dollars being spent on it.

You might've been… slightly… more convincing if you had any sources to back that absolutely wild claim up.

That billions of dollars is being spent on AI? That's a wild claim?

Marc Andreessen stated on Twitter that this was a core reason why he likes AI: to drive down wages (which in his words "must crash").

So you are not far off from that concept of putting vast numbers of employees out of work, when influential figures like Andreessen are openly stating that this is their ambition.


And Larry Ellison wants us all under the eye of AI cameras so that "citizens will be on their best behavior". I almost used the word "panopticon" there, but Ellison is proposing something strictly worse, in that there's no hope of the cameras not being watched.

> hopes that it will soon put vast numbers of employees out of work and allow companies to use the massive amounts of data they've collected about us against us more effectively.

They already fired so many developers and this feels more like a Hail Mary before maintenance costs and tech debt start catching up to you.


at work i’ve been tasked with “developing AI use cases” to add on to our product.

we still don’t know what problems to solve, but we’re gonna use AI to help us figure that out.

once we do, it’s gonna be huge. this AI stuff is going to change everything!!!


Maybe at some point giant companies like google realize that the only logical solution to the expansion problem is that they have to help space research to actually be able to expand more.

Jokes aside, investors behind google seem to not realize that google at this point is infrastructure and not an expandable product market anymore. What's left to expand to since the Google India Ad 2013? What? North Korea, China, Russia, maybe? And then everyone gets their payout?

Financial ecosystems and their bets rely on infinite expansion capabilities, which is not possible without space travel.


The faint ray of hope that someone will engage has been the essence of advertising since time immemorial.

It's similar to the question of why flies lay millions of eggs.


Hmmm. Is there a word I can use to stop seeing the word 'Subscribe' on every site?

It just takes a generation of people growing up with it until it really takes hold. People didn't use to ask Google questions, either, but now you're the outlier if you try using search terms instead.

There have been several products where I want to pay to disable certain features, especially AI features.

What choice does Google have? Google search is such a shit show now that I use ChatGPT to do any complicated web search. The paid version has had web search for over two years now.

Google couldn’t just keep ignoring it. I do wish it were an option instead of on by default - except for searches they can monetize


Could they... improve their search results?

how does chatgpt paid compare to perplexity.ai ?

I just used perplexity.ai. The interface is janky and it looks like it just does search.

ChatGPT has image generation, you can upload word docs, images and PDFs and it has a built in Python runtime that it uses to offload math problems to.


At my company we have a live service chat feature. Recently some of our customers have been requesting AI chatbot support (we've got fairly technical product offerings). I'm guessing they want to ask a bunch of stupid questions.

I'm surprised as well. Some people want it


the only reasons i can imagine that a customer would want to use an AI chatbot for support instead of chatting with a person is either because they don't currently have the option to chat with a person 24/7 at all (AI is better than no chat support), or their experience with human chat support has been terrible (long wait times, slow responses, unhelpful agents, annoying language barriers, responses so unnatural and overly scripted that they might as well be bots, etc).

There's nothing AI brings to the table that a competent human wouldn't, with the added benefit that you don't have to worry about AI making things up or not understanding you.

Or maybe they just want to try and convince the AI to give them things you wouldn't (https://arstechnica.com/tech-policy/2024/02/air-canada-must-...)


I agree, although "competent human" is not really the bar when we're talking about actually existing phone support. A while back I was trying to solve an issue with Verizon and calling three separate times I got three completely different approaches to solving my problem, all three of which were totally incorrect. (And the correct solution was literally just "go to this URL and fill out a form".) At least one of those people gave me advice that would have put me in legal hot water. It was rough.

I dunno. Sarah Connor explained it pretty nicely in the 90s

https://m.youtube.com/watch?v=tksN5Jaan9E

Kinda on the nose


Somewhere there’s a graph that went boop just enough to get someone a promotion, a raise, or a bonus.

Moved from android to iOS to get rid of the ceaseless ‘please use our assistant’ nagging.

I was going to buy a pixel 9 fold, but I literally have no idea why I should.

All the ad talked about was AI, nothing about specs, and barely a whisper of how it works, or even good demos of apps switching between open and closed.

Every phone has AI now, big deal. How about you tell me, Google, what is cool about the fold, instead of talking for 4 minutes about AI?!


I'll repeat my favorite quote about it (paraphrased and read it here first but don't recall the attribution): AI can copy a song, tell me a joke, predict what I buy, but I still have to do my own dishes.

If AI (or any tech) could clean, do dishes, or cook (which is not a chore for many, I acknowledge that) it could potentially bring families together and improve the quality of peoples lives.

Instead they are introducing it as a tool to replace jobs, think for us, and make us mistrust each other ("you sound like an AI bot!", "you just copied that from chatgpt!", "You didn't draw that!", "How do I know you're real?").

I don't know if they really thought through to an endgame, honestly. How much decimation can you inflict on the economy before the snake eats its own tail?


> If AI (or any tech) could clean, do dishes, or cook (which is not a chore for many, I acknowledge that) it could potentially bring families together and improve the quality of peoples lives.

One day they'll put those kinds of robots in people's homes, but I'll keep them out of mine because they'll be full of sensors, cameras, and microphones connected to the cloud and endlessly streaming everything about your family and your home to multiple third parties. It's hard enough dealing with cell phones and keeping "smart"/IoT crap from spying on us 24/7 and they don't walk around on their own to go snooping.

The sad thing about every technology now is that whatever benefits it might bring to our lives, it will also be working for someone else who wants to use it against us. Your new smart TV is gorgeous, but it watches everything you see and inserts ads while you're watching a Blu-ray. Your shiny car is self-driving, but you're tracked everywhere you go, there are cameras pointed at you recording and microphones listening the entire time, sending real-time data to police and your insurance company. Your fancy AR implant means you'll never forget someone's name since it automatically shows up next to their face when you see them, but now someone else gets to decide what you'll see and what you aren't allowed to see. I think I'll just keep washing my own dishes.


It's "I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."

Which is a stupid argument, since there is "any tech" that can do your laundry and dishes, and it's been around for decades! Is it too hard for you to put your dishes in the dishwasher, or your clothes in the washing machine?

And I say this as someone bearish on AI.


> Is it too hard for you to put your dishes in the dishwasher, or your clothes in the washing machine?

Is it "too hard"? No. Is it a substantial time sink, and one that (in the case of laundry, particularly) breaks up flow, so that it is inconvenient for someone who has to deal with $DAYJOB and those chores and wants to do art and writing (or other personal projects that take focus)? Yes.


2030 at the latest. LLM androids (robots).

> LLM androids

They'll make you the perfect pizza with cheese that doesn't slide off because of the glue.

But seriously, I predict inadvisably-applied LLMs are going to eventually end up somewhere between the mistakes of Juicero and the mistakes of leaded gasoline.


I would be fine with cheating and embedding magnets in the plates or having a special track that fits the plate to the sink. But I find it pathetic that this problem hasn't been solved yet. No one likes doing dishes.

Well, dishwashers are enough to remove lots of the work. And they can dry the dishes too.

So it's a lower bar, there's a partial solution.


Dishwashers and washing machines don't eliminate cleaning all dishes/kitchen stuff and clothes but, realistically, they cut down on them a lot.

Reminds me of 3d tv and AR glasses

> At this point I'm convinced that the endless AI hype and all the investment is purely due to hopes that it will soon put vast numbers of employees out of work

no. On the contrary. We will need people to clean the mess left by AI

> and allow companies to use the massive amounts of data they've collected about us against us more effectively.

yes.


I feel like a lot of devs and artists are dreading the idea of their entire job becoming nothing more than "debug/fix the mess an AI made". Going from designer/architect to QA/editor would kill a lot of the fun and satisfaction people get from their work.

AI has value the way self-checkout has value: it's anti-consumer and widely hated, but it (can) save the companies money and will therefore be too widespread for anyone to opt out.

Self-checkout has its uses and supporters. Introverts, the socially anxious, people in a hurry, people who'd much prefer to bag (or double/triple bag) their own items in ways that work best for them, people who want to get the organic tomato at the price of the non-organic one.

It's absolutely still a scheme by companies to get rid of employees and get customers to do work for them for free, and there are still issues with the systems not working very well, but we at least have the option (in almost all cases) to queue up at the one or two registers with an employee doing the work. When it comes to AI, we're often not being given any choice at all. Even if we can avoid using it, or somehow avoid seeing it, we will still be training it.


> people in a hurry

Huh, I always thought it was the opposite: If you're in a hurry, you go through traditional check-out. Nothing really matches the speed of an experienced and trained grocery store checkout clerk whizzing boxes past the scanner faster than you can load them into your cart. I think traditional checkout can blaze through 30 grocery items before I can even get three or four out of my cart, fumble around with them in front of the scanners, and then get chastised and stopped by the computer because I didn't place the item properly on the shelf next to the checkout machine.


There are 4-12 self-checkout kiosks and the loads there tend to be smaller. So the line moves. Speed of checkout once you reach the clerk in a traditional lane, hands-down faster than self.

Plus traditional is where you buy alcohol.


I've seen liquor stores that handle alcohol sales through self-checkout without friction. They don't check IDs 99% of the time, which I suspect could get them in some trouble, but instead the monitoring employee basically looks at you and makes a judgement call on your age and if you're clearly not a child they approve it before you're even done scanning.

What state or territory does that? Not a challenge, just curious, there are over 50 to track.

Polish Lidl works like that. You self-checkout alcohol and some store employee gets notified. They look at you and approve remotely. If you scan alcohol first, they will likely do that before you finish checkout and there is no waiting involved.

so far I've only seen it in the midwest - WI and IL

This lines up with my experience loitering in the soup aisle and stopwatching the grocery checkouts: https://bobbiechen.com/blog/2022/2/17/let-them-check-you-out

If I’m buying one thing then it’s much faster to self checkout than have to wait in line.

I agree with you for large purchases.


Do you remember "10 items or fewer" lines in grocery stores? I suspect self checkout is the reason they no longer exist. It was a fair trade.

Some variant of express lanes (may be >10 or <10 items) still exist in most of the grocery stores I frequent.

It's certainly a way to cut costs. It's also, for a few conveniently handled bar-coded items, generally faster and more convenient than waiting in line behind a person with a shopping cart full of items. (Yes, there are express lanes but they're often not that express.)

I like self checkout for medium to small shops (which is pretty much all I do since my local small supermarket is 100 metres down the road). Before they put in the self checkouts there was always a huge queue for the registers and I avoided shopping there; now I almost never have to queue and it's much faster to get in, pick up a dozen items and get out.

I dunno, I like AI. I don't use it often, but when I do I've found it useful and impressive. It's really improved quality of life when it comes to having something to read over my work or help with finding small bits of info. I also like self checkout because it reduced wait times at the store.

I think people are always resistant to change. People didn't like ATMs when they first came out either. I think it's improved things.


Getting cash used to be a royal PITA. Not that I need much cash these days but it used to mean going out at lunch during bank hours and waiting in line.

Self-checkout has come a long way IMO. I love not having to queue as long or speak to staff. The occasional "unknown item" still happens, but it's worth it. Even better are the ones in smaller shops that don't have a weighing sensor.

Stores seem to have dialed down some of the sensitivity. I don't know the last time I've run into a complaint because I used my own bag or something else that affects weight. One of my regular grocery stores doesn't have self-checkout. The other one has both and I'll wait for a cashier if I have a lot of items, especially produce. But if I have a handful of barcoded items I'll use self-checkout unless there is a cashier's lane with literally no wait.

you get the employee rebate at self-checkout

Yet, some analysts claim the fact that people nevertheless use these awful choices means they like them despite their frequent complaints.

They cite "revealed preference", which may apply when there is an actual choice.

But in the nearly winner-take-all dynamic of digital services, when the few oligopolistic market leaders present nearly identical choices, the actual usage pattern reveals only that a few bad choices are just barely worse than chucking the whole thing.


> people start looking for a way to get rid of it

I would bet money that the majority of users do not actually feel this way.


It's not strange. It's about power and control. Google and the other big names couldn't care less about user satisfaction: their customers are the ad buyers.

It's too bad because even 10 years ago Google and the internet in general were magical. You could find information on any topic, make connections and change your life. Now it is mostly sanitized, dumbed-down crap, and the discovery of anything special is hidden under mountains of SEO spam, now even AI-generated SEO spam that is transparently crap for any moderately intelligent user.

For a specific example, I like to watch wildlife videos, specifically ones that give insight into how animals think and process the world. This comparative psychology can help us better understand ourselves.

If you want to watch Macaque monkeys for example google/youtube feeds you almost exclusively SEO videos from a handful of locations in Cambodia. There are plenty of other videos out there but they are hidden by the mass produced videos out of Cambodia.

If I find an interesting video and my view history is off the same video is often undiscoverable again even with the exact same search terms.

Search terms are disregarded or substituted willy nilly by Google AI who thinks it knows better what I want than myself.

But the most egregious thing for me as a viewer of nature videos is the AI generated content. It is obviously CGI and often ridiculous or physically impossible. For example, let's say I want to see how a monkey interacts with a predatory python. I am allowed to watch that, right??? Or are all the Serengeti lion-hunting-gazelle videos to be banned in 2025? Lol. So I search "python attacks monkey" hoping to see a video in the natural setting. Instead I am greeted with maybe a handful of badly shot videos probably staged by humans and hundreds of CGI cartoons that are obviously not real. In one the monkey had a snake mouth! Lol. Who goes searching for real nature videos to see badly faked stuff?

Because I can not find anything on google or Youtube anymore without picking through a mountain of crap, I use them less now. This is for almost any kind of topic, not just nature videos.

Is that a win for advertisers? Less use? I don't think so.

In about 20 years of using the product the number of times a google or Youtube search has led to me actually purchasing a product or service DUE to an ad I saw, is I believe precisely zero.

Recently I have been seeing Temu (zero interest), disability fraud (how is this allowed), senior, and facebook ads. I am a non disabled, 30 something man. I saw an ad for burial insurance today.

Why is facebook paying to advertise "facebook" on youtube in 2025? Is this some ritual sacrifice to the god Mammon or something? Surely in 2025 everyone who would be interested in Facebook has heard of it. I have the Facebook app installed. Why the hell do facebook investors stand for facebook paying google to advertise facebook non-selectively on youtube. It's the stupidest thing I ever saw.

I have not watched any political content in years. And yet when I search for a wild life video I get mountains of videos about Trump and a handful of mass produced low quality wildlife content interspersed.

Today I was treated to an irrelevant ad about "jowl reduction."

I know many of you use ad blockers but this is how horrendous it is without them. You can't find what you want, even what you just saw, and you are treated to a deluge of irrelevant, obnoxious content and ads.

Clearly it is about social control, turning our minds to mush to better serve us even more terrible ad content.


Similar result, maybe not quite so illustrative, perhaps more colorful, just involving images not videos. Ended up at a similar conclusion.

Tried to search user interface design for an ongoing project, and found that Google now simply ignores filtering attempts... Try to find ideas about multi-color designs, and all you get are endless image-spam sites and Letterman-style Top 10 lists. Try to filter those out, and Google just ignores many attempts.

There's so many, that even those that actually do get successfully filtered out, only reveal the next layer of slime to dig through. Maybe the people that didn't pay enough for placement?

The huge majority, far and away, were the "Alamy", "Shutterstock", "_____Stock", etc. photo websites. There are so many it's not really practical to notch-filter anything involving images; you could spend all day just trying to filter "_____Stock" results to get to something real.

The worst though, was that even among sites that wrote something, there was almost nothing that was actually "user interfaces" or anything related to design, other than simplistic sites like "top 10 colors for your next design" that are easy to churn out.

Try to search on a different subject and filter for only recent results from 2024, get results from 2015, 2016. Difficult to tell if the subject had simply collapsed in the intervening 10 years (seemed unlikely) or if Google was completely ignoring the filters applied. The results did not substantially change. It's like existing in an echo chamber where you're shown what you're supposed to view. It all feels very 1984 lately.

Basically ended up at the same conclusion: their customers are the ad buyers. They don't get enough money from "normal" people to care.


One of the fun things about surveillance capitalism is that you can't correct errors in any of the millions of assumptions being made about you based on any number of tiny details collected about your life.

Sounds like somebody somewhere thinks that you're old, or that you know an old person. Maybe you live in an area with lots of old people. Maybe you've got aging parents. Maybe an old person had your IP before you did. Maybe just the fact that you're still using facebook is good enough to identify someone as being old the majority of the time.


Almost makes me think curated TV will come back, even if all streamed. Real humans with reputations, and ensuring no fakery.

And a vast decline in youtube.


That's every streaming service right now. Netflix hides all kinds of things they have on offer from you because they've already decided what they want you to watch and they'll push it at you over and over again while keeping other content hidden unless you explicitly search for it.

We have curated TV now, but just like before the people doing the curation aren't doing it based on what's good for you, the viewer. It's based on what will benefit their bottom line.

The things we try to resort to in order to figure out what to spend our time watching like review sites and social media are already gamed and astroturfed to death. Each new one that comes out gets less useful as time goes on because of it.

Good luck finding the real humans online among the countless AI generated curators PR firms churn out.


Correction: That some subset of the people you mostly meet online tries to get rid of.

You'd be surprised how many don't even realize it's artificial, and/or welcome it. The average Google user is most certainly not similar to the average Hacker News commenter.


If I start fucking adding swear words to all my fucking search queries, how the fuck will the stupid ass search engine know that I did not want it to use that shit as one of my keywords and give me back a whole lot of fucked up shit?

It's not like it doesn't freely ignore any unquoted word whenever it feels like it.

The quoted words search barely works these days anyway.

I haven't had quotations work for me in search in years now, it's really sad how boolean operators have stopped working too. I find it particularly difficult to search for "non latex" products as adding quotes on the total term no longer works and I just get products full of latex. Also I can't use boolean search to find the product because it just ignores the "-" in front of the word and gives a ton of results that match the search term latex.

Just an example where it isn't just making it harder to search for a profit motive but it's actually actively preventing (both Amazon and Google) from showing me the results or even ads for the product I actually want to buy.

If anyone has a good solution to this I would appreciate it, there is often a non-latex version of most all latex based products but finding them online is impossible if you don't already know the brand name!


You can put Google Search into Verbatim mode (via Search tools) to make it respect quotation marks.

TIL, thanks! The corresponding query parameter seems to be "tbs=li:1", if anybody wants to make this the default.
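For anyone who wants to script it, here's a minimal sketch of building a search URL with that parameter baked in. Note the `tbs=li:1` parameter is undocumented, so Google could drop or change it at any time:

```python
from urllib.parse import urlencode


def verbatim_search_url(query: str) -> str:
    """Build a Google search URL with Verbatim mode forced on
    via the (undocumented) tbs=li:1 query parameter."""
    params = {"q": query, "tbs": "li:1"}
    return "https://www.google.com/search?" + urlencode(params)


print(verbatim_search_url('"non latex" gloves'))
```

You could point a custom browser search keyword at a URL template like this so every query goes through Verbatim by default.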

Sadly that is broken too, I find. I think it's doing some aliasing still, and other things.

Are you talking about the markup language or kinky clothes?

Try using `-Lamport` to filter the latex results.

Maybe use the name of whatever the substitute is, like nitrile butadiene?

This bugs me so much. It happens constantly. A few days back I searched for a person's name, put it in quotes, and got results for a celebrity with a somewhat similar name. Zero hits on the person I searched for on the front page. I had to add specifics to the query, such as job title, to find them.

An easy way to see how a "search engine" has become "vague recommendation engine" is to take a distinctive phrase from one of its results, put it in quotes, and see if it manages to find that page again. Often, it doesn't.

That's an excellent verifiable test! I've been struggling to articulate the behavior too; advertisements and slanted search results have effectively gotten in between me and my information retrieval, steering me to products and services instead. It's so hard to find what I'm looking for at times that I often just give up and move on to something else.

Amazon Search is now nearly completely useless for any kind of targeted search. Heaven help you if you're looking for a product without a certain attribute most other products like it all have. There is quite literally no way to filter results against one attribute. Even if Amazon has that product, you won't be able to find it.

I eventually just scripted a separate search engine query that's site specific to Amazon. It works but not as well as it could because it doesn't have access to my purchase history or Amazon's hidden granular category taxonomy.
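The core of that kind of script is tiny. Here's a rough sketch (the choice of DuckDuckGo is arbitrary, and how well engines honor `site:` and negated quoted phrases varies):

```python
from urllib.parse import quote_plus


def amazon_site_search(query, exclude=()):
    """Build a general-search-engine URL restricted to amazon.com,
    with explicit negative phrase filters. A sketch only: engine
    choice and operator support are assumptions."""
    terms = [query, "site:amazon.com"]
    terms += [f'-"{term}"' for term in exclude]
    return "https://duckduckgo.com/?q=" + quote_plus(" ".join(terms))


print(amazon_site_search("LED bulb dimmable", exclude=["not dimmable"]))
```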


Oh my god I ran into this yesterday. I wanted a very specific kind of underarmour sweat pants. It gave me every other company competing with underarmour and a bunch of things that are not sweat pants. It’s like they’re not even trying to do “search” any more, but instead just feed your search string into their ad auction system and give you the results. There’s just no way to actually get a specific thing.

I've slammed headfirst into this wall of infuriating frustration dozens of times. Just trying to find a particular kind of LED bulb that has the feature of being dimmable. Any attempt at searching for that term returns all of the bulbs which helpfully mention "Not Dimmable". And there's no way to exclude that string.

It's maddening because Amazon used to have a modern, reasonably capable search function. You could require terms. You could exclude terms. Terms could be phrases. I'm sure they still have all these capabilities, they've just decided to intentionally disable them because their A/B testing indicated that breaking their search would return a fractional percent more revenue by shoveling more unrelated results in front of customers. It must work on someone but it's never worked even once on me, because I KNOW what I need and I'm only going to buy exactly that - if I can fucking find it.

I'd actually be okay to let Amazon annoy the NPCs who just clickety-click and buy whatever random shiny shit they shovel in front of them, IF they'd just add something for us technically-minded, engineering type people who are looking for one precise thing only. They can even hide it behind an arcane interface like REGEX. That'll keep the rabble out! :-)


It really feels as though Amazon's greatest fear is the idea that you might search for something and get no results. If we don't show you BS you explicitly said you aren't looking for, then we're giving up the opportunity of tricking you into buying it anyways.

A bit ago I was searching for toothpaste that doesn't have mint in it. This is already a pain at a brick retailer, but I figured Amazon's huge product variety would help. Turns out their search is actively malicious to negative terms because otherwise I could buy just the one thing and be done with my shopping.

I should probably set up a similar homebrew search to get around this. Purchase history is far less important to me because I don't buy much from Amazon.


I'm not sure if this is just amazon japan or what, but amazon japan will not only show you things that you didn't search for, it will actively rewrite your search query into a query for those things to gaslight you into thinking you typed it in wrong. And if you try to change the text back, it replaces it again!

On top of all the other insane choices they made, like removing your search category restrictions if it thinks your query was too precise. I'm close to snapping.


It's not hard to make a search engine that respects '+'

Unless your new upstart social service is called SearchEngine+ so you remove it

(Except duckduckgo also seems to semi-ignore it. I'm baffled. I give up. I'm throwing my computer out the window, and moving to the woods)


Hey, we had fixed a lot of syntax issues a while back, so happy to look into this if it is still messed up.

Amazon Search joins the chat...

intext: works

Google often ignores regular words too, mockingly striking them out. This almost feels like a 1st April joke, not how a search engine is supposed to work.

Google stopped being usable to search web many years ago. It can search stuff, sure, but not content on websites.

That stops if you append `&tbs=li:1`.

  > It's not like it doesn't freely ignore any quoted word whenever it feels like it.
FTFY

How will you ever find out why there are all these fracking snakes on this motherloving plane?

I like this version of the internet a lot better.

Fuck yeah!

You’ve heard of the Dark Web? Well this is the Snark Web.

Well I’m reasonably certain you don’t want advice from a search engine on the topic of fucking.

You may think you do, but I am certain you do not.


You just reminded me of this French singer that made an album called "mp3" because he wanted to make it harder to find his album on Napster.

Well at least you will get great travel suggestions to Austria....

https://en.wikipedia.org/wiki/Fugging,_Upper_Austria


What about the Pho King restaurants?

Don’t forget Sofa King.

To be fair, it's not like Google respects things like quotes or "-" so who says it won't just ignore your swear words?

I'm joking, somewhat, but can we seriously start getting mad about this shit?


Don’t worry, the next model will be trained on all your fucking queries.

big r/justfuckingnews vibes here

The downside of this approach is that it can affect the search results returned. But I found that if you add " -fuck" or " -fucking" to your search term, it disables the AI summary without significantly affecting your search results (unless you happen to be looking for content of a certain kind).

You will miss out on the category of strongly worded but helpful content.

You can probably find some other term that disables the AI but is unlikely to occur naturally in the articles you'd like to find, e.g.: "react swipeable image carousel -coprophilia".

Try it with *-tiananmen -square*

Good idea, you can make it even better (i.e. less accidental filtering) by quoting the phrase and adding a random addition, ex:

    -"tiananmen square 1902481358"
This way it won't interfere if you ever happen to actually want results that mention the place.
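If you wanted to automate the trick, it's a one-liner; the random suffix just guarantees the negated phrase can't match any real page. Whether it keeps suppressing the AI summary is entirely up to Google, of course:

```python
import random


def ai_suppressing_query(query: str) -> str:
    """Append a negated nonsense phrase to a search query, per the
    trick above. The random number makes the phrase unmatchable,
    so real results shouldn't be filtered out."""
    nonce = random.randint(10**9, 10**10 - 1)
    return f'{query} -"tiananmen square {nonce}"'
```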

Hmm, I'm not sure about my testing now, even with innocuous stuff the AI thing isn't back. Maybe something I did scared it off.


> it can affect the search results returned

Will it still work if "fuck" is part of a quoted phrase? If so, you could avoid it by constructing a phrase that contains the term but isn't going to match anything, ex: -"fuck 5823532165".


What if you take the George Carlin approach by inserting fuck in the middle of normal words?

If you're looking for that kind of content, you could remove the minus sign?

Well, yes. You'll probably find some very niche kink videos though, depending on your search

archive footage of the Queen of Fuc's husband?

I want to know if it invokes rule 34?

One tip I like to give for exploring public data is to do an early search for the word "fuck". It's a pretty ubiquitous word, but one that you assume shouldn't show up in certain fields, so seeing it, or not seeing it, can give useful insight into the scope of the data universe and collection. Including where/who the data comes from, whether or not any validation exists during the collection process, and how updates/corrections are done to collected data.

For example, you're required to provide accurate info about yourself when donating to a U.S. federal political campaign [0]. Is it possible that someone, somewhere in America is legally named John Fucksalot? Or works for a company named Fucks, Inc? Maybe! We're a huge country with wildly diverse cultural standards and senses of humor. But a John Fucksalot, CEO of Fucks Inc, who lives in Fuck City, Ohio 42069? Probably not, and the fact that this record exists says something about how readily the rules and laws regarding straw donors are enforced. And whether or not an enforcement action happened, what field in the FEC data indicates a revised record?

Seems like this tip can still be useful in the Age of LLMs. Not just for learning about the training data, but also how confident providers are in their models, and how many guardrails they've needed to tack on to prevent it from giving unwanted answers.
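The same probe is easy to run locally against a bulk export. A sketch (the file path and column layout are hypothetical; FEC bulk data is just an example of a CSV-ish dump you might scan):

```python
import csv


def probe_for_token(path, token="fuck"):
    """Scan a CSV export for rows containing a token that 'should
    never' appear in clean data. Returns (row_index, row) pairs,
    a quick smoke test for how much input validation exists."""
    hits = []
    with open(path, newline="", encoding="utf-8") as f:
        for i, row in enumerate(csv.reader(f)):
            if any(token in field.lower() for field in row):
                hits.append((i, row))
    return hits
```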

[0] https://www.fec.gov/data/receipts/individual-contributions/?...


There used to be a Fucking, Austria until they decided to self censor for the pesky English speakers. https://en.m.wikipedia.org/wiki/Fugging,_Upper_Austria

The rebranding is funny since people on TikTok also use "fugging" to evade censorship.

For me it’s associated with the Finnish spurdo meme.

disappointed that Fucking has self censored, bearing in mind their road safety signs used to suggest local residents were in on the joke https://imgur.com/5KOCwdC

They didn't self censor, they had to change the name because most tourists behaved like assholes. Stealing the road signs, parking anywhere, walking into peoples gardens, etc.

Also the road safety sign, while a funny combination, is pretty standard and can be found all over Austria.


I work with ML and I am bullish on AI in general; that said, I would pay between 5 and 10 USD for a feature or toggle called “No AI” on several services.

For myself I noticed 2 bad effects in my daily usage:

- Search: impossible to reach any original content in the first positions. Almost everything sounds like AIsh. The punctuation, the commas, the semicolon, the narro vocabulary, and the derivative nature of the recent internet pages.

- Discovery: (looking directly at you, Spotify and Instagram) here I would add to the “No AI” feature another one, “Forget the past…”, where you set the time range. I personally like to listen to some orthogonal genres seasonally. But once you listen to 2 songs in a very spontaneous manner, Spotify will recommend them for a long time. I listened to some math rock out of curiosity, and the “Discover Weekly” took 9 weeks to stop recommending it.


For search, use Kagi. You don’t have to use their AI products at all to just use search.

If Kagi made a cheaper "no AI" tier I would be happy to subscribe. AI is costly to run, so even if you don't use the AI it's priced into your subscription fee - you're paying for an expensive product you don't want or use.

e: according to Kagi's pricing page they do have a 'no AI' tier, but it limits your number of searches to 300/month. Seems like a totally arbitrary limitation, but it's still better than forced AI.


Yeah I'm on the 300 per month tier. I tend to hit that limit after about 3 weeks, which is certainly annoying.

yeah I hit it in about two. I just signed up monthly to try it out for a bit but am wavering on whether I consider it worth it. It's good but I'm not sure it's enough better than DDG to pay for.

And for AI I'd usually rather have API access and use it with tools rather than a web chat.


Left field tip if you think a search engine is hiding stuff: Yandex. They're not actually Russian any more, but they're far enough down the list of search engines that nobody bothers to DMCA them.

Maybe you have a Chinese search engine to suggest? North Korean should be even better

They are in fact Russian again: the Dutch company that owned Yandex sold it to Russian investors last July.

Strongly agree. Whenever I search something and am met with a sea of

TOP 10 X; THE 20 BEST Y; 20 REASONS Z; etc.

I go Kagi and am immediately refreshed.


And yet Kagi isn't anywhere near as good as Google was ~13 years ago.

Yes - a big part of that is that online content is so much worse than it used to be. But it became bad over time because of what Google incentivized.


Google ~13 years ago dealt with a much less complicated search landscape where

1. Its own products weren't (as much) part of the search result quality problem. Today Google has hungry product managers from Ads, Youtube, and various AI products convincing higher-ups that their product deserves higher placement. That placement used to be sacrosanct.

2. The daily volume of AI-generated garbage content 13 years ago was probably a rounding error in today's volume

So Google was operating in a different landscape than Kagi of today. They had to do a lot less to achieve the quality they had.

I disagree that Kagi "isn't anywhere near as good as Google was ~13 years ago". It's near. For me personally, it's better because I'm never served first-party ads.


It's also absolutely terrible for image search which has been absolutely poisoned by rampant proliferation of poor quality stable diffusion images - even on stock photo sites.

It got so bad that I had to add a "No AI" flag to my image search app which limits the date range to earlier than 2022. Not a great solution but works in a pinch.

https://github.com/scpedicini/truman-show


You can reset your algo on Spotify! I did and learned a lot. There were maybe 5 songs I wasn’t hearing that I liked, but tens of songs I did not like that I had saved years ago that came back up and were once again swiftly killed by the algorithm after a few instaskips

how did you reset the algo?

Can this many people really have missed the udm=14 trick for google? udm14.com will demonstrate for you ...

Imagine you are a completely non-technical friend of yours. They are very smart but they do not know a damn thing about configuring computers. They mostly just use their phone and/or tablet.

How much of "just append ?udm=14 to your search query" is absolute gibberish?

Is "install the udm14 plugin" going to make any more sense?

Is "go to udm14.com for all your searches" going to stick? Are there phishing sites at umd14.com, mdm41.com, uwu44.com, and all the other variants they'll probably misremember it as?

"just search for 'fucking whatever' and the AI crap goes away", on the other hand, is funny, uses a common dictionary word that everyone above the age of five knows how to spell, and is intensely memorable.


There's this thing called the internet that can help you find out how to change your default search engine. You might even get some AI suggestions to make it clearer.

Yes, the "fucking" trick is awesome too.


This kind of UX reminds me of the days when you’d hear radio ads saying “Just point your browser to HTTP-colon-backslash-backslash-WWW-period …”

Calling this UX seems disingenuous to me.

If you want something smooth and easy that uses the google engine, visit udm14.com

If you want to integrate the google engine more directly into your browser(s), understand how to use &udm=14

Two different UX's, each appropriate for a different audience.


for the search problem, I use Kagi. It's a breath of fresh air!

Better than Google in every single aspect, except shopping. The shopping results in Google are actually good.


For that amount of money I would also expect my search terms to never be fed into any kind of LLM either.

AI becoming some kind of window brick in a protection racket. Love it!

I respect your opinion but at the same time:

> I work with ML and I am bullish with AI in general; said that, I would pay between 5 to 10 USD a feature or toggle called “No AI” for several services.

Hard fuck this. I am not giving a company money to un-ruin their service. Just go to a competitor.

I get with a bunch of these hyperscaled businesses it's borderline impossible to entirely escape them, but do what you can. I was an Adobe subscriber for years, and them putting their AI garbage into every app (along with them all steadily getting shittier and shittier to use) finally made me jump ship. I couldn't be happier. Yeah there was pain, there was adjustment period, but we need to cut these fuckers off already. No more eternal subscription fees for mediocre software.

Office is next. This copilot shit is getting worse by the day.


Eh, my partner loves Copilot in Office. There are tasks where it's taking hours or days off the time. Especially web searching and extracting information.

Spotify used to have a "dislike" button for their Discover Weekly which helped with pruning music you don't like, but in keeping with the natural law of tech enshittification they removed that feature a month ago.

That was such a frustrating decision. I had almost convinced Spotify that that one time I listened to Lustmord was just a random mood, and I don't actually want to only listen to dronecore for the rest of my life.

I don't know those terms and now I'm afraid to search for them. The Cybernetic Bureaucracy Mind might label me a dissident with terrible taste in music.

find out without ever touching your spotify account, https://lustmord.bandcamp.com, only whoever you share your browser search history with will know

Now I'm slightly crestfallen that "dronecore" doesn't have any particular relationship to bagpipes.

I used to always hesitate to use that "dislike" button because I was worried that Spotify would not be able to distinguish between "I will always dislike this song" and "I don't want this song in this specific context"

I insta skipped any song that I liked but didn't want in X context, but disliked songs I didn't want, period, I don't know if it was the intended way, but it seemed to work for me

I can't tell if you misspelled narrow or if "narro" is somehow referring to "narrated" type content we now see so much of. Or even just weird narrative things (eg, recipes).

Last year we had to threaten to kill someone to get Gemini to properly format JSON. Computer science has gotten very weird.

Funny thing: when trying something similar on Deep Seek, the thinking part mentioned alerting the authorities while still being firm with me.

Well, unfortunately, modeling real life is like staring into the abyss.

You can also append `&udm=14` to the end of your search url and the AI summary will go away.

If someone hasn't already made a userscript to do this automatically, someone should, it would be very easy.
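Something like this would be the core of it: a sketch, assuming the udm=14 behavior described in this thread, meant to be wrapped for a userscript manager such as Tampermonkey or Violentmonkey:

```javascript
// Return the same Google search URL with udm=14 forced on.
function withUdm14(href) {
  const url = new URL(href);
  if (url.hostname.endsWith("google.com") &&
      url.pathname === "/search" &&
      !url.searchParams.has("udm")) {
    url.searchParams.set("udm", "14");
  }
  return url.toString();
}

// In the userscript itself (match pattern *://www.google.com/search*),
// redirect once if the parameter is missing:
// const fixed = withUdm14(location.href);
// if (fixed !== location.href) location.replace(fixed);
```

Leaving an existing udm value alone means you can still reach the other subcategories (images, news, etc.) without the script fighting you.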


It can also be added to the default search URL in your browser (usually they replace %s with the search query)

Thank you. Showing tidbits like this from HN to my kids has seemed to help guide them to be be more curious and creative in how they use the internet, instead of treating it like a magical black box.

Just use this https://www.google.com/search?q=%s&udm=14 as the default browser engine. See also https://udm14.com/.

I was late to this, but G's default search had been getting worse and worse. The trick is equivalent to clicking the "Web" tab when you do a default search. In 99.9% of cases the "Web" tab is what I need: it's pure, no noise. I do not mind clicking the "All" tab, e.g. for a tennis player's last name during the AO to get all the details I need. Actually, for sports events the default G functionality is insanely useful, such as live score updates.


The best way I've found to remove AI summaries from Google results is appending `&udm=14`.

I found that the "udm" trick disables all "rich content" in search results (ie: stuff that isn't a plain text search result link). But adding the following user rules in AdGuard for Safari does the job:

  google.com###m-x-content
  google.com###B2Jtyd


At least for now, you can just append "-ai" onto your query and the AI summary will not be there.

That’s handy. Looks like it doesn’t work if “ai” is also in your search, though. E.g., https://www.google.com/search?q=how+to+disable+ai+overview+i...

Yea, it's not perfect, but when I want to find facts (such as about the sizes of nails) and not possible "hallucinations" then it works out. Most of the time I just let the AI overview happen, and I ignore it, so that Google spends more money on a useless and possibly dangerous feature.

The eternal struggle with in-band signalling...

I have also been telling Gemini to "fuck off" in my gmail, but that doesn't seem to make it go away

I too want a way to make Gemini go away in gmail. I regret clicking the "Try it out" button.

Go to settings and disable Smart Features?

Unfortunately that also disables actually useful features like calendar integration.

Using a native email client or something other than gmail would do the job (protonmail, Fastmail, hell… iCloud mail)

Can’t remember the last time I used gmail in the browser. And gmail is only ever because startups use gsuite by default.

Google is only going to do more of this shit unless they start hurting from a drop in traffic.


So say Google accounts for this eventually, do people move on to using slurs? Does it become an arms race of a bunch of nerds trying to force Google to turn off its AI?

It seems counterintuitive that Google would deliberately make AI in Search difficult to avoid.

User satisfaction is a key driver for search engine development. If users are generally unhappy with the AI integration, that feedback would likely lead to changes aimed at improving the user experience.


Is this something they're slowly rolling out, or isn't available in the EU? I don't think I've ever seen an AI summary in my Google searches.

I don't understand what triggers the AI overview.

"per capita gdp by country" - no AI

"population of Kansas" - no AI

"lisp interpreter" - no AI

"emacs vs. vim" - AI overview

"size of mit student body" - AI overview

What's going on?


It seems you do not need to swear

I did an experiment; I am not sure how reliable it is, but so far it is working

<YourKeyword> -Gemini

for example: Object Oriented Programming -Gemini

I did a few searches and it is working so far.


> This is not the first time internet sleuths have discovered a way to disable Google’s AI-powered results. Other methods are more complicated, however, like adding a specific string of characters to the search results page URL. This method of swearing and pleading at Google to “just give me the fucking links” is much more cathartic.

"More complicated" is actually just as simple as going to https://tenbluelinks.org/ and following the instructions. It's so refreshing to just see links when you search, and it's unfortunate that the OP makes it out to be something that's prohibitively difficult to do.


You might have even accidentally used the "no AI" mode if you've clicked the Web option before.

I like to read spicy things online. I’ve found that Google, even with safe search disabled, hides much of these unless your search is itself obviously explicit. I append “sex” to search for these without affecting results much. Entire sites are hidden otherwise.

but along with the change you noticed, there was another change: if you have safe search off and add a spicy word to your search, you will be shown spicy things no matter what, including many things that have nothing to do with the other words you used to narrow down your search, because obviously you are interested in bewbz and god knows what else

Seems like what I do when I encounter an AI answering machine that can answer only simple questions... after "representative" fails, I start saying "Give me a fucking human"

Sony was way ahead of it's time here: https://www.youtube.com/watch?v=8AyVh1_vWYQ

It works but you don't get the results you want if you use it on travel queries. What you receive is a page of Reddit links on how to get laid in said city;<).


Unrelated, but I just found out that Kagi will produce (AI generated) "quick answers" if you end your query with a question mark.

AI is over-hyped to a surreal extent. If people aren't looking for help generating text, don't generate text for them. We don't need to stuff it into every possible UX available.

I neutralise the AI summaries by adding the following rules in AdGuard for Safari:

  google.com###m-x-content
  google.com###B2Jtyd
Add this to AdGuard Preferences -> Filters -> User Rules. It makes search results load faster too!

An indirect effect: in the near future we may see reviews and ratings generated and posted automatically by AI agents. We would get fooled by a 4.7 or 4.5 rating with interesting reviews, all auto-posted by AI agents. A smart tech nerd can tell the difference, but 99.99% of ordinary people can't, and will be misled in their decisions most of the time.

Some things will benefit from AI. But more things will get screwed up by it too.


Interestingly, my ISP (Pakistan Telecom) blocked the website because it contains the F* word

I definitely get fed up with AI assistants in the code IDE and have said "just shut the fuck up and answer my question". I could just prompt it to be concise, but this lets off more steam.

here are the other two options that I can think of - 1. Scroll past the AI generated summary 2. Use DuckDuckGo

I think in a few years we will be laughing at all those anti-trust cases against Google since web search business will no longer be relevant

My fear is that going forward, AI will be used for automated reviews and ratings. Imagine your next purchase at any online store or restaurant: you'll see 4.7 or 4.5 ratings with tons of good reviews, as good as real human reviews.

So we'll get fooled by AI in the future.


There is also an Adblock plus filter to stop the AI from ever showing up. Can't remember what it was at the moment, but it worked great.

Actually makes the results fairly interesting too. :)

I like the DDG Assist button. I think there are non-obtrusive ways to incorporate AI in search.

really sad that this is what it's coming to... why are we having to play these games to run a simple search?

Easiest way to deal with hospital staff spamming "this person needs an IOP".

If I had to do that, I would just use DDG or something. Or is there a userscript to hide the AI?

Wow it actually works! Unfortunately my searches are still plagued by Geeks4Geeks results.

I thought Geeks4Geeks was bad but now I'm craving respite from all the AI SEO slop that gets served instead

Seems that we need a ---AI. My simple theory is that the Google search engine product really ceased to exist a long time ago, and it is only an ad system now. Nothing else.

This is useful info, as Google’s AI summaries have frequently been inaccurate.

Example from yesterday - I was updating my mum’s kindle fire tablet after it had been in a drawer for three years. It was stepping up fire-os versions one at a time and taking hours. So, a quick web search - what’s the latest version of fire os 7?

Google AI confidently answers 7.3.2.9, so cool, it’ll be done soon then. Nope, it kept on going into 7.3.3.x updates.

If you’re going to be confidently wrong, probably best not to try.


Well . . . it certainly illustrates the diversity of the word.

Wow what a great technological society we've built

Progress never stops! ;)

Is there nothing that word can't do?

You can just disable it on the settings?

I like the idea - it seems temporary but fun.

Is this what big tech has come to?

Or don't use google at all.

It's a bloated service run by an advertising corp.



somebody should make a browser plugin which adds an expletive to each Google search :)

or to hide the AI content areas

The advice has made my day: I found an oldschool 20-year-old forum on the topic that I really appreciate, when I thought I had already read everything about it. Shame on you, Googlag, for pessimizing those old resources.

What amazes me about Google’s AI results is how often they are blatantly incorrect. I think most of us here on HN are above average in terms of critical thinking, but I often think to myself how many people are seeing these results, taking it as truth, and walking away that much dumber or worse - spreading this misinformation among their friends and colleagues. Google could not care less. They just want to defeat the threat from other LLMs.

remember to upvote the bad ones so it gets worse!

Real hackers add &udm=

What if we add Gulf of Mexico?

Citizen, please report to DOGE for reeducation at your plusearly convenience.

What if you're searching for fucking, but DO want the AI summaries?

“Frelling” also works.

Just use Kagi

Reading this headline made me think, there should be voting for most useful headline of the year.

or stop using google search. the idea that reliable search results are exclusive to google is a delusional lie that lazy people tell themselves, hoping everyone else will believe it too.

Searching google is getting so much worse, and my livelihood depends on it. Fuck me.

I waste so much time figuring out how to get a previously functional tool to stop shoving AI generated crap I don't want and never asked for in front of me. They either provide no way to turn it off or hide how to turn it off deep in settings surrounded by dark patterns.

I think it's because management at these companies has set AI usage metrics as a critical KPI. Thus teams are highly incentivized to just stick that shit in front of everyone and make sure it's hard or impossible to turn off. I actually think AI can be genuinely useful in the right context but this insane over-rotation to shoving it down our throats risks turning AI into this decade's Microsoft Bob - universally despised simply on general principle.


Not sure if it works for everyone, but write a quick addon that changes your browser's User-Agent header to something random, or find an open source one. Google seems to freak out and serve a slightly different version of itself. I discovered it accidentally; not sure how far it works though.
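For anyone curious, the moving part is just a rewrite of the User-Agent request header. A sketch of what the background script of such a WebExtension could look like, assuming the webRequest/webRequestBlocking permissions and a Firefox-style `browser` namespace (all the names here are illustrative, not an existing addon):

```javascript
// Produce a throwaway User-Agent string; the token name and version scheme are made up.
function randomUserAgent() {
  const build = Math.floor(Math.random() * 1e6);
  return `Mozilla/5.0 (compatible; NotYourBrowser/${build}.0)`;
}

// In the extension's background script, rewrite the header on Google requests:
// browser.webRequest.onBeforeSendHeaders.addListener(details => {
//   for (const h of details.requestHeaders) {
//     if (h.name.toLowerCase() === "user-agent") h.value = randomUserAgent();
//   }
//   return { requestHeaders: details.requestHeaders };
// }, { urls: ["*://*.google.com/*"] }, ["blocking", "requestHeaders"]);
```

Scoping the match pattern to google.com keeps the rest of your browsing fingerprint unchanged.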

I'll give that a try. I already have several userscripts in ViolentMonkey just to fix various annoying things about Google Search (like default re-enabling the Tools dropdown, adding several new date options to that dropdown and unhiding the search results-count (which is still there but now hidden by default)). And of course, YouTube is completely unusable without the Nova YouTube suite of modular userscripts.

Unshittifying my daily use Internet sites, browser (Firefox) and operating system (Windows) is becoming extremely annoying. It used to just require an occasional tweak here or there. Now some new enshittification or regression pops up almost every week. Most of it's just removing new things they keep adding or restoring useful features killed because they weren't driving this quarter's KPI du jour or as part of some designer's misguided quest to achieve the Zen-like simplicity of 'perfect emptiness'.


The technologically inclined will persist in pushing back against egregious changes in their daily routine.

The technologically disinclined or illiterate will continue to be oblivious, and simply use whatever is placed before them.

If this were not true, then app stores wouldn't have so much malware easily available to the masses. It wouldn't be profitable to release such things. It is profitable.

The masses will continue to amass stupidity as a miser amasses coin.


This is something straight from sarcastic and creative sci-fi books like The Hitchhiker's Guide to the Galaxy.

This is similar to something I've been thinking about, which is if people will develop ways of writing that signal somehow that they definitely are not AI. Like including certain "banned" phrases or whatever. Or logic contradictions that violate the laws of robotics or something.

Or maybe we'll finally get serious about web of trust crypto if we want to continue talking to humans.


Single moms in my area

Vs

Single moms in my area fucking

Yep, it works


"-fucking" also disables it, but with the opposite problem!

Both searches are weird, tbqh.

This is also cathartic.

or stop using google search

So much overreaction to this feature. You can turn it off, just look for it.

I think the AI summaries are super useful. 90% of the time it answers the question I have accurately and concisely, saving me time and effort.

The paragraph summaries give me a good overview of a topic, and the links to the original sources save me from scanning through tons of websites looking for details.

I never fully trust the AI summary, it's just a better way to look for information. I click on the reference link often to double check the source and the accuracy of the summary. I think I've only discovered a discrepancy once or twice.

Google isn't a utility, if you don't like Gemini, go use DDG or Bing.


There is another advantage, you also avoid ads

I just cannot imagine why people are still using Google for search at this point. Search results were bad before the AI summary nonsense.

Kagi has been a joy to use.


Lmao

Just stop fucking using google

Anyone still using Google search here on HN? Genuinely curious no /s

Why would I want to do that? I find the summaries very useful

I'd go out on a limb and guess that the advice is meant for people who don't find them useful.

I don't, for two reasons. First, for looking up facts, they're nowhere near 100% dependable. If I need to check sources, might as well start there. Second, if someone made the effort to put useful content on the internet, I can grace them with a click.


They are useful if you don't care whether the information is correct, don't care where it came from, and don't care that it was stolen from publishers who in many cases invested time and money with the expectation of receiving search traffic in exchange.

Each paragraph/bullet point typically has a little link icon beside it that directs to 2 or 3 sources. I wouldn't rely on it for anything important but it's OK if you just want a quick rundown on an unfamiliar topic and want to check out likely source material.

What is the plan for this over time?

I understand the appeal of not wanting to wade through 20 links to find information (especially given SEO stuff that is often on top) but why will people continue to publish as traffic decreases due to AI summaries?

I mean I understand the urgent need to keep up but the problem with "theft" is eventually it drives out honest production and everyone is worse off.


The summaries come with sources

In my limited experience, the summaries often include a lot of false information.

Alternatively, you can just disable AI overviews in your Google settings. But that does not make for a very interesting Gizmodo article.

There's no setting for that; Google is intentionally not giving you that choice. You can append &udm=14 to the search URL in browser settings, but that constrains search results to the "web" subcategory, so you also lose fairly useful features, such as the calculator, weather infoboxes, etc.

Seems like I’m mistaken about this, sorry everyone. I don’t have ai results in my google, don’t know why

Can you? I couldn’t figure out how to do that. Then again maybe it’s because I googled it and the AI result told me it’s not possible.

Does anyone know if there’s a uBlock origin filter for this?


I made an extension for it, as I couldn't find anything existing that also worked on mobile:

https://soitis.dev/ai-overview-hider-for-google

You could also just use the underlying CSS in something like Stylus, or convert it to uBlock filters:

https://github.com/insin/ai-overview-hider-for-google/blob/m...


Doesn't look like there's an official way to, most seem to involve hacking the search bar engines or installing an extension.

https://www.androidauthority.com/how-to-turn-off-ai-overview...


Same here…

> AI Overview > You can't completely disable Google's AI Overviews in your search results, but you can use workarounds to hide them.

The workarounds didn’t seem to work for me.


Used the block element chooser by right clicking on the overview. However, I suspect it will be temporary given the random key: blocked: http://www.google.com/##div.EjQTId.YNk70c:nth-of-type(1)


> Note: Turning off “AI Overviews and more” in Search Labs will not disable all AI Overviews in Search. AI Overviews are part of Google Search like other features, such as knowledge panels, and can’t be turned off

That doesn’t work for me. The option was already toggled off.

You should probably apologize or delete.


