
'Yes, I am a Human': Bot Detection Is No Longer Working

The rise of AI has rendered traditional CAPTCHA tests increasingly ineffective, as bots can now "[solve] these puzzles in milliseconds using artificial intelligence (AI)," reports The Conversation. "How ironic. The tools designed to prove we're human are now obstructing us more than the machines they're supposed to be keeping at bay." The report warns that the imminent arrival of AI agents -- software programs designed to autonomously interact with websites on our behalf -- will further complicate matters. From the report: Developers are continually coming up with new ways to verify humans. Some systems, like Google's ReCaptcha v3 (introduced in 2018), don't ask you to solve puzzles anymore. Instead, they watch how you interact with a website. Do you move your cursor naturally? Do you type like a person? Humans have subtle, imperfect behaviors that bots still struggle to mimic. Not everyone likes ReCaptcha v3 because it raises privacy issues -- plus the web company needs to assess user scores to determine who is a bot, and the bots can beat the system anyway. There are alternatives that use similar logic, such as "slider" puzzles that ask users to move jigsaw pieces around, but these too can be overcome.

Some websites are now turning to biometrics to verify humans, such as fingerprint scans or voice recognition, while face ID is also a possibility. Biometrics are harder for bots to fake, but they come with their own problems -- privacy concerns, expensive tech and limited access for some users, say because they can't afford the relevant smartphone or can't speak because of a disability. The imminent arrival of AI agents will add another layer of complexity. It will mean we increasingly want bots to visit sites and do things on our behalf, so web companies will need to start distinguishing between "good" bots and "bad" bots. This area still needs a lot more consideration, but digital authentication certificates are proposed as one possible solution.

In sum, Captcha is no longer the simple, reliable tool it once was. AI has forced us to rethink how we verify people online, and it's only going to get more challenging as these systems get smarter. Whatever becomes the next technological standard, it's going to have to be easy to use for humans, but one step ahead of the bad actors. So the next time you find yourself clicking on blurry traffic lights and getting infuriated, remember you're part of a bigger fight. The future of proving humanity is still being written, and the bots won't be giving up any time soon.
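The score-based flow that reCaptcha v3 uses, as described above, has a server-side half: the page obtains a token from Google, sends it to your backend, and the backend forwards it to Google's siteverify endpoint, which returns a score between 0.0 (likely bot) and 1.0 (likely human). A minimal sketch of that check, assuming the standard siteverify endpoint; the 0.5 cutoff and function names are illustrative choices, not part of the article:

```python
import json
import urllib.parse
import urllib.request

VERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def is_likely_human(score: float, threshold: float = 0.5) -> bool:
    """Pure policy decision: treat anything at or above `threshold` as human."""
    return score >= threshold

def verify_token(secret: str, token: str, threshold: float = 0.5) -> bool:
    """POST the client-side token to the siteverify endpoint and accept the
    request only if verification succeeded and the score clears `threshold`."""
    data = urllib.parse.urlencode({"secret": secret, "response": token}).encode()
    with urllib.request.urlopen(VERIFY_URL, data=data) as resp:
        result = json.load(resp)
    return result.get("success", False) and is_likely_human(result.get("score", 0.0), threshold)
```

Where to set the threshold is a per-site tradeoff: lower values admit more borderline traffic, higher values push more real users into fallback challenges.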


Comments Filter:
  • Sounds like we're headed that way.

  • by GrahamJ ( 241784 ) on Friday December 20, 2024 @08:51PM (#65029827)

    The idea of models browsing the web for us is hilarious. What we need is more effort to make online services interoperable via API, not LLMs pretending to be humans operating browsers. That's just the most complicated Rube Goldberg machine ever.

    • The challenge of APIs is that somebody has to build and maintain them. Many sites have no API at all, and many others have an API that is outdated and buggy. Most sites put their best effort into making their sites usable by humans, they don't really care about API developers. Why should they? APIs don't serve ads, and therefore don't bring in income.

      So while LLMs browsing the web might be inefficient, it allows people to use automation on websites that don't bother to offer an API, and to do so without requiring writing client code.

      • The challenge of APIs is that somebody has to build and maintain them. Many sites have no API at all, and many others have an API that is outdated and buggy. Most sites put their best effort into making their sites usable by humans, they don't really care about API developers. Why should they? APIs don't serve ads, and therefore don't bring in income.

        So while LLMs browsing the web might be inefficient, it allows people to use automation on websites that don't bother to offer an API, and to do so without requiring writing client code.

        There are different kinds of efficiency. The kind you would like to eliminate, is excess human-oriented baggage. But doing so creates another kind of inefficiency: the requirement that programmers write software to browse the web.

        There's a legit security concern with bots and APIs and 3rd party programs. Many businesses rely on good faith of people not exploiting them in order to offer a product or a service. For example, a loyalty rewards program that gives you a free cup of coffee for signing up...well...there's a famous case where a company did something similar and assholes on the web wrote bots to exploit them and make them cancel the program. Another example is limited editions. A company wants to make a special limited ed

      • Some don't implement APIs anyway and for good reason. Retail sites have to advertise their prices to customers but they don't want competitors getting that data so easily. Obviously retail sites are not going to expose a pricing api publicly but competitors will scrape their site for them.
    • Except maybe when they're first getting started and they need users. As soon as they get established they're going to yank those tools out from under you so that they can monopolize the data being generated by their users.

      Remember you are the product.
  • The real solution is probably not to attempt to figure out whether a client is controlled by a human at all, but instead to rely on domain-specific behavior patterns. In the end, what service providers really want to achieve is, not to block automated clients, but to block unwanted behaviors.
    • by mysidia ( 191772 )

      providers really want to achieve is, not to block automated clients, but to block unwanted behaviors.

      Yes, exactly. That unwanted behavior is usually: signing up for or accessing many accounts to conduct large-scale automated attacks or exploits of some kind, when it is intended that a person have only one account.

      This is not the same thing as a person granting an automated agent permission to conduct legitimate tasks on behalf of that one person at

      • There are plenty of alternatives to CAPTCHAs, such as requiring the user to provide a phone number, then sending them an SMS text message; the user proves control of a US phone number while granting permission.
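The SMS flow the comment above describes boils down to issuing a short one-time code, keeping only its hash server-side, and checking submissions in constant time. A minimal sketch, assuming a 5-minute validity window and single-use codes (both are design choices, not from the comment); delivery to the SMS gateway is out of scope:

```python
import hashlib
import hmac
import secrets
import time

CODE_TTL = 300  # seconds a code stays valid

def issue_code(store: dict, phone: str) -> str:
    """Generate a 6-digit one-time code, store only its hash plus an
    expiry, and return the plaintext code for delivery via SMS."""
    code = f"{secrets.randbelow(1_000_000):06d}"
    store[phone] = (hashlib.sha256(code.encode()).hexdigest(), time.time() + CODE_TTL)
    return code  # hand this to the SMS provider; never log it

def check_code(store: dict, phone: str, submitted: str) -> bool:
    """Single-use check: the stored entry is consumed on any attempt,
    and the comparison is constant-time to avoid timing leaks."""
    entry = store.pop(phone, None)
    if entry is None:
        return False
    digest, expires = entry
    if time.time() > expires:
        return False
    return hmac.compare_digest(digest, hashlib.sha256(submitted.encode()).hexdigest())
```

Note this only raises the cost per account (a phone number each); it does not prove the client is human, which is exactly the point the parent comment makes about targeting behaviors rather than humanity.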

    • by PPH ( 736903 )

      but to block unwanted behaviors.

      Like posting opinions that don't fit the approved narrative?

      But don't worry. Musk will buy your social network board.

    • In the end, what service providers really want to achieve is, not to block automated clients, but to block unwanted behaviors.

      Is that true? Bots are not eyeballs for ad revenue. A good personal automated client (good from my perspective) would report back from the site with what I am interested in, not ad content. That makes the automated client an adblocker.

      • Why would AI bots be immune to ads? Maybe the ads could train the bots too, and not just we humans?

        • Just what we need - our bots to become sleazy hallucinating salesmen, offering us deals that do not even exist.

          • Just what we need - our bots to become sleazy hallucinating salesmen, offering us deals that do not even exist.

            I would LOVE for that to happen. It would totally fuck the Web for corporate commerce. Maybe then it could go back to being a service for people instead of an ad platform for corporate leeches vacuuming the money from people's pockets while propagandizing them into oblivion and damaging their psyches.

        • Why would AI bots be immune to ads? Maybe the ads could train the bots too, and not just we humans?

          Can you imagine it? The bots get trained on "Bratty step sister sucks cock for the car keys!" Now polluting the LLM database using something like Pornhub or xhamster would be hilarious. Then add in extremely bad grammar and screwed up pronouns from non-English posters and you'll really have something.

  • Who finds it entertaining that the "bad actors" are creating bots that are doing a better job of training google's self-driving object recognition than humans? Shit, just skip the middleman already. Either that or they're creating enough noise that now the "captcha as ai training" system has turned counterproductive.

  • Do what Musk proposed for X. Charge $5/yr or whatever. Nominal costs to break bot scaling. Payment systems can at least limit the number of accounts that can be economically created.

  • to solve CAPTCHAs for me, because I couldn't solve them!

    • to solve CAPTCHAs for me, because I couldn't solve them!

      Yeah. I love when I get the all too common request to pick motorcycles, bicycles, traffic lights, or whatever. It chops the photo into a grid of parts, but does a shitty job of it. So now do I click only on the boxes that clearly contain a traffic light or motorcycle or whatever? Or do I also need to click the ones where part of the traffic light extends over the line? I usually wind up having to go through three or four attempts because of this.

      Of course, AI is going to excel at this, and frankly it has for a lo

    • There are browser extensions that claim to do just this. But I haven't had any luck with them so far.

  • I've had some systems block me for solving captchas too much like a machine. Not sure if that's speed, accuracy, or some combination thereof.

    And it's a really stupid filter, because it's easy to slow down a captcha-solving AI or add randomness to mouse motion.

  • so web companies will need to start distinguishing between "good" bots and "bad" bots.

    The ones that scrape your data for free and use it to make us money are good.
    The ones that have narcissistic robber barons drone on with deceit and lies about how sharing is bad and greed is great are good.
    Every other one is bad, shall be banned, and you'll need fingerprints to access our AI.

  • With every captcha we are helping train AI to be more human.

    • by xevioso ( 598654 )

      This comment appears in the bottom of my feed, and yet it is the most prescient thing said here.

  • It's already quite apparent which ideas are stupid and which actually work to make a functioning society of civilized equals. We are just failing to, you know, *do* it, cuz we don't want to, I guess. evolve or die
  • by madbrain ( 11432 ) on Saturday December 21, 2024 @01:17AM (#65030215) Homepage Journal

    They are all a huge problem for the vision impaired. On one site, I made over 100 attempts, with different images. I couldn't solve one of them. There was no audio option. No email to contact the site administrator. Any contact required signing up first. I tried to guess the admin email. Sent over a dozen emails. They all bounced. I was unable to join the site as a result. This was a few months ago. More recently, I tried again, and solved it on the first try.

    I now always click on the audio option if there is one. Oftentimes, it doesn't work in Firefox with ad blockers. The sites tell me there is suspicious activity. Fortunately, my hearing loss is only in one ear, and minor, and so far I have always succeeded on the first try with voice challenges.

    You can however have both visual and hearing impairments. A machine is always going to do better and faster in those cases.

    My main browsing machine is a desktop which lacks any kind of biometrics. It does however have a webcam. Maybe some kind of real-time video challenge would work, with the site asking the user to do a particular gesture. The AI can fake video too, but perhaps not in real-time - yet.
    And there are obviously huge privacy concerns with that.

  • There is a solution, but most of us will not accept that.

    DRM. Yes, that DRM.

    A browser that is fully secure end to end, including mouse and keyboard hardware attested by a trusted notary, with all connections going through your client SSL certificate... would solve the "human detection" problem. (At least until they make physical robots capable of typing like a human.)

    And of course this idea is bollocks, and will never pass, except high security systems, like confidential work or government. And
