Google’s having another shot at solving one of the biggest problems of the modern age — the tiny pain in the arse that is the Captcha human verification text check system.
You know, the one that makes you try to identify what a toddler has been scrawling on the walls with its own poo before being allowed to carry on conducting your important internet business. That thing.
The big new idea is to replace wonky letter examination skills with a simple question: are you human? The clever stuff is all done prior to that click, though, with Google suggesting a mysterious combination of mouse speed, click accuracy, computer stats and IP details is used to work out if you’re a person or a data-harvesting software routine before you click.
Only then, if you fail, is there a back-up test involving clicking on photographs of cats.
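The kind of pre-click risk analysis described above could, in spirit, look something like the toy sketch below. Every signal name, threshold and function here is invented for illustration; Google’s actual scoring system is proprietary and far more sophisticated.

```python
# Hypothetical sketch of pre-click "are you human?" scoring.
# All signal names and thresholds are invented for illustration;
# Google's real risk analysis is proprietary.

def looks_human(mouse_speed_px_s, click_offset_px, ip_reputation, cookie_age_days):
    """Score a visitor on a few behavioural signals; higher = more human-like."""
    score = 0
    # Bots tend to move the pointer impossibly fast, or not at all.
    if 50 < mouse_speed_px_s < 3000:
        score += 1
    # Humans rarely click the exact geometric centre of a checkbox.
    if click_offset_px > 0:
        score += 1
    # A reputable IP address and an established browsing history both help.
    if ip_reputation > 0.5:
        score += 1
    if cookie_age_days > 7:
        score += 1
    # Pass the checkbox without a follow-up puzzle if enough signals agree.
    return score >= 3

print(looks_human(420.0, 3.2, 0.9, 30))   # plausible person -> True
print(looks_human(99999.0, 0.0, 0.1, 0))  # scripted clicker -> False
```

Fail this sort of check and, as above, you fall through to the back-up image test.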
The problem is, if we really are all just software people living in a simulated universe, won’t we all be picked out as robots? Introducing this could bring about the realisation that everything we believe in is a lie.
On Wired, the argument quickly turned into a debate about the privacy implications of the new Captcha system, with reader Symplectic pointing out there’s a lot more at stake than just knowing if we’re made of meat or silicon, saying: "Google is using your browsing history and your mouse pattern to identify you as human. It’s not the fact you’re human that privacy-conscious users won’t like being reminded of — it’s the fact that Google knows what websites you browse and which images you hovered over without clicking."
Commenter Velocipedes thinks there’s not much more Google needs to know about us, quipping: "If you’re using any of Google’s services, you’re voluntarily providing that information."
Grover Nilkvist thinks the effort should be being made by the robots anyway, asking: "Why are WE always the ones to have to make the effort? Why can’t THEY just click the ‘I’m a robot’ checkbox? Anyone who thinks we’re going to get the same preferential treatment when they’re in charge is just delusional. Wake up people."
Readers on The Verge turned their attention to the cat photos, wondering whether the deliberate vagueness of the images might be exploited by Google to fine-tune its own photo recognition and categorisation tools.
Commenter Outerwave claims the cat-matching game is deliberately vague, posting: "The message being somewhat ambiguous is part of the system. These captchas also improve Google’s search. What is considered ‘similar’ to the original picture is probably different from person to person. But after 100 people take the test, Google (or whoever) would have an improved idea of what ‘similar’ meant to most people by comparing what was selected most of the time."
Which made some clipart of a lightbulb appear over the head of reader Miku, who replied: "Ah! This is not really about making Captcha better, this is about harvesting Captcha to improve their image search results. That makes sense."
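The crowd-labelling scheme Outerwave describes, where what most people select becomes the working definition of “similar”, boils down to simple majority voting. A toy sketch, with all image IDs and the agreement threshold invented for illustration:

```python
# Toy version of the crowd-labelling Outerwave describes: each user picks
# which candidate images look 'similar' to a reference cat photo, and the
# most-selected candidates become the consensus labels.
from collections import Counter

def consensus_similar(selections, min_agreement=0.5):
    """selections: one set of chosen image IDs per user.
    Returns the IDs picked by at least min_agreement of users."""
    votes = Counter(img for chosen in selections for img in chosen)
    n_users = len(selections)
    return {img for img, count in votes.items() if count / n_users >= min_agreement}

# Four hypothetical test-takers and their picks:
users = [
    {"cat1", "cat2"},
    {"cat1", "dog1"},
    {"cat1", "cat2"},
    {"cat2"},
]
print(sorted(consensus_similar(users)))  # -> ['cat1', 'cat2']
```

After enough people take the test, the outliers (poor `dog1`) fall away and the aggregate picks tell Google what most humans mean by “similar”.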
So Google’s not really interested in reworking the Captcha system; it’s simply come up with a front for harvesting our spare seconds to process its photos. Google’s turned the whole world into an enormous complimentary data processing farm.
The Register reader Irongut has already had his feelings hurt by rogue AI, claiming to be a constant failure at existing Captcha technology, and therefore life. He posts: "I usually find I have to ask for a different image at least 3 times because I can’t make sense of the first few. Even when I get an image I think I can read I’m usually wrong. Generally a Captcha will take me about 5 minutes to get an answer the site will accept, assuming I can be arsed to keep trying."
Commenter I Ain’t Spartacus was the first to pick the low-hanging fruit here, replying with: "If it takes you 3 tries, and ‘today’s artificial intelligence technology can solve even the most difficult variant of distorted text at 99.8 per cent accuracy’ – then does that mean you’re actually not a human?!"
He continued: "I’ve never been able to get my Captcha ratio much above 1 in 3 either. So does that mean I’m some sub-standard part of the Matrix?"
Performing worse than a bot is indeed something to be quite ashamed of, although the bot at least has the advantage of being programmed to do only the one task and isn’t managing 20 tabs of streaming video in the background.
There Is An Advert That Never Goes Out
Commenters beneath TechCrunch‘s story about the new crowd-computing system Google’s come up with were mostly of the serious type, debating how people with disabilities could cope, especially if they have some sort of software helper.
Stephen Hawking, for example, would be locked out of Ticketmaster’s shopping basket were he to order his computer assistant to attempt to buy tickets to see Morrissey at the O2.
The ethics of Google using people power to sort cat photos was also called into question, with reader Wesley Joseph asking: "How is it ethical to ask people to type a word or click on a picture for ‘security’ or ‘non-robot verification’ purposes and then use that information for corporate financial gain? If I’m doing something that makes them money and they are lying to me about what I am doing, that’s wrong."
Derek Williamson thinks that’s a rather hypocritical opinion to hold, though, saying: "Digitizing books and identifying road numbers for Maps/Streetview are both fairly useful to society as a whole as well, though. So you were okay with it when it was completely useless, but it’s wrong if it’s actually somewhat useful?"
And Sebastian Vidal further heaped shame upon Wesley, with his bitter: "Newsflash: Google uses everything for corporate financial gain – it’s kind of their thing."