Wednesday 20 April 2011

Why you are not dead from the robot-induced nuclear apocalypse (or why CAPTCHAs still work)

If you are reading this then you are not dead. That is generally a good thing. Now, you may ask yourself why you should be dead. Well, according to the popular Terminator series of movies, the 18th of April was the day when we all bite the big one. Unless you happen to be John Connor.

The original premise of the movies was that the Artificial Intelligence (AI) known as "Skynet" became self-aware and decided that it reaaaaallly hated humans. So, it decided to get rid of them the best way it knew how: it nuked the living daylights out of EVERYTHING (barring John Connor and the other lucky guys) and then some. And then we have time travel and fights and craziness spread over 4 movies and a TV series. Why, you may ask, do I care about this? Well, apart from being a massive geek?

There is a mild connection between Skynet and computer security. What Skynet represents is a sentient AI, which is basically a computer that can think for itself. Now, having personally worked with an AI (for my undergraduate thesis), I look at all movies/TV shows that feature a real, full AI with a great amount of scepticism. I know I am not an expert by any metric, but it took me close to 2 months to write an AI agent that learns to play blackjack. Nothing fancy at all, it just plays a basic strategy. The upshot: it's really, really hard.

I know that the field has made some massive leaps, such as Watson and Deep Blue, but that's not quite the same. These are supervised AI agents that can do just one thing and had to be fed a ton of data beforehand. Watson, for example, has the whole of Wikipedia stored, which in my book is cheating just a little bit. Although these systems are very impressive, they are still far away from full self-awareness and sentience.

The only way that could happen is if we had an unsupervised agent, that is, an AI agent that is given only knowledge of the problem and needs to learn how to solve it. For example, you could tell an agent what a maze is, then put it in a maze and tell it to find its way out. It will (eventually) learn how to do that. Then you give it another maze, and it will learn that one, and so on and so forth. And then you have to make it able to learn new tasks. But once you hammer out those little details, you have the computer that will end the world.
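To make the maze idea concrete, here is a toy sketch of that kind of trial-and-error learning, using tabular Q-learning on a made-up 4x4 grid (the maze layout, rewards, and all the names here are my own invented example, not anything from a real system):

```python
import random

# A toy 4x4 grid maze: the agent starts at (0, 0) and must reach (3, 3).
# Hypothetical layout -- walls are just blocked cells.
WALLS = {(1, 1), (2, 1), (1, 3)}
GOAL = (3, 3)
SIZE = 4
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]  # right, left, down, up

def step(state, action):
    """Apply an action; bumping a wall or the edge leaves the agent in place."""
    nxt = (state[0] + action[0], state[1] + action[1])
    if nxt in WALLS or not (0 <= nxt[0] < SIZE and 0 <= nxt[1] < SIZE):
        nxt = state
    reward = 1.0 if nxt == GOAL else -0.01  # small cost per move
    return nxt, reward

def train(episodes=2000, alpha=0.5, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning: learn action values purely from trial and error."""
    q = {}  # (state, action) -> estimated long-term value
    for _ in range(episodes):
        state = (0, 0)
        while state != GOAL:
            if random.random() < epsilon:          # explore a random move
                action = random.choice(ACTIONS)
            else:                                  # exploit the best known move
                action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
            nxt, reward = step(state, action)
            best_next = max(q.get((nxt, a), 0.0) for a in ACTIONS)
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
    return q

def greedy_path(q, limit=50):
    """Follow the learned policy from the start; return the visited states."""
    state, path = (0, 0), [(0, 0)]
    while state != GOAL and len(path) < limit:
        action = max(ACTIONS, key=lambda a: q.get((state, a), 0.0))
        state, _ = step(state, action)
        path.append(state)
    return path

if __name__ == "__main__":
    print(greedy_path(train()))
```

The point is that nobody tells the agent the route; it only gets told what a legal move is and whether it has reached the exit, and after enough stumbling around it finds its own way. Hand it a different maze and the same code learns that one too, which is exactly the "learn the problem, not the answer" property I mean.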

So, to the main point: AI is interesting from a computer security point of view for several reasons, but the main one is CAPTCHAs. Now, it's a little bit of an abuse of the term, but for simplicity I use CAPTCHA to mean all such systems and their implementations. The general idea is that you are shown a picture of somewhat distorted letters and you have to type the letters in to prove you are a human.

The reason for this is that automated programs, such as sign-up scripts and spambots, were used to make THOUSANDS of false email accounts for the purpose of spreading spam, viruses, boredom, maybe marketing, but generally evil stuff. If you can prove you are human, then all will be hunky-dory and you can sign up. A computer will almost always fail these tests. AIs can try to learn to read the distorted letters, for example, using neural networks, and some of these algorithms have had reasonable success. This is exactly why you sometimes get a CAPTCHA that is completely unintelligible: to protect against smart AIs, with the side effect of annoying the humans, which, ironically, is just what the machines wanted.
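To give a flavour of how a machine "learns" to read letters, here is a toy sketch, and I stress toy: a single perceptron (the simplest possible neural network) learning to tell apart two made-up 5x5 pixel glyphs under a bit of random pixel noise. The glyphs, the noise model, and all the function names are my own invented example; real CAPTCHA-breaking systems use much bigger networks, real images, and far more data.

```python
import random

# Made-up 5x5 bitmaps for two "letters" -- nothing like real CAPTCHA
# images, which are larger and much more heavily distorted.
LETTER_T = [1,1,1,1,1,
            0,0,1,0,0,
            0,0,1,0,0,
            0,0,1,0,0,
            0,0,1,0,0]
LETTER_L = [1,0,0,0,0,
            1,0,0,0,0,
            1,0,0,0,0,
            1,0,0,0,0,
            1,1,1,1,1]

def noisy(img, flips=2):
    """Crudely simulate distortion by flipping a couple of random pixels."""
    out = img[:]
    for _ in range(flips):
        i = random.randrange(len(out))
        out[i] = 1 - out[i]
    return out

def train_perceptron(epochs=200, lr=0.1):
    """Learn weights so that T maps to +1 and L to -1, from noisy samples."""
    w = [0.0] * 25
    b = 0.0
    for _ in range(epochs):
        for img, label in ((noisy(LETTER_T), 1), (noisy(LETTER_L), -1)):
            pred = 1 if sum(wi * x for wi, x in zip(w, img)) + b > 0 else -1
            if pred != label:  # classic perceptron update on mistakes only
                w = [wi + lr * label * x for wi, x in zip(w, img)]
                b += lr * label
    return w, b

def classify(w, b, img):
    """Label an image with the learned weights."""
    return "T" if sum(wi * x for wi, x in zip(w, img)) + b > 0 else "L"

if __name__ == "__main__":
    w, b = train_perceptron()
    print(classify(w, b, LETTER_T), classify(w, b, LETTER_L))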

But let's shift back to self-aware AI for a second. The upshot is that if an AI were self-aware, it would be well beyond breaking CAPTCHAs. We would basically have a robot with human cognitive skills, but much more computing power. I leave it to your imagination (mostly sculpted by TV and movies) to do the rest.

SIDENOTE: More real/relevant blog posts to come soon.
