From protecting your credit to the possibility of real Transformers, Drew Turney looks at what artificial intelligence is doing for us today… and what it can’t do.
It’s been bandied about the science and business worlds a lot over the last ten years, and like many buzzwords, the reality of artificial intelligence has fallen far short of the promise. Part of the problem is our limited understanding of just what intelligence is to begin with.
A dictionary describes it as ‘the ability to acquire and apply knowledge and skills’, but from a mechanistic point of view that doesn’t say much. Without a physical property to measure, as we have for blood sugar or body mass, there’s no telling what intelligence is ‘made’ of, which seriously limits the construction of an AI system.
The result has been that the tasks we assign to ‘artificial intelligence’ have shifted along with our definition of it. “It doesn’t have one definition,” says associate professor Ann Nicholson of Monash University’s Clayton School of Information Technology. Nicholson is a respected AI theorist after a 20-year academic career in bewildering-sounding fields like ‘probabilistic reasoning’, ‘knowledge engineering’ and ‘evolutionary ethics’.
“Some people thought of computers as good measures of intelligence so we set up little tests like the Turing test,” she says, “but all that really says is artificial intelligence can just pretend to be human. Then people thought games would be the thing, but we still don’t have general all-purpose artificial intelligence. Humans have things like common sense, reasoning, adapting to change and language that are very hard for a computer to do in a general sense. You get a greater appreciation of adaptability and flexibility in a five-year-old child than a computer system.”
Nicholson’s mention of games is pertinent after the hand-wringing that swept the tabloid press when IBM’s Deep Blue system famously beat chess grandmaster Garry Kasparov in 1997. Amid the resulting calls to remember that computers can’t write poetry or fall in love, it was worth remembering that chess is just the sort of thing we invented computers to do better than us, because it can be characterised as a series of mathematical or probabilistic calculations. Computers work with huge and complex but strictly defined values, whereas our turf is small but much fuzzier abstracts. One takes intelligence; the other is just being good at sums.
Of course, we might not realise how mathematical the world really is. Much human cognition is based on recognising patterns, and as patterns are expressible mathematically that means a computer can appreciate them too. Could even our morass of reason and emotion be broken down into finite mathematical probabilities so a computer could understand — and eventually replicate — them, resulting in a true, human-style artificial intelligence? Take music, which isn’t as free from programming as you think. “A lot of people do programs to write music and some of it sounds quite good,” says Ann Nicholson. “Music has a lot of rules and patterns behind it.”
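As a toy illustration of the rules and patterns Nicholson mentions, here is a minimal sketch of rule-driven melody generation. The scale, the step weights and the “favour stepwise motion over leaps” rule are arbitrary choices invented for this example, not any real composition program.

```python
import random

C_MAJOR = ["C", "D", "E", "F", "G", "A", "B"]

def generate_melody(length=8, seed=42):
    """Generate a simple melody by applying one musical rule of thumb."""
    rng = random.Random(seed)  # seeded so the sketch is repeatable
    idx = 0                    # start on the tonic
    melody = [C_MAJOR[idx]]
    for _ in range(length - 1):
        # Rule: favour stepwise motion (+/-1) over leaps (+/-2).
        step = rng.choices([-2, -1, 1, 2], weights=[1, 3, 3, 1])[0]
        idx = max(0, min(len(C_MAJOR) - 1, idx + step))  # stay in the scale
        melody.append(C_MAJOR[idx])
    return melody
```

Even a rule this crude produces something recognisably melodic, which is the point: much of what sounds creative can be captured as constraints over patterns.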
So after all the wrangling over definitions the most accurate model for intelligence we have might be the sum total of cellular behaviour. As early as 20 years ago physicists like Australian Paul Davies were theorising that intelligence might simply be an emergent property of complexity, millions of individual cells going about their individual tasks, consciousness arising spontaneously out of the collective swarm.
We’ve seen similar behaviour in computing projects such as SETI@home and Galaxy Zoo, which distribute the work of crunching radio-astronomy data and cataloguing astronomical objects respectively. In each case, millions of individual processes create a sort of nanoswarm of intelligent enquiry, and Ann Nicholson believes such specialist processes are the AI we should be looking for, rather than an artificial human that does everything.
“We’re not going to build an all-seeing computer program like a HAL 9000 that’s hooked into the whole world to make all the final decisions,” she says. “We’re going to make incremental improvements in what intelligent systems can do. They’re there to support human decision making and help humans filter this overload of information, change data into useful information, filter it, provide guidance, extra checking, common sense and so on — little pieces of what we call decision support. And only in a few cases would it actually really replace people, such as a robot to defuse a bomb or go to Mars.”
Forget almost everything you know about artificial intelligence from the movies. AI isn’t going to mean all-purpose android helpers or the enslavement of humanity for a long time yet. Though it might surprise you to hear it, we can’t really improve upon the machinery itself. The real action in AI for researchers and business is in theory and methodology.
“Computers are now incredibly fast,” says Nicholson. “They do most of the things we want them to and we’re not really limited when it comes to speed, we’re limited by structuring problems.” To her, the answer isn’t to make computers faster at crunching data, it’s to get them to recognise the data they need to do their jobs. “Think of the roads. They think when they build the roads they’ll get rid of the traffic jams but we end up expecting more travel so we still have traffic jams. AI is a bit like that — the data we collect just keeps running ahead of the things we’re trying to develop to make it more useful and keep it under control.”
A good example is virus protection. The traditional approach is to refer to a definitive profile of every known virus and ignore anything else, even while it blows up your hard drive. Security vendor AVG’s products use heuristic analysis together with the traditional method to detect malware, a process marketing manager Lloyd Borrett calls the ‘if it walks like a duck and squawks like a duck then it must be a duck’ approach.
Rather than just searching for known file signatures, AVG’s products look for suspicious file structures in the code that don’t match those of most commercial programs. The software then runs the program in a protected environment to see how it behaves, with anything suspect or virus-like flagged for attention. Heuristics is a good example of AI because it mimics the human ability to extrapolate results even without past experience or knowledge of the values involved.
Put another way, heuristics catches important elements that aren’t immediately obvious because they have previously unknown characteristics, and then self-programs to recognise more characteristics like them. It’s a quality we’d traditionally call ‘intelligence’, and combined with the sheer volume of data we’re generating, it’s proving a very lucrative trade. Flagging ‘unusual’ behaviours is protecting against everything from credit card fraud to insurance claim fraud as computers combine spending patterns, geographic location, income, medical history and more to identify transactions or movements that might be suspect.
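The two-layered approach described above — exact signature matching backed by heuristic scoring — can be sketched in a few lines. The signature hashes, trait names, weights and threshold below are all invented for illustration; real scanners like AVG’s use far richer models.

```python
# Hypothetical byte-pattern hashes of known malware.
KNOWN_SIGNATURES = {"deadbeef", "cafebabe"}

# Each suspicious trait contributes a weight to an overall score.
HEURISTIC_WEIGHTS = {
    "writes_to_system_dir": 3,
    "self_modifying_code": 4,
    "registers_autostart": 2,
    "packed_executable": 1,
}

def scan(signature_hash, observed_traits, threshold=5):
    """Return a verdict: known malware, heuristic flag, or clean."""
    # First layer: the traditional signature lookup.
    if signature_hash in KNOWN_SIGNATURES:
        return "known-malware"
    # Second layer: score behaviours observed in the sandbox.
    score = sum(HEURISTIC_WEIGHTS.get(t, 0) for t in observed_traits)
    if score >= threshold:
        return "flagged-for-review"  # walks and squawks like a duck
    return "clean"
```

The heuristic layer is what catches a brand-new virus the signature database has never seen, at the cost of occasional false alarms — hence the flag for human review rather than outright deletion.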
Such data mining can spot an errant purchase before it’s approved. Once, the unlucky cardholder would simply have to make a claim after learning their account had been emptied by the Russian mafia. Today a computer can deduce that an Australian who lives in a poor suburb on a government pension probably won’t be trying to buy a Rolex in a Bulgarian marketplace, flagging the purchase for checking by a human operator.
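A minimal rule-based sketch shows the idea of matching a transaction against a cardholder’s profile. The field names, rules and thresholds here are invented for illustration; production systems such as Fair Isaac’s Falcon use statistical models trained on millions of transactions rather than hand-written rules.

```python
def flag_transaction(txn, profile):
    """Return the list of reasons a transaction looks suspicious."""
    reasons = []
    # Rule: a purchase far from the cardholder's home country stands out.
    if txn["country"] != profile["home_country"]:
        reasons.append("foreign-location")
    # Rule: an amount far above the usual spend stands out.
    if txn["amount"] > 5 * profile["average_purchase"]:
        reasons.append("unusual-amount")
    # Rule: a merchant type the cardholder never uses stands out.
    if txn["merchant_category"] not in profile["usual_categories"]:
        reasons.append("unfamiliar-merchant")
    return reasons  # an empty list means nothing stood out

# The pensioner-and-the-Rolex scenario from the article, in data form.
profile = {
    "home_country": "AU",
    "average_purchase": 80.0,
    "usual_categories": {"groceries", "fuel", "utilities"},
}
txn = {"country": "BG", "amount": 9500.0, "merchant_category": "luxury-goods"}
```

Here the transaction trips all three rules at once, which is exactly the sort of result that gets routed to a human operator rather than silently approved.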
And it’s big business. Michael Chiapetta, vice president of product development for predictive analytics provider Fair Isaac, told the San Diego Union-Tribune newspaper that one-tenth of one percent of credit card transactions are fraudulent, representing between $1 billion and $2 billion annually. He claims the company’s flagship fraud detection product, Falcon, reduces fraud by up to 50 percent.
Of course, there is a movement to make AI agents more human-like, and it’s driven by the not inconsiderable profit motive of the gaming industry. One example is the deployment of software bots that play poker in online casinos, like the one engineered by the University of Alberta’s Computer Poker Research Group. While the academics were up front about pitting a computer against human players in a 2008 tournament, accusations that online casinos are unleashing pokerbots of their own are becoming more common. And with worldwide revenues from online gambling totalling US$26.9b in 2008 (according to the American Gaming Association), operators are likely to deploy any advantage they can muster for a slice of the action.
AI is also being programmed into anywhere from one to thousands of secondary characters in video games. As the graphics and processing capability of PCs and game consoles expand exponentially and backdrops and environments become more cinematic, players expect those characters to react realistically to their surroundings.
“In Ghost Recon Advanced Warfighter we had different types of enemy soldiers and key values differentiated them, like detection distance, accuracy, prioritising being in cover and weapon type,” says Per Juhlén of Sweden’s GRIN Studios. He’s explaining a set of labyrinthine-sounding programming rules and parameters that constrain in-game movement and behaviours but which actually come from fairly rigid mathematical values.
“You have triggers that can be based on placement, time and collision detection, so when a certain trigger happens you can pretty much decide to trigger any kind of event. In movement you can have a dynamic movement pattern that’s generated upon unit activation or you can pre-calculate the movement pattern, which off-loads the processor and will be easier to predict, but of course limits what you can do in such an environment. The key is that most things are based on a fairly small amount of core functions, they just need to be composed wisely.”
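The key values Juhlén describes — detection distance, accuracy, cover priority — can be sketched as a simple state machine that a trigger updates. The class, stats and thresholds below are invented for illustration, not GRIN’s actual code.

```python
class EnemySoldier:
    """A toy game-AI agent driven by a few numeric parameters."""

    def __init__(self, detection_distance, accuracy, cover_priority):
        self.detection_distance = detection_distance  # metres
        self.accuracy = accuracy                      # 0.0 to 1.0
        self.cover_priority = cover_priority          # 0.0 to 1.0
        self.state = "patrolling"

    def update(self, player_distance):
        """A proximity 'trigger': react only when the player is in range."""
        if player_distance <= self.detection_distance:
            # Core rule: cautious soldiers seek cover, aggressive ones attack.
            if self.cover_priority > 0.5:
                self.state = "seeking-cover"
            else:
                self.state = "attacking"
        else:
            self.state = "patrolling"

# Two enemy types differentiated only by their key values.
grunt = EnemySoldier(detection_distance=40.0, accuracy=0.3, cover_priority=0.8)
sniper = EnemySoldier(detection_distance=120.0, accuracy=0.9, cover_priority=0.2)
```

Note how the same small core function produces visibly different behaviour simply by varying the parameters — which is the composition Juhlén says is the real craft.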
Now that the intelligence benchmark has moved on from tacky online software bots holding ‘conversations’ and beating us at chess, where does it go from here? “Poker playing robots are specifically adapted to the problem they’re applied to,” Ann Nicholson reminds us. “They have a model of poker, a model of probabilities and a model of how people might bet, so they do that very well but they don’t do anything else. And we’re getting pretty good at those individual parts. Concentrating on these individual tasks in the immediate term will give us useful computer programs. Poker isn’t a very serious example but imagine better models to predict weather or fires. Some people will think about integrating them [into an intelligent robot] but I think we could spend 50 years on one big over-arching system and not get very far.”
So while people will always try to build C-3PO-style machines that look and talk like us, the real advances will come in individual components we design to do one job well. Human intelligence arose from the unconscious co-operation of such systems to sustain the organism. Maybe if we get the constituent ‘smart’ parts right, a true AI system will spontaneously evolve — dare we say it — intelligently…