
The Limitations of AI from the Horse’s Mouth

I wrote an opinion piece a long time ago on the absurd amount of hype around AI, both optimistic and pessimistic.  But I’m just a guy on the internet.  Andrew Ng, on the other hand, is a world-renowned expert on all things AI.  Not only does he teach the famous Stanford Machine Learning class on Coursera, he was also the founding lead of the Google Brain team and currently leads Baidu’s AI team.

His article is similar to his teaching: concise, clear, and enlightening.  Give it a read:  What Artificial Intelligence Can and Can’t Do Right Now.

Hardware to Enable A.I.

In my previous post I mentioned that I thought a major limiting factor to developing “real” A.I. will be hardware that supports massively parallel information processing.  In short, my criticism was that existing A.I. solutions typically rely on brute-force calculations and very powerful machines, which is very different from what the brain does.  I used the example of simulating how a school of fish moves with traditional modeling techniques versus how the real system works: each fish figures out where to go next on its own, with no master bookkeeper tracking everyone’s movements and telling each fish where to go.

I’ve been reading about IBM’s efforts to build hardware that is a step toward the type of information processing I had in mind.  They call them Neurosynaptic Chips.  I’m very much interested in learning how they write programs that utilize such functionality.  I imagine this is a bit like trying to fit a square peg into a round hole.  The brain’s “program” is not software, but instead is hard-coded by the structure of the connections between neurons.  Or put another way, the “program” is in the hardware architecture itself.  Perhaps a hardware/software model is more flexible than what the brain uses since one could conceivably use software to re-route information pathways instead of needing to alter the physical paths themselves.  It’s far from obvious that nature’s solution to the intelligence question is the best one in principle, but we’ll have to wait until we understand how the brain does it before we can start parsing out the elements that are critical from the ones that are merely consequences of the constraints of evolutionary dynamics.  It’s fun to think about!

Speculation/Opinion: A Call for Calm Amid A.I. Fears

It seems there’s been a recent surge of famous smart people expressing their fears about the future of artificial intelligence and the ramifications of passing the so-called singularity, the point where machine intelligence surpasses humans’.  Stephen Hawking and Elon Musk seem to be the loudest voices.  A recent book by philosopher Nick Bostrom warns of so-called Super Intelligence, a speculative notion of intelligence orders of magnitude beyond human capability.

First, let’s assess where we are currently:  nowhere remotely close.  Some great computer scientists have gotten computers to do some really cool things, but very little of it could be mistaken for the intelligence of anything except the programmer.  Let’s consider a couple of examples.

[Image: Obligatory Brain Picture]

The classic:  playing chess.  Quite simply, the machine is good because it can simulate the outcomes of billions of possibilities and choose the move that maximizes its probability of winning at each stage.  Humans are incapable of this amount of computation yet can still compete at a high level by a mechanism I won’t pretend to know how to describe.  In this case, the technology that enables the computer to compete has little to do with creating intelligent software and everything to do with the hardware that lets the computer do its relatively simple calculations very quickly.
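
To make the contrast concrete, here is a toy sketch of that kind of brute-force look-ahead: plain minimax over a hypothetical game interface (the `legal_moves()`, `apply()`, and `score()` methods are made-up stand-ins, not any real chess engine’s API).  Real engines add pruning and hand-tuned evaluation, but the spirit is the same: simulate everything, then pick the best-scoring move.

```python
# Toy brute-force game-tree search (plain minimax). The `game` object and its
# legal_moves()/apply()/score() methods are hypothetical stand-ins, not the
# API of any real chess engine.

def minimax(game, depth, maximizing):
    moves = game.legal_moves()
    if depth == 0 or not moves:
        return game.score()                      # static evaluation of the position
    scores = [minimax(game.apply(m), depth - 1, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(game, depth):
    # Simulate every line of play `depth` moves deep and keep the best one.
    return max(game.legal_moves(),
               key=lambda m: minimax(game.apply(m), depth - 1, maximizing=False))
```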

Another illustration of this idea is presented by Jeff Hawkins in his book On Intelligence and a similarly themed TED Talk.  I will paraphrase.  Imagine coming home from work, walking up to your house, and going through the door like you do every day.  However, on this particular day someone stuck a small piece of sandpaper on the bottom of the door knob.  The moment your hand touches the knob, the abnormality would likely grab your attention.

The current way to mimic this with software is similar to how a machine plays chess:  brute force.  Your robot would climb the stairs (something incredibly difficult to get a robot to do, although toddlers learn it with ease while chewing gum and having a conversation) and then need to run down a list of all the things to check.  If the programmer didn’t have the foresight to check the texture under the knob as it was grabbed, then the machine won’t detect the abnormality.  Even if the programmer were clever enough to hard-code this routine, there’s a huge space of other possibilities that could occur, and the programmer would always be playing catch-up to account for new curve balls.  Whatever it is humans are doing, it seems it doesn’t include ruling out all conceivable possibilities.
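
A caricature of that checklist approach in code, with entirely made-up sensor names and expected values: every anomaly the robot can notice has to be anticipated and hard-coded in advance, and anything off the list simply goes unnoticed.

```python
# Caricature of the hard-coded checklist approach. The sensor names and
# expected values are invented for illustration only.
EXPECTED = {
    "door_color": "red",
    "knob_height_cm": 91,
    "knob_texture_top": "smooth",   # nobody thought to check the bottom of the knob
}

def check_for_anomalies(sensor_readings):
    anomalies = [name for name, expected in EXPECTED.items()
                 if sensor_readings.get(name) != expected]
    # Sandpaper under the knob, a new welcome mat, a different squeak in the
    # hinge: none of these register unless someone added a check for them.
    return anomalies
```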

These are fundamentally different approaches.  Computer scientists have found great ways of finding patterns in data and, with the best engineers and the most powerful systems, can get machines to find (some) cats in videos, but the approaches remain in the category of “huge numbers of calculations,” and the results do not translate easily to new types of problems.

Getting a robot to demonstrate the flexible motor skills of a 3-year-old is beyond the state of the art by a large margin.  But a computer can multiply numbers far better and faster than the most trained mathematician.  Finding clever ways to use mathematics and billions of fast computations to get a computer to approximate something humans do with ease is not anything like machine intelligence.  The fact is we know very little about building intelligent systems, and to suggest we’re something on the order of a decade or two away from building a technology that surpasses human ability is as fanciful as thinking we would be driving flying cars in the year 2000.  Barring a radical new approach that enables more progress in a decade than has been accomplished in 50+ years, it just isn’t going to happen.  Arguments that quote Moore’s Law fail to see the distinction between powerful computation (which we’ve already figured out) and intelligence, where we’re clueless.

Hardware Differences

A major difference, I speculate, that accounts for a portion of the discrepancy lies in the information-processing architectures.  Although exceptions certainly exist, the modus operandi of a computer is to take a task, compute the outcome, and then move on to the next task.  This is easy for us to overlook because computers complete their simple tasks very, very quickly.  Let’s say you want to model the movement of a school of fish.  The typical approach is to stop the clock, cycle through each fish and calculate what it will do next given some simple rules and the current configuration that’s frozen in time, advance the clock one tick, stop it again, move the fish to their new positions, then recalculate for each fish and repeat this loop many times.  In reality, each fish worries only about itself, and there’s no master bookkeeper tracking each and every movement.
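
In code, that serial, clock-stopping loop looks something like the sketch below.  The update rule (drift toward the school’s center plus a little noise) is just a placeholder, not a real flocking model; the point is the single bookkeeper visiting every fish in turn.

```python
import numpy as np

# Serial, clock-stopping simulation of a school of fish: one bookkeeper freezes
# time, visits every fish in turn, computes its next position from the frozen
# snapshot, then advances the clock one tick. The update rule (drift toward the
# center plus noise) is a placeholder, not a real flocking model.
def step(positions, rng):
    center = positions.mean(axis=0)            # global bookkeeping over all fish
    new_positions = np.empty_like(positions)
    for i, pos in enumerate(positions):        # visit each fish in sequence
        new_positions[i] = pos + 0.1 * (center - pos) + rng.normal(0.0, 0.01, size=2)
    return new_positions                       # only now does everyone "move at once"

rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 1.0, size=(100, 2))   # 100 fish in a 2-D tank
for _ in range(1000):                              # many ticks of the clock
    positions = step(positions, rng)
```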

Similarly, even if we “train an artificial neural network” so that it can detect faces in an image, the master bookkeeping computer still has to sequentially compute how each pixel affects each artificial neuron.  In the retina/brain, the magic is largely happening in parallel.  Like the fish, each neuron reacts to its particular environment and doesn’t need to wait in line to carry out its action.  Understanding what each fish is doing individually is only part of the story; the collective behavior of many simply interacting fish is the emergent behavior of interest.
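
Here is what that sequential bookkeeping looks like for a single layer of artificial neurons, written out as explicit loops.  The layer sizes and weights are arbitrary; this is only meant to show the serial walk over neurons and pixels, not how any particular framework implements it.

```python
import numpy as np

# Serial bookkeeping behind one layer of a "trained" artificial neural network:
# the computer walks through the neurons one at a time and, for each, sums the
# contribution of every pixel. Sizes and weights here are arbitrary.
def layer_forward(pixels, weights, biases):
    activations = np.empty(len(biases))
    for j in range(len(biases)):                   # one artificial neuron at a time
        total = biases[j]
        for i in range(len(pixels)):               # one pixel at a time
            total += weights[j, i] * pixels[i]
        activations[j] = max(0.0, total)           # simple nonlinearity (ReLU)
    return activations

rng = np.random.default_rng(1)
pixels = rng.random(32 * 32)                       # a flattened 32x32 grayscale image
weights = rng.normal(size=(16, 32 * 32))           # 16 neurons, one weight per pixel
biases = np.zeros(16)
print(layer_forward(pixels, weights, biases))
```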

There are of course technologies that take a computer program and break its tasks into smaller pieces for parallel computation, if the problem at hand lends itself to such parallelization, but it’s far from the scale of what is happening in the brain.  Having a cluster of 10 powerful computers break down billions of computations and re-assemble the results is qualitatively different from a billion simple units making simple calculations simultaneously with ambiguous “inputs” and “outputs”, with many signals being sent among units on a complex network structure.  Statements suggesting that the problem is that we don’t have the computational power to simulate billions of neurons, I think, again miss the mark:  they implicitly assume the approach should be the status quo, serial computation (or parallel computation used to speed up a largely serial task).  I would guess advances in AI research will be as much limited by hardware development as by software, and that doesn’t mean more clock cycles per second.  However, developing the tools to control and measure such a massively parallel system is a challenge in itself.  I think the real meat of the problem is in the emergent phenomena of complex systems, but of course, being a person studying in this field, I would think such a grand thing about it.

Simulating the dynamics by stopping the clock and making incremental changes may help us understand mechanisms and be a useful exploratory tool, but there’s no reason to expect that the best way to build a “school-of-fish-like” machine is with serial processing.  The fact that much of the AI community, and the institutions that fund it, is focused on traditional software-on-a-computer solutions suggests that not only are we far from realizing this “real” Artificial Intelligence, we’re not even in the right paradigm.

The Fear

So where does the fear come from?  I won’t pretend to have an answer to such a question, but it seems peculiar that people largely regarded for their intelligence fear intelligence beyond their own.  Perhaps they’re fearful that their strength will be marginalized?  Regardless of their personal motivation, it’s all too common to see the public influenced by sensational reporting.  The fact that you likely understand the HAL 9000 and Skynet references not only shows this fear of advanced AI isn’t a new one, but also that their “distant future” predictions (2001 and 2029, respectively) tend to massively underestimate just how far away radical technological developments are from their present.

Although there’s little harm in needlessly speculating on how to prevent a Super Intelligence from destroying the human race (it feels absurd even typing it), there could be harm in taking some of the recommended actions, like creating institutions and legislation to fix the problems before they’re problems.  If we’re completely in the dark on how to build intelligent machines because we have a poor understanding of intelligence, it seems reckless to speculate on the potential powers and “intentions” of such technologies.  Suggesting ways of dealing with these speculative powers and intentions appears, to me, quite absurd.  To imagine that we’ll stumble upon the technology to create a Super Intelligence and then make a mistake that allows it to super-manipulate us into super-destroying our entire race vastly underestimates the challenges we’re facing.  Thinking we should start crafting ideas for controlling it seems to me like drafting traffic laws for flying cars in the 1970s.  Before we start building the Faraday cages to suppress the beast, let’s first see if we can get that voice recognizer built with the power of Google to do a little better than this on my voicemail:

“Ohh. Hi. This is Tony for maim client, she. I’m calling about steps 32 in the C R bye hey you know I’mtalking about once a step 30 to Okay bye”