What is AI?

What does Artificial Intelligence mean?

When we think of intelligence, we tend to think of people like von Neumann, Einstein, Galileo, Archimedes or da Vinci, but on the overall scale of intelligence this is a very anthropomorphic perspective.

This post is a brief look at what form intelligence created by man, or “artificial intelligence”, is likely to take.

I’ll borrow here from Nick Bostrom’s book “Superintelligence” (Bostrom, 2014), which breaks down three main types of AI superintelligence. These definitions also fit AI in general, or at least where it is headed. The list isn’t all-inclusive, but it gives a good overview of how different true AI is likely to be from human intelligence.

1) Speed Superintelligence – intelligence equivalent to a human’s but running at machine speed. Bostrom gives an interesting example of a human dropping a teacup: to us it falls and spills too quickly to stop, but to a speed superintelligence running a whole brain emulation (a human brain somehow uploaded to a computer) the fall would seem to take hours, or as Bostrom puts it from the ASI’s perspective, “enough time for you not only to order a replacement cup but also to read a couple of scientific papers and take a nap”. (A toy calculation of this time dilation follows after this list.)

2) Collective Superintelligence – multiple lower intelligences working in parallel; think of army ants in nature, or perhaps the race to put a man on the moon in the 1960s. Such an AI would be especially good at tasks that can be broken into sub-parts to be worked on in parallel, and it would organise those parts better than we can (no striking, moody or sick workers; it can work 24/7 without a break and can recall everything almost instantaneously). The ASI would be better in any one area than is possible for any one individual, and overall would be better in any area it chooses or is designed to pursue. (A rough sketch of why divisibility matters follows after this list.)

3) Quality Superintelligence – this would do things that humans simply cannot do, whilst being at least able to do the things that we can. Think of humans and apes: we talk in a highly structured way, make complex tools, and plan for the distant future. We can do many things that, say, an elephant or whale cannot do despite their having bigger brains. A quality superintelligence would be capable of things we cannot currently conceive of; to us it would therefore have “magic”.
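To get a feel for the time dilation in the speed superintelligence example, here is a toy calculation; the speedup factor is an illustrative number I have chosen myself, not a figure from Bostrom. It simply converts the real-world duration of the falling teacup into the subjective time experienced by a much faster emulation.

```python
# Toy illustration of speed superintelligence "time dilation".
# The speedup factor below is a hypothetical number chosen for illustration only.

def subjective_seconds(real_seconds: float, speedup: float) -> float:
    """Subjective time experienced by the emulation during real_seconds of wall-clock time."""
    return real_seconds * speedup

cup_fall = 0.5        # rough real-world fall time of a dropped teacup, in seconds
speedup = 100_000     # hypothetical emulation speedup over a biological brain

hours = subjective_seconds(cup_fall, speedup) / 3600
print(f"{cup_fall}s of real time feels like {hours:.1f} subjective hours")
# -> 0.5s of real time feels like 13.9 subjective hours
```

At that (entirely assumed) speedup, the half-second fall stretches to over half a subjective day, which is plenty of time to order a new cup, read a couple of papers and take a nap.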

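As a rough sketch of why the ability to break a task into sub-parts matters so much for a collective intelligence, the classic Amdahl’s law formula (my own aside, not something Bostrom uses) shows how the speedup from adding workers is capped by whatever fraction of the task cannot be divided:

```python
# Amdahl's law: overall speedup from `workers` parallel workers
# when only a fraction p of the task can be divided among them.

def amdahl_speedup(p: float, workers: int) -> float:
    """Overall speedup when fraction p of the work is parallelisable across `workers`."""
    return 1.0 / ((1.0 - p) + p / workers)

for p in (0.50, 0.95, 1.00):
    print(f"p={p:.2f}: {amdahl_speedup(p, 1_000_000):,.0f}x speedup with a million workers")
# p=0.50 caps out near 2x, p=0.95 near 20x, while a fully divisible task scales almost linearly
```

This is why a collective superintelligence shines on problems that decompose cleanly, and why better organisation of the parts (no idle, sick or striking workers) is itself a large part of the advantage.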
So intelligence from a machine perspective is likely to be very different from, one might even say “alien” to, the intelligence of man.

Two common terms used by AI researchers are AGI, or Artificial General Intelligence (at least equivalent to a human), and ASI, or Artificial Super Intelligence (intelligence greater than that of any human who has ever lived).

What approaches are likely to reach AGI/ASI?

The journey to AGI can be broadly broken down into two paths – biological (human augmentation) or machine. Proponents of the biological path, such as Ray Kurzweil, believe that by the 2030s we’ll have nanobots in our brains and all be connected to the cloud wirelessly. However, in order to get such tightly knit brain-computer wet technology we’ll need to understand how the brain works at a fundamental level, and if we can do that, we’ll already be at a stage where we can have an AGI emulating a human brain in hardware. An Oxford survey of AI experts (Sandberg and Bostrom, 2011) gave the results in the graph below for when they think AGI will be reached; if this probability distribution is in any way accurate, Kurzweil would appear to be highly optimistic in his forecasts.

[Figure: survey estimates of when AGI will be reached – Machine Intelligence Survey (Sandberg and Bostrom, 2011)]

Biological approaches such as sperm selection or DNA augmentation will have long time scales, simply because humans take a long time to grow. Even then, if we imagine the smartest person ever to have lived as Einstein, how much further can we go biologically? To come back to the point I raised at the start of this article: are we in danger here of anthropomorphizing?

[Image: pulp science fiction cover – why do bug-eyed monsters always get the girl?]

Eliezer Yudkowsky, a renowned researcher in the area of AI, describes human intelligence in the context of intelligence overall:

The entire range from village idiot to Einstein, or from village idiot to Bismarck, fits into a small dot on the range from amoeba to human (Yudkowsky, 2008)

Like some old science fiction story (see image above) where we imagine that robots or bug-eyed monsters are interested in pretty girls, we anthropomorphize intelligence and think that there is a great leap from the village idiot to Einstein, but on the scale of intelligence it’s minuscule. Tim Urban graphically compares these perspectives nicely in his article on AI on waitbutwhy.com (near the end of the article).

If we have already managed to go from the amoeba to AGI, the step from AGI to ASI is not the huge leap it may seem from a human perspective.

My next article will discuss the potential impact of ASI if and when it arrives sometime in the future.

I’ll finish off with a quote from I.J. Good (Good, 1965):

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

References

Bostrom, N., 2014. Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Good, I.J., 1965. Speculations concerning the first ultraintelligent machine. Advances in Computers, 6, pp.31-88.
Sandberg, A. and Bostrom, N., 2011. Machine Intelligence Survey. Technical Report #2011-1, Future of Humanity Institute, Oxford University.
Yudkowsky, E., 2008. Artificial intelligence as a positive and negative factor in global risk. In: Global Catastrophic Risks. Oxford University Press, p.303.
