Tell us about your forthcoming talk.
The title is "Why is my laptop so dumb?", or "Why isn't it smart?". Here it is, in front of me. It has more memory and more processing capacity than your average rodent, but it doesn't help me. It's a passive object. I have got to the point where if you give me a laptop that is ten times more powerful than this in terms of processing, memory and display, I can't work any faster. I can't output any more. I am asking fundamental questions, like why, when I type in "Jane", does it not immediately go to my wife instead of giving me, at random, another 100 Janes from my address book? Why doesn't it observe what I'm typing, what I'm working on, who I'm talking to, and go away and find me information pertinent to that stream of work that I am actively engaged in?
I can go onto Google and type "Artificial Intelligence" and it will come back with 37.5 million references in under 0.8 of a second. It's very impressive, but very useless. It gives me the first 10 of those 37.5 million; how about giving me the top, most relevant references to the work that I am doing? Just that level of intelligence would make me a faster, more efficient human being.
On top of that, I think that the whole interface is very poor. We're still using fingers; we can talk to computers, but we are nowhere near a conversational level yet. The big question that I am proposing is - why not? Here is a different slant; I now talk to audiences where I can remind them of that famous line in 2001, where HAL is being requested to open the pod bay doors... "Sorry Dave, I can't do that". Here we have an intelligent machine in a movie from the late 1960s, predicting that in just over 30 years, we would have a conversational computer, and we don't. Later reference points include R2-D2 and Commander Data, of course. Science fiction has been predicting intelligent machines for a long time. The AI community, back in the 1970s, said that HAL would be here in 25 years, and he isn't - and may not be here for another 25.
So, why are machines so dumb? I got some surprising answers, and managed to come up with a formula that demonstrates that processing power and memory are not the key features of a sentient being. It is the sensory capability and the output mechanism of an entity that really contributes much more to its intelligence.
If you take a selection of comatose human beings - people in a vegetative state who are kept alive by machinery - put them through an MRI scanner and whisper something like "Imagine playing a game of tennis" to them, in around 60% of cases their brain activity lights up in the same way that yours and mine would. Here we have people who are not buried alive, but entombed in meatspace. They can sense, they can hear, but they have no way of reacting: no way of giving an output. The formula that I came up with ranks the ability to get information into and out of an entity as the key element that defines its intelligence, not its memory and processing power. If we leap to the futurists, to people like Ray Kurzweil and the priesthood of the singularity, their calculations and projections are based on a very coarse product of processing power multiplied by memory. What I am saying is that this is not the key feature. The key feature that dictates intelligence is input sensory capability and output actuation capability. The formula that I have produced does not predict an exponential growth in the intelligence of machines, but much more logarithmic or, at best, linear growth, which is a lot slower. Our expectation of getting intelligent machines has rested on the wrong hypothesis. The hypothesis has been that an exponential growth in operating speeds, processing power and memory will give us exponentially faster capability and an exponential growth in intelligence. That's not the case.
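The interview does not spell the formula out, but the qualitative contrast can be sketched in a few lines of Python. Everything below is an illustrative assumption, not Peter's actual formula: the first metric is the coarse "processing times memory" product that the singularity projections use, and the second caps intelligence by a hypothetical input/output term that grows far more slowly than the internals.

```python
import math

def singularity_metric(processing, memory):
    """Coarse 'processing x memory' product used in exponential
    singularity-style projections (illustrative only)."""
    return processing * memory

def io_limited_metric(processing, memory, input_channels, output_channels):
    """Hypothetical alternative: intelligence is capped by the entity's
    sensory input and output capability, however fast the internals are."""
    return min(processing * memory, input_channels * output_channels)

# Assume processing and memory double every two years (Moore's-law style),
# while sensory I/O capability grows only logarithmically with time.
for year in range(0, 21, 4):
    p = m = 2 ** (year / 2)
    io = 1 + math.log1p(year)
    print(year, singularity_metric(p, m), io_limited_metric(p, m, io, io))
```

With these assumed growth rates the product metric explodes while the I/O-capped metric crawls along, which is the shape of the "logarithmic or, at best, linear" growth described above; the specific functions and rates here are placeholders chosen only to show that shape.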
The implications are profound. If you look at the advances that IBM has made with Watson: it has gone from being a contestant on Jeopardy! to becoming a professional medical adviser that is able to give far better diagnoses of human ailments than any human doctor. The real problem is that, for about the last 30 years, no doctor has been able to read and keep up with all the research, development and latest work in any field. Watson is able to access all of that information. If it can only get hold of all previous diagnoses, treatments, prognoses and outcomes, then it will be even more powerful. We now have an exemplar of a machine that we can talk to, and it will give us an answer better than any human being. That works in the medical profession, but I think that it will work in architectural design, civil engineering and construction, electronics, defence, and business management. We are seeing the arrival of the sentient machine, but from directions that we didn't expect. Some of the most challenging questions come about this way.
We live in a world that was built bottom-up. We started with small particles and the formulation of atoms and molecules, then things like amino acids, then organisms and then intelligent organisms. The result is a human being which has a physical and mental ability to inquire, discover and understand. That type of ecology works very well, but it is slow. We are now attempting a top-down ecology where we build an intelligence that has not had an experience of growth and learning. We build the knowledge in from day 1, and it's now going to learn top-down. That is very exciting, and humans will be augmented by this third intelligence of machines. Will this actually arrive as a laptop? It's unlikely, but in cloud computing, it's going to be available. Watson is being engineered to offer mobile access, because of the huge amount of processing that it requires. That will give you and me, as human beings, possibly the first step to a new working environment where we don't just talk to people... we talk to intelligent machines and we can work far faster.
If I can ask this machine a simple question and get a verbal answer, instead of having to type, search, sort, categorise, and rationalise the answers put before me, that will make me so much more efficient. The world is no longer a 1 or 2-dimensional place; it's an n-dimensional, multi-disciplinary place. In medicine, you have nuclear physics in there, chemistry, bioengineering, electronic engineering, signal processing, and so on. Medicine is, in itself, an epitome of our lives. No longer is life simple; no longer is it a single discipline that we are dealing with. It has become much more entwined and engaged. The arrival of Watson is the peak of development in medicine, and could eclipse everything else before it, including the discovery of antibiotics.
In terms of where we are now - in terms of consumer machines that do not understand - how much of where we have got to is the result of very safe thinking... UI design and development ensuring that users get back something which they expect, in a decontextualised way? Has it led us down the wrong path?
I see technology as a series of cul-de-sacs. The telephone is a good example, a technology that turned out to be a cul-de-sac. You come to the point where you don't want a telephone on a piece of wire; you want to do new things. I have no doubt that this QWERTY keyboard will, in time, disappear, but we have had a love affair with this keyboard for the best part of two centuries. As a species and as individuals, we do not like change, especially if it is an uncomfortable one. Speech is a very natural thing, as is visual communication. Gesticulation is hard to replicate electronically, but talking to a computer, providing it is not stilted, is comfortable for people. Look at the success of Siri, and of Watson. It's one thing for technologists, futurists, scientists and engineers to enthuse about a technology, but when you get the end-user, the MD, enthusing about it, you know that you've really got something.
Do these ways of interaction also change how media are consumed: the WIMP-based way in which we use the web, for example?
Without a doubt. If you experienced MS-DOS, then you know what a revolution windows-based interfaces were. The mouse, and all of these devices that we now take for granted... seeing a young child try to swipe the picture on a TV set is an example. All of these interfaces that I have around me, switching from one to another... the paradigm of using my fingers on the iPad gets confused with the paradigm of using my laptop. I'm having to go a stage backwards.
We lurch forward as we exploit a technology to its limit. Windows is great, but I'm uncomfortable with 10 layers on my screen. On a physical desk, I spread things out. My screen is not big enough for that. Effective surface area is a big deal with interfaces that carry information, whereas our devices tend to have these dinky little screens.
Are we playing catchup with what we want? Decent consumer technology has historically been unaffordable, so are we now at a point where we can look at interfaces in a mature fashion, given that Siri, tablets and the like are now accessible to all?
The next phase is an interesting one. All of this processing power that we are carrying will shrink down rapidly, as we get into the cloud. The amount of processing power and memory that we need in the devices that we own, may well rocket downwards as we use more online. At the moment, the bandwidth is not available, but I have gone down from a quad-core MacBook Pro to a single-core MacBook Air, and I'm doing OK, but I need more bandwidth.
In one month, Apple ships hundreds of millions of devices. Running parallel to this is a manufacturing and distribution capability, the like of which the human race has never had before. It's not just that these devices are affordable, they are affordable in massive numbers. To sit here and say that with 7 billion people on the planet, we have over 6 billion mobiles and half of them are 'smart' enough to get on the web... that's quite remarkable. When you imagine that, during my childhood, there was one telephone box in the street, and there are 90 million phones for 60 million people in the UK... that's astonishing. However, the mechanisms are no longer simple; they are very complex. Our dependence on technology is driving demand, as are our human and commercial needs. The manufacturing and distribution capabilities have driven down the price point, and this feedback mechanism continues.
How do we move to this more sensory environment, how disruptive would it be to both consumer and business technology?
As we get AI that really works and is accessible, the difference between people and companies using this technology will be like the difference between two armies, one using a bow-and-arrow and the other using a machine gun. The advantage that it will give us is so phenomenal, that it will just be on that scale. It will be an entirely different world. I don't know that we can exactly predict when or what impact it will have, but it could be bigger than the Internet itself.
How should we start to gear ourselves up for these changes: cloud, Siri, Watson... we are starting to see this stuff take place around us. We are moving to a different method of digital interaction. That changes our view of what technology is: ubiquitous, and yet, somewhat ironically, more invisible.
My endpoint is: if a technology does not give me a return on my investment in time and effort, and improve me in what I can do, then I abandon it. Technology should amplify my capability.
The endgame really is James T. Kirk on the Starship Enterprise. The Enterprise has a rather interesting community; everyone knows quite a bit about technology. Everyone can stumble into an interface and get by. They can all use machines, but they are not carrying any screens themselves. They can talk to any screen, and we are heading towards this any-screen, any-interface world, where the smartest people will be able to wander up to any device and use it.
Presently, you find yourself with too many interfaces. You know that you have spent too much time on the laptop if you double-click the button in the lift. If I had told someone 20 years ago that a car could have a mouse, I'd have been called insane. But a BMW 5 or 7 series has a mouse to control the car's interface. It's an awkward interface, but you can get by. You can just poke around the interface and it works... you just figure it out. The automobile interface itself has standardised on 2 or 3 pedals; the Model T had 5. The fun starts on the dash: the radio interface, the climate control, the steering wheel... all different. When you hire a car, you have to learn quickly and, remarkably, we do. Human beings get by. We have become very adaptable in a multi-interface world of things fixed and mobile, but these interfaces will become far more conversational. In the same way that a car's core physical interface has become standardised, we will get to a point where the interfaces of our machines settle down to being conversational and gesture-driven, and do not involve a mouse or a keyboard.
How are you applying these thoughts into your own work?
I had a request from a defence company to give them a means of assessing 3 AI engines that were fed by thousands of fixed and mobile inputs - including people, communication channels, sensors, radar, acoustic systems and so on - that had, in the process, to make sense of it all and then give hundreds of outputs. The combinatorial dimension of this is like the Internet: it's so big that you can't run tests that check out every dimensional possibility of what these machines can do, so you have to come up with some kind of metric that's all-embracing while giving you some kind of assessment of the capability that you're looking at. Hence, I derived this formula; it led me in the direction of where we're going, what we're doing, and how it might impact the workplace.
It spun off into areas such as direct communication from and into the human nervous system. I was once involved with work on implants such as pacemakers and brain stimulators, and the interference caused by radio systems. But now there is another dimension: telepresence, AR, and so on. At one point, I was engaged with the US Air Force, who were looking at flying aircraft by thinking. That technology is now used in some games consoles. There is also a need to help people to walk and to have better-articulated limbs. Increasingly, we're looking at tapping into the human nervous system, which opens up a whole set of questions around where the organic entity finishes and the machine starts - if you can decide - and what it means to be human or machine. We're making inroads into human and machine communication that is brought together in a much more cerebral way than a tactile one.
Further information on Peter and his work is available at his website. Peter's talk "Why is my laptop so dumb?" takes place at the Real Time Club, 1 Whitehall Place, London, on 25/09/12. For further information and to book, visit the Real Time Club website.