Companionship is part of what makes us human. It drives our need to be with and around people; to converse; to share, and to debate.
The digitisation of companionship is a phenomenon that has been steadily absorbed into everyday life. Two words sum this up for many people: social media. And, with social media, comes the concept of lifestreaming: the sharing of the important and the trivial, the good and the bad, onto digital media. It's the status update, the sharing of photos, and the liking of links. It's an increasingly important part of who we are.
With the digitisation of companionship comes the possibility of developing technological answers to socio-technological questions: principally, how companionship could be replicated artificially. The concept of automata understanding humans is, of course, wider than companionship itself: it covers Machine Translation, conversational systems, and other areas which have surfaced into everyday life as translation services, chatbots, and so on.
Professor Yorick Wilks is an academic within three organisations, a winner of numerous awards in computing and linguistics, and founder of the Institute of Language, Speech and Hearing at the University of Sheffield. His recent work has included the development of two companions, each relevant to a different stage in a person's life.
The first is the Senior Companion, which provides companionship for elderly people. It can provide comfort through a shared reminiscence of the past, and plays a role that Professor Wilks calls the “furry handbag”: something warm, cosy and dependable. Such companions validate photographs through the Internet, and allow for the labelling of people to provide conversational cross-references: in other words, a more elaborate form of photo-tagging. Dialogue tags the photo with discourse.
The second is the Health & Fitness Companion, for any age. It talks about what the person is eating and how much they are exercising (which is monitored), and provides dietary and fitness advice as a result. It acts as an “always-on personal trainer”. Key to this Companion is the knowledge base: the companion knows an increasing amount about you, and what it says and does is dependent on that.
Projects elsewhere in this field include companions which are less emotionally involved, and are simply more reactive. They are the devices which can handle a debrief on a bad day at work, and can just “take it”. They fulfil a short-term need, but Wilks doesn't see the longer-term appeal: “It tries to judge whether you're miserable or happy, and tries to say something to cheer you up. If you say that you're miserable about your job, it will say something like 'you won't get fired, really'. It's the standard consoling thing that you say, without thinking much about what they've said.”
Retention of lifestream data
Companions are less socially frightening than one might think; after all, look at the historical success of the Tamagotchi. However, a development that has taken place since the Tamagotchi's success is the lifestream: the constant recording of social occurrences. The volume of data for an 80-year lifestream has been estimated elsewhere at 80TB.
As we increasingly record our daily, routine events onto digital media, there is a logical connection to be made between the intelligence and “memory” of companions, and the data recorded by humans. The volume of shared messages, photos, and “social detritus” makes it easier for humans to relate to companions. After all, if the companion can remind you of the great speech that you made at a conference – because it accessed the information which you recorded about it – then you are more likely to engage with it. The critical point to remember, however, is that the companion must understand context amongst all of that data.
The challenge for companion technology is to provide agency; to know what's important. If it doesn't handle agency effectively, the rules of inference build up a picture that might be false. If a single recorded fact in your lifestream actually plays against all other recorded facts – whether said in truth or in jest – then it's a challenge for companions to understand the impact. And here lies one of the most subtle yet important emotional aspects of being human, and one of the hardest for companion technology to grapple with: whether to believe everything that someone tells you.
One of the key roles that popular social networks can play in the development of companions is to retain, and offer back, the data that people record for their companions. Given the user base of Facebook, it seems a natural role for the company to play.
“Bit by bit, people like Facebook are building structures which [academics] would do well to take account of. [The concept of] annotating the universe: buildings I like, food I like... if Facebook goes on like this, then to have access to someone's full Facebook account is already to have a huge debate about a life. We're not talking about a companion accessing it - it's already there. You've declared it to Facebook, so it's about organising it.”
Lifestream data is further transformed when location awareness is layered on top. Companions then won't need to ask humans where the fantastic speech took place – they would know.
As we already know from other matters, particularly with reference to Facebook, the capturing and subsequent re-broadcast of lifestream data attracts inevitable concerns about privacy. Mark Zuckerberg's famous quote, “Privacy is no longer a social norm”, conjures up for many, including Wilks, a feeling of pathos: mixed emotions about the impact of losing privacy, as much as about the impact of privacy in general.
“You think back to Puritan times, and the Dutch not having net curtains, because what did you have to hide? Why should anybody hide anything? And yet, that doesn't fit what people feel.
This goes right back to the beginning of the Internet. I worked in John McCarthy's lab in Stanford in the early 70s, and the Internet was just starting; Stanford was one of the first sites. John in 1971 put out a doctrine that although we could protect our files, he wasn't going to protect his, and he didn't expect us to. He thought that everybody's files should be readable. All that he would preserve were job references. He was trying to create a culture of non-privacy, in 1970. He was ahead of his time.”
With Facebook being the best-known example of the clash between social networks and privacy, Wilks sees it as a standard trade-off, although one that is less well understood and accepted by the general public.
“To me, the first time I hit that question was when Vodafone and Google offered free email. We weren't naive; we knew that they wanted to scan your mail, and that they would know what appropriate ads to display, based on your mail. A computer would scan for keywords based on your mail, and you either took the view of 'That's benign, who cares?' or 'I don't want them reading my email and selling me sunglasses'. So, it was right there.
“They were offering you storage so they could penetrate your material - and it's gone on from there. Facebook has had to retrench as it's slightly getting out of hand. I don't have particularly strong feelings about it; I don't feel the strong knee-jerk feeling of 'My life is my life! I don't want anyone to see my medical records.' I don't give a bugger, basically.”
“It's anthropomorphism to think that they are reading your mail. Your mail could be thrown into a bag of words and they couldn't extract a single statement from it... so in that sense, it's not private, they're just looking for key words. Anybody that knows anything about information retrieval knows what it is to extract key terms, and it ain't reading. But, for the public, I'm not sure that those are different ideas.”
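The distinction Wilks draws – keyword extraction versus reading – can be made concrete. The sketch below is a minimal, hypothetical illustration of the bag-of-words idea he describes, not a description of any real ad-targeting system: a message is reduced to an unordered tally of words, from which no statement can be reconstructed, and ad categories (invented here for the example) are scored purely by keyword counts.

```python
from collections import Counter
import re

def bag_of_words(text):
    """Reduce a message to an unordered multiset of lowercase word tokens.
    All sentence structure - who did what to whom - is discarded."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def match_ads(bag, ad_keywords):
    """Score each ad category by how often its trigger words occur in the bag."""
    return {ad: sum(bag[word] for word in words)
            for ad, words in ad_keywords.items()}

mail = ("Lost my sunglasses on the beach holiday. "
        "Need new sunglasses before the next trip.")
bag = bag_of_words(mail)

# Hypothetical ad categories keyed by trigger words.
ads = {"eyewear": ["sunglasses", "glasses"],
       "travel": ["holiday", "flight", "trip"]}

print(match_ads(bag, ads))  # {'eyewear': 2, 'travel': 2}
```

The bag retains word frequencies but no word order, which is the sense in which, as Wilks puts it, such a system is “just looking for key words” rather than reading.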
The future of companionship is also tied up with the future of societal acceptance of such a concept. Clearly, for the concept to be accepted in wider society, its introduction has to be subtle. Taking cues from earlier references to physical and virtual companionship, mass adoption of physical companions could be an end goal, which could easily start with the adoption of companions in virtual space: “I'm sure that you could fit a companion into Facebook. There could be Facebook members that aren't human. That's the way in which we will let non-humans into our society.”
For acceptance to be undertaken on a mass scale, it is not just the role of wider society to understand this concept, but also of government. “What are governments for? They have to deal with things that they have never thought of. The law has to face up to things that it has never faced up to before. In English common law, dogs have a special position, because they can have character. You can't put down a dog, unless it's known to be of bad character. I see no trouble in thinking that there will be non-human members of society; they will be smarter than dogs.”
As with the principle of governmental acceptance, there is the potentially massive issue of legal acceptance. Who is responsible for the companion? What happens if something goes wrong? A heavy burden of responsibility is usually laid on manufacturers - “you can't keep on saying that it's the manufacturer's fault” - but the legal status of companionship remains uncharted territory. It's a massive issue that lawyers would be wise to start considering. Here, Wilks anticipates a breakthrough moment: there will be a test case; but, of course, we don't yet know what form it will take.
Wilks sees a two-stream future for the development of companion research and products. There are the embodied conversational agents – those concerned with anthropomorphism, which work on the more physical, human facets of automata. Then there are those, like Wilks, who are more focussed on virtual companions, and on the use and processing of language and emotions within this frame. However, according to Wilks, the two streams will come together, and he sees nothing in companionship that fundamentally requires embodiment.
A logical way to end the conversation is with death – or, the possibility of retaining characteristics of one's life. Wilks takes a pragmatic view, based on our evolving relationship with posthumous, electronic republication: “If people could tolerate it aesthetically and morally, I don't see what's going to stop it. Somebody said to me 10 years ago, 'there are going to be more Marilyn Monroe movies' - in that you could make a fake movie with that person. I don't see what's going to hold us back.
“There are so many variants of digital lives continuing. It would have amazed our relatives of 100 years ago that we can go on watching films of the dead. They would have found that creepy. We're used to it. That's trivial; but, in 1850, movies of dead people were unthinkable.”
Posthumous companionship materialises in all sorts of ways, with the recession posing no challenge to the death industry. The Vidstone, a product which embeds video memories of people into their gravestones, is a low-tech but, clearly, emotionally engaging way to retain and directly replay memories of people after their death.
And, finally, we return to the living. Wilks sees no end to companionship; it's a conversation that continues in perpetuity; it's there for life. “In a ticket sale, or a bank interaction, it's over. I don't think that companionship ends. There's no reason that it should stop. It should just go on.”
Professor Yorick Wilks is Professor of Artificial Intelligence (Emeritus) at the University of Sheffield, a Senior Research Fellow at the Oxford Internet Institute, and a Senior Scientist at the Florida Institute for Human and Machine Cognition.
Professor Wilks' latest book is Close Engagements with Artificial Companions, now available in hardback.