Monday, 22 August 2011 12:10

In conversation with... David Berry and Andy Piper

David Berry and Andy Piper. Photography courtesy of David Berry and Andy Piper

 

Software is increasingly becoming part of the world around us: from cars to washing machines, we are working with "smarter" devices that offer levels of service and interaction that have never been seen before. What does this mean for society, and for how society understands the role that software plays within it? We invited two leading thinkers – David Berry and Andy Piper – to discuss software from a socio-cultural perspective.

How has the perception of software changed?

DB: Software is vanishing from both the collective and individual consciousness, and at the same time moving to the forefront of people's minds - which is a very strange phenomenon to try to understand and map. It could be said, and many theorists argue, that where people are continually programming computers, they don't actually realise that they are programming. The user interfaces have developed to such an extent that it doesn't feel like we are using a computer - it's such a fluid experience.

At a societal level, we have a situation where software increasingly structures, formats and organises our social environment - I'm thinking of social media as much as satellite navigation systems. We forget that it's software that's doing this job, that it's taking on the cognitive load. We outsource a certain portion of our labour to these devices.

At the same time, we also have this panic: the future lies in high technology and software, and so we have this desperate need to understand, to teach, to be ahead in the skills race. It's a very strange, schizophrenic relationship at a societal level, which I think is mirrored at the consumer level. We are increasingly finding software-enabled devices where the software is trying very hard not to be at the front of the device, even though it may be the thing that adds value to it. People use these devices without thinking about it. This raises questions for people's way of life, because they are going about their everyday lives without critically thinking about how important software is.

The recent riots bring to the fore this often-forgotten world of software and the amount of structure that it provides for us, and this gives rise to a moral panic; in the 1950s there was the idea of an "autonomous technology", where technology had a deterministic will of its own. It's interesting how this fear resurfaces in mankind occasionally. We see Cameron determined to stamp out the devices that seemingly allow such riotous behaviour to take place. So, in many ways, software is a very interesting, nascent subject.

 

AP: Speaking as someone who comes from more of a software development background: software is becoming invisible, more pervasive, more present in people's lives, without there necessarily being a firm realisation of it – though there may be a realisation when it surfaces in the form of something like social media.

Fundamentally there's a software dimension to make Twitter and YouTube work, but there is also a hardware dimension to make it available and interactable, as well as there being a data dimension. I agree that software is invisible. One of my colleagues, Grady Booch, talks about code being the "invisible thread" upon which we weave our technology these days.

Maybe people see technology as "the thing with which they interact", and don't necessarily distinguish between the device, the hardware, the computer - if they are still visible - and the software. Software then becomes a reflection of, and in some ways an extension of, the programmer - the "soul of the machine".

 

DB: The question of "what is software?" is critical. It's important to distinguish between technology and software. Technology applies to the water mill as it does to the computer. I do think that there's something specific about the emergence of software. Technology is certainly getting close to what many think is its theoretical limit, in terms of processing speeds. Google bases itself on very standard hardware technologies, then writes incredibly sophisticated software on top of them - to "abstract away" the hardware.

The concept of technology itself is continually changing. How people relate to their software-enabled objects is a question of language. We are trying to come to terms with a software-enabled world. Unless we make a distinction between previous technologies and software-enabled digital technologies, we make it more difficult to understand this world. Previous technologies - houses, water systems, electricity grids - of course they're important. But the amount of actual study of software itself as a phenomenon, particularly in social science and the humanities, is rather small. There has been lots of work in computer science, but it's interesting how little has been done on the social side.

 

AP: Given that distinction, do people "see software" or technological objects that contain software? I agree with you that software itself is more or less invisible; you mentioned that people don't realise that they're programming. I don't necessarily realise that I am programming my PVR, or my phone. I'm just using them for how they were designed. There is still an individual that has developed the code, but I don't think that person or their code is necessarily visible to the user any more.

 

DB: That, again, raises a very important distinction that we need to make between code and software. These distinctions are always fraught with problems, and there is always the potential for simplification, but it is helpful for thinking this through. I draw a distinction in the book between code and software, and try to point to the fact that code is essentially source code, with development practices around it. Software is the finished product - the environment that people can buy and use, where they don't see the code.

If we talk about software alone, then the industry has spent the last 40 years desperately trying to educate the everyday person into what software is. Versioning is part of this, where the year is used at the end of the name, such as with Windows 95. There was this whole creation of a product industry, where you go into a shop and buy software: you would buy a big box of software because it was very expensive, and inside was this tiny disc which didn't fill the box.

This educative process over a few decades has had an effect. We are at a point where people do know what software is. When people talk about their phones, they talk about the version and are often prompted to upgrade the software. The furore over viruses, bugs, and Trojans has also educated people about the dangers of software. The security industry takes the line "Are you protecting your children?". However, there is an understanding of software that I think may be passing.

 

AP: Boxed software and annual revisions are going away, and it's becoming more of a constant process.

 

DB: We are getting what are called delta upgrades, so that software is now in a constant update cycle - so much so that it no longer even tells you that it's updating. That points to what some have called "enchanted objects": objects that have a "liveness" to them, a vitality. People don't need to know that there is software on there. The question is: what happens when we start moving from a world of mechanical software, where it is a technical process and a technical requirement, to software as a more organic part of the ecology of life, where it fits in so well that it becomes invisible? It has lots of interesting, but also very scary, implications.

 

AP: It partly comes back to the demonisation that has been used to sell things - security, for example. As things become invisible, people have this inbuilt fear of things happening without their permission or knowledge, particularly when something comes along in social media where their privacy is being given away. Those aspects feed that fear. You also have political angles, where certain parts of society, or the political sphere, wish to demonise, or attempt to wrest back control from the crowd by imposing control.

 

DB: This moves back to your original point on what a philosophy of software could be, and you talk about software as a sort of extension. That's an interesting liberal conception - a strong notion of the human actor, having agency, having a Kantian notion of autonomy. It's interesting to consider the effect of software on the subject - the self-contained person. I'm thinking of how software fragments our cognitive abilities, and plays with the notion of what it is to be human. Google has a rather scary idea of this, when they talk about "augmented humanity". This is the notion of Google holding your memories for you - you don't need to remember any more, as Google will do it for you. They will know where you are at this very second, and second-guess what it is that you want. Part of the work that I am doing now is on the notion of "algorithmic humanity" - what happens to us, to that "I", when the I isn't centred any more? This is where software is associated with ideas of post-modernity, the breakdown of grand narratives, and notions of the individual. The scariness that is being touched on here is really about what humanity is, today. We no longer have to perform calculations in our head; we no longer need to know the capital of Iceland. It raises questions as to what we are.

 

AP: This reminds me of Cartesian Dualism: a more casual interaction between the real and the immaterial.

 

DB: Foucault argues that the notion of man is historical, stretching from the 1700s until the 1960s. The idea of man as having a certain kind of education, of knowledge, of culture... in many ways, these are breaking down. We have this world of augmented humanity. Because computation costs money, in a similar way to education costing money, those who can pay will be more knowing and more able to predict the future, and it will come down to wealth.

 

AP: From an educational background, are we approaching things in the right way? Systems are becoming augmentative. I would be personally concerned if we didn't teach individuals any history, or the background or context in which they live at all; instead, allowing them to rely on search and data mining. Are we spending enough time helping individuals and children to analyse the information that is around them? How should the modern world help individuals to grow up, to understand what's around them?

 

DB: This is crucially important to a philosophy of software. As this world emerges, and becomes more and more a part of our lives... what are we teaching people? The classical distinctions between the disciplines do not make sense any more. We can think of the digital, of software, as an acid which dissolves all of the disciplines. Franco Moretti asks: what is literary studies? It's this crazy notion where you need to read eight books from the 19th century, and suddenly you are an expert on 19th-century literature. But thousands of books were published in the 19th century. With previous technologies it was firstly impossible to have access to all of them, and secondly impossible to read them. With these technologies and data mining, it is possible to have at least some notion of what was going on across all of these books. That allows us to do very different things: large-scale analysis. Students are not being taught these skills - "there is the canon of books that have been read and continue to be read". Students are not taught critical post-modern readings; they are being taught the books to "attack". This happens in history as well.

 

AP: From a historian's perspective, while there is still data out there, you still hope that people can take a critical view rather than just allowing a piece of software to do the spidering and analysis for them, and that they can come to their own conclusions. There are still multiple angles from which to view a field of study. Otherwise, are we giving up our critical abilities as individuals to the machines?

 

DB: A critical reader is a historical concept in itself. To be a critical reader is to sit down and undertake close readings of texts. To be a critical reader in a digital age... what would be critical software? What would it be to do a critical coding, in order to open up a field of studies, to a different way of reading? We need different analyses, different levels of analyses. It's fascinating that these digital approaches are finding it difficult to break through. There is a lot of stasis and conservatism towards these digital technologies in the education system. However, the issue of critical reading is crucially important, and we need to find out what that means.

 

AP: What also strikes me in this area is the concept of the Filter Bubble, and allowing the streams which you follow and the friends which you have to influence your thinking. Is Google starting to direct the way you think, unless you deliberately follow some paths or individuals who challenge you every day? I follow a couple of people on Twitter deliberately, and a couple of times I have come close to unfollowing them; not necessarily because they are offensive, but because they can be so shocking and so surprising to my way of thinking. I deliberately keep them in there, as I want to know the different angles on a particular topic.

 

DB: In my book, I talk about the real-time stream as a crucial medium in our societies, and something that we need to get to grips with and try to understand. It raises so many questions - that streaming form of information, knowledge, and data. What does it mean to step into different streams? How are you a critical reader of a stream? Technology is important in offering distance from the stream; it can be so dense and fast-flowing that it can be difficult to comprehend. Streams go so quickly that you sometimes have to let them whiz past and pay only partial attention to them.

What is a critical reader, and what does it mean to live in a world of real-time streams? By nature, they are not narrative forms. How do people connect the streams up, and make sense of the world when it is moving so quickly? The filter bubble itself raises questions, as it is a programming exercise. You have to select the streams to follow. You have to connect them together, merge them, and follow them.
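To make that "programming exercise" concrete, here is a minimal Python sketch - the stream contents, handles, and timestamps are invented purely for illustration - of merging two timestamped streams into a single chronological feed, which is roughly the first step that any stream reader performs before filtering or curating:

```python
import heapq
from datetime import datetime

# Two illustrative "streams": (timestamp, author, text) tuples, already in
# chronological order. In practice these would arrive from feeds or APIs.
news = [
    (datetime(2011, 8, 8, 21, 4), "newsdesk", "Disturbances reported in Croydon."),
    (datetime(2011, 8, 8, 22, 15), "newsdesk", "Police appeal for calm."),
]
friends = [
    (datetime(2011, 8, 8, 21, 30), "andy", "Reading about enchanted objects."),
    (datetime(2011, 8, 8, 22, 40), "david", "Real-time streams are non-narrative forms."),
]

# Merge the sorted streams into one chronological feed - the selecting,
# connecting and merging of streams described above, reduced to its core.
for timestamp, author, text in heapq.merge(news, friends, key=lambda item: item[0]):
    print(f"{timestamp:%H:%M} @{author}: {text}")
```

Running it simply interleaves the two sources by time: the single merged feed that then has to be read, filtered, or stepped out of.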

I recently experimented with Twournal; they take your Twitter stream and turn it into a book.

 

AP: Twournal is very interesting.

 

DB: The way of reading changes, because you have a much more distanced reading, but you're also able to undertake a close reading. That points to the importance of the medium. As a society, we have concentrated on the printed form of media, which tends to be slow-moving. We are moving to a world where knowledge moves incredibly quickly, and the stream forms are so thin; it's fascinating. If you're getting a news stream of 140 characters, then you have a very different civic education in how politics works than you get from an in-depth article in a newspaper or magazine, or even a blog post - which not so long ago was itself considered a shallow form.

Twitter streams are just like echo chambers, magnifying people's prejudices. It's such an understudied form. These questions are hugely important. Software, and streams, are processual forms. We have trouble thinking in terms of that constant motion; we want to think in terms of "things".

 

AP: Streams are essentially data, where software provides the analytical side. Maybe that's Google doing it for us, or it may be a crowdsourced set of opinions where we have gathered that data into Twitter through the lens of the friends whom we follow... or maybe our own minds are analysing the data, the stream of information.

 

DB: But... what is data?

 

AP: This is the question. A lot of this folds in on itself: you have the data, the software, the hardware, interaction, person, mind... in some ways, it's easier to divide out the topics but they obviously all interact.

 

DB: The problem with the term "data" is that it has this notion of something that has things done to it. To some extent this is correct, but data cannot exist on its own. You couldn't have data without software. One of the least discussed elements of software, is software as a container. We don;'t think of software as something that you pour stuff into. Databases are a good example of that.

The other side of data is that data can be software. A couple of weeks ago, there was a competition in interactive fiction to write an Inform 7 book that would fit into 140 characters. People wrote books that would essentially execute from Twitter. The data became the code that became the book. This notion of data is convenient for software engineers, to analytically distinguish between things, but we do have to bring them together to understand what's going on here.
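The same collapse of data into code can be sketched in Python rather than Inform 7 - a toy stand-in, not one of the competition entries described above - as a tweet-length string that is simultaneously a piece of data and an executable program:

```python
# To Twitter this is simply a short piece of data - a string well under
# 140 characters...
tweet = "print('You wake in a maze. Exits lead north and east.')"

assert len(tweet) <= 140  # it would fit inside a single tweet

# ...but handed to the interpreter, the data becomes the code becomes the book.
exec(tweet)  # prints: You wake in a maze. Exits lead north and east.
```

The string is data until something chooses to execute it; at that point the distinction that software engineers find so convenient dissolves.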

It's like content and form: which is more important? Essentially it's a combination of the two. When we look at books, it has been so common not to think about the medium, the book itself, and to think only of the content. It's common in software studies for us to assume "just the software" or "just the data" in the same way. We need to bring them together.

 

How should we deal with real-time streams and filter bubbles? Are we seeing a resurgence of the human qualities within curation?

 

AP: Eli Pariser recently gave an interview with BBC Click, which asked some listeners to try to validate some of the points he was making around Google curating content. People from around the world undertook the same search, across locations and devices. They found that there wasn't a significant amount of difference. The differences come in with left-right political terms rather than general terminology - the search term used was "platform", chosen to be fairly neutral. I'm not sure how far engines such as Google are responsible for filter bubbles.

My own personal view is that the first element is the individual, making their own bubble by making their own friendships. I have friendships and people that I follow, trying to pay attention to divergent voices to gain a broader awareness. The second element comes back to education: can we help children listen - not necessarily to accept - to other opinions and to learn critical thinking? Having come through an educational system where I absolutely had to distinguish different historians' views of different events in our past, with every event surrounded by context and different viewpoints... I wonder whether we are missing or injecting that thinking into our educational system. Far from thinking that we should be using whatever social network comes along and making friends on all of them, are we helping people to listen to divergent voices, opinions, and thought processes?

DB: Filter bubbles were discussed 10 years ago with the work of Cass Sunstein. A similar argument was made about Usenet and mailing lists. This fear of fragmentation in a filter bubble is not new. We have a constant return of a fear of societal breakup into small, fragmentary units. In some sense, this is a fear of the breakdown of the common: what do we share? It comes back to education: people once read the same books by the same historians. They were taught to be critical of them, but there would be a shared notion of a shared history. We do have this very strong Enlightenment notion that commonality of thought is important to our society.

When we come to real-time streams, the form par excellence of the software age, we are looking at a fragmented, broken, non-narrative form. We are narrative beings: we need stories, things that connect it all back together. Companies are jumping in here, such as Wolfram Research, doing some very interesting work with computable documents in narrative. Software is fragmentary, but it also allows the possibility to bring things back together again. Data journalism allows the creation of narratives that people can walk through in order to understand the world. Storify attempts to narrativise the Twitter stream, and is increasingly used by journalists to make sense of this deluge of information.

We should also think about the software curator. You have a windowing effect, whereby people cast their interpretation onto the world and provide a "layer of sense" on top of it. Through that, you get a notion of what's happening. David Gelernter wrote about Mirror Worlds, arguing that you need trellises in order to fit together this disparate data. Curation is important, but as an assemblage of human and non-human activities. Software and education are again important: how is importance ranked? Who should you follow? Who should you be suspicious of? Who is a real person in a stream?

 

AP: Would Wikipedia be a good example of human curation, or would you say that there is an element of software curation? Can human curation - given the culture of Wikipedia, for example - offer the ability to be truly crowdsourced and offer a truly open, free store of information?

 

DB: Wikipedia demonstrates our desperate need for information, but it is a desperate failure. It reads like a stream of facts. [Entries] are often badly written; they often lack a narrative dimension. Wikipedia is increasingly automating its processes because of the volume of data, so who is doing the curation is becoming an issue. Bots trawl through, finding new entries, marking them as stubs... the things that you would expect humans to do. None of this could take place without the software that enables people to post comments, to roll back, to undertake versioning. Wikipedia is an attempt at human/software curation - it's algorithmic humanity. It's getting close to impossible for a human to undertake this process.

What's also interesting here is the emergence of different forms of media as streams. Apple are about to engage in photo streams; Instagram is a photostream, as is Glimmer.

 

AP: ... and Photovine.

 

DB: We're trying to find ways to curate this enormous quantity of data. YouTube is becoming stream-like; iTunes can generate playlists through Genius. As things move to the cloud, what do streams mean for media? The record industry went through the floor because it stayed wedded to things that it thought people wanted to buy; streaming, through Spotify, is a threat. How people make sense of, and negotiate, streams will be extremely important, and software will be part of that process.

 

AP: The cloud brings this conversation full circle: control being taken away from the human. We're already at a point where data is being shared in the notion of a cloud, and less is being stored locally. The idea that all of this is going up into a transnational cloud, beyond the control of individual governments... we have given up our data, and some of our cognitive abilities. That idea comes back around: the fear of the autonomous machine.

 

DB: But, what is human? We're not "giving up", because we never had a photo library in our heads, and the number of photos that we would take would be limited by the physicality of the photos.

AP: Absolutely, but the photos may well have been locally in your own album, in your own home, in your own machine and increasingly, they are now beyond that.

 

DB: The point is that when they are streamed in this way, they are instantly available, and that changes the notion of memory. Bernard Stiegler talks about tertiary memory - memory that is in the machines. It's not about "giving it up" - the "you" is the sum total of the machine memory and your own human memory. This is something that we are going to have to negotiate. As software pulls us apart as Cartesian subjects, where do the boundaries lie? Where are "you"? Where do "you" reside for social and legal reasons - when you may have material stored in the US which is illegal in the UK? There are so many difficult questions to be negotiated, particularly as law itself tends to legitimise and guarantee the notion of the liberal self - that you are self-contained and autonomous, and that you use technology rather than being shaped by it.

 

 

Does the changing shape of hardware – such as the Kinect and Wiimote - alter our perception of software?

 

AP: That comes back to my earlier point that software is becoming increasingly invisible to the consumer, the human. Certainly, there is an element of the hardware and software having a close relationship; in order for that hardware to work, it needs "stuff" inside it: invisible code that does the work for you. I have long adopted as many of these technologies as possible in order to understand where they are taking us. I have been playing with the Kinect for a while. The barrier to adoption of different technologies has fallen, and the visibility of the software has fallen.

 

DB: What is happening with hardware is that software has increasingly been given more perceptive apparatus - eyes, ears - which makes software itself more functional. This is a transitional time; add-ons such as the Kinect will increasingly be built into devices, in the same way that Apple increasingly adds interesting technologies such as GPS, compasses, and accelerometers to its phones. Who would have thought that a compass would be so useful in a mobile phone? And yet it opens up a whole world, because it gives the phone the ability to know which way it is facing, which, when built on with software, enables much more sophisticated software products. When you look at an iPad, you don't tend to add much to it; it's a blank slate. It's self-contained, with the software doing all the work.

I remember my first car, and being amazed at how simplistic it all was: big, clunky pieces. Now, I wouldn't dare to take anything apart in my most recent car. It's what Simondon calls "concrete": the separate technologies that made an object up have increasingly been melded together to create a pure object, harder and harder to pull apart. The software provides the glue.

The shape of hardware matters now, because it's becoming less messy. It's passing. We're moving to enchanted objects, part of the ecology, part of how we experience the world, and we won't think in this software/hardware way.

 

AP: I certainly perceive that with my own family. When I had an 8-bit computer, I was plugging cables into the CRT, loading analogue programs from a cassette player. Now, they have these magic sheets of glass which they can touch and will do amazing things. They become our way of interacting.

 

DB: Underlying that is the games console. There's a lovely video on YouTube by Dennis Jerz, putting his 11-year-old son in front of adventure games. It's fascinating to watch someone engage with it. In 10 years' time, people will look at the Xbox and Kinect and think... how old-fashioned, that you would have to wait for your computer to boot up, or to load a game. We have to be careful about thinking that this will be the way in which things will be in the future. I very much doubt that.

 

DB: What's becoming increasingly important is how people now consider themselves as software. In the lifehacker movement, people treat themselves as a real-time stream of data - streaming their heart rate, blood sugar, and other levels straight onto their Twitter feed - and become self-monitoring through technology. They see themselves as data, and upgrade themselves by hacking their bodies in particular ways. Humanity 2.0 by Steve Fuller explores what it means when people incorporate software as a model of living. That's an interesting question for us to think about.

 

AP: It is, and it's something that I work on from a telemetry perspective... the software, code, and sensors that enable an individual to do exactly that, and the serendipitous opportunities either to "upgrade ourselves", or to join that data with other streams in order to provide new opportunities.
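A minimal sketch of that telemetry loop in Python, assuming a simulated sensor and a placeholder endpoint (example.com) standing in for whichever stream or service the readings would actually be posted to:

```python
import json
import random
import time
import urllib.request

# Hypothetical endpoint - a real deployment would post to an actual telemetry
# or social-stream API, with proper authentication.
ENDPOINT = "https://example.com/streams/heart-rate"

def read_heart_rate() -> int:
    """Stand-in for a real sensor read; here we simply simulate a value."""
    return random.randint(58, 72)

def post_reading() -> None:
    """Publish one self-monitoring reading into the stream."""
    reading = {
        "metric": "heart_rate_bpm",
        "value": read_heart_rate(),
        "timestamp": time.time(),
    }
    request = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # fire the reading into the stream

if __name__ == "__main__":
    post_reading()
```

In practice the posting would be authenticated and scheduled, but the shape is the same: a bodily reading treated as data and pushed into a stream alongside everything else.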

 

David M. Berry is a Senior Lecturer in Digital Media in the Department of Political and Cultural Studies at Swansea University. "The Philosophy of Software: Code and Mediation in the Digital Age" is out now. He is @berrydm on Twitter.

Andy Piper is a Consulting IT Specialist and WebSphere Community Lead at IBM. His website and blog is The Lost Outpost, and he is @andypiper on Twitter.

 
