Bricolage

Imperica's pick of interesting, long-form articles on contemporary culture from around the Internet.
Subscribe to our newsletter to get it all wrapped up, every Friday. 

Artificial intelligence is beginning to replace many of the workplace roles that men dominate. The parts of those jobs that will have staying power are those that rely more heavily on emotional intelligence — skills in which women typically excel.

Many researchers are reporting, and our research confirms, that artificial intelligence (AI) will reshape our economy — and the roles of workers and leaders along with it. Jobs that don’t disappear will see a significant shift as the tasks that are easily and inexpensively accomplished by robots become automated. The work that remains will very likely focus on relating. To adapt and prosper, the smart worker will invest in “human relating” skills — empathy, compassion, influence, and engagement. For simplicity, let’s call these emotional quotient (EQ) skills. These are skills in which women commonly excel.

Gender differences are a sensitive topic and we address them in this article with trepidation. There is a fine line between understanding commonalities and stereotyping, and the debate about nature versus nurture is robust. But whether you believe that men and women, on average, have different types of brains (as Simon Baron-Cohen, a British clinical psychologist and professor of developmental psychopathology at the University of Cambridge, has theorized) or that gender differences are a result of cultural norms and conditioning (as numerous other studies have explored), the real-world results are similar: Men and women, on average, excel in different dimensions and take on different roles in the workforce. By no means does that suggest that men and women are not equal — just different.

It is clear that men have quite an advantage in the working world — just check out the latest research by McKinsey & Co. on gender equality in the workplace. Men have greater representation in leadership roles and a greater presence in higher-paid industries; they hold nearly 80% of board seats and earn higher compensation on average, even for the same jobs.

We believe that AI has the ability to help level the playing field. It will do so, we think, by replacing many roles and functions where men typically dominate.

Read more (MIT Sloan)

Fred Turner is one of the world’s leading authorities on Silicon Valley. A professor at Stanford and a former journalist, he has written extensively on the politics and culture of tech. We sat down with him to discuss how Silicon Valley sees itself, and what it means when the tech industry says it wants to save the world.

Let’s start with the idea that technology is always a force for good. This strain of thought is pervasive in Silicon Valley. Where does it come from? What are its origins?

It owes its origins to 1960s communalism. A brief primer on the counterculture: there were actually two countercultures. One, the New Left, did politics to change politics. It was very much focused on institutions, and not really afraid of hierarchy.

The other—and this is where the tech world gets its mojo—is what I've called the New Communalists. Between 1966 and 1973, we had the largest wave of commune building in American history. These people were involved in turning away from politics, away from bureaucracy, and toward a world in which they could change their consciousness. They believed small-scale technologies would help them do that. They wanted to change the world by creating new tools for consciousness transformation.

This is the tradition that drives claims by companies like Google and Facebook that they are making the world a better place by connecting people. It's a kind of connectionist politics. Like the New Communalists, they are imagining a world that’s completely leveled, in which hierarchy has been dissolved. They’re imagining a world that’s fundamentally without politics.

It's worth pointing out that this tradition, at least in the communes, has a terrible legacy. The communes were, ironically, extraordinarily conservative.

When you take away bureaucracy and hierarchy and politics, you take away the ability to negotiate the distribution of resources on explicit terms. And you replace it with charisma, with cool, with shared but unspoken perceptions of power. You replace it with the cultural forces that guide our behavior in the absence of rules.

So suddenly you get these charismatic men running communes—and women in the back having babies and putting bleach in the water to keep people from getting sick. Many of the communes of the 1960s were among the most racially segregated, heteronormative, and authoritarian spaces I've ever looked at.

Read more (Logic)

My dinner-party party piece for many years was to say, “Well, actually, I invented Baileys. You know, Baileys Irish Cream. I did that back in 1973.”

If one of the unfortunate listening group is a woman – and this is based on actual past experience – she is likely to respond with something like this: “Oh-my-God. Baileys. My mother absolutely adores it. Did you hear that, Jocasta? This man invented Baileys. It’s unreal. I don’t believe it. He must be terribly rich. Baileys Cream. Wow!”

And it’s not as if these rather posh people really adore Baileys. Or even hold it in the same esteem as, say, an obscure Islay single malt or a fine white burgundy from Meursault. Not a bit of it. They might have respected it years ago but most people of legal drinking age regard Baileys as a bit naff. To my mind, they’d be very wrong.

On December 3rd, 2007, Diageo announced the sale of the billionth bottle of Baileys since it was first introduced in 1973. That’s a thousand million bottles. And they will have sold at least a further 250 million bottles in the decade since then, bringing the total up to something in the area of 1,250,000,000. If we assume that every bottle of Baileys delivered eight generous servings, that suggests that roughly 10 billion glasses of Baileys have been poured since it all began.
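
For the numerically curious, here is a quick back-of-the-envelope check of the figures quoted above; it simply multiplies out the article's own numbers and its assumption of eight servings per bottle, and is a sketch rather than anything from the original piece.

```python
# Back-of-the-envelope check of the Baileys figures quoted above.
# All inputs come from the excerpt; the serving count is the article's
# stated assumption, not a verified figure.
bottles_by_2007 = 1_000_000_000     # billionth bottle announced in December 2007
bottles_since_then = 250_000_000    # estimated further sales over the following decade
servings_per_bottle = 8             # "eight generous servings" per bottle

total_bottles = bottles_by_2007 + bottles_since_then
total_glasses = total_bottles * servings_per_bottle

print(f"{total_bottles:,} bottles -> about {total_glasses:,} glasses poured")
# Output: 1,250,000,000 bottles -> about 10,000,000,000 glasses poured
```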

The initial thought behind Baileys Irish Cream took about 30 seconds. In another 45 minutes the idea was formed. Baileys was like that for me. A decade of experience kicked in and delivered a great idea. It wasn’t as instant as it seemed. This is the story of its creation.

Read more (The Irish Times)

Since 1965, British artist Stephen Willats has self-published Control magazine, a seminal forum for artists’ writings on art practice and social organization. With over 150 contributors throughout its 50-year run, Control has drawn on research from cybernetics, advertising theory, and behavioral science to develop models for how artworks operate in dialogue with an audience and society at large. Last year Willats published the 20th issue of Control, in which he continues to pose incisive questions about the ethics of information systems and networked artistic practice that feel more crucial than ever.

Cybernetics was famously defined by Norbert Wiener as the scientific study of “control and communication in the animal and the machine.” The models of feedback that cyberneticians developed were transdisciplinary from the outset, bridging the worlds of computation and engineering with those of design, art, and counterculture.

According to Anthony Hudek, “It is … Control’s function as a self-determining information network, instead of its content, that makes it truly cybernetic”: while being about networks, the magazine also represents a network in itself. Willats’ choice of title, Control, signals this departure from traditional models of editorial authority, seeking instead to develop a conceptual practice determined by the networked relationships of coordinating agents. Artists’ publishing served as a key means of actualizing these ideas. The magazine has always been self-published, self-funded, and free of advertising, while also attaining a broadly international reach.

The interview that follows focuses specifically on Control’s early years, notable for their iconic cover illustrations by designer Dean Bradley. Released between 1965 and 1970, Control’s first issues mark a period when cybernetic ideas resonated broadly within the visual arts, from Jasia Reichardt’s 1968 Cybernetic Serendipity exhibition at the ICA London, to Stewart Brand’s Whole Earth Catalog in California. Willats’ own practice deployed the frameworks that he and his collaborators devised across Control’s pages in a variety of ways, from computer simulations to social and educational projects such as the Centre for Behavioral Art (1972-73). Control is not only a key node within Willats’ body of work; it offers a fascinating toolkit for reconsidering the present status of social hierarchy and networked interaction.

Read more (Avant)

We are surrounded by hysteria about the future of artificial intelligence and robotics—hysteria about how powerful they will become, how quickly, and what they will do to jobs.

I recently saw a story in MarketWatch that said robots will take half of today’s jobs in 10 to 20 years. It even had a graphic to prove the numbers.

The claims are ludicrous. (I try to maintain professional language, but sometimes …) For instance, the story appears to say that we will go from one million grounds and maintenance workers in the U.S. to only 50,000 in 10 to 20 years, because robots will take over those jobs. How many robots are currently operational in those jobs? Zero. How many realistic demonstrations have there been of robots working in this arena? Zero. Similar stories apply to all the other categories where it is suggested that we will see the end of more than 90 percent of jobs that currently require physical presence at some particular site.

Mistaken predictions lead to fears of things that are not going to happen, whether it’s the wide-scale destruction of jobs, the Singularity, or the advent of AI that has values different from ours and might try to destroy us. We need to push back on these mistakes. But why are people making them? I see seven common reasons.

Read more (MIT Technology Review)

Last year I was working on an article about the tech industry when I decided to interview a software engineer who writes for Quillette under the pseudonym “Gideon Scopes”. Gideon had mentioned to me in passing that he had Asperger’s Syndrome (a mild variant of autism spectrum disorder) and I wanted to find out more about the industry from the point of view of someone who is not neurotypical.

I first asked him when it was that he knew he wanted to work in technology. He told me that he first knew it when he was five. His family got their first home computer and he was transfixed. Later, he would come across a brief introduction to the BASIC programming language in a book and proceed to teach himself his first programming language. He was only seven.

As a child he taught himself programming out of books, mostly alone at home. He told me that his family were not particularly supportive of his hobby. His mother was not happy to see him focus so intently on one interest and viewed his study of programming “as the equivalent of a kid spending too much time watching TV.”

Growing up in suburban New York, he told me that a compiler for a programming language would cost at least $100, and programming books generally cost $40-60 each. His only source of income was a $1 per week allowance, so it would take him a year or two to save for just one item. This was despite the fact that his parents were in a high income bracket, and could have easily provided resources to help him learn. He learned anyway.

Despite his cognitive ability, however, Gideon underperformed early on in his schooling. He thinks it may have been because he experienced the school environment as overly rigid and inflexible, and the work was just not challenging enough to engage him. It wasn’t until he was able to take accelerated math and science classes that his grades reflected his ability.

Fast forward several years, and today Gideon is a successful senior software engineer at a prestigious technology company in New York. He loves his job and he loves where he works. He is grateful for the fact that his company values his work, not how he promotes himself or how he dresses. He feels that the technology industry rewards talent and hard work, and that it is one of the best places for “Aspies” to be. He tells me that the only drawbacks are the occasional bar event (he doesn’t like the noise) and a weird, somewhat rigid political culture.

Read more (Quillette)

In early 2006, I got a call from Chris Kelly, then the chief privacy officer at Facebook, asking if I would be willing to meet with his boss, Mark Zuckerberg. I had been a technology investor for more than two decades, but the meeting was unlike any I had ever had. Mark was only twenty-two. He was facing a difficult decision, Chris said, and wanted advice from an experienced person with no stake in the outcome.

When we met, I began by letting Mark know the perspective I was coming from. Soon, I predicted, he would get a billion-dollar offer to buy Facebook from either Microsoft or Yahoo, and everyone, from the company’s board to the executive staff to Mark’s parents, would advise him to take it. I told Mark that he should turn down any acquisition offer. He had an opportunity to create a uniquely great company if he remained true to his vision. At two years old, Facebook was still years away from its first dollar of profit. It was still mostly limited to students and lacked most of the features we take for granted today. But I was convinced that Mark had created a game-changing platform that would eventually be bigger than Google was at the time. Facebook wasn’t the first social network, but it was the first to combine true identity with scalable technology. I told Mark the market was much bigger than just young people; the real value would come when busy adults, parents and grandparents, joined the network and used it to keep in touch with people they didn’t get to see often.

My little speech only took a few minutes. What ensued was the most painful silence of my professional career. It felt like an hour. Finally, Mark revealed why he had asked to meet with me: Yahoo had made that billion-dollar offer, and everyone was telling him to take it.

It only took a few minutes to help him figure out how to get out of the deal. So began a three-year mentoring relationship. In 2007, Mark offered me a choice between investing and joining the board of Facebook. As a professional investor, I chose the former. We spoke often about a range of issues, culminating in my suggestion that he hire Sheryl Sandberg as chief operating officer, and then my help in recruiting her. (Sheryl had introduced me to Bono in 2000; a few years later, he and I formed Elevation Partners, a private equity firm.) My role as a mentor ended prior to the Facebook IPO, when board members like Marc Andreessen and Peter Thiel took on that role.

Read More (Washington Monthly)

The Bauhaus movement in Germany, roughly 1919-1933, marked a major turning point for design and its role in society. It played a powerful and influential role in the development of artistic style. But today, for many designers, it is more of a historical curiosity than a role model. Why? What has changed?

The Bauhaus grew out of crafts and the fine arts. Its focus was style and form. Although it had a huge amount of influence, today that influence is muted by the heavy artistic emphasis. There was little emphasis upon the people for whom the objects were being designed, and no discussion of practicality or everyday usage. Even in architecture, the emphasis was on form, not on the people who had to suffer living and working in the clean, sterile environment that the architects championed.

The Bauhaus movement provides an interesting paradox. Although it had a great cultural impact upon design as art, it failed to produce any single object that changed people’s lives in any fundamental way. Why didn't the Bauhaus rethink the nature of things, of the way that products impact people’s lives and activities? Today, designers relish the opportunity to invent entirely new ways of working, playing, and living. Instead, at the Bauhaus, the emphasis was on simplicity, which is fine as long as one is designing simple things, such as kitchen tools, tableware, and jewelry. But the world is complex, and so too must be the things that enable us to work within it (Norman, 2010). Complexity is a fact of life. Simplicity, on the other hand, is in the mind – it is the designer’s task to make the complex understandable and usable. And when a complex thing is easy to understand, we call it “simple.”

Read more (Don Norman, LinkedIn)

“What should a city optimize for?” Even in the age of peak Silicon Valley, that’s a hard question to take seriously. (Hecklers on Twitter had a few ideas, like “fish tacos” and “pez dispensers.”) Look past the sarcasm, though, and you’ll find an ideology on the rise. The question was posed last summer by Y Combinator — the formidable tech accelerator that has hatched a thousand startups, from AirBnB and Dropbox to robotic greenhouses and wine-by-the-glass delivery — as the entrepreneurs announced a new research agenda: building cities from scratch. Wired’s verdict: “Not Actually Crazy.”

Which is not to say wise. For every reasonable question Y Combinator asked — “How can cities help more of their residents be happy and reach their potential?” — there was a preposterous one: “How should we measure the effectiveness of a city (what are its KPIs)?” That’s Key Performance Indicators, for those not steeped in business intelligence jargon. There was hardly any mention of the urban designers, planners, and scholars who have been asking the big questions for centuries: How do cities function, and how can they function better?

Of course, it’s possible that no city will be harmed in the making of this research. Half a year later, the public output of the New Cities project consists of two blog posts, one announcing the program and the other reporting the first hire. Still, the rhetoric deserves close attention, because, frankly, in this new political age, all rhetoric demands scrutiny. At the highest levels of government, we see evidence and quantitative data manipulated or manufactured to justify reckless orders, disrupting not only “politics as usual,” but also fundamental democratic principles. Much of the work in urban tech has the potential to play right into this new mode of governance.

Read more (Places Journal)

As someone who grew up on the internet, I credit it as one of the most important influences on who I am today. I had a computer with internet access in my bedroom from the age of 13. It gave me access to a lot of things which were totally inappropriate for a young teenager, but it was OK. The culture, politics, and interpersonal relationships which I consider to be central to my identity were shaped by the internet, in ways that I have always considered to be beneficial to me personally. I have always been a critical proponent of the internet and everything it has brought, and broadly considered it to be emancipatory and beneficial. I state this at the outset because thinking through the implications of the problem I am going to describe troubles my own assumptions and prejudices in significant ways.

One of the so-far hypothetical questions I ask myself frequently is how I would feel about my own children having the same kind of access to the internet today. And I find the question increasingly difficult to answer. I understand that this is a natural evolution of attitudes which happens with age, and at some point this question might be a lot less hypothetical. I don’t want to be a hypocrite about it. I would want my kids to have the same opportunities to explore and grow and express themselves as I did. I would like them to have that choice. And this belief broadens into attitudes about the role of the internet in public life as a whole.

I’ve also been aware for some time of the increasingly symbiotic relationship between younger children and YouTube. I see kids engrossed in screens all the time, in pushchairs and in restaurants, and there’s always a bit of a Luddite twinge there, but I am not a parent, and I’m not making parental judgments for or on anyone else. I’ve seen family members and friends’ children plugged into Peppa Pig and nursery rhyme videos, and it makes them happy and gives everyone a break, so OK.

But I don’t even have kids and right now I just want to burn the whole thing down.

Someone or something or some combination of people and things is using YouTube to systematically frighten, traumatise, and abuse children, automatically and at scale, and it forces me to question my own beliefs about the internet, at every level. Much of what I am going to describe next has been covered elsewhere, although none of the mainstream coverage I’ve seen has really grasped the implications of what seems to be occurring.

Read more (James Bridle, Medium)

At the 9th Berlin Biennale, artists Simon Denny and Linda Kantchev presented Blockchain Visionaries (2016), an exploration and celebration of the blockchain phenomenon. Denny, a self-professed enthusiast, described the blockchain as "a great model for dreaming dreams and telling a diverse and divergent set of new (and not so new) stories about how the world might organize in the future". Similarly, in his New York show Blockchain Future States (2016) Denny set out to investigate ‘three financial companies at the forefront of Bitcoin’: Ethereum, 21 Inc. and Digital Asset. In the press release his gallery stated: "At a moment when public debate spotlights a global governance system that seems to ignore the needs of many of its participants, starkly contrasting visions for alternative political systems are emerging. What would a world look like where the collusion of an elite few would be rendered technically impossible? Can a truly inclusive global future exist?"

While these statements express a political vision familiar from any article on cryptocurrency, the bland inferences about a tech fix for ‘elite’ power read as a bromide. On closer inspection, however, some of these assertions have a lineage that is far from emancipatory.

The art world is ardently advocating for Bitcoin and other blockchain technologies. For instance, it has recently been suggested that the blockchain might ensure a system by which artworks are provided with trustable provenance (‘a spreadsheet in the sky’); or used to enforce contractual obligations; or to establish a ledger so that artists are paid any royalties due. There is even a scheme to encourage small investors to acquire tiny portions of famous masterpieces – a form of fractional ownership that is clearly derived from the Bitcoin paradigm. Behind the digital dreaming much of this ‘utopianism’ appears as an effort to shore up value in the art market, which has been sagging ever since the 2008 crisis. None of this is particularly surprising given that art has long been a form of speculative investment, but this indicates how Bitcoin and blockchain boosterism regularly disguise baser imperatives (whether the boosters are themselves aware of it).

Read more (Mute)

Digital marketing has unleashed an obsession with efficiency and short-termism, one that's trading long-term brand-building for short-term ROI. We've put the golden goose in a battery farm of scientific efficiency, and it's killing the brand, business growth and profit.

Companies such as Procter & Gamble, Coca-Cola and Motorola have recently raised the issue. This past summer, the world's largest advertiser, P&G, announced it had slashed digital budgets by $140 million, and yet, sales still went up. In July, Motorola CMO Jan Huckfeldt went on the record saying, "If you want to revive a brand and you really want to build a brand quickly, if you bank on social and digital, it's not going to work."

Read more (Ad Age)
