Former Google design ethicist Tristan Harris on Big Tech’s antitrust hearing
That’s why I spent the last three years as a Design Ethicist at Google caring about how to design things in a way that defends a billion people’s minds from getting hijacked. When using technology, we often focus optimistically on all the things it does for us. But I want to show you where it might do the opposite. Where does technology exploit our minds’ weaknesses?
Tristan Harris was Google’s “Design Ethicist” where he studied how design choices directly affect people’s behavior in conscious and unconscious ways. He’s also a practicing magician! As he says, “Magicians start by looking for blind spots, edges, vulnerabilities and limits of people’s perception, so they can influence what people do without them even realizing it.” Over at Medium, Harris wrote a fascinating post about persuasive technology and how design can “exploit our minds’ weaknesses.”
“This is an arms race for attention,” says James Steyer, CEO and founder of Common Sense Media, a nonprofit that provides media reviews for schools, about the way big technology companies like Apple and Facebook have designed their products.
The doctors, researchers, policymakers and technologists who took the stage didn’t mince words in conveying that, if left unregulated, technology could pose an existential threat to society—and to kids in particular.
“I see this as game over unless we change course,” says Tristan Harris, a former ethicist at Google who founded the Center for Humane Technology.
Former Google design ethicist Tristan Harris explains how tech companies make their products hard to resist. He now advocates for a Hippocratic Oath for tech designers. So here’s the thing. When you think about Internet addiction, tech companies actually want people, not just teenagers, to spend time on their apps and their devices. And while that doesn’t always lead to addiction, it does become a habit, sometimes an unhealthy one for many people.
“They give people the illusion of free choice while architecting the menu so that they win, no matter what you choose. I can’t emphasize enough how deep this insight is…. When people are given a menu of choices, they rarely ask:
‘what’s not on the menu?’
‘why am I being given these options and not others?’
‘do I know the menu provider’s goals?’
‘is this menu empowering for my original need, or are the choices actually a distraction?’”
To find out, we — along with our co-author, danah boyd (who prefers lowercase letters in her name) — studied those doing the work of ethics inside companies, whom we call “ethics owners,” to learn what they see as their task at hand. “Owner” is common parlance inside flat corporate structures, meaning someone responsible for coordinating a domain of work across the different units of an organization. Our interviews with this new class of tech industry professionals show that their work is, tentatively and haltingly, becoming more concrete through both an attention to process and a concern with outcomes.
Is Big Tech’s embrace of AI ethics boards actually helping anyone?
Researchers from Google, Microsoft, Facebook, and top universities objected to the board’s inclusion of Kay Coles James, the president of the right-wing think tank The Heritage Foundation. They pointed out that James and her organization campaign against anti-discrimination laws protecting LGBTQ groups and sponsor climate change denial, arguing that this made her unfit to offer ethical advice to the world’s most powerful AI company. An open petition demanding James’ removal was launched (it currently has more than 1,700 signatures), and amid the backlash, one member of the newly formed board resigned.
Google has yet to say anything about all of this (it didn’t respond to multiple requests for comment from The Verge), but to many in the AI community, it’s a clear example of Big Tech’s inability to deal honestly and openly with the ethics of its work.