The AI Dilemma – Center for Humane Technology Talk

I continue to be excited about the opportunities that AI can bring to our lives. On a macro level, however, there is an arms race under way between the major tech giants: Microsoft, Google, Twitter, TikTok, and Facebook.

This is not just a race to “the bottom of the brain stem”; it is a race to intimacy, with each company vying to secure market dominance.

Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology and creators of the Emmy-winning Netflix documentary The Social Dilemma, discuss the implications of artificial intelligence (AI) in their recent talk, The AI Dilemma. They introduce AI as a paradigmatic technology, one that presents a new way of thinking that challenges existing beliefs and values.

They discuss the responsibility that comes with inventing a new technology: each new invention uncovers a new class of responsibilities. They argue that if a technology confers power, it will start a race, and if that race is uncoordinated, it will end in tragedy.

Harris and Raskin describe social media as humanity’s first contact moment with AI, one that resulted in information overload, addiction, shortened attention spans, polarisation, fake news, and the breakdown of democracy. They argue that the engagement economy, which maximises user engagement for profit, has already created an AI that has taken many aspects of society hostage.

They move on to discuss the second contact moment with large language models such as GPT-3, which offer benefits including increased efficiency, help with scientific challenges, and new ways of making money. However, they warn of the AI misalignment problem, AI bias, job displacement, and the need for transparency.

They argue that AI will become entangled in society, so it is important to understand the narratives we use to talk about it before it is too late. They call for the responsible release of new large language models and for collective action to prevent AI from becoming a dangerous weapon.

It’s a fascinating talk and well worth a watch if you have not already seen it.
My key takeaway…

There is still time to choose our future! Do we really want a world where a handful of people at five big companies onboard humanity onto these AI platforms before we have figured out the future we actually want?