The primary fallacies of “the AI dilemma”

Mark Vletter
17 May 2023 · 6 min read

Social media comes with a host of negative effects: think addiction, disinformation and polarisation. There is also a debate about where to draw the line between censorship and freedom of speech. Tristan Harris, among others, addressed these topics in the Emmy-winning Netflix documentary The Social Dilemma.

Fundamental thinking errors

Along with Aza Raskin, Harris recently gave a talk on “the AI dilemma” and I highly recommend watching it. They highlight many of the potential problems AI can cause. But watching the presentation, I felt friction. I think Harris and Raskin make some fundamental thinking errors. In this article, I explain some of the key differences I see between AI and social media and talk about why the future could look completely different from what we currently think.

Understanding drivers and business models

To understand why AI is currently fundamentally different from social media, it is important to look at the primary drivers and business models of AI and social media companies.

Advertisements and the indirect business model

The primary goal of companies in the current system is simple: they want to create shareholder value. The business model of both social media and search is ads. A user may use a search tool for free, but gets advertising in return. This means that Google’s real customers are the companies that pay for the advertising. This is also known as an indirect business model.

Data collection is the driver for search companies

The indirect business model has a major drawback for search companies. The idea is that the more a search engine knows about its users, the better it can tailor its ads to them. This pushed such companies to collect ever more data and to work with targeted ads.

Data collection as the primary driver of social media companies promotes fake news, polarization and addictive platforms

For social media companies, the indirect business model is an even bigger negative driver. The time people spend on your platform is the main driver, because more time means more ads and therefore more revenue. The quality of the content on the platform – true or false, good or bad – becomes irrelevant. All that matters is that people stay on the platform. This results in more data collection, fake news, polarisation and addictive platforms. Because again, the advertiser is the real customer, not the user. Access to those users’ data is the value the platforms create for advertisers.

The advertiser, not the user, is the real customer of social media companies.

To be successful, social media platforms need the following:

  • More users
  • More time spent on the platform
  • More data from users
  • More content for users

The business model of AI companies

Now that you know what the business model of search companies is, it’s a good idea to look at the model that AI companies adopt and the customer value they create with it.

The direct business model

First, I’ll mention Midjourney. Midjourney is a generative artificial intelligence program created by the company of the same name. The aim of their software tool is to generate the best images for the user. At a basic level, there is a freemium model where you can use the service for free. If you want to use the tool more often, you pay a monthly fee. This is also known as a direct business model: users pay directly for the service they use.

ChatGPT also uses such a model. You can use version 3.5 for free, but when the service is busy, paying customers are given priority. And if you want to use the superior variant – ChatGPT 4 – you pay for that too.

Now, this is not the full story. To zoom in a bit further, I will explain what AI companies need to be successful.

What AI companies need to build a good product

To improve AI models, a few things are needed:

  • Datasets: a collection of structured data used to train, validate and test AI models.
  • Feedback on output: to improve, AI models need feedback from users on the quality of the output. This is also known as Reinforcement Learning from Human Feedback (RLHF).

Currently, users provide input to the model, adding to the dataset. The input and feedback provided by users increase the quality of the product and the quality of its output.
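To make that feedback loop concrete, here is a minimal sketch in Python. The class and field names are hypothetical, not any real vendor's API: it simply shows how user ratings on model outputs could be collected into a preference dataset that is later used to improve the model.

```python
from dataclasses import dataclass, field

@dataclass
class FeedbackRecord:
    prompt: str
    output: str
    rating: int  # +1 = thumbs up, -1 = thumbs down

@dataclass
class PreferenceDataset:
    records: list = field(default_factory=list)

    def add(self, prompt: str, output: str, rating: int) -> None:
        # Every rated interaction grows the dataset the company trains on
        self.records.append(FeedbackRecord(prompt, output, rating))

    def preferred(self) -> list:
        # Outputs users liked: candidates for further fine-tuning
        return [r for r in self.records if r.rating > 0]

ds = PreferenceDataset()
ds.add("What is RLHF?", "Reinforcement Learning from Human Feedback...", +1)
ds.add("What is RLHF?", "A kind of database.", -1)
print(len(ds.preferred()))  # 1
```

The point of the sketch is the incentive it encodes: every rated answer makes the product better for the user who rated it, which is exactly the direct link between user benefit and business value discussed below.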

There is a direct link between customer value – what AI companies require to be successful – and the business model. As long as your data is safe and private and you retain ownership of it, you are willing to give the AI company more of your data, because there is a direct benefit to you as a user.

The link between customer and business value

This direct link between customer value, business value and business model is the key to building a healthy business. This is also where the current AI business model and the indirect business model differ.

Companies like Microsoft (through their search engine Bing) and Google are experimenting with advertising in AI tools. This makes sense from a business point of view: Google’s primary business model is ads, which generate around 80% of the company’s revenue. But it would be better for society if the advertising-driven indirect business model disappeared. Businesses stand to benefit most from AI tools, and OpenAI – the company behind ChatGPT – is aware of this. They will soon launch a business offering for ChatGPT. And, as an entrepreneur, I don’t mind paying for such a service at all.

Microsoft is also integrating ChatGPT into its business offering so that the customer pays for the service directly. This in turn strengthens the direct business model.

Social media expertise does not translate well to AI

Back to Harris and Raskin’s presentation. They are doing something that many experts do, including myself: if you know a lot about one phenomenon, you tend to assume you can predict exactly how another phenomenon will develop. The speakers seem to assume that we all need to learn from how social media has developed and affected society, so that we can intervene earlier if things threaten to go wrong with AI.

But no matter how “woke” we are now, we cannot predict what will happen with generative AI. The scenarios predicted by Harris and Raskin make sense, but even if the companies start using a direct business model, it is very likely that we will be heading in a very different direction than we currently think.

It is common knowledge that “past results do not guarantee future results”, but we often completely ignore that warning immediately after hearing it.

The main message of the presentation that I do think is true is this: AI seems to be bringing about a major paradigm shift. We should therefore follow AI developments closely and react in time when something happens. But above all, let’s not pretend that we already know exactly what that “something” will be.

Mark Meinema, thank you for the new insight you gave me and for your help in writing this article.

Read Pt 1: We need to talk about AI and productivity

Read Pt 2: The future of AI: from ChatGPT and AutoGPT to a personal AI assistant

Read Pt 3: This is why AI is not going to replace humans
