The Future of Platform Power: Fixing the Business Model
Digital platform companies whose business models hinge on the monetization of personal information, notably through targeted advertising, have run roughshod over competitors, communities, and democracy itself in the single-minded pursuit of market dominance and ever-increasing profits. An expert consensus that this business model is responsible for these platforms’ negative externalities has emerged in recent years, yet the nature of the targeted-advertising business model is often poorly understood. It rests on three theoretical pillars: surveillance capitalism, Chicago School neoliberalism, and “technosolutionism.” The “middleware” proposal merely displaces the perverse incentives inherent to the business model. However, policy interventions focused on reforming online advertising, notably through comprehensive privacy reform, are much more promising.
During Mark Zuckerberg’s April 2018 inaugural appearance before the U.S. Senate, a question from then-Senator Orrin Hatch stood out for its simplicity: “How do you sustain a business model in which users don’t pay for your service?”
“Senator, we run ads.”1
At the time, the commentariat mocked Hatch’s question as a sign of ignorance about today’s digitized world. Yet the truly telling part of this exchange was Zuckerberg’s flippant response. The Facebook founder likely thought that he was stating an uncontroversial fact. Instead, he pointed to the north star guiding his company onward to ever-increasing profits. In this single-minded pursuit, Facebook and other tech giants have run roughshod over competitors, communities, and democracy itself.
In the three years since the hearing, policy makers and tech-insiders-turned-critics have come to a rough consensus that the collateral damage big-tech companies have inflicted is intrinsically linked to their business model. These commentators are late to a party originally thrown by activists and academics, but are welcome nonetheless. Newly mainstream, the idea that “it’s the business model” forms the core of a reform proposal out of Stanford University that was recently detailed by Francis Fukuyama in these pages.
Fukuyama’s article synthesizes the emergent consensus about the challenges of content governance for social-media platforms:
Some decisions to flag or remove posts have been either more contentious or simply erroneous, particularly since the platforms began to rely increasingly on artificial-intelligence (AI) systems to moderate content during the covid-19 pandemic. An even more central question concerns not what content social-media platforms remove, but rather what they display. From among the vast number of posts made on Twitter or Facebook, the content we actually see in our feeds is selected by complex AI algorithms that are designed primarily not to protect democratic values, but to maximize corporate revenues (p. 38).
The essay similarly echoes the expert consensus in arguing that any governmental response must not “aim at silencing speech deemed politically harmful,” as this would immediately founder on the U.S. First Amendment and the universal human right to freedom of expression (pp. 38–39).
Yet despite this apparent agreement, not everyone is clear on what the “business model” actually is. Looking more closely, the targeted-advertising business model rests on three theoretical pillars. First is the conviction that all of reality and human life is a commodity, from which companies are free to extract and monetize data. Shoshana Zuboff calls this ideology “surveillance capitalism.”2 The second pillar is blind faith in the invisible hand of the market (including automated advertising auctions) as the best way to allocate resources. This is commonly called Chicago School neoliberalism, in reference to a group of influential twentieth-century economists. The third pillar is technosolutionism, the idea that the solution to social and political problems lies in technology and, more specifically, in the use of “big data” to automate complex decision-making processes.
What does this mean in practice? First, social-media companies gather as much information as they can about who we are and what we do, collecting some data directly from users (first-party data) and additional data through other entities (third-party data). All these demographic and behavioral data are combined and analyzed to sort internet users into groups of potential interest to advertisers. Then, advertisers are invited to buy ad “impressions,” specifying which groups they want to reach. Some social-media companies use what are known as ad-targeting algorithms to make that determination automatically for their clients. This feature is central to several legal claims against Facebook for violating U.S. civil-rights legislation by enabling advertisers to exclude users from their target audiences on the basis of “ethnic affinity” and other protected categories.
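To make these mechanics concrete, the following is a minimal sketch in Python of how first- and third-party data might be merged into advertiser-facing audience segments. All names, fields, and segment rules here are hypothetical and purely illustrative; real ad-tech pipelines are vastly more elaborate and are not described in this essay.

```python
# Illustrative sketch only: hypothetical data and segment names, not any platform's actual system.

# First-party data: collected by the platform in its own direct interactions with the user.
first_party = {"user123": {"age": 34, "liked_pages": ["running", "finance"]}}

# Third-party data: obtained from other entities (data brokers, tracking pixels, and so on).
third_party = {"user123": {"recent_purchases": ["hiking boots"], "browsed_sites": ["mortgage-rates"]}}

def build_segments(user_id):
    """Combine demographic and behavioral data into audience segments advertisers can buy."""
    profile = {**first_party.get(user_id, {}), **third_party.get(user_id, {})}
    segments = set()
    if "finance" in profile.get("liked_pages", []) or "mortgage-rates" in profile.get("browsed_sites", []):
        segments.add("likely_homebuyer")
    if "running" in profile.get("liked_pages", []) or "hiking boots" in profile.get("recent_purchases", []):
        segments.add("outdoor_enthusiast")
    return segments

# Advertisers specify which segments they want to reach; the platform sells matching "impressions".
print(build_segments("user123"))  # e.g. {'likely_homebuyer', 'outdoor_enthusiast'} (set order may vary)
```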
Companies that make money this way are incentivized to increase users’ time on and engagement with the platforms: This increases both the number of “impressions” available for sale to advertisers and the amount of data that can be collected to improve ad targeting (or at least convince advertisers that targeting is being improved). Platforms use content-curation algorithms to show individual users the content most likely to keep them engaged, often by eliciting strong emotions such as excitement, anger, or fear. Executives such as Facebook’s Monika Bickert claim that the algorithms favor “meaningful” content,3 but it is unclear how “meaningfulness” can be measured—unlike engagement, which can be readily assessed through clicks, “likes,” and comments.
In Facebook’s world, things that can be measured matter most.
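The asymmetry between engagement and “meaningfulness” is easy to see in a toy ranking function. The Python sketch below uses invented field names and weights rather than any platform’s actual algorithm; its point is simply that a feed ranked on measurable engagement signals has no column for meaning.

```python
# Toy content-curation ranker: hypothetical posts and weights, for illustration only.
posts = [
    {"id": "calm_news_update", "clicks": 40, "likes": 12, "comments": 3},
    {"id": "outrage_bait", "clicks": 900, "likes": 310, "comments": 640},
]

def engagement_score(post):
    """Score a post using only signals the platform can measure directly."""
    return 1.0 * post["clicks"] + 2.0 * post["likes"] + 3.0 * post["comments"]

# The feed shows the highest-scoring posts first; emotionally charged content
# tends to win because it generates more measurable interactions.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])  # ['outrage_bait', 'calm_news_update']
```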
But this type of algorithm is only part of the equation. Over time, platforms have also developed complex systems for identifying content that should not be seen and either removing it altogether or showing it less frequently (“downranking”). Companies based in the United States have historically wanted to do as little of this content-moderation work as possible, both out of a commitment to free-speech norms and because moderating is expensive, impossible to get right all the time at a global scale, and guaranteed to provoke controversy. Companies also try to automate moderation as much as they can, since this allows them to reduce labor costs and to blame any mistakes on the algorithm, which they invariably promise to improve whenever errors come to light.
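As a rough illustration of the remove-versus-downrank logic described above, automated moderation can be pictured as scoring content against policy categories and then choosing among removal, reduced distribution, or no action. The classifier and thresholds in this Python sketch are invented for illustration; production systems combine many models with human review.

```python
# Hypothetical automated-moderation triage: classifier and thresholds are illustrative assumptions.

def classify_policy_risk(text: str) -> float:
    """Stand-in for a machine-learning classifier that returns a policy-violation probability."""
    flagged_terms = {"attack", "violence"}
    words = set(text.lower().split())
    return min(1.0, 0.4 * len(words & flagged_terms))

def moderate(text: str) -> str:
    score = classify_policy_risk(text)
    if score >= 0.8:
        return "remove"    # taken down entirely
    if score >= 0.4:
        return "downrank"  # shown less frequently in feeds
    return "allow"

print(moderate("organize the attack with violence"))  # 'remove'
print(moderate("planning a peaceful protest"))        # 'allow'
```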
Facebook has further tried to outsource responsibility for high-profile content decisions to a company-funded “Oversight Board” comprising free-expression and human-rights experts from around the world. The company has neither the legitimacy nor the expertise to make hard calls about balancing freedom of expression against other fundamental rights—and, of course, executives would rather avoid being blamed by those who disagree with platform decisions. It remains to be seen whether this buck-passing scheme will succeed: In May 2021, Facebook’s Board punted the decision about whether former U.S. president Donald Trump should be permanently banned back to the company.
Many observers (including both Fukuyama and me) blame the combination of these three types of algorithmic systems—ad targeting, content curation, and content moderation—for a range of harms, including illegal discrimination, targeted harassment, hate speech, election interference, and incitement to violence. We also agree that restructuring the social-media industry is a more productive approach than micromanaging individual content-governance decisions. Where Fukuyama and I disagree is on how to address these dynamics to achieve our shared goal of “making the internet safe for democracy.” Like most discussions of platform governance, the Stanford proposal focuses exclusively on user-generated content while ignoring the moderation and targeting of ads—the key to social media’s financial viability. I suggest the opposite approach: Fix how platforms govern the content and targeting of ads, and the rest will follow.
The “Middleware” Proposal
The key insight in the Stanford proposal is that social media can be reformed by disrupting the connection between income-generation (in this case, advertising) and editorial functions, much as traditional media organizations purport to separate these activities. Unfortunately, the proposal doubles down on two of the fallacies that got us here in the first place—Chicago School neoliberalism and technosolutionism—while failing to address surveillance capitalism at all. To wit, the Stanford group has released a report entitled Middleware for Dominant Digital Platforms: A Technological Solution to a Threat to Democracy.
Fukuyama’s proposal, variations of which have also been advanced by Ethan Zuckerman, Richard Reisman, and others, identifies content governance as too consequential for society to trust companies to do, and too fraught for companies to want to keep doing.4 Why not saddle someone else with this responsibility, then? Rather than delegating decisions to pseudo-independent experts, as Facebook is trying to do, the “middleware” proposal turns to market competition and its supposed ability to fuel extraordinary technological innovation.
Fukuyama and his colleagues suggest placing content governance in the hands of new intermediaries dubbed “middleware firms.” Each of these firms would set its own rules for what users (and presumably advertisers, though this is not explicitly stated) can and cannot post, and who sees or does not see discrete pieces of content. Each firm also would have to figure out how to use a combination of technology and human labor to enforce its rules. Governments and citizens would point fingers at middleware providers when their decisions inevitably drew controversy, and users could vote with their mouseclicks by choosing among competing middleware services—whose purveyors might range from Breitbart to the American Civil Liberties Union.
Users would file into their “filter bubbles,” perhaps to an even greater degree than they do now. Still, depending on the variety of middleware on offer, some people might be less likely to come across the types of extreme content that currently thrive thanks to attention-driven algorithms. Fewer people, for instance, might join groups such as the Facebook “Stop the Steal” group to which the January 6 attack on the U.S. Capitol has been attributed—only users whose chosen middleware provider allows this kind of content would become aware of it through recommendation algorithms. But those who did join the group would be able to organize on the platform without interference, as the middleware firms whose algorithms recommended the group would be unlikely to take it down. It is impossible to know whether this scenario would lead to better or worse outcomes. Perhaps fewer people would show up at the Capitol. But those who did might be better organized, since they would be able to plan uninhibited by corporate Trust and Safety teams seeking to quash incitement to violence.
There is, however, at least one even bigger problem, as Fukuyama allows: “There is no clear business model that will make [middleware services] viable today” (p. 42). This point is essential: Middleware firms will have their own set of incentives and will need to be accountable to someone, be it a board of directors, shareholders, or some other entity. Incentives and accountability both depend on how the “middleware” providers will make money. Fukuyama suggests the government should “set revenue-sharing mandates that will ensure a viable business model for middleware purveyors” (p. 43). Under this scheme, the platforms themselves would still earn money from targeted ads, but fork over a slice of that pie to the middleware companies—which would therefore have an incentive to keep the pie as big as possible. It is here that we see the remaining pillar, surveillance capitalism, rear its head.
Setting aside the legal and political viability of such a mandate, it is unclear how this arrangement would bring any real change. As long as the entities that govern content rely on revenue from targeted advertising, even if indirectly, they will be incentivized to reward engagement. Alternatively, if these firms were funded by grants or contracts from governments or private foundations, their decisions would likely reflect—or at least be seen as reflecting—these funders’ priorities, thus deepening conflicts over the legitimacy of content curation and moderation. Consumers are unlikely to be willing to pay for subscriptions to such services, and even if they were, the start-up costs would be insurmountable.
It is safe to say, then, that introducing “middleware” would not change the fundamental business problem: Moderating and curating content in the public interest is difficult, contentious, and expensive, all the more so when the imperatives of the targeted-advertising business model militate against it. Other interventions, including some dismissed by Fukuyama in his essay, stand a better chance. Knock down at least one of the platform business model’s three pillars, and the whole house of cards comes crashing down.
Fixing Targeted Advertising
It is hard to say what might break the habits of reflexively turning to the market or to technology to solve entrenched social problems. But the way to stop surveillance capitalism in its tracks is clear: comprehensive U.S. federal privacy legislation with enforcement mechanisms that go beyond fines, which big tech has proven ready to dismiss as a cost of doing business. This legislation should apply to all types of actors (including government agencies, nonprofits, and companies of all sizes) and enshrine in law a first-party data paradigm: Companies should have access only to those data gathered in their own direct interactions with users. Policy announcements by Google and Apple underscore that a shift in this direction is already underway. But change must not stop with voluntary efforts by companies that have everything to gain from writing the new rules of the game.
To regulate the handling of first-party data, the law should include data-minimization requirements (data processors can collect only those data that they need to perform the service requested by the user) as well as purpose limitation (they can use data only for the specific purpose for which they collected it—meaning not for ad targeting or content curation). Privacy legislation should ban uses of data that violate civil and human rights, just as the law already prohibits targeting algorithms that exclude people from seeing ads for jobs, housing, education, or financial services on the basis of protected categories. It should offer users granular control over the collection and use of their data, on an opt-in basis. This would impede companies from customizing users’ individual media diets and steer the ad sector back toward the contextual-advertising paradigm that prevailed until relatively recently (in which ads were simply selected based on the content of the pages that hosted them).
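A data-minimization and purpose-limitation regime can be pictured as a simple gate that every proposed data use must pass. The Python sketch below is not a reading of any actual bill; its field names and rules are assumptions meant only to show how a first-party, opt-in, purpose-limited paradigm would block ad targeting by design.

```python
# Illustrative compliance gate for a first-party, purpose-limited data regime.
# All field names and rules are hypothetical, sketched from the paradigm described above.

record = {
    "source": "first_party",             # collected in a direct interaction with the user
    "collected_for": "message_delivery", # the specific service the user requested
    "user_opted_in": True,
}

def use_permitted(record: dict, proposed_use: str) -> bool:
    """Allow a data use only if it is first-party, opt-in, and matches the collection purpose."""
    return (
        record["source"] == "first_party"
        and record["user_opted_in"]
        and proposed_use == record["collected_for"]
    )

print(use_permitted(record, "message_delivery"))  # True
print(use_permitted(record, "ad_targeting"))      # False: purpose limitation blocks it
```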
We are facing a multifaceted problem, and privacy legislation alone is no panacea, though I believe it is the pièce de résistance. A second promising angle of attack would be to introduce some degree of intermediary liability for advertising content and targeting, as my colleagues at Public Knowledge have suggested. This would force companies to develop robust sociotechnical systems for ensuring that ads comply with both the platforms’ own rules and applicable national laws—systems that could then be adapted to govern user content more precisely, even without the threat of litigation. And any solution must be accompanied by antitrust enforcement, corporate-governance reform, and mandatory transparency regarding content-governance processes, all essential tools for corporate accountability.
NOTES
1. Bloomberg Government, “Transcript of Mark Zuckerberg’s Senate Hearing,” Washington Post, 10 April 2018.
2. Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: PublicAffairs, 2019).
3. Monika Bickert, Testimony Before the United States Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law, 27 April 2021, www.judiciary.senate.gov/imo/media/doc/Bickert%20Testimony.pdf.
4. Ethan Zuckerman, “The Case for Digital Public Infrastructure,” Knight First Amendment Institute at Columbia University, 17 January 2020, https://knightcolumbia.org/content/the-case-for-digital-public-infrastructure; Richard Reisman, “The Internet Beyond Social Media Thought-Robber Barons,” Tech Policy Press, 22 April 2021, https://techpolicy.press/the-internet-beyond-social-media-thought-robber-barons.