To Hold Tech Accountable, Look to Public Health

The field of public health has transformed medicine, yet failed the most vulnerable. This trajectory can be avoided.

How is it that public health has delivered on its promise to improve the lives of millions, while failing to resolve the dramatic health disparities of people of color in the US? And what can the movement for tech governance learn from these failures?

Over 150 years of building public institutions that serve the common good through science, public health has transformed human life. In just a few generations, some of the world's most complex challenges have become manageable. Millions of people can now expect safe childbirth, trust their water supply, enjoy healthy food, and count on collective responses to epidemics. In the United States, people born in 2010 or later can expect to live more than 30 years longer than people born in 1900.

Inspired by the success of public health, leaders in technology and policy have suggested a public health model of digital governance in which technology policy not only detects and remediates past harms of technology on society, but also supports societal well-being and prevents future crises. Public health also offers a roadmap—professions, academic disciplines, public institutions, and networks of engaged community leaders—for building the systems needed for a healthy digital environment.

Yet public health, like the technology industry, has systematically failed marginalized communities in ways that are not accidents. Consider the public health response to Covid-19. Despite decades of scientific research on health equity, Covid-19 policies weren't designed for communities of color, medical devices weren't designed for our bodies, and health programs were no match for inequalities that exposed us to greater risk. As the US reached a million recorded deaths, Black and Brown communities shouldered a disproportionate share of the country's labor and burden of loss.

The tech industry, like public health, has encoded inequality into its systems and institutions. In the past decade, pathbreaking investigations and advocacy in technology policy led by women and people of color have made the world aware of these failures, resulting in a growing movement for technology governance. Industry has responded to the possibility of regulation by putting billions of dollars into tech ethics, hiring vocal critics, and underwriting new fields of study. Scientific funders and private philanthropy have also responded, investing hundreds of millions to support new industry-independent innovators and watchdogs. As a cofounder of the Coalition for Independent Tech Research, I am excited about the growth in these public-interest institutions.

But we could easily repeat the failures of public health if we reproduce the same inequality within the field of technology governance. Commentators often criticize the tech industry's lack of diversity, but let's be honest—America's would-be institutions of accountability have our own histories of exclusion. Nonprofits, for example, often say they seek to serve marginalized communities. Yet while people of color make up 42 percent of the US population, just 13 percent of nonprofit leaders are Black, Latino, Asian, or Indigenous. Universities publicly celebrate faculty of color but are failing to make progress on faculty diversity. The year I completed my PhD, I was one of only 24 Latino/a computer science doctorates in the US and Canada, just 1.5 percent of the 1,592 PhDs granted that year. Journalism also lags behind other sectors on diversity. Rather than face these facts, many US newsrooms have chosen to block a 50-year program to track and improve newsroom diversity. That's a precarious standpoint from which to demand transparency from Big Tech.

How Institutions Fall Short of Our Aspirations on Diversity

In the 2010s, when Safiya Noble began investigating racism in search engine results, computer scientists had already been studying search engine algorithms for decades. It took nearly another decade for Noble's work to reach the mainstream through her 2018 book, Algorithms of Oppression.

Why did it take so long for the field to notice a problem affecting so many Americans? As one of only seven Black scholars to receive an Information Science PhD in her year, Noble was able to ask important questions that predominantly white computing fields were unable to imagine.

Stories like Noble's are too rare in civil society, journalism, and academia, despite the public stories our institutions tell about progress on diversity. For example, universities with lower student diversity are more likely to put students of color on their websites and brochures. But you can't fake it till you make it; cosmetic diversity turns out to influence white college hopefuls but not Black applicants. (Note, for instance, that in the decade since Noble completed her degree, the percentage of PhDs awarded to Black candidates by Information Science programs has not changed.) Even worse, the illusion of inclusivity can increase discrimination for people of color. To spot cosmetic diversity, ask whether institutions are choosing the same handful of people to be speakers, award-winners, and board members. Is the institution elevating a few stars rather than investing in deeper change?

Why is it so hard for institutions to change? One reason is that inequalities within public-interest institutions are often anchored in self-definitions that marginalize the people we say we are here to serve. In the history of public health, for example, the nascent field of obstetrics defined itself by attacking midwives, who were largely Black and immigrant women. Although many hospitals had worse mortality rates than midwives at the time, obstetricians defined their profession with racialized stereotypes of midwives as unsanitary and unscientific. Journalists and historians argue that the deliberate exclusion of Black women from scientific obstetrics is one reason for higher death rates of Black women in pregnancy today.

Beyond public health, people who study the urgent concerns of marginalized groups are often questioned by their institutions as less objective. Consider the idea of objectivity in journalism, one justification offered when the University of North Carolina at Chapel Hill rejected the tenure case of Pulitzer Prize–winning journalist Nikole Hannah-Jones in 2021. In academia, researchers who work with marginalized groups are often similarly sidelined through biased ideas about "basic" versus "applied" science. In the social sciences and computing, research with wealthy white groups in industrialized nations is often considered more fundamental science than research with communities of color. In computer science, subfields that study social and ethical issues attract women and students of color at much higher rates, while often being rated as lower-status when it's time to hire new faculty.

To become more equitable, public interest institutions can treat diversity as a core consideration in every part of the organization rather than an add-on for consultants and PR teams to handle. Leaders can start by deliberately defining our professions with the whole of society in mind. Institution-level equity also requires pay-equity changes through better graduate student funding, full-time employment rather than unpaid internships, and compensated board seats. For primarily white institutions, changes regarding who is included at the table also need to be matched with changes in culture.

Rethinking How Researchers Treat the Public

Ethics scandals have also poisoned the well for public-interest research on the tech industry. In 2012, when university researchers conducted a study to test the effect of Facebook's news feed on mental health, their decision not to seek consent helped fuel a scandal that had a chilling effect on platform research. The public also distrusts journalists who use social media data, with a third of Canadians reporting discomfort with the practice.

NGOs and academics have also exposed communities to privacy risks when crowdsourcing evidence. In one case, a research team recruited internet users without consent to study government censorship. By automatically requesting censored material from people's internet connections and observing the response, researchers could have exposed people to unwarranted government investigations in authoritarian countries. Marginalized communities also notice when predominantly white researchers and NGOs parachute into communities without consulting or investing in them. Parachute research is common in tech accountability, too. Multiple large NGOs have organized one-off citizen science "Big Tech" projects as ways to grow email and donor lists without investing in long-term community empowerment and organizing.

Tech accountability organizations sometimes gesture at the urgency of current crises to justify our decisions not to share power or invest in the communities we say we serve. In the short term, organizations that ride the news cycle will seem unusually successful at mobilizing people, raising money, and gaining acclaim. But as the scholar Zeynep Tufekci has observed, this short-termism can't develop the deep relationships needed for lasting change. Every time we de-prioritize communities for fads in funding, research, or causes, we are making a sustainable future harder to attain.

Breaches of trust are particularly harmful because communities are essential to successful technology governance. Decades of research have shown that community organizing is an essential force for governing complex problems. Communities collect data about problems, test ideas for change, organize pressure campaigns, and take civic action. In public health, community health workers serve as important bridges among communities, hospitals, scientists, and government agencies. If the US took a public health approach to technology governance, every region would have access to respected community technology workers who similarly care for our digital well-being.

At the Citizens and Technology Lab (CAT Lab), I have seen the benefits of community organizing through our community/citizen science on online harassment. People facing harassment often come to CAT Lab when their questions and ideas are ignored by corporations, governments, and academics. For eight years now, we have worked alongside communities to document institutional failures in harassment response and test independent ideas for harassment prevention. Together we have made fundamental advances in policy and science that are making a measurable difference in the daily lives of millions of people. Tech platforms and policymakers are now adopting community insights validated by the science we have done together. We are also seeing community members take up new career and education opportunities created through their involvement in community science.

Society will succeed at equitable technology governance when we have strong community networks that maintain relationships of reciprocity with scientists, journalists, and NGOs. Entirely new professions could offer well-paying jobs to people who do this essential community work. In the meantime, we can start by building reciprocal relationships with partner communities, changing incentives for community-engaged work, and offering equitable grant-making for communities involved in public-interest technology projects.

Ecosystem Strategies for Technology Governance

What might equitable technology governance look like across the sector as a whole? Chicago's 2020 decision to end its secretive predictive policing system showcased the benefits of a strong ecosystem of public-interest institutions.

Successful governance of Chicago's predictive policing program required research, lawsuits, and organizing from many communities and organizations. Journalists at the Chicago Sun-Times won a lawsuit compelling the city to publish data from the Strategic Subjects List. Researchers at the RAND Corporation had documented the system's ineffectiveness. Immigrant rights advocates and university-based lawyers challenged biased decisionmaking, freeing people who had been unlawfully detained. Even the scholars whose work inspired Chicago's policing system joined the opposition. Each study, lawsuit, shocking news article, and data visualization helped policymakers and the public conclude that the system did more harm than good. The resulting decision to shut down the Strategic Subjects List could be a textbook illustration of ecosystem models for governing complex systems.

In public health, successful crisis response also requires a cooperative network of scientists, government agencies, and community groups. But while ecosystems can sustain life, they can also collapse. The response to Covid-19 was one of the most sophisticated life-saving endeavors in the history of our country. Yet according to National Institutes of Health (NIH) director Francis Collins, American institutions ignored lessons from past pandemics and treated health as a problem of medical science rather than a combined social and technical problem. These mistakes cascaded across other parts of the social safety net; by underfunding Black and Brown scientists, the NIH also created gaps in the knowledge and networks of trust needed to prevent the spread of Covid. Similar inequality in journalism disconnected people from resources and institutions they needed to stay alive. Government policies were designed in ways that failed to reach the people most affected by the pandemic. Black-, Hispanic-, and Asian-owned businesses were the least likely to access pandemic relief, largely because they lacked access to prerequisites like banking services.

No single institution can fulfill democracy's full promise to all people. Nor can systemic failures be explained through the mistakes of a single actor. To build technology governance that reliably serves the public interest, we need to recognize systemic risks and imagine systemic solutions.

Ecosystem failures in technology governance occur when the mistakes of many institutions compound, leaving the whole weaker than the sum of its parts. For example, many organizations in technology governance choose to hire a small number of the same activists and scholars of color rather than cultivate a deep bench of diverse leaders. At universities, the resulting game of musical chairs leaves less famous scholars unsupported, entrenching inequality. Boards and conferences seek out the same handful of people to serve everywhere, taking them away from the work that gained them prominence in the first place. This hyper-visibility can also catalyze backlash against equality when people who are fooled by the diversity shell game overestimate progress on diversity and resist change.

Another public-interest technology failure happens when institutions hire from the least diverse fields in America for jobs those fields do not prepare people for. When governments and NGOs build technology policy teams, they often create positions limited to "technologists" and "engineers," who are rarely trained in the social science mindsets and methods needed to consider the social dimensions of technologies. By redefining jobs to include more diverse fields such as communication, information science, and human-computer interaction, organizations can develop essential expertise while also expanding organizational diversity.

Technology governance institutions also have a responsibility to inspire the next generation. Tech companies have teamed up to invest hundreds of millions of dollars to convince young people that working for tech firms is the best way to achieve financial security while doing good in the world. As a college professor, I routinely hear students worry that studying social issues or ethics could harm their chances of an industry job. In their view, topics more "basic" to computer science will get them ahead, given the prejudices of the field. Many BIPOC students choose degree programs that they know will undervalue them as people—hoping it will improve their job opportunities. We can free students from this dilemma by collaborating to create equitable public-interest careers in technology and society.

Working Together for Equitable Technology Governance

The public health model of technology governance represents a powerful opportunity for the common good. It calls us to imagine communities, scientists, and regulators working together toward a flourishing society that protects the vulnerable from technology failures.

How can we fulfill that promise when we face the same forces that have created a legacy of inequity in public health? In his book The Voltage Effect, John A. List points out that product teams are often hostage to the populations they start with. The reverse can also be true. By starting early with a diverse group and working to avoid predictable equity failures, we have a once-in-a-lifetime opportunity to build a field of technology governance that genuinely serves everyone.