The first Model T hit the market more than 100 years ago, in 1908.
For American consumers, mass access to the automobile was a miracle—a game changer that enabled vast opportunities for commerce, travel, and social mobility. But the prevalence of this new technology also presented a novel set of challenges.
Cars jolted off the road, killing pedestrians—and even children playing in the street. They competed for space on unmarked lanes with horse-drawn carriages and cable cars. Without a clear consensus on who owned the roads, there was no way to regulate them and ensure the common safety of all. And eventually, pollution and exhaust proved global problems, spilling into the atmosphere and driving up carbon emissions.
That is, until automotive executives; legislators at the local, state, and federal levels; civil society organizations; and consumers realized inaction was dangerous and unsustainable. Together, they built the physical infrastructure to create safer roads with speed limits and stop signs—and the regulatory infrastructure to govern behavior on those roads. Over a century of multisector collaboration, we've adopted seatbelts, insurance requirements, and safety and emissions inspections, and even developed electric cars and charging stations, all to fortify this automotive infrastructure and advance the public good. Some of these challenges have been mitigated; some are managed; some are still emerging. Working together, we have built collective and connective muscles to evolve and adapt to new challenges with new solutions.
Today, we are living through a new industrial revolution. Each passing year, hundreds—if not thousands—of Model T-like innovations are introduced into our digital lives in the form of new websites, apps, currencies, algorithmic tools, automations, and more.
And once again, our society’s infrastructure is not keeping pace.
In the past 30 years, digital communications technologies have transformed how we connect and engage with the world around us, creating opportunities in every area of contemporary life. But as often as these technologies foster learning and promote justice, they have also been used in ways that amplify inequality. Too many people—particularly those who have historically been excluded or marginalized—aren’t able to access, benefit from, or influence digital platforms. The governance and use of technology are implicated in nearly all the drivers of inequality, underscoring the extent of this problem.
Policy makers are struggling to regulate everything from AI-operated phones to advertising models. Consumers jump to use new apps and services, often without understanding the adverse consequences they entail. And as a result, their privacy—and sometimes their safety, their attention, and even their ability to distinguish truth from fiction—becomes a casualty in the race to create new tech.
What’s clear is that the solution to these challenges lies not only in developing tech for good, but also in asking, “good for whom?”
A pioneering yet still emerging field, public interest technology (PIT) is both asking that critical question and finding answers. Public interest technology is a cross-disciplinary approach that demands technology be designed, deployed, and regulated in a responsible and equitable way—in other words, in service of the public interest.
It operates on the understanding that technology is not, and has never been, “neutral.” That’s why public interest technologists—engineers, scientists, community organizers, activists—center the experiences of historically marginalized groups, those who have been both targeted and neglected by technology.
This field, and indeed this in-depth series, cuts across sectors and calls for those in academia, civil society, and the private and public sectors to individually and collectively build a society that can fully benefit from technology—and limit its adverse consequences.
The examples that follow animate the critical contributions public interest technologists are making across each sector. Together they create a road map of multisector solutions and advance a holistic understanding of a connected future guided by public interest technology insights.
Academic and training institutions lay the knowledge foundations for technology experts, present and future. Lessons taught in these institutions play an outsized role in shaping the technologies we use in our everyday lives. To that end, ensuring that our academic and workforce training institutions are adequately prepared to build a diverse, equitable, and cross-disciplinary public interest technology ecosystem is essential. Institutions of higher education codify knowledge and best practice while also nurturing the next generation of wisdom and service. Yet, too often, university curriculums for computer science, data science, and engineering departments emphasize technical skills in disciplinary silos at the expense of transdisciplinary skill sets.
As a result, many talented designers, engineers, and technologists who are building the next generation of technology have not been trained to anticipate relevant insights from fields like public policy or sociology—insights that drive the way people interact with digital goods, services, and platforms.
At best, this lack of training will keep us from preventing harms or developing necessary solutions; at worst, it will replicate and reinforce existing inequities, invisibly encoding them in the digital world. Consider the criminal justice system, where more than 20 states use algorithmic modeling to calculate the recidivism risk of criminal defendants—despite the fact that the algorithms exist in a black box, without transparency into the ways they calculate their recommendations.
When tested by experts, some of these algorithms have been proven to be biased and 77 percent more likely to erroneously predict that Black defendants will commit a violent offense than white defendants. This is, in part, a reflection of biases in the data that has been used to train the predictive technologies. When a person is stopped by police or arrested, that data is fed into predictive algorithms. Because predictive algorithms are easily skewed by arrest rates, and because bias in policing leads to Black people being more than twice as likely to be arrested as white people and more than five times as likely to be stopped by police without just cause, the data used to assess criminal defendants is inherently stacked against Black people. And these types of black-box algorithms are unaccountably making decisions in every sector where we live and work—from schools to stores to hospitals.
To meet the urgent need for experts who understand both technology and its ethical and societal implications, academic institutions must make this pipeline a priority. Universities can begin by developing, expanding, and investing in PIT programs so that technologists of the future graduate with both the technical expertise and the interdisciplinary wisdom necessary to build technology that anticipates harms and materially improves people’s lives. Fortunately, universities don’t have to build without a blueprint. We already have exemplary models of PIT programs, many led by student activists and thinkers. For example, the Public Interest Technology University Network includes 49 member colleges and universities and eight international affiliates committed to promoting experiential learning opportunities, increasing faculty support, and building PIT initiatives and research.
In our own work, we have seen their efforts pay off manyfold. For instance, at the Public Interest Tech Lab at Harvard University (a program co-led by Sweeney), students were challenged to implement scientific experiments of their own design that document unexpected and unforeseen adverse consequences of technology on society. The best experiments would earn students the chance to present their ideas to regulators in Washington, DC.
Leaders at the Public Interest Tech Lab expected to bring one, maybe two students to Washington. But in the end, we took 26, who presented projects ranging from price discrimination in online tutoring services to racial bias in online services. And the work from these and subsequent students has helped ignite new public protections, modified regulations, and changed business practices at big tech companies. These real changes lay the foundation for a more just digital reality.
The harms wrought by technology infiltrate every area of life, and we will need to update our understanding of how technology’s presence and influence can undermine or advance our goals, including for democracy itself. One initiative pioneered by the Public Interest Tech Lab at Harvard, VoteFlare, alerts voters to changes in their registration status and mail-in ballot acceptance so they can ensure the information is accurate and monitor against tampering or changes that might keep them from casting their vote.
These are proof points of the doorways and beginnings academia can offer technologists to lead purposeful, successful careers, and to forge a more just technological future for us all.
Private sector companies are rapidly innovating new technologies that change the fabric of our day-to-day lives—from apps and websites to currencies and virtual realities. Many of these innovations are privately owned by a select few tech giants. Such private sector companies spend millions to recruit top technical talent, but they often fail to invest in cross-disciplinary experts trained to anticipate the unintended consequences of certain design choices—and protect users and companies from potentially egregious errors.
Today, because that perspective remains underdeveloped, companies are often reactionary when faced with their oversights, playing catch-up after the damage is done—all while enabling misinformation, privacy violations, and biased artificial intelligence, and spurring record-low public trust in the tech industry. For example, companies that embrace data-heavy marketing strategies without adequate oversight may be implicitly embracing bias. Fair housing and fair employment advocates revealed how digital advertisers were able to target Facebook ads using data proxies for race, age, and gender, allowing customers to illegally exclude protected groups from seeing ads for jobs or housing.
Though such behaviors proliferate in big tech, a few key outliers in the private sector are leading the way when it comes to partnering with cross-disciplinary public interest technology experts to understand and mitigate the adverse consequences of their technologies. Consider Airbnb: Following documentation that hosts of Asian origin were making up to 20 percent less than their white counterparts in some locations, Airbnb reinvented its booking procedure in the name of objectivity and updated its nondiscrimination policy.
The company also partnered with civil rights organizations to create Project Lighthouse, a US-based initiative to help Airbnb collect and understand data on racial discrimination on its platform and inform future policies and features to overcome it. By working with public interest technologists and institutionalizing their findings, companies can follow Airbnb’s model and ensure that they are considering impact and equity from the start.
As technology companies become increasingly powerful, today’s civil society organizations are striving to hold monumental institutions—governments and tech giants alike—accountable. They are being asked to reach new, and larger, communities—online and in-person. And they are doing so in an increasingly technological world, in which Americans spend more than 10 hours a day online, and data is arguably the world’s most valuable commodity.
Yet, many of these organizations lack the technological expertise to scale their efforts—and to articulate how technology is related to, and entrenched in, civil and human rights concerns.
Without interdisciplinary experts working collaboratively with civil society leaders to illuminate the relationships between equity, technology, and community engagement, digital practices that marginalize vulnerable groups will continue unabated.
To address this gap, funders must empower civil society organizations with the resources and expertise necessary to adopt a public interest technology lens. Not only will such an approach help them serve their communities, it will also help diminish the silos within and between those working in the public interest and private sector, between civil rights and digital rights.
Already, we’ve seen what happens when these sectors advance solutions together. In 2011, the Ford Foundation invested in civil and internet rights organizations that established a Civil Rights, Privacy, and Technology Table, a 10-year coalition of more than 30 organizations. The Table leverages relationships among leading experts in civil rights and technology to address concerns like voter suppression, digital surveillance of immigrant communities, and hate speech.
Through these collaborations, the coalition has successfully advocated for Google to ban advertisements for payday loans, predatory loans that target communities of color. And it has persuaded the Federal Communications Commission to limit the exorbitant cost of phone calls made from prison, as well as expand subsidized phone plans for lower-income Americans to include broadband internet access. The coalition has collaborated on hundreds of issues, and while we’ve seen progress, much more is needed.
These successes reveal how, with investment in public interest technology, civil society can play an important role as both an accountability agent and an active participant in improving our tech ecosystem.
Our technological future demands thoughtful government involvement—both to protect the public interest and to create the robust competitive markets innovation demands. Yet, without technological expertise, including a broad understanding of technology’s potential harms, the government is toothless in its attempts to properly regulate big tech.
And the government isn’t just falling short of regulating technology—it’s failing to harness technology in its own operations. Less than four years ago, the US Department of Defense was still using floppy disks on some legacy systems. And over the past year, the government has come under fire for shaky website rollouts and complex online systems that ensured that wealthy Americans accessed vaccines and COVID-19 testing before their lower-income neighbors.
Solving these astounding dual failures—a lack of regulation and a lack of implementation—should be a major policy priority. But instead, the federal government continues to underfund its technology and cybersecurity operations, effectively holding back our technology infrastructure. Just last year, the United States invested less than $2 billion in technology and cybersecurity in the American Rescue Plan, rather than the $10 billion originally proposed.
In 2021, 27 percent of all federal IT job announcements were never filled. That represents a total of 1,443 IT jobs unfilled in FY2021, and that’s just counting the places where the government has both the clarity and budget to try to recruit this talent.
From the local to the federal level, by welcoming public interest technologists into the policymaking process, governments can more adequately regulate technology—and set a new global standard.
In fact, the Biden administration has already taken a welcome first step. In 2021, the White House hired public interest technologist Alondra Nelson, a racial justice advocate and professor at the Institute for Advanced Study’s School of Social Science, to serve as deputy director of the White House Office of Science and Technology Policy.
Promising signs are popping up at the state level, as well. Throughout the pandemic, a PIT network of 6,000 pro bono scientists and researchers—named the U.S. Digital Response—has helped state governments develop COVID-19 responses, including establishing online data dashboards to track hospital resources and restoring service to Department of Labor websites. These efforts saved lives, and with additional funding, this and other initiatives can be expanded and formalized at the federal level to create a more robust, transparent PIT infrastructure.
Together Towards a Just Tech Future
This in-depth series will highlight these and other examples of how we can build a better and more equitable technological world with the public interest at its center. And much like the country’s response to the dangers of automobiles in the early 20th century, advancing public interest technology will function best with widespread participation. Social entrepreneurs, business leaders, philanthropists, technologists, civil society leaders, and academics can come together to do this work.
The stakes we’re describing are high, and the problems are complex. As technology expands its reach into every corner of life, meeting the challenges of our day requires the vision of public interest tech everywhere as well: in every sector, and through the many lenses it takes, be it civic tech, community tech, or tech for good. Now is the time.
While it may seem a steep mountain to climb, we can find solace in the fact that we’ve been here before. From its inception, the field of public interest law faced similarly long odds, but philanthropy, advocates, and communities came together to build a field so integrated into the social justice and legal landscape that we can barely remember when it didn’t exist.
From COVID-19 to criminal justice, digital advertising to defending democracy, and every other area that technology touches, public interest technologists need to be there, ensuring that technology works for good, for all, for now, and into the future. It’s time to invest in this movement.