Risk, Ethics and Artificial Intelligence

I met with one of my mid-career graduates last week, and he said that what he enjoyed most about one of the courses I taught was that, no matter how narrow or technical the issue being discussed, the referential context was what I call the real world.

In the seven years I’ve been teaching, I’ve managed to keep at least one foot out of the university, writing and speaking on the ethical and social impacts of information, technology, security, and privacy.

Recently, probably because of local elections and the national political climate, I’ve spent more time reading and thinking about how to “backstop democracy.” I strongly recommend reading the Brookings Institution’s new white paper, The Democracy Playbook: Preventing and Reversing Democratic Backsliding.

Is it possible to use a risk analysis framework to imagine how we can develop new AI tools or modify existing ones, and to rethink rather than magnify the great divide that we face in this country? Where does this work belong – in research universities, backed by corporate underwriting and government grants; in the private sector, backed by unlimited amounts of money; or in the public sector, where the impact of democratic backsliding is most strongly felt? I would argue it belongs in all three arenas.

On Wednesday, I’ll open the SecureWorld Seattle conference with a presentation on artificial intelligence (AI), examining a set of technologies that have emerged rapidly as products used in both the public and private sectors, launched without solid risk analysis and with little more than proposed guidelines for ethical practice. What does artificial intelligence have to do with democracy?

As it turns out, a great deal, since some AI tools have been shown to undermine citizens’ fundamental rights to security and privacy. In his report on the Third Annual Aspen Institute Roundtable on AI, David Bollier notes that:

"AI benefits not only include increased efficiencies across societal sectors but also a transformational change in knowledge generation, communication and personalized experiences. At the same time, these advances can have counterweights in certain uses, unintended consequences, or control by bad actors. This includes the potential to disrupt fundamental societal values and norms as well as exacerbate existing systemic issues such as inequality and inequity."

At their 2019 AI Now Institute symposium, co-founders Meredith Whittaker and Kate Crawford focused on pushback to harmful forms of AI, identifying five themes: “(1) facial and affect recognition; (2) the movement from “AI bias” to justice; (3) cities, surveillance, borders; (4) labor, worker organizing, and AI; and (5) AI’s climate impact.” These themes are supported by a schematic that places 2019 events into one of those themes, showing that significant pushback is coming from employees of the very technology firms that are creating tools that magnify inequity and/or remove privacy guarantees previously in place. The AI Now Institute, based at New York University (NYU), is an interdisciplinary research center dedicated to understanding the social implications of artificial intelligence.

In Seattle, the University of Washington’s Paul G. Allen School of Computer Science and Engineering hosts a powerhouse group studying AI that also collaborates with the private-sector Allen Institute for Artificial Intelligence. The new University of Washington Center for an Informed Public, funded by the Knight and Hewlett foundations, will certainly also be a part of a larger effort to understand the relationship between AI technologies, ethics, risk, and disinformation.

One of the most thoughtful private-sector models for AI research and product creation that I know of is at Microsoft, where the firm has developed six ethical principles to guide its research and tool development. Its approach stands at variance with a company like Google, whose original informal motto was “Don’t be evil.” Google employees have been outspoken about the development or application of tools for purposes that undercut democratic principles.

On the public sector side, significant work has been done by organizations like The Institute for Ethical AI and Machine Learning, a “UK-based think tank that gathers technology leaders, policymakers and academics to develop industry standards.” The institute has identified eight ethical principles for researchers in the field to commit to: human augmentation; bias evaluation; explainability by justification; reproducible operations; displacement strategy; practical accuracy; trust by privacy; and security risks. Each principle is backstopped with a literal pledge for the practitioner to make. The institute’s work is well worth reviewing and comparing to others, such as the “Unified Framework of Five Principles for AI in Society,” written by Luciano Floridi and Josh Cowls.

While I’ll be using both examples and principles at the SecureWorld conference, the last word here goes to the Brookings Institution playbook:

"Companies can act in support of the elements of democratic systems by engaging in corporate social responsibility (CSR)…. The principles of CSR can help promote transparency, corporate accountability, and sustainable development, and help businesses keep in mind the long-term democratic health of their society. (198) …. Within the framework of CSR, companies can also work to defend established standards and regulations that can counter democratic backsliding and can themselves propose their own policies that promote and protect democratic values, even when the state itself rolls back such protections.” (p. 39)

 

Originally published in ASA News & Notes, November 11, 2019

Annie Searle

Searle is an Associate Teaching Professor Emeritus at the University of Washington. She is founder and principal of ASA Risk Consultants, a Seattle-based advisory firm. She spent 10 years at Washington Mutual Bank, most of them as a senior executive. Annie is a member of the CISA 10 Regional Infrastructure Security Group. She was an inaugural inductee in 2011 into the Hall of Fame for the International Network of Women in Homeland Security and Emergency Management. She writes a monthly column for ASA News & Notes and is the author of several books and book chapters. She is also a member of the emeritus board of directors for the Seattle Public Library Foundation.

