When he was 21, Bill Sourour, a programmer and teacher, was hired by a marketing firm to build a website for a pharmaceutical company. As part of the site, he was asked to code a quiz that would recommend a drug treatment to teenage girls. But the quiz was rigged: the company told him to display its drug as the only recommendation, regardless of the combination of answers the user selected. Sourour thus helped the company circumvent Canadian advertising laws in order to convince young women to purchase a particular drug. Later, Sourour discovered that the consequences of his work might have been lethal: the drug was known to worsen depression, and at least one user had committed suicide while taking it.
Ethical questions inevitably arise with innovation, but they are often an afterthought, and simplistic justifications frequently stand in for serious ethical consideration. When the tension between privacy and security is perceived as zero-sum, for example, privacy takes a backseat. With clients demanding quick turnarounds, and engineers often lacking a deep understanding of civil liberties, privacy falls through the cracks. Software requires us to weigh privacy and security together, yet the two are still treated as mutually exclusive. Take the Apple v. FBI fight last year. To whom did Apple owe its allegiance: its clientele, the government, or itself? Should the firm have prioritized national security or consumer privacy?
Schools like Stanford should work to change this mindset by building an ethics requirement into their engineering degrees. In particular, Stanford should require computer science majors to take a course on computer and information ethics.
Unfortunately, the opposite sentiment seems to be taking root on the Farm: as the techie-fuzzy divide widens, computer science students value the study of ethics less and less. They are trained in an environment that idolizes entrepreneurship and innovation in the abstract, and, too busy catching up on sleep after the latest hackathon, they are never taught to seriously consider the societal or political impact of their products.
This oversight is troubling since technology touches almost all dimensions of 21st-century American lives. We have all heard spiels about big data and social network revolutions. Retail, news, banking, transport, entertainment, healthcare and even education have all been digitized. And while the far-reaching impact of technology and software is a truth universally acknowledged, Stanford’s educational program does not adequately reflect this fact.
Some argue that computer science does not warrant its own subfield of ethics, one focused on the social and ethical impacts of information and communication technology. But as Walter Maner, the professor who pioneered the field, has explained, many of the ethical problems raised by computer and information technology are genuinely new and lack an effective analogy in past ethical work. He argues that we must develop new moral principles in order to formulate new policies.
By neglecting to require computer science majors to study ethics, Stanford produces engineers who can write code efficiently, but not thoughtfully. Professor Rob Reich maintained that "many students are treating Stanford University as a conveyor belt to technology wealth." It is true that many Stanford software engineers will work on fairly innocuous projects, and not all enter the field with a desire for social impact. But with technology shaping communication, governance and commerce, a software engineer's work inevitably has human consequences, whatever his motives. As Robert Martin, a software engineer and author of The Clean Coder, put it, "we are killing people. We did not get into this business to kill people. And this is only getting worse." Software engineers no longer just write video games and mainframe software; they program our cars and airplanes and write apps for our children.
Our society’s perception of programmers as code monkeys does not capture the profession’s true significance or remarkable power. Indeed, as Robert Martin points out, “we rule the world. Other people believe they rule the world but they write down the rules and they hand them to (software engineers). And then (software engineers) write the rules that go into the machines that execute everything that happens.” The outsized impact of technology on human life demands that Stanford, and institutions like it, take responsibility for these ethical concerns.
Stanford’s computer science department does include a Technology in Society requirement, but it is insufficient. PoliSci 114S, International Security in a Changing World, teaches international relations theory well, but a class that merely surveys cybersecurity concerns cannot enlighten software engineers about the ethical implications of their future work. CS 181, Computers, Ethics and Public Policy, focuses on “privacy, reliability and risks of complex systems, and responsibility of professionals for applications and consequences of their work.” But as an optional course, it does not reach all computer science majors.
WAYS does include an “ethical reasoning” requirement. Nevertheless, the courses that satisfy it, including World of Gandhi, Ethics of Religious Politics, and Contemporary Moral Problems, do not all pertain to computer and information ethics. A new major requirement ought to be developed to fill this glaring educational gap. As Noah Arthurs, a founder of EthiCS, a new student group on Stanford’s campus, expressed, “we need to see a larger change in the department as a whole.”
Stanford certainly lies at the forefront of innovation. Reich explained that student groups like EthiCS and CS+Social Good are “a welcome sign about how Stanford can combine a liberal arts education with a skill-based education.” As a university of unique interdisciplinary prowess that excels in every discipline, Stanford is ideally positioned to train a new type of engineer and humanist: one fluent in both coding languages and ethical issues, technically capable but mindful of the impact of her work on society. If Stanford, the leader of the field, does not establish a standard for ethical computer science, who will?
Engineers aim to improve the human condition and people’s livelihoods. If computer scientists do not consider the moral consequences of their inventions, they will always fall short of that goal. Neither technology nor innovation exists in a bubble. Stanford ought to require computer scientists to study computer and information ethics. Giving students the tools to create harm, without giving them the tools to understand it, is itself unethical.