
Does it Delight? Does it Parse? Does it Pay?

[Image: yellow, diamond-shaped traffic sign reading "Moral Dilemma Ahead"]
“Every bad thing that can happen in the real world can now happen on the Internet.”
- Vint Cerf

1843 published a piece recently that highlighted the work of behavioral scientist BJ Fogg (among others) and discussed some of the unintended consequences that fall out of the work of digital makers. At the lighter end of the spectrum are the tactics that try to ensure we stick with a brand whose products and services we might genuinely enjoy; at the other end are concerns like the fostering of extreme addictions and hate that can cause long-term personal and social harm. In the last month, there has also been a fresh wave of stories in the US about the power of the Web as an indiscriminate publishing platform. In The Washington Post, Caitlin Dewey interviewed Paul Horner about his creation of fake news and its possible impact on the 2016 US presidential election. And questions are increasingly being raised about the responsibilities of companies like Twitter and Facebook to monitor and react to account behaviors related to hate speech.

At the same time, a not-so-new conversation is being re-energized in the world of artificial intelligence. Greg Satell’s recent piece on the Harvard Business Review site outlines some of the challenges behind teaching machines right from wrong. This is partly a coding and engineering challenge, but there are other concerns as well. With artificial intelligence, we put some of the decision-making about where the information we publish goes at even greater arm’s length than with traditional publishing. Automation gives the algorithm tremendous power — pushing and pulling our content at will (and creating its own). How do we as makers ensure that our information flows the “right” way instead of the “wrong” way? And what do those who conceive of algorithms (and other functionality) know about right and wrong in the first place? What are we as makers doing to ensure that ethical considerations inform what we create? Maybe not a lot. And that’s not a good answer.

Most of us in the digital maker community know there are ethical concerns related to technology development — we see the database schemas and know what information is being gathered. We create the design briefs that outline how to move users to buy and click, and we attempt to answer the questions of executives who want to understand how all our efforts drive the user further down the sales or donations funnel.

Practically speaking, digital makers know that not everything we make is necessarily of the best quality and that some of the things we design and code flat-out don’t work or lead to negative and unintended outcomes. But we shrug off the responsibility for low quality or unintended negative outcomes by saying “my boss made me build it.” We, who hold the markup and code that manifest the Web, blame low quality and ethical ambiguities on The Man. Or we rationalize this as a natural outcome of “innovation,” saying that these functional and sometimes ethical messes are part of the growth and maturity process of a new technology. I agree with that last rationale in principle. But, in the end, innovation shouldn’t undermine fundamental human rights. We have to try to do good — which means we need to define what it means to do good online. As designers, developers, and business people, we need to consider more than whether our code parses, our designs delight the user, or our online pay funnel delivers measurable results. We need to consider the ethical implications of what we make.

This is not the first time these sorts of issues have arisen. The history of information collection and dissemination is rife with philosophical and moral debates sparked by rapid changes in the way groups of people interact around information. In 15th-century Europe, both church and state were eager to censor the new printing press as “radical” new voices emerged. There’s complexity to this issue. But the larger question is: what should we do about these challenges this time around? What types of responsibilities might digital makers need to carry as we construct the online universe that, more and more, shapes and evolves our daily reality? Most of the things we might want to consider track back to age-old philosophical questions around concepts like good vs. evil; free will vs. determinism; or, more optimistically, what it means to live a good life. That leads to natural questions for digital makers:

  • Do the things that we make promote good or promote evil?
  • Is there a right and a wrong way to make the things we put online?
  • Are digital makers free or is their work determined by the organizations that pay them?
  • Are the things we are making contributing to the good life?

These questions are basic but still offer a rich palette for consideration. And, as is the case with most philosophical questions of this sort, the power is found in asking and discussing them — and in trying to take a considered approach. Of course, some things we will determine to be fundamentally right or wrong — things we should never make, or functionality we should always support. But other functionality is likely to lie somewhere between the binary extremes in which we as people (and coders) like to settle things. The fear of murky answers shouldn’t stifle the analysis or debate about ethical concerns. Hard-lining to the left or the right of the question in a stalemate will get us nowhere and will likely leave the maker community in the position of having policy imposed upon us by those who don’t really understand the technologies we create. It’s those of us with our hands in the digital machine who know what cookies are being tossed, what data is logged, and what behaviors are influenced. We have to act, especially because those who hold the purse strings of online development often have little interest in, or capacity for, asking the right questions.

When should makers consider the implications of their inventions?

It’s no secret that my area of professional focus is digital governance — how to align teams to work together well toward shared goals and objectives. One of the basic considerations I work through with digital teams is understanding when, in a product development lifecycle, to establish governing controls. My usual answer is “not too early, but before you scale.” I say that because over-governing product development out of the gate can stifle creativity and invention. But that doesn’t mean never govern. Once you’ve got something that works, you have to establish norms to scale effectively. Attempting to replicate or scale functionality without norms in place generally leads to messes, inconsistencies, and a lack of systemic interoperability — unexpected and unintended outcomes. Yet many organizations, in the zeal of having invented a digital success story, still scale without putting standards in place and without considering the shape of the team that will operate the new product. That’s usually a mistake.

Not considering the ethical implications of what is being made and brought to market (often at a global scale) is another mistake made at this crucial pre-scale/rollout inflection point. We need to stop and take account of the implications of what we are building before we toss it out to the world at large. Most mature product organizations wouldn’t bring to market a product (such as a car) that hadn’t been safety tested — yet organizations do exactly that with digital products and services all the time. And while safety testing of hardware is not a perfect corollary to the ethical considerations surrounding online development, it’s close, and some would say that, with connected devices, it’s the same thing.

Who should lead the ethical dialogue?

Most organizations, tacitly or overtly, have some fundamental ethical considerations built into their corporate culture (e.g., they probably wouldn’t ship functionality that promotes intentional physical or emotional harm or uses abusive or bigoted language). So even for the most innovative and fast-paced organizations, there are usually some basic ethical considerations that makers and businesses have silently weighed before they even begin inventing. But how do we deal with discussions about the stuff that’s in the grey area, or things that are new — like so much of what we do in digital? How do we get that conversation started, and who should lead it?

The maker community has started to hold itself accountable by raising the conversation about ethics. Some have discussed a voluntary digital Hippocratic Oath, and that’s well-intentioned. But in order to have an informed dialogue, the community needs to include other resources in the conversation. In general, coders and designers are not educated to understand the ethical, sociological, and political implications of the work they do. They might have instincts or well-intentioned views, but relying on those instincts to ensure that the work they create is ethically sound is not a sensible approach. There are professional ethicists, sociologists, and historians (to name a few) who can bring different types of views and knowledge to the table to enrich the dialogue about the “right and wrong” of digital making. I take inspiration from Erik Fisher at Arizona State University, whose ideas and methodologies around midstream modulation are being applied to the development of nanotechnologies that support stem cell research. Digital technologies are powerful and pervasive, so along with considering how to operationally scale functionality or a platform and bring it to market, organizations should also consider some fundamental ethical concerns and employ professionals to help them do so.

This will be a hard shift. Our digital maker culture often demands that we make, and make, and make, never turning back to see the messes we might have created. Some of those messes are catching up with us, and it’s important to pay attention and realize that knowledge and potential solutions are available to us. We can and must have this conversation and modulate our product development practices to ensure an ethically sound online experience for all of us. After all, how can we teach machines to distinguish right from wrong when we’re not really clear on those answers ourselves?