Developing a Practical Model for Ethical AI in the Business World: Stage I – Executive Design

In this blog post series, Amalgam Insights is providing a practical model for businesses to plan the ethical governance of their AI projects. To read the introduction, click here.

This blog focuses on Executive Design, the first of the Three Keys to Ethical AI introduced in the last blog.

Stage I: Executive Design

As a starting point, any AI project needs to be analyzed in the context of five major questions that matter from both a project management and a scoping perspective. Amalgam Insights cannot control the ethical governance of every company, but we can provide a starting point that lets AI-focused companies know what potential problems they face. Businesses seeking to pursue ethical AI must consider the following questions:

  • What is the goal of the project?
  • What are the key ethical assumptions and biases?
  • Who are the stakeholders?
  • How will AI oversight be performed in an organization?
  • Where is the money coming from?

What is the project goal?

In thinking about the goal of the project, the project champion needs to make sure that the goal itself is not unethical. For instance, the high-level idea of understanding your customers is laudable on its surface. But if the goal of the project is effectively to stalk customers or to open up customer data without their direct consent, the project quickly becomes unethical. Likewise, if an AI project to improve productivity and efficiency is practically designed to circumvent the legal governance of a process, there are likely ethical issues as well.

Although this analysis seems obvious, the potential opacity, complexity, and velocity of AI deployments mean that these topics have to be considered prior to deployment. These tradeoffs need to be analyzed against the risk profile and ethical policies of the company and determined at a high level before pursuing an AI project.

What are the key ethical assumptions and biases?

Every AI project embeds ethical assumptions, compromises, and biases.

Let me repeat that.

Every AI project embeds ethical assumptions, compromises, and biases.

This is a basic premise that every project faces. But because of the complexity of AI projects, the assumptions made during scoping can be ignored or minimized during analysis or deployment if companies do not make a concerted effort to hold onto these basic project tenets.

For instance, it’s easy to say that a company should not stalk its customers. In the scoping process, this may mean masking personal information such as names and addresses from any aggregate data. But what happens if the analysis ends up tracking latitude and longitude to within 1 meter, logging interactions every 10 minutes, and folding ethnic, gender, sexuality, or other potentially identifying or biasing data, along with a phone’s IMEI, into an analysis of propensity to buy? What if these characteristics slip through because they weren’t included in the initial scoping process and there was no overarching reminder not to stalk or overly track customers? In this case, even without traditional personally identifiable information, the net result is potentially even more invasive. And with the broad scope of analysis conducted by machine learning algorithms, it can be hard to fully control the parameters involved, especially in the early and experimental stages of model building and recursive or neurally designed optimization.
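To make this concrete, one way a team might operationalize "don't overly track customers" at the scoping stage is a generalization step that coarsens location and timestamps and drops quasi-identifiers before any modeling begins. This is a hypothetical sketch, not part of the framework above; the field names (`lat`, `lon`, `ts`, `imei`) and the specific thresholds are illustrative assumptions.

```python
# Illustrative sketch: coarsen quasi-identifiers in a record before it
# reaches any modeling pipeline. Field names and defaults are assumptions.

def generalize(record, lat_lon_decimals=2, time_bucket_secs=3600,
               drop_fields=("imei", "ethnicity", "gender", "sexuality")):
    """Return a copy of record with location rounded to roughly 1 km
    (2 decimal places), timestamps bucketed to the hour, and listed
    quasi-identifier fields removed entirely."""
    out = dict(record)
    for field in drop_fields:
        out.pop(field, None)          # drop the field if present
    if "lat" in out:
        out["lat"] = round(out["lat"], lat_lon_decimals)
    if "lon" in out:
        out["lon"] = round(out["lon"], lat_lon_decimals)
    if "ts" in out:
        # snap the Unix timestamp down to the start of its hour bucket
        out["ts"] = out["ts"] - (out["ts"] % time_bucket_secs)
    return out

raw = {"lat": 40.748441, "lon": -73.985664, "ts": 1700000123,
       "imei": "356938035643809", "purchases": 3}
safe = generalize(raw)
```

The point of the sketch is that the "do not stalk" tenet becomes an enforced default in code rather than a reminder in a scoping document.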

So, from a practical perspective, companies need to create an initial set of business tenets to be followed throughout the design, development, and deployment of AI. Although each set of stakeholders across the AI development process will interpret and manage these tenets differently, these business guidelines provide an important set of goalposts and boundaries for defining the scope of the AI project. For instance, a company might set the following characteristics for a project:

  • This project will not discriminate based on gender
  • This project will not discriminate based on race
  • This project will not discriminate based on income
  • This project will not take personally identifiable information without first describing this to the user in plain English (or language of development)

These tenets and parameters should each be listed separately, meaning there shouldn’t be a legalese laundry list saying “this project respects race, class, gender, sexuality, income, geography, culture, religion, legal status, physical disability, dietary restrictions, etc.” This allows each key tenet to be clearly defined based on its own merit.
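One way to keep each tenet separately defined and actionable (a hypothetical sketch, not part of the original framework) is to encode every tenet as its own named check against the features a model is allowed to consume. The tenet names and feature lists below are illustrative assumptions.

```python
# Hypothetical sketch: each business tenet becomes a separate, named
# check rather than one legalese laundry list. Names are illustrative.

TENETS = {
    "no_gender_discrimination": {"gender"},
    "no_race_discrimination": {"race", "ethnicity"},
    "no_income_discrimination": {"income"},
}

def violated_tenets(model_features, tenets=TENETS):
    """Return the sorted names of tenets whose forbidden features appear
    in the model's feature list -- one result per tenet, so each
    violation is surfaced on its own merit."""
    used = set(model_features)
    return sorted(name for name, forbidden in tenets.items()
                  if used & forbidden)

features = ["zip_code", "purchase_count", "gender", "income"]
violations = violated_tenets(features)
```

Because each tenet is a distinct entry, a review meeting can walk the list item by item instead of debating a single catch-all clause.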

These tenets should be part of every meeting and every piece of formal documentation so that stakeholders across executive, technical, and operational responsibilities all see the list and consider it in their own activities. This is important because each set of stakeholders will execute on these tenets differently based on their practical responsibilities: executives will put corporate governance and resources in place, technical stakeholders will focus on potential bias and issues within the data and algorithmic logic, and operational stakeholders will focus on delivery, access, lineage, and other line-of-business concerns associated with front-line usage.

And this list of tenets needs to be short enough to be actionable. This is not the place for a 3,000-word legal document covering every potential risk and problem, but a place to describe specific high-level concerns around bias.

Who are the stakeholders?

The makeup of the executive business stakeholders is an important starting point for determining the biases of the AI project. It is important for any AI project with significant potential organizational impact to have true executive sponsorship from someone who has responsibility for the health of the company. Otherwise, it is too easy for an algorithm to “go rogue” or become an implicit and accepted business enabler without sufficient due diligence.

How will AI oversight be performed in an organization?

AI projects need to be treated with the same types of oversight as hiring employees or any significant change management process. Ideally, AI will be either providing a new and previously unknown insight or supporting productivity that will replace or augment millions of dollars in labor. Companies putting AI into place need to hold AI logic to the same standards as they would hold human labor.

Where is the money coming from?

No matter what the end goal of the AI project is, it will always be judged in the context of the money used to fund it. If an organization fully funds an AI project, it will be held accountable for the outcomes of that AI. If an AI project is funded by a consortium, the ethical background of each funder or purchaser will eventually be considered in determining the ethical nature of the AI. Because of this, it is not enough for an organization to pursue an AI initiative that is potentially helpful. Organizations must also work with partners that align with the organization’s policies and culture. When an AI project becomes public, compliance officers and critics will always follow the money and use it as a starting point to determine how ethical the AI effort is.

In our next blog, we will explore Technical Development with a focus on the key questions that technical users such as data analysts and data scientists must consider as they build out the architecture and models that will make up the actual AI application or service.

Developing a Practical Model for Ethical AI in the Business World: Introduction

As we head into 2020, the concept of “AI (Artificial Intelligence) for Good” is becoming an increasingly common phrase. Individuals and organizations with AI skillsets (including data management, data integration, statistical analysis, machine learning, algorithmic model development, and application deployment skills) have put effort into pursuing ethical AI efforts.

Amalgam Insights believes that these efforts have largely been piecemeal and inadequate to meet common-sense definitions for companies to effectively state that they are pursuing, documenting, and practicing true ethical AI, given the breadth and potential repercussions of AI on business outcomes. This is not due to a lack of interest but to a couple of key considerations. First, AI is a relatively new capability in the enterprise IT portfolio that often lacks formal practices and guidelines and has been managed as a “skunkworks” or experimental project. Second, businesses have seen AI not as a business practice but as a purely technical practice, and in skipping straight to technical development have made a number of assumptions that would typically not be made for more mature technical capabilities and projects.

In the past, Amalgam Insights has provided frameworks to help organizations take the next step to AI through our BI to AI progression.

Figure 1: Amalgam’s Framework from BI to AI

To pursue a more ethical model of AI, Amalgam Insights believes that AI efforts need to be analyzed through three key lenses:

  • Executive Design
  • Technical Development
  • Operational Deployment

Figure 2: Amalgam’s Three Key Areas for Ethical AI

In each of these areas, businesses must ask the right questions and adequately prepare for the deployment of ethical AI. In this framework, AI is not just a set of machine learning algorithms to be utilized, but an enabler to effectively augment problem-solving for appropriate challenges.

Over the next week, Amalgam Insights will explore 12 areas of bias across these three categories. The goal is a straightforward framework that companies can use to guide their AI initiatives and take a structured approach to enforcing a consistent set of ethical guidelines, supporting governance across the executive, technical, and operational aspects of initiating, developing, and deploying AI.

In our next blog, we will explore Executive Design with a focus on the five key questions that an executive must consider as they start considering the use of AI within their enterprise.