Developing a Practical Model for Ethical AI in the Business World: Stage 3 – Operational Deployment

In this blog post series, Amalgam Insights is providing a practical model for businesses to plan the ethical governance of their AI projects.

To read the introduction, click here.

To read about Stage 1: Executive Design, click here.

To read about Stage 2: Technical Development, click here.

This blog focuses on Operational Deployment, the third of the Three Keys to Ethical AI described in the introduction.

Figure 1: The Three Keys to Ethical AI

Stage 3: Operational Deployment

Once an AI model is developed, organizations have to translate this model into actual value, whether it be by providing the direct outputs to relevant users or by embedding these outputs into relevant applications and process automation. But this part of AI also requires its own set of ethical considerations for companies to truly maintain an ethical perspective.

  • Who has access to the outputs?
  • How can users trace the lineage of the data and analysis?
  • How will the outputs be used to support decisions and actions?

Figure 2: Deployment Strategy

Who has access to the outputs?

Just as with data and analytics, the value of AI scales as it reaches additional relevant users. The power of Amazon, Apple, Facebook, Google, and Microsoft in today’s global economy shows the power of opening up AI to billions of users. But as organizations open up AI to additional users, they have to provide appropriate context. Otherwise, these new users are effectively consuming AI blindly rather than as informed consumers. At this point, AI ethics expands beyond a technical problem into an operational business problem that affects every end user.

Understanding the context and impact of AI at scale is especially important for AI initiatives focused on continuous improvement and increasing user value. Amalgam Insights recommends directly engaging user feedback on experience and preference rather than simply depending on A/B testing. It takes a combination of quantitative and qualitative experience to optimize AI at a time when we are still far from truly understanding how the brain works and how people interact with relevant data and algorithms. Human feedback is vital both for AI training and for understanding the perception and impact of AI.

How can users trace the lineage of the data and analysis?

Users accessing AI in an ethical manner should have basic access to the data and assumptions used to support the AI. This means both providing quantitative logic and qualitative assumptions that can communicate the sources, assumptions, and intended results of the AI to relevant users. This context is important in supporting an ethical AI project as AI is fundamentally based not just on a basic transformation of data, but on a set of logical assumptions that may not be inherently obvious to the user.

From a practical perspective, most users will not fully understand the mathematical logic associated with AI, but users will understand the data and basic conceptual assumptions being made to provide AI-based outputs. Although Amalgam Insights believes that the rise of AI will lead to a broader grasp of statistics, modeling, and transformations over time, it is more important that both executive and technical stakeholders are able to explain how AI technologies in production are productive, relevant, and ethical based on both a business and technical basis.
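As a minimal illustration of this kind of lineage context (the field names and values here are hypothetical, not a standard), an AI output can carry its sources and assumptions alongside the prediction:

```python
# Minimal sketch: wrap each AI output with the lineage context a downstream
# user needs to judge it. Field names are illustrative, not a standard.

def with_lineage(prediction, sources, assumptions, model_version):
    """Attach data sources, qualitative assumptions, and model version
    to a prediction so users can trace where it came from."""
    return {
        "prediction": prediction,
        "lineage": {
            "data_sources": sources,        # where the data came from
            "assumptions": assumptions,     # qualitative assumptions made
            "model_version": model_version, # which model produced this
        },
    }

output = with_lineage(
    prediction=0.82,
    sources=["crm_export_2019_q3"],
    assumptions=["null incomes imputed with median"],
    model_version="churn-model-v4",
)
print(output["lineage"]["data_sources"])  # → ['crm_export_2019_q3']
```

Even this small amount of context lets a non-technical user ask the right questions about sources and assumptions without needing to read the model itself.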

How will the outputs be used to support decisions and actions?

Although this topic should already have been explored at the executive level, operational users will have deeper knowledge of how the technology will be used on a day-to-day basis and should revisit this topic based on their understanding of processes, internal operations, and customer-facing outcomes.

There are a variety of ways that AI can be used to support the decisions we make. In some cases, such as search engines and basic prioritization exercises, AI is typically used as the primary source of output. In more complex scenarios, such as sales and marketing use cases or complex business or organizational decisions, AI may be a secondary source that provides an additional perspective, or an exploratory and experimental perspective simply to show how an AI view would differ from a human-oriented one.

But it is important for ethical AI outputs to be matched up with appropriate decisions and outcomes. An example currently creating headlines is the launch of the Apple credit card and decisions being made about disparate credit limits for a married man and woman based on “the algorithm.” In this example, the man was initially given a much larger credit limit than the woman despite the fact that the couple filed taxes jointly and effectively shared joint income.

In this case, the challenge of giving “the algorithm” an automated and primary (and, likely, exclusive) role in determining a credit limit has created issues that are now in the public eye. Although this is a current and prominent example, it is less of a statement about Apple in particular and more of a statement regarding the increasing dependence that financial services has on non-transparent algorithms to accelerate decisions and provide an initial experience to new customers.

A more ethical and human approach would have been to figure out if there were inherent biases in the algorithm. If the algorithm had not been sufficiently tested, it should have been a secondary source for a credit limit decision that would ultimately be made by a human.

So, based on these explorations, we create a starting point for practical business AI ethics.

Figure 3: A Practical Framework

Recommendations

Maintain a set of basic ethical precepts for each AI project across design, development, and deployment. As mentioned in Part 1, these ethical statements should be focused on a few key goals that should be consistently explored from executive to technical to operational deployment. These should be short enough to fit onto every major project update memo and key documentation associated with the project. By providing a consistent starting point of what is considered ethical and must be governed, AI can be managed more consistently.

Conduct due diligence across bias, funding, champions, development, and users to improve ethical AI usage. The due diligence on AI currently focuses too heavily on the construction of models, rather than the full business context of AI. Companies continue to hurt their brands and reputation by putting out models and AI logic that would not pass a basic business or operational review.

Align AI to responsibilities that reflect the maturity, transparency, and fit of models. For instance, experimental models should not be used to run core business processes. For AI to take over significant operational responsibilities from an automation, analytical, or prescriptive perspective, the algorithms and production of AI need to be enterprise-ready just as traditional IT is. Just because AI is new does not mean that it should bypass key business and technical deployment rules.

Review and update AI on a regular basis. Once an AI project has been successfully released into the wild and is providing results, it must be managed and reviewed on a regular basis. Over time, the models will need to be tweaked to reflect real-life changes in business processes, customer preferences, macroeconomic conditions, or strategic goals. AI that is abandoned or ignored will become technical debt just as any outdated technology does. If there is no dedicated review and update process for AI, the models and algorithms used will eventually become outdated and potentially less ethical and accurate from a business perspective.
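A lightweight sketch of what a dedicated review trigger might look like (the threshold and feature here are invented for illustration): flag a model for review when a live input statistic drifts away from its training-time baseline.

```python
# Illustrative sketch, not a production drift detector: flag a model for
# review when the mean of a live feature drifts beyond a tolerance from
# the mean observed at training time.

def needs_review(training_mean, live_values, tolerance=0.2):
    """Return True if the live mean drifts more than `tolerance`
    (as a fraction of the training mean) from the training mean."""
    live_mean = sum(live_values) / len(live_values)
    drift = abs(live_mean - training_mean) / abs(training_mean)
    return drift > tolerance

# Customer spend averaged 100 during training; recent data averages 135,
# a 35% drift that exceeds the 20% tolerance.
print(needs_review(100.0, [130, 140, 135]))  # → True
```

A real review process would track many statistics and involve human judgment, but even a check this simple prevents a model from silently going stale.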

We hope this guide and framework are helpful in supporting more ethical and practical AI projects. If you are seeking additional information on ethical AI, the ROI of AI, or guidance across data management, analytics, machine learning, and application development, please feel free to contact us at research@amalgaminsights.com and send us your questions. We would love to work with you.

AI on AI – 8 Predictions for the Data Savvy Pro

When we started Amalgam Insights, we oh-so-cleverly chose the AI initials with the understanding that artificial intelligence (the other AI…), data science, machine learning, programmatic automation, augmented analytics, and neural inputs would lead to the greatest advances in technology. At the same time, we sought to provide practical guidance for companies seeking to bridge the gaps between their current data and analytics environments and the future of AI. With that in mind, here are 8 predictions we’re providing for 2020 for Analytics Centers of Excellence and Chief Data Officers to keep in mind to stay ahead while remaining practical.

1. In 2020, AI becomes a $50 billion market, creating a digital divide between the haves, prepared to algorithmically assess their work in real time, and the have-nots. Retail, Financial Services, and Manufacturing will account for over half of this market.

2. The data warehouse becomes less important as a single source of truth. Today’s single source replaces data aggregation and duplication with data linkages and late-binding of data sources to bring together the single source of truth on a real-time basis. This doesn’t mean that data warehouses aren’t still useful; it just means that the single source of truth can change on a real-time basis and corporate data structures need to support that reality. And it becomes increasingly important to conduct analytics on data, wherever the data may be, rather than be dependent on the need to replicate and transfer data back to a single warehouse.

3. Asking “What were our 2020 revenues?” will be an available option in every major BI solution by the end of 2020, with the biggest challenge then being how companies will need to upgrade and configure their solutions to support these searches. We have maxed out our ability to spread analytics through IT. To get beyond 25% analytics adoption in 2020, businesses will need to take advantage of natural language queries and searches, which are becoming a general analytics capability, either native or partner-enabled.

4. 2020 will see an increased focus on integrating analytics with automation, process mapping, and direct collaboration. Robotic Process Automation is a sexy technology, but what makes the robots intelligent? Prioritized data, good business rules, and algorithmic feedback for constant improvement. When we talk about “augmented analytics” at Amalgam Insights, we think this means augmenting business processes with analytic and algorithmic logic, not just augmenting data management and analytic tasks.

5. By 2025, analytic model testing and Python will become standard data analyst and business analyst capabilities for handling models rather than specific data. Get started now in learning Python, statistics, an Auto Machine Learning method, and model testing. IT needs to level up from admins to architects. All aspects of IT are becoming more abstracted through cloud computing, process automation, and machine learning. Data and analytics are no exception. Specifically, data analysts will start conducting the majority of “data science” tasks conducted in the enterprise, either as standalone or machine-guided tasks. If a business is dependent on a “unicorn” or a singular talent to conduct a business process, that process is not scalable and repeatable. As data science and machine learning projects start becoming part of the general IT portfolio, businesses will push down more data management, cleansing, and even modeling and testing tasks to the most dependable talent of the data ecosystem, the data analyst.

6. Amalgam Insights predicts that the biggest difference between high ROI and low ROI analytics in 2020 will come from data polishing, not data hoarding. The days of data hoarding for value creation are over. True data champions will focus on cleansing, defining, prioritizing, and separating the 1% of data that truly matters from the 99% more suited to mandatory and compliance-based storage.

7. On a related note, Amalgam Insights believes the practice of data deletion will be greatly formalized by Chief Data Protection Officers in 2020. With the emergence of CCPA along with the continuance of GDPR, data ownership is now potentially risky for organizations holding the wrong data.

8. The accounting world will make progress on defining data as a tangible asset. My expectations: changes to the timeframes of depreciation and guidance on how to value contextually specific data such as customer lists and business transactions. Currently, data cannot be formally capitalized, meaning it cannot be carried as an asset on the balance sheet. Now that companies are generally starting to realize that data may be their greatest asset outside of their talent, accountants will bring up more concerns regarding FASB Statements 141 and 142.

Developing a Practical Model for Ethical AI in the Business World: Stage 2 – Technical Development

In this blog post series, Amalgam Insights is providing a practical model for businesses to plan the ethical governance of their AI projects.

To read the introduction, click here.

To read about Stage 1: Executive Design, click here.

This blog focuses on Technical Development, the second of the Three Keys to Ethical AI described in the introduction.

Figure 1: The Three Keys to Ethical AI

Stage 2: Technical Development

Technical Development is the area of AI that gets the most attention as machine learning and data science start to mature. Understandably, the current focus in this Early Adopter era (which is just starting to move into Early Majority status in 2020) is simply on how to conduct machine learning, data science efforts, and potentially deep learning projects in a rapid, accurate, and potentially repeatable manner. However, as companies conduct their initial proofs of concept and build out AI services and portfolios, the following four questions are important to take into account.

  • Where does the data come from?
  • Who is conducting the analysis?
  • What aspects of bias are being taken into account?
  • What algorithms and toolkits are being used to analyze and optimize?

Figure 2: Technical Development

Where does the data come from?

Garbage In, Garbage Out has been a truism for IT and data projects for many decades. However, the irony is that much of the data that is used for AI projects used to literally be considered “garbage” and archival exhaust up until the practical emergence of the “Big Data” era at the beginning of this decade. As companies use these massive new data sources as a starting point for AI, they must check on the quality, availability, timeliness, and context of the data. It is no longer good enough to just pour all data into a “data lake” and hope that this creates a quality training data sample.

The quality of the data is determined by the completeness, accuracy, and consistency of the data. If the data have a lot of gaps, errors, or significant formatting issues, the AI will need to account for these issues in a way that maintains trust. For instance, a long-standing historical database may be full of null values as the data source has been augmented over time and data collection practices have improved. If those null values are incorrectly accounted for, AI can end up defining or ignoring a “best practice” or recommendation.
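A toy sketch of how null handling changes the output (the income figures are invented for illustration): treating missing values as zero quietly distorts an average that downstream logic might treat as a “best practice” threshold.

```python
# Sketch: a historical income field is full of None values from early,
# sparser data collection. How those nulls are handled changes the
# "recommendation" the AI would produce.

incomes = [None, None, None, 40, 55, 60, 45, 50]

# Naive approach: treat missing as zero -- silently drags the average down.
as_zero = [x if x is not None else 0 for x in incomes]
naive_avg = sum(as_zero) / len(as_zero)

# More careful: compute over observed values only, and report coverage so
# downstream users can see how much of the data is actually present.
observed = [x for x in incomes if x is not None]
observed_avg = sum(observed) / len(observed)
coverage = len(observed) / len(incomes)

print(naive_avg)     # → 31.25, a misleading threshold
print(observed_avg)  # → 50.0
print(coverage)      # → 0.625, worth flagging to users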

From a practical perspective, consider as an example how Western culture has recently started to formalize non-binary gender or transgender identity. Just because data may not show these identities prior to this decade does not mean that these identities didn’t exist. Amalgam Insights would consider a gap like this to be a systemic data gap that needs to be taken into account to avoid unexpected bias, perhaps through the use of adversarial de-biasing that actively takes the bias into account.

The availability and timeliness of the data refer to the accessibility, uptime, and update frequency of the data source. Data sources that are transient or migratory pose a risk for making consistent assumptions from an AI perspective. If an AI project depends on a data source that is hand-curated, bespoke in nature, or inconsistently hosted and updated, this variability needs to be taken into account in determining the relative accuracy of the AI project and its ability to consistently meet ethical and compliance standards.

Data context refers to the relevance of the data both for solving the problem and for providing guidance to downstream users. Correlation is not causation, as the hilarious website “Spurious Correlations” run by Tyler Vigen shows us. One of my favorite examples shows how famed actor Nicolas Cage’s movies are “obviously” tied to the number of people who drown in swimming pools.

Figure 3: Drownings as a Function of Nicolas Cage Movies


(Thanks to Spurious Correlations! Buy the book!)

But beyond the humor is a serious issue: what happens if AI assumptions are built on faulty and irrelevant data? And who is checking the hyperparameter settings and the contributors to parameter definitions? Data assumptions need to go through some level of line-of-business review. This isn’t to say that every business manager is going to suddenly have a Ph.D. level of data science understanding, but business managers will be able to either confirm that data is relevant or provide relevant feedback on why a data source may or may not be relevant.
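To make the spurious-correlation trap concrete, here is a minimal Pearson correlation check on two made-up series that merely trend together; a correlation near 1.0 says nothing about causation.

```python
# Sketch: two unrelated series that both trend upward over the same years
# show a high Pearson correlation with zero causal link -- the "Nicolas
# Cage vs. pool drownings" trap. The data below is invented.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

films_per_year = [10, 12, 14, 16, 18]  # made-up trend
drownings      = [3, 4, 5, 6, 7]       # made-up trend

r = pearson(films_per_year, drownings)
print(round(r, 2))  # → 1.0, perfectly correlated by construction
```

The correlation is perfect here only because both series happen to rise over time; a line-of-business review is what catches the missing causal story.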

Who is conducting the analysis?

The deification of the unicorn data scientist has been well-documented over the last few years. But just as business intelligence and analytics evolved from the realm of the database master and report builder to a combination of IT management and self-service conducted by data-savvy analysts, data science and AI must also be conducted by a team of roles that include the data analyst, data scientist, business analyst, and business manager. In small companies, an individual may end up holding multiple roles on this team.

But if AI is being developed by a single “unicorn” focused on the technical and mathematical aspects of AI development, companies need to make sure that the data scientist or AI developer is taking sufficient business context into account and fully considering the fundamental biases and assumptions that were made during the Executive Design phase.

What aspects of bias are being taken into account?

Any data scientist with basic statistical training will be familiar with Type I (false positive) and Type II (false negative) errors as a starting point for identifying bias. However, this statistical bias should not be considered the be-all and end-all of defining AI bias.
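As a concrete starting point, Type I and Type II error rates can be computed directly from predicted versus actual labels (the labels below are illustrative):

```python
# Sketch: compute Type I (false positive) and Type II (false negative)
# rates from predicted vs. actual binary labels -- the statistical
# starting point for bias, though not the whole story.

def error_rates(actual, predicted):
    fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
    fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)
    negatives = sum(1 for a in actual if a == 0)
    positives = sum(1 for a in actual if a == 1)
    return fp / negatives, fn / positives  # Type I rate, Type II rate

actual    = [1, 1, 1, 1, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 1, 1]

type1, type2 = error_rates(actual, predicted)
print(type1, type2)  # → 0.5 0.25
```

These rates are easy to compute; the harder work, as discussed below, is accounting for organizational, cultural, and contextual bias that no confusion matrix captures.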

As parameters and outputs become defined, data scientists must also consider organizational bias, cultural bias, and contextual bias. Simply stating that “the data will speak for itself” does not mean that the AI lacks bias; this only means that the AI project is actively ignoring any bias that may be in place. As I said before, the most honest approach to AI is to acknowledge and document bias rather than to simply try to “eliminate” bias. Bias documentation is a sign of understanding both the problem and the methods, not a weakness.

An extreme example is Microsoft’s “Tay” chatbot released in 2016. This bot was released “without bias” to support conversational understanding. The practical aspect of this lack of bias was that the bot lacked the context to filter racist messages and to differentiate between strongly emotional terms and culturally appropriate conversation. In this case, the lack of bias led to the AI’s inability to be practically useful. In a vacuum, the most prevalent signals and inputs will take precedence over the most relevant or appropriate signals.

Unless the goal of the AI is to reflect the data that is most commonly entered, an “unbiased” AI approach is generally going to reflect the “GIGO” aspect of programming that has been understood for decades. This challenge reflects the foundational need to understand the training and distribution of data associated with building of AI.

What algorithms and toolkits are being used to analyze and optimize?

The good news about AI is that it is easier to access than ever before. Python resources and a plethora of machine learning libraries including PyTorch, scikit-learn, Keras, and, of course, TensorFlow, make machine learning relatively easy to access for developers and quantitatively trained analysts.

The bad news is that it becomes easy for someone to implement an algorithm without fully understanding the consequences. For instance, a current darling in the data science world is XGBoost (Extreme Gradient Boosting), which has been a winning algorithmic approach in recent data science contests because it converges to an efficient minimum more quickly than standard gradient boosting. But it also requires expertise in starting with appropriate features, stopping the model training before the algorithm overfits, and appropriately fine-tuning the model for production.
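The early-stopping idea can be sketched independently of any particular library (this is a conceptual illustration, not the XGBoost API): stop adding boosting rounds once the validation loss has failed to improve for a set number of rounds.

```python
# Conceptual sketch of early stopping: stop boosting once the validation
# loss fails to improve for `patience` consecutive rounds, and roll back
# to the best round seen, to avoid over-tuning to the training data.

def early_stop_round(val_losses, patience=3):
    """Given the validation loss observed after each boosting round,
    return the round at which training should stop."""
    best, best_round, stalled = float("inf"), 0, 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_round, stalled = loss, i, 0
        else:
            stalled += 1
            if stalled >= patience:
                return best_round  # keep the best model, discard the rest
    return best_round

# Validation loss improves, then plateaus and worsens (overfitting begins).
losses = [0.9, 0.7, 0.6, 0.55, 0.56, 0.57, 0.58, 0.59]
print(early_stop_round(losses))  # → 3
```

Real libraries implement this for you (XGBoost exposes early stopping against a validation set), but knowing what the mechanism does is part of the expertise the paragraph above calls for.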

So, it is not enough to simply use the right tools or the most “efficient” algorithms; practitioners must also effectively fit, stop, and tune models based on the tools being used, creating models that are appropriate for the real world and preventing AI bias from propagating and gaining outsized influence.

In our next blog, we will explore Operational Deployment with a focus on the line of business concerns that business analysts and managers consider as they actually use the AI application or service and the challenges that occur as the AI logic becomes obsolete or flawed over time.

Developing a Practical Model for Ethical AI in the Business World: Stage I – Executive Design

In this blog post series, Amalgam Insights is providing a practical model for businesses to plan the ethical governance of their AI projects. To read the introduction, click here.

This blog focuses on Executive Design, the first of the Three Keys to Ethical AI introduced in the last blog.

Stage I: Executive Design

As a starting point, any AI project needs to be analyzed in the context of five major questions that are important from both a project management and a scoping perspective. Amalgam Insights cannot control the ethical governance of every company, but we can provide a starting point to let AI-focused companies know what potential problems they face. As a starting point, businesses seeking to pursue ethical AI must consider the following questions:

  • What is the goal of the project?
  • What are the key ethical assumptions and biases?
  • Who are the stakeholders?
  • How will AI oversight be performed in an organization?
  • Where is the money coming from?

What is the project goal?

In thinking about the goal of the project, the project champion needs to make sure that the goal, itself, is not unethical. For instance, the high-level idea of understanding your customers is laudable at its surface. But if the goal of the project is effectively to stalk customers or to open up customer data without their direct consent, this project quickly becomes unethical. Likewise, if an AI project to improve productivity and efficiency is practically designed to circumvent legal governance of a process, there are likely ethical issues as well.

Although this analysis seems obvious, the potential opacity, complexity, and velocity of AI deployments mean that these topics have to be considered prior to project deployment. These tradeoffs need to be analyzed based on the risk profile and ethical policies of the company and need to be determined at a high level prior to pursuing an AI project.

What are the key ethical assumptions and biases?

Every AI project makes ethical assumptions, compromises, and biases.

Let me repeat that.

Every AI project makes ethical assumptions, compromises, and biases.

This is just a basic premise that every project faces. But because of the complexities of AI projects, the assumptions made during scoping can be ignored or minimized during the analysis or deployment if companies do not make a concerted effort to hold onto basic project tenets.

For instance, it’s easy to say that a company should not stalk its customers. And in the scoping process, this may mean masking personal information such as names and addresses from any aggregate data. But what happens if the analysis ends up tracking latitude and longitude to within 1 meter, tracking interactions every 10 minutes, and taking ethnic, gender, sexuality, or other potentially identifying or biasing data along with a phone IMEI identification into account as part of an analysis of the propensity to buy? And what if these characteristics escape review because they weren’t included as part of the initial scoping process and there was no overarching reminder not to stalk or overly track customers? In this case, even without traditional personally identifiable information, the net result is potentially even more invasive. And with the broad scope of analysis conducted by machine learning algorithms, it can be hard to fully control the potential parameters involved, especially in the early and experimental stages of model building and recursive or neurally designed optimization.

So, from a practical perspective, companies need to create an initial set of business tenets to be followed throughout the design, development, and deployment of AI. Although each set of stakeholders across the AI development process will have different means of interpreting and managing these tenets, these business guidelines provide an important set of goalposts and boundaries for defining the scope of the AI project. For instance, a company might set the following tenets for a project:

  • This project will not discriminate based on gender
  • This project will not discriminate based on race
  • This project will not discriminate based on income
  • This project will not take personally identifiable information without first describing this to the user in plain English (or language of development)

These tenets and parameters should each be listed separately, meaning there shouldn’t be a legalese laundry list saying “this project respects race, class, gender, sexuality, income, geography, culture, religion, legal status, physical disability, dietary restrictions, etc.” This allows each key tenet to be clearly defined based on its own merit.

These tenets should be a part of every meeting and formal documentation so that stakeholders across executive, technical, and operational responsibilities all see this list and consider this list in their own activities. This is important because each set of stakeholders will execute differently on these tenets based on their practical responsibilities. Executives will place corporate governance and resources in place while technical stakeholders will focus on the potential bias and issues within the data and algorithmic logic and operational stakeholders will focus on delivery, access, lineage, and other line-of-business concerns associated with front-line usage.

And this list of tenets needs to be short enough to be actionable. This is not the place to write a 3,000 word legal document on every potential risk and problem, but a place to describe specific high-level concerns around bias.
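One hedged way to keep such a list actionable (the tenets and function below are hypothetical, not a prescribed format) is to encode it as a short, explicit checklist that every stage must sign off on, with each tenet acknowledged separately rather than buried in a legalese laundry list:

```python
# Sketch: ethical tenets as a short, machine-readable checklist that
# travels with every project update. Each tenet is acknowledged
# separately at each stage; a missing acknowledgement fails loudly.

TENETS = [
    "Will not discriminate based on gender",
    "Will not discriminate based on race",
    "Will not discriminate based on income",
    "Will not take PII without plain-language disclosure",
]

def stage_signoff(stage, acknowledged):
    """Raise if any tenet was not explicitly reviewed at this stage."""
    missing = [t for t in TENETS if t not in acknowledged]
    if missing:
        raise ValueError(f"{stage}: tenets not reviewed: {missing}")
    return f"{stage}: all {len(TENETS)} tenets reviewed"

print(stage_signoff("Technical Development", set(TENETS)))
```

The point is not the code but the discipline: a list this short can be pasted into every memo and checked at executive, technical, and operational handoffs.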

Who are the stakeholders?

The makeup of the executive business stakeholders is an important starting point for determining the biases of the AI project. It is important for any AI project with significant potential organizational impact to have true executive sponsorship from someone who has responsibility for the health of the company. Otherwise, it is too easy for an algorithm to “go rogue” or become an implicit and accepted business enabler without sufficient due diligence.

How will AI oversight be performed in an organization?

AI projects need to be treated with the same types of oversight as hiring employees or any significant change management process. Ideally, AI will be either providing a new and previously unknown insight or supporting productivity that will replace or augment millions of dollars in labor. Companies putting AI into place need to hold AI logic to the same standards as they would hold human labor.

Where is the money coming from?

No matter what the end goal of the AI project is, it will always be judged in the context of the money used to fund the AI. If an organization is fully funding an AI project, it will be held accountable for the outcomes of the AI. If an AI project is funded by a consortium of funders, the ethical background of each funder or purchaser will eventually be considered in determining the ethical nature of the AI. Because of this, it is not enough for an organization to be pursuing an AI initiative that is potentially helpful. Organizations must also work with partners that align with the organization’s policy and culture. When an AI project becomes public, compliance officers and critics will always follow the money and use this as a starting point to determine how ethical the AI effort is.

In our next blog, we will explore Technical Development with a focus on the key questions that technical users such as data analysts and data scientists must consider as they build out the architecture and models that will make up the actual AI application or service.

Developing a Practical Model for Ethical AI in the Business World: Introduction

As we head into 2020, the concept of “AI (Artificial Intelligence) for Good” is becoming an increasingly common phrase. Individuals and organizations with AI skillsets (including data management, data integration, statistical analysis, machine learning, algorithmic model development, and application deployment skills) have put effort into pursuing ethical AI.

Amalgam Insights believes that these efforts have largely been piecemeal and inadequate: given the breadth and potential repercussions of AI on business outcomes, they do not meet common-sense standards that would allow companies to state that they are pursuing, documenting, and practicing true ethical AI. This is not due to a lack of interest, but to a couple of key considerations. First, AI is a relatively new capability in the enterprise IT portfolio that often lacks formal practices and guidelines and has been managed as a “skunkworks” or experimental project. Second, businesses have seen AI not as a business practice but as a purely technical practice, and in skipping straight to technical development have made a number of assumptions that would typically not have been made for more mature technical capabilities and projects.

In the past, Amalgam Insights has provided frameworks to help organizations take the next step to AI through our BI to AI progression.

Figure 1: Amalgam’s Framework from BI to AI

To pursue a more ethical model of AI, Amalgam Insights believes that AI efforts need to be analyzed through three key lenses:

  • Executive Design
  • Technical Development
  • Operational Deployment

Figure 2: Amalgam’s Three Key Areas for Ethical AI

In each of these areas, businesses must ask the right questions and adequately prepare for the deployment of ethical AI. In this framework, AI is not just a set of machine learning algorithms to be utilized, but an enabler to effectively augment problem-solving for appropriate challenges.

Over the next week, Amalgam Insights will explore 12 areas of bias across these three categories with the goal of developing a straightforward framework that companies can use to guide their AI initiatives and take a structured approach to enforcing a consistent set of ethical guidelines to support governance across the executive, technical, and operational aspects of initiating, developing, and deploying AI.

In our next blog, we will explore Executive Design, focusing on the five key questions executives must ask as they begin considering the use of AI within their enterprise.

TEM Market Leaders Calero and MDSL Merge as Global IT Spend Management Consolidation Continues

Key Stakeholders: Chief Information Officer, Chief Financial Officer, Chief Accounting Officer, Controllers, IT Directors and Managers, Enterprise Mobility Directors and Managers, Networking Directors and Managers, Software Asset Directors and Managers, Cloud Service Directors and Managers, and other technology budget holders responsible for telecom, network, mobility, SaaS, IaaS, and IT asset and service expenses.

Why It Matters: The race for IT spend management consolidation continues. The financial management of IT is increasingly seen as a strategic advantage for managing the digital supply chain across network, telecom, wireless, cloud, software, and service portfolios.

Top Takeaway: The new combined business, with over 800 employees, 3,500 customers, and an estimated 2 million devices and $20 billion under management, both serves as legitimate competition for market leader Tangoe and represents an attractive potential acquisition target for larger IT management vendors.

[Disclaimer: Amalgam Insights has worked with Calero and MDSL. Amalgam Insights has provided end-user inquiries to both Calero and MDSL customers. Amalgam Insights has provided consulting services to investors and advisors involved in this acquisition.]


Look Beyond The Simple Facts of the Cimpl Acquisition

(Note: This blog was co-written by Hyoun Park and Larry Foster, an Enterprise Technology Management Association Hall of Famer and an executive who has shaped the Technology Expense Management industry. Please welcome Larry’s first contribution to Amalgam Insights!)

On August 22, 2019, Upland Software announced the acquisition of Cimpl (f.k.a. Etelesolv), a Montreal-based telecom expense management platform that was the market leader in the Canadian market and had expanded into the United States market. With this acquisition, Cimpl will become a part of Upland’s Project & Financial Management Solution Suite and add approximately $8 million in annual revenue.

Context for the Acquisition

The TEM (Technology Expense Management) industry has experienced a continual series of ebb-and-flow acquisitions/mergers over the past twelve years. The typical TEM acquisition/merger encompasses two or more independent entities within the realm of TEM, WEM (Wireless Expense Management) or MMS (Managed Mobility Services) merging to create a more comprehensive expense management solution portfolio with superior global delivery capabilities.

The reality is that many of these mergers are driven by economic reasons: one or both entities can reduce overhead by eliminating duplicate services. Overhead is eliminated by unifying back-office operations and amalgamating technology platforms. These types of consolidation mergers are typical in a maturing industry that is eventually dominated by a few leading solution providers representing the majority of market share. All of the leading TEM solution providers, including Tangoe, MDSL, and Calero, have long histories of multiple “like-minded mergers.”

Cimpl as an outlier in the TEM market

Until this recent acquisition, Cimpl maintained the persona of the independent dark horse of the TEM industry, quietly residing in Quebec, Canada, refining its multi-tenant cloud platform and progressively building its market share.

Unlike most TEM managed service solution providers, Cimpl has chosen to focus on being primarily a pure software company, providing a white-label technology platform for its delivery partners. In early 2018, Cimpl stealthily started to expand its physical presence into the United States. Since its inception, Cimpl has achieved steady, incremental success and stayed profitable, in contrast to a number of TEM vendors that have gone through boom-or-bust cycles driven by external funding (or the lack thereof).

The Challenge for TEM

The traditional acquisition playbook is preventing the TEM industry from being recognized as a strategic asset by organizations. Nonetheless, the TEM industry is experiencing a dramatic paradigm shift as organizations continue to replace legacy communication services with the ever-growing spectrum of cloud-based services. Traditionally, TEM solutions have focused on validating the integrity of invoice charges across multiple vendors prior to payment and allocating expenses to the respective cost centers leveraging the leased service. Enterprises derive value from TEM solutions by enabling a centralized ICT (Information and Communications Technology) shared service to automate the lifecycle from provisioning through payment and managing the resolution of disputed invoice charges for essentially static services.

However, as organizations adopt more ephemeral cloud services that encompass multi-vendor private, public and hybrid leased environments for compute, storage, API-enabled integrations, connectivity, input/output, and telecommunications, the purpose of the centralized ICT business operation is being transformed from managing daily operations to a fiduciary broker focused on optimizing technology investments. Unlike the recurring charges that represent the majority of traditional telecom charges, cloud services are consumption-based, meaning that it’s the responsibility of the client user to deactivate and manage the appropriate configuration of contracted services based on statistical analysis and forecast of the actual usage.

In the world of cloud, the provisioning activities such as activations, changes, and deactivations are done “on-demand,” completely independent from the ICT operation. The primary focus of ITEM solutions is to manage recurring and non-recurring invoice charges in arrears. As ICT operations evolve into technology brokers, they need real-time insight underpinned by ML and AI algorithms that make cost optimization recommendations to add, consolidate, change or deactivate services based on usage trends.

Why the Cimpl acquisition will help Upland

This context brings us to the real ingenuity of the Cimpl acquisition. In the typically quiet financial days of August, when everyone is away on vacation, Upland Software announced an accretive acquisition of Cimpl with a purchase price of $23.1 million in cash and a $2.6 million cash holdback payable in 12 months. Upland expects the acquisition to generate annual revenue of approximately $8 million, of which $7.4 million is recurring. The keyword buried within all of those financial statistics is “accretive,” meaning the acquisition is expected to add to Upland’s earnings from the start rather than dilute them.
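As a back-of-the-envelope check on the figures above, the deal's implied revenue multiple can be computed directly. This is a minimal sketch in Python; the assumption that total consideration equals the upfront cash plus the holdback is ours for illustration, not stated in the announcement:

```python
# Back-of-the-envelope math for the Cimpl deal terms cited above.
upfront_cash = 23.1e6      # purchase price in cash, USD
holdback = 2.6e6           # cash holdback payable in 12 months
annual_revenue = 8.0e6     # expected annual revenue from Cimpl
recurring_revenue = 7.4e6  # recurring portion of that revenue

# Assumption: total consideration = upfront cash + holdback.
total_consideration = upfront_cash + holdback
revenue_multiple = total_consideration / annual_revenue
recurring_share = recurring_revenue / annual_revenue

print(f"Total consideration: ${total_consideration / 1e6:.1f}M")
print(f"Implied revenue multiple: {revenue_multiple:.2f}x")
print(f"Recurring revenue share: {recurring_share:.1%}")
```

The recurring share above 90% is what makes the accretive framing plausible: most of the acquired revenue repeats each year without new sales effort.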

Upland already has an impressive complementary portfolio of profitable software solutions. A closer look at the acquisition of Cimpl shows how Upland is formulating a solution strategy to manage all aspects of the Information and Communication Technology business operations.

The strategic value of the Cimpl acquisition becomes very clear when you recognize that Upland is the first company to combine an IT Financial Management (ITFM) platform, ComSci, with an IT Expense Management (ITEM) solution, Cimpl. Upland already owns additional complementary solutions, including document and workflow automation, a BI platform, a customer engagement platform, and a knowledge management platform. With these components, Upland is working to create an industry-leading ERP-type solution framework to automate, manage, and rationalize all aspects of ICT business operations.

Although both ITFM and ITEM support ICT business operations, they focus on different aspects. ITFM is predominantly used on the front end to manage budgets and, on a monthly basis, to support internal billing and chargeback activities, and is leveraged by the IT CFO office. ITEM solutions like Cimpl, by contrast, are used by analysts and operational managers because they focus on managing the high volumes of transactional operations and data throughout the month, including the provisioning and payment of leased services such as landline and mobile communication services and now the ever-expanding array of cloud services.

Looking Forward: Our Recommendations

In this context, take the following recommendations into account based on this acquisition.

Expect other leading TEM, ITFM, CEM/CMP (Cloud Expense Management and Cloud Management Platform) solution providers to develop competitive solution frameworks that bring multiple IT categories together from a finance and expense management perspective.

ICT managers need to evolve their due-diligence approach beyond pursuing and leveraging independent ITFM, ITEM, and CEM/CMP solutions toward choosing solutions with comprehensive IT management frameworks. As IT becomes increasingly based on subscriptions, project-based spend, and on-demand peak usage across a variety of categories, ICT managers should aim for a single management control plane for finance and expenses rather than depending on a variety of management solutions.

Real-time management is the future of IT expense management. The next levels of operational efficacy will be underpinned by more comprehensive real-time insight that helps organizations understand the most optimal way to acquire, configure and consume inter-related cloud services and pay their invoices. This will require insights on usage, project management, service management and real-time status updates associated with expense and finance. By combining financial and operational data, ICT managers will have greater insights into the current and ongoing ROI of technology under management.

Mobile Solutions Launches New Robotic Process Automation Capabilities

[Edited July 25th to reflect Mobile Solutions’ current public-facing offerings]

Key Takeaway: Mobile Solutions is providing Robotic Process Automation for Managed Mobility Services with a focus on mid-market enterprises and organizations. This capability provides Mobile Solutions with a starting point for handling basic service orders.


3 Big 2019 Trends and 4 Strategic Tips for Managing the Transforming Cost of Technology

Amalgam Insights estimates that the total technology spend formally and centrally managed by enterprises with over $1 billion in revenue from telecom, network, mobility, Software-as-a-Service (SaaS), and Infrastructure-as-a-Service will double between June 2019 and the end of 2021, driven by the massive growth of cloud computing and the need to manage a variety of “shadow IT” costs that grow to the point where formal management is required. By “formally and centrally managed,” Amalgam Insights means full visibility of inventory, contracts, billing, and service orders across all vendors, with active usage and supplier optimization efforts.

In 2019, Amalgam Insights notes several key trends in the world of technology expense management that IT organizations should be aware of.

First, every IT expense management solution is increasingly focused on cloud-based expenses, whether Software as a Service or Infrastructure as a Service. Established Software Asset Management (SAM) companies, including Aspera, Flexera, ServiceNow, and Snow Software, are working on their SaaS expense capabilities, and a variety of standalone vendors, including Alpin, Binadox, Cleanshelf, Intello, Torii, and Zylo, are emerging. Amalgam Insights is planning a SmartList for November 2019 focused on key vendors managing SaaS across the SaaS expense, SAM, and Technology Expense markets.

On the IaaS side, every major global Technology Expense Management solution has launched IaaS management capabilities, also known as FinOps or Cloud FinOps, including Asignet, Calero, Cass Information Systems, Cimpl, Dimension Data, MDSL, Sakon, and Tangoe. In addition, this market has been an area of rapid acquisition over the past couple of years including Microsoft’s acquisition of Cloudyn, Apptio’s acquisition of Cloudability (which calls this practice “FinOps”), and VMware’s acquisition of CloudHealth Technologies. And there are still standalone players such as Cloudcheckr in this space as well. This crowded market seeking to manage the next $100 billion of public cloud spend represents an interesting set of choices for IT departments in choosing how to aggregate and manage IT spend.

(As an aside, Amalgam Insights finds the use of the term “FinOps” by Apptio and Cloudability to be an interesting way to coordinate multiple departments and provide guidance on how to manage cloud expenses. At the same time, FinOps seems to be reinventing the wheel to some extent, rebuilding a set of practices and cross-departmental teams that have already been managing telecom expenses for a number of years. Amalgam Insights is quite interested in seeing how this duplication of effort within IT departments will sort itself out: through the establishment of separate Cloud FinOps departments, the integration of Cloud FinOps and “Telecom FinOps” (a.k.a. Telecom Expense Management), or the integration of both cloud and telecom into a larger IT expense or finance role. This is an interesting transitional period for IT expense as more spend moves to subscription, usage, feature, user, department, and project-based spend and chargeback models.)

A third key trend is the move to Europe. European IT has traditionally been informally managed, or managed in geographic silos that prevent strong global management and alignment with strategic enterprise efforts. To solve this problem, there has been a variety of acquisitions and office launches across Europe by the likes of Calero, MDSL, Tangoe, Flexera, Snow Software, Cloudcheckr, ServiceNow, VMware, and others. Amalgam Insights notes that European IT management challenges have been poorly supported in the past by global vendors that have not adequately accounted for the differences in managing data, connectivity, and compliance in each European country, as well as their relative lack of geographic footprint to support services. In light of this, Amalgam Insights has been tracking European IT management successes and provides guidance on this topic for end-user, investor, and vendor advisory clients.

To prepare for this evolution in technology expense management, Amalgam Insights provides the following guidance for Chief Information Officers, Chief Procurement Officers, Chief Accounting Officers, and related IT procurement, finance, and expense managers to prepare for the second half of 2019 and beyond based on prior guidance and research.

$100K and 30% are key benchmarks for specialized IT spend categories. Once an IT spend management category exceeds $100,000 per month, organizations can typically save at least one employee’s worth of payroll through optimization by pursuing the 30% savings that commonly exist in an environment that has not been formally managed or audited over the last five years. At this point, your company should look for a dedicated expense management solution, whether that is manual support from an in-house employee with telecom experience, a software platform, or managed services to support contract, invoice, inventory, and usage management. These rules of thumb are especially true in SaaS and IaaS management, which are currently rife with poor governance, duplicate spending, and unmonitored usage patterns associated with decentralized cloud computing purchases.
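The arithmetic behind the $100K/30% rule of thumb can be sketched as follows. This is a hypothetical Python illustration; the fully loaded payroll figure is an assumption we supply for the example, not an Amalgam Insights benchmark:

```python
# Rule-of-thumb check: at a 30% savings rate, does a spend category's
# potential annual savings cover one dedicated employee's payroll?
def annual_savings(monthly_spend: float, savings_rate: float = 0.30) -> float:
    """Estimated yearly savings from formally managing a spend category."""
    return monthly_spend * 12 * savings_rate

monthly_spend = 100_000  # the $100K/month benchmark from the text
payroll_cost = 120_000   # assumed fully loaded annual cost of one FTE

savings = annual_savings(monthly_spend)
print(f"Potential annual savings: ${savings:,.0f}")
print(f"Covers one FTE's payroll: {savings >= payroll_cost}")
```

At $100K per month, a 30% recovery works out to $360,000 a year, which comfortably exceeds the cost of a dedicated analyst under most payroll assumptions.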

Enterprise buyers at Global 2000 companies should review their current strategy for IT spend categories with the goal of supporting all cloud, software, hardware, network, and mobility spend from a usage- and subscription-based perspective. IT is moving to a subscription and usage-based paradigm that started with enterprise telecom and then evolved through the enterprise adoption of cloud computing. For firms currently considering new or replacement IT expense vendors, due diligence in understanding prospective vendors’ roadmaps and experience with new technology categories is vital to futureproofing this investment. This is especially true in an “As-a-Service” world where all aspects of IT are increasingly being sourced and billed in a telecom-like fashion, which shifts these vendor relationships from traditional asset-based approaches to subscription- and relationship-based approaches. Look for depth in the following functional areas: inventory, invoice line-items, service orders, disputes, optimization, governance, and security.

Customer satisfaction, geographic expertise, vertical expertise, and depth of managed and professional services are key differentiators. Amalgam recommends that companies looking at TEM solutions focus not only on the technical aspects of invoice and inventory management, but also on the alignment between the vendor’s expertise and the potential buyer’s geographic footprint and business model. This alignment is often more important than the extremely granular Requests for Proposals that Amalgam has seen in this industry. In 2019, there have been a number of specific trends in this regard, such as the telecom expense management market’s push toward aggregating European spend among market leaders, a focus on providing additional managed services such as security and managed mobility, and vertical-specific cost management approaches that also include specific asset management and IT management strategies. Customer retention and satisfaction are also key metrics. For mature solutions, it is not uncommon to see annual customer retention above 95% and a year-over-year increase in wallet share as new cloud and app spend is brought into a solution.

Rather than ask hundreds of questions where there is little to no differentiation, such as how invoices are processed and whether a specific type of inventory can be stored within the solution, focus on how the vendor supports the usage and management of value-added technology. The goal of IT is not to restrict the use of helpful technologies, but to increase the use of productivity-driving and outcome-improving technologies and then to provide that optimal level of utilization in a cost-effective manner. Spend management vendors that can help identify technology associated with revenue growth or customer satisfaction provide a competitive edge in understanding the cost basis of strategic IT. By taking these steps, companies can start to better understand how to manage IT spend more practically.

(Note: This piece is an excerpt from Amalgam Insights’ upcoming SmartList for Technology Expense Management Market Leaders scheduled to publish in August 2019. If you would like more information about this topic, if you are considering a net-new or replacement purchase for IT expense management, or are interested in the upcoming report, please feel free to contact us at info@Amalgaminsights.com)

Strategic Presentation for the Amalgam Insights Community – 5G Context for the Strategic Enterprise

I’ve recently had the opportunity to present on the present and future of 5G as a business enabler. Based on the past 20 years I’ve spent around the carrier, reseller, IT management, and industry analyst sides of the business, I’m looking forward to sharing part of my presentation with the Amalgam Insights audience and showing why 5G introduces a new Age of Mobility to follow the Age of Voice, the Age of Text, the Age of Apps, and the Age of Streaming!

If you’re interested in further discussing any aspect of this deck or the repercussions of 5G for your business, please feel free to get in contact with us at info@amalgaminsights.com! Please click on the link below to download the slide deck and learn more about what 5G practically means for the world of business over the next year or two.

Amalgam Insights – 5G Context for the Strategic Enterprise