Note: This blog is a follow-up to Amalgam Insights’ visit to the “Change the Game” event held by IBM in New York City.
On September 19th, IBM announced the launch of a portfolio of AI trust and transparency capabilities. This announcement got Amalgam Insights’ attention because of IBM’s relevance and focus in the enterprise AI market throughout this decade. To understand why this specific launch matters, it helps to take a step back and consider IBM’s considerable role in building out the current state of the enterprise AI market.
IBM AI in Context
Since IBM’s public launch of IBM Watson on Jeopardy! in 2011, IBM has been a market leader in enterprise artificial intelligence, spending billions of dollars to establish both the Watson brand and enterprise AI more broadly. This has been a challenging path to travel, as IBM has had to balance this market-leading innovation with the financial demands of supporting a company that brought in $107 billion in revenue in 2011 and has since seen that number shrink by almost 30%.
In addition, IBM had to balance its role as an enterprise technology company focused on the world’s largest workloads and IT challenges with launching an emerging product better suited to highly innovative startups and experimental enterprises. IBM also faced the “cloudification” of enterprise IT in general, in which the traditional top-down purchase of multi-million-dollar IT portfolios is being replaced by piecemeal, business-driven purchases and consumption of best-in-breed technologies.
Seven years later, the jury is still out on how AI will ultimately transform enterprises. What we do know is that a variety of branches of AI are emerging, including:
- Language-based bots better suited to the contact center world
- Natural language interfaces taking over in analytics and business intelligence
- APIs and microservices that provide digestible AI, algorithmic, and statistical tools for agile application development environments, easily embedded into applications
- AI-driven applications used to coach, prescribe, and prioritize courses of action from either a transactional or a workflow-based perspective
- Data science platforms used to develop analytic and algorithmically driven applications from scratch
- Neural net and computing development used to create increasingly “human” logical and contextual decision-making environments
None of these approaches has effectively taken over its respective market from a revenue and adoption perspective, meaning that these markets still have massive upside.
The Reality of Enterprise AI
Although we all know that our future will be more AI-driven, we also know that anyone who claims to know exactly which approaches will be most dominant in 2025 is probably lying. This is part of the problem in AI. In the world of enterprise mobility, we could make clear assumptions in 2010 about a future full of ubiquitous handheld supercomputers connected both to each other and to the cloud. The world of AI is still relatively nebulous by comparison: it is hard to tell which aspects of AI will be most influential in the near future, and no single overarching approach defines “AI in the enterprise.” This makes it difficult for enterprises to fully invest in AI, because investing in the future of AI means developing a wide portfolio of capabilities.
This was part of IBM’s challenge over the past several years. IBM built out a wide range of offerings:
- IBM Watson as a standalone, Jeopardy!-like solution for healthcare, financial services, and other enterprises
- Watson Developer Cloud to provide language, vision, and speech APIs
- Watson Analytics, integrated with Cognos Analytics, to support predictive and reporting analytics
- Watson Assistant to support chatbots and assistants for talent management and other practical use cases
- Watson Studio and Data Science Experience to support enterprise data science efforts to embed statistical and algorithmic logic into applications
- Evolutionary neural network, clustering, and design work at the research level
And, frankly, the velocity of innovation was difficult for enterprise buyers to keep up with, especially as diverse products were all labeled Watson and as buyers were still learning at a fundamental level about technologies such as chatbots, data science platforms, and IBM’s hybrid cloud computing options.
The level of education needed to simultaneously support all of these relatively low-revenue and experimental investments in AI made for a tough path for IBM (and, in retrospect, the effort may have been better supported as a funded internal spin-off with access to IBM patents and technology). Consider that extremely successful startups can justify billion-dollar valuations on $100 million or less in annual revenue, while IBM is judged on multi-billion-dollar revenue changes on a quarter-by-quarter basis. That juxtaposition makes it hard for public enterprises to support the audacious, aggressive goals of a startup that may take ten years of investment and loss to realize.
As a result, IBM has built a deep foundation in AI as well as developed a strong cadre of technology executives who are fluent in the challenges of AI and are building these capabilities out both internally as well as at a variety of emerging startups.
(Note: This is not intended to be a complete analysis of IBM, but to provide the specific AI context needed to analyze this announcement. For those seeking deeper analysis, you know where to find me.)
IBM Trust and Transparency on the IBM Cloud
In the context of IBM’s enterprise AI experience, the introduction of IBM’s Trust and Transparency capabilities is interesting. These capabilities were launched on September 19th and include an introductory freemium option. The solution is touted as being able to detect fairness issues at runtime, support bias mitigation, provide AI traceability and auditability for predictions made in production environments, track the accuracy of AI, and explain outcomes in business terms.
IBM’s progress in supporting fairness detection and bias mitigation will be interesting to track, as there are many forms of “fairness.” Amalgam Insights looks at data fairness in terms of several different criteria, including the assumptions made in determining:
- adequate data sampling
- data structuring
- data quality
- data contextualization and augmentation
- data placed in production
- data available for analysis
- business relevancy of data
- analytic processing
- analytic modeling
- contextual relevance
As a starting point, Amalgam Insights expects that IBM’s “fairness” is focused on the statistical relevance and confidence associated with specialized data. But given IBM’s strong capabilities across data management, metadata management, Big Data, business services, and real-time data, Amalgam Insights believes there is additional potential over time to detect more forms of contextualized fairness relevant across all enterprise AI and data science roles.
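To make the notion of statistical fairness detection concrete, here is a minimal sketch of one widely used check, the disparate impact ratio, which compares favorable-outcome rates between a privileged and an unprivileged group. This is a generic illustration of the technique, not a description of IBM’s actual implementation; the function names and threshold are illustrative.

```python
# Sketch of a common statistical fairness check: the disparate impact
# ratio. A value well below 1.0 (a common rule of thumb flags < 0.8)
# suggests the unprivileged group receives favorable outcomes at a
# disproportionately low rate. Not IBM's implementation; for illustration.

def favorable_rate(outcomes):
    """Fraction of outcomes that are favorable (1 = favorable, 0 = not)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged_outcomes, privileged_outcomes):
    """Ratio of favorable-outcome rates: unprivileged / privileged."""
    return favorable_rate(unprivileged_outcomes) / favorable_rate(privileged_outcomes)

# Toy example: loan approvals (1 = approved) for two hypothetical groups.
group_a = [1, 0, 1, 1, 0, 1, 1, 1]  # privileged group: 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # unprivileged group: 3/8 approved

ratio = disparate_impact(group_b, group_a)
print(round(ratio, 2))  # 0.5, well below the 0.8 rule of thumb
```

A runtime fairness monitor along these lines would compute such ratios continuously over production predictions rather than over a static test set, which is what makes detection “at runtime” harder than a one-time audit.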
My colleague Lynne Baer has covered the key roles in data science in the research piece Growing Your Data Science Team: Diversifying Beyond Unicorns. Each of these roles has its own definition of risk, visibility, and relevance for data and analytics. These roles align with the roles that IBM presented to us in its view of data science and AI roles:
IBM Framework for Data Science Roles
Source: IBM, 2018
IBM’s ability to support AI traceability and auditability is a starting point for answering one of the fundamental challenges of machine learning and data science: understanding the logic of AI as data passes through a variety of iterative and analytic manipulations to support accurate models.
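The core mechanics of traceability can be sketched simply: every prediction served in production is recorded alongside the model version, a fingerprint of its inputs, and a timestamp, so any outcome can later be traced back and audited. The sketch below is a hypothetical minimal illustration of that pattern, not IBM’s implementation; the record fields and names are assumptions.

```python
# Hypothetical sketch of prediction lineage logging for auditability.
# Each prediction is stored with the model version, a hash of its inputs,
# and a UTC timestamp. In a real system this log would be durable,
# append-only storage rather than an in-memory list.

import hashlib
import json
from datetime import datetime, timezone

audit_log = []  # stand-in for an append-only audit store

def log_prediction(model_version, features, prediction):
    """Append one auditable record tying a prediction to its inputs."""
    record = {
        "model_version": model_version,
        # Hash the canonicalized inputs so the exact features can be
        # verified later without storing sensitive raw values here.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "prediction": prediction,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append(record)
    return record

entry = log_prediction("credit-model-1.3", {"income": 52000, "age": 41}, "approve")
print(entry["model_version"], entry["prediction"])  # credit-model-1.3 approve
```

The design point worth noting is the input hash: canonicalizing the features before hashing means an auditor can later reproduce the hash from archived inputs and confirm which data actually drove a given decision.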
Businesses seek results that can be verified, governed, audited, and defended across a variety of financial, legislative, industry, and societal standards. Although traceable lineage and visibility are not the most interesting aspects of AI development, Amalgam Insights strongly believes that these capabilities are necessary to support enterprise AI at the scalable and pervasive level necessary to support the multi-trillion dollar value that is hyped for AI.
The last couple of claims, tracking AI accuracy and explaining outcomes in business terms, should be no surprise, as IBM has invested heavily in both defining statistical and AI accuracy and translating analytic outcomes into plain language through SPSS, the Watson platform, and Watson Analytics. Using natural language to define analytic exploration and results is a capability where IBM has been a strong market leader in analytics for years.
Amalgam Insights’ key recommendation associated with this announcement is that all enterprises must start building formal AI strategies and policies to plan their use of AI. The challenge of tracking AI risk, lineage, and traceability is still nascent, but it is necessary for building trust in AI results. Trust is typically a foundational layer of functionality in enterprise technology, but AI provided many potential avenues for value that were quickly identified by developers and business executives. This led to rapid functional development of models, statistical outputs, and results that have generated billions of dollars in business value for early adopters who helped define the “Multi-Million Dollar Map to Machine Learning.” From a traditional development perspective, AI has leaped from a Level 1 technology to a Level 4 business improvement technology, or in some cases even a Level 5 Business Transformation technology.
Source: Amalgam Insights, 2017
And now businesses are filling in the gaps after a quantum leap, reminiscent of how enterprises quickly adopted smartphones, turned them into a dominant interface for many business and communications tasks, and only then figured out a mobility management strategy. Enterprises must now similarly create an AI strategy for the ongoing work of augmenting their businesses with AI.
Having supported and sold AI for the majority of this decade through the Watson brand, IBM brings unique experience as a key driver, investor, and product developer for the enterprise AI market in supporting next steps for the data and analytics community. It could be argued that IBM tried to do too much, too soon in pushing Watson out in every possible form to an enterprise market that was not yet ready to consume it. But in providing tools to support trusted AI, Amalgam Insights believes that IBM has taken an important step this year toward enterprise preparedness for consuming AI capabilities at enterprise scale, in line with current enterprise demands for a foundation for AI governance and compliance.