
Developing a Practical Model for Ethical AI in the Business World: Stage I – Executive Design

In this blog post series, Amalgam Insights is providing a practical model for businesses to plan the ethical governance of their AI projects. To read the introduction, click here.

This blog focuses on Executive Design, the first of the Three Keys to Ethical AI introduced in the last blog.

Stage I: Executive Design

Any AI project needs to be analyzed in the context of five major questions that matter from both a project management and a scoping perspective. Amalgam Insights cannot control the ethical governance of every company, but we can provide a starting point that lets AI-focused companies know what potential problems they face. Businesses seeking to pursue ethical AI must consider the following questions:

  • What is the goal of the project?
  • What are the key ethical assumptions and biases?
  • Who are the stakeholders?
  • How will AI oversight be performed in an organization?
  • Where is the money coming from?

What is the project goal?

In thinking about the goal of the project, the project champion needs to make sure that the goal itself is not unethical. For instance, the high-level idea of understanding your customers is laudable on its surface. But if the goal of the project is effectively to stalk customers or to open up customer data without their direct consent, the project quickly becomes unethical. Likewise, if an AI project to improve productivity and efficiency is in practice designed to circumvent legal governance of a process, there are likely ethical issues as well.

Although this analysis seems obvious, the potential opacity, complexity, and velocity of AI deployments mean that these topics have to be considered prior to project deployment. These tradeoffs need to be analyzed based on the risk profile and ethical policies of the company and need to be determined at a high level prior to pursuing an AI project.

What are the key ethical assumptions and biases?

Every AI project has ethical assumptions, compromises, and biases.

Let me repeat that.

Every AI project has ethical assumptions, compromises, and biases.

This is just a basic premise that every project faces. But because of the complexities of AI projects, the assumptions made during scoping can be ignored or minimized during analysis or deployment if companies do not make a concerted effort to hold onto basic project tenets.

For instance, it’s easy to say that a company should not stalk its customers. In the scoping process, this may mean masking personal information such as names and addresses from any aggregate data. But what happens if the analysis ends up tracking latitude and longitude to within 1 meter, logging interactions every 10 minutes, and folding ethnic, gender, sexuality, or other potentially identifying or biasing data, along with a phone’s IMEI, into an analysis of the propensity to buy, simply because those characteristics were never covered in the initial scoping process and there was no overarching reminder not to stalk or overly track customers? In this case, even without traditional personally identifiable information, the net result is potentially even more invasive. And with the broad scope of analysis conducted by machine learning algorithms, it can be hard to fully control the parameters involved, especially in the early and experimental stages of model building and recursive or neurally designed optimization.
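To make this failure mode more concrete, the sketch below shows how a “do not overly track customers” guideline could be translated into preprocessing rules that blunt quasi-identifiers before they ever reach a propensity model. The field names, precision thresholds, and time buckets here are hypothetical illustrations, not recommendations for any specific dataset.

```python
# A minimal sketch (hypothetical field names and thresholds) of translating a
# "do not overly track customers" tenet into concrete preprocessing rules that
# blunt quasi-identifiers before modeling begins.
PROHIBITED_FIELDS = {"imei", "name", "street_address"}  # drop these outright
LOCATION_DECIMALS = 2         # ~1 km of location precision instead of ~1 meter
TIME_BUCKET_SECONDS = 3600    # aggregate interactions hourly, not every 10 minutes

def coarsen_record(record: dict) -> dict:
    """Return a copy of an interaction record with quasi-identifiers blunted."""
    cleaned = {k: v for k, v in record.items() if k not in PROHIBITED_FIELDS}
    if "latitude" in cleaned and "longitude" in cleaned:
        cleaned["latitude"] = round(cleaned["latitude"], LOCATION_DECIMALS)
        cleaned["longitude"] = round(cleaned["longitude"], LOCATION_DECIMALS)
    if "timestamp" in cleaned:  # seconds since epoch
        cleaned["timestamp"] = int(cleaned["timestamp"] // TIME_BUCKET_SECONDS) * TIME_BUCKET_SECONDS
    return cleaned

# Example: a record with no names or addresses that could still identify a person
raw = {"imei": "356938035643809", "latitude": 43.651070, "longitude": -79.347015,
       "timestamp": 1573221600, "purchase_category": "electronics"}
print(coarsen_record(raw))
```

Rules like these only get written, however, if the scoping process names them up front.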

So, from a practical perspective, companies need to create an initial set of business tenets to be followed throughout the design, development, and deployment of AI. Although each set of stakeholders across the AI development process will have different means of interpreting and managing these tenets, these business guidelines provide an important set of goalposts and boundaries for defining the scope of the AI project. For instance, a company might set the following tenets for a project:

  • This project will not discriminate based on gender
  • This project will not discriminate based on race
  • This project will not discriminate based on income
  • This project will not take personally identifiable information without first describing this to the user in plain English (or language of development)

These tenets and parameters should each be listed separately, meaning there shouldn’t be a legalese laundry list saying “this project respects race, class, gender, sexuality, income, geography, culture, religion, legal status, physical disability, dietary restrictions, etc.” This allows each key tenet to be clearly defined based on its own merit.

These tenets should be part of every meeting and every piece of formal documentation so that stakeholders across executive, technical, and operational responsibilities all see the list and consider it in their own activities. This is important because each set of stakeholders will execute differently on these tenets based on their practical responsibilities. Executives will put corporate governance and resources in place, technical stakeholders will focus on potential bias and issues within the data and algorithmic logic, and operational stakeholders will focus on delivery, access, lineage, and other line-of-business concerns associated with front-line usage.

And this list of tenets needs to be short enough to be actionable. This is not the place to write a 3,000-word legal document on every potential risk and problem, but a place to describe specific high-level concerns around bias.
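One lightweight way to keep such a list in front of executive, technical, and operational stakeholders is to encode the tenets as a short, reviewable artifact that can be attached to meeting notes and checked at each project gate. The sketch below is one possible illustration, using hypothetical tenet wording and feature names, not a prescribed implementation.

```python
# A minimal sketch of project tenets kept as a short, reviewable artifact.
# Tenets that map to data fields get an automated check; the rest are flagged
# for human sign-off at each design, development, and deployment gate.
PROJECT_TENETS = [
    {"id": "T1", "text": "This project will not discriminate based on gender",
     "prohibited_features": {"gender"}},
    {"id": "T2", "text": "This project will not discriminate based on race",
     "prohibited_features": {"race", "ethnicity"}},
    {"id": "T3", "text": "This project will not discriminate based on income",
     "prohibited_features": {"income", "salary_band"}},
    {"id": "T4", "text": "Personally identifiable information requires plain-language disclosure",
     "prohibited_features": set()},  # requires human review, not a feature check
]

def review_tenets(candidate_features: set) -> None:
    """Print a pass/fail/review status for each tenet against a feature set."""
    for tenet in PROJECT_TENETS:
        violations = tenet["prohibited_features"] & candidate_features
        if violations:
            status = f"FAIL (uses {sorted(violations)})"
        elif tenet["prohibited_features"]:
            status = "PASS"
        else:
            status = "NEEDS HUMAN REVIEW"
        print(f"{tenet['id']}: {tenet['text']} -> {status}")

# Example gate check before a modeling sprint (hypothetical feature names)
review_tenets({"purchase_history", "region", "income", "tenure_months"})
```

The point is not the code itself but that each tenet stays individually visible and individually testable where possible, rather than being buried in a legalese laundry list.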

Who are the stakeholders?

The makeup of the executive business stakeholders is an important starting point for determining the biases of the AI project. It is important for any AI project with significant potential organizational impact to have true executive sponsorship from someone who has responsibility for the health of the company. Otherwise, it is too easy for an algorithm to “go rogue” or become an implicit and accepted business enabler without sufficient due diligence.

How will AI oversight be performed in an organization?

AI projects need to be treated with the same types of oversight as hiring employees or any significant change management process. Ideally, AI will either provide a new and previously unknown insight or support productivity gains that replace or augment millions of dollars in labor. Companies putting AI into place need to hold AI logic to the same standards to which they would hold human labor.

Where is the money coming from?

No matter what the end goal of the AI project is, it will always be judged in the context of the money used to fund the AI. If an organization is fully funding an AI project, it will be held accountable for the outcomes of the AI. If an AI project is funded by a consortium, the ethical background of each funder or purchaser will eventually be considered in determining the ethical nature of the AI. Because of this, it is not enough for an organization to be pursuing an AI initiative that is potentially helpful. Organizations must also work with partners that align with their policies and culture. When an AI project becomes public, compliance officers and critics will always follow the money and use it as a starting point to determine how ethical the AI effort is.

In our next blog, we will explore Technical Development with a focus on the key questions that technical users such as data analysts and data scientists must consider as they build out the architecture and models that will make up the actual AI application or service.


Developing a Practical Model for Ethical AI in the Business World: Introduction

As we head into 2020, the concept of “AI (Artificial Intelligence) for Good” is becoming an increasingly common phrase. Individuals and organizations with AI skillsets (including data management, data integration, statistical analysis, machine learning, algorithmic model development, and application deployment skills) have put effort into pursuing ethical AI.

Amalgam Insights believes that these efforts have largely been piecemeal and inadequate for companies to credibly state that they are pursuing, documenting, and practicing true ethical AI, given the breadth and potential repercussions of AI on business outcomes. This is not due to a lack of interest, but to a couple of key considerations. First, AI is a relatively new capability in the enterprise IT portfolio that often lacks formal practices and guidelines and has been managed as a “skunkworks” or experimental project. Second, businesses have treated AI not as a business practice but as a purely technical one, and in skipping straight to technical development they have made assumptions that would typically not have been made for more mature technical capabilities and projects.

In the past, Amalgam Insights has provided frameworks to help organizations take the next step to AI through our BI to AI progression.

Figure 1: Amalgam’s Framework from BI to AI

To pursue a more ethical model of AI, Amalgam Insights believes that AI efforts need to be analyzed through three key lenses:

  • Executive Design
  • Technical Development
  • Operational Deployment

Figure 2: Amalgam’s Three Key Areas for Ethical AI

In each of these areas, businesses must ask the right questions and adequately prepare for the deployment of ethical AI. In this framework, AI is not just a set of machine learning algorithms to be utilized, but an enabler to effectively augment problem-solving for appropriate challenges.

Over the next week, Amalgam Insights will explore 12 areas of bias across these three categories with the goal of developing a straightforward framework that companies can use to guide their AI initiatives and take a structured approach to enforcing a consistent set of ethical guidelines to support governance across the executive, technical, and operational aspects of initiating, developing, and deploying AI.

In our next blog, we will explore Executive Design with a focus on the five key questions that an executive must consider when starting to evaluate the use of AI within their enterprise.


From #KubeCon, Three Things Happening with the Kubernetes Market

This year’s KubeCon+CloudNativeCon was, to say the least, an experience. Normally sunny San Diego treated conference-goers to torrential downpours. The unusual weather turned the block party event into a bit of a sog. My shoes are still drying out. The record crowds – this year’s attendance was 12,000, up from last year’s 8,000 in Seattle – made navigating the show floor a challenge for many attendees.

Despite the weather and the crowds, this was an exciting KubeCon+CloudNativeCon. On display was the maturation of the Kubernetes and container market. Both the technology and the best practices discussions were less about “what is Kubernetes” and, instead, more about “how does this fit into my architecture?” and “how enterprise-ready is this stuff?” This shift from the “what” to the “how” is a sign that Kubernetes is heading quickly to the mainstream. There are other indicators at KubeCon+CloudNativeCon that, to me, show Kubernetes maturing into a real enterprise technology.

First, the makeup of the Kubernetes community is clearly changing. Two years ago, almost every company at KubeCon+CloudNativeCon was some form of digital-forward company like Lyft or a cloud technology vendor such as Google or Red Hat. Now, there are many more traditional companies on both the IT and vendor sides. Vendors such as HPE, Oracle, Intel, and Microsoft, mainstays of technology for the past 30 years, are here in force. Industries like telecommunications (drawn by the promise of edge computing), finance, manufacturing, and retail are much more visible than they were just a short time ago. While microservices and Kubernetes are not yet as widely deployed as more traditional n-Tier architectures and classic middleware, the mainstream is clearly interested.

Another indicator of the changes in the Kubernetes space is the prominence of security in the community. Not only are there more security vendors than ever, but we are seeing more keynote time given to security practices. Security is, of course, a major component of making Kubernetes enterprise-ready. Without solid security practices and technology, Kubernetes will never be acceptable to a broad swath of large to mid-sized businesses. That said, there is still so much more that needs to be done with Kubernetes security. The good news is that the community is working on it.

Finally, there is clearly more attention being paid to operating Kubernetes in a production environment. That is most evident in the proliferation of tracing and logging technologies, from both new and older companies, on display on the show floor and mainstage. Policy management was also an important area of discussion at the conference. These are all examples of the type of infrastructure that operations teams will need to manage Kubernetes at scale and a sign that the community is thinking seriously about what happens after deployment.

It certainly helps that a lot of basic issues with Kubernetes have been solved, but there is still more work to do. There are difficult challenges that need attention. How to migrate existing stateful apps originally written in Java and based on n-Tier architectures is still mostly an open question. Storage is another area that needs more innovation, though there’s serious work underway in that space. Despite the need for continued work, the progress seen at KubeCon+CloudNativeCon NA 2019 points to a future where Kubernetes is a major platform for enterprise applications. 2020 will be another pivotal year for Kubernetes, containers, and microservices architectures. It may even be the year of mainstream adoption. We’ll be watching.


TEM Market Leaders Calero and MDSL Merge as Global IT Spend Management Consolidation Continues

Key Stakeholders: Chief Information Officer, Chief Financial Officer, Chief Accounting Officer, Controllers, IT Directors and Managers, Enterprise Mobility Directors and Managers, Networking Directors and Managers, Software Asset Directors and Managers, Cloud Service Directors and Managers, and other technology budget holders responsible for telecom, network, mobility, SaaS, IaaS, and IT asset and service expenses.

Why It Matters: The race for IT spend management consolidation continues. The financial management of IT is increasingly seen as a strategic advantage for managing the digital supply chain across network, telecom, wireless, cloud, software, and service portfolios.

Top Takeaway: The new combined business, with over 800 employees, 3,500 customers, and an estimated 2 million devices and $20 billion under management, both serves as legitimate competition for market leader Tangoe and represents an attractive potential acquisition for larger IT management vendors.

[Disclaimer: Amalgam Insights has worked with Calero and MDSL. Amalgam Insights has provided end-user inquiries to both Calero and MDSL customers. Amalgam Insights has provided consulting services to investors and advisors involved in this acquisition.]


The Need for Simulation and Situational Awareness in Cybersecurity Training – A Neuroscience Perspective

Organizations are more vulnerable than ever to cybersecurity threats. Global annual cybersecurity costs are predicted to grow from $3 trillion in 2015 to $6 trillion annually by 2021. To stay safe, organizations must train their employees to identify cybersecurity threats and to avoid them. To address this, global spending on cybersecurity products and services is projected to exceed $1 trillion from 2017 to 2021.

Unfortunately, cybersecurity training is particularly challenging because cybersecurity is more about training behavioral “intuition” and situational awareness than it is about training a cognitive, analytic understanding. It is one thing to know “what” to do, but it is another (and mediated by completely different systems in the brain) to know “how” to do it, and to know how to do it under a broad range of situations.

Regrettably, knowing what to do and what not to do does not translate into actually doing or not doing. To train cybersecurity behaviors, the learner must be challenged through behavioral simulation. They must be presented with a situation, generate an appropriate or inappropriate response, and receive real-time, immediate feedback regarding the correctness of their behavior. Real-time, interactive feedback is the only way to effectively engage the behavioral learning system in the brain. This system learns through gradual, incremental, dopamine-mediated changes in the strength of muscle memories that reside in the striatum of the brain. Critically, the behavioral learning system in the brain is distinct from the cognitive learning system, meaning that knowing “what” to do has no effect on learning “how” to do it.

Cybersecurity behavioral training must be broad-based, with the goal of training situational awareness. Cybersecurity hackers are creative, with each attack often having a different look and feel. Simulations must mimic this variability so that they elicit different experiences and emotions. This is how you engage the experiential centers in the brain that represent the sensory aspects of an interaction (e.g., sight and sound) and the emotional centers in the brain that build situational awareness. By utilizing a broad range of cybersecurity simulations that engage experiential and emotional centers in different ways, the learner trains cybersecurity behaviors that generalize and transfer to multiple settings. Ideally, the difficulty of the simulation should also be aligned to the user’s performance. This personalized approach will be more effective and will speed learning relative to a one-size-fits-all approach.
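As a rough illustration of what aligning difficulty to performance can look like, the sketch below runs simulated challenges, gives immediate feedback after every response, and nudges difficulty up or down based on a rolling success rate. The difficulty scale, target success rate, and simulated learner are hypothetical placeholders, not a description of any particular training product.

```python
# A minimal sketch of an adaptive behavioral-training loop: present a simulated
# threat, score the learner's response immediately, and adjust difficulty so the
# learner stays challenged but not overwhelmed.
import random
from collections import deque

TARGET_SUCCESS_RATE = 0.7   # keep the learner succeeding roughly 70% of the time
WINDOW = 10                 # rolling window of recent attempts

def run_session(respond, rounds: int = 50, difficulty: int = 1) -> int:
    """respond(difficulty) -> bool stands in for the learner; returns final difficulty."""
    recent = deque(maxlen=WINDOW)
    for _ in range(rounds):
        correct = respond(difficulty)
        recent.append(correct)
        # Immediate, real-time feedback after every single response
        print(f"difficulty={difficulty} -> {'correct' if correct else 'incorrect: review what was missed'}")
        success_rate = sum(recent) / len(recent)
        if success_rate > TARGET_SUCCESS_RATE and difficulty < 10:
            difficulty += 1   # learner is coasting: raise the challenge
        elif success_rate < TARGET_SUCCESS_RATE - 0.2 and difficulty > 1:
            difficulty -= 1   # learner is struggling: ease off
    return difficulty

# Stand-in learner whose odds of spotting the threat fall as difficulty rises
simulated_learner = lambda d: random.random() < max(0.95 - 0.08 * d, 0.1)
print("final difficulty:", run_session(simulated_learner))
```

In a real product, the respond callback would be the learner working through a simulated phishing email or social-engineering scenario rather than a random stand-in.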

If your organization is worried about cybersecurity threats and is looking for a cybersecurity training tool, a few considerations are in order. First and foremost, do not settle for a training solution that focuses only on providing learners with knowledge and information around cybersecurity. This “what”-focused approach will be ineffective at teaching the appropriate behavioral responses to cybersecurity threats and will leave your organization vulnerable. Instead, focus on solutions that are grounded in simulation training, preferably with content and delivery that is broad-based to train situational awareness. Solutions that personalize the difficulty of each simulation are a bonus, as they will speed learning and long-term retention of cybersecurity behaviors.


Canonical Takes a Third Path to Support New Platforms

We are in the midst of another change-up in the IT world. Every 15 to 20 years there is a radical rethink of the platforms that applications are built upon. Over the course of IT history, we have moved from batch-oriented, pipelined systems (predominantly written in COBOL) to the client-server and n-Tier systems that are the standards of today. These platforms were developed in the last century and designed for last-century applications. After years of putting shims into systems to accommodate the scale and diversity of modern applications, IT has just begun to deploy new platforms based on containers and Kubernetes. These new platforms promise greater resiliency and scalability, as well as greater responsiveness to the business.


Augmented Reality in Product Development: A Neuroscience Perspective

3D Dynamic Representation

Product development is a collaborative process in which the product evolves from an idea, to drawings, and ultimately to a physical prototype. This is an iterative process in which two-dimensional (2D) static images and schematics drive development early in the process, only later leading to the development of a physical three-dimensional (3D) prototype. This approach places a heavy load on the brain’s cognitive system because 3D dynamic representations and imagery must be constructed mentally from a series of 2D static images.


Look Beyond The Simple Facts of the Cimpl Acquisition

(Note: This blog was co-written by Hyoun Park and Larry Foster, an Enterprise Technology Management Association Hall of Famer and an executive who has shaped the Technology Expense Management industry. Please welcome Larry’s first contribution to Amalgam Insights!)

On August 22, 2019, Upland Software announced the acquisition of Cimpl (f.k.a. Etelesolv), a Montreal-based telecom expense management platform that was the market leader in the Canadian market and had expanded into the United States market. With this acquisition, Cimpl will become a part of Upland’s Project & Financial Management Solution Suite and add approximately $8 million in annual revenue.

Context for the Acquisition

The TEM (Technology Expense Management) industry has experienced a continual series of ebb-and-flow acquisitions/mergers over the past twelve years. The typical TEM acquisition/merger encompasses two or more independent entities within the realm of TEM, WEM (Wireless Expense Management) or MMS (Managed Mobility Services) merging to create a more comprehensive expense management solution portfolio with superior global delivery capabilities.

The reality is that many of these mergers are driven by economic reasons, where one or both entities can reduce overhead by eliminating duplicate services. Overhead is eliminated by unifying back-office operations and amalgamating technology platforms. These types of consolidation mergers are typical in a maturing industry that is eventually dominated by a few leading solution providers representing the majority of market share. All of the leading TEM solution providers, including Tangoe, MDSL, and Calero, share a long history of multiple “like-minded” mergers.

Cimpl as an outlier in the TEM market

Until this recent acquisition, Cimpl had maintained the persona of the independent dark horse of the TEM industry, quietly residing in Quebec, Canada, refining its multi-tenant cloud platform and progressively building its market share.

Unlike most TEM managed service solution providers, Cimpl has decided to focus on being mainly a pure software company, providing a white-label technology platform for its delivery partners. In early 2018, Cimpl stealthily started to expand its physical presence into the United States. Since its inception, Cimpl has achieved conservative, incremental success and stayed profitable, in contrast to a number of TEM vendors that have gone through boom-or-bust cycles driven by external funding (or the lack thereof).

The Challenge for TEM

The traditional acquisition playbook is preventing the TEM industry from being recognized as a strategic asset by organizations. Nonetheless, the TEM industry is experiencing a dramatic paradigm shift as organizations continue to replace legacy communication services with the ever-growing spectrum of cloud-based services. Traditionally, TEM solutions have focused on validating the integrity of invoice charges across multiple vendors prior to payment and allocating expenses to the respective cost centers leveraging the leased service. Enterprises derive value from TEM solutions by enabling a centralized ICT (Information and Communications Technology) shared service to automate the lifecycle from provisioning through payment and to manage the resolution of disputed invoice charges for essentially static services.

However, as organizations adopt more ephemeral cloud services that encompass multi-vendor private, public, and hybrid leased environments for compute, storage, API-enabled integrations, connectivity, input/output, and telecommunications, the purpose of the centralized ICT business operation is being transformed from managing daily operations to acting as a fiduciary broker focused on optimizing technology investments. Unlike the recurring charges that represent the majority of traditional telecom spend, cloud services are consumption-based, meaning that it is the client’s responsibility to deactivate services and to manage the appropriate configuration of contracted services based on statistical analysis and forecasts of actual usage.

In the world of cloud, provisioning activities such as activations, changes, and deactivations are done “on-demand,” completely independently of the ICT operation, while the primary focus of ITEM solutions remains managing recurring and non-recurring invoice charges in arrears. As ICT operations evolve into technology brokers, they need real-time insight underpinned by ML and AI algorithms that make cost optimization recommendations to add, consolidate, change, or deactivate services based on usage trends.
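As a simple illustration of the kind of usage-trend logic involved, the sketch below compares recent utilization against a threshold and recommends whether each contracted service should be kept, consolidated, or deactivated. The service names, utilization figures, and thresholds are hypothetical.

```python
# A minimal sketch of consumption-based optimization: compare recent utilization
# against a threshold and trend, then recommend an action per contracted service.
from statistics import mean

# Hypothetical daily utilization (0.0-1.0) for three services over two weeks
usage = {
    "analytics-cluster":   [0.82, 0.79, 0.85, 0.81, 0.80, 0.78, 0.83,
                            0.84, 0.80, 0.82, 0.79, 0.81, 0.83, 0.80],
    "dev-sandbox":         [0.30, 0.22, 0.18, 0.15, 0.12, 0.10, 0.08,
                            0.07, 0.06, 0.05, 0.05, 0.04, 0.03, 0.02],
    "legacy-file-gateway": [0.00] * 14,
}

def recommend(daily_utilization: list) -> str:
    avg = mean(daily_utilization)
    trend = mean(daily_utilization[-7:]) - mean(daily_utilization[:7])
    if avg == 0:
        return "deactivate (no usage on record)"
    if avg < 0.2 and trend <= 0:
        return "consolidate or downsize (low and falling usage)"
    return "keep as provisioned"

for service, series in usage.items():
    print(f"{service}: {recommend(series)}")
```

A production system would obviously draw on live billing and telemetry feeds rather than a hard-coded table, but the underlying pattern of usage, trend, and recommendation is the same.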

Why the Cimpl acquisition will help Upland

This context brings us to the real ingenuity of the Cimpl acquisition. In the typically quiet financial days of August, when everyone is away on vacation, Upland Software announced an accretive acquisition of Cimpl with a purchase price of $23.1M in cash and a $2.6M cash holdback payable in 12 months. Upland expects the acquisition to generate annual revenue of approximately $8M, of which $7.4M is recurring. The keyword buried within all of those financial statistics is “accretive,” meaning the deal is expected to add to Upland’s earnings and support its organic growth strategy.

Upland already has an impressive complementary portfolio of profitable software solutions. A closer look at the acquisition of Cimpl shows how Upland is formulating a solution strategy to manage all aspects of the Information and Communication Technology business operations.

The strategic value of the Cimpl acquisition becomes very clear when you recognize that Upland is the first company to combine an IT Financial Management (ITFM) platform, ComSci, with an IT Expense Management (ITEM) solution, Cimpl. Upland already owns additional complementary solutions, including document and workflow automation, a BI platform, a customer engagement platform, and a knowledge-based platform. With these components, Upland is working to create an industry-leading ERP-type solution framework to automate, manage, and rationalize all aspects of ICT business operations.

Although both ITFM and ITEM support ICT business operations, they focus on different aspects. ITFM is predominantly used on the front end to manage budgets and, on a monthly basis, to support internal billing and chargeback activities, and it is leveraged by the IT CFO office. ITEM solutions like Cimpl, in contrast, are used by analysts and operational managers because they focus on managing the high volumes of transactional operations and data throughout the month, including the provisioning and payment of leased services such as landline and mobile communication services and now the ever-expanding array of cloud services.

Looking Forward: Our Recommendations

In this context, take the following recommendations into account based on this acquisition.

Expect other leading TEM, ITFM, CEM/CMP (Cloud Expense Management and Cloud Management Platform) solution providers to develop competitive solution frameworks that bring multiple IT categories together from a finance and expense management perspective.

ICT managers need to evolve their solution due diligence approach beyond pursuing and leveraging independent ITFM, ITEM, and CEM/CMP solutions toward choosing solutions with comprehensive IT management frameworks. As IT continues to become increasingly based on subscriptions, project-based spend, and on-demand peak usage across a variety of categories, ICT managers should aim for a single management control plane for finance and expenses rather than depend on a variety of management solutions.

Real-time management is the future of IT expense management. The next levels of operational efficacy will be underpinned by more comprehensive real-time insight that helps organizations understand the most optimal way to acquire, configure and consume inter-related cloud services and pay their invoices. This will require insights on usage, project management, service management and real-time status updates associated with expense and finance. By combining financial and operational data, ICT managers will have greater insights into the current and ongoing ROI of technology under management.


The Amalgam Insiders have 5 Key Questions for VMworld

(Editor’s Note: This week, Tom Petrocelli and Hyoun Park will be blogging and tweeting on key topics at VMworld at a time when multi-cloud management is a key issue for IT departments and Dell is spending billions of dollars. Please follow our blog and our twitter accounts TomPetrocelli, Hyounpark, and AmalgamInsights for more details this week as we cover VMworld!)

As Amalgam Insights prepares to attend VMworld, it is an especially interesting time from both an M&A and a strategic perspective as VMware completes acquisitions of its sibling company Pivotal and endpoint security startup Carbon Black. With these acquisitions in progress and the opportunity to question executives at the top of Dell Technologies, including Pat Gelsinger and Michael Dell, Amalgam Insights will be looking for answers to the following questions:

1. How will VMware accelerate Pivotal’s growth post-acquisition? Back in 2013 when Pivotal was first founded, I stated in an interview that

“Pivotal is the first application platform that combines cloud, Big Data, and rapid application development and it represents a fundamental shift in enterprise IT. By creating an enterprise-grade Big Data application platform, Pivotal has the opportunity to quickly unlock value from transactional data that has traditionally been archived and ignored without requiring a long period of up training, integration, and upfront development time.”

 

The potential for Pivotal was immense. Even in light of The Death of Big Data, Pivotal still has both the toolkits and methodology to support intelligent analytic and algorithm-based application architectures at a time when VMware needs to increase its support there in light of the capabilities of IBM-Red Hat, Oracle, and other competitors. We’re looking forward to getting some answers!

2. How will the Carbon Black acquisition be integrated into VMware’s security and end-user computing offerings? Carbon Black is a Boston-area security startup focused on discovering malicious activity on endpoints and will be a strong contributor to WorkspaceONE as VMware seeks to manage and secure the mobile-cloud ecosystem. Along with NSX Cloud for networking and CloudHealth Technologies for multi-cloud management, Carbon Black will help VMware tell a stronger end-to-end cloud story. But the potential and timeline for integration will end up defining the success of this $2 billion-plus acquisition.

3. Where does CloudHealth Technologies fit into VMware’s multi-cloud management story? Although this $500 million acquisition looked promising when it occurred last year, the Dell family previously invested in Enstratius to manage multi-cloud environments and that acquisition ended up going nowhere. What did VMware learn from the last time around, and how will CloudHealth Technologies stay top of mind with all these other acquisitions going on?

4. Where is VMware going with its machine learning and AI capabilities for data center management? I can’t take credit for this one, as the great Maribel Lopez brought this up (go ahead and follow her on LinkedIn!). But VMware needs to continue advancing the Software-Defined Data Center and to ease client challenges in supporting hybrid cloud environments.

5. How is VMware bringing virtualization and Kubernetes together? With VMware’s acquisitions of Heptio and Bitnami, VMware has put itself right in the middle of the Kubernetes universe. But virtualization and Kubernetes are the application support equivalent of data center and cloud, two axes on the spectrum of what is possible. How will VMware simplify this componentization for clients who are seeking hybrid cloud help?

We’ll be looking for answers to these questions and more as we roam the halls of Moscone and put VMware and Dell executives to the test! Stay tuned for more!


VMware plus Pivotal Equals Platforms

(Editor’s Note: This week, Tom Petrocelli and Hyoun Park will be blogging and tweeting on key topics at VMworld at a time when multi-cloud management is a key issue for IT departments and Dell is spending billions of dollars. Please follow our blog and our twitter accounts TomPetrocelli, Hyounpark, and AmalgamInsights for more details this week as we cover VMworld!)

On August 22, 2019, VMware announced the acquisition of Pivotal. The term “acquisition” seems a little weird here since both are partly owned by Dell. It’s a bit like Dell buying Dell. Strangeness aside, this is a combination that makes a lot of sense.

For nearly eight years now, the concept of a microservices architecture has been taking shape. Microservices is an architectural idea wherein applications are broken up into many small bits of code – or services – that provide a limited set of functions and operate independently. Applications are assembled, Lego-like, from component microservices. The advantages of microservices are that different parts of a system can evolve independently, updates are less disruptive, and systems become more resilient because system components are less likely to harm each other. The primary vehicle for microservices is the container (which I’ve covered in my Market Guide: Seven Decision Points When Considering Containers), deployed in clusters to enhance resiliency and more easily scale up resources.

The Kubernetes open-source software has emerged as the major orchestrator for containers and provides a stable base on which to build microservice platforms. These platforms must deploy not only the code that represents the business logic, but also a set of system services such as networking, tracing, logging, and storage. Container cluster platforms are, by nature, complex assortments of many moving parts – hard to build and hard to maintain.
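For readers who want a feel for what day-to-day orchestration looks like, the sketch below uses the official Kubernetes Python client to list the deployments in a namespace and ask the cluster to scale one of them. It assumes the kubernetes package is installed, a local kubeconfig is available, and a deployment named checkout-service exists; all of these are illustrative assumptions rather than a description of any vendor’s platform.

```python
# A minimal sketch of interacting with a Kubernetes cluster programmatically:
# list the deployments in a namespace, then ask the orchestrator to scale one.
from kubernetes import client, config

config.load_kube_config()  # reads the local kubeconfig
apps = client.AppsV1Api()

# Inspect what the orchestrator is currently running in the "default" namespace
for deployment in apps.list_namespaced_deployment(namespace="default").items:
    desired = deployment.spec.replicas
    ready = deployment.status.ready_replicas or 0
    print(f"{deployment.metadata.name}: {ready}/{desired} replicas ready")

# Scale a hypothetical "checkout-service" microservice to five replicas;
# Kubernetes handles scheduling, restarts, and rollout of the extra pods.
apps.patch_namespaced_deployment_scale(
    name="checkout-service",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```

The value of a commercial platform is that most teams never have to write or operate this plumbing themselves, which is exactly the gap discussed below.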

The big problem has been that most container technology has been open-source and deployed piecemeal, leaving forward-looking companies to assemble their own container cluster microservices platforms. Building out and then maintaining these DIY platforms requires continued investment in people and other resources. Most companies either can’t afford or are unwilling to make investments in this amount of engineering talent and training. As a result, a lot of companies have been left out of the container platform game.

The big change has been in the emergence of commercial platforms (many of which were discussed in my SmartList Market Guide on Service Mesh and Building Out Microservices Networking), based on open-source projects, that bring to IT everything it needs to deploy container-based microservices. All the cloud companies, especially Google, which was the original home of Kubernetes, and open-source software vendors such as Red Hat (recently acquired by IBM) with their OpenShift platform, have some form of Kubernetes-based platform. There may be as many as two dozen commercial platforms based on Kubernetes today.

This brings us to VMware and Pivotal. Both companies are in the platform business. VMware is still the dominant player in Virtual Machine (VM) hypervisors, which underpin most systems today, and is marketing a Kubernetes distribution. It also recently purchased Bitnami, a company that makes technology for bundling containers for deployment. At the time, I said:

“This is VMware doubling down on software for microservices and container clusters. Prima facie, it looks like a good move.”

Pivotal markets a Kubernetes distribution as well, but is also one of the major vendors behind Cloud Foundry, another platform that runs containers, VMs, and now Kubernetes (which I discuss in my Analyst Insight: Cloud Foundry and Kubernetes: Different Paths to Microservices). The Pivotal portfolio also includes Spring Boot, one of the primary frameworks for building microservices in Java, and an extensive Continuous Integration/Continuous Deployment capability based on BOSH (part of Cloud Foundry), Concourse, and other open source tools.

Taken together, VMware and Pivotal offer a variety of platforms for newer microservices and legacy VM architectures that will fit the needs of a big swath of large enterprises. This will give them both reach and depth in large enterprise accounts and allow their sales teams to sell whichever platform a customer needs at the moment while providing a path to newer architectures. From a product portfolio perspective, VMware plus Pivotal is a massive platform play that will help them compete more effectively against the likes of IBM/Red Hat or the big cloud vendors.

On their own, neither VMware nor Pivotal had the capacity to compete against Red Hat OpenShift, especially now that Red Hat has access to IBM’s customer base and sales force. Together they will have a full range of technology to bring to bear as the Fortune 500 moves into microservices. The older architectures are also likely to remain in place, either for legacy reasons or because they simply fit the applications they serve. VMware/Pivotal will be in a position to serve those companies as well.

VMware could easily have decided to pick up any number of Kubernetes distribution companies such as Rancher or Platform9. None of them would have provided the wide range of platform choices that Pivotal brings to the table. And besides, this keeps it all in the Dell family.