What Wall Street is missing regarding Broadcom’s acquisition of CA Technologies: Cloud, Mainframes, & IoT

(Note: This blog contains significant contributions from long-time software executive and Research Fellow Tom Petrocelli)

On July 11, Broadcom ($AVGO) announced an agreement to purchase CA for $18.9 billion. If this acquisition goes through, this will be the third largest software acquisition of all time behind only Microsoft’s $26 billion acquisition of LinkedIn and Facebook’s $19 billion acquisition of WhatsApp. And, given CA’s focus, I would argue this is the largest enterprise software acquisition of all time, since a significant part of LinkedIn’s functionality is focused on the consumer level.

But why did Broadcom make this bet? The early reviews have shown confusion with headlines such as:
Broadcom deal to buy CA makes little sense on the surface
4 Reasons Broadcom’s $18.9B CA Technologies Buy Makes No Sense
Broadcom Buys CA – Huh?

All of these articles home in on the fact that Broadcom is a hardware company and CA is a software company, which leads to the conclusion that the two companies have nothing to do with each other. But to understand why Broadcom and CA can fit together, let's look at the context.

In November 2017, Broadcom purchased Brocade for $5.5 billion to build out its position in the data center and networking markets. This acquisition expanded on Broadcom's strengths in supporting mobile and connectivity use cases by extending its solution set beyond the chip and into actual connectivity.

Earlier this year, Broadcom had tried to purchase Qualcomm for over $100 billion. Given Broadcom's lack of cash on hand, this would have been a debt-based purchase with the obvious goal of rolling up the chip market. When the United States blocked this acquisition in March, Broadcom was likely left with a great deal of capital ready to deploy, pressure to use it or lose it, and no obvious target.

So, add these two together and Broadcom had both the cash to spend and a precedent for showing that it wanted to expand its value proposition beyond the chip and into larger integrated solutions for two little trends called “the cloud,” especially private cloud, and “the Internet of Things.”

Now, in that context, take a look at CA. CA’s bread and butter comes from its mainframe solutions, which make up over $2 billion in revenue per year. Mainframes are large computers that handle high-traffic and dedicated workloads and increasingly need to be connected to more data sources, “things,” and clients. Although CA’s mainframe business is a legacy business, that legacy is focused on some of the biggest enterprise computational processing needs in the world. Thus, this is an area that a chipmaker would be interested in supporting over time. The ability to potentially upsell or replace those workloads over time with Broadcom computing assets, either through custom mainframe processors or through private cloud data centers, could add some predictability to the otherwise cyclical world of hardware manufacturing. Grab enterprise computing workloads at the source and then custom build to their needs.

This means there is also a potential hyperscale private cloud play for Broadcom in bringing its data center networking business together with CA's server management capabilities, which approach technical monitoring from both a top-down and a bottom-up perspective.

CA is also strong in supporting mobile development, developer operations (DevOps), API management, IT Operations, and service level management in its enterprise solutions business, which earned $1.75 billion in revenue over the past year. On the mobile side, this means that CA is a core toolset for building, testing, and monitoring the mobile apps and Internet of Things applications that will be running through Broadcom's chips. To optimize computing environments, especially in mobile and IoT edge environments where computing and storage resources are limited, applications need to be optimized for the available hardware. If Broadcom is going to take over the IoT chip market over time, the chips need to support relevant app workloads.

I would also expect Broadcom to increase investment in CA's Internet of Things and mobile app development groups once the transaction closes. Getting CA's dev tools closer to the silicon can only help performance and help Broadcom provide out-of-the-box IoT solutions. This acquisition may even push Broadcom into the solutions and services market, which would blow the minds of hardware analysts and market observers but would also be a natural extension of Broadcom's recent acquisitions as it moves up the computing value stack.

From a traditional OSI perspective, this acquisition feels odd because Broadcom is skipping multiple layers between its core chip competency and CA’s core competency. But the Brocade acquisition helps close the gaps even after spinning off Ruckus Wireless, Lumina SDN, and data center networking businesses. Broadcom is focused on processing and guiding workloads, not on transport and other non-core activities.

So, between the mainframe, private cloud, mobile, and IoT markets, there are a number of adjacencies between Broadcom and CA. It will be challenging to knit together all of these pieces accretively. But because so much of CA's software is focused on the monitoring, testing, and security of hardware and infrastructure, this acquisition isn't quite as crazy as a variety of pundits seem to think. In addition, the relative consistency of CA's software revenue compared to the highs and lows of chip building may also benefit Broadcom by providing predictable cash flow to manage debt payments and to fund the next acquisition that Hock Tan seeks to hunt down.

All this being said, this is still very much an acquisition out of left field. I'll be fascinated to see how this transaction ends up. It is somewhat reminiscent of Oracle's 2009 acquisition of Sun to bring hardware and software together. This does not necessarily create confidence in the acquisition, since hardware/software mergers have traditionally been tricky, but it does not negate the synergies that do exist. In addition, Oracle's move points out that Broadcom seems to have skipped a step: purchasing a relevant casing, device, or server company. Could this be a future acquisition to bolster existing investments and push further into the world of private cloud?

A key challenge and important point that my colleague Tom Petrocelli brings up is that CA and Broadcom sell to very different customers. Broadcom has been an OEM-based provider while CA sells directly to IT. As a result, Broadcom will need to be careful in maintaining CA’s IT-based direct and indirect sales channels and would be best served to keep CA’s go-to-market teams relatively intact.

Overall, the Broadcom acquisition of CA is a very complex puzzle with several potential options.

1. The diversification efforts will work to smooth out Broadcom's revenue over time and provide more predictable revenues to support Broadcom's continuing growth through acquisition. This will help its stock in the long run and provide financial benefit.
2. Broadcom will fully integrate the parts of CA that make the most sense for it to keep, especially the mobile security and IoT product lines, and sell or spin off the rest to help pay for the acquisition. Although the Brocade spinoffs occurred prior to that acquisition, nothing prevents Broadcom from spinning off non-core CA assets and products, especially those significantly outside the IoT and data center markets.
3. In a worst case scenario, Broadcom will try to impose its business structure on CA, screw up the integration, and kill a storied IT company over time through mismanagement. Note that Amalgam Insights does not recommend this option.

But there is some alignment here, and it will be fascinating to see how Broadcom takes advantage of CA's considerable IT monitoring capabilities, leverages CA's business to increase chip sales, and uses CA's cash flow to continue Broadcom's massive M&A efforts.

“Walking a Mile in My Shoes” With Skillsoft’s Leadership Development Program: A Market Milestone

In a recently published Market Milestone, Todd Maddox, Ph.D., Learning Scientist and Research Fellow for Amalgam Insights, evaluated Skillsoft's Leadership Development Program (SLDP) from a learning science perspective. This involves evaluating both the content and the learning design and delivery. Amalgam's overall evaluation is that SLDP content is highly effective. The content is engaging and well constructed, with a nice mix of high-level commentary from subject matter experts, dramatic and pragmatic storytelling from a consistent cast of characters faced with real-world problems, and a mentor to guide the leader-in-training through the process. Each course is approximately one hour in length and consists of short 5-10 minute video segments built with single-concept micro-learning in mind.

From a learning design and delivery standpoint, the offering is also highly effective. Brief, targeted, 5 to 10 minute content is well-suited to the working memory and attentional resources available to the learner. Each course begins with a brief reflective question that primes the cognitive system in preparation for the subsequent learning and activates existing knowledge, thus providing a rich context for learning. The Program is grounded in a storytelling, scenario-based training approach with a common set of characters and a “mentor” who guides the training. This effectively recruits the cognitive skills learning system in the brain while simultaneously activating emotion and motivation centers in the brain. This draws the learner into the situation and they begin to see themselves as part of the story. This “walk a mile in my shoes” experience increases information retention and primes the learner for experiential behavior change.

For more information, read the full Market Milestone on the Skillsoft website.

Amalgam Provides 4 Big Recommendations for Self-Service BI Success


Recently, my colleague Todd Maddox, Ph.D., the most-cited analyst in the corporate training world, and I were looking at the revolution of self-service BI, which has allowed business analysts and data scientists to quickly explore and analyze their own data. At this point, any BI solution lacking a self-service option should not be considered a general business solution.

However, businesses still struggle to train and onboard employees on self-service solutions because self-service represents a new paradigm for administration and training, including the brain science challenges of IT training. In light of these challenges, Dr. Maddox and I offer the following four recommendations for better BI adoption.

  1. Give every employee a hands-on walkthrough. If Self-Service is important enough to invest in, it is important enough to train on as well. This doesn't have to be long; even 15-30 minutes spent helping each employee understand how to start accessing data is worthwhile.
  2. Drive a Culture of Curiosity. Self-Service BI is only as good as the questions that people ask. In a company where employees are set in their ways and not focused on continuous improvement, Self-Service BI just becomes another layer of shelfware.

    Maddox adds: The “shelfware” comment is spot on. I was a master of putting new technology on the shelf! If what I have now works for my needs, then I need to be convinced, quickly and efficiently, that this new approach is better. I suggest asking users what they want to use the software for. If you can put users into one of 4 or 5 bins of business use cases, then you can customize the training and onboard more quickly and effectively.

  3. Build short training modules for key challenges in each department. This means that departmental managers need to commit to recording, say, 2-3 short videos that will cover the basics for self-service. Service managers might be looking for missed SLAs while sales managers look for close rates and marketing managers look for different categories of pipeline. But across these areas, the point is to provide a basic “How-to” so that users can start looking for the right answers.

    Maddox adds: Businesses are strongly urged to include 4 or 5 knowledge check questions for each video. Knowledge testing is one of the best ways to reinforce training. It also provides quick insight into which aspects of your video are effective and which are not. Train by testing!

  4. Make analytics knowledge readily available. As users start using BI, they need to discover the depth and breadth of what is possible with BI: formulas, workflows, regression, and other basic tools. This might range from a simple aggregation of useful YouTube videos to a formal program developed in a corporate learning platform.

By taking these training tips from one of the top BI influencers and the most-cited training analyst on the planet, we hope you are better equipped to support self-service BI at scale for your business.

Data Science Platforms News Roundup, June 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Anaconda, Cloudera, Databricks, Dataiku, Datawatch, Domino, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta.

Databricks Conquers AI Dilemma with Unified Analytics

At Spark + AI, Databricks announced new capabilities for its Unified Analytics Platform, including MLflow and Databricks Runtime for ML. MLflow is an open source, multi-cloud framework intended to standardize and simplify machine learning workflows so that machine learning actually gets put into production. Databricks Runtime for ML scales deep learning with new GPU support for AWS and Azure, along with preconfigured environments for the most popular machine learning frameworks such as scikit-learn, TensorFlow, Keras, and XGBoost. The net result of these new capabilities is that Databricks users will be able to get their machine learning work done faster.
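To make that concrete, here is a minimal sketch of the kind of experiment tracking MLflow enables. This is illustrative only, not Databricks' reference code; the scikit-learn model and sample dataset are stand-ins.

    # Minimal sketch of MLflow experiment tracking (illustrative model and data).
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    X, y = load_diabetes(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    with mlflow.start_run():
        n_estimators = 100
        model = RandomForestRegressor(n_estimators=n_estimators, random_state=42)
        model.fit(X_train, y_train)

        # Log hyperparameters, metrics, and the model artifact so this run
        # can be compared with and reproduced alongside other experiments.
        mlflow.log_param("n_estimators", n_estimators)
        mlflow.log_metric("r2", r2_score(y_test, model.predict(X_test)))
        mlflow.sklearn.log_model(model, "model")

Runs logged this way show up in the MLflow tracking UI, which is the piece that turns scattered notebook experiments into something a team can compare and reproduce.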

IBM and H2O.ai Partnership Aims to Accelerate Adoption of AI in the Enterprise

H2O.ai and IBM announced a partnership in early June that permits the use of H2O's Driverless AI on IBM PowerSystems. Driverless AI provides automated machine learning capabilities, while PowerAI is a machine learning and deep learning toolkit, and the combination will permit significantly faster processing overall. This builds on the pre-existing integration of H2O's open source libraries into IBM's Data Science Experience analytics solution. When this announcement was made, however, IBM had not yet debuted PowerAI Enterprise, so the availability of Driverless AI on PowerAI Enterprise remains TBD.

Announcing PowerAI Enterprise: Bringing Data Science into Production

This month, IBM announced the release of PowerAI Enterprise, which runs on IBM PowerSystems. It's an expansion of IBM's applied AI offering, PowerAI, that extends its coverage to the entire data science workflow. IBM continues to cover its bases by diversifying its data science offerings, adding PowerAI to its existing Data Science Experience and Watson Studio offerings, but this also creates confusion as companies seek to determine which data science platform product suits their needs. We look forward to covering and clarifying this in greater detail.

Alteryx Reveals Newest Platform Release at Inspire 2018

At Alteryx Inspire, Alteryx announced the latest release (2018.2) of the Alteryx Analytics Platform, with improvements such as making common analytic tasks even easier via templates, extending community search across the entire platform, and enhanced onboarding for new users. I detail the new features in my earlier post, Alter(yx)ing Everything at Inspire 2018; the upshot is that Alteryx continues to focus on ease of use for analytics end users.

Introducing Dask for Scalable Machine Learning

Anaconda released Dask, a new Python-based tool for processing large datasets. Python libraries like NumPy, pandas, and scikit-learn are designed to work with data in-memory on a single core; Dask will let data scientists process large datasets in parallel, even on a single computer, without needing to use Spark or another distributed computing framework. This expedites machine learning workflows on large datasets in Python, with the added convenience of being able to remain in your Python work environment.
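For a sense of what this looks like in practice, here is a minimal, hypothetical sketch of Dask's pandas-like interface; the file paths and column names are invented for illustration.

    # Minimal sketch of out-of-core processing with Dask (hypothetical CSV files).
    import dask.dataframe as dd

    # Lazily read a collection of CSVs too large to fit in memory as one pandas frame.
    df = dd.read_csv("data/transactions-*.csv")

    # Operations build a task graph; .compute() triggers parallel execution
    # across local cores (or a cluster, if one is configured).
    revenue_by_region = df.groupby("region")["amount"].sum().compute()
    print(revenue_by_region)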


Finally, I’m also working on a Vendor SmartList for the Data Science Platforms space this summer. If you’d like to learn more about this research initiative, or set up a briefing with Amalgam Insights for potential inclusion, please email me at lynne@amalgaminsights.com.

Destroying the CEO Myth: Redefining The Power Dynamics of Managing DevOps

Tom Petrocelli, Amalgam Insights Research Fellow

I am constantly asked the question "What does one have to do to implement DevOps?", or some variant. Most people who ask this question say they have spent time searching for an answer. The pat answers they encounter are typically either technology-based ("buy these products and achieve DevOps magic") or management-based, such as "create a DevOps culture." Both are vague, flippant, and decidedly unhelpful.


My response is twofold. First, technology and tools follow management and culture. Tools do not make culture, and a technology solution without management change is a waste. So, change the culture and management first. Unfortunately, that's the hard part. When companies talk about changing culture for DevOps, they often mean implementing multifunctional teams, or something less than that. Throwing disparate disciplines into an unregulated melting pot doesn't help. These teams can end up as dysfunctional as any other management or project structure. Team members will bicker over implementation and try to protect their hard-won territory.


As the old adage goes, "everything old is new again," and so-called DevOps culture is no different. Multi-functional teams are just a flavor of matrix management, which has been tried over and over for years. They suffer from the same problems: team members have to serve two masters, and managers act like a group of dogs with one tree among them. Trying to please both the project leader and their functional management creates inherent conflicts.


Another view of creating DevOps culture is what I think of as the "CEO Buy-in Approach." Whenever there is new thinking in IT, there always seems to be advocacy for a top-down approach that starts with the CEO or CIO "buying in" to the concept. After that, magic happens and everyone holds hands and sings together. Except that they don't. This approach is heavy-handed and reflects an unrealistic view of how companies, especially large companies, operate. If simply ordering people to work well together were all it took, there would be no dysfunctional companies or departments.


A variation on this theme advocates picking a leader (or two if you have two-in-the-box leadership) to make everyone work together happily. Setting aside the difficulty of finding people with broad enough experience to lead multi-disciplinary teams, this leads to what I have always called "The Product Manager Problem."


The problem that all new product managers face is the realization that they have all the responsibility and none of the power to accomplish their mission.


That’s because responsibility for the product concentrates in one person, the product manager, and all other managers can diffuse their responsibility across many products or functions.


Having a single leader responsible for making multi-functional teams work creates a lack of individual accountability. The leader, not the team, is held accountable for the project while the individual team members are still accountable to their own managers. This may work when the managers and project team leaders all have great working relationships. In that case, you don’t need a special DevOps structure. Instead, a model that creates a separate project team leader or leaders enables team dysfunction and the ability to maintain silos through lack of direct accountability. You see this when you have a Scrum Master, Product Owner, or Release Manager who has all the responsibility for a project.


The typical response to this criticism of multi-functional teams (and the no-power Product Manager) is that leaders should be able to influence and cajole the team despite having no real authority. This is ridiculous and refuses to accept that individual managers and the people who work for them are motivated to maintain their own power. Making the boss look good works well when the boss is signing your evaluation and deciding on your raise. Sure, project and team leaders can be made part of the evaluation process, but really, who has the real power here? The functional manager in control of many people and resources, or the leader of one small team?


One potential solution to the DevOps cultural conundrum is collective responsibility. In this scheme, all team members benefit from the success of the project or are hurt by its failure. Think of this as the combined arms combat team model. In the Army, multi-functional combined arms teams are put together for specific missions. The team is held responsible for the overall mission, collectively and individually. While the upper echelons hold the combined arms combat team responsible for the mission, the team leader has the ability to hold individuals accountable.


Can anyone imagine an Army or Marine leader being let off the hook for mission failure because one of their people didn’t perform? Of course not, but they also have mechanisms for holding individual soldiers accountable for their performance.


In this model, DevOps teams collectively would be held responsible for on-time completion of the entire project, as would the entire management chain. Individual team members would have much of their evaluation based on this, and the team leader would have the power to remediate nonperformance, including removing a team member who is not doing their job (i.e., firing them). The leader would also need the ability to train up one function to fill the role of another if a person performing a role wasn't up to snuff or had to be removed. It would still be up to the "chain of command" to provide a reasonable mission with appropriate resources.


Ultimately, anyone in the team could rise up and lead this or another team no matter their speciality. There would be nothing holding back an operations specialist from becoming the Scrum Master. If they could learn the job, they could get it.


The very idea of a specialist would lose power, allowing team members to develop talents no matter their job title.


I worked in this model years ago, and it was successful and rewarding. Everyone helped everyone else and had a stake in the outcome. People learned each other's jobs so they could help out when necessary, learning new skills in the process. It wasn't called DevOps, but that's how it operated. It's not a radical idea, but there is a hitch: silo managers would either lose power or even cease to exist. There would be no Development Manager or Security Manager. Team members would win, the company would win, but not everyone would feel like this model works for them.


This doesn’t mean that all silos would go away. There will still be operations and security functions that maintain and monitor systems. The security and ops people who work on development projects just wouldn’t report into them. They would only be responsible to the development team but with full power (and resources) to make changes in production systems.


Without collective responsibility, free of influence from functional managers, DevOps teams will never be more than a fresh coat of paint on rotting wood. It will look pretty but underneath, it’s crumbling.

Tom Petrocelli Advises: Adopt Chaos Engineering to Preserve System Resilience

Today in CMSWire, Amalgam Insights Research Fellow Tom Petrocelli advises the developer community to adopt chaos engineering.

As reliable IT infrastructure becomes increasingly important for organizations, traditional resilience testing methods fall short in tech ecosystems where root causes are increasingly difficult to identify, control, and fix. Rather than focusing purely on tracing the lineage of catastrophic issues, Petrocelli advises testing with abrupt failures that mimic real-world conditions.
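To give a flavor of the approach, the toy sketch below (my own illustration, not taken from Tom's article) deliberately injects failures into an otherwise healthy call path so you can observe whether the surrounding system degrades gracefully.

    # Toy fault injector: randomly adds latency or raises errors around a call,
    # mimicking the abrupt, real-world failures a chaos experiment probes for.
    import random
    import time

    def chaotic(failure_rate=0.2, max_delay_s=2.0):
        """Decorator that randomly injects latency or an exception."""
        def wrap(func):
            def wrapper(*args, **kwargs):
                time.sleep(random.uniform(0, max_delay_s))     # inject latency
                if random.random() < failure_rate:
                    raise ConnectionError("injected failure")  # inject a fault
                return func(*args, **kwargs)
            return wrapper
        return wrap

    @chaotic()
    def fetch_inventory(sku):
        # Stand-in for a real downstream service call.
        return {"sku": sku, "available": 42}

    # Exercise the call path and verify that retries and fallbacks hold up.
    for _ in range(10):
        try:
            print(fetch_inventory("ABC-123"))
        except ConnectionError as err:
            print("handled:", err)

Production chaos tooling works at the infrastructure level rather than inside application code, but the principle is the same: surface weaknesses before a real outage does.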

To learn more about the approach of chaos engineering to managing scale-out, highly distributed, and varied infrastructure environments, read Tom’s full thoughts on CMSWire.

Hyoun Park Interviewed on Onalytica

Today, Onalytica, a leader in influencer marketing, published an interview on Hyoun Park as a key influencer in the Business Intelligence and Analytics markets. In this interview, Hyoun provides guidance on:

  • How he became an expert in the Analytics and BI space based on over 20 years of experience
  • The 4 Key Tipping Points for BI that Hyoun is excited about
  • Hyoun’s favorite BI influencers, including Jen Underwood, Doug Henschen, John Myers, Claudia Imhoff, and Howard Dresner
  • The 4 trends that will drive the BI industry over the next 12 months

To read the interview and see what inspires this influencer, read the interview on the Onalytica website.

What Data Science Platform Suits Your Organization’s Needs?

This summer, my Amalgam Insights colleague Hyoun Park and I will be teaming up to address that question. When it comes to data science platforms, there's no such thing as "one size fits all." We are writing this landscape because understanding the processes of scaling data science beyond individual experiments and integrating it into your business is difficult. By breaking down the key characteristics of the data science platform market, this landscape will help potential buyers choose the appropriate platform for their organizational needs. We will examine the following key questions to determine which characteristics, functionalities, and policies differentiate platforms supporting introductory data science workflows from those supporting scaled-up, enterprise-grade workflows.

Amalgam’s Assumptions: The baseline order of operations for conducting data science experiments begins with understanding the business problem you’re trying to address. Gathering, prepping, and exploring the data are the next steps, done to extract appropriate features and start creating your model. The modeling process is iterative, and data scientists will adjust their model throughout the process based on feedback. Finally, if and when a model is deemed satisfactory, it can be deployed in some form.
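As a rough illustration of that order of operations, the sketch below walks through data prep, iterative modeling, and persistence with scikit-learn; the churn dataset and column names are hypothetical.

    # Illustrative end-to-end loop: prep data, fit, evaluate, iterate, then persist.
    import joblib
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("churn.csv")              # hypothetical, already-prepped dataset
    X = df.drop(columns=["churned"])           # simplified feature extraction
    y = df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Iterate: adjust the model based on validation feedback.
    best_model, best_auc = None, 0.0
    for c in (0.01, 0.1, 1.0):
        model = LogisticRegression(C=c, max_iter=1000)
        model.fit(X_train, y_train)
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        if auc > best_auc:
            best_model, best_auc = model, auc

    # If and when a model is deemed satisfactory, persist it for deployment.
    joblib.dump(best_model, "churn_model.joblib")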

How do these platforms support reproducibility of data, workflows, and results?

One advantage some data science platforms provide is the ability to track and save the data and hyperparameters used in each experiment, so that the experiment can be re-run at any time. Individual data scientists running ad hoc experiments need to do this tracking manually, if they even know to bother with it.
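For contrast, a bare-bones version of that manual tracking might look like the sketch below; the field names and file paths are hypothetical, and a platform would automate this bookkeeping (and version the data itself) rather than leaving it to the individual.

    # Manual experiment logging: record the dataset fingerprint, hyperparameters,
    # and result so a run can be reconstructed later.
    import hashlib
    import json
    import time

    def log_run(data_path, params, metrics, log_file="experiments.jsonl"):
        with open(data_path, "rb") as f:
            data_hash = hashlib.sha256(f.read()).hexdigest()  # pin the exact dataset
        record = {
            "timestamp": time.time(),
            "data_path": data_path,
            "data_sha256": data_hash,
            "params": params,
            "metrics": metrics,
        }
        with open(log_file, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Example usage after a training run (values are illustrative):
    log_run("churn.csv", {"C": 0.1, "random_state": 0}, {"auc": 0.87})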

How secure, governable, and compliant are these platforms compared to corporate, standards-based, and legislative needs?

Data access is fragmented, and in early-stage data science setups, it’s not uncommon for data scientists to copy and paste and store the data they need on their own laptop, because they lack the ability to use that data directly while keeping it secure in an IT-approved manner. Data science platforms can help make this secure access process easier.

How do these platforms support collaboration between data scientists, data analysts, IT, and line-of-business departments?

Your data scientists should be able to share their results in a usable form with the rest of the business, whether that looks like reports, dashboards, microservices, or apps. In addition, the consumers of these data outputs need to be able to give feedback to the producers to improve results. To capitalize on the data science experiments being done in a company, some level of collaboration is necessary, but this may mean different things to different organizations. Some have shared code repositories. Some use chat. Effectively scaling up data science operations requires a more consistent experience across the board, so that everybody knows where to find what they need to get their work done. Centralizing feedback on models into the platform, associated with the models and their outputs, is one example of enabling the necessary consistency.
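As one deliberately minimal example of the "microservice" option, a persisted model can be exposed to the rest of the business behind a simple HTTP endpoint. The framework choice (Flask) and file names below are assumptions for illustration, not a statement about any particular platform's approach.

    # Minimal sketch of serving a persisted model as a microservice.
    import joblib
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    model = joblib.load("churn_model.joblib")   # hypothetical trained model artifact

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects JSON like {"features": [[0.3, 12, 1]]}
        features = request.get_json()["features"]
        predictions = model.predict(features).tolist()
        return jsonify({"predictions": predictions})

    if __name__ == "__main__":
        app.run(port=5000)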

How do these platforms support a consistent view of data science based on the user interfaces and user experiences that the platforms provide to all users?

This consistency isn't just limited to creating a model catalog with centralized feedback – the process of going from individual data scientists operating ad hoc with their preferred tools to a standardized experience can meet resistance. Data science platforms often support a wide variety of such tools, which can ease this transition, but not all data science platforms support the same sets of tools. Still, moving to a unified experience makes it easier to onboard new data scientists into your environment.

What do data science teams look like when they are using data science platforms?

Some teams consist of a couple of people constructing skunkworks pipelines out of code as an initial or side project. Others do enough ongoing data science that they engage line-of-business stakeholders, perhaps with the assistance of a project manager. If data science is core business for your organization, the team will be large relative to company size no matter how big the company is. Each of these teams has different needs. A focus of this research is to categorize typical experiences across the spectrum by team size and complexity, code-centricness, and other measures.

(Figure: mapping data science teams along two axes, Team Complexity and Code-Based or Codeless.)

By exploring the people, processes, and technological functionalities associated with data science platforms over this summer, Amalgam Insights looks forward to bringing clarity to the market and providing directional recommendations to the enterprise community. This Vendor SmartList on Data Science Platforms will explore these questions and more in differentiating between a variety of data science platforms currently in the market, including, but not limited to: Alteryx, Anaconda, Cloudera, Databricks, Dataiku, Domino, H2O.ai, IBM, KNIME, MathWorks, Oracle, RapidMiner, SAP, SAS Viya, Teradata, TIBCO, and other startups and new entrants that establish themselves over the Summer of 2018.

If you’d like to learn more about this research initiative, or set up a briefing with Amalgam Insights for potential inclusion, please email me at lynne@amalgaminsights.com.


The Map to Multi-Million Dollar Machine Learning

Amalgam has just posted a new report: The Roadmap to Multi-Million Dollar Machine Learning Value with DataRobot. I’m especially excited about this report for a couple of reasons. First, this report documents multiple clear value propositions for machine learning that led to the documented annual value of over a million dollars. This is an important…

Please register or log into your Free Amalgam Insights Community account to read more.