
What Wall Street is missing regarding Broadcom’s acquisition of CA Technologies: Cloud, Mainframes, & IoT

(Note: This blog contains significant contributions from long-time software executive and Research Fellow Tom Petrocelli)

On July 11, Broadcom ($AVGO) announced an agreement to purchase CA for $18.9 billion. If the acquisition goes through, it would be the third-largest software acquisition of all time, behind only Microsoft's $26 billion acquisition of LinkedIn and Facebook's $19 billion acquisition of WhatsApp. And, given CA's focus, I would argue this is the largest enterprise software acquisition of all time, since a significant part of LinkedIn's functionality is aimed at consumers.

But why did Broadcom make this bet? The early reviews have shown confusion with headlines such as:
Broadcom deal to buy CA makes little sense on the surface
4 Reasons Broadcom’s $18.9B CA Technologies Buy Makes No Sense
Broadcom Buys CA – Huh?

All of these articles basically home in on the fact that Broadcom is a hardware company and CA is a software company, which leads to the conclusion that these two companies have nothing to do with each other. But to truly understand why Broadcom and CA can fit together, let's look at the context.

In November 2017, Broadcom purchased Brocade for $5.5 billion to build out its position in the data center and networking markets. This acquisition expanded on Broadcom's strengths in supporting mobile and connectivity use cases by extending its solution set beyond the chip and into actual connectivity.

Earlier this year, Broadcom tried to purchase Qualcomm for over $100 billion. Given Broadcom's lack of cash on hand, this would have been a debt-based purchase with the obvious goal of rolling up the chip market. When the United States blocked the acquisition in March, Broadcom was likely left with a whole lot of use-it-or-lose-it money ready to deploy and no obvious target.

So, add these two together and Broadcom had both the cash to spend and a precedent showing that it wanted to expand its value proposition beyond the chip and into larger integrated solutions for two little trends called "the cloud" (especially private cloud) and "the Internet of Things."

Now, in that context, take a look at CA. CA’s bread and butter comes from its mainframe solutions, which make up over $2 billion in revenue per year. Mainframes are large computers that handle high-traffic and dedicated workloads and increasingly need to be connected to more data sources, “things,” and clients. Although CA’s mainframe business is a legacy business, that legacy is focused on some of the biggest enterprise computational processing needs in the world. Thus, this is an area that a chipmaker would be interested in supporting over time. The ability to potentially upsell or replace those workloads over time with Broadcom computing assets, either through custom mainframe processors or through private cloud data centers, could add some predictability to the otherwise cyclical world of hardware manufacturing. Grab enterprise computing workloads at the source and then custom build to their needs.

This means that there's a potential hyperscale private cloud play here for Broadcom as well, bringing Broadcom's data center networking business together with CA's server management capabilities, which approach technical monitoring from both top-down and bottom-up perspectives.

CA is also strong in supporting mobile development, developer operations (DevOps), API management, IT operations, and service level management in its enterprise solutions business, which generated $1.75 billion in revenue over the past year. On the mobile side, this means that CA is a core toolset for building, testing, and monitoring the mobile and Internet of Things applications that will be running on Broadcom's chips. To optimize computing environments, especially in mobile and IoT edge environments where computing and storage resources are limited, applications need to be optimized for the available hardware. If Broadcom is going to take over the IoT chip market over time, its chips need to support the relevant app workloads.

I would also expect Broadcom to increase investment in CA's Internet of Things and mobile app development teams once it completes this transaction. Getting CA's dev tools closer to the silicon can only help performance and help Broadcom provide out-of-the-box IoT solutions. This acquisition may even push Broadcom into the solutions and services market, which would blow the minds of hardware analysts and market observers but would also be a natural extension of Broadcom's current acquisitions to move up the computing value stack.

From a traditional OSI perspective, this acquisition feels odd because Broadcom is skipping multiple layers between its core chip competency and CA's core competency. But the Brocade acquisition helps close that gap, even after the spinoffs of Ruckus Wireless, the Lumina SDN business, and the data center networking business. Broadcom is focused on processing and guiding workloads, not on transport and other non-core activities.

So, across the mainframe, private cloud, mobile, and IoT markets, there are a number of adjacencies between Broadcom and CA. It will be challenging to knit all of these pieces together accretively. But because so much of CA's software is focused on the monitoring, testing, and security of hardware and infrastructure, this acquisition isn't quite as crazy as a variety of pundits seem to think. In addition, the relative consistency of CA's software revenue compared to the highs and lows of chip building may also benefit Broadcom by providing predictable cash flow to manage debt payments and to fund the next acquisition that Hock Tan seeks to hunt down.

All this being said, this is still very much an acquisition out of left field. I'll be fascinated to see how this transaction ends up. It is somewhat reminiscent of Oracle's 2009 acquisition of Sun to bring hardware and software together. That precedent does not necessarily inspire confidence, since hardware/software mergers have traditionally been tricky, but it doesn't negate the synergies that do exist. In addition, the Oracle comparison highlights that Broadcom seems to have skipped a step: purchasing a relevant casing, device, or server company. Could this be a future acquisition to bolster existing investments and push further into the world of private cloud?

A key challenge and important point that my colleague Tom Petrocelli brings up is that CA and Broadcom sell to very different customers. Broadcom has been an OEM-based provider while CA sells directly to IT. As a result, Broadcom will need to be careful in maintaining CA’s IT-based direct and indirect sales channels and would be best served to keep CA’s go-to-market teams relatively intact.

Overall, the Broadcom acquisition of CA is a very complex puzzle with several potential outcomes:

1. The diversification efforts will work to smooth out Broadcom's revenue over time and provide more predictable revenues to support Broadcom's continuing growth through acquisition. This will help its stock in the long run and provide financial benefit.
2. Broadcom will fully integrate the parts of CA that make the most sense for it to have, especially the mobile security and IoT product lines, and sell or spin off the rest to help pay for the acquisition. Although the Brocade spinoffs occurred prior to the acquisition, there are no forces that prevent Broadcom from spinning off non-core CA assets and products, especially those significantly outside the IoT and data center markets.
3. In a worst case scenario, Broadcom will try to impose its business structure on CA, screw up the integration, and kill a storied IT company over time through mismanagement. Note that Amalgam Insights does not recommend this option.

But there is some alignment here, and it will be fascinating to see how Broadcom takes advantage of CA's considerable IT monitoring capabilities, leverages CA's business to increase chip sales, and uses CA's cash flow to continue Broadcom's massive M&A efforts.


“Walking a Mile in My Shoes” With Skillsoft’s Leadership Development Program: A Market Milestone

In a recently published Market Milestone, Todd Maddox, Ph.D., Learning Scientist and Research Fellow for Amalgam Insights, evaluated Skillsoft's Leadership Development Program (SLDP) from a learning science perspective. This involves evaluating the content as well as the learning design and delivery. Amalgam's overall evaluation is that SLDP content is highly effective. The content is engaging and well-constructed, with a nice mix of high-level commentary from subject matter experts, dramatic and pragmatic storytelling from a consistent cast of characters faced with real-world problems, and a mentor to guide the leader-in-training through the process. Each course is approximately one hour in length and comprises short 5-to-10-minute video segments built with single-concept micro-learning in mind.

From a learning design and delivery standpoint, the offering is also highly effective. Brief, targeted, 5 to 10 minute content is well-suited to the working memory and attentional resources available to the learner. Each course begins with a brief reflective question that primes the cognitive system in preparation for the subsequent learning and activates existing knowledge, thus providing a rich context for learning. The Program is grounded in a storytelling, scenario-based training approach with a common set of characters and a “mentor” who guides the training. This effectively recruits the cognitive skills learning system in the brain while simultaneously activating emotion and motivation centers in the brain. This draws the learner into the situation and they begin to see themselves as part of the story. This “walk a mile in my shoes” experience increases information retention and primes the learner for experiential behavior change.

For more information, read the full Market Milestone on the Skillsoft website.


Amalgam Provides 4 Big Recommendations for Self-Service BI Success

 

Recently, my colleague Todd Maddox, Ph.D., the most-cited analyst in the corporate training world, and I were looking at the revolution of self-service BI, which allows business analysts and scientists to explore and analyze their own data quickly and easily. At this point, any BI solution lacking a self-service option should not be considered a general business solution.

However, businesses still struggle to teach and onboard employees on self-service solutions, because self-service represents a new paradigm for administration and training, including the brain science challenges of training for IT. In light of these challenges, Dr. Maddox and I have the following four recommendations for better BI adoption.

  1. Give every employee a hands-on walkthrough. If self-service is important enough to invest in, it is important enough to train for as well. The walkthrough doesn't have to be long, but even 15-30 minutes spent helping each employee understand how to start accessing data is important.
  2. Drive a Culture of Curiosity. Self-service BI is only as good as the questions that people ask. In a company where employees are set in their ways and not focused on continuous improvement, self-service BI just becomes another layer of shelfware.

    Maddox adds: The “shelfware” comment is spot on. I was a master of putting new technology on the shelf! If what I have now works for my needs, then I need to be convinced, quickly and efficiently, that this new approach is better. I suggest asking users what they want to use the software for. If you can put users into one of 4 or 5 bins of business use cases, then you can customize the training and onboard more quickly and effectively.

  3. Build short training modules for key challenges in each department. This means that departmental managers need to commit to recording, say, 2-3 short videos that will cover the basics for self-service. Service managers might be looking for missed SLAs while sales managers look for close rates and marketing managers look for different categories of pipeline. But across these areas, the point is to provide a basic “How-to” so that users can start looking for the right answers.

    Maddox adds: Businesses are strongly urged to include 4 or 5 knowledge check questions for each video. Knowledge testing is one of the best ways to reinforce training. It also provides quick insight into which aspects of your video are effective and which are not. Train by testing!

  4. Analytics knowledge must become readily available. As users start using BI, they need to understand the depth and breadth of what is possible with BI: formulas, workflows, regression, and other basic tools. This might range from a simple aggregation of useful YouTube videos to a formal program developed in a corporate learning platform.

With these training tips from one of the top BI influencers and the top-cited training analyst on the planet, we hope you are better equipped to support self-service BI at scale for your business.


Data Science Platforms News Roundup, June 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Anaconda, Cloudera, Databricks, Dataiku, Datawatch, Domino, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta.


Destroying the CEO Myth: Redefining The Power Dynamics of Managing DevOps

Tom Petrocelli, Amalgam Insights Research Fellow

I am constantly asked the question "What does one have to do to implement DevOps?" or some variant. Most people who ask this question say they have spent time searching for an answer. The pat answers they encounter are typically either technology-based ("buy these products and achieve DevOps magic") or management-based, such as "create a DevOps culture." Both are vague, flippant, and decidedly unhelpful.

 

My response is twofold. First, technology and tools follow management and culture. Tools do not make culture, and a technology solution without management change is a waste. So, change the culture and management first. Unfortunately, that's the hard part. When companies talk about changing culture for DevOps, they often mean implementing multifunctional teams or something less than that. Throwing disparate disciplines into an unregulated melting pot doesn't help. These teams can end up as dysfunctional as any other management or project structure. Team members will bicker over implementation and try to protect their hard-won territory.

 

As the old adage goes, “everything old is new again” and so-called DevOps culture is no different. Multi-functional teams are just a flavor of matrix management which has been tried over and over for years. They suffer from the same problems. Team members have to serve two masters and managers act like a group of dogs with one tree among them. Trying to please both the project leader and their functional management creates inherent conflicts.

 

Another view of creating DevOps culture is what I think of as the "CEO Buy-in Approach." Whenever there is new thinking in IT, there always seems to be advocacy for a top-down approach that starts with the CEO or CIO "buying in" to the concept. After that, magic happens and everyone holds hands and sings together. Except that they don't. This approach is heavy-handed and reflects an unrealistic view of how companies, especially large companies, operate. If simply ordering people to work well together was all it took, there would be no dysfunctional companies or departments.

 

A variation on this theme advocates picking a leader (or two, if you have two-in-the-box leadership) to make everyone work together happily. Setting aside the difficulty of finding people with broad enough experience to lead multi-disciplinary teams, this approach leads to what I have always called "The Product Manager Problem."

 

The problem that all new product managers face is the realization that they have all the responsibility and none of the power to accomplish their mission.

 

That’s because responsibility for the product concentrates in one person, the product manager, and all other managers can diffuse their responsibility across many products or functions.

 

Having a single leader responsible for making multi-functional teams work creates a lack of individual accountability. The leader, not the team, is held accountable for the project while the individual team members are still accountable to their own managers. This may work when the managers and project team leaders all have great working relationships. In that case, you don’t need a special DevOps structure. Instead, a model that creates a separate project team leader or leaders enables team dysfunction and the ability to maintain silos through lack of direct accountability. You see this when you have a Scrum Master, Product Owner, or Release Manager who has all the responsibility for a project.

 

The typical response to this criticism of multi-functional teams (and the no-power Product Manager) is that leaders should be able to influence and cajole the team, despite having no real authority. This is ridiculous and refuses to accept that individual managers and the people who work for them are motivated to maintain their own power. Making the boss look good works well when the boss is signing your evaluation and deciding on your raise. Sure, project and team leaders can be made part of the evaluation process but, really, who has the real power here? The functional manager in control of many people and resources, or the leader of one small team?

 

One potential solution to the DevOps cultural conundrum is collective responsibility. In this scheme, all team members benefit from the success of the project or are hurt by its failure. Think of this as the combined arms combat team model. In the Army, multi-functional combined arms teams are put together for specific missions. The team is held responsible for the overall mission. They are responsible collectively and individually. While the upper echelons hold the combined arms combat team responsible for the mission, the team leader has the ability to hold individuals accountable.

 

Can anyone imagine an Army or Marine leader being let off the hook for mission failure because one of their people didn’t perform? Of course not, but they also have mechanisms for holding individual soldiers accountable for their performance.

 

In this model, DevOps teams would collectively be held responsible for on-time completion of the entire project, as would the entire management chain. Individual team members would have much of their evaluation based on this, and the team leader would have the power to remediate nonperformance, including removing a team member who is not doing their job (i.e., firing them). The leader would also need the ability to train up one function to fill the role of another if a person performing a role wasn't up to snuff or had to be removed. It would still be up to the "chain of command" to provide a reasonable mission with appropriate resources.

 

Ultimately, anyone in the team could rise up and lead this or another team no matter their speciality. There would be nothing holding back an operations specialist from becoming the Scrum Master. If they could learn the job, they could get it.

 

The very idea of a specialist would lose power, allowing team members to develop talents no matter their job title.

 

I worked in this model years ago and it was successful and rewarding. Everyone helped everyone else and had a stake in the outcome. People learned each other’s jobs, so they could help out when necessary, learning new skills in the process. It wasn’t called DevOps but it’s how it operated. It’s not a radical idea but there is a hitch – silo managers would either lose power or even cease to exist. There would be no Development Manager or Security Manager. Team members would win, the company would win, but not everyone would feel like this model works for them.

 

This doesn’t mean that all silos would go away. There will still be operations and security functions that maintain and monitor systems. The security and ops people who work on development projects just wouldn’t report into them. They would only be responsible to the development team but with full power (and resources) to make changes in production systems.

 

Without collective responsibility, free of influence from functional managers, DevOps teams will never be more than a fresh coat of paint on rotting wood. It will look pretty but underneath, it’s crumbling.


Tom Petrocelli Advises: Adopt Chaos Engineering to Preserve System Resilience

Today in CMSWire, Amalgam Insights Research Fellow Tom Petrocelli advises the developer community to adopt chaos engineering.

As reliable IT infrastructure becomes increasingly important for organizations, traditional resilience testing methods fall short in tech ecosystems where root-cause issues are increasingly difficult to manage, control, and fix. Rather than focusing purely on tracing the lineage of catastrophic issues, Petrocelli advises testing with abrupt failures that mimic real-world issues.
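
To make the idea concrete, here is a minimal, hypothetical sketch of failure injection in Python (not code from Tom's article or from any specific chaos engineering tool): wrap a service call so that it randomly fails or slows down, then observe whether callers degrade gracefully.

    import random
    import time

    def chaotic(failure_rate=0.2, max_delay=0.5):
        """Wrap a function so it randomly fails or slows down, mimicking real-world faults."""
        def decorator(func):
            def wrapper(*args, **kwargs):
                if random.random() < failure_rate:
                    raise ConnectionError("chaos: injected dependency failure")
                time.sleep(random.uniform(0, max_delay))  # injected latency
                return func(*args, **kwargs)
            return wrapper
        return decorator

    @chaotic(failure_rate=0.3)
    def fetch_inventory():
        # Stand-in for a real downstream call; the name and payload are illustrative only.
        return {"sku-42": 17}

    # Exercise the call repeatedly and confirm the caller falls back or retries instead of crashing.
    for attempt in range(5):
        try:
            print(fetch_inventory())
        except ConnectionError as err:
            print(f"attempt {attempt}: {err} -- fallback or retry logic should engage here")

In practice, the same principle is applied to production-like systems under controlled conditions, with monitoring in place to measure how the overall service responds to the injected faults.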

To learn more about the approach of chaos engineering to managing scale-out, highly distributed, and varied infrastructure environments, read Tom’s full thoughts on CMSWire.


Hyoun Park Interviewed on Onalytica

Today, Onalytica, a leader in influencer marketing, published an interview with Hyoun Park as a key influencer in the Business Intelligence and Analytics markets. In this interview, Hyoun provides guidance on:

  • How he became an expert in the Analytics and BI space based on over 20 years of experience
  • The 4 Key Tipping Points for BI that Hyoun is excited about
  • Hyoun’s favorite BI influencers, including Jen Underwood, Doug Henschen, John Myers, Claudia Imhoff, and Howard Dresner
  • The 4 trends that will drive the BI industry over the next 12 months

To see what inspires this influencer, read the full interview on the Onalytica website.


What Data Science Platform Suits Your Organization’s Needs?

This summer, my Amalgam Insights colleague Hyoun Park and I will be teaming up to address that question. When it comes to data science platforms, there's no such thing as "one size fits all." We are writing this landscape because understanding how to scale data science beyond individual experiments and integrate it into your business is difficult. By breaking down the key characteristics of the data science platform market, this landscape will help potential buyers choose the appropriate platform for their organizational needs. We will examine the questions that serve as key differentiators for purchasing decisions, determining which characteristics, functionalities, and policies separate platforms that support introductory data science workflows from those that support scaled-up, enterprise-grade workflows.



The Learning Science Perspective: Why Degreed Acquired Pathgather to Rapidly Grow the Learning Experience Platform Market

On June 20, 2018, Degreed acquired Pathgather. The terms of the acquisition were not disclosed. All Pathgather employees are joining Degreed, creating a team of nearly 250 employees. This represents the merger of two companies present at the birth of the now-booming Learning Experience Platform (LEP) industry. Degreed and Pathgather have been direct competitors since the start. As a single entity, they are formidable, with a client base of more than 200 organizations, over 4 million licensed users, and nearly $100 million in funding. From a learning science perspective (the marriage of psychology and brain science), Degreed is now stronger as well.


Mapping Multi-Million Dollar Business Value from Machine Learning Projects

Amalgam has just posted a new report: The Roadmap to Multi-Million Dollar Machine Learning Value with DataRobot. I’m especially excited about this report for a couple of reasons.

First, this report documents multiple clear value propositions for machine learning that led to demonstrated annual value of over a million dollars. This is an important metric at a time when many enterprises are still asking why they should be putting money into machine learning.

Second, Amalgam introduces a straightforward map for understanding how to construct machine learning products that are designed to create multi-million dollar value. Rather than simply hoping and wishing for a good financial outcome, companies can actually model whether their project is likely to justify the cost of machine learning (especially the specialized mathematical and programming skills needed to make this work).

Amalgam provides the following starting point for designing multi-million dollar machine learning value:

Stage One is discovering the initial need for machine learning, which may sound tautological. “To start machine learning, find the need for machine learning…” More specifically, look for opportunities to analyze hundreds of variables that may be related to a specific outcome, but where relationships cannot be quickly analyzed by gut feel or basic business intelligence. And look for opportunities where employees already have gut feelings that a new variable may be related to a good business outcome, such as better credit risk scoring or higher quality supply chain management. Start with your top revenue-creating or value-creating department and then deeply explore.

Stage Two is about financial analysis and moving to production. Ideally, your organization will find a use case involving over $100 million in value. This does not mean that your organization is making $100 million in revenue, as activities such as financial loans, talent recruiting, and preventative maintenance can potentially lead to billions of dollars in capital or value being created even if the vendor only collects a small percentage as a finder’s fee, interest, or maintenance fee. Once the opportunity exists, move on it. Start small and get value.
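
As a rough illustration of that Stage Two financial screen, here is a minimal back-of-envelope sketch in Python; every number and variable name below is a hypothetical assumption for illustration, not a figure from the report or from DataRobot.

    # Does the expected lift from a machine learning project justify its fully loaded cost?
    addressable_value = 100_000_000   # annual value of the process being improved (the Stage Two threshold)
    expected_lift = 0.02              # assumed 2% improvement attributable to the model
    capture_rate = 0.5                # assumed share of that improvement the business actually keeps

    team_cost = 600_000               # assumed data science salaries and overhead per year
    platform_cost = 150_000           # assumed tooling, infrastructure, and licensing per year

    annual_benefit = addressable_value * expected_lift * capture_rate
    annual_cost = team_cost + platform_cost
    roi = (annual_benefit - annual_cost) / annual_cost

    print(f"Expected annual benefit: ${annual_benefit:,.0f}")   # $1,000,000 with these assumptions
    print(f"Expected annual cost:    ${annual_cost:,.0f}")      # $750,000 with these assumptions
    print(f"Simple ROI: {roi:.1%}")                             # roughly 33% with these assumptions

If the modeled benefit cannot comfortably clear the cost even under optimistic assumptions, the use case is probably not the right place to start.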

Finally, take those lessons learned and start building an internal Machine Learning Best Practices or Center of Excellence organization. Again, start small and focus on documenting what works within your organization, including the team of employees needed to get up and running, the financial justification needed to move forward, and the technical resources needed to operationalize machine learning on a scalable and predictable basis. Drive the cost of machine learning down internally so that your organization can tackle smaller problems without prohibitive labor, cost, or time requirements.

This blog is just a starting point for the discussion of machine learning value Amalgam covers in The Roadmap to Multi-Million Dollar Machine Learning Value with DataRobot. Please check out the rest of the report as we discuss the Six Stages of moving from BI to AI.

This report also defines a financial ROI model associated with a business-based approach to machine learning.

If you have any questions about this blog, the report, or how to engage Amalgam Insights in providing strategy and vendor recommendations for your data science and machine learning initiatives, please feel free to contact us at info@amalgaminsights.com.