
Amalgam Provides 4 Big Recommendations for Self-Service BI Success

Recently, my colleague Todd Maddox, Ph.D., the most-cited analyst in the corporate training world, and I were looking at the revolution of self-service BI, which allows business analysts and data scientists to quickly explore and analyze their own data. At this point, any BI solution lacking a self-service option should not be considered a general business solution.

However, businesses still struggle to onboard and train employees on self-service solutions, because self-service represents a new paradigm for administration and training, including the brain science challenges of IT training. In light of these challenges, Dr. Maddox and I offer the following four recommendations for better BI adoption.

  1. Give every employee a hands-on walkthrough. If self-service is important enough to invest in, it is important enough to train for as well. The walkthrough doesn’t have to be long; even 15-30 minutes spent helping each employee understand how to start accessing data is worthwhile.
  2. Drive a culture of curiosity. Self-service BI is only as good as the questions that people ask. In a company where employees are set in their ways or not focused on continuous improvement, self-service BI just becomes another layer of shelfware.

    Maddox adds: The “shelfware” comment is spot on. I was a master of putting new technology on the shelf! If what I have now works for my needs, then I need to be convinced, quickly and efficiently, that this new approach is better. I suggest asking users what they want to use the software for. If you can put users into one of 4 or 5 bins of business use cases, then you can customize the training and onboard more quickly and effectively.

  3. Build short training modules for key challenges in each department. This means that departmental managers need to commit to recording, say, 2-3 short videos that will cover the basics for self-service. Service managers might be looking for missed SLAs while sales managers look for close rates and marketing managers look for different categories of pipeline. But across these areas, the point is to provide a basic “How-to” so that users can start looking for the right answers.

    Maddox adds: Businesses are strongly urged to include 4 or 5 knowledge check questions for each video. Knowledge testing is one of the best ways to reinforce training. It also provides quick insight into which aspects of your video are effective and which are not. Train by testing!

  4. Make analytics knowledge readily available. As users start working with BI, they need to discover the depth and breadth of what is possible: formulas, workflows, regression, and other basic tools (see the sketch below for a concrete example). This support might range from a curated collection of useful YouTube videos to a formal program developed in a corporate learning platform.
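
As a purely illustrative example of the kind of basic analytics skill this training might cover, here is a minimal Python sketch of a simple linear regression. The data and column meanings are invented, and it assumes Python 3.10+ for statistics.linear_regression:

```python
# A minimal "basic tool" example for a self-service BI user: fit a
# least-squares line relating marketing spend to pipeline created.
# All figures are made up for illustration.
from statistics import linear_regression  # Python 3.10+

spend = [50, 60, 75, 80, 95, 110]          # monthly marketing spend ($K)
pipeline = [400, 470, 560, 590, 700, 820]  # pipeline created ($K)

# pipeline ≈ slope * spend + intercept
slope, intercept = linear_regression(spend, pipeline)
print(f"Each additional $1K of spend tracks with ~${slope:.1f}K of pipeline")
print(f"Predicted pipeline at $120K spend: ${slope * 120 + intercept:.0f}K")
```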

With these training tips from one of the top BI influencers and the most-cited training analyst on the planet, we hope you are better equipped to support self-service BI at scale for your business.


Data Science Platforms News Roundup, June 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Anaconda, Cloudera, Databricks, Dataiku, Datawatch, Domino, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta. Continue reading Data Science Platforms News Roundup, June 2018


Destroying the CEO Myth: Redefining The Power Dynamics of Managing DevOps

Tom Petrocelli, Amalgam Insights Research Fellow

I am constantly asked the question “What does one have to do to implement DevOps?”, or some variant. Most people who ask this question say they have spent time searching for an answer. The pat answers they encounter are typically either technology-based (“buy these products and achieve DevOps magic”) or management-based, such as “create a DevOps culture.” Both are vague, flippant, and decidedly unhelpful.

My response is twofold. First, technology and tools follow management and culture. Tools do not make culture, and a technology solution without management change is a waste. So, change the culture and management first. Unfortunately, that’s the hard part. When companies talk about changing culture for DevOps, they often mean implementing multifunctional teams, or something less than that. Throwing disparate disciplines into an unregulated melting pot doesn’t help. These teams can end up as dysfunctional as any other management or project structure. Team members will bicker over implementation and try to protect their hard-won territory.

As the old adage goes, “everything old is new again,” and so-called DevOps culture is no different. Multi-functional teams are just a flavor of matrix management, which has been tried over and over for years, and they suffer from the same problems. Team members have to serve two masters, and managers act like a group of dogs with one tree among them. Trying to please both the project leader and their functional management creates inherent conflicts.

Another view of creating DevOps culture is what I think of as the “CEO Buy-in Approach.” Whenever there is new thinking in IT, there always seems to be advocacy for a top-down approach that starts with the CEO or CIO “buying in” to the concept. After that, magic happens and everyone holds hands and sings together. Except that they don’t. This approach is heavy-handed and reflects an unrealistic view of how companies, especially large companies, operate. If simply ordering people to work well together were all it took, there would be no dysfunctional companies or departments.

A variation on this theme advocates picking a leader (or two, if you have two-in-the-box leadership) to make everyone work together happily. Setting aside the difficulty of finding people with broad enough experience to lead multi-disciplinary teams, this leads to what I have always called “The Product Manager Problem.”

The problem that all new product managers face is the realization that they have all the responsibility and none of the power to accomplish their mission. That’s because responsibility for the product concentrates in one person, the product manager, while all other managers can diffuse their responsibility across many products or functions.

Having a single leader responsible for making multi-functional teams work creates a lack of individual accountability. The leader, not the team, is held accountable for the project, while the individual team members are still accountable to their own managers. This may work when the managers and project team leaders all have great working relationships, but in that case you don’t need a special DevOps structure. Otherwise, a model that designates a separate project team leader or leaders enables team dysfunction and lets members maintain silos through lack of direct accountability. You see this when a Scrum Master, Product Owner, or Release Manager holds all the responsibility for a project.

The typical response to this criticism of multi-functional teams (and the no-power Product Manager) is that leaders should be able to influence and cajole the team despite having no real authority. This is ridiculous and refuses to accept that individual managers, and the people who work for them, are motivated to maintain their own power. Making the boss look good works well when the boss is signing your evaluation and deciding on your raise. Sure, project and team leaders can be made part of the evaluation process, but who really has the power here: the functional manager in control of many people and resources, or the leader of one small team?

One potential solution to the DevOps cultural conundrum is collective responsibility. In this scheme, all team members benefit from the success of the project or are hurt by its failure. Think of this as the combined arms combat team model. In the Army, multi-functional combined arms teams are put together for specific missions. The team is held responsible for the overall mission, collectively and individually. While the upper echelons hold the combined arms combat team responsible for the mission, the team leader has the ability to hold individuals accountable.

Can anyone imagine an Army or Marine leader being let off the hook for mission failure because one of their people didn’t perform? Of course not, but those leaders also have mechanisms for holding individual soldiers accountable for their performance.

In this model, DevOps teams would be held collectively responsible for on-time completion of the entire project, as would the entire management chain. Individual team members would have much of their evaluation based on this, and the team leader would have the power to remediate nonperformance, including removing a team member who is not doing their job (i.e., firing them). The leader would also need the ability to train one function’s specialist to fill another’s role if the person performing that role wasn’t up to snuff or had to be removed. It would still be up to the “chain of command” to provide a reasonable mission with appropriate resources.

Ultimately, anyone on the team could rise up and lead this or another team, no matter their specialty. There would be nothing holding back an operations specialist from becoming the Scrum Master. If they could learn the job, they could get it.

The very idea of a specialist would lose power, allowing team members to develop talents no matter their job title.

I worked in this model years ago, and it was successful and rewarding. Everyone helped everyone else and had a stake in the outcome. People learned each other’s jobs so they could help out when necessary, learning new skills in the process. It wasn’t called DevOps, but that’s how it operated. It’s not a radical idea, but there is a hitch: silo managers would lose power or even cease to exist. There would be no Development Manager or Security Manager. Team members would win, the company would win, but not everyone would feel like this model works for them.

This doesn’t mean that all silos would go away. There would still be operations and security functions that maintain and monitor systems. The security and ops people who work on development projects just wouldn’t report into those silos. They would be responsible only to the development team, but with full power (and resources) to make changes in production systems.

Without collective responsibility, free of influence from functional managers, DevOps teams will never be more than a fresh coat of paint on rotting wood. It will look pretty but underneath, it’s crumbling.


Tom Petrocelli Advises: Adopt Chaos Engineering to Preserve System Resilience

Today in CMSWire, Amalgam Insights Research Fellow Tom Petrocelli advises the developer community to adopt chaos engineering.

As reliable IT infrastructure becomes increasingly important to organizations, traditional resilience testing methods fall short in tech ecosystems where root causes are increasingly difficult to trace, control, and fix. Rather than focusing purely on tracing the lineage of catastrophic issues, Petrocelli advises testing with abrupt failures that mimic real-world conditions.
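
To make the technique concrete, here is a minimal, hypothetical chaos experiment sketched in Python. It is not Petrocelli’s methodology or a production tool; it simply injects abrupt, real-world-style failures (latency spikes ending in timeouts) around a dependency call and checks whether the failure count stays within a service-level budget:

```python
# A toy chaos experiment: wrap a dependency call with random failure
# injection, then verify the system stays within its error budget.
import random
import time

def dependency_call():
    """Stand-in for a downstream service call (hypothetical)."""
    time.sleep(0.01)  # nominal latency
    return "ok"

def with_chaos(call, failure_rate=0.2, max_delay=0.5):
    """Randomly inject a latency spike that ends in a timeout."""
    if random.random() < failure_rate:
        time.sleep(random.uniform(0, max_delay))  # simulated slow network
        raise TimeoutError("injected failure")
    return call()

def run_experiment(trials=100, error_budget=30):
    """Exercise the call under chaos and report against the budget."""
    errors = 0
    for _ in range(trials):
        try:
            with_chaos(dependency_call)
        except TimeoutError:
            errors += 1  # in a real system: did retries and fallbacks engage?
    print(f"{errors}/{trials} failures; within budget: {errors <= error_budget}")

if __name__ == "__main__":
    run_experiment()
```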

To learn more about the approach of chaos engineering to managing scale-out, highly distributed, and varied infrastructure environments, read Tom’s full thoughts on CMSWire.


Hyoun Park Interviewed on Onalytica

Today, Onalytica, a leader in influencer marketing, published an interview with Hyoun Park as a key influencer in the Business Intelligence and Analytics markets. In this interview, Hyoun provides guidance on:

  • How he became an expert in the Analytics and BI space based on over 20 years of experience
  • The 4 Key Tipping Points for BI that Hyoun is excited about
  • Hyoun’s favorite BI influencers, including Jen Underwood, Doug Henschen, John Myers, Claudia Imhoff, and Howard Dresner
  • The 4 trends that will drive the BI industry over the next 12 months

To see what inspires this influencer, read the full interview on the Onalytica website.


What Data Science Platform Suits Your Organization’s Needs?

This summer, my Amalgam Insights colleague Hyoun Park and I will be teaming up to address that question. When it comes to data science platforms, there’s no such thing as “one size fits all.” We are writing this landscape because scaling data science beyond individual experiments and integrating it into your business is difficult to do well. By breaking down the key characteristics of the data science platform market, this landscape will help potential buyers choose the appropriate platform for their organizational needs. We will examine the key questions that differentiate platforms supporting introductory data science workflows from those supporting scaled-up, enterprise-grade workflows: the characteristics, functionalities, and policies that should determine a purchase.

Continue reading What Data Science Platform Suits Your Organization’s Needs?


The Learning Science Perspective: Why Degreed Acquired Pathgather to Rapidly Grow the Learning Experience Platform Market

On June 20, 2018, Degreed acquired Pathgather. The terms of the acquisition were not disclosed. All Pathgather employees are joining Degreed, creating a team of nearly 250 employees. This represents the merger of two companies present at the birth of the now-booming Learning Experience Platform (LEP) industry. Degreed and Pathgather have been direct competitors since the start. As a single entity, they are formidable, with a client base of more than 200 organizations, over 4 million licensed users, and nearly $100 million in funding. From a learning science perspective (the marriage of psychology and brain science), Degreed is now stronger as well. Continue reading The Learning Science Perspective: Why Degreed Acquired Pathgather to Rapidly Grow the Learning Experience Platform Market


Mapping Multi-Million Dollar Business Value from Machine Learning Projects

Amalgam has just posted a new report: The Roadmap to Multi-Million Dollar Machine Learning Value with DataRobot. I’m especially excited about this report for a couple of reasons.

First, this report documents multiple clear value propositions for machine learning that led to documented annual value of over a million dollars. This is an important metric to demonstrate at a time when many enterprises are still asking why they should be putting money into machine learning.

Second, Amalgam introduces a straightforward map for understanding how to construct machine learning products that are designed to create multi-million dollar value. Rather than simply hoping and wishing for a good financial outcome, companies can actually model whether their project is likely to justify the cost of machine learning (especially the specialized mathematical and programming skills needed to make this work).
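
For illustration only, here is the kind of back-of-the-envelope model this approach makes possible, sketched in Python. The function, its parameters, and all the figures are hypothetical assumptions, not numbers from the report:

```python
# A hypothetical back-of-the-envelope ROI model for a machine learning
# project. Every figure and parameter name below is an illustrative
# assumption, not data from the DataRobot report.

def ml_project_roi(value_pool,      # annual value the process touches ($)
                   capture_rate,    # share of that pool ML can influence
                   improvement,     # expected lift from the model
                   team_cost,       # data scientists and engineers ($/yr)
                   platform_cost):  # tooling and infrastructure ($/yr)
    """Return (annual_benefit, annual_cost, roi_multiple)."""
    benefit = value_pool * capture_rate * improvement
    cost = team_cost + platform_cost
    return benefit, cost, benefit / cost

# Example: a $100M loan portfolio where better risk scoring on the 40%
# of marginal applications yields a 5% improvement in losses avoided.
benefit, cost, roi = ml_project_roi(
    value_pool=100_000_000, capture_rate=0.4, improvement=0.05,
    team_cost=600_000, platform_cost=150_000)
print(f"Benefit ${benefit:,.0f} vs. cost ${cost:,.0f} -> {roi:.1f}x ROI")
```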

Amalgam provides the following starting point for designing multi-million dollar machine learning value:

Stage One is discovering the initial need for machine learning, which may sound tautological. “To start machine learning, find the need for machine learning…” More specifically, look for opportunities to analyze hundreds of variables that may be related to a specific outcome, but where relationships cannot be quickly analyzed by gut feel or basic business intelligence. And look for opportunities where employees already have gut feelings that a new variable may be related to a good business outcome, such as better credit risk scoring or higher quality supply chain management. Start with your top revenue-creating or value-creating department and then deeply explore.
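
As a purely illustrative sketch of that kind of opportunity hunting, the following Python snippet screens hundreds of synthetic candidate variables for a quick association with an outcome. All column names and data are invented, and a real project would follow up with proper modeling and validation:

```python
# Hypothetical Stage One screen: rank hundreds of candidate variables by
# how strongly they correlate with a business outcome, to spot where a
# deeper machine learning effort might pay off.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(500, 300)),
                  columns=[f"var_{i}" for i in range(300)])
# Pretend the outcome (e.g., credit losses) depends on a few variables
df["outcome"] = 0.8 * df["var_3"] - 0.5 * df["var_42"] + rng.normal(size=500)

# Correlate every candidate variable with the outcome and rank them
ranked = (df.drop(columns="outcome")
            .corrwith(df["outcome"])
            .abs()
            .sort_values(ascending=False))
print(ranked.head(10))  # candidates worth a closer machine learning look
```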

Stage Two is about financial analysis and moving to production. Ideally, your organization will find a use case involving over $100 million in value. This does not mean that your organization is making $100 million in revenue, as activities such as financial loans, talent recruiting, and preventative maintenance can potentially lead to billions of dollars in capital or value being created even if the vendor only collects a small percentage as a finder’s fee, interest, or maintenance fee. Once the opportunity exists, move on it. Start small and get value.

Finally, take those lessons learned and start building an internal Machine Learning Best Practices group or Center of Excellence. Again, start small and focus on documenting what works within your organization, including the team of employees needed to get up and running, the financial justification needed to move forward, and the technical resources needed to operationalize machine learning on a scalable and predictable basis. Drive the cost of machine learning down internally so that your organization can tackle smaller problems that would otherwise be labor-, cost-, and time-prohibitive.

This blog is just a starting point for the discussion of machine learning value Amalgam covers in The Roadmap to Multi-Million Dollar Machine Learning Value with DataRobot. Please check out the rest of the report as we discuss the Six Stages of moving from BI to AI.

This report also defines a financial ROI model associated with a business-based approach to machine learning.

If you have any questions about this blog, the report, or how to engage Amalgam Insights in providing strategy and vendor recommendations for your data science and machine learning initiatives, please feel free to contact us at info@amalgaminsights.com.


5 Stages of The Technology Expense Management Market (re: Calero Acquires Veropath)

In my recent Market Milestone, Calero Acquires Veropath to Bolster its Global Role in Technology Expense Management, I made a quick comment about Veropath as an “accretive acquisition target.” But then I realized that I hadn’t explained what that meant from an Amalgam perspective.

From Amalgam’s perspective, the Technology Expense Management market (aka Telecom Expense Management, although these solutions now regularly manage a wide variety of IT assets, services, and subscriptions) roughly breaks out into companies of five sizes, each with capabilities that could be considered “accretive” to larger organizations. I should add that there are a number of additional TEM companies at these sizes that do not fit these profiles. Such outliers might be very profitable, stable, and good providers, but they are not typically considered great acquisition targets.

The first size is 1-10 employees. These are companies that are usually good at a specific task or have a single product custom-suited to a specific capability, such as automated invoice processing, rate plan optimization, or network data management. The companies in this space tend to combine specialization with technical subject matter expertise, but lack the support staff to manage a large number of clients. Acquisitions at this size are a combination of technology purchases and acquihires.

The second size is 10-30 employees. These Technology Expense Management companies have found a specific geographic, market, or service niche and tend to offer some combination of technology and services. There is a long tail of TEM companies in this category that lack the scale to go national but may have built strong geographic, technical, or process management capabilities. However, these companies typically lack the sales and marketing engine to expand beyond their current size, meaning that further growth will often require outside capital and additional investment in revenue-creating activities.

The third size is roughly 30 to 75 employees. At this size, the TEM vendor has found a strong go-to-market message and is regularly supporting both mid-market and enterprise clients. These vendors have built their own platform, have a significant internal support team, and typically have a strong sales leader who is either the CEO or a VP of Sales. At this point, Amalgam notes that the biggest challenge for these vendors is for the CEO to create a management team empowered to make good decisions, and to let go of those decisions personally. This management challenge is difficult to surmount because it adds a lot of complexity to the business with very little immediate benefit to the CEO or the firm’s employees. However, at this scale, TEM businesses are also a good target for acquisition, as they have built out every business function needed to be a successful and stable long-term business. Roughly speaking, these companies tend to have about $100 million to $500 million in spend under management and run as stable, profitable businesses. There are a number of strong TEM vendors in this space including, but not limited to, Avotus, Ezwim, ICOMM, Mobile Solutions, Network Control, SaaSwedo, SmartBill, Tellennium, Valicom, vCom, VoicePlus, and Wireless Analytics.

The fourth size is between 75 and 1,000 employees. These TEM companies are rarely acquired and start becoming acquirers of other TEM companies because they have successfully built an organization that can scale and run multiple business units. At this size, TEM companies start to manage over a billion dollars a year in spend and tend to be either publicly traded or backed by private equity. At this point, TEM companies also start running into adjacent competitors in markets such as Managed Mobility Services, SaaS Vendor Management, Cloud Service Management, IT Asset Management, and other related IT management areas. This is an interesting area for TEM because, after several years of watching Tangoe acquire businesses at this scale in the early 2010s, multiple new vendors appeared at this scale in the mid-to-late 2010s. Currently, Amalgam considers Calero, Cass, Cimpl, Dimension Data, MDSL, Mobichord, Mer Telemanagement Systems (MTS), One Source Communications, Sakon, and TNX to be representative of this size of large TEM provider.

Currently, the fifth size of 1,000+ employees is a market of one: Tangoe. This company has grown both organically and through acquisition to manage over $38 billion in technology spend, making it roughly six times larger than its nearest competitor. At this size, Tangoe focuses on large enterprise and global management challenges and is positioned to pursue adjacent markets more aggressively. Amalgam believes there is sufficient opportunity in this market for additional firms of this scale, however, and expects one or more of Calero, Cass Information Systems, MDSL, or Sakon to leap into this scale in the next three to five years.

So, when Amalgam refers to “accretive opportunities” in the TEM space from an acquisition perspective, this is the rough context that we use as a starting point. Of course, with the 100+ firms that we track in this market, any particular category has both nuance and personalization in describing individual firms. If you have any questions regarding this blog, please feel free to follow up by emailing info@amalgaminsights.com. And if you’d like to learn more about what Calero has done with this acquisition of Veropath, one of the largest UK-headquartered TEM vendors, please download our Market Milestone, available this week (or as supplies last) for free.


Domino Debuts Data Science Framework

On May 22, Domino held its first Analyst Seminar in advance of its Rev conference for data science leaders. Domino provides an open data science platform to coordinate data science initiatives across enterprises, integrating data scientists, IT, and line of business.

At the Analyst Seminar, Domino introduced its Model Management framework: five pillars supporting a core belief that data science should not be a siloed department or team, and that its resulting models should drive the business. For this to be possible, all relevant stakeholders across the enterprise will need to buy into data science initiatives, as this will involve changing existing business processes to take advantage of the knowledge gained from data science projects.

Continue reading Domino Debuts Data Science Framework