
Looking at Microservices, Containers, and Kubernetes with 2020 Vision

Some years are easier to predict than others. Stability in a market makes tracing the trend line much easier. 2020 looks to be that kind of year for the migration to microservices: stable, with steady progression toward mainstream acceptance.

There is little doubt that IT organizations are moving toward microservices architectures. Microservices, which deconstruct applications into many small parts, remove much of the friction that is common in n-Tier applications when it comes to development velocity. The added resiliency and scalability of microservices in a distributed system are also highly desirable. These attributes promote better business agility, allowing IT to respond to business needs more quickly and with less disruption while helping to ensure that customers have the best experience possible.

Little in this upcoming year seems disruptive or radical; the big changes have already occurred. Instead, this is a year for building out and consolidating: moving past the “what” and “why” and into the “how” and “do”.

Kubernetes will be top of mind for IT in the coming year. From its roots as a humble container orchestrator – one of many in the market – Kubernetes has evolved into a platform for deploying microservices into container clusters. There is more work to do with Kubernetes, especially to help autoscale clusters, but it is now a solid base on which to build modern applications.

No one should delude themselves into thinking that microservices, containers, and Kubernetes are mainstream yet. The vast majority of applications are still based on n-Tier designs deployed to VMs. That’s fine for a lot of applications, but businesses know that it’s not enough going forward. We’ve already seen more traditional companies begin to adopt microservices for at least some portion of their applications. This trend will accelerate in the upcoming year. At some point, microservices and containers will become the default architecture for enterprise applications. That’s a few years from now, but we’re already on the path.

From a vendor perspective, all the biggest companies are now in the Kubernetes market with at least a plain vanilla Kubernetes offering. This includes HPE and Cisco in addition to the companies that have been selling Kubernetes all along, especially IBM/Red Hat, Canonical, Google, AWS, VMware/Pivotal, and Microsoft. The trick for these companies will be to add enough unique value that their offerings don’t appear generic. Leveraging traditional strengths, such as storage for HPE, networking for Cisco, and Java for Red Hat and VMware/Pivotal, is the key to standing out in the market.

The entry of the giants into the Kubernetes space will pose challenges to smaller vendors such as Mirantis and Rancher. With more than 30 Kubernetes vendors in the market, consolidation and loss are inevitable. There’s plenty of value in the smaller firms, but it will be too easy for them to get trampled underfoot.

Expect M&A activity in the Kubernetes space as bigger companies acquihire or round out their portfolios. Kubernetes is now a big vendor market and the market dynamics favor them.

If there is a big danger sign on the horizon, it’s those traditional n-Tier applications that are still in production. At some point, IT will get around to thinking beyond the shiny new greenfield applications and want to migrate the older ones. Since these apps are based on radically different architectures, that won’t be easy. There just aren’t the tools to do this migration well. In short, it’s going to be a lot of work. It’s a hard sell to say that the only choices are either expensive migration projects (on top of all that digital transformation money that’s already been spent) or continuing to support and update applications that no longer meet business needs. Replatforming, or deploying the old parts to the new container platform, will provide less ROI and less value overall. The industry will need another solution.

This may be an opportunity to use all that fancy AI technology that vendors have been investing in to create software to break down an old app into a container cluster. In any event, the migration issue will be a drag on the market in 2020 as IT waits for solutions to a nearly intractable problem.

2020 is the year of the microservice architecture.

Even if that seems too dramatic, it’s not unreasonable to expect significant growth and acceleration in the deployment of Kubernetes-based microservices applications. The market has already begun the process of maturation as it adapts to the needs of larger, mainstream corporations with more stringent requirements. The smart move is to follow that trend line.


Tom Petrocelli to Appear on DM Radio to Discuss Containers and Hybrid Cloud

On January 24, 2019 at 3 PM Eastern, Amalgam Insights’ DevOps and Open Source Research Fellow, Tom Petrocelli, will be sharing his perspectives on the importance of containers in multi-cloud management on the DM Radio episode “Contain Yourself? The Key to Hybrid Cloud.”

This episode will be hosted by Eric Kavanagh, CEO of The Bloor Group, and Petrocelli will be accompanied by Samuel Holcman of the Pinnacle Business Group and Pakshi Rajan of Paxata.

Don’t miss this opportunity to get Tom Petrocelli’s guidance and wisdom on the current state of containers and cloud management!


The View from KubeCon+CloudNativeCon – Containers and Kubernetes Become Enterprise Ready

In case there was any doubt about the direction containers and Kubernetes are going, KubeCon+CloudNativeCon 2018 in Seattle should have dispelled it. The path is clear: the technology is maturing and keeps adding features that make it conducive to mission-critical, enterprise applications. From the very first day, the talk was about service meshes and network functions, logging and traceability, and storage and serverless compute. These are couplets that define the next generation of management, visibility, and core capabilities of a modern distributed application. On top of those are emerging security projects such as SPIFFE and SPIRE, TUF, Falco, and Notary. Management, visibility, growth in core functionality, and security: all of these are critical to making container platforms enterprise-ready.

“The future of containers and Kubernetes as the base of the new stack was on display at KubeCon+CloudNativeCon, and it’s a bright one.”

Tom Petrocelli, Research Fellow, Amalgam Insights

If the scope of KubeCon+CloudNativeCon and the Cloud Native Computing Foundation (CNCF) is any indication, the ecosystem is also growing. This year there were 8,000 people at the conference – a sellout. The CNCF has grown to 300+ vendor members, and there are 46,000 contributors to its projects. That’s a lot of growth compared to just a few years ago. This many people don’t flock to sinking projects.


Containers Continue on Track for 2019: 3 Key Trends For the Maturing Container Ecosystem

Tom Petrocelli, Amalgam Insights Research Fellow

The past few years have been exciting ones for containers. All types of tools are available and a defined deployment pipeline has begun to emerge. Kubernetes and Docker have come to dominate the core technology. That, in turn, has brought the type of stability that allows for wide-scale deployments. The container ecosystem has exploded with lots of new software components that help maintain, manage, and operate container networks. Capabilities such as logging, load balancing, networking, and security that were previously the domain of system-wide software and appliances are now being brought into the individual application as components in the container cluster.

Open Source has played a big part in this process. The Cloud Native Computing Foundation, or CNCF, has projects for all things container. More are added every day. That is in addition to the many other open source projects that support container architectures. The ecosystem just keeps growing.

Where do we go from here, at least through 2019?


Monitoring Containers: What’s Inside YOUR Cluster?

Tom Petrocelli, Amalgam Insights Research Fellow

It’s not news that there is a lot of buzz around containers. As companies begin to widely deploy microservices architectures, containers are the obvious choice with which to implement them. As companies deploy container clusters into production, however, an issue has to be dealt with immediately: container architectures have a lot of moving parts. The whole point of microservices is to break apart monolithic components into smaller services. This means that what was once a big process running on a resource-rich server is now multiple processes spread across one or many servers. On top of the architecture change, a container cluster usually encompasses a variety of containers that are not application code. These include security, load balancing, network management, web servers, etc. Entire frameworks, such as NGINX Unit 1.0, may be deployed as infrastructure for the cluster. Services that used to be centralized in a network are now incorporated into the application itself as part of the container network.

Because an “application” is now really a collection of smaller services running in a virtual network, there’s a lot more that can go wrong. The more containers, the more opportunities for misbehaving components. For example:

  • Network issues. No matter how the network is actually implemented, there are opportunities for typical network problems to emerge including deadlocked communication and slow connections. Instead of these being part of monolithic network appliances, they are distributed throughout a number of local container clusters.
  • Apps that are slow and make everything else slower. Poor performance of a critical component in the cluster can drag down overall performance. With microservices, the entire app can be waiting on a service that is not responding quickly.
  • Containers that are dying and respawning. A container can crash which may cause an orchestrator such as Kubernetes to respawn the container. A badly behaving container may do this multiple times.
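The respawn behavior in that last bullet can be pictured as a small supervisor loop. The sketch below is illustrative Python, not Kubernetes code; the function names and backoff values are assumptions, but the pattern (restart a crashed workload with exponential backoff, give up after a limit) is roughly what an orchestrator’s crash-loop handling does.

```python
import time

def supervise(start, max_restarts=5, base_delay=0.01):
    """Restart a crashing task with exponential backoff, roughly
    mirroring an orchestrator's crash-loop handling."""
    restarts = 0
    while True:
        try:
            return start()
        except Exception:
            if restarts >= max_restarts:
                raise  # too many crashes: surface the failure
            time.sleep(base_delay * (2 ** restarts))  # back off before respawning
            restarts += 1

# A task that fails twice before succeeding, simulating a flaky container.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("container crashed")
    return "healthy"
```

A badly behaving container is simply one that keeps landing in the `except` branch, which is why repeated respawns are a signal worth monitoring in their own right.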

These are just a few examples of the types of problems that a container cluster can have that negatively affect a production system. None of these is new to applications in general. Applications and services can fail, lock up, or slow down in other architectures. There are simply many more parts in a container cluster, creating more opportunities for problems to occur. In addition, typical application monitoring tools aren’t necessarily designed for container clusters. There are events that traditional application monitoring will miss, especially issues with the containers and Kubernetes themselves.

To combat these issues, a generation of products and open source projects is emerging, either retrofitted or purpose-built for container clusters. In some cases, app monitoring has been extended to include containers (New Relic comes to mind). New companies, such as LightStep, have also entered the market for application monitoring but with containers in mind from the onset. Just as exciting are the open source projects that are gaining steam. Prometheus (for application monitoring), OpenTracing (network tracing), and Jaeger (transaction tracing) are some of the open source projects that help gather data about the functioning of a cluster.
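Prometheus, for instance, works by scraping applications over HTTP in a plain-text exposition format. Here is a minimal sketch of what a single counter looks like in that format; the function and metric names are illustrative (this hand-rolls the text format for clarity rather than using an official client library).

```python
def render_counter(name, help_text, value, labels=None):
    """Render one counter in the Prometheus text exposition format --
    the payload a scraper reads from an application's /metrics endpoint."""
    label_str = ""
    if labels:
        # Prometheus label sets: key="value" pairs inside braces.
        pairs = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        label_str = "{" + pairs + "}"
    return (f"# HELP {name} {help_text}\n"
            f"# TYPE {name} counter\n"
            f"{name}{label_str} {value}\n")

metrics = render_counter("http_requests_total", "Total HTTP requests served.",
                         42, {"method": "get", "code": "200"})
```

Because each service in the cluster exposes an endpoint like this, the monitoring system can sit inside the cluster and collect per-container data that appliance-era tools never saw.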

What makes these projects and products interesting is that they place monitoring components in the clusters, close to the applications’ components, and take advantage of container and Kubernetes APIs. This placement helps sysops have a more complete view of all the parts and interactions of the container cluster. Information that is unique to containers and Kubernetes is available alongside traditional application and network monitoring data.

As IT departments start to roll scalable container clusters into production, knowing what is happening within is essential. Thankfully, the ecosystem for monitoring is evolving quickly, driven equally by companies and open source communities.


Market Milestone: Red Hat Acquires CoreOS Changing the Container Landscape

Red Hat Acquires CoreOS

We have just published a new document from Tom Petrocelli analyzing Red Hat’s $250 million acquisition of CoreOS and why it matters for DevOps and Systems Architecture managers.

This report is recommended for CIOs, System Architects, IT Managers, System Administrators, and Operations Managers who are evaluating CoreOS and Red Hat as container solutions to support their private and hybrid cloud solutions. In this document, Tom provides both the upside and concerns that your organization needs to consider in evaluating CoreOS.

This document includes:
A summary of Red Hat’s Acquisition of CoreOS
Why It Matters
Top Takeaways
Contextualizing CoreOS within Red Hat’s private and hybrid cloud portfolio
Alternatives to Red Hat CoreOS
Positive and negative aspects for current Red Hat and CoreOS customers

To download this report, please go to our Research section.


Cloud Cost Management Vendor Profile: IBM Turbonomic

Amalgam Insights continues to present its list of Distinguished Vendors for Cloud Cost and Optimization Management. This matters because analysts assessed nearly 30 providers for this effort; only a third were able to demonstrate genuine differentiators and approaches that satisfied Amalgam Insights’ requirements for achieving Distinguished Vendor status. To that point, we have already posted profiles on SADA, Spot by NetApp, Apptio Cloudability, Yotascale, Kion, and CAST AI. We next discuss IBM Turbonomic.

WHY IBM TURBONOMIC FOR CLOUD COST AND OPTIMIZATION MANAGEMENT

  • Focus on application performance, which leads to savings
  • Platform configuration is automated, saving IT time and effort during deployment
  • Software learns from organizations’ actions, so recommendations improve over time

ABOUT IBM TURBONOMIC

IBM Turbonomic is an Amalgam Insights Distinguished Vendor for Cloud Cost and Optimization Management. Founded in 2009, Turbonomic was acquired by IBM in 2021. IBM Turbonomic now acts as Big Blue’s solution to ensure application performance and governance across cloud environments, including public and private. Turbonomic has two offices in the United States — its headquarters in Boston and a satellite location in Newark, Delaware — as well as one in the UK and another in Canada. IBM does not publicly disclose how many Turbonomic employees it has, nor does it break out Turbonomic annual revenue or provide customer retention rates.

In terms of cloud spend under management, Turbonomic states that it does not track the amount of money its clients spend on cloud computing. Turbonomic serves Fortune 2000 customers across industries including finance, insurance, and healthcare. Turbonomic is typically considered by organizations that have at least 1,000 cloud instances or virtual machines; many support tens of thousands.

IBM TURBONOMIC’S OFFERING

IBM Turbonomic Application Resource Management targets application performance and governance throughout an organization’s cloud environment, which can include public cloud (Amazon Web Services, Microsoft Azure, Google Cloud), private cloud (IBM, VMware), and multi-cloud environments.

The platform optimizes cloud computing, storage, database as a service, reserved instances, and Kubernetes, but does not currently address spot instances. Furthermore, it optimizes and scales based on IOPS (input/output operations per second), reservations, and discounts. Overall, IBM Turbonomic aims to ensure spend aligns to applications, preventing cost overruns and keeping applications performing optimally. While Turbonomic mainly serves IT users, Turbonomic recently teamed with Flexera to add a detailed cost-reporting module that appeals to Financial Operations (FinOps) experts.

IBM Turbonomic charges for its cloud application optimization software based on the number of resources under management. Rather than offering individual add-on capabilities, IBM Turbonomic lets clients choose more advanced capabilities by buying different licensing tiers associated with integrations to other software and processes such as IT service management, orchestrators, and application performance management. IBM Turbonomic includes technical support with all tiers. IBM Turbonomic and its third-party channel partners offer professional services as needed.

IBM Turbonomic states that its top differentiator originates from artificial intelligence that continuously matches application demand to underlying infrastructure supply at every layer of the stack, in real time, with automatable resourcing decisions. As more organizations use IBM Turbonomic, the automated recommendations provided to all of its customers improve. Cloud administrators gain insight into suggested actions, such as investments to enhance performance and save money.

IBM Turbonomic Application Resource Management is delivered as software-as-a-service. It works across public, private, containerized, and bare metal cloud environments. IBM Turbonomic’s reference customers include Providence Health, which has 120,000 employees; Litehouse Foods, which makes salad dressing, cheese, and other foods; and apparel maker Carhartt.

COMPETITION AND COMPETITIVE POSITIONING

IBM Turbonomic mainly competes against organizations’ in-house spreadsheets and mix of tools that are specific to the technologies in use. In these cases, IBM Turbonomic finds that organizations are over-provisioning cloud computing resources in the hopes of mitigating risk. Therefore, they are spending too much and only addressing application performance when something goes wrong.

IBM Turbonomic also often faces VMware CloudHealth in its prospective deals.

IBM Turbonomic states that it draws customers because of automation and recommendations that tend to result in the following business outcomes:

  • Reduction of public cloud spend by 30%
  • Increase in team productivity by 35%
  • Improvement of application performance by 20%
  • Increase in speed to market by 40%

IBM TURBONOMIC’S PLANS FOR THE FUTURE

IBM Turbonomic keeps its roadmap private, so details about upcoming enhancements are not public. However, Amalgam Insights believes that IBM Turbonomic will pursue improvements in sustainability reporting and GitOps resizing in the near future, and may soon pursue a deeper relationship with Microsoft Azure, given that all three of these areas are of interest to IBM Turbonomic’s current client base.

AMALGAM INSIGHTS RECOMMENDATIONS

Amalgam Insights recommends that organizations with a minimum of 1,000 cloud instances or virtual machines, and residing within the Fortune 2000, consider IBM Turbonomic Application Resource Management.

Because the platform automatically configures during deployment, provides ongoing recommendations for application and cloud-configuration improvement, and continues to learn from users’ actions, organizations can observe how cloud environments are continuously optimized. This allows IT teams to support cloud consumption needs while also ensuring the organization does not overpay or under-resource. In addition, FinOps professionals gain the information they need to track and budget digital transformation efforts without burdening their IT counterparts.

Combined, these capabilities are critical to organizations’ goals of delivering stewardship over their cloud environments while maintaining fiscal responsibility that best serves shareholders, investors, and staff.



Cloud Cost Management Vendor Profile: CAST AI

Cast AI - Amalgam Insights' 2022 Distinguished Vendor for Cloud Cost Management

Managing cloud infrastructure is no easy task, especially when container orchestrators such as Kubernetes come into play. In our ongoing effort to help organizations understand what they need to do to make the most of their cloud environments, Amalgam Insights this year briefed with a number of management and optimization vendors. We continue to publish our findings, which include analyst guidance complete with a series of vendor profiles. This installment focuses on CAST AI, a company that takes a different approach to cloud cost and optimization management by homing in on containers. Read on to learn why that is so important and to understand Amalgam Insights’ resulting recommendations for enterprises.

WHY CAST AI FOR CLOUD COST AND OPTIMIZATION MANAGEMENT

  • Optimizes Kubernetes containers on a continuous basis
  • Company claims to save users an average of 63% on cloud bills
  • Cost reporting and cluster analysis provided as a free service

ABOUT CAST AI

CAST AI is an Amalgam Insights Distinguished Vendor for Cloud Cost and Optimization Management. Founded in 2019, Miami-headquartered CAST AI employs 60 people in Florida and Lithuania. It raised $10 million in Series A funding in fall of 2021, following its $7.7 million seed round in late 2020. CAST AI does not look for a specific customer size; some of its users have fewer than two dozen virtual machines, while others run thousands. The privately held firm does not disclose annual revenue or how much cloud spend it manages.

CAST AI’S OFFERING

CAST AI automates and optimizes Kubernetes environments on Amazon Web Services (AWS) Elastic Kubernetes Service, kOps running on AWS, Microsoft Azure Kubernetes Service, and Google Kubernetes Engine on Google Cloud Platform, as well as Kubernetes clusters running directly on CAST AI.

CAST AI users — who typically are DevOps (Development Operations) experts — may run cost reporting that includes cluster analysis and recommendations. FinOps (Financial Operations) professionals can take the reporting results and incorporate them into their practices.

The CAST AI engine goes beyond cost reporting to rearrange Kubernetes environments for the most effective outcomes. To do this, CAST AI connects to a specified app, then runs a script that installs agents to collect information about the app. After that, a report pops up that can provide recommendations for reducing the number of Kubernetes machines or changing to a different compute platform with less memory, all to cut down on cost.

If a user accepts CAST AI’s recommendations, he or she can click a button to optimize the environment in real time. This button sets off a continuous optimization function to give orders to Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS) to rearrange itself, such as autoscaling in real time and rebalancing clusters. Users set their desired automation and alerting thresholds. CAST AI pings the app every 15 seconds and produces an hourly graph. CAST AI claims its users save an average of 63% on their cloud bills.

Pricing for CAST AI varies. CAST AI does not enforce a minimum spend requirement. Rather, it charges by the number of active, optimized CPUs. That starts at $5 per CPU per month and there are tiered discounts from 1-5,000 CPUs, then 5,001-15,000, and so on. Base subscriptions start at $200 per month and go up to $5,000 per month or more, depending on volume discounts. CAST AI provides cost reporting and cluster analysis for free, with no time limits. Users also can buy cost management as a standalone service.
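The per-CPU pricing above can be made concrete with a small calculator. The tier boundaries and the $5 base rate come from the article; the discount percentages, and the assumption that tiers are graduated (each tranche of CPUs billed at its own tier’s rate), are hypothetical, since CAST AI does not publish those details.

```python
# Tier boundaries from the article; discount rates are HYPOTHETICAL examples.
TIERS = [(5_000, 0.00), (15_000, 0.10), (float("inf"), 0.20)]
BASE_RATE = 5.0  # $5 per active, optimized CPU per month

def monthly_cost(cpus):
    """Graduated tiered pricing: each tranche of CPUs is billed at
    the (assumed) discount for its tier."""
    cost, prior_cap = 0.0, 0
    for cap, discount in TIERS:
        tranche = max(0, min(cpus, cap) - prior_cap)
        cost += tranche * BASE_RATE * (1 - discount)
        prior_cap = cap
        if cpus <= cap:
            break
    return cost
```

Under these assumed rates, 1,000 CPUs would bill at the full $5 each, while a 6,000-CPU estate would get the hypothetical second-tier discount only on the CPUs above 5,000. The base subscription component is not modeled here.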

COMPETITION AND COMPETITIVE POSITIONING

CAST AI competes most frequently against the Ocean platform from Spot by NetApp. For the most part, though, CAST AI “competes” against DevOps professionals trying to reduce cloud costs manually — a difficult and time-consuming effort.

CAST AI finds that it gains customers because of its engine’s ease of use and ability to make changes in real time. This further frees DevOps experts to focus on innovative projects.

CAST AI goes to market via its website and, in Europe, Asia, and the United States, also through third-party partners.

CAST AI’s reference customers include La Fourche, a French online retailer of organic products, and ecommerce consultancy Snow Commerce.

CAST AI’S PLANS FOR THE FUTURE

CAST AI plans to build an air-gapped version of its engine, disconnected from the Internet and fully supported within the customer’s internal environment, for private cloud users in vertical markets including government and banking. Because CAST AI collects metadata to optimize Kubernetes environments, it is building this capability to support more heavily governed industries and organizations.

AMALGAM INSIGHTS’ RECOMMENDATIONS

Amalgam Insights recommends that organizations with Kubernetes containers try CAST AI’s free trial to understand how the platform might help save money and optimize resources. Although Kubernetes has largely won as the container orchestrator of choice in DevOps environments, businesses still have not standardized on strategies to optimize the compute and storage associated with containerized workloads and services. Amalgam Insights believes that Kubernetes optimization should not be a long-term direct responsibility for developers and architects as tools emerge to define the resources that are most appropriate for running containerized applications at any given time.

Organizations worldwide are struggling to control cloud costs, especially as they pursue containerization and cloud refactorization projects associated with digital transformation. Organizations also are cleaning up pandemic-spurred cloud deployments that quickly got out of hand and have proven difficult to keep in line since then. CAST AI’s technology provides an option that DevOps engineers should consider as they seek to tighten and optimize the spend tied to applications containerized in the cloud.

Need More Guidance Now?

Check out Amalgam Insights’ new Vendor SmartList report, Control Your Cloud: Selecting Cloud Cost Management in the Face of Recession, available for purchase. If you want to discuss your Cloud Cost Management challenges, please feel free to schedule time with us.


Cloud Cost Management Vendor Profile: Yotascale

As Amalgam Insights continues to present independent profiles of vendors in the cloud cost management and optimization space, we next highlight a company that takes an engineer-specific approach. This differentiator takes aim at organizations with a certain level of maturity within their cloud environments, as well as a particular spend threshold. Read on to learn more about Yotascale and to glean Amalgam Insights’ recommendations.

WHY YOTASCALE FOR CLOUD COST MANAGEMENT AND OPTIMIZATION

  • Engineer-specific design for cloud cost ownership
  • Consistent view of cloud costs for all users
  • Normalized and automated tagging for cloud resource tracking

ABOUT YOTASCALE

Yotascale is an Amalgam Insights Distinguished Vendor for Cloud Cost and Optimization Management. A relative newcomer to the cloud cost and optimization management space, Yotascale, founded in 2015, states that it manages more than $1 billion in cloud computing spend across infrastructure, platforms, and software. The Palo Alto-based company targets enterprises and mid-market organizations across verticals including media and entertainment, financial services, healthcare, transportation, and real estate.

These users typically spend at least $3 million per year on cloud computing, as spend below that level is often handled through cloud service providers’ native tools. Yotascale has raised $24 million; its most recent round in October 2020 raised $13 million in B series funding. The company currently employs fewer than 50 people and does not disclose its revenue or client retention rate.

YOTASCALE’S OFFERING

Yotascale started to design its cloud cost management and optimization platform with engineers in mind, followed by cloud operations experts and finance professionals. The company did this to help organizations empower engineers to own responsibility for cloud costs. Yotascale’s perspective states that engineers understand the impact of performance changes on expenses, so they are ideally positioned to oversee those adjustments. As such, Yotascale built an interface that relies on fewer modules than some other software vendors in the cloud cost management space. In Yotascale, all users have the same view of cost data presented in their organization’s business context (although depending on their role, individuals can view the data through a customizable lens) to prevent confusion among departments. However, role-based access is supported to ensure users only see the data, alerts, and recommendations that apply to their jobs.

Once configured, the Yotascale software helps normalize tag names across cloud providers and services, and provides automated tagging policies for cloud resources in the organization’s preferred nomenclature. That way, users can see an all-in-one view of their multi-cloud resources as well as containerized workloads across Amazon Web Services (AWS) and Microsoft Azure, as of May 2022. Yotascale has plans to add Google Cloud to its roster, rounding out its coverage of the current market-leading hyperscalers.
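Tag normalization of this sort can be pictured as a key-mapping pass over each resource’s tags. The sketch below is a minimal Python illustration; the canonical map and the tag names in it are hypothetical examples of an organization’s preferred nomenclature, not Yotascale’s actual policy engine.

```python
# HYPOTHETICAL canonical map; real tooling would derive this from
# an organization's tagging policy rather than hard-coding it.
CANONICAL = {
    "team": "Team", "team-name": "Team",
    "costcenter": "CostCenter", "cost_center": "CostCenter",
    "env": "Environment", "environment": "Environment",
}

def normalize_tags(tags):
    """Map provider-specific tag keys onto one organizational
    nomenclature; unknown keys pass through unchanged."""
    normalized = {}
    for key, value in tags.items():
        canonical = CANONICAL.get(key.lower().replace(" ", ""), key)
        normalized[canonical] = value
    return normalized
```

The payoff of a pass like this is that `env` on AWS and `Environment` on Azure roll up to the same line in a multi-cloud cost report instead of appearing as two unrelated buckets.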

Yotascale bases its pricing on a percent of monthly resource hours of services (such as Amazon Elastic Compute Cloud and Relational Database Service), rather than by percent of the total bill. Yotascale offers tiered pricing, typically starting at a cloud usage level of 200,000 hours per month. Standard features and services provided by Yotascale include:

  • AWS/Azure spend under management
  • Inventory of AWS accounts or Azure subscriptions
  • User accounts
  • Billing data processing
  • Cost reduction recommendations
  • Billing data anomaly detection

The base pricing package includes all Yotascale features as well as capabilities to provide insight into cloud carbon footprints so organizations can reduce compute power and support sustainability initiatives.

Prior to launching its application in production, Yotascale works with each customer to create the business context for automated tagging. The process can take as little as two weeks, depending on the end user’s readiness and existing documentation. Installing and onboarding the Yotascale software itself takes less than an hour. Yotascale’s reference customers include Zoom, Hulu, Compass, Lime, Okta, and Klarna. Yotascale sells through its direct sales teams as well as a third-party channel that includes consultants and managed service providers.

COMPETITION AND COMPETITIVE POSITIONING

Yotascale finds that it competes most often against organizations’ internal spreadsheets, as well as first-generation deployments of VMware’s CloudHealth and Apptio Cloudability. Yotascale states that it can reduce cloud computing costs by up to 50% compared to existing cloud cost management efforts. Yotascale states that its customer wins are based on the following: an engineer-specific focus and the ability to assign assets to engineers; its emphasis on tag normalization; its all-in-one views; and data that show how changes will impact performance and cost.

YOTASCALE’S PLANS FOR THE FUTURE

Yotascale next plans to build support for Google Cloud Platform cost management and provide self-service onboarding automation. It also intends to add more integrations as users seek to access existing cost management, billing, and sourcing tools as they consolidate data.

AMALGAM INSIGHTS RECOMMENDATIONS

Amalgam Insights recommends that enterprise and mid-market organizations seeking to empower engineers with cloud cost responsibility and spending a minimum of $3 million per year on cloud computing consider Yotascale. Yotascale is built to support engineers seeking to support accounting requests, tagging automation, and service usage requests for cloud cost management demands that exceed in-house capabilities.

Need More Guidance Now?

Check out Amalgam Insights’ new Vendor SmartList report, Control Your Cloud: Selecting Cloud Cost Management in the Face of Recession, available for purchase. If you want to discuss your Cloud Cost Management challenges, please feel free to schedule time with us.

Cloud Cost Management Vendor Profile: Apptio Cloudability

Doing cloud cost and optimization management well often calls for the help of an external vendor. That’s why Amalgam Insights has been publishing our in-depth series on the challenges associated with running a cloud cost and optimization management practice, as well as reasons to rely on third-party platforms and services for assistance.

With that in mind, Amalgam Insights presents the third of our ten vendor profiles — this one featuring Apptio Cloudability. (As a refresher, the first profile focused on SADA; the second on Spot by NetApp.)

Read on for our analysis, which is part of our new Vendor SmartList report, Control Your Cloud: Selecting Cloud Cost Management in the Face of Recession, available to download after purchase.

THE BOTTOM LINE: WHY APPTIO CLOUDABILITY FOR CLOUD COST AND OPTIMIZATION MANAGEMENT

ABOUT APPTIO CLOUDABILITY

Apptio Cloudability is an Amalgam Insights Distinguished Vendor for Cloud Cost and Optimization Management.

Cloudability was founded in 2011 in Portland, Oregon, to manage cloud billing and usage cost data. It was acquired by Apptio in 2019 to add cloud cost management and optimization to ApptioOne’s capabilities. Bellevue, Washington-based Apptio employs more than 1,200 people in offices in the United States and in London, Sydney, Bangalore, and Krakow. Apptio serves midmarket organizations and enterprises, many of which fall within the Fortune 100.

Across its financial management portfolio, Apptio manages $650 billion in technology budgets; Cloudability managed more than $9 billion in cloud spend in 2019, when it was acquired. The privately held company does not disclose how many customers it currently has, its annual revenue, or other details such as customer retention rates or Net Promoter Score.

APPTIO CLOUDABILITY’S OFFERING

Apptio acquired Cloudability in 2019 to augment its existing technology management capabilities; that strategy also includes its October 2018 acquisition of FittedCloud, for cloud resource optimization, and its October 2020 acquisition of SaaSLicense, for Software as a Service management.

Apptio Cloudability comprises cloud computing management and optimization — and, through the separate TotalCost module, reporting and analytics — for multicloud environments, containers, and software as a service (through the SaaSLicense acquisition). The platform is designed for IT, finance, and business teams seeking to manage cloud costs, although the company states that it now talks more with senior executives — CEOs, CFOs, cloud directors in Cloud Center of Excellence groups, and procurement leaders — during the sales process.

Apptio Cloudability ingests, normalizes, and structures billing and usage data from Amazon Web Services (AWS) and Microsoft Azure. This data is used on an ongoing basis to continuously improve the economics of running cloud environments with financial operations (FinOps) principles in mind. Apptio Cloudability also delivers rightsizing recommendations for an organization’s cloud environment, as well as AWS savings plan recommendations. Apptio Cloudability sees its users spend the most on AWS, followed by Microsoft Azure, then Google Cloud. The platform’s savings plans capabilities show users where they can reduce and optimize cloud costs while also forecasting their spend.

Apptio Cloudability bases its pricing on the cloud spend covered under one- or three-year contracts, and it considers discounts on a case-by-case basis. Apptio Cloudability does not require a minimum number of users or a minimum spend. The standard Cloudability package comes with basic help desk and technical assistance. Add-on options include professional services (e.g., building a FinOps practice), as well as training and certification delivered through the FinOps Foundation. Apptio Cloudability offers optimization and allocation-assistance packages separately; they are priced based on the scope of work required. Finally, the TotalCost module is available as an add-on, with tiered pricing based on annual cloud spend.

Through integrations and mapping, TotalCost covers all the major public cloud providers, as well as Oracle, Alibaba, and IBM, and ancillary cloud vendors including Snowflake and CrowdStrike. Cloudability uses TotalCost as a means for helping organizations better grasp all the cloud platforms and services influencing their total cost of cloud ownership, and charge back expenses as needed.

COMPETITION AND COMPETITIVE POSITIONING

Apptio Cloudability states that it wins business for its focus on FinOps capabilities, as well as its savings plans and rightsizing modules. The latter modules provide additional analytics and machine-learning capabilities for clients, allowing Apptio Cloudability to generate recommendations through proprietary algorithms that can analyze as much as 15 years’ worth of a company’s data.

Apptio Cloudability goes to market both through direct sales and through an emerging indirect channel made up of managed service providers and consultants.

APPTIO CLOUDABILITY’S PLANS FOR THE FUTURE

Apptio Cloudability plans to keep investing in providing more detailed optimization recommendations for discounts, developing integrations with cloud data and hyperscaler vendors to support sourcing workflow, and supporting localization, currency, and data sovereignty updates to make Cloudability available in more geographies.

Amalgam Insights expects that Apptio will also invest in capabilities to support managed service providers with improved white-labeling and integration, and to continue developing container cost optimization capabilities for Kubernetes and Docker-based workloads.

AMALGAM INSIGHTS’ RECOMMENDATIONS

Amalgam Insights recommends that organizations interested in a FinOps focus, particularly those running multiple clouds, vet Apptio Cloudability. Apptio should be considered by organizations seeking to control costs and budget resources and, given Apptio’s history in supporting FinOps as a formal practice, by organizations with formal FinOps training or experience. Amalgam Insights also recommends that organizations seeking to give non-IT executives involved in cloud decision-making and tracking (such as finance, accounting, and procurement) additional cloud cost visibility evaluate Apptio.
