
Looking at Microservices, Containers, and Kubernetes with 2020 Vision

Some years are easier to predict than others. Stability in a market makes tracing the trend line much easier. 2020 looks to be that kind of year for the migration to microservices: stable, with steady progression toward mainstream acceptance.

There is little doubt that IT organizations are moving toward microservices architectures. Microservices, which deconstruct applications into many small parts, remove much of the friction common in n-Tier applications when it comes to development velocity. The added resiliency and scalability of microservices in a distributed system are also highly desirable. These attributes promote better business agility, allowing IT to respond to business needs more quickly and with less disruption while helping to ensure that customers have the best experience possible.

Little in this upcoming year seems disruptive or radical; the big changes have already occurred. Instead, this is a year for building out and consolidating, moving past the “what” and “why” and into the “how” and “do”.

Kubernetes will be top of mind for IT in the coming year. From its roots as a humble container orchestrator – one of many in the market – Kubernetes has evolved into a platform for deploying microservices into container clusters. There is more work to do with Kubernetes, especially around cluster autoscaling, but it is now a solid base on which to build modern applications.
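For readers newer to the mechanics, here is a minimal sketch of the application-level (pod) autoscaling Kubernetes already provides out of the box, using the official Kubernetes Python client. The `orders` Deployment name, namespace, and thresholds are illustrative assumptions, not a recommendation; cluster-level autoscaling, the area flagged above as still needing work, sits a layer below this.

```python
# Minimal sketch (illustrative names and thresholds): attach a
# HorizontalPodAutoscaler to a hypothetical "orders" Deployment so
# Kubernetes scales its pods between 2 and 10 replicas on CPU load.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="orders-hpa"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="orders"
        ),
        min_replicas=2,
        max_replicas=10,
        target_cpu_utilization_percentage=70,
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```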

No one should delude themselves into thinking that microservices, containers, and Kubernetes are mainstream yet. The vast majority of applications are still based on n-Tier designs deployed to VMs. That’s fine for a lot of applications, but businesses know that it’s not enough going forward. We’ve already seen more traditional companies begin to adopt microservices for at least some portion of their applications. This trend will accelerate in the upcoming year. At some point, microservices and containers will become the default architecture for enterprise applications. That’s a few years from now, but we’re already on the path.

From a vendor perspective, all the biggest companies are now in the Kubernetes market with at least a plain vanilla Kubernetes offering. This includes HPE and Cisco in addition to the companies that have been selling Kubernetes all along, especially IBM/Red Hat, Canonical, Google, AWS, VMware/Pivotal, and Microsoft. The trick for these companies will be to add enough unique value that their offerings don’t appear generic. Leveraging traditional strengths, such as storage for HPE, networking for Cisco, and Java for Red Hat and VMware/Pivotal, is the key to standing out in the market.

The entry of the giants into the Kubernetes space will pose challenges for smaller vendors such as Mirantis and Rancher. With more than 30 Kubernetes vendors in the market, consolidation and attrition are inevitable. There’s plenty of value in the smaller firms, but it will be too easy for them to get trampled underfoot.

Expect M&A activity in the Kubernetes space as bigger companies acquihire or round out their portfolios. Kubernetes is now a big-vendor market, and the market dynamics favor the giants.

If there is a big danger sign on the horizon, it’s those traditional n-Tier applications that are still in production. At some point, IT will get around to thinking beyond the shiny new greenfield applications and want to migrate the older ones. Since these apps are based on radically different architectures, that won’t be easy. There just aren’t the tools to do this migration well. In short, it’s going to be a lot of work. It’s a hard sell to say that the only choices are either expensive migration projects (on top of all that digital transformation money that’s already been spent) or continuing to support and update applications that no longer meet business needs. Replatforming, or deploying the old parts to the new container platform, will provide less ROI and less value overall. The industry will need another solution.

This may be an opportunity to use all that fancy AI technology that vendors have been investing in to create software to break down an old app into a container cluster. In any event, the migration issue will be a drag on the market in 2020 as IT waits for solutions to a nearly intractable problem.

2020 is the year of the microservice architecture.

Even if that seems too dramatic, it’s not unreasonable to expect significant growth and acceleration in the deployment of Kubernetes-based microservices applications. The market has already begun the process of maturation as it adapts to the needs of larger, mainstream corporations with more stringent requirements. The smart move is to follow that trend line.


Tom Petrocelli to Appear on DM Radio to Discuss Containers and Hybrid Cloud

On January 24, 2019 at 3 PM Eastern, Amalgam Insights’ DevOps and Open Source Research Fellow, Tom Petrocelli, will be sharing his perspectives on the importance of containers in multi-cloud management on the DM Radio episode “Contain Yourself? The Key to Hybrid Cloud.”

This episode will be hosted by Eric Kavanagh, CEO of The Bloor Group, and Petrocelli will be accompanied by Samuel Holcman of the Pinnacle Business Group and Pakshi Rajan of Paxata.

Don’t miss this opportunity to get Tom Petrocelli’s guidance and wisdom on the current state of containers and cloud management!


The View from KubeCon+CloudNativeCon – Containers and Kubernetes Become Enterprise Ready

In case there was any doubt about the direction containers and Kubernetes are going, KubeCon+CloudNativeCon 2018 in Seattle should have dispelled it. The path is clear – the technology is maturing and keeps adding features that make it conducive to mission-critical, enterprise applications. From the very first day, the talk was about service meshes and network functions, logging and traceability, and storage and serverless compute. These are the couplets that define the next generation of management, visibility, and core capabilities of a modern distributed application. On top of that are emerging security projects such as SPIFFE & SPIRE, TUF, Falco, and Notary. Management, visibility, growth in core functionality, and security: all of these are critical to making container platforms enterprise ready.

“The future of containers and Kubernetes as the base of the new stack was on display at KubeCon+CloudNativeCon and it’s a bright one.”

– Tom Petrocelli, Research Fellow, Amalgam Insights

If the scope of KubeCon+CloudNativeCon and the Cloud Native Computing Foundation (CNCF) is any indication, the ecosystem is also growing. This year there were 8,000 people at the conference – a sellout. The CNCF has grown to 300+ vendor members, and there are 46,000 contributors to its projects. That’s a lot of growth compared to just a few years ago. This many people don’t flock to sinking projects.


Containers Continue on Track for 2019: 3 Key Trends For the Maturing Container Ecosystem

Tom Petrocelli, Amalgam Insights Research Fellow

The past few years have been exciting ones for containers. All types of tools are available and a defined deployment pipeline has begun to emerge. Kubernetes and Docker have come to dominate the core technology. That, in turn, has brought the type of stability that allows for wide-scale deployments. The container ecosystem has exploded with lots of new software components that help maintain, manage, and operate container networks. Capabilities such as logging, load balancing, networking, and security that were previously the domain of system-wide software and appliances are now being brought into the individual application as components in the container cluster.

Open Source has played a big part in this process. The Cloud Native Computing Foundation, or CNCF, has projects for all things container. More are added every day. That is in addition to the many other open source projects that support container architectures. The ecosystem just keeps growing.

Where do we go from here, at least through 2019?


Monitoring Containers: What’s Inside YOUR Cluster?

Tom Petrocelli, Amalgam Insights Research Fellow

It’s not news that there is a lot of buzz around containers. As companies begin to widely deploy microservices architectures, containers are the obvious choice with which to implement them. As companies deploy container clusters into production, however, an issue has to be dealt with immediately: container architectures have a lot of moving parts. The whole point of microservices is to break apart monolithic components into smaller services. This means that what was once a big process running on a resource-rich server is now multiple processes spread across one or many servers. On top of the architecture change, a container cluster usually encompasses a variety of containers that are not application code. These include security, load balancing, network management, web servers, etc. Entire frameworks, such as NGINX Unit 1.0, may be deployed as infrastructure for the cluster. Services that used to be centralized in a network are now incorporated into the application itself as part of the container network.

Because an “application” is now really a collection of smaller services running in a virtual network, there’s a lot more that can go wrong. The more containers, the more opportunities for misbehaving components. For example:

  • Network issues. No matter how the network is actually implemented, there are opportunities for typical network problems to emerge, including deadlocked communication and slow connections. Instead of these being part of monolithic network appliances, they are distributed throughout a number of local container clusters.
  • Apps that are slow and make everything else slower. Poor performance of a critical component in the cluster can drag down overall performance. With microservices, the entire app can be waiting on a service that is not responding quickly.
  • Containers that are dying and respawning. A container can crash, which may cause an orchestrator such as Kubernetes to respawn it. A badly behaving container may do this multiple times.

These are just a few examples of the types of problems that a container cluster can have that negatively affect a production system. None of these are new to applications in general. Applications and services can fail, lock up, or slow down in other architectures; there are just a lot more parts in a container cluster, creating more opportunities for problems to occur. In addition, typical application monitoring tools aren’t necessarily designed for container clusters. There are events that traditional application monitoring will miss, especially issues with containers and Kubernetes themselves.

To combat these issues, a generation of products and open source projects is emerging that are retrofitted or purpose-built for container clusters. In some cases, app monitoring has been extended to include containers (New Relic comes to mind). New companies, such as LightStep, have also entered the market for application monitoring, but with containers in mind from the outset. Just as exciting are the open source projects that are gaining steam. Prometheus (application monitoring), OpenTracing (network tracing), and Jaeger (transaction tracing) are some of the open source projects that help gather data about the functioning of a cluster.

What makes these projects and products interesting is that they place monitoring components in the clusters, close to the applications’ components, and take advantage of container and Kubernetes APIs. This placement helps sysops have a more complete view of all the parts and interactions of the container cluster. Information that is unique to containers and Kubernetes is available alongside traditional application and network monitoring data.
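As a concrete illustration of that pattern, here is a minimal sketch, assuming a Python service, that instruments the application itself with the Prometheus client library and exposes a /metrics endpoint for a scraper running in the same cluster. The metric names and the simulated order-processing work are illustrative assumptions.

```python
# Minimal sketch: expose application metrics that a Prometheus server
# running in the cluster can scrape. Metric names and the simulated
# work are illustrative assumptions.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

ORDERS = Counter("orders_processed_total", "Orders handled by this service instance")
LATENCY = Histogram("order_processing_seconds", "Time spent processing one order")

@LATENCY.time()
def process_order():
    time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work
    ORDERS.inc()

if __name__ == "__main__":
    start_http_server(8000)  # serves /metrics on port 8000 for the scraper
    while True:
        process_order()
```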

As IT departments start to roll scalable container clusters into production, knowing what is happening within them is essential. Thankfully, the ecosystem for monitoring is evolving quickly, driven equally by companies and open source communities.


Market Milestone: Red Hat Acquires CoreOS Changing the Container Landscape


We have just published a new document from Tom Petrocelli analyzing Red Hat’s $250 million acquisition of CoreOS and why it matters for DevOps and Systems Architecture managers.

This report is recommended for CIOs, System Architects, IT Managers, System Administrators, and Operations Managers who are evaluating CoreOS and Red Hat as container solutions to support their private and hybrid cloud solutions. In this document, Tom provides both the upside and concerns that your organization needs to consider in evaluating CoreOS.

This document includes:
  • A summary of Red Hat’s acquisition of CoreOS
  • Why it matters
  • Top takeaways
  • Contextualizing CoreOS within Red Hat’s private and hybrid cloud portfolio
  • Alternatives to Red Hat CoreOS
  • Positive and negative aspects for current Red Hat and CoreOS customers

To download this report, please go to our Research section.


Navigating The Road to Retail Analytic Success

Analytics in the Retail and Consumer Packaged Goods (CPG) markets is more complex than the average corporate data ecosystem because of the variety of analytic approaches needed to support these organizations. Every business has operational management capabilities for core human resources and financial management, but retail adds the complexities of hybrid workforce management, scheduling, and operational analytics as well as the front-end data associated with consumer marketing, e-commerce, and transactional behavior across every channel.

In contrast, when retail organizations look at middle-office and front-office analytics, they are trying to support a variety of timeframes ranging from intraday decisions associated with staffing and customer foot traffic to the year-long cycles that may be necessary to fulfill large wholesale orders for highly coveted goods in the consumer market. Over the past three years, operational consistency has become especially challenging to achieve as COVID, labor skill gaps, logistical bottlenecks, commodity shortages, and geopolitical battles have all made supply chain a massive dynamic risk factor that must be consistently monitored across both macro and microeconomic business aspects.

The lack of alignment and connection between the front office, middle-office, and administrative analytic outputs can potentially lead to three separate silos of activity in the retail world, connected only by some basic metrics, such as receipts and inventory turnover, that are interpreted in three different ways. Like the parable of the blind men and an elephant where each person feels one part of the elephant and imagines a different creature, the disparate parts of retail organizations must figure out how to come together, as the average net margin for general retail companies is about 2% and that margin only gets lower for groceries and for online stores.

Analytic opportunities to increase business value exist across the balance sheet and income statement. Even though consumer sentiment, foot traffic, and online behavior are still key drivers for retail success, analytic and data-driven guidance can provide value across infrastructure, risk, and real-time operations. Amalgam Insights suggests that each of these areas requires a core analytic focus that is different and reflects the nature of the data, the decisions being made, and the stakeholders involved.

Facing Core Retail Business Challenges

First, retail and CPG organizations face core infrastructure, logistics, and data management challenges that typically require building out historic analysis and quantitative visibility capabilities often associated with what is called descriptive or historical analytics. When looking at infrastructure factors such as real estate, warehousing, and order fulfillment issues, organizations must have access to past trends, costs, transactions, and the breadth of relevant variables that go into real estate costs or complex order fulfillment associated with tracking perfect order index.

This pool of data ideally combines public data, industry data, and operational business data that includes, but is not limited to, sales trends, receipts, purchase orders, employee data, loyalty information, customer information, coupon redemption, and other relevant transactional data. This set of data needs to be available as analytic and queryable data that is accessible to all relevant stakeholders to provide business value. In practice, this accessibility typically requires some infrastructure investment either by a company or a technology vendor willing to bear the cost of maintaining a governed and industry-compliant analytic data store. By doing so, retail organizations have the opportunity to improve personalization and promotional optimization.

A second challenge that retail analytics can help with is associated with the risk and compliance issues that retail and CPG organizations face, including organized theft, supplier risk, and balancing risk and reward tradeoffs. A 2022 National Retail Federation (NRF) survey showed that organized retail crime had increased over 26% year over year, driving the need to identify and counter organized theft efforts and tactics more quickly. Retailer risk for specific goods and brands also needs to be quantified to identify potential delays and challenges or to determine whether direct store delivery and other direct-to-market tactics may end up being a profitable approach for key SKUs. Risk also matters from a profitability analysis perspective as retail organizations seek to make tradeoffs between the low-margin nature of retail business and the consumer demand for availability, personalization, automation, brand expansion, and alternative channel delivery that may provide exponential benefits to profits. From a practical perspective, this risk analysis requires investment in a combination of predictive analytics and the ability to translate the variance and marginal cost associated with new investments into projected returns.

A third challenge for retail analytics is to support real-time operational decisions. This use case requires access to streaming and log data associated with massive volumes of rapid transactions, frequently updated time-series data, and contextualized scenarios based on multi-data-sourced outcomes. From a retail outcome perspective, the practical challenge is to make near-real-time decisions, such as same-day or in-shift decisions to support stocking, scheduling, product orders, pricing and discounting decisions, placement decisions, and promotion. In addition, these decisions must be made in the context of broader strategic and operational concerns, such as brand promise, environmental concerns, social issues, and regulatory governance and compliance associated with environmental, social, and governance (ESG) concerns.

As an example, supply chain shortages often come from unexpected sources. One geopolitical example occurred in the United States when the government’s use of shipping containers as a temporary barrier to block illegal immigration at checkpoints on the US-Mexico border led to container shortages at US ports awaiting delivery. This delay in accessing containers was not predictable based solely on standard retail metrics and behavior, and it demonstrates how unexpected political issues can affect a hyperconnected logistical supply chain.

Recommendations for Upgrading Retail Analytics in the 2020s

To solve these analytic problems, retail and CPG organizations need to allow line-of-business, logistics, and sourcing managers to act quickly with self-service and on-demand insights based on all relevant data. This ultimately means that, to take an analytic approach to retail, Amalgam Insights recommends the following three best practices for creating a more data-driven business environment.

  • Create and implement an integrated finance, operational, and workforce management environment. Finance, inventory, and labor must be managed together in an integrated business data store and business planning environment or the retail organization falls apart. Whether companies choose to do this by knitting together multiple applications with data management and integration tools or by choosing a single best-in-breed suite, retail businesses have too many moving parts to split up core operational data across a variety of functional silos and business roles that do not work together. In the 2020s, this is a massive operational disadvantage.
  • Adopt prescriptive analytics, decision intelligence, and machine learning capabilities above and beyond basic dashboards. When retail organizations look at analytics and data outputs, it is not enough to gain historical visibility. In today’s AI-enabled world, companies must have predictive analytics and statistical analysis, detect anomalies quickly, and be able to translate business data into machine learning and language models for the next generation of analytics and decision intelligence. Retail can be more proactive and prescriptive with AI and ML models trained on enterprise data to support more personalized and contextualized purchasing experiences (a minimal anomaly-detection sketch follows this list).
  • Implement real-time alerts with relevant and actionable retail information. Finally, timely and contextual alerts are also now part of the analytic process. As retail organizations have moved from seasonal purchases and monthly budgeting to daily or even hourly decisions, regional and branch managers need to be able to move quickly if there are signs of business danger: coordinated revenue leakage, brand damage across any of the products held within the store, unexpected weather phenomena, labor issues, or other incipient macro- or microeconomic threats.
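As a minimal sketch of the anomaly detection mentioned above, the example below uses scikit-learn's IsolationForest to flag unusual store-days in synthetic daily metrics. The metrics chosen, their values, and the contamination rate are illustrative assumptions, not a recommended production model.

```python
# Minimal sketch: flag anomalous store-days with an Isolation Forest.
# The daily metrics, their values, and the contamination rate are
# illustrative assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)
# Columns: [net sales ($), transaction count, refund total ($)]
normal_days = rng.normal(loc=[50_000, 1_200, 900],
                         scale=[4_000, 90, 120], size=(120, 3))
odd_days = np.array([
    [48_500, 1_150, 6_500],   # refund spike: possible coordinated leakage
    [21_000, 1_180, 800],     # sales collapse despite normal traffic
])
daily_metrics = np.vstack([normal_days, odd_days])

model = IsolationForest(contamination=0.02, random_state=0)
labels = model.fit_predict(daily_metrics)  # -1 marks a flagged day

print("Days flagged for review:", np.where(labels == -1)[0])
```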


Cloud Cost Management Vendor Profile: IBM Turbonomic

Amalgam Insights continues to present its list of Distinguished Vendors for Cloud Cost and Optimization Management. This matters because analysts assessed nearly 30 providers for this effort; only a third were able to demonstrate genuine differentiators and approaches that satisfied Amalgam Insights’ requirements for achieving Distinguished Vendor status. To that point, we have already posted profiles on SADA, Spot by NetApp, Apptio Cloudability, Yotascale, Kion, and CAST AI. We next discuss IBM Turbonomic.

WHY IBM TURBONOMIC FOR CLOUD COST AND OPTIMIZATION MANAGEMENT

  • Focus on application performance, which leads to savings
  • Platform configuration is automated, saving IT time and effort during deployment
  • Software learns from organizations’ actions, so recommendations improve over time

ABOUT IBM TURBONOMIC

IBM Turbonomic is an Amalgam Insights Distinguished Vendor for Cloud Cost and Optimization Management. Founded in 2009, Turbonomic was acquired by IBM in 2021. IBM Turbonomic now acts as Big Blue’s solution to ensure application performance and governance across cloud environments, including public and private. Turbonomic has two offices in the United States — its headquarters in Boston and a satellite location in Newark, Delaware — as well as one in the UK and another in Canada. IBM does not publicly disclose how many Turbonomic employees it has, nor does it break out Turbonomic annual revenue or provide customer retention rates.

In terms of cloud spend under management, Turbonomic states that it does not track the amount of money its clients spend on cloud computing. Turbonomic serves Fortune 2000 customers across industries including finance, insurance, and healthcare. Turbonomic is typically considered by organizations that have at least 1,000 cloud instances or virtual machines; many support tens of thousands.

IBM TURBONOMIC’S OFFERING

IBM Turbonomic Application Resource Management targets application performance and governance throughout an organization’s cloud environment, which can include public cloud (Amazon Web Services, Microsoft Azure, Google Cloud), private cloud (IBM, VMware), and multi-cloud environments.

The platform optimizes cloud computing, storage, database as a service, reserved instances, and Kubernetes, but does not currently address spot instances. Furthermore, it optimizes and scales based on IOPS (input/output operations per second), reservations, and discounts. Overall, IBM Turbonomic aims to ensure spend aligns to applications, preventing cost overruns and keeping applications performing optimally. While Turbonomic mainly serves IT users, Turbonomic recently teamed with Flexera to add a detailed cost-reporting module that appeals to Financial Operations (FinOps) experts.

IBM Turbonomic charges for its cloud application optimization software based on the number of resources under management. Rather than offering individual add-on capabilities, IBM Turbonomic lets clients choose more advanced capabilities by buying different licensing tiers associated with integrations to other software and processes such as IT service management, orchestrators, and application performance management. IBM Turbonomic includes technical support with all tiers. IBM Turbonomic and its third-party channel partners offer professional services as needed.

IBM Turbonomic states that its top differentiator originates from artificial intelligence that matches application demand to underlying infrastructure supply at every layer of the stack continuously in real-time with automatable resourcing decisions. As more organizations use IBM Turbonomic, the automated recommendations provided to all of its customers improve. Cloud administrators gain insight into suggested actions, such as investments to enhance performance and save money.

IBM Turbonomic Application Resource Management is delivered as software-as-a-service. It works across public, private, containerized, and bare metal cloud environments. IBM Turbonomic’s reference customers include Providence Health, which has 120,000 employees; Litehouse Foods, which makes salad dressing, cheese, and other foods; and apparel maker Carhartt.

COMPETITION AND COMPETITIVE POSITIONING

IBM Turbonomic mainly competes against organizations’ in-house spreadsheets and a mix of tools specific to the technologies in use. In these cases, IBM Turbonomic finds that organizations are over-provisioning cloud computing resources in the hopes of mitigating risk. Therefore, they are spending too much and only addressing application performance when something goes wrong.

IBM Turbonomic also often faces VMware CloudHealth in its prospective deals.

IBM Turbonomic states that it draws customers because of automation and recommendations that tend to result in the following business outcomes:

  • Reduction of public cloud spend by 30%
  • Increase in team productivity by 35%
  • Improvement of application performance by 20%
  • Increase in speed to market by 40%

IBM TURBONOMIC’S PLANS FOR THE FUTURE

IBM Turbonomic keeps its roadmap private, so details about upcoming enhancements are not public. However, Amalgam Insights believes that IBM Turbonomic will pursue improvements in sustainability reporting and GitOps resizing in the near future, and may soon pursue a deeper relationship with Microsoft Azure, given that all three of these areas are of interest to IBM Turbonomic’s current client base.

AMALGAM INSIGHTS RECOMMENDATIONS

Amalgam Insights recommends that organizations with a minimum of 1,000 cloud instances or virtual machines, and residing within the Fortune 2000, consider IBM Turbonomic Application Resource Management.

Because the platform automatically configures during deployment, provides ongoing recommendations for application and cloud-configuration improvement, and continues to learn from users’ actions, organizations can observe how cloud environments are continuously optimized. This allows IT teams to support cloud consumption needs while also ensuring the organization does not overpay or underresource. In addition, FinOps professionals gain the information they need to track and budget digital transformation efforts without burdening their IT counterparts.

Combined, these capabilities are critical to organizations’ goals of delivering stewardship over their cloud environments while maintaining fiscal responsibility that best serves shareholders, investors, and staff.



Cloud Cost Management Vendor Profile: CAST AI


Managing cloud infrastructure is no easy task, especially when container orchestration platforms such as Kubernetes come into play. In our ongoing effort to help organizations understand what they need to do to make the most of their cloud environments, Amalgam Insights this year briefed with a number of management and optimization vendors. We continue to publish our findings, which include analyst guidance complete with a series of vendor profiles. This installment focuses on CAST AI, a company that takes a different approach to cloud cost and optimization management by homing in on containers. Read on to learn why that is so important and to understand Amalgam Insights’ resulting recommendations for enterprises.

WHY CAST AI FOR CLOUD COST AND OPTIMIZATION MANAGEMENT

  • Optimizes Kubernetes containers on a continuous basis
  • Company claims to save users an average of 63% on cloud bills
  • Cost reporting and cluster analysis provided as a free service

ABOUT CAST AI

CAST AI is an Amalgam Insights Distinguished Vendor for Cloud Cost and Optimization Management. Founded in 2019, Miami-headquartered CAST AI employs 60 people in Florida and Lithuania. It raised $10 million in Series A funding in fall of 2021, following its $7.7 million seed round in late 2020. CAST AI does not look for a specific customer size; some of its users have fewer than two dozen virtual machines, while others run thousands. The privately held firm does not disclose annual revenue or how much cloud spend it manages.

CAST AI’S OFFERING

CAST AI automates and optimizes Kubernetes environments on Amazon Web Services (AWS) Elastic Kubernetes Service, kOps running on AWS, Microsoft Azure Kubernetes Service, and Google Cloud Platform Google Kubernetes Service as well as Kubernetes clusters running directly on CAST AI.

CAST AI users — who typically are DevOps (Development Operations) experts — may run cost reporting that includes cluster analysis and recommendations. FinOps (Financial Operations) professionals can take the reporting results and incorporate them into their practices.

The CAST AI engine goes beyond cost reporting to rearrange Kubernetes environments for the most effective outcomes. To do this, CAST AI connects to a specified app, then runs a script that installs agents to collect information about the app. After that, a report pops up that can provide recommendations for reducing the number of Kubernetes machines or changing to a different compute platform with less memory, all to cut down on cost.

If a user accepts CAST AI’s recommendations, he or she can click a button to optimize the environment in real time. This button sets off a continuous optimization function to give orders to Amazon Elastic Kubernetes Service (EKS), Google Kubernetes Engine (GKE), or Azure Kubernetes Service (AKS) to rearrange itself, such as autoscaling in real time and rebalancing clusters. Users set their desired automation and alerting thresholds. CAST AI pings the app every 15 seconds and produces an hourly graph. CAST AI claims its users save an average of 63% on their cloud bills.

Pricing for CAST AI varies. CAST AI does not enforce a minimum spend requirement. Rather, it charges by the number of active, optimized CPUs. That starts at $5 per CPU per month and there are tiered discounts from 1-5,000 CPUs, then 5,001-15,000, and so on. Base subscriptions start at $200 per month and go up to $5,000 per month or more, depending on volume discounts. CAST AI provides cost reporting and cluster analysis for free, with no time limits. Users also can buy cost management as a standalone service.

COMPETITION AND COMPETITIVE POSITIONING

CAST AI competes most frequently against the Ocean platform from Spot by NetApp in competitive deals. For the most part, though, CAST AI “competes” against DevOps professionals trying to reduce cloud costs manually — a difficult and time-consuming effort.

CAST AI finds that it gains customers because of its engine’s ease of use and ability to make changes in real-time. This further frees DevOps experts to focus on innovative projects.

CAST AI goes to market via its website and, in Europe, Asia, and the United States, also through third-party partners.

CAST AI’s reference customers include La Fourche, a French online retailer of organic products, and ecommerce consultancy Snow Commerce.

CAST AI’S PLANS FOR THE FUTURE

CAST AI plans to build an air-gapped version of its engine disconnected from the Internet and fully supported within the customer’s internal environment for private cloud users in vertical markets including government and banking. Because CAST AI collects metadata to optimize Kubernetes environments, CAST AI is working on this capability to support more governed industries and organizations.

AMALGAM INSIGHTS’ RECOMMENDATIONS

Amalgam Insights recommends that organizations with Kubernetes containers try CAST AI’s free trial to understand how the platform might help save money and optimize resources. Although Kubernetes has largely won as the container orchestration platform of choice in DevOps environments, businesses still have not standardized on strategies to optimize the compute and storage associated with containerized workloads and services. Amalgam Insights believes that Kubernetes optimization should not be a long-term direct responsibility for developers and architects as tools emerge to define the resources that are most appropriate for running containerized applications at any given time.

Organizations worldwide are struggling to control cloud costs, especially as they pursue containerization and cloud refactorization projects associated with digital transformation. Organizations also are cleaning up pandemic-spurred cloud deployments that quickly got out of hand and have proven difficult to keep in line since then. CAST AI’s technology provides an option that DevOps engineers should consider as they seek to tighten and optimize the spend tied to applications containerized in the cloud.

Need More Guidance Now?

Check out Amalgam Insights’ new Vendor SmartList report, Control Your Cloud: Selecting Cloud Cost Management in the Face of Recession, available for purchase. If you want to discuss your Cloud Cost Management challenges, please feel free to schedule time with us.