
The Need to Manage Kubernetes Cost in the Age of COVID

Kubernetes has evolved from an interesting project to a core aspect of the next generation of software platforms. As a result, enterprise IT departments need to manage the costs associated with Kubernetes to be responsible financial stewards. This pressure to be financially responsible is exacerbated by the COVID-driven recession that the majority of the world is currently experiencing.

Past recessions have shown that the companies best suited to increase sales after a recession are those that

  • Minimize financial and technical debt
  • Invest in IT
  • Support decentralized and distributed decision-making, and
  • Avoid permanent layoffs.

Although Kubernetes is a technology and not a human resources management capability, it does help support increased business flexibility. Kubernetes cost management is an important exercise to ensure that the business flexibility created by Kubernetes is handled in a financially responsible manner. Technology investments must support multiple business goals: optimizing current business initiatives, supporting new business initiatives, and allowing new business initiatives to scale. Without a financial management component, technology deployments cannot be fully aligned to business goals.

From a cost perspective, Amalgam Insights believes that the IT Rule of 30 also applies to Kubernetes.

The IT Rule of 30 states that any unmanaged IT spend category averages 30% in wasted spend, due to a combination of resource optimization, demand-based scaling, time-based resource scaling, and pricing optimization opportunities that technical buyers often miss.

IT engineers and developers are not typically hired for their sourcing, procurement, and finance skills, so it is understandable that their focus is not going to be on cost optimization. However, as Kubernetes-related technology deployments start exceeding $300,000, companies start to have 6-figure dollar savings opportunities just from optimizing Kubernetes and related cloud and hardware resources used to support containerized workloads.
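
As a purely illustrative sketch of that arithmetic (the spend levels below are hypothetical examples, not benchmarks from this research), the Rule of 30 math looks like this:

```typescript
// Illustrative only: applying the IT Rule of 30 to hypothetical Kubernetes budgets.
// The 30% figure is the rule of thumb cited above; the spend levels are made up.
const UNMANAGED_WASTE_RATE = 0.3;

const annualKubernetesSpend = [300_000, 500_000, 1_000_000];

for (const spend of annualKubernetesSpend) {
  const potentialSavings = spend * UNMANAGED_WASTE_RATE;
  console.log(
    `Annual spend of $${spend.toLocaleString()} implies roughly $${potentialSavings.toLocaleString()} in potential savings`
  );
}
```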

To learn more about Kubernetes Cost Management and key vendors to consider, read our groundbreaking report on the top Kubernetes Cost Management vendors.


Informatica Supports the Data-Enabled Customer Experience with Customer 360

On January 11, 2021, Informatica announced its Customer 360 SaaS solution for customer Master Data Management. Built on the Informatica Intelligent Cloud Services platform, Informatica’s Customer 360 solution provides data integration, data governance, data quality, reference data management, process orchestration, and master data management as an integrated SaaS (Software as a Service)-based solution.

It can be easy to take the marketing aspects of master data management solutions for granted, as every solution on the market focused on customer data seems to claim that it helps to manage relationships, provide personalization, and support data privacy and compliance while delivering a single version of the truth. The similarity of these marketing positions creates confusion about how new offerings in the master data space differ. And the idea of providing a “single version of the truth” is becoming less relevant in an era where data grows and changes faster than ever before: the need for relevant data based on a shared version of the truth now matters more than having one monolithic and complete version of the truth documented and reified in a single master data repository.

Customer master data also presents challenges to companies, as this data has to be considered in the context of the expanded role that data now plays in defining the customer. In the Era of COVID, customer interactions and relationships have largely been driven by remote, online transactions as in-person shopping has been sharply curtailed by a combination of health concerns, regulations, and, in some cases, supply chain interruptions that have affected the availability of commodities and finished goods. In this context, customer data management plays an increasingly important role in driving customer relationships, not only by providing personalization but also by managing the metadata needed to build appropriate hierarchies and relationships. Both now and as we move past the current time of Coronavirus, companies must support the data-enabled customer and reduce barriers to commerce and purchasing.

In exploring the Informatica Customer 360 solution, Amalgam Insights found several compelling aspects that enterprise organizations should consider as they build out their customer master data and seek a solution to maintain data consistency across all applications.

First, Amalgam Insights considers the combination of data management, metadata management, and data cleansing capabilities in Customer 360 to be an important differentiator. Customer data is notorious for becoming dirty and inaccurate because it is tied to the changing characteristics of human lives: home addresses, email addresses, phone numbers, purchase activities, preferences, and more.

In this context, a master data solution focused on customer data must support clean and relevant data along with the business context provided by reference data and other relevant metadata. Rather than treating master data as a static and standalone record, Customer 360 brings together the context and cleansing needed to maximize both the value and the accuracy of master data.

Second, Customer 360’s use of artificial intelligence and machine learning will help businesses to maintain an accurate and shared version of the truth. AI is used in this solution to

  • assist with data matching across data sets
  • provide “smart” field governance to auto-correct data in master data fields with defined formats such as zip codes or country abbreviations
  • use Random Forests to support active learning for blocking and matching (see the generic sketch after this list)
  • support deep learning techniques for text matching and cleansing text that may be difficult to parse
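
As referenced in the list above, here is a generic sketch of the blocking-and-matching technique. This is not Informatica's implementation or API: the record fields, the zip-code blocking key, and the token-overlap similarity score are deliberately simplified stand-ins for the Random Forest and deep learning models that a production system would use.

```typescript
// Generic illustration of record blocking and matching (not Customer 360's code).
interface CustomerRecord {
  id: string;
  name: string;
  zip: string;
}

// Blocking: group records by a cheap key so only records within a block are compared.
function blockByZip(records: CustomerRecord[]): Map<string, CustomerRecord[]> {
  const blocks = new Map<string, CustomerRecord[]>();
  for (const r of records) {
    const key = r.zip.slice(0, 5);
    const bucket = blocks.get(key) ?? [];
    bucket.push(r);
    blocks.set(key, bucket);
  }
  return blocks;
}

// Matching: a toy token-overlap score stands in for a learned matching model.
function nameSimilarity(a: string, b: string): number {
  const ta = new Set(a.toLowerCase().split(/\s+/));
  const tb = new Set(b.toLowerCase().split(/\s+/));
  const shared = [...ta].filter((t) => tb.has(t)).length;
  return shared / Math.max(ta.size, tb.size);
}

function findLikelyDuplicates(
  records: CustomerRecord[],
  threshold = 0.5
): Array<[string, string]> {
  const pairs: Array<[string, string]> = [];
  for (const block of blockByZip(records).values()) {
    for (let i = 0; i < block.length; i++) {
      for (let j = i + 1; j < block.length; j++) {
        if (nameSimilarity(block[i].name, block[j].name) >= threshold) {
          pairs.push([block[i].id, block[j].id]);
        }
      }
    }
  }
  return pairs;
}

// Example: two records in the same zip block with overlapping names are flagged.
console.log(
  findLikelyDuplicates([
    { id: "1", name: "Jane Q Smith", zip: "02134" },
    { id: "2", name: "Jane Smith", zip: "02134-1234" },
    { id: "3", name: "Robert Jones", zip: "94107" },
  ])
); // [["1", "2"]]
```

The value of blocking is that candidate pairs are only scored within a block, which keeps the number of comparisons manageable as record volumes grow.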

Third, Informatica’s Customer 360 solution provides a strong foundation for application development, both because it is built on a shared microservices architecture and because of investments in DevOps-friendly capabilities, including metadata versioning supported by automated testing and defined DevOps pipelines. Opening up both the services available in the Customer 360 solution and the underlying data to custom applications and integrations will help the data-driven business make relevant data more accessible.

The Customer 360 product also includes simplified consumption-based pricing, a user interface designed for a simpler user experience, and improved integration capabilities, including real-time, streaming, and batch functionality that reflects the changing nature of data. Amalgam Insights looks forward to seeing how a pay-as-you-go SaaS approach to Customer 360 is received, as this combination is relatively new in a master data management market where implementations are often treated as massive CapEx projects.

Overall, Amalgam Insights sees Informatica’s Customer 360 as a valuable step forward for both the master data management and customer data markets.

This combined vision of providing consumption-based pricing, a contextual and intelligently augmented version of the truth, and a combination of data capabilities designed to maximize the accuracy of customer data within a modern UX is a compelling offering for organizations seeking a customer data management solution.

As a result, Amalgam Insights recommends Customer 360 for organizations interested in minimizing the amount of time spent in cleansing, fixing, and finding customer data while accelerating time-to-context. Organizations focused on progressing beyond standard non-AI-enabled data cleansing and governance processes are best positioned to maximize the value of this offering. 


Analysis: CoreView Raises $10 Million Series B Round for SaaS Management

Key Stakeholders: CIO, CTO, CFO, Software Asset Managers, IT Asset Managers, IT Procurement Managers, Technology Expense Managers, Sales Operations Managers, Marketing Operations Managers.

Why It Matters: The investment in CoreView comes at a time when SaaS proliferation and management are becoming a core IT problem. CoreView’s leadership position in managing Microsoft 365 and enterprise SaaS portfolios makes it a vendor to consider to solve the SaaS mess.

Top Takeaway: Enterprises and IT suppliers managing large SaaS portfolios, whether from a financial or operational perspective, must find a solution to manage the hundreds or thousands of SaaS apps and services in their environments or risk both security breaches and financial bloat, with millions of dollars at stake.

About the Funding Round

On October 5, 2020, CoreView raised a $10 million Series B round led by Insight Partners. CoreView provides a Software as a Service (SaaS) management platform to secure and optimize Microsoft 365 and additional applications as an augmentation of the Microsoft 365 Admin Center.

CoreView was founded in 2014 in Milan, Italy by a team with experience as Microsoft system integrators to provide governance for Office 365 deployments. In October 2019, CoreView augmented its solution with the acquisition of Alpin, a SaaS management solution used to monitor SaaS activity and manage costs.

With this funding, CoreView is expected to increase both its direct clientele as well as its global reseller and service provider base. Having grown almost three-fold over the past year, CoreView is acquiring this funding at a time when SaaS management is becoming an increasingly important part of IT management.

From Amalgam Insights’ perspective, this funding is interesting for two reasons: the quality of the investor and the growing challenge of managing SaaS.

First, this round was led by Insight Partners, which has a strong history of investing in fast-growing DevOps and data companies in line with CoreView’s enterprise software management needs, including Ambassador, Carbon Relay, JFrog, Resolve, Dremio, and OneTrust. Because this investor has been deeply involved with investments in the future of software development and management, Amalgam Insights believes that Insight Partners provides value to CoreView as an investor.

Second, this funding is coming at a time when SaaS proliferation has become an increasingly challenging problem, and it indicates where the next wave of growth in IT management is going to occur. After a decade of stating that “There’s an app for that,” companies must now face the challenge of standardizing and optimizing their app environments. Security vendors such as Symantec and NetSkope have published estimates that the average enterprise uses between 900 and 1,200 discrete apps and services on a regular basis, which creates a logistical nightmare.

A decade ago, I wrote on the challenges of Bring Your Own Device and the issues of expense and inventory management for these $500 devices. But with the emergence of Bring Your Own App, multiplied by the sheer proliferation of productivity, sales and marketing, and other line-of-business applications, SaaS management was already coming of age as its own complicated challenge for IT as SaaS was growing 20-25% per year as a market. With the challenges of COVID-19, SaaS has only become more important for keeping remote and work-from-home employees connected to key tools and data.

Recommendations

Based on this funding round, Amalgam Insights makes one key recommendation for IT departments: get control of your SaaS portfolio, which is likely scattered across line-of-business admins, expense reports, and the half of SaaS associated with enterprise software that currently sits in IT. Even if app administration remains in the hands of line-of-business teams, IT needs to be aware of data governance, data integration and duplication, and zero-trust-based management of least-privilege access across apps. IT still has a job in the SaaS era, as SaaS continues to grow from roughly a quarter of all enterprise software in 2020 toward Amalgam Insights’ projection that the SaaS market will triple by 2025 to approximately $300 billion and become half of the enterprise software market.

An additional recommendation for all IT agents, resellers, and service providers is to gain a SaaS management capability as soon as possible. At this point, this means looking at two areas: SaaS Operations Management focused on the governance and configuration of SaaS and SaaS Expense Management focused on the inventory and cost optimization of SaaS. There is some overlap between the two as well as some areas of specialization. From Amalgam Insights’ perspective, CoreView is recommended as both an Operations Management and an Expense Management solution with a specialization in supporting Microsoft 365.

If you have any questions about this research note or on the SaaS Management market, please contact Amalgam Insights at info@AmalgamInsights.com to set up time to speak with our analysts.


Why Tom Petrocelli Thinks Google Is Forming the Open Usage Commons (OUC)

by Tom Petrocelli

On July 9, 2020, Google announced that it had formed an organization called the Open Usage Commons, or OUC. In a previous blog, I laid out the case that this organization was a horrible idea from an intellectual property (IP) management and licensing perspective. In a nutshell, this new organization holds the trademarks, and only the trademarks, from open source projects. Copyright would continue to be managed through the current open-source licenses and organizations.

As someone who spent several years in the intellectual property management industry (at a company literally called IP.com) and has advocated for open source for 30 years, I found this unusual, unnecessary, and suspicious. Ultimately, my experience told me that the OUC added unnecessary complexity and confusion to otherwise straightforward open-source projects. I finished the blog with a call to fork the projects. Pretty harsh words from an analyst who’s usually a pretty positive guy and a big fan of open source.

The follow-up question to the “why this is bad” blog has been “why then is Google doing this?”

I think the reason is much simpler than the complex IP problems alluded to on the OUC website. Simply put, Google wants to benefit from open-source development but not lose all control over the IP. It’s hard to maximize software revenue when the IP and brand are controlled elsewhere. There are a few organizations – Red Hat and Canonical are good examples – that can generate revenue from open source effectively. Google, on the other hand, has been a reliable and good actor in the open-source cloud-native community while consistently remaining in the third position for cloud services, behind Amazon Web Services and Microsoft Azure.

The fact is that a piece of software Google developed is making lots of money for many other companies while Google remains stuck in the number three slot in the cloud market. The software in question is, of course, Kubernetes.

If you rewind three years, Kubernetes was only one of many orchestrators. There were Apache Mesos, Rancher Cattle, Docker Swarm, Cloud Foundry Diego, and others in addition to Kubernetes. At the time, there were few large deployments of container architectures, and the need for orchestration was just emerging. Fast forward to today, and all of those competitors to Kubernetes are more or less gone, and Kubernetes dominates the orchestrator market. Even more important, Kubernetes has become the base for the emerging next-generation IT platform. It will form the core of new architectures moving forward for years, perhaps decades, to come. Neither Google nor anyone else could have predicted that Kubernetes would become the powerhouse that it has. In the meantime, many large rivals have entered the Kubernetes market, including VMWare, Rancher (recently purchased by SUSE), Canonical, Microsoft Azure, Amazon Web Services, and HPE.

Kubernetes has become a massive, open-governance, open-source platform play that Google can’t monetize any more than anyone else can. Red Hat was acquired by IBM at a $34 billion valuation, much of it because of OpenShift, which is based on Kubernetes. Red Hat is now central to IBM’s cloud and platform strategy and is its primary cloud growth engine. Rancher was acquired by SUSE (for a rumored $600M to $700M) because of its Kubernetes platform. Kubernetes is to Google what the Docker Engine was to Docker – a key piece of heavily adopted IP that they make less money with than their rivals.

Meanwhile, Google is invested in other homegrown open source projects in the Kubernetes ecosystem, especially Istio and Knative. Istio, one of the projects whose trademarks are under the OUC aegis, is used to implement a service mesh control plane for Kubernetes. It has shown an almost Kubernetes-like uptake in the market and is included in a number of key Kubernetes distributions including Red Hat, Rancher/SUSE, HPE, and IBM. It has long been expected by the cloud-native community that the Istio project, including the trademarks, would become part of the Cloud Native Computing Foundation (CNCF) just like Linkerd and Envoy, two other service mesh projects. Google has instead launched the OUC to take ownership of the Istio trademark.

The head of Google Cloud Services, Thomas Kurian, came from Oracle and is steeped in Oracle’s software business practices. It is easy to imagine that, to him, Google is giving away valuable IP while rivals make all the money. The OUC is a way to retain control of the IP while not appearing to abandon the open-source movement. The board of the OUC consists of two Google employees, one ex-Googler, and, according to Google, a long-time collaborator, alongside two others. That doesn’t suggest independence from Google. Even if the project is transferred to the CNCF in the future, Google can still call the shots on branding and messaging through the OUC.

The key problem for Google is that the software industry doesn’t work like it used to.

You can’t be half in and half out of open-source.

In the end, this is more likely to drive vendors to other projects such as Linkerd or Consul and reduce support for Istio. Istio may also go the route of OpenOffice, Java EE, and MySQL. In all three of those projects, where Oracle asserted control over some or most of the intellectual property, disputes broke out over licensing and technical direction, leading to forks. The OUC is a clever Google take on the Oracle playbook. Incidentally, each of those forks (LibreOffice, Jakarta EE, and MariaDB) has thrived, often overtaking the mother project.

The OUC increases fear, uncertainty, and doubt. The only way for Google to fix this and regain the spirit of open source is to refocus the OUC on IP education and transfer all Istio IP, along with the project, to CNCF. They should find similar homes for the other projects in the OUC portfolio. That is how they can regain the confidence of the open-source community.

Google’s failure to monetize their IP and maximize cloud revenues will not be alleviated by this move. Instead, they will lose their open source credibility and make partners suspicious. Simply put, this is not how open source works. This looks too much like a misguided attempt to control popular open-source software such as Istio and Angular. There are real IP management and licensing problems that the OUC can help to fix. They need to work on fixing those problems and not controlling trademarks.

Contact Amalgam Insights

We're Game to Talk Tech!

Amalgam Insights is currently available for thought leadership marketing, value & ROI mapping, product development, expert witness, investment advisory, and strategic consulting engagements. If you’d like to learn more, click below to set up time to chat with us. How can we help you?

Can't wait to talk? Send us a message!

Our Services

Our amalgam of services

The Business Value of Technology

Amalgam Insights helps technology vendors with strategic messaging, product development, value mapping, and thought leadership efforts.

We help end user companies to calculate business value, avoid the mistakes being taught through status quo thinking, and see new market trends to unlock the hidden value of technology through workshops, presentations, and research subscriptions.

And we work with independent investors and law firms on due diligence, market guidance, and expert witness efforts associated with telecom and machine learning issues.

Advisory Services

We have advisory relationships with vendors and enterprises to keep them in touch with the latest trends in the IT of business and the business of IT. We also conduct Business Value Workshops to help present the financial, operational, and strategic value of technology.

Custom Research

We conduct bespoke research based on client interest.

Custom topics include:
  • Business Value Analysis: A deep dive into the ROI, employee perception, and executive perception of a single technology solution based on the voice of the customer.
  • Head-to-Head Solution Selection: A third-party perspective on how customers choose between two complex technologies, based on interviews of customers faced with that decision.

Research Subscriptions

Our independent research includes SmartList summaries of market leading vendors, Market Milestone analysis of why key technology investments matter, as well as Analyst Insights that share the strategic frameworks that help shape the future of your IT environment.

Thought Leadership

We know that our findings and research need to be delivered in a multimedia fashion. Our engaging webinars, videos, and social media allow us to share the latest trends in technology finance and data democratization.

Check out our webinars on BrightTALK or take a look at one of our keynotes below.

Let’s work together

About Us

We track the business value of technology

Amalgam (noun):

”a blend of elements that creates a complete whole” –
Cambridge English Dictionary, online, 2019

Amalgam Insights is a blend of expertise – a community of curious and engaged thinkers who focus on the value of technology, trends and strategies that lead to positive disruptive change and business outcomes. We use our decades of experience to track the financial, organizational, and strategic value of technology in the following areas.

  • Managing IT Costs and Budgets
  • Augmenting Accounting and Finance Efforts
  • Making Data, Analytics, and Machine Learning More Accessible

We dig deeply into the traits that translate software functionality into revenue, productivity, and business transformation.

Work with us

Set up an initial Analyst Briefing with us! Let us know how we can help your organization to cut IT costs, improve your data initiatives, or evaluate finance and accounting software! Get in touch and we'll show you the newest and most relevant trends.

How we help

We don't just provide market trends and recommendations to your company. We show you the key trends that actually increase business value and make your work lives easier.

Our process

We work with clients based on what they need, not what we sell. Some clients just need a quick advisory call while others need a workshop, presentation, or ongoing guidance throughout a transformative year. But your success is always top of mind for us.

Amalgam Insights

We are an industry analyst firm founded by Hyoun Park in 2017 to help organizations manage the data, expenses, and finances of enterprise technology. Since our inception, we have had the honor of helping over 1,000 businesses with their technology challenges. We would love for you to be next.

If our research, blog, social media, and presentations have gotten your attention, please set up time to chat with us. We’re always game to talk tech!


JAMStack Design Pattern Emerges to Radically Rethink Application Platforms

Author: Tom Petrocelli

Executive Summary

Key Stakeholders: Chief Information Officers, Chief Technical Officers, Vice Presidents of IT, VPs of Platform Engineering, DevOps Evangelists, Platform Engineers, Release Managers, Automation Architects, Software Developers, Software Testers, Security Engineers, Software Engineering Directors and Managers, IT directors, DevOps professionals

Why It Matters: The IT industry has been undergoing a radical rethinking of how we architect application platforms. Much of the attention has been drawn to the back-end platforms built on containers and the Kubernetes ecosystem. Less has been said of the front-end environment, even though it too is undergoing a redesign.

Top Takeaway: Jamstack dovetails nicely into emerging IT architectures. It points to a future cloud-native web architecture based on Jamstack design patterns on the front end, backed by microservices implemented as transient serverless functions and more permanent Kubernetes-based services.


No one would say that the Spring 2020 conference season was in any way usual. With COVID-19 spreading throughout the world, the vast majority of conferences became “online,” “digital,” or “virtual.” At the tail end of the season were Microsoft Build Online 2020 (May 19, 2020) and, a week later, Jamstack Conf 2020 (May 27, 2020). At both conferences, there was discussion of a web application architecture called Jamstack. Indeed, it was the focus of the latter conference.

The IT industry has been undergoing a radical rethinking of how we architect application platforms. Much of the attention has been drawn to the back-end platforms built on containers and the Kubernetes ecosystem. Less has been said of the front-end environment, even though it too is undergoing a redesign. This is where Jamstack comes in.

Unlike containers and Kubernetes, Jamstack is not a technology. Instead, it is a design pattern that is implemented in technology. The philosophy behind Jamstack is to isolate the web developer from the back-end systems that support their applications, creating more efficient developers. An additional goal is to increase the scalability of web apps. In this regard, the market drivers are similar to those of the Kubernetes and Serverless communities – creating applications that are easier to build and maintain while increasing scalability.

The “JAM” in Jamstack is an acronym for JavaScript, APIs, and Markup. This suggests not only a basket of technologies to be used in web applications; it also points to a philosophy of simplifying web app design that makes for more efficient development and operations. With Jamstack, web applications are expected to be single-page apps, sometimes called static site apps, comprised mostly of JavaScript that uses APIs to access back-end services (typically microservices) while using markup languages such as HTML and CSS to render the UI.

This is a model that most React framework developers will recognize: load one page that contains the code and UI. After that, the application is expected to access additional data through RESTful API calls in response to user actions and render the results in HTML and CSS. This contrasts with the way many dynamic web sites are designed, where back-end systems generate HTML that is then sent to the browser as a new page. There are also applications built using the older AJAX design pattern or technology such as ASP.NET or Java Server Pages, where pages are generated in the back end and sent to the browser, which then uses JavaScript to access more data as needed. Both of these architectures require web developers to be “full-stack developers,” meaning that the developer has to understand the entire platform, from database to web front end, to create a web application. This makes development slower and less efficient while requiring client browsers to constantly download large pages, sometimes over large distances.
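
As a minimal sketch of this pattern (the /api/products endpoint and the product-list element are hypothetical, and any front-end framework could replace the plain DOM calls used here):

```typescript
// Minimal Jamstack-style sketch: the static page has already loaded; JavaScript
// fetches data from a back-end API and renders it as markup, with no new page
// being generated on a server. The endpoint and element id are hypothetical.
interface Product {
  name: string;
  price: number;
}

async function renderProducts(): Promise<void> {
  // "A" in JAM: call a back-end API (typically a microservice or serverless function).
  const response = await fetch("/api/products");
  const products: Product[] = await response.json();

  // "M" in JAM: render the result as markup inside the already-loaded static page.
  const list = document.getElementById("product-list");
  if (list) {
    list.innerHTML = products
      .map((p) => `<li>${p.name}: $${p.price.toFixed(2)}</li>`)
      .join("");
  }
}

// Triggered on page load or by a user action; the page itself stays static and cacheable.
renderProducts().catch(console.error);
```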

The evolution and growth of Jamstack was on display not only at the Jamstack Conference, as one would expect, but also at Microsoft Build. Part of the Azure developer presentation talked about the Jamstack architecture and how it is implemented as a service called Azure Static Web Apps. Even more interesting was the relationship with serverless. Microsoft presented a web application architecture composed of the Azure Static Web Apps service for the front end and Azure Serverless Functions implementing scalable microservices as the back end.

There are other ways the Jamstack community is looking to achieve its goals, especially reliability. Of particular interest is driving more functionality into the edge nodes. Most web applications use a content delivery network, or CDN, which caches static portions of a web site geographically closer to users. Without a CDN, all web requests would have to drag big assets such as graphics from their point of origin to the user’s browser. The amount of latency would make the web experience intolerable to many users. The Jamstack community is now looking to place some common processing in the front end CDN edge nodes, to handle common situations instead of processing them at a distant service.
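
As a rough sketch of that idea, assuming a service-worker-style edge runtime that exposes addEventListener("fetch", ...) and the Fetch API (several CDN edge platforms do), an edge function can answer a common request directly at the node and pass everything else through to the origin. The origin URL and the /api/region route here are hypothetical:

```typescript
// Sketch only: a service-worker-style fetch handler running on a CDN edge node.
// Assumes the edge runtime provides addEventListener("fetch", ...) and the Fetch API.
const ORIGIN = "https://origin.example.com"; // hypothetical origin service

addEventListener("fetch", (event: FetchEvent) => {
  event.respondWith(handle(event.request));
});

async function handle(request: Request): Promise<Response> {
  const url = new URL(request.url);

  // Handle a common, lightweight request directly at the edge node...
  if (url.pathname === "/api/region") {
    return new Response(JSON.stringify({ region: "us-east" }), {
      headers: { "content-type": "application/json" },
    });
  }

  // ...and pass everything else through to the distant origin (or the CDN cache).
  return fetch(new Request(ORIGIN + url.pathname + url.search, request));
}
```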

While Jamstack has a lot of advantages, some of the vendor messaging can be a little confusing. For example, it is true that a web developer doesn’t have to be a full-stack developer. Someone, however, has to provide the back-end platform for the web developer to access. So, at an organizational level, a full stack is still required for Jamstack to be meaningful. If an organization is only building apps using cloud services, the back end may be provided entirely by those services, but the majority of organizations don’t fit this description. Ultimately, the distinction between a web developer and a platform developer is artificial. Code is code.

Ultimately, Jamstack dovetails nicely into emerging IT architectures. It points to a future cloud-native web architecture based on Jamstack design patterns on the front end, backed by microservices implemented as transient serverless functions and more permanent Kubernetes-based services. This is the melding of three emerging architectures into one whole design that provides for easier development, maintenance, reliability, and scalability.


The Emergence of Kubernetes Control Planes

As is the case with all new technology, container cluster deployments began small. There were some companies, Google for example, that were deploying sizable clusters, but these were not the norm. Instead, there were some test beds and small, greenfield applications. As the technology proved itself and matured, more organizations adopted containers and the market favorite container orchestrator, Kubernetes.

The emergence of Kubernetes was, in fact, a leading indicator that containers were starting to see more widespread adoption in real applications. The more containers deployed, the greater the need for software to automate their lifecycle. Even so, it was unusual to find organizations standing up many Kubernetes clusters, especially geographically dispersed clusters.

That is beginning to change. Organizations that have adopted containers and Kubernetes are starting to struggle with managing multiple clusters spread throughout an enterprise. Just as managing large numbers of containers in a cluster was the impetus for orchestrators such as Kubernetes, new software is needed to manage large-scale, multi-cluster environments. At the same time, Kubernetes clusters have been getting more complex internally. From humble beginnings of a handful of containers with a microservice or two, clusters now include containers for networking (including service mesh sidecars and data planes), logging, application performance monitoring, database connectivity, and storage. All that is in addition to the growing number of microservices being deployed.

In a nutshell, there are now a greater number of larger and more complex Kubernetes container clusters being deployed. It is no longer enough to manage the lifecycle of the containers. It is now necessary to manage the lifecycle of the cluster itself. This is the purpose of a Kubernetes control plane.

Kubernetes control planes comprise a series of functions that manage the health and well-being of the cluster. Common features are:

  • Cluster lifecycle management including provisioning of clusters, often from templates for common types of clusters.
  • Versioning including updates to Kubernetes itself.
  • Security and Auditing
  • Visibility, Monitoring, and Logging

Kubernetes control planes are policy-driven and automated. This allows operators to focus on governance while the control plane software does the rest. Not only does this reduce errors, but it also allows for faster responses to changes or problems that may arise. This automation is necessary since managing many large multi-site clusters by hand would require large amounts of manpower and, hence, cost.

Software vendors have stepped up with products to meet this emerging need. In the past year, products that implement a Kubernetes control plane have been announced or deployed by Rancher, Platform9, IBM’s Red Hat division (Advanced Cluster Management), VMware (Tanzu Mission Control), and more. All of these Kubernetes control planes are designed for multi-cloud, hybrid clusters and are packaged either as part of a Kubernetes distribution or as an aftermarket addition to a company’s Kubernetes product.

Kubernetes control planes are a sign of the normalization of container clusters. The growth in both the complexity and scale of container clusters necessitates a management layer that helps DevOps teams stand up and manage clusters more quickly. This is the only way that platform operations can match the speed of Agile development and automated CI/CD toolchains. It is yet another piece of the emerging platform where our modern cloud-native applications will live.

To learn more about this topic, Amalgam Insights recommends the following reports:

Market Landscape – Packaging Code for Microservices

Analyst Insight Cloud Foundry and Kubernetes: Different Paths to Microservices

Red Hat Acquires CoreOS Changing the Container Landscape

as well as Tom Petrocelli’s blogs at https://www.amalgaminsights.com/tag/kubernetes/


5 MegaThemes for the 2020s That Will Transform IT

As we get ready for 2020, Amalgam Insights is here to prepare companies for the future. In the past few weeks, we’ve been posting insights on what to look for in 2020, including our four-part series on Ethical AI for the future.

Over this decade, we have learned how to work with technology at massive scale and with unprecedented power as the following technology trends surfaced in the 2010s:
  • The birth and death of Big Data in supporting massive scale as the terabyte shifted from an intimidating amount of data to a standard unit of measurement
  • The evolution of cloud computing from niche tool to a rapidly growing market that is roughly $150 billion a year now and will likely be well over a trillion dollars a year by the end of the 2020s
  • The Internet of Things, which will enable a future of distributed and specialized computing based on billions of processors and build on this decade’s massive progress in creating mobile and wireless smart devices.
  • The democratization of artificial intelligence tools including machine learning, deep learning, and data science services and platforms that have opened up the world of AI to developers and data analysts
  • The use of CRISPR Cas9 to programmatically edit genes, which has changed the biological world just as AI has changed the world of technology
  • Brain biofeedback and Brain-Computer Interfaces, which provide direct neural interfaces to control and affect a physical environment.
  • Extended Reality, through the development of augmented and virtual reality which are starting to provide realistic sensory simulations available on demand

[Image: 2010s Tech Drivers]

These bullet points describe where we already are today as of the end of 2019. So, how will all of these technologies affect the way we work in the 2020s? From our perspective, these trends fit into 5 MegaThemes of Personalization, Ubiquity, Computational Augmentation, Biologically Influenced Computing, and Renewability.
We believe the following five themes have both significantly evolved during the 2010s and will create the opportunity for ongoing transformative change that will fundamentally affect enterprise technology. Each of these MegaThemes has three key trends that will affect the ways that businesses use technology in the 2020s. This piece provides an introduction to these trends that will be contextualized from an IT, data, and finance perspective in future work, including blogs, webinars, vendor landscapes, and other analyst insights.

[Image: 2020s Tech MegaTrends]

Over the rest of the year, we’ll explore each of these five MegaThemes in greater detail, as these primary themes will end up driving innovation, change, and transformation within our tactical coverage areas including AI, analytics, Business Planning, DevOps, Finance and Accounting, Technology Expense Management, and Extended Reality.