Tom Petrocelli’s Retirement Message to All of You

Well, best to rip off the band-aid. 

I’m retiring at the end of the year. That’s right, on January 1, 2021, I will be officially and joyfully retired from the IT industry. No more conferences, papers, designs, or coding unless I want to. Truth be told, I’m still pretty young to retire. Some blame has to be laid at the feet of the pandemic. Being in the “trend” industry also sometimes makes you aware of negative changes coming up. The pandemic is driving some of those, including tighter budgets, and that will just make everything harder. Many aspects of my job that I like, especially going to tech conferences, will be gone for a while or maybe forever.

I can’t blame it all on the pandemic, though. Some of it is just demographics. Ours is a youthful industry, with a median age somewhere in the early-to-mid 40s. To be honest, I’m getting tired of being the oldest, or one of the oldest, people in the room. It’s not as if I’m personally treated as an old person. In fact, I’m mostly treated as younger than I am, which means people feel a certain comfort making “old man” jokes around me. No one thinks that I will take offense at the ageism, I suppose. It’s not really offense as much as irritation.

There will be a good number of things I will miss. I really love technology and love being among people who love it as much as I do. What I will miss the most are the people I’ve come to know throughout the years. It’s a bit sad that I can’t say goodbye in person to most of them. I will especially miss the team here at Amalgam Insights. Working with Hyoun, Lisa, and everyone else has been a joy. Thanks for that, you all.

My career has spanned a bit over 36 years (which may surprise some of you… I hope) and a degree of change rarely experienced in any industry. When I started fresh from college in 1984, personal computers were new, and the majority of computing still ran on the mainframes my Dad operated. No one could have imagined walking around with orders of magnitude more computing power in our pockets. So much has changed.

If you will indulge me, I would like to present a little parting analysis. Here is “What has changed during my career”.

  1. When I started, mainframes were still the dominant form of computing. Now they are the dinosaur form of computing. Devices of all kinds wander the IT landscape, but personal computers and servers still dominate the business world. How long before we realize that cyberpunk goal of computers embedded in our heads? Sooner than I would like.
  2. At the beginning of my career, the most common way to access a remote computer was a 300 baud modem. The serial lines that terminals used to speak to the mainframes and minicomputers of the time ran at roughly the same speed. The bandwidth of those devices was roughly 0.03 Mbps. Now, a home connection to an ISP is 20-50 Mbps or more, and a corporate desktop can expect a 1 Gbps connection. That’s more than 33,000 times what was common in the 80s.
  3. Data storage has gotten incredibly cheap compared to the 1980s. The first 10 MB hard drive I purchased for a $5,000 PC cost almost US$1,000 in 1985 dollars. For a tenth of that price I can now order a 4 TB drive (and have it delivered the next day). Adjusted for inflation, that $1,000 drive cost roughly $2,500 in 2020 dollars, 25 times the price of the modern 4 TB drive.
  4. Along with mainframes, monolithic software has disappeared from the back end. Instead, client-server computing has given way to n-Tier as the main software platform. Not for long, though. Distributed computing is in the process of taking off. It’s funny: at the beginning of my career I wrote code for distributed systems, which was an oddity back then. Now, after more than 30 years, it’s becoming the norm. Kind of like AI.
  5. Speaking of AI, artificial intelligence was little more than science fiction back then. Even the impressive AI of the day handled narrow functions like handwriting recognition, which was developed at my alma mater, the University at Buffalo, for the post office. Nothing like what we see today. We are still, thankfully, decades or maybe centuries from real machine cognition. I’ll probably be dead before we mere humans need to bow to our robot overlords.
  6. When I began my career, the industry was very male and white. My first manager was a woman, and we had two other women software engineers in our group. This was as weird as a pink polka-dotted rhinoceros walking through the break room. Now, the IT industry is… still very male and white. There are more women, people with disabilities, and people of color than there were then, but not quite the progress I had hoped for.
  7. IBM was, at that time, the dominant player in the computer industry. Companies such as Oracle and Cisco were just getting started, Microsoft was still basically a garage operation, and Intel was mostly making calculator chips. Now, IBM struggles to stay alive; Cisco, Oracle, Intel, and Microsoft are the established players in the industry; and Amazon, an online store, sits atop the most important trend in computing in the last 20 years: cloud computing. So many companies have come and gone, I don’t even bother to keep track.
  8. In the 1980s, the computer industry was almost entirely American, with a few European and Japanese companies in the market. Now, it’s still mostly American but for the first time since the dawn of the computer age, there is a serious contender: China. I don’t think they will dominate the industry the way the US has, but they will be a clear and powerful number two in the years to come. The EU is also showing many signs of innovation in the software industry.
  9. At the start of my career, you still needed paper encyclopedias. Within 10 years, you could get vast amounts of knowledge on CDs. Today, all the world’s data is available at our fingertips. I doubt young people today can even imagine what it was like before the Internet gave us access to vast amounts of data in an instant. To them, it would be like living in a world where state-of-the-art data storage is a clay tablet with cuneiform writing on it.
  10. What we wore to work has changed dramatically. When I started my career, we were expected to wear business dress: a jacket and tie with dress slacks for men, and a dress or power suit for women. In the 90s that shifted to business casual, and polo shirts and khakis filled up our closets. Before the pandemic, casual became proper office attire, with t-shirts and jeans acceptable. At the start of my career, dressing like that at work could get you fired. Post-pandemic, pajamas and sweatpants seem to be the new norm, unless you are on a Zoom call. Even so, pants are becoming optional.
  11. Office communication has also changed dramatically. For eons the way to communicate with co-workers was “the memo.” You wrote a note in longhand on paper and handed it to a secretary who typed it up. If it was going to more than one person, the secretary would duplicate it with a Xerox machine and place it in everyone’s mailboxes. You had to check your mailbox every day to make sure that you didn’t have any memos. It was slow, and the secretaries knew everyone’s business. We still have vestiges of this old system in our email systems. CC stands for carbon copy, which was a way of duplicating a memo. In some companies, everyone on the “To:” list received a freshly typed copy while the CC list received a copy made with carbon paper and a duplicating machine. As much as you all might hate email, it is so much better (and faster) than the old ways of communicating.
  12. When I started my first job, I became the second member of my immediate family in the IT industry. My Dad was an operations manager in IBM shops. Today, there are still two members of our immediate family who are computer geeks. My son is also a software developer. He will have to carry the torch for the Petrocelli computer clan. No pressure though…
  13. Remote work? Ha! Yeah, no. Not until the 90s, and even then, it was supplementary to my go-to-the-office job. I did work out of my house during one of my startups, but I was only 10 minutes from my partner. My first truly remote job was in 2000, and it was very hard to do. This was before residential broadband and smartphones. Now, it’s so easy with lots of bandwidth to my house, cheap networking, Slack, and cloud services to make it easy to stay connected. Unfortunately, not everyone has this infrastructure or the technical know-how to deal with network issues. We’ve come a long way but not far enough, as many of you have recently discovered.

So, goodbye my audience, my coworkers, and especially my friends. Hopefully, the universe will conspire to have us meet again. In the meantime, it’s time for me to devote more time to charity, ministry, and just plain fun. What can I say? It’s been an amazing ride. See ya!

(Editor’s Note: It has been a privilege and an honor to work with Tom over the past few years. Tom has always been on the bucket list of analysts I wanted to work with in my analyst career and I’m glad I had the chance to do so. Please wish Tom well in his next chapter! – Hyoun)

Why Tom Petrocelli Thinks Google Is Forming the Open Usage Commons (OUC)

by Tom Petrocelli

On July 9, 2020, there was an announcement that Google had formed an organization called the Open Usage Commons, or OUC. In a previous blog I laid out the case that this organization was a horrible idea from an intellectual property (IP) management and licensing perspective. In a nutshell, this new organization holds the trademarks, and only the trademarks, of open-source projects. Copyright would continue to be managed through the current open-source licenses and organizations.

As someone who spent several years in the intellectual property management industry (at a company literally called IP.com) and has been an advocate of open source for 30 years, I found this unusual, unnecessary, and suspicious. Ultimately, my experience told me that the OUC added needless complexity and confusion to otherwise straightforward open-source projects. I finished the blog with a call to fork the projects. Harsh words from an analyst who’s usually a positive guy and a big fan of open source.

The follow-up question to the “why this is bad” blog has been “why then is Google doing this?”

I think the reason is much simpler than the complex IP problems alluded to on the OUC website. Simply put, Google wants to benefit from open-source development without losing all control over the IP. It’s hard to maximize software revenue when the IP and brand are controlled elsewhere. There are a few organizations – Red Hat and Canonical are good examples – that can generate revenue from open source effectively. Google, on the other hand, has been a reliable and good actor in the open-source cloud-native community while consistently remaining in the third position for cloud services, behind Amazon Web Services and Microsoft Azure.

The fact is that a piece of software Google developed is making lots of money for many other companies while Google remains stuck in the number three slot in the cloud market. The software in question is, of course, Kubernetes.

If you rewind three years, Kubernetes was only one of many orchestrators. There was Apache Mesos, Rancher Cattle, Docker Swarm, Cloud Foundry Diego, and others in addition to Kubernetes. At the time, there were few large deployments of container architectures, and the need for orchestration was just emerging. Fast forward to today: all of those competitors to Kubernetes are more or less gone, and Kubernetes dominates the orchestrator market. Even more important, Kubernetes has become the base for the emerging next-generation IT platform. It will form the core of new architectures for years, perhaps decades, to come. Neither Google nor anyone else could have predicted that Kubernetes would become the powerhouse that it is. In the meantime, many large rivals have entered the Kubernetes market, including VMware, Rancher (recently purchased by SUSE), Canonical, Microsoft Azure, Amazon Web Services, and HPE.

Kubernetes has become a massive, open-governance, open-source platform play that Google can’t monetize any more than anyone else can. Red Hat was acquired by IBM at a $34B valuation, much of it because of OpenShift, which is based on Kubernetes. Red Hat is now central to IBM’s cloud and platform strategy and is its primary cloud growth engine. Rancher was acquired by SUSE (for a rumored $600M to $700M) because of its Kubernetes platform. Kubernetes is to Google what the Docker Engine was to Docker – a key piece of heavily adopted IP that they make less money with than their rivals.

Meanwhile, Google is invested in other homegrown open-source projects in the Kubernetes ecosystem, especially Istio and Knative. Istio, one of the projects whose trademarks are under the OUC aegis, implements a service mesh control plane for Kubernetes. It has shown an almost Kubernetes-like uptake in the market and is included in a number of key Kubernetes distributions, including those from Red Hat, Rancher/SUSE, HPE, and IBM. The cloud-native community has long expected that the Istio project, including the trademarks, would become part of the Cloud Native Computing Foundation (CNCF), just like Linkerd and Envoy, two other service mesh projects. Google has instead launched the OUC to take ownership of the Istio trademark.

The head of Google Cloud, Thomas Kurian, came from Oracle and is steeped in Oracle’s software business practices. It is easy to imagine that, to him, Google is giving away valuable IP while rivals make all the money. The OUC is a way to retain control of the IP while not appearing to abandon the open-source movement. The board of the OUC consists of two Google employees, one ex-Googler, a long-time Google collaborator (according to Google), and two others. That doesn’t suggest independence from Google. Even if the project is transferred to the CNCF in the future, Google can still call the shots on branding and messaging through the OUC.

The key problem for Google is that the software industry doesn’t work like it used to.

You can’t be half in and half out of open-source.

In the end, this is more likely to drive vendors to other projects such as Linkerd or Consul and reduce support for Istio. Istio may also go the route of OpenOffice, Java EE, and MySQL. In all three of those projects, where Oracle asserted control over some or most of the intellectual property, disputes broke out over licensing and technical direction, leading to forks. The OUC is a clever Google take on the Oracle playbook. Incidentally, each of those forks (LibreOffice, Jakarta EE, and MariaDB) has thrived, often overtaking the mother project.

The OUC increases fear, uncertainty, and doubt. The only way for Google to fix this and regain the spirit of open source is to refocus the OUC on IP education and transfer all Istio IP, along with the project itself, to the CNCF. They should find similar homes for the other projects in the OUC portfolio. That is how they can regain the confidence of the open-source community.

Google’s failure to monetize its IP and maximize cloud revenue will not be alleviated by this move. Instead, Google will lose open-source credibility and make partners suspicious. Simply put, this is not how open source works. This looks too much like a misguided attempt to control popular open-source software such as Istio and Angular. There are real IP management and licensing problems that the OUC could help to fix. They need to work on fixing those problems, not on controlling trademarks.

The IP Perspective on Why Open Usage Commons is Just Wrong

I have to admit that I’ve been brooding about this for two days now. On July 9, 2020, Google announced the creation of the Open Usage Commons organization. On the surface, the OUC (pronounced “uc”? Ooc?) seems like just another one of the many not-for-profits formed to help an open-source community develop and license software created by many companies together. They announced that three projects would be part of the OUC: Istio, Angular, and Gerrit. So far, pretty normal for moving open core to real open source with independent governance.

Except it’s not. What seemed redundant at best is much more nefarious. The OUC (oh you cee? It’s a bad name…) only holds the trademarks of these open source projects. The copyrights and, potentially, patents are held elsewhere. It’s like owning a car but someone else owns the paint. Why would anyone want this?

To get to why this is so odd, we need to talk about intellectual property. There are three main types of intellectual property (IP) in the US system: trademarks, copyrights, and patents. All three are meant to protect a different form of intellectual output or “art”. A software product may be protected by a patent, but that’s not common. Patents need to be unique inventions and typically cover physical products. There is a type of patent, the process patent, that applies to some software, but it is not how the majority of software is protected. This has been an ongoing issue with software for 30+ years. Patents don’t really work for software.

Copyrights protect artistic works such as art and writing. Software is typically protected by copyright since it is an ephemeral “written” work. The ability to use a copyrighted work, including open-source software, is controlled by licenses. In fact, what differentiates open-source from proprietary software is the license, which grants the right to use and modify the software for free as long as you follow its terms. A common component of copyleft open-source licenses is the requirement to submit changes made to the software back to the community so that they can benefit everyone.

Finally, trademark protections cover the outward identification of an entity or product within a domain. Logos, names, and graphics that identify something: these are trademarks. This blog is protected by copyright, as a written work, and the site name and logos via trademarks.

Software as a product is protected primarily by copyrights and trademarks. The code is protected by copyright and the name and accompanying identifying graphics via trademarks.

Now this is where things get weird. The OUC (Oh oo cay? I really hate the name…) exists to manage only the trademarks of open-source projects. This means that the copyrights for Istio, Gerrit, and Angular are held by some other organization or company, and the trademarks by the OUC (I’ve run out of jokes about the name). Separating the IP into multiple organizations, and hence multiple licenses, seems confusing at best. This is like the financial derivatives where mortgage interest and principal are stripped apart from each other and sold separately. We remember how well that worked out in the 2000s.

At worst, this is a way to control who can actually use open source software without actually saying so publicly. You may have the right to use and productize software such as Istio, as Red Hat, Rancher, and others have done, but not be given the rights to use the name without a second license. That has two effects.

First, it requires separate licenses and hence negotiations for different parts of the total IP. The open-source license will give the licensee rights to the copyright but the OUC can refuse the trademark license.  

Second, it creates a situation where licenses may not always agree and could hinder the ability to market the software. You could then have your open-source ducks in a row but be stuck in negotiations with OUC. While all of that is possible, that’s not even the worst part. What’s worse is that it allows Google through the OUC to claim independent governance for the projects when it’s really only the trademarks.

Istio is a prime example of this problem. The cloud-native community has, for some time, been expecting the Istio software to be given to the CNCF, just like Kubernetes. The fact that this hasn’t happened yet has been a drag on Istio’s adoption and has called its future viability into question. Needless to say, the cloud-native community is perplexed and annoyed by this move. The CNCF is entirely capable of managing both the copyrights and trademarks of Istio, along with the project itself. Even if the copyrights for Istio eventually end up with the CNCF, the Apache Software Foundation, or some other established foundation, the OUC will have the trademarks. That means two licenses for anyone trying to productize Istio, something the CNCF could accomplish with one. At best, it’s needless complexity.

So, here’s my advice to the communities that have been working on these projects. If at all possible, immediately fork the software into another project and join an established not-for-profit. If that’s not possible then abandon it. Vendors will have to create new distributions. While that’s lousy, it’s worse to be a part of something as suspicious and with such a monumentally bad name as OUC. You know, maybe the name is code for Owned Universally by Corporations.

If you are considering an Istio, Angular, or Gerrit-based project and would like to work with Amalgam Insights on due diligence of the project, please contact us at sales@amalgaminsights.com to learn how to work with us. And please read Petrocelli’s prior research on repositories, microservices, CI/CD, Service Mesh, and other DevOps topics at https://www.amalgaminsights.com/research-store/

JAMStack Design Pattern Emerges to Radically Rethink Application Platforms

Author: Tom Petrocelli

Executive Summary

Key Stakeholders: Chief Information Officers, Chief Technical Officers, Vice Presidents of IT, VPs of Platform Engineering, DevOps Evangelists, Platform Engineers, Release Managers, Automation Architects, Software Developers, Software Testers, Security Engineers, Software Engineering Directors and Managers, IT directors, DevOps professionals

Why It Matters: The IT industry has been undergoing a radical rethinking of how we architect application platforms. Much of the attention has been drawn to the back-end platforms built on containers and the Kubernetes ecosystem. Less has been said of the front-end environment, even though it too is undergoing a redesign.

Top Takeaway: Jamstack dovetails nicely into emerging IT architectures. It points to a future cloud-native web architecture based on Jamstack design patterns on the front end, backed by microservices implemented as transient serverless functions and more permanent Kubernetes-based services.


No one would say that the Spring 2020 conference season was in any way usual. With COVID-19 spreading throughout the world, the vast majority of conferences became “online”, “digital”, or “virtual.” At the tail end of the season were Microsoft Build Online 2020 (May 19, 2020) and, a week later, Jamstack Conf 2020 (May 27, 2020). At both conferences, there was discussion of a web application architecture called Jamstack. Indeed, it was the focus of the latter conference.

The IT industry has been undergoing a radical rethinking of how we architect application platforms. Much of the attention has been drawn to the back-end platforms built on containers and the Kubernetes ecosystem. Less has been said of the front-end environment, even though it too is undergoing a redesign. This is where Jamstack comes in.

Unlike containers and Kubernetes, Jamstack is not a technology. Instead, it is a design pattern that is implemented in technology. The philosophy behind Jamstack is to isolate the web developer from the back-end systems that support their applications, creating more efficient developers. An additional goal is to increase the scalability of web apps. In this regard, the market drivers are similar to those of the Kubernetes and Serverless communities – creating applications that are easier to build and maintain while increasing scalability.

The “JAM” in Jamstack is an acronym for JavaScript, APIs, and Markup. This suggests not only a basket of technologies to be used in web applications but also a philosophy of simplifying web app design to make development and operations more efficient. With Jamstack, web applications are expected to be single-page apps, sometimes called static site apps, composed mostly of JavaScript that uses APIs to access back-end services (typically microservices) while using markup languages such as HTML and CSS to render the UI.

This is a model that most React framework developers will recognize: load one page that contains the code and UI. After that, the application is expected to access additional data through RESTful API calls in response to user actions and render the results in HTML and CSS. This contrasts with the way many dynamic web sites are designed. That model calls for back-end systems to generate HTML that is then sent to the browser as a new page. There are also applications built using the older AJAX design pattern or technology such as ASP.NET or Java Server Pages, where pages are generated in the back end and sent to the browser, which then uses JavaScript to access more data as needed. Both of these architectures require web developers to be “full-stack developers”, meaning the developer has to understand the entire platform, from database to web front end, to create a web application. This makes development slower and less efficient while requiring client browsers to constantly download large pages, sometimes over large distances.
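
To make the pattern concrete, here is a minimal sketch of a Jamstack-style page script. It is an illustration only: the /api/products endpoint and the Product shape are hypothetical stand-ins for whatever back-end microservice an application actually calls. The page itself ships as static markup; all dynamic data arrives through client-side API calls.

```typescript
// Minimal Jamstack-style page script (a sketch; the /api/products
// endpoint and Product shape are hypothetical).
interface Product {
  name: string;
  price: number;
}

async function renderProducts(): Promise<void> {
  // Dynamic data comes from a back-end service via a REST call...
  const response = await fetch("/api/products");
  const products: Product[] = await response.json();

  // ...and is rendered as markup in the browser, not on a server.
  const list = document.getElementById("product-list");
  if (list) {
    list.innerHTML = products
      .map((p) => `<li>${p.name}: $${p.price.toFixed(2)}</li>`)
      .join("");
  }
}

renderProducts().catch(console.error);
```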

The evolution and growth of Jamstack was on display not only at Jamstack Conf, as one would expect, but also at Microsoft Build. Part of the Azure developer presentation covered the Jamstack architecture and how it is implemented as a service called Azure Static Web Apps. Even more interesting was the relationship with serverless. Microsoft presented a web application architecture comprised of the Azure Static Web Apps service for the front end and Azure Functions to implement scalable serverless microservices as the back end.

There are other ways the Jamstack community is looking to achieve its goals, especially reliability. Of particular interest is driving more functionality into edge nodes. Most web applications use a content delivery network, or CDN, which caches static portions of a web site geographically closer to users. Without a CDN, all web requests would have to drag big assets such as graphics from their point of origin to the user’s browser. The resulting latency would make the web experience intolerable for many users. The Jamstack community is now looking to place some common processing in the CDN edge nodes, handling common situations there instead of processing them at a distant service.
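
As an illustration, here is a sketch of what that edge-node processing might look like, written against the standard fetch and Cache APIs found in service-worker-style edge runtimes. The runtime, the cache name, and the caching policy are assumptions for the sake of the example, not any particular CDN vendor’s product.

```typescript
// Sketch of request handling pushed to a CDN edge node (assumes a
// service-worker-style runtime that exposes the standard Cache API).
async function handleAtEdge(request: Request): Promise<Response> {
  const cache = await caches.open("edge-cache");

  // Handle the common case entirely at the edge: answer from the
  // local cache with no round trip to the distant origin.
  if (request.method === "GET") {
    const cached = await cache.match(request);
    if (cached) {
      return cached;
    }
  }

  // Fall through to the origin only when the edge cannot answer.
  const response = await fetch(request);
  if (request.method === "GET" && response.ok) {
    await cache.put(request, response.clone());
  }
  return response;
}
```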

While Jamstack has a lot of advantages, some of the vendor messaging can be a little confusing. For example, it is true that a web developer doesn’t have to be a full-stack developer. Someone, however, has to provide the back-end platform for the web developer to access. So, at an organizational level, a full stack is still required for Jamstack to be meaningful. If an organization is only building apps using cloud services, this may be the case. But the majority of organizations don’t fit this description. Ultimately, the distinction between a web developer and a platform developer is artificial. Code is code.

Ultimately, Jamstack dovetails nicely into emerging IT architectures. It points to a future cloud-native web architecture based on Jamstack design patterns on the front end, backed by microservices implemented as transient serverless functions and more permanent Kubernetes-based services. This is the melding of three emerging architectures into one whole design that provides for easier development and maintenance along with better reliability and scalability.

The Emergence of Kubernetes Control Planes

As is the case with all new technology, container cluster deployments began small. There were some companies, Google for example, that were deploying sizable clusters, but these were not the norm. Instead, there were some test beds and small, greenfield applications. As the technology proved itself and matured, more organizations adopted containers and the market favorite container orchestrator, Kubernetes.

The emergence of Kubernetes was, in fact, a leading indicator that containers were starting to see more widespread adoption in real applications. The more containers deployed, the greater the need for software to automate their lifecycle. Even so, it was unusual to find organizations standing up many Kubernetes clusters, especially geographically dispersed clusters.

That is beginning to change. Organizations that have adopted containers and Kubernetes are starting to struggle with managing multiple clusters spread throughout an enterprise. Just as managing large numbers of containers in a cluster was the impetus for orchestrators such as Kubernetes, new software is needed to manage large-scale multi-cluster environments. At the same time, Kubernetes clusters have been getting more complex internally. From humble beginnings of a handful of containers with a microservice or two, clusters now include containers for networking (including service mesh sidecars and data planes), logging, application performance monitoring, database connectivity, and storage. All that is in addition to the growing number of microservices being deployed.

In a nutshell, there are now greater numbers of larger and more complex Kubernetes container clusters being deployed. It is no longer enough to manage the lifecycle of the containers. It is now necessary to manage the lifecycle of the cluster itself. This is the purpose of a Kubernetes control plane.

Kubernetes control planes comprise a series of functions that manage the health and well-being of the cluster. Common features are:

  • Cluster lifecycle management, including provisioning of clusters, often from templates for common cluster types.
  • Versioning, including updates to Kubernetes itself.
  • Security and Auditing
  • Visibility, Monitoring, and Logging

Kubernetes control planes are policy-driven and automated. This allows operators to focus on governance while the control plane software does the rest. Not only does this reduce errors, but it also allows for faster responses to changes or problems that may arise. This automation is necessary since managing many large multi-site clusters by hand would require large amounts of manpower and, hence, cost.
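
At the heart of that policy-driven automation is a reconciliation loop: compare the declared, desired state of each cluster against its observed state and compute the actions needed to close the gap. Below is a simplified sketch of the idea; the ClusterState shape and the specific policies checked are hypothetical illustrations, not any vendor’s actual API.

```typescript
// Simplified sketch of the reconciliation loop behind a policy-driven
// control plane (ClusterState and the policies are hypothetical).
interface ClusterState {
  name: string;
  kubernetesVersion: string;
  auditLoggingEnabled: boolean;
}

function reconcile(desired: ClusterState, observed: ClusterState): string[] {
  const actions: string[] = [];

  // Versioning policy: move the cluster toward the declared version.
  if (observed.kubernetesVersion !== desired.kubernetesVersion) {
    actions.push(`upgrade ${observed.name} to ${desired.kubernetesVersion}`);
  }

  // Security and auditing policy: audit logging must stay enabled.
  if (desired.auditLoggingEnabled && !observed.auditLoggingEnabled) {
    actions.push(`enable audit logging on ${observed.name}`);
  }

  // The control plane carries out these actions automatically,
  // without operator intervention.
  return actions;
}
```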

Software vendors have stepped up with products to meet this emerging need. In the past year, products that implement a Kubernetes control plane have been announced or deployed by Rancher, Platform9, IBM’s Red Hat division (Advanced Cluster Management), VMware (Tanzu Mission Control), and more. All of these Kubernetes control planes are designed for multi-cloud, hybrid clusters and are packaged either as part of a Kubernetes distribution or as an aftermarket addition to a company’s Kubernetes product.

Kubernetes control planes are a sign of the normalization of container clusters. The growth in both the complexity and scale of container clusters necessitates a management layer that helps DevOps teams stand up and manage clusters more quickly. This is the only way that platform operations can match the speed of Agile development and automated CI/CD toolchains. It is yet another piece of the emerging platform where our modern cloud-native applications will live.

To learn more about this topic, Amalgam Insights recommends the following reports:

Market Landscape – Packaging Code for Microservices

Analyst Insight: Cloud Foundry and Kubernetes: Different Paths to Microservices

Red Hat Acquires CoreOS, Changing the Container Landscape

as well as Tom Petrocelli’s blogs at https://www.amalgaminsights.com/tag/kubernetes/

Looking at Microservices, Containers, and Kubernetes with 2020 Vision

Some years are easier to predict than others. Stability in a market makes tracing the trend line much easier. 2020 looks to be that kind of year for the migration to microservices: stable, with steady progression toward mainstream acceptance.

There is little doubt that IT organizations are moving toward microservices architectures. Microservices, which deconstruct applications into many small parts, remove much of the friction that is common in n-Tier applications when it comes to development velocity. The added resiliency and scalability of microservices in a distributed system are also highly desirable. These attributes promote better business agility, allowing IT to respond to business needs more quickly and with less disruption while helping to ensure that customers have the best experience possible.

Little in this upcoming year seems disruptive or radical; the big changes have already occurred. Instead, this is a year for building out and consolidating; moving past the “what” and “why” and into the “how” and “do”.

Kubernetes will be top of mind to IT in the coming year. From its roots as a humble container orchestrator – one of many in the market – Kubernetes has evolved into a platform for deploying microservices into container clusters. There is more work to do with Kubernetes, especially to help autoscale clusters, but it is now a solid base on which to build modern applications.

No one should delude themselves into thinking that microservices, containers, and Kubernetes are mainstream yet. The vast majority of applications are still based on n-Tier designs deployed to VMs. That’s fine for a lot of applications, but businesses know that it’s not enough going forward. We’ve already seen more traditional companies begin to adopt microservices for at least some portion of their applications. This trend will accelerate in the upcoming year. At some point, microservices and containers will become the default architecture for enterprise applications. That’s a few years from now, but we’re already on the path.

From a vendor perspective, all the biggest companies are now in the Kubernetes market with at least a plain vanilla Kubernetes offering. This includes HPE and Cisco in addition to the companies that have been selling Kubernetes all along, especially IBM/Red Hat, Canonical, Google, AWS, VMware/Pivotal, and Microsoft. The trick for these companies will be to add enough unique value that their offerings don’t appear generic. Leveraging traditional strengths, such as storage for HPE, networking for Cisco, and Java for Red Hat and VMware/Pivotal, is the key to standing out in the market.

The entry of the giants into the Kubernetes space will pose challenges to smaller vendors such as Mirantis and Rancher. With more than 30 Kubernetes vendors in the market, consolidation and loss are inevitable. There’s plenty of value in the smaller firms, but it will be too easy for them to get trampled underfoot.

Expect M&A activity in the Kubernetes space as bigger companies acquihire or round out their portfolios. Kubernetes is now a big vendor market and the market dynamics favor them.

If there is a big danger sign on the horizon, it’s those traditional n-Tier applications that are still in production. At some point, IT will get around to thinking beyond the shiny new greenfield applications and want to migrate the older ones. Since these apps are based on radically different architectures, that won’t be easy. There just aren’t the tools to do this migration well. In short, it’s going to be a lot of work. It’s a hard sell to say that the only choices are either expensive migration projects (on top of all that digital transformation money that’s already been spent) or continuing to support and update applications that no longer meet business needs. Replatforming, or deploying the old parts to the new container platform, will provide less ROI and less value overall. The industry will need another solution.

This may be an opportunity to use all that fancy AI technology that vendors have been investing in to create software to break down an old app into a container cluster. In any event, the migration issue will be a drag on the market in 2020 as IT waits for solutions to a nearly intractable problem.

2020 is the year of the microservice architecture.

Even if that seems too dramatic, it’s not unreasonable to expect that there will be significant growth and acceleration in the deployment of Kubernetes-based microservices applications. The market has already begun the process of maturation as it adapts to the needs of larger, mainstream, corporations with more stringent requirements. The smart move is to follow that trend line.

From #KubeCon, Three Things Happening with the Kubernetes Market

This year’s KubeCon+CloudNativeCon was, to say the least, an experience. Normally sunny San Diego treated conference-goers to torrential downpours. The unusual weather turned the block party event into a bit of a sog. My shoes are still drying out. The record crowds – this year’s attendance was 12,000, up from last year’s 8,000 in Seattle – made navigating the show floor a challenge for many attendees.

Despite the weather and the crowds, this was an exciting KubeCon+CloudNativeCon. On display was the maturation of the Kubernetes and container market. Both the technology and the best practices discussions were less about “what is Kubernetes” and more about “how does this fit into my architecture?” and “how enterprise-ready is this stuff?” This shift from the “what” to the “how” is a sign that Kubernetes is heading quickly to the mainstream. There were other indicators at KubeCon+CloudNativeCon that, to me, show Kubernetes maturing into a real enterprise technology.

First, the makeup of the Kubernetes community is clearly changing. Two years ago, almost every company at KubeCon+CloudNativeCon was some form of digital-forward company like Lyft or a cloud technology vendor such as Google or Red Hat. Now, there are many more traditional companies on both the IT and vendor sides. Vendors such as HPE, Oracle, Intel, and Microsoft, mainstays of technology for the past 30 years, are here in force. Industries like telecommunications (drawn by the promise of edge computing), finance, manufacturing, and retail are much more visible than they were just a short time ago. While microservices and Kubernetes are not yet as widely deployed as more traditional n-Tier architectures and classic middleware, the mainstream is clearly interested.

Another indicator of the changes in the Kubernetes space is the prominence of security in the community. Not only are there more security vendors than ever, but we are seeing more keynote time given to security practices. Security is, of course, a major component of making Kubernetes enterprise-ready. Without solid security practices and technology, Kubernetes will never be acceptable to a broad swath of large to mid-sized businesses. That said, there is still much more that needs to be done with Kubernetes security. The good news is that the community is working on it.

Finally, there is clearly more attention being paid to operating Kubernetes in a production environment. That’s most evident in the proliferation of tracing and logging technology, from both new and older companies, that was on display on the show floor and mainstage. Policy management was also an important area of discussion at the conference. These are all examples of the type of infrastructure that operations teams will need to manage Kubernetes at scale and a sign that the community is thinking seriously about what happens after deployment.

It certainly helps that a lot of the basic issues with Kubernetes have been solved, but there is still more work to do. There are difficult challenges that need attention. How to migrate existing stateful apps originally written in Java and based on n-Tier architectures is still mostly an open question. Storage is another area that needs more innovation, though there’s serious work underway in that space. Despite the need for continued work, the progress seen at KubeCon+CloudNativeCon NA 2019 points to a future where Kubernetes is a major platform for enterprise applications. 2020 will be another pivotal year for Kubernetes, containers, and microservices architectures. It may even be the year of mainstream adoption. We’ll be watching.

Canonical Takes a Third Path to Support New Platforms

We are in the midst of another change-up in the IT world. Every 15 to 20 years there is a radical rethink of the platforms that applications are built upon. During the course of the history of IT, we have moved from batch-oriented, pipelined systems (predominantly written in COBOL) to client-server and n-Tier systems that are the standards of today. These platforms were developed in the last century and designed for last century applications. After years of putting shims into systems to accommodate the scale and diversity of modern applications, IT has just begun to deploy new platforms based on containers and Kubernetes. These new platforms promise greater resiliency and scalability, as well as greater responsiveness to the business.

VMware plus Pivotal Equals Platforms

(Editor’s Note: This week, Tom Petrocelli and Hyoun Park will be blogging and tweeting on key topics at VMworld at a time when multi-cloud management is a key issue for IT departments and Dell is spending billions of dollars. Please follow our blog and our twitter accounts TomPetrocelli, Hyounpark, and AmalgamInsights for more details this week as we cover VMworld!)

On August 22, 2019, VMware announced the acquisition of Pivotal. The term “acquisition” seems a little weird here since both are partly owned by Dell. It’s a bit like Dell buying Dell. Strangeness aside, this is a combination that makes a lot of sense.

For nearly eight years now, the concept of a microservices architecture has been taking shape. Microservices is an architectural idea wherein applications are broken up into many small bits of code – or services – that each provide a limited set of functions and operate independently. Applications are assembled, Lego-like, from component microservices. The advantages of microservices are that different parts of a system can evolve independently, updates are less disruptive, and systems become more resilient because system components are less likely to harm each other. The primary vehicle for microservices is the container (which I’ve covered in my Market Guide: Seven Decision Points When Considering Containers), deployed in clusters to enhance resiliency and more easily scale up resources.
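
To illustrate, here is a minimal sketch of a single microservice: one narrowly scoped function exposed over HTTP. The shipping-rate endpoint, port, and payload are hypothetical; the point is that each service does one job and can be deployed, updated, and scaled independently of the rest of the system.

```typescript
// Minimal sketch of one microservice (a hypothetical shipping-rate
// service; the endpoint, port, and payload are illustrative only).
import * as http from "http";

const server = http.createServer((req, res) => {
  // This service does exactly one job: quote a shipping rate.
  if (req.method === "GET" && req.url === "/rate") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify({ carrier: "ground", cents: 799 }));
  } else {
    res.writeHead(404);
    res.end();
  }
});

// Each such service runs in its own container and scales on its own.
server.listen(8080);
```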

The Kubernetes open-source software has emerged as the major orchestrator for containers and provides a stable base to build microservice platforms. These platforms must deploy not only the code that represents the business logic, but a set of system services, such as network, tracing, logging, and storage, as well. Container cluster platforms are, by nature, complex assortments of many moving parts – hard to build and hard to maintain.

The big problem has been that most container technology has been open-source and deployed piecemeal, leaving forward-looking companies to assemble their own container cluster microservices platforms. Building out and then maintaining these DIY platforms requires continued investment in people and other resources. Most companies either can’t afford or are unwilling to make investments in this amount of engineering talent and training. Subsequently, there are a lot of companies that have been left out of the container platform game.

The big change has been in the emergence of commercial platforms (many of which were discussed in my SmartList Market Guide on Service Mesh and Building Out Microservices Networking), based on open-source projects, that bring to IT everything it needs to deploy container-based microservices. All the cloud companies, especially Google, which was the original home of Kubernetes, and open-source software vendors such as Red Hat (recently acquired by IBM) with their OpenShift platform, have some form of Kubernetes-based platform. There may be as many as two dozen commercial platforms based on Kubernetes today.

This brings us to VMware and Pivotal. Both companies are in the platform business. VMware is still the dominant player in Virtual Machine (VM) hypervisors, which underpin most systems today, and is marketing a Kubernetes distribution. It also recently purchased Bitnami, a company that makes technology for bundling containers for deployment. At the time, I said:

“This is VMware doubling down on software for microservices and container clusters. Prima facie, it looks like a good move.”

Pivotal markets a Kubernetes distribution as well, but it is also one of the major vendors for Cloud Foundry, another platform that runs containers, VMs, and now Kubernetes (which I discuss in my Analyst Insight: Cloud Foundry and Kubernetes: Different Paths to Microservices). The Pivotal portfolio also includes Spring Boot, one of the primary frameworks for building microservices in Java, and an extensive Continuous Integration/Continuous Deployment capability based on BOSH (part of Cloud Foundry), Concourse, and other open-source tools.

Taken together, VMware and Pivotal offer a variety of platforms for newer microservices and legacy VM architectures that will fit the needs of a big swath of large enterprises. This will give them both reach and depth in large enterprise companies and allow their sales teams to sell whichever platform a customer needs at the moment while providing a path to newer architectures. From a product portfolio perspective, VMware plus Pivotal is a massive platform play that will help them compete more effectively against the likes of IBM/Red Hat or the big cloud vendors.

On their own, neither VMware nor Pivotal had the capacity to compete against Red Hat OpenShift, especially now that Red Hat has access to IBM’s customer base and sales force. Together they have a full range of technology to bring to bear as the Fortune 500 moves into microservices. The older architectures are also likely to remain in place, either for legacy reasons or because they just fit the applications they serve. VMware/Pivotal will be in a position to service those companies as well.

VMware could easily have decided to pick up any number of Kubernetes distribution companies such as Rancher or Platform9. None of them would have provided the wide range of platform choices that Pivotal brings to the table. And besides, this keeps it all in the Dell family.

Kubernetes Grows Up – The View from KubeCon EU 2019

Our little Kubernetes is growing up.

By “growing up” I mean it is almost in a state where a mainstream company can consider it fit for production. While there are several factors that act as a drag against mainstream adoption, a lack of completeness has been a major force against Kubernetes’ broader acceptance. Completeness, in this context, means that all the parts of an enterprise platform are available off the shelf and won’t require a major engineering effort on the part of conventional IT departments.

The good news from KubeCon+CloudNativeCon EU 2019 in Barcelona, Spain (May 20-23, 2019) is that the Kubernetes and related communities are zeroing in on that ever-so-important target. There are a number of markers pointing toward mainstream acceptance. Projects are filling out the infrastructure – gaining completeness – and the community is growing.

Project Updates

While Kubernetes may be at the core, there are many supporting projects that are striving to add capabilities to the ecosystem that will result in a more complete platform for microservices. Some of the projects featured in the project updates show the drive for completeness. For example, OpenEBS and Rook are two projects striving to make container storage more enterprise friendly. Updates to both projects were announced at the conference. Storage, like networking, is an area that must be tackled before mainstream IT can seriously consider container microservices platforms based on Kubernetes.

Managing microservices performance and failure is a big part of the ability to deploy containers at scale. For this reason, the announcement that two projects that provide application tracing capabilities, OpenTracing and OpenCensus, were merging into OpenTelemetry is especially important. Ultimately, developers need a unified approach to gathering data for managing container-based applications at scale. Removing duplication of effort and competing agendas will speed up the realization of that vision.

Also announced at KubeCon+CloudNativeCon EU 2019 were updates to Helm and Harbor, two projects that tackle thorny issues of packaging and distributing containers to Kubernetes. These are necessary parts of the process of deploying Kubernetes applications. Securely managing container lifecycles through packaging and repositories is a key component of DevOps support for new container architectures. Forward momentum in these projects is forward movement toward the mainstream.

There were other project updates, including updates to Kubernetes itself and CRI-O. Clearly, the community is filling in the blank spots in container architectures, making Kubernetes a more viable application platform for everyone.

The Community is Growing

Another gauge pointing toward mainstream acceptance is the growth of the community. The bigger the community, the more hands to do the work and the better the chances of achieving feature critical mass. This year in Barcelona, KubeCon+CloudNativeCon EU saw 7,700 attendees, nearly twice last year’s attendance in Copenhagen. In the core Kubernetes project, there are 164K commits and 1.2M comments on GitHub. This speaks to broad involvement in making Kubernetes better. Completeness requires lots of work, and that is more achievable when there are more people involved.

Unfortunately, as Cheryl Hung, Director of Ecosystems at the CNCF, says, only 3% of contributors are women. The alarming lack of diversity in the IT industry shows up even in Kubernetes, despite the high-profile women involved in the conference, such as Janet Kuo of Google. Diversity brings more and different ideas to a project, and it would be great to see the participation of women grow.

Service Mesh Was the Talk of the Town

The number of conversations I had about service mesh was astounding. It’s true that I had released a pair of papers on the topic, one just before KubeCon+CloudNativeCon EU 2019. That may have explained why people wanted to talk to me about it, but not the general buzz. There was service mesh talk in the halls, at lunch, in sessions, and from the mainstage. It’s pretty much what everyone wanted to know about. That’s not surprising, since a service mesh is going to be a vital part of large scale-out microservices applications. What was surprising was that even attendees who were new to Kubernetes were keen to know more. This was a very good omen.

It certainly helped that there was a big service-mesh-related announcement from the mainstage on Tuesday. Microsoft, in conjunction with a host of companies, announced the Service Mesh Interface (SMI). It’s a common API for different vendors’ and projects’ service mesh components. Think of it as a lingua franca of service mesh. There were shout-outs to Linkerd and Solo.io; the latter especially had much to do with creating SMI. The fast maturation of the service mesh segment of the Kubernetes market is another stepping stone toward the completeness necessary for mainstream adoption.

Already Way Too Many Distros

There were a lot of Kubernetes distributions at KubeCon+CloudNativeCon EU 2019. A lot. Really. A lot. While this is a testimony to the growth of Kubernetes as a platform, it’s confusing to IT professionals making choices. Some are managed cloud services; others are distributions for on-premises use or for when you want to install your own on a cloud instance. Here are some of the Kubernetes distros I saw on the expo floor. I’m sure I missed a few:

Microsoft Azure, Google, Digital Ocean, Alibaba, Canonical (Ubuntu), Oracle, IBM, Red Hat, VMware, SUSE, Rancher, Pivotal, Mirantis, and Platform9.

From what I hear, this is a sample, not a comprehensive list. The dark side of this enormous choice is confusion. Choosing is hard when you get beyond a handful of options. Still, only five years into the evolution of Kubernetes, it’s a good sign to see this much commercial support for it.

The Kubernetes and cloud-native architecture is like a teenager: growing rapidly but not quite done. As the industry fills in the blanks and as the communities build better networking, storage, and deployment capabilities, it will go mainstream and become applicable to companies of all sizes and types. Soon. Not yet, but very soon.