Oracle Delivers a FOSS Surprise

Tom Petrocelli, Amalgam Insights Research Fellow

An unfortunate side effect of being an industry analyst is that it is easy to become jaded. There is a tendency to fall back on stereotypes about technology and companies. Add to this nearly 35 years in computer technology, and it would surprise no one to hear an analyst say, “Been there, done that, got the t-shirt.” Some companies elicit this reaction more than others. Older tech companies with roots in the ’80s or earlier tend to get in a rut, focusing on incremental change (so as not to annoy their established customer base) instead of exciting new trends. That makes it hard to be impressed by them.

Oracle is one of those companies. It has a reputation for being behind the market (cloud, anyone?) and as proprietary as can be. Oracle has also had a difficult time with developers. The controversy over Java APIs (really a big-company spat with Google) hasn’t helped that relationship. There are still hard feelings left over from the acquisition of Sun Microsystems (a computer geek favorite) and, with it, MySQL, which have left many innovative developers looking anywhere but Big Red. Oracle’s support for Free and Open Source Software (FOSS) has been indifferent at best. When the FOSS community comes together, one expects to see Red Hat, Google, and even Microsoft and IBM, but never Oracle.

Which is why my recent conversation with Bob Quillin of Oracle came as a complete surprise. It was like a bucket of cold water on a hot day: shocking and refreshing at the same time.

Now, it’s important to get some context right up front. Bob came to Oracle via an acquisition, StackEngine, so his team’s DNA is more FOSS than Big Red. And, like an infusion of new DNA, the StackEngine crew has succeeded in changing Oracle on some level. They have launched Oracle Kubernetes and Registry Services, which brings a Container Engine, Kubernetes, and a Docker V2-compatible registry to the Oracle Cloud Service. That’s a lot of open source for Oracle.

In addition, Bob talked about how they were helping Oracle customers move to a Cloud Native strategy. Cloud Native almost always means embracing FOSS, since so many of its components are FOSS. Add to the mixture a move into serverless with Fn. Fn is also an open source project (Apache 2.0 licensed), but one that originated in Oracle. That’s not to say there aren’t other Oracle open source projects (Graal, for example), but those aren’t at the very edge of computing like Fn. In this part of the FOSS world, Oracle is leading, not following. Oracle even plans to have a presence at KubeCon+CloudNativeCon 2018 in Seattle this December, an open source conference run by The Linux Foundation, where it will be a Platinum Sponsor. In the past this would have been almost inconceivable.
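
To make Fn concrete: a function is a small handler packaged as a container. Here is a minimal sketch along the lines of the hello-world example the Fn Project documents for its Python FDK – the greeting payload is purely illustrative:

```python
import io
import json

from fdk import response  # Fn Project's Python Function Development Kit


def handler(ctx, data: io.BytesIO = None):
    """Entry point that Fn invokes once per request."""
    name = "world"
    try:
        body = json.loads(data.getvalue())
        name = body.get("name", name)
    except (ValueError, AttributeError):
        pass  # no JSON body supplied; keep the default greeting
    return response.Response(
        ctx,
        response_data=json.dumps({"message": f"Hello, {name}"}),
        headers={"Content-Type": "application/json"},
    )
```

Deployed with the fn CLI, a handler like this runs as a container on the same infrastructure as everything else in the cluster, which is what ties Fn back to the Kubernetes and registry story above.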

The big question is how this will affect the rest of Oracle. Will this be a side project, or will Oracle rewrite its DNA in the same way that Microsoft has? Can it find the balance between the legacy business of high-priced, proprietary software – the software that is paying the bills right now – and the community-run, open source world that is shaping the future of IT? Only time will tell, but there will be a big payoff for IT if it happens. Say what you will about Oracle, they know how to do enterprise software. Security, performance, and operating at scale are Oracle’s strengths, and a big reason its customers keep buying from it instead of from an open source startup or even AWS. An infusion of that type of knowledge into the FOSS community would help overcome many of the downsides IT experiences when trying to implement open source software in large enterprise production environments.

Was I surprised? To say the least. I’ve never had a conversation like this with Oracle. Am I hopeful? A bit. There are forces within companies like Oracle that can crush an initiative like this. As the market continues to shift toward microservices, containers, and open source in general, Oracle risks becoming too out of step with the current generation of developers. Even if FOSS doesn’t directly move the needle on Oracle revenue, it can have a profound effect on how Oracle is viewed by the developer community. If the attitude of people like Bob Quillin becomes pervasive, then younger developers may start to see Oracle as more than just their father’s software company. In my opinion, the future of Oracle may depend on that change in perception.

Google Grants $9 Million in Google Cloud Platform Credits to Kubernetes Project

Tom Petrocelli, Amalgam Insights Research Fellow

Kubernetes has, in the span of a few short years, become the de facto orchestration software for containers. As recently as two years ago, more than a half-dozen orchestration tools were vying for the top spot; now there is only Kubernetes. Even the Linux Foundation’s other orchestrator project, CloudFoundry Diego, is starting to give way to Kubernetes. Part of the success of Kubernetes can be attributed to the support of Google. Kubernetes emerged out of Google, and Google has continued to bolster the project even as it fell under the auspices of the Linux Foundation’s CNCF.

On August 29, 2018, Google announced that it is giving $9M in Google Cloud Platform (GCP) credits to the CNCF Kubernetes project. Both Google and the CNCF are hailing this as major support, and $9M is a lot of money, even if it comes as credits. However, let’s unpack this announcement a bit more and see what it really means.

Microsoft Loves Linux and FOSS Because of Developers

Tom Petrocelli, Amalgam Insights Research Fellow

For much of the past 30 years, Microsoft was famous for its hostility toward Free and Open Source Software (FOSS). They reserved special disdain for Linux, the Unix-like operating system that first emerged in the 1990s. Linux arrived on the scene just as Microsoft was beginning to batter Unix with Windows NT. The Microsoft leadership at the time, especially Steve Ballmer, viewed Linux as an existential threat. They approached Linux with an “us versus them” mentality that was, at times, rabid.

It’s not news that times have changed and Microsoft with them. Instead of looking to destroy Linux and FOSS, Microsoft CEO Satya Nadella has embraced them.

Microsoft has begun to meld with the FOSS community, creating Linux-Windows combinations that were unthinkable in the Ballmer era.

In just the past few years, Microsoft has backed this up with a string of concrete moves.

Infrastructure as Code Provides Advantages for Proactive Compliance

Tom Petrocelli, Amalgam Insights Research Fellow

Companies struggle with all types of compliance issues. Failure to comply with government regulations, such as Dodd-Frank, EPA rules, or HIPAA, is a significant business risk for many companies. Internally mandated compliance presents problems as well: security and cost control policies are just as vital as other forms of regulation, since they protect the company from reputational, financial, and operational risks.
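
One reason Infrastructure as Code enables proactive compliance is that infrastructure definitions can be checked against policy before anything is deployed. As an illustration only – the resource format and the SSH rule below are hypothetical, not any particular tool’s schema – such a check might look like:

```python
import json

# A hypothetical, much-simplified infrastructure-as-code document.
plan = json.loads("""
{
  "resources": [
    {"type": "security_group_rule", "name": "web", "cidr": "0.0.0.0/0", "port": 443},
    {"type": "security_group_rule", "name": "ssh", "cidr": "0.0.0.0/0", "port": 22}
  ]
}
""")

# Policy: never expose SSH (port 22) to the entire internet.
violations = [
    r["name"]
    for r in plan["resources"]
    if r["type"] == "security_group_rule"
    and r["cidr"] == "0.0.0.0/0"
    and r["port"] == 22
]

if violations:
    raise SystemExit(f"Compliance check failed before deployment: {violations}")
print("All resources pass policy; safe to deploy.")
```

Because the infrastructure definition is code, a check like this can run in the pipeline, and non-compliant infrastructure never reaches production.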

Cloud Vendors Race to Release Continuous Integration and Continuous Deployment Tools

Tom Petrocelli, Amalgam Insights Research Fellow

Development organizations continue to feel increasing pressure to produce better code more quickly. To support that faster-better philosophy, a number of methodologies have emerged that help organizations quickly merge individual code, test it, and deploy it to production. While DevOps is actually a management methodology, it is predicated on an integrated pipeline that drives code smoothly from development to production deployment. To achieve these goals, companies have adopted continuous integration and continuous deployment (CI/CD) toolsets. These tools, from companies such as Atlassian and GitLab, help developers merge individual code into the deployable code bases that make up an application and then push it out to test and production environments.

Cloud vendors have lately been releasing their own CI/CD tools to their customers. In some cases, these are extensions of existing tools, such as Microsoft Visual Studio Team Services on Azure. Google’s recently announced Cloud Build, as well as AWS CodeDeploy and CodePipeline, are CI/CD tools developed specifically for their cloud environments. Cloud CI/CD tools are rarely all-encompassing and often rely on other open source or commercial products, such as Jenkins or Git, to achieve a full CI/CD pipeline.
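
Whatever the vendor, the core loop these tools automate is the same: merge, test, package, deploy. A toy sketch of that loop – the repository, image, and registry names are placeholders:

```python
import subprocess
import sys


def stage(cmd: list[str]) -> None:
    """Run one pipeline stage; abort the whole pipeline if it fails."""
    print(f"--> {' '.join(cmd)}")
    if subprocess.run(cmd).returncode != 0:
        sys.exit(f"Pipeline stopped; stage failed: {' '.join(cmd)}")


# Continuous integration: merge the latest code and prove it still works.
stage(["git", "pull", "--ff-only", "origin", "main"])
stage(["pytest", "tests/"])

# Continuous deployment: package the build and push it toward production.
stage(["docker", "build", "-t", "registry.example.com/myapp:latest", "."])
stage(["docker", "push", "registry.example.com/myapp:latest"])
```

Each vendor’s product wraps this same loop in managed build workers, portal integration, and billing – which is exactly what makes the tools sticky.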

These products represent more than just new entries into an increasingly crowded CI/CD market. They are clearly part of a longer-term strategy by cloud service providers to become so integrated into the DevOps pipeline that moving to a new vendor or adopting a multi-cloud strategy would be much more difficult. Many developers start with a single cloud service provider in order to explore cloud computing and deploy their initial applications. Adopting the cloud vendor’s CI/CD tools embeds that vendor deeply in the development process. The cloud service provider is no longer sitting at the end of the development pipeline; it is integrated into, and vital to, the development process itself. Even where a cloud service provider’s CI/CD tools support hybrid cloud deployments, they are always designed around the cloud vendor’s own offerings. Google Cloud Build and Microsoft Visual Studio Team Services certainly follow this model.

There is danger here for commercial CI/CD vendors outside these cloud platforms. They are now competing with native products integrated into the sales and technical environment of the cloud vendor. Purchasing products from a cloud vendor is as easy as buying anything else from the cloud portal, and customers are immediately aware of the services the cloud vendor offers. No fuss, no muss.

This isn’t a problem for companies committed to a particular cloud service provider. Native tools designed for the primary environment offer better integration, less work, and an ease of use that is hard to achieve with external tools. The cost of these tools is often utility-based and, hence, elastic, scaling with the amount of work flowing through the pipeline. The trend toward native cloud CI/CD tools also helps explain Microsoft’s purchase of GitHub. GitHub, while cloud agnostic, will be much more powerful when completely integrated into Azure – for Microsoft customers anyway.

Building tools that strongly embed a particular cloud vendor into the DevOps pipeline is clearly strategic even if it promotes monoculture. There will be advantages for customers as well as cloud vendors. It remains to be seen if the advantages to customers overcome the inevitable vendor lock-in that the CI/CD tools are meant to create.

Destroying the CEO Myth: Redefining The Power Dynamics of Managing DevOps

Tom Petrocelli, Amalgam Insights Research Fellow

I am constantly asked the question “What does one have to do to implement DevOps?”, or some variant. Most people who ask say they have spent time searching for an answer. The pat answers they encounter are typically either technology-based (“buy these products and achieve DevOps magic”) or management-based, such as “create a DevOps culture.” Both are vague, flippant, and decidedly unhelpful.

My response is twofold. First, technology and tools follow management and culture. Tools do not make culture, and a technology solution without management change is a waste. So, change the culture and management first. Unfortunately, that’s the hard part. When companies talk about changing culture for DevOps, they often mean implementing multifunctional teams, or something less than that. Throwing disparate disciplines into an unregulated melting pot doesn’t help. These teams can end up as dysfunctional as any other management or project structure. Team members will bicker over implementation and try to protect their hard-won territory.

As the old adage goes, “everything old is new again,” and so-called DevOps culture is no different. Multi-functional teams are just a flavor of matrix management, which has been tried over and over for years. They suffer from the same problems: team members have to serve two masters, and managers act like a group of dogs with one tree among them. Trying to please both the project leader and their functional management creates inherent conflicts.

Another view of creating DevOps culture is what I think of as the “CEO Buy-in Approach.” Whenever there is new thinking in IT, there always seems to be advocacy for a top-down approach that starts with the CEO or CIO “buying in” to the concept. After that, magic happens and everyone holds hands and sings together. Except that they don’t. This approach is heavy-handed and reflects an unrealistic view of how companies, especially large companies, operate. If simply ordering people to work well together was all it took, there would be no dysfunctional companies or departments.

A variation on this theme advocates picking a leader (or two, if you have two-in-the-box leadership) to make everyone work together happily. Setting aside the difficulty of finding people with broad enough experience to lead multi-disciplinary teams, this leads to what I have always called “The Product Manager Problem.”

The problem that all new product managers face is the realization that they have all the responsibility and none of the power to accomplish their mission. That’s because responsibility for the product concentrates in one person, the product manager, while all other managers can diffuse their responsibility across many products or functions.

Having a single leader responsible for making multi-functional teams work creates a lack of individual accountability. The leader, not the team, is held accountable for the project, while the individual team members are still accountable to their own managers. This may work when the managers and project team leaders all have great working relationships, but in that case, you don’t need a special DevOps structure. Instead, a model that creates a separate project team leader or leaders enables team dysfunction and the ability to maintain silos through lack of direct accountability. You see this when a Scrum Master, Product Owner, or Release Manager has all the responsibility for a project.

The typical response to this criticism of multi-functional teams (and the no-power Product Manager) is that leaders should be able to influence and cajole the team despite having no real authority. This is ridiculous, and it refuses to accept that individual managers and the people who work for them are motivated to maintain their own power. Making the boss look good works well when the boss is signing your evaluation and deciding on your raise. Sure, project and team leaders can be made part of the evaluation process but, really, who has the real power here? The functional manager in control of many people and resources, or the leader of one small team?

One potential solution to the DevOps cultural conundrum is collective responsibility. In this scheme, all team members benefit from the project’s success or are hurt by its failure. Think of this as the combined arms combat team model. In the Army, multi-functional combined arms teams are put together for specific missions, and the team is held responsible for the overall mission – collectively and individually. While the upper echelons hold the combined arms combat team responsible for the mission, the team leader has the ability to hold individuals accountable.

Can anyone imagine an Army or Marine leader being let off the hook for mission failure because one of their people didn’t perform? Of course not, but they also have mechanisms for holding individual soldiers accountable for their performance.

In this model, DevOps teams would collectively be held responsible for on-time completion of the entire project, as would the entire management chain. Individual team members would have much of their evaluation based on this, and the team leader would have the power to remediate nonperformance, up to and including removing a team member who is not doing their job (i.e., firing them). The leader would also need the ability to train up one specialty to fill the role of another if a person performing a role wasn’t up to snuff or had to be removed. It would still be up to the “chain of command” to provide a reasonable mission with appropriate resources.

Ultimately, anyone in the team could rise up and lead this or another team, no matter their specialty. There would be nothing holding back an operations specialist from becoming the Scrum Master; if they could learn the job, they could get it. The very idea of a specialist would lose power, allowing team members to develop talents no matter their job title.

I worked in this model years ago, and it was successful and rewarding. Everyone helped everyone else and had a stake in the outcome. People learned each other’s jobs so they could help out when necessary, learning new skills in the process. It wasn’t called DevOps, but that’s how it operated. It’s not a radical idea, but there is a hitch – silo managers would lose power or even cease to exist. There would be no Development Manager or Security Manager. Team members would win, the company would win, but not everyone would feel like this model works for them.

This doesn’t mean that all silos would go away. There will still be operations and security functions that maintain and monitor systems. The security and ops people who work on development projects just wouldn’t report to them. They would be responsible only to the development team, but with full power (and resources) to make changes in production systems.

Without collective responsibility, free of influence from functional managers, DevOps teams will never be more than a fresh coat of paint on rotting wood. It will look pretty, but underneath, it’s crumbling.

Microsoft Azure Plus Informatica Equals Cloud Convenience

Tom Petrocelli, Amalgam Insights Research Fellow

Two weeks ago (May 21, 2018), at Informatica World 2018, Informatica announced a new phase in its partnership with Microsoft. Slated for release in the second half of 2018, Informatica’s Integration Platform as a Service, or IPaaS, will be available on Microsoft Azure as a native service. This is a different arrangement than Informatica has with other cloud vendors such as Google or Amazon AWS; in those cases, Informatica is more of an engineering partner, developing connectors for their on-premises and cloud offerings. Instead, Informatica IPaaS will be available from the Azure Portal and integrated with other Azure services, especially Azure SQL Database, Microsoft’s cloud database, and Azure SQL Data Warehouse.

Monitoring Containers: What’s Inside YOUR Cluster?

Tom Petrocelli, Amalgam Insights Research Fellow

It’s not news that there is a lot of buzz around containers. As companies begin to widely deploy microservices architectures, containers are the obvious choice with which to implement them. As companies deploy container clusters into production, however, an issue has to be dealt with immediately: container architectures have a lot of moving parts. The whole point of microservices is to break apart monolithic components into smaller services. This means that what was once a big process running on a resource-rich server is now multiple processes spread across one or many servers. On top of the architecture change, a container cluster usually encompasses a variety of containers that are not application code. These include security, load balancing, network management, web servers, etc. Entire frameworks, such as NGINX Unit 1.0, may be deployed as infrastructure for the cluster. Services that used to be centralized in a network are now incorporated into the application itself as part of the container network.

Because an “application” is now really a collection of smaller services running in a virtual network, there’s a lot more that can go wrong. The more containers, the more opportunities for misbehaving components. For example:

  • Network issues. No matter how the network is actually implemented, there are opportunities for typical network problems to emerge, including deadlocked communication and slow connections. Instead of being confined to monolithic network appliances, these problems are now distributed throughout a number of local container clusters.
  • Apps that are slow and make everything else slower. Poor performance of a critical component in the cluster can drag down overall performance. With microservices, the entire app can be waiting on a service that is not responding quickly.
  • Containers that are dying and respawning. A container can crash, which may cause an orchestrator such as Kubernetes to respawn it. A badly behaving container may do this multiple times.

These are just a few examples of the types of problems a container cluster can have that negatively affect a production system. None of these are new to applications in general; applications and services can fail, lock up, or slow down in other architectures too. There are just many more parts in a container cluster, creating more opportunities for problems to occur. In addition, typical application monitoring tools aren’t necessarily designed for container clusters. There are events that traditional application monitoring will miss, especially issues with the containers and Kubernetes themselves.
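
Container-specific signals often surface only through the orchestrator’s own API. As a sketch using the community Python client for Kubernetes – the namespace and restart threshold here are illustrative – the crash-and-respawn pattern above shows up as a per-container restart count:

```python
from kubernetes import client, config  # pip install kubernetes

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
v1 = client.CoreV1Api()

# Flag containers that Kubernetes has had to respawn repeatedly.
for pod in v1.list_namespaced_pod("default").items:
    for status in pod.status.container_statuses or []:
        if status.restart_count > 3:  # illustrative threshold
            print(f"{pod.metadata.name}/{status.name}: "
                  f"{status.restart_count} restarts, ready={status.ready}")
```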

To combat these issues, a generation of products and open source projects is emerging that is retrofitted or purpose-built for container clusters. In some cases, app monitoring has been extended to include containers (New Relic comes to mind). New companies, such as LightStep, have also entered the market for application monitoring, but with containers in mind from the outset. Just as exciting are the open source projects that are gaining steam. Prometheus (application monitoring), OpenTracing (network tracing), and Jaeger (transaction tracing) are some of the open source projects that help gather data about the functioning of a cluster.
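
As a sketch of the Prometheus approach – the metric names here are invented for illustration – an application exposes its own counters and latency histograms over HTTP, and the Prometheus server scrapes them:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Requests handled by this service")
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")


@LATENCY.time()  # record how long each call takes
def handle_request() -> None:
    REQUESTS.inc()
    time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work


if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://<container>:8000/metrics
    while True:
        handle_request()
```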

What makes these projects and products interesting is that they place monitoring components in the clusters, close to the applications’ components, and take advantage of container and Kubernetes APIs. This placement helps sysops get a more complete view of all the parts and interactions of the container cluster. Information that is unique to containers and Kubernetes is available alongside traditional application and network monitoring data.

As IT departments start to roll scalable container clusters into production, knowing what is happening within them is essential. Thankfully, the ecosystem for monitoring is evolving quickly, driven equally by companies and open source communities.

The Software Abstraction Disconnect is Silly

Tom Petrocelli, Amalgam Insights Research Fellow

Over the past two weeks, I’ve been to two conferences run by open source communities: first the CloudFoundry Summit in Boston, followed by KubeCon+CloudNativeCon Europe 2018 in Copenhagen. At both, I found passionate and vibrant communities of sysops, developers, and companies. For those unfamiliar with CloudFoundry and Kubernetes, they are open source technologies that abstract software infrastructure to make it easier for developers and sysops to deliver applications quickly.

Both serve similar communities and have generally similar goals. There is some overlap – CloudFoundry has its own container and container orchestration capability – but the two technologies are mostly complementary. It is possible, for example, to deploy CloudFoundry as a Kubernetes cluster and to use CloudFoundry to deploy Kubernetes; I met with IT professionals who are doing one or both of these. The same is true for OpenStack and CloudFoundry (and Kubernetes, for that matter). OpenStack is used to abstract the hardware infrastructure, in effect creating a cloud within a data center. It is a tool sysops use to provision hardware as easily scalable resources, creating a private cloud. So, like CloudFoundry does for software, OpenStack helps manage resources more easily so that a sysop doesn’t have to do everything by hand. CloudFoundry and OpenStack are clearly complementary: sysops use OpenStack to create resources in the form of a private cloud, and developers then use CloudFoundry to pull together private and public cloud resources into a platform they deploy applications to. Kubernetes can be found in any of those places.

Fake News, Fake Controversies

Why, then, is there this constant tension between the communities and adopters of these technologies? It’s as if carpenters had hammer people and saw people who argued over which was better. According to my carpenter friends, they don’t. The foundations and vendors avoid this type of talk, but these kinds of discussions are happening at the practitioner and contributor level all the time. During KubeCon+CloudNativeCon Europe 2018, I saw a number of tweets that, in essence, asked: “Why is Cloud Foundry Executive Director Abby Kearns speaking at KubeCon?” They questioned what one had to do with the other. Why not question what peanut butter and jelly have to do with each other?

Since each of these open source projects (and the products based on them) has a different place in a modern hybrid cloud infrastructure, how is it that very smart people are being so short-sighted? Clearly, there is a problem in these communities that limits their point of view. One theory lies in what it takes to proselytize these projects within an organization and the wider community. To put it succinctly: to get corporate buy-in and widespread adoption, community members have to become strongly focused on their specific project – so focused that some put on blinders and can no longer see the big picture. In fact, in order to sell the world on something that seems radical at first, you trade real vision for tunnel vision.

People become invested in what they do, and that’s good for these types of community-developed technologies. They require a commitment to a project that can’t be driven by any one company and may not pan out. It turns toxic when the separate communities become so ensconced in their own little corner of the tech world that they can’t see the big picture. The very nature of these projects defies an overriding authority that demands that everyone get along, so they don’t always.

It’s time to get some perspective, to see the big picture. We have an embarrassment of technology abstraction riches. It’s time to look up from individual projects and see the wider world. Your organizations will love you for it.

KubeCon+CloudNativeCon Europe 2018 Demonstrates The Breadth and Width of Kubernetes


Standing in the main expo hall of KubeCon+CloudNativeCon Europe 2018 in Copenhagen, the richness of the Kubernetes ecosystem is readily apparent. There are booths everywhere, addressing all the infrastructure needs of an enterprise cluster. There are meetings everywhere for the open source projects that make up the Kubernetes and Cloud Native base of technology. The keynotes are full. What was a 500-person conference just a few years ago is now a 4,300-person conference, even though it’s not in one of the hotbeds of American technology such as San Francisco or New York City.

What is amazing is how much Kubernetes has grown in such a short amount of time. It was only a little more than a year ago that Docker released its Kubernetes competitor, Swarm. While Swarm still exists, Docker also supports, and arguably is betting its future on, Kubernetes.

Kubernetes came out of Google, but that doesn’t really explain why it expanded like the early universe after the Big Bang. Google is not the market leader in the cloud space – it’s one of the top vendors but not the top vendor – and wouldn’t on its own have provided enough market pull to drive the Kubernetes engine this hot. Google is also not a major enterprise infrastructure software vendor the way IBM, Microsoft, or even Red Hat and Canonical are.

Kubernetes benefited from the first-mover effect. It was early to market with container orchestration, fully open source, and extensively tested in Google’s own environment. Docker Swarm, on the other hand, was too closely tied to Docker, the company, to appease the open source gods.

Now, Kubernetes finds itself like a new college graduate: all grown up but needing to prepare for the real world. The basics are in place and it’s mature, but there is an enormous amount of refinement needed, and holes to be filled, before it can be a common part of every enterprise software infrastructure. KubeCon+CloudNativeCon shows that this work is well underway. The focus now is on security, monitoring, network improvement, and scalability; there doesn’t seem to be much concern about stability or basic functionality.

Kubernetes has eaten the container world and didn’t get indigestion. That’s rare and wonderful.