
Tom Petrocelli Releases Groundbreaking Technical Guide on Service Mesh

On April 2, 2019, Amalgam Insights Research Fellow Tom Petrocelli published Technical Guide: A Service Mesh Primer, a vital starting point for technical architects and developer teams seeking to understand current trends in microservices and service mesh. The report gives enterprise architects, CTOs, and developer teams the grounding in microservices architecture, service mesh architecture, and the OSI model needed to conceptualize service mesh technologies.

In this report, Amalgam Insights provides context in each of these areas.


Coming Attractions: Groundbreaking Service Mesh Research

In early January, I started researching the service mesh market. To oversimplify, a service mesh provides the network services that enterprise applications need when they are deployed using a microservices architecture. Since most microservices architectures are deployed in containers and, most often, managed and orchestrated with Kubernetes, service mesh technology will have a major impact on the adoption of those technologies.
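
As a rough illustration of how a service mesh attaches itself to Kubernetes workloads, the sketch below uses the official kubernetes Python client to label a namespace for Istio's automatic sidecar injection. Treat it as a minimal sketch under assumptions: Istio is only one of the meshes discussed in this research, and the namespace name is a placeholder.

```python
# Illustrative sketch: enabling Istio's automatic sidecar injection on a
# Kubernetes namespace. Assumes a reachable cluster, the official
# `kubernetes` Python client, and Istio already installed in the cluster.
from kubernetes import client, config

def enable_sidecar_injection(namespace: str = "default") -> None:
    # Load credentials from the local kubeconfig (~/.kube/config).
    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    # Istio's mutating webhook watches for this label; once it is set,
    # every new pod in the namespace gets an Envoy sidecar injected,
    # which is what turns plain microservices into a service mesh.
    patch = {"metadata": {"labels": {"istio-injection": "enabled"}}}
    core_v1.patch_namespace(name=namespace, body=patch)

if __name__ == "__main__":
    enable_sidecar_injection("default")
```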

As I began writing the original paper, I quickly realized that an explanation of service mesh technology was necessary to understand the dynamics of the service mesh market. Creating both a primer on service mesh and a market guide turned out to be too much for one paper; it was unbearably long. Consequently, the paper was split into two: a Technical Guide and a Market Guide.

The Technical Guide is a quick primer on service mesh technology and how it is used to enhance microservices architectures, especially within the context of containers and Kubernetes. The Market Guide outlines the structure of the market for service mesh products and open source projects, discusses many of the major players, and addresses the current Istio versus Linkerd controversy. The latter is actually a non-issue that has taken on more importance than it should have, given the nascence of the market.

The Technical Guide will be released next week, just prior to Cloud Foundry Summit. Even though service mesh companies seem to be focused on Kubernetes, wherever there is a microservices architecture, there will be a service mesh. That is just as true for microservices implemented using Cloud Foundry containers.

The Market Guide will be published roughly a month later, before Red Hat Summit and KubeCon + CloudNativeCon Europe, both of which I will be attending. Most of the vendors discussed in the Market Guide will be present at one or the other conference. If you are attending, read the report beforehand so that you know whom to talk to.

A service mesh is a necessary part of emerging microservices architectures. These papers will hopefully get you started on your journey to deploying one.

Note: Vendors interested in leveraging this research for commercial usage are invited to contact Lisa Lincoln (lisa@amalgaminsights.com).



Network Big Iron f5 Acquires Software Network Vendor NGINX

I woke up last Tuesday (March 12, 2019) to find an interesting announcement in my inbox. NGINX, the software networking company well known for its NGINX web server/load balancer, was being acquired by f5. f5 is best known for its network appliances, which provide network security, load balancing, and other services in data centers.

The deal was described as creating a way to “bridge NetOps to DevOps.” That’s a good way to characterize the value of this acquisition. Networking has begun to evolve, or perhaps devolve, from the data center into the container cluster. Network services that used to be the domain of centralized network devices, especially appliances, may be found in small footprint software that runs in containers, often in a Kubernetes pod. It’s not that centralized network resources don’t have a place – you wouldn’t be able to manage the infrastructure that container clusters run on without them. Instead, both network appliances and containerized network resources, such as a service mesh, will be present in microservices architectures. By combining both types of network capabilities, f5 will be able to sell a spectrum of network appliances and software tailored toward different types of architectures. This includes the emerging microservices architectures that are quickly becoming mainstream. With NGINX, f5 will be well positioned to meet the network needs of today and of the future.

The one odd thing about this acquisition is that f5 already has an in-house project, Aspen Mesh, to commercialize very similar software. Aspen Mesh sells an Istio/Envoy distribution that extends the base features of the open source software. There is considerable overlap between Aspen Mesh and NGINX, at least in terms of capabilities: both provide software that enables a service mesh and delivers services to virtual networks. Sure, NGINX has market share (and brain share), but $670M is a lot of money when you already have something in hand.

NGINX and f5 say that they see the products as complementary, allowing f5 to build a continuum of offerings for different needs and scales. In this regard, I would agree with them. Aspen Mesh and NGINX are addressing the same problems but in different ways. By combining NGINX with Aspen Mesh, f5 can cover more of the market.

Given the vendor support of Istio/Envoy in the market, it’s hard to imagine f5 just dropping Aspen Mesh. At present, f5 plans to operate NGINX separately but that doesn’t mean they won’t combine NGINX with Aspen Mesh in the future. Some form of coexistence is necessary for f5 to leverage all the investments in both brands.

The open source governance question may be a problem. There is nervousness within the NGINX community about its future. NGINX is based on its own open source project, one not controlled by any other vendors. The worry is that the NGINX community will run into the same issues that the Java and MySQL communities did after they were acquired by Oracle, including changes to licensing and disputes over what constituted the open source software versus the enterprise, and hence proprietary, software. f5 will have to reassure the NGINX community or risk a fork of the project or, worse, the community jumping ship to other projects. For Oracle, that led to MariaDB and a new rival to MySQL.

NGINX will give f5 both the opportunity and the technology to address emerging architectures that its current product lines cannot. Aspen Mesh will still need time to grow before it can grab the brain share and revenue that NGINX already has. For a mainstream networking company like f5, this acquisition gets them into the game more quickly, generates revenue immediately, and does so in a manner that is closer to their norm. This makes a lot of sense.

Now that the first acquisition has happened, the big question will be “who are the next sellers and the next buyers?” I would predict that we will see more deals like this one. We will have to wait and see.


At IBM Think, Watson Expands “Anywhere”

At IBM Think in February, IBM made several announcements around the expansion of Watson’s availability and capabilities, framing these announcements as the launch of “Watson Anywhere.” This piece is intended to provide guidance to data analysts, data scientists, and analytic professionals seeking to implement machine learning and artificial intelligence capabilities and evaluating IBM Watson’s AI and machine learning services for their data.

Announcements

IBM declared that Watson is now available “anywhere” – both on-prem and in any cloud configuration, whether private, public, singular, multi-cloud, or a hybrid cloud environment. Data that needs to remain in place for privacy and security reasons can now have Watson microservices act on it where it resides. The obstacle of cloud vendor lock-in can be avoided by simply bringing the code to the data instead of vice versa. This ubiquity is made possible via a connector from IBM Cloud Private for Data that makes these services available via Kubernetes containers. New Watson services that will be available via this connector include Watson Assistant, IBM’s virtual assistant, and Watson OpenScale, an AI operation and automation platform.

Watson OpenScale is an environment for managing AI applications that puts IBM’s Trust and Transparency principles into practice around machine learning models. It builds trust in these models by providing explanations of how said models come to the conclusions that they do, permitting visibility into what’s seen as a “black box” by making their processes auditable and traceable. OpenScale also claims the ability to automatically identify and mitigate bias in models, suggesting new data for model retraining. Finally, OpenScale also provides monitoring capabilities of AI in production, validating ongoing model accuracy and health from a central management console.

Watson Assistant lets organizations build conversational bot interfaces into applications and devices. When interacting with end users, it can perform searches of relevant documentation, ask the user for further clarification, or redirect the user to a person for sufficiently complex queries. Its availability as part of Watson Anywhere permits organizations to implement and run virtual assistants in clouds outside of the IBM Cloud.
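
To make the discussion concrete, here is a minimal sketch of driving Watson Assistant from application code using IBM's ibm-watson Python SDK. The API key, service URL, assistant ID, and version date below are placeholders, and the exact SDK surface may differ by release, so treat this as an illustration rather than a definitive integration.

```python
# Minimal sketch of a conversational exchange with Watson Assistant via the
# ibm-watson Python SDK. All credentials and IDs below are placeholders.
from ibm_watson import AssistantV2
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
assistant = AssistantV2(version="2019-02-28", authenticator=authenticator)
assistant.set_service_url("https://api.us-south.assistant.watson.cloud.ibm.com")

ASSISTANT_ID = "YOUR_ASSISTANT_ID"  # placeholder

# Open a session, send one user utterance, and print the assistant's reply.
session_id = assistant.create_session(
    assistant_id=ASSISTANT_ID).get_result()["session_id"]

response = assistant.message(
    assistant_id=ASSISTANT_ID,
    session_id=session_id,
    input={"message_type": "text", "text": "Where can I reset my password?"},
).get_result()

for reply in response["output"]["generic"]:
    if reply["response_type"] == "text":
        print(reply["text"])

assistant.delete_session(assistant_id=ASSISTANT_ID, session_id=session_id)
```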

These new services join other Watson services currently available via the IBM Cloud Private for Data connector including Watson Studio and Watson Machine Learning, IBM’s programs for creating and deploying machine learning models. Additional Watson services being made available for Watson Anywhere later this year include Watson Knowledge Studio and Watson Natural Language Understanding.

In addition, IBM also announced IBM Business Automation with Watson, a future AI capability that will permit businesses to further automate existing work processes by analyzing patterns in workflows for commonly repeated tasks. Currently, this capability is available via limited early access; general availability is anticipated later in 2019.

Recommendations

Organizations seeking to analyze data “in place” have a new option with Watson services now accessible outside of the IBM Cloud. Data that must remain where it is for security and privacy reasons can now have Watson analytics processes brought to it via a secure container, whether that data resides on-prem or in any cloud, not just the IBM cloud. This opens up Watson to enterprises in regulated industries like finance, government, and healthcare, as well as to departments where governance and auditability are core requirements, such as legal and HR.

With the IBM Cloud Private for Data connector enabling Watson Anywhere, companies now have a net-new reason to consider IBM products and services in their data workflow. While Amazon and Azure dominate the cloud market, Watson’s AI and machine learning tools are generally easier to use out of the box. For companies that have made significant commitments to other cloud providers, Watson Anywhere represents an opportunity to bring more user-friendly data services to their data residing in non-IBM clouds.

Companies concerned about the “explainability” of machine learning models, particularly in regulated industries or for governance purposes, should consider using Watson OpenScale to monitor models in production. Because OpenScale can provide visibility into how models behave and make decisions, concerns about “black box models” can be mitigated with the ability to automatically audit a model, trace a given iteration, and explain how the model determined its outcomes. This transparency boosts the ability for line of business and executive users to understand what the model is doing from a business perspective, and justify subsequent actions based on that model’s output. For a company to depend on data-driven models, those models need to prove themselves trustworthy partners to those driving the business, and explainability bridges the gap between the model math and the business initiatives.
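
OpenScale's internal mechanics are not described here, but the flavor of explanation it promises can be illustrated with a generic sketch: permutation importance, computed below with scikit-learn on a synthetic dataset, shows which inputs actually drive a model's decisions. This is an illustrative stand-in, not the OpenScale implementation, and the dataset and model are invented for the example.

```python
# Generic illustration of model explainability (not the OpenScale product):
# permutation importance ranks features by how much shuffling each one hurts
# the model, giving business users a first-order answer to "what drove this?"
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a business dataset.
X, y = make_classification(n_samples=2000, n_features=6, n_informative=3,
                           random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=42)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]}: {result.importances_mean[idx]:.3f}")
```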

Finally, companies planning for long-term model usage need to consider how they will support model monitoring and maintenance. Longevity is a concern for machine learning models in production, and model drift reflects changes in the underlying data that your company needs to be aware of. How do companies ensure that model performance and accuracy are maintained over the long haul? What parameters determine when a model requires retraining, or should be taken out of production? Consistent monitoring and maintenance of operationalized models is key to their ongoing dependability.
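
As a hedged sketch of what such monitoring could look like in practice, the snippet below compares a model's accuracy on recent labeled traffic against its validation baseline and flags the model for retraining when the gap exceeds a tolerance. The thresholds, window, and data shapes are illustrative assumptions, not a prescribed policy.

```python
# Illustrative drift check: flag a production model for retraining when its
# accuracy on recent labeled data falls too far below the validation baseline.
# The tolerance value is a placeholder assumption.
from dataclasses import dataclass
from typing import Sequence

@dataclass
class DriftReport:
    baseline_accuracy: float
    recent_accuracy: float
    needs_retraining: bool

def check_accuracy_drift(y_true: Sequence[int],
                         y_pred: Sequence[int],
                         baseline_accuracy: float,
                         tolerance: float = 0.05) -> DriftReport:
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    recent_accuracy = correct / len(y_true)
    # Retrain (or pull the model) once live accuracy drops more than
    # `tolerance` below what was measured at validation time.
    return DriftReport(baseline_accuracy, recent_accuracy,
                       needs_retraining=recent_accuracy < baseline_accuracy - tolerance)

if __name__ == "__main__":
    report = check_accuracy_drift(y_true=[1, 0, 1, 1, 0, 1],
                                  y_pred=[1, 0, 0, 1, 0, 0],
                                  baseline_accuracy=0.90)
    print(report)
```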


Data Science and Machine Learning News, November 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Amazon, Anaconda, Cambridge Semantics, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, Elastic, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, SnapLogic, Tableau, Talend, Teradata, TIBCO, Trifacta, TROVE.



Red Hat Hybrid Cloud Management Gets Financial with Cloud Cost Management

Key Stakeholders: CIO, CFO, Accounting Directors and Managers, Procurement Directors and Managers, Telecom Expense Personnel, IT Asset Management Personnel, Cloud Service Managers, Enterprise Architects

Why It Matters: As enterprise cloud infrastructure spend continues to grow 30-40% per year and containerization becomes a top enterprise concern, IT must have the tools and strategy to manage the storage and compute costs associated with both hybrid cloud and container spend. With Cloud Cost Management, Red Hat provides an option for its considerable customer base.

Key Takeaways: Red Hat OpenShift customers seeking to manage the computing costs associated with hybrid cloud and containers should start trialing Cloud Cost Management when it becomes available in 2019. Effective cost management strategies and tools should be considered table stakes for all enterprise-grade technologies.

Amalgam Insights is a top analyst firm in the analysis of IT subscription cost management.

In this context, Red Hat’s intended development of multi-cloud cost management integrated with CloudForms is an exciting announcement for the cloud market. This product, scheduled to come out in early 2019, will allow enterprises working across multiple cloud vendors to support workload-specific cost management, which Amalgam Insights considers a significant advancement in the cloud cost management market.

And this product comes at a time when cloud infrastructure cost management has seen significant investment including VMware’s $500 million purchase of Boston-based CloudHealth Technologies, the 2017 $50 million “Series A” investment in CloudCheckr, investments in this area by leading Telecom and Technology Expense Management vendors such as Tangoe and Calero, and recent acquisitions and launches in this area from the likes of Apptio, BMC, Microsoft, HPE, and Nutanix.

However, the vast majority of these tools currently lack granular management of cloud workloads that can be tracked at a service level and then appropriately cross-charged to a project, department, or location. This capability will grow in importance as application workloads become more nuanced and revenue-driven accounting of IT becomes more important. Amalgam Insights believes that, despite the significant activity in cloud cost management, this market is just starting to reach a basic level of maturity as enterprises continue to increase their cloud infrastructure spend by 40% per year or more and start using multiple cloud vendors to meet a variety of storage, computing, machine learning, application, service, integration, and hybrid infrastructure needs.
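
A hedged sketch of what workload-level cross-charging amounts to: aggregate metered usage records by project (or department, or location) and multiply by the negotiated rates. The record schema and rates below are invented for illustration; real tooling would pull both from the cloud provider's billing exports.

```python
# Illustrative cross-charging sketch: roll metered usage up to the project
# level so cloud spend can be charged back. Schema and rates are assumptions.
from collections import defaultdict
from typing import Dict, List

# Hypothetical per-unit rates (USD): compute per vCPU-hour, storage per GB-month.
RATES = {"compute_vcpu_hours": 0.045, "storage_gb_months": 0.023}

def cross_charge(usage_records: List[Dict]) -> Dict[str, float]:
    """Return the total cost per project from raw usage records."""
    totals: Dict[str, float] = defaultdict(float)
    for record in usage_records:
        rate = RATES[record["metric"]]
        totals[record["project"]] += record["quantity"] * rate
    return dict(totals)

if __name__ == "__main__":
    records = [
        {"project": "checkout-service", "metric": "compute_vcpu_hours", "quantity": 1200},
        {"project": "checkout-service", "metric": "storage_gb_months", "quantity": 500},
        {"project": "analytics", "metric": "compute_vcpu_hours", "quantity": 3400},
    ]
    for project, cost in cross_charge(records).items():
        print(f"{project}: ${cost:,.2f}")
```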

Red Hat Screenshot of Hybrid Cloud Cost Management

As can be seen from the screenshot, Red Hat’s intended Hybrid Cloud Cost Management offering reflects both modern design and support for both cloud spend and container spend. Given the enterprise demand for third-party and hybrid cloud cost management solutions, it makes sense to have an OpenShift-focused cost management solution.

Amalgam Insights has consistently promoted the importance of formalized technology cost management initiatives and their ability to reduce IT cost categories by 30% or more. We believe that Red Hat’s foray into Hybrid Cloud Cost Management has an opportunity to stand out in a crowded field of competitors managing multi-cloud and hybrid cloud spend. Despite the competitive landscape already in play, Red Hat’s focus on the OpenShift platform as a starting point for cost management will be valuable for understanding cloud spend at the container, workload, and microservices levels that are currently poorly understood by IT executives.

My colleague Tom Petrocelli has noted that “I would expect to see more and more development shift to open source until it is the dominant way to develop large scale infrastructure software.” As this shift takes place, the need to manage the financial and operational accounting of these large-scale projects will become a significant IT challenge. Red Hat is demonstrating its awareness of this challenge and has created a solution that should be considered by enterprises that are embracing both Open Source and the cloud as the foundations for their future IT development.

Recommendations

Companies already using OpenShift should look forward to trialing Cloud Cost Management when it comes out in early 2019. This product provides an opportunity to effectively track the storage and compute costs of OpenShift workloads across all relevant infrastructure. As hybrid and multi-cloud management becomes increasingly common, IT organizations will need a centralized capability to track the increasingly complex usage associated with the OpenShift Container Platform.

Cloud Service Management and Technology Expense Management solutions focused on tracking Infrastructure as a Service spend should consider integrating with Red Hat’s Cloud Cost Management solution. Rather than reinvent the wheel, these vendors can take advantage of the work already done by Red Hat to track container spend.

And for Red Hat, Amalgam Insights suggests that Cloud Cost Management become more integrated with CloudForms over time. The most effective expense management practices for complex IT spend categories always combine contracts, inventory, invoices, usage, service orders, service commitments, vendor comparisons, and technology category comparisons. To optimize infrastructure expenses, cloud procurement and expense specialists will increasingly demand this holistic view across the entire lifecycle of services.

Although this Cloud Cost Management capability has room to grow, Amalgam Insights expects the tool to quickly become a mainstay, either as a standalone tool or as integrated inputs within an enterprise’s technology expense or cloud service management solution. As with all things Red Hat, Amalgam Insights expects rapid initial adoption within the Red Hat community in 2019-2020, which will drive down enterprise infrastructure total cost of ownership and increase visibility for the enterprise architects, financial controllers, and accounting managers responsible for IT cost management.


Hanging out with the Cool Oracle Kids

Tom Petrocelli, Amalgam Insights Research Fellow

When I wrote my last article on open source at Oracle, I got some feedback. Much of it was along the lines of “Have you hit your head on something hard recently?” or “You must be living in an alternate dimension.” While the obvious answer to both is “perhaps…”, it has become increasingly obvious that Oracle is trying very hard to be one of the cool open source kids. They have spent money, both on product development and on acquisitions, to build up their open source portfolio. This is what I saw front and center at Oracle OpenWorld.

When many IT professionals think about Oracle, they think about their flagship enterprise database. That’s fair, since Oracle is still the clear leader in industrial-strength databases. They are continuing to evolve the database platform with the Autonomous Database. Oracle is also well known for their enterprise applications, especially ERP and CRM. The Oracle technology and product portfolio, however, is large and extends much further than the database and enterprise application categories. The cloud has given Oracle the opportunity to extend even further into emerging technology such as serverless and blockchain. It was also an opportunity to adopt open source technology across the board.

Open source, for example, is clearly on the minds of Oracle executives. Larry Ellison himself talked briefly about open source in his keynote. That’s a tectonic shift for Oracle. It can no longer be said that it is just a few people inside the company giving lip service to it. Oracle Cloud has embraced Docker containers with the Oracle Container Engine, and Kubernetes with the Oracle Kubernetes Engine. What is remarkable is that they are deploying unforked versions of these technologies. By deploying unforked (i.e., standard) versions of container images and Kubernetes, Oracle is demonstrating that they are not trying to turn these technologies into proprietary Oracle software that cannot be migrated to other cloud services or platforms. Instead, they are betting that large enterprise customers will want to run containers on the Oracle Cloud platform, which emphasizes security and reliability. In addition, they also believe that customers will want more automation to make enterprise cloud infrastructure easier to manage. These are Oracle’s strengths, and they are well suited to enterprise customers with complex applications.

Oracle is also heavily invested in important open source projects. One such project, Fn, aims to develop serverless technology that can be deployed on-premises and in the cloud. What is remarkable is that Fn began as an open source project before commercialization. This differs from some other Oracle open source projects, such as OpenJDK, which grew out of a commercial product, the Oracle Java VM. Fn is also the basis for Oracle Functions, Oracle’s serverless offering. Even here, Oracle is taking an open approach by using the standard, unforked Fn so that Fn functions are not locked into the Oracle Cloud platform. Again, Oracle believes that customers will eventually decide on Oracle Functions because of the reliability and security of its cloud, but it isn’t forcing customers into it.
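
For readers unfamiliar with Fn, a function is just a small handler packaged in a container. The sketch below follows the shape of the Fn project's Python FDK hello-world template; the exact FDK interface should be checked against the project's current documentation, so treat this as an approximation.

```python
# Sketch of an Fn function using the Python FDK, modeled on the project's
# hello-world template. Deployed with the `fn` CLI, the same function runs
# on a local Fn server, on-premises, or in Oracle Functions.
import io
import json

from fdk import response

def handler(ctx, data: io.BytesIO = None):
    name = "world"
    try:
        body = json.loads(data.getvalue())
        name = body.get("name", name)
    except Exception:
        pass  # fall back to the default greeting on an empty or malformed body
    return response.Response(
        ctx,
        response_data=json.dumps({"message": f"Hello, {name}!"}),
        headers={"Content-Type": "application/json"},
    )
```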

OpenJDK is arguably one of the most strategic open source projects that Oracle is involved in. It is the project developing the next generations of the Java language and platform. Oracle has a commercial version of the VM, but it is differentiated through service and support, not additional features. The IT community has a right to be a bit leery of the true openness of OpenJDK, especially given Oracle’s history with the platform, but its approach is strictly open source. Some of the OpenJDK features currently in the pipeline are designed to make Java a more competitive language while still maintaining the concurrency and type-safety features that have made Java the language of choice for secure, performance-oriented enterprise applications. Project Amber, for example, is trying to reduce the amount of code a developer has to type by inferring more from the code itself. The reduction in ceremony alone will make Java a more efficient and modern language. Project Loom, on the other hand, is building out a lightweight concurrency system for those instances where threads are too resource-intensive and OS-level concurrency isn’t necessary.

More than Oracle’s products and contributions to projects, it is clear that the attitudes within the company have changed. Speaking with Oracle executives about open source sounds more like talking to Google or Red Hat. They are not losing the focus on automation, reliability, and security, which is why large enterprises do business with Oracle. They are, instead, trying to make open source fit the enterprise better. This, for Oracle, is the path to success.

As someone who has been in the IT industry a long time, I know that we can be tribal and chauvinistic about companies. Sins of the past and impressions from years ago form our opinions about what companies offer. Thirty years ago, Oracle and Microsoft were the cool kids on the block and IBM was my father’s IT provider. Unfortunately, we miss out on opportunities when we divide companies into the old and the new. It’s time to consider that a company such as Oracle could change and might have embraced the open source movement.


Tom Petrocelli Provides Context for IBM’s Acquisition of Red Hat

Tom Petrocelli, Amalgam Insights Research Fellow

In light of yesterday’s announcement that IBM is planning to acquire Red Hat for $34 billion, we’d like to share with you some of our recent coverage and mentions of Red Hat to provide context for this gargantuan acquisition.

To learn more about the state of Enterprise Free Open Source Software and the state of DevOps, make sure you continue to follow Tom Petrocelli on this website and his Twitter account.


Data Science Platforms News Roundup, September 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Anaconda, Cambridge Semantics, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, Elastic, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta, TROVE.



Oracle Delivers a FOSS Surprise

Tom Petrocelli, Amalgam Insights Research Fellow

An unfortunate side effect of being an industry analyst is that it is easy to become jaded. There is a tendency to fall back into stereotypes about technology and companies. Add to this nearly 35 years in computer technology and it would surprise no one to hear an analyst say, “Been there, done that, got the t-shirt.” Some companies elicit this reaction more than others. Older tech companies with roots in the 80’s or earlier tend to get in a rut and focus on incremental change (so as not to annoy their established customer base) instead of the exciting new trends. This makes it hard to be impressed by them.

Oracle is one of those companies. It has a reputation for being behind the market (cloud, anyone?) and as proprietary as can be. Oracle has also had a difficult time with developers. The controversy over Java APIs (which is really a big-company spat with Google) hasn’t helped that relationship. There are still hard feelings left over from the acquisition of Sun Microsystems (a computer geek favorite) and MySQL, which have sent many innovative developers looking anywhere but Big Red. Oracle’s advocacy of Free and Open Source Software (FOSS) has been indifferent at best. When the FOSS community comes together, one expects to see Red Hat, Google, and even Microsoft and IBM, but never Oracle.

Which is why my recent conversation with Bob Quillin of Oracle came as a complete surprise. It was like a bucket of cold water on a hot day, both shocking and at the same time, refreshing.

Now, it’s important to get some context right up front. Bob came to Oracle via an acquisition, StackEngine. So he and his team’s DNA is more FOSS than Big Red. And, like an infusion of new DNA, the StackEngine crew has succeeded in changing Oracle on some level. They have launched Oracle Kubernetes and Registry Services, which bring a Container Engine, Kubernetes, and a Docker V2-compatible registry to the Oracle Cloud Service. That’s a lot of open source for Oracle.

In addition, Bob talked about how they were helping Oracle customers move to a Cloud Native strategy. Cloud Native almost always means embracing FOSS, since so many of its components are FOSS. Add to the mixture a move into serverless with Fn. Fn is also an open source project (Apache 2.0 licensed), but one that originated at Oracle. That’s not to say there aren’t other Oracle open source projects (Graal, for example), but they aren’t at the very edge of computing like Fn. In this part of the FOSS world, Oracle is leading, not following. Oracle even plans to have a presence at KubeCon + CloudNativeCon 2018 in Seattle this December, an open source-oriented conference run by The Linux Foundation, where it will be a Platinum Sponsor. In the past this would have been almost inconceivable.

The big question is: how will this affect the rest of Oracle? Will this be a side project for Oracle, or will they rewrite the Oracle DNA in the same way that Microsoft has done? Can they find the balance between the legacy business, which is based on high-priced, proprietary software – the software that is paying the bills right now – and the community-run, open source world that is shaping the future of IT? Only time will tell, but there will be a big payoff for IT if it happens. Say what you will about Oracle, they know how to do enterprise software. Security, performance, and operating at scale are Oracle’s strengths, and they are a big reason their customers keep buying from them instead of an open source startup or even AWS. An infusion of that type of knowledge into the FOSS community would help to overcome many of the downsides that IT experiences when trying to implement open source software in large enterprise production environments.

Was I surprised? To say the least. I’ve never had a conversation like this with Oracle. Am I hopeful? A bit. There are forces within companies like Oracle that can crush an initiative like this. As the market continues to shift in the direction of microservices, containers, and open source in general, Oracle risks becoming too out of step with the current generation of developers. Even if FOSS doesn’t directly move the needle on Oracle revenue, it can have a profound effect on how Oracle is viewed by the developer community. If the attitude of people like Bob Quillin becomes pervasive, then younger developers may start to see Oracle as more than just their father’s software company. In my opinion, the future of Oracle may depend on that change in perception.