Posted on

Tom Petrocelli’s Retirement Message to All of You

Well, best to rip off the band-aid. 

I’m retiring at the end of the year. That’s right, on January 1, 2021 I will be officially and joyfully retired from the IT industry. No more conferences, papers, designs, or coding unless I want to. Truth be told, I’m still pretty young to retire. Some blame has to be laid at the feet of the pandemic. Being in the “trend” industry also sometimes makes you aware of negative changes coming up. The pandemic is driving some of those, including tighter budgets, which will just make everything harder. Many aspects of my job that I like, especially going to tech conferences, will be gone for a while or maybe forever.

I can’t blame it all on the pandemic though. Some of it is just demographics. Ours is a youthful industry with a median age somewhere in the early-to-mid 40s. To be honest, I’m getting tired of being the oldest, or one of the oldest, people in the room. It’s not as if I’m personally treated as an old person. In fact, I’m mostly treated as younger than I am, which means a certain comfort making “old man” jokes around me. No one thinks that I will take offense at the ageism, I suppose. It’s not really “offense” as much as it’s irritation.

There will be a good number of things I will miss. I really love technology and love being among people who love it as much as I do. What I will miss the most is the people I’ve come to know throughout the years. It’s a bit sad that I can’t say goodbye in person to most of them. I will especially miss the team here at Amalgam Insights. Working with Hyoun, Lisa, and everyone else has been a joy. Thanks for that, all of you.

My career has spanned a bit over 36 years (which may surprise some of you… I hope) and changes rarely experienced in any industry. When I started fresh from college in 1984, personal computers were new, and the majority of computing was still on the mainframes my Dad operated. No one could even imagine walking around with orders of magnitude more computing power in our pockets. So much has changed. 

If you will indulge me, I would like to present a little parting analysis. Here is “What has changed during my career”.

  1. When I started, mainframes were still the dominant form of computing. Now they are the dinosaur form of computing. Devices of all kinds wander the IT landscape, but personal computers and servers still dominate the business world. How long before we realize that cyberpunk goal of computers embedded in our heads? Sooner than I would like.
  2. At the beginning of my career, the most common way to access a remote computer was a 300 baud modem. The serial lines that terminals used to speak to the mainframes and minicomputers of the time ran at similar speeds. The bandwidth of those devices was roughly 0.0003 Mbps. Now, a home connection to an ISP is 20 to 50 Mbps or more and a corporate desktop can expect a 1 Gbps connection. That’s more than three million times what was common in the 80s.
  3. Data storage has gotten incredibly cheap compared to the 1980s. The first 10 MB hard drive I purchased for a $5,000 PC cost almost US$1,000 in 1985 dollars. For a tenth of that price I can now order a 4 TB hard drive (and have it delivered the next day). Adjusted for inflation, that $1,000 hard drive cost roughly $2,500 in 2020 dollars, or about 25 times what the modern 4 TB drive costs.
  4. Along with mainframes, monolithic software has disappeared from the back end. Instead, client-server computing has given way to n-Tier as the main software platform. Not for long though. Distributed computing is in the process of taking off. It’s funny. At the beginning of my career I wrote code for distributed systems, which was an oddity back then. Now, after more than 30 years it’s becoming the norm. Kind of like AI.
  5. Speaking of AI, artificial intelligence was little more than science fiction. Even impressive AI was more about functions like handwriting recognition, which was created at my alma mater, the University at Buffalo, for the post office. Nothing like we see today. We are still, thankfully, decades or maybe centuries from real machine cognition. I’ll probably be dead before we mere humans need to bow to our robot overlords. 
  6. When I began my career, it was very male and white. My first manager was a woman and we had two other women software engineers in our group. This was as weird as a pink polka-dotted rhinoceros walking through the break room. Now, the IT industry is… still very male and white. There are more women, people with disabilities, and people of color than there were then, but not quite the progress I had hoped for.
  7. IBM was, at that time, the dominant player in the computer industry. Companies such as Oracle and Cisco were just getting started, Microsoft was still basically a garage operation, and Intel was mostly making calculator chips. Now, IBM struggles to stay alive; Cisco, Oracle, Intel, and Microsoft are the established players in the industry; and Amazon, an online store, is at the top of the most important trend in computing in the last 20 years: cloud computing. So many companies have come and gone, I don’t even bother to keep track.
  8. In the 1980s, the computer industry was almost entirely American, with a few European and Japanese companies in the market. Now, it’s still mostly American but for the first time since the dawn of the computer age, there is a serious contender: China. I don’t think they will dominate the industry the way the US has, but they will be a clear and powerful number two in the years to come. The EU is also showing many signs of innovation in the software industry.
  9. At the start of my career, you still needed paper encyclopedias. Within 10 years, you could get vast amounts of knowledge on CDs. Today, all the world’s data is available at our fingertips. I doubt young people today can even imagine what it was like before the Internet gave us access to vast amounts of data in an instant. To them, it would be like living in a world where state-of-the-art data storage is a clay tablet with cuneiform writing on it.
  10. What we wore to work has changed dramatically. When I started my career, we were expected to wear business dress. That was a jacket and tie with dress slacks for men, and a dress or power suit for women. In the 90s that shifted to business casual. Polo shirts and khakis filled up our closets. Before the pandemic, casual became proper office attire with t-shirts and jeans acceptable. At the start of my career, dressing like that at work could get you fired. Post pandemic, pajamas and sweatpants seem to be the new norm, unless you are on a Zoom call. Even so, pants are becoming optional.
  11. Office communications have also changed dramatically. For eons the way to communicate with co-workers was “the memo.” You wrote a note in longhand on paper and handed it to a secretary who typed it up. If there was more than one recipient, the secretary would duplicate it with a Xerox machine and place it in everyone’s mailboxes. You had to check your mailbox every day to make sure that you didn’t have any memos. It was slow and the secretaries knew everyone’s business. We still have vestiges of this old system in our email systems. CC stands for carbon copy, which was a way of duplicating a memo. In some companies, everyone on the “To:” list received a freshly typed copy while the CC list received a copy that used carbon paper and a duplicating machine. As much as you all might hate email, it is so much better (and faster) than the old ways of communicating.
  12. When I started my first job, I became the second member of my immediate family that was in the IT industry. My Dad was an operations manager in IBM shops. Today, there are still two members of our immediate family that are computer geeks. My son is also a software developer. He will have to carry the torch for the Petrocelli computer clan. No pressure though…
  13. Remote work? Ha! Yeah, no. Not until the 90s, and even then, it was supplementary to my go-to-the-office job. I did work out of my house during one of my startups, but I was only 10 minutes from my partner. My first truly remote job was in 2000 and it was very hard to do. This was before residential broadband and smartphones. Now, it’s so easy to do with lots of bandwidth to my house, cheap networking, Slack, and cloud services to make it easy to stay connected. Unfortunately, not everyone has this infrastructure nor the technical know-how to deal with network issues. We’ve come a long way but not far enough, as many of you have recently discovered.

So, goodbye my audience, my coworkers, and especially my friends. Hopefully, the universe will conspire to have us meet again. In the meantime, it’s time for me to devote more time to charity, ministry, and just plain fun. What can I say? It’s been an amazing ride. See ya!

(Editor’s Note: It has been a privilege and an honor to work with Tom over the past few years. Tom has always been on the bucket list of analysts I wanted to work with in my analyst career and I’m glad I had the chance to do so. Please wish Tom well in his next chapter! – Hyoun)

Posted on

Why Babelfish for Aurora PostgreSQL is a Savage and Aggressive Announcement by AWS

On December 1st at Amazon re:Invent, Amazon announced its plans to open source Babelfish for PostgreSQL in Q1 of 2021 under the Apache 2.0 license. Babelfish for PostgreSQL is a service that allows PostgreSQL databases to support SQL Server requests and communication without requiring schema rewrites or custom SQL.

As those of you who work with data know, this is an obvious shot across the bow by Amazon to make it easier than ever to migrate away from SQL Server and towards PostgreSQL. Amazon is targeting Microsoft in yet another attempt to push database migration.

Over my 25 years in tech (and beyond), there have been many, many attempts to push database migration, and the vast majority have failed. Nothing in IT has the gravitational pull of the enterprise database, mostly because the potential operational and cost savings of migration have almost never warranted the business risks.

So, what makes Babelfish for PostgreSQL different? PostgreSQL is more flexible than traditional relational databases in managing geospatial data and is relatively popular, placing fourth on the DB-Engines ranking as of December 2, 2020. So, the demand to use PostgreSQL as a transactional database fundamentally exists at a grassroots level.

In addition, the need to create and store data is continuing to grow exponentially. There is no longer a “single source of truth” as there once was in the days of monolithic enterprise applications. Today, the “truth” is distributed, multi-faceted, and rapidly changing based on new data and context, which is often better set up in new or emerging databases rather than retrofitted into an existing legacy database tool and schema.

The aspect that I think is fundamentally most important is that Babelfish for PostgreSQL allows PostgreSQL to understand SQL Server’s proprietary T-SQL. This removes the need to rewrite schemas and code for the applications that are linked to SQL Server prior to migration.
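
To make that point concrete, here is a minimal sketch of what "no rewrite" looks like in practice, assuming a Babelfish-enabled Aurora PostgreSQL cluster listening on its TDS endpoint (port 1433 by default) and a standard SQL Server ODBC driver installed locally. The endpoint, database, table, and credentials below are placeholders, not anything announced by AWS; the idea is simply that existing SQL Server client code and T-SQL run unchanged.

```python
# Hypothetical example: an unmodified SQL Server client talking to Babelfish.
# The endpoint, database, table, user, and password below are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=my-babelfish-cluster.cluster-xyz.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=orders_db;UID=app_user;PWD=app_password"
)
cursor = conn.cursor()

# Plain T-SQL, exactly as it would be sent to SQL Server.
cursor.execute("""
    SELECT TOP 5 order_id, customer_name, order_total
    FROM dbo.orders
    ORDER BY order_total DESC
""")
for row in cursor.fetchall():
    print(row.order_id, row.customer_name, row.order_total)
conn.close()
```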

And it doesn’t hurt that the PostgreSQL community behind this open source project has traditionally been both open and not dominated by any one vendor. So, although this project will help Amazon, Amazon will not be driving the majority of the project or supplying a majority of its contributors.

My biggest caveat is that Babelfish is still a work in progress. For now, it’s an appropriate tool for standard transactional database use cases, but you will want to closely check data types. And if you have a specialized industry vertical or use case associated with the application, you may need an industry-specific contributor to help with developing Babelfish for your migration.

As for the value, there is both operational value and financial value. From an operational perspective, PostgreSQL is typically easier to manage than SQL Server and provides more flexibility to migrate and host the database based on your preferences. There is also an obvious cost benefit: removing the inherent license cost of SQL Server will likely cut the cost of the database itself by 60%, give or take, on Amazon Web Services. For companies that are rapidly spinning up services and creating data, this can be a significant saving over time.

For now, I think the best move is to start looking at the preview of Babelfish on Amazon Aurora to get a feel for the data translations and transmissions, since Babelfish for PostgreSQL likely won’t be open sourced for another couple of months. This will allow you to measure the maturity of Babelfish against your current and rapidly growing databases. Given the likely gaps that exist in Babelfish at the moment, the best initial use cases for this tool are databases where fixed text values make up the majority of the data being transferred.

As an analyst, I believe this announcement is one of the few in my lifetime that will result in a significant migration of relational database hosting. I’m not predicting the death of SQL Server, by any means, and this tool is really best suited for smaller transactional databases (terabyte-scale and below) at this point. (Please don’t think of this as a potential tool for your SQL Server data warehouse at this point!)

But the concept, the proposed execution, and the value proposition of Babelfish all line up in a way that is client and customer-focused, rather than a heavy-handed attempt to force migration for vendor-related revenue increases.

Posted on

Underspecification, Deep Evidential Regression, and Protein Folding: Three Big Discoveries in Machine Learning

This past month has been a banner month for machine learning, as three key reports have come out that change the way the average lay person should think about machine learning. Two of these papers are about conducting machine learning while considering underspecification and using deep evidential regression to estimate uncertainty. The third report is about a stunning result in machine learning’s role in improving protein folding.

The first report, titled Underspecification Presents Challenges for Credibility in Modern Machine Learning, was written by a team of 40 Google researchers. Behind the title is the basic problem that different predictors can lead to nearly identical results in a testing environment but provide vastly different results in a production environment. It can be easy to simply train a model or to optimize a model to provide a strong initial fit. However, savvy machine learning analysts and developers will realize that their models need to be aligned not only to good results, but to the full context of the environment, language, risk profile, and other aspects of the problem in question.

The paper suggests conducting additional real-world stress tests for models that may seem similar and to understand the full scope of requirements associated with the model in question. As with much of the data world, the key for avoiding underspecification seems to come back to strong due diligence and robust testing rather than simply trusting the numbers.
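
As a toy illustration of the problem (my own sketch, not an example from the paper), here several models that differ only in their random seed score essentially identically on a held-out test set, yet can diverge once the input distribution shifts. The dataset and the simulated shift are synthetic placeholders.

```python
# Toy illustration of underspecification: models that look identical on the
# test set can behave differently once the data distribution shifts.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 4000
signal = rng.normal(size=n)
# Two redundant features carry the same signal; the label depends on that signal.
X = np.column_stack([signal + 0.1 * rng.normal(size=n),
                     signal + 0.1 * rng.normal(size=n)])
y = (signal > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Simulated deployment shift: the second feature becomes pure noise.
X_shifted = X_test.copy()
X_shifted[:, 1] = rng.normal(size=len(X_shifted))

for seed in range(3):
    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                          random_state=seed).fit(X_train, y_train)
    print(f"seed={seed}  test acc={model.score(X_test, y_test):.3f}  "
          f"shifted acc={model.score(X_shifted, y_test):.3f}")
```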

The second report is Deep Evidential Regression, written by a team of MIT and Harvard authors, who describe their approach as follows:

In this paper, we propose a novel method for training non-Bayesian NNs to estimate a continuous target as well as its associated evidence in order to learn both aleatoric and epistemic uncertainty. We accomplish this by placing evidential priors over the original Gaussian likelihood function and training the NN to infer the hyperparameters of the evidential distribution

http://www.mit.edu/~amini/pubs/pdf/deep-evidential-regression.pdf

From a practical perspective, this method provides a relatively simple way to understand how “uncertain” your neural net is compared to the reality that it is trying to reflect. This paper moves beyond the standard measures of variance and accuracy to start trying to understand how confident we can be in the models being created. From my perspective, this concept couples well with the problem of underspecification. Together, I believe these two papers will help data scientists go a long way towards cleaning up models that look superficially good, but fail to reflect real world results.
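
For intuition, here is a minimal sketch of how those uncertainties fall out of the evidential outputs, assuming a network head that emits the four Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta) described in the paper. The array of predictions below is a made-up placeholder rather than the output of a trained model.

```python
# Sketch: turning evidential regression outputs into uncertainty estimates.
# evidential_output is a placeholder for what a trained network head would emit.
import numpy as np

# Columns: gamma (predicted mean), nu, alpha, beta of the Normal-Inverse-Gamma prior.
evidential_output = np.array([
    [2.1, 5.0, 3.0, 0.8],   # confident prediction (plenty of evidence)
    [1.9, 0.2, 1.2, 1.5],   # uncertain prediction (little evidence)
])
gamma, nu, alpha, beta = evidential_output.T

prediction = gamma                        # E[mu]
aleatoric = beta / (alpha - 1.0)          # E[sigma^2], noise inherent in the data
epistemic = beta / (nu * (alpha - 1.0))   # Var[mu], uncertainty from lack of evidence

for p, a, e in zip(prediction, aleatoric, epistemic):
    print(f"prediction={p:.2f}  aleatoric={a:.2f}  epistemic={e:.2f}")
```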

Finally, I would be remiss if I didn’t mention the success of DeepMind’s program, AlphaFold, in the Critical Assessment of protein Structure Prediction (CASP) challenge, which focuses on protein-structure prediction.

Although DeepMind has been working on AlphaFold for years, this current version tested yesterday provided results that were a quantum leap compared to prior years.

From Deepmind: https://deepmind.com/blog/article/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology

The reason that protein folding is so difficult to calculate is that there are multiple levels of structure to a protein. We learn about amino acids, which are the building blocks of proteins and are basically defined by DNA. The A’s, T’s, C’s, and G’s provide an alphabet that defines the linear lineup of a protein, with each group of three nucleotides defining an amino acid.

But then there’s a secondary structure, where internal bonding can make the protein line up as alpha helices or beta sheets. The totality of these secondary structures, this combination of helix and sheet shapes, folds into the tertiary structure.

And then multiple chains of tertiary structure can come together into a quaternary structure, which is the end game for building a protein. If you really want to learn the details, Khan Academy has a nice video to walk you through the details, as I’ve skipped all of the chemistry.

But the big takeaway: there are four levels of increasingly complicated chemical structure for a protein, each with its own set of interactions that make it very computationally challenging to guess what a protein would look like based just on having the basic DNA sequence or the related amino acid sequence.

Billions of computing hours have been spent on trying to figure out some vague idea of what a protein might look like and billions of lab hours have then been spent trying to test whether this wild guess is accurate or, more likely, not. This is why it is an amazing game-changer to see that DeepMind has basically nailed what the quaternary structure looks like.

This version of AlphaFold is an exciting, Nobel Prize-caliber discovery. I think this will be the first Nobel Prize driven by deep learning, and this discovery is an exciting validation of the value of AI at a practical level. At this point, AlphaFold is the “Data Prep” tool for protein folding, with the same potential to greatly reduce the effort needed to simply make sure that a protein is feasible.

This discovery will improve our ability to create drugs, explore biological systems, and fundamentally understand how mutations affect proteins on a universal scale.

This is an exciting time to be a part of the AI community and to see advances being made literally on a weekly basis. As an analyst in this space, I look forward to seeing how these, and other discoveries, filter down to tools that we are able to use for business and at home.

Posted on

Updated Analysis: ServiceNow Acquires Element AI

(Note: Last Updated January 15, 2021 to reflect the announced purchase price.)

On November 30, 2020, ServiceNow announced an agreement to purchase Element AI, which was one of the top-funded and fastest-growing companies in the AI space.

Element AI was founded in 2016 by a supergroup of AI and technology executives with prior exits including Jean-Francois Gagne, Anne Martel, Nicolas Chapados, Philippe Beaudoin, and Turing Award winner Yoshua Bengio. This team was focused on helping non-technical companies to develop AI software solutions and expectations were only raised after a $102 million Series A round in 2017 followed by a $151 million funding round in 2019.

Element AI’s business model was similar to the likes of Pivotal Labs from a software development perspective or Slalom from an analytics perspective in that Element AI sought to provide the talent, skills, resources, and development plans to help companies adopt AI. The firm was often brought in to support AI projects that were beyond the scope of larger, more traditional consultancies such as Accenture and McKinsey.

However, Element AI faced a crossroads in 2020 between several key market trends. First, the barrier to entry for AI has been reduced considerably due to the development of AutoML solutions combined with the increased adoption of Python and R. Second, management consulting revenue growth slowed down in 2020, which reduced the velocity of pipeline to Element AI and made it harder to project the “hockey stick” exponential growth expected of highly funded companies, especially in light of COVID-related contract delays. And third, the ROI associated with AI projects is now better understood to come largely from the automation and optimization of processes within already-existing digital transformation projects, which makes separate AI efforts duplicative in nature, as Amalgam Insights has documented in our Business Value Analysis reports over time.

In the face of these trends, the acquisition of Element AI by ServiceNow is a very logical exit. This acquisition allows investors to get their money back relatively quickly.

(Update: On January 14, 2021, ServiceNow disclosed in a filing that the purchase price was approximately US $230 million, or CDN $295 million. This was a massive discount on the estimated $600 million+ valuation from the September 2019 funding announcement.)

Not every bet on building a multi-billion dollar company works out as planned, but this exercise was successful in creating a team of AI professionals with experience in building enterprise solutions. Amalgam Insights expects that over 200 Element AI employees will end up moving over to ServiceNow to build AI-driven pipelines and solutions under ServiceNow’s chief AI officer Vijay Narayanan. This team was ultimately the key reason for ServiceNow to make the acquisition, as Element AI’s commercial work is expected to be shut down after the close of this acquisition so that Element AI can focus on the ServiceNow platform and the enterprise transformation efforts associated with the million-dollar contracts that ServiceNow creates.

With this acquisition, ServiceNow has also stated that it intends to maintain the Element AI Montreal office as an “AI innovation hub,” which Amalgam Insights highly approves of. Montreal has long been a hub of analytics, data science, and artificial intelligence efforts. Maintaining a hub there would both help ServiceNow from a technical perspective and help assuage some wounds that the Canadian government may have from losing a top AI company with government funding to a foreign company. Given ServiceNow’s international approach to business and Canada’s continued importance to the data, analytics, and AI spaces, this acquisition could be an unexpected win-win relationship between ServiceNow and Canada.

What To Expect Going Forward

With this acquisition, ServiceNow has quickly gained access to a large team of highly skilled AI professionals at a time when its revenues are growing 30% year over year. At this point, ServiceNow must scale quickly simply to keep up with its customers and this acquisition ended up being a necessary step to do so. This acquisition is the fourth AI-related acquisition made by ServiceNow after purchases of Loom Systems for log analytics, Passage AI to support conversational and natural language understanding, and Sweagle to support configuration management (CMDB) for IT management.

At the same time, Amalgam Insights believes this acquisition will provide focus to the Element AI team, which was dealing with the challenge of growing rapidly, while trying to solve the world’s AI problems ranging from AI for Good to defining ethical AI to building AI tools to discovering product-revenue alignment. The demands of trying to solve multiple problems as a startup, even an ambitious and well-funded startup, can be problematic. This acquisition allows Element AI to be a professional services arm and a development resource for ServiceNow’s ever-evolving platform roadmap as ServiceNow continues to expand from its IT roots to take on service, HR, finance, and other business challenges as a business-wide transformational platform.

Posted on

Analysis: Qlik acquires Blendr.io to Enhance Data Integration Capabilities

Key Stakeholders: Chief Information Officers, Chief Technology Officers, Chief Data Officers, Data Management Managers, Analytics Managers, Enterprise Information Managers

Why It Matters: SaaS and public cloud data sources are both rapidly proliferating and playing a bigger role in enterprise analytics. It is no longer enough to build a Single Source of Truth without being able to share and integrate data across all relevant sources.

Key Takeaway: This is an evolutionary time for analytics solutions as AI, process automation, public cloud, & SaaS proliferation are quickly changing the demands for analytics. This iPaaS (Integration Platform as a Service) acquisition shows how Qlik is augmenting its core capabilities to keep up with quickly-changing market demands.

About the Acquisition

On October 22, 2020, Qlik acquired Blendr.io, an Integration Platform as a Service (iPaaS) that supports data integration across over 500 Software as a Service applications and cloud data sources. This research note analyzes why Qlik acquired Blendr.io, what this means for current and potential clients of both companies, and key recommendations for data and analytics professionals to consider based on this acquisition.

Blendr.io was founded in 2017 in Belgium by tech entrepreneur Niko Nelissen, who had previous experience in building out marketing automation, event ticketing, data hosting, and data center operations businesses. In short, his background enveloped all aspects of providing data as a service both as backend data infrastructure as well as front-end data used for sales and marketing technologies.

This background led to the creation of Blendr.io, which quickly arose as a tool to support data automation and workflows across data, alerts, caches, applications, and databases. This capability has proven especially valuable at a time when application proliferation has occurred and the “trusted business platform” has been replaced by a federation of best-in-breed applications that need to be connected together from data, process, and synchronization perspectives.

Based on this need, Qlik’s acquisition of Blendr.io makes sense as a functional addition that both strengthens Qlik’s sales and marketing support and allows Qlik to play a greater role in delivering what they call “Active Intelligence.”

This acquisition is also accretive to Qlik’s prior acquisitions made since it became a Thoma Bravo portfolio company in 2016:

July 2018: Qlik acquires Podium Data to gain a Big Data-fluent data catalog and governance capability.

March 2019: Qlik acquires Attunity to support cloud data synchronization and to validate the advice I had provided at Attunity’s 2013 Analyst Day on the future valuation of Attunity.

January 2020: Qlik acquires RoxAI to support real-time alerts associated with data and analytic changes.

August 2020: Qlik acquires Knarr Analytics assets to strengthen its supply chain analytics capabilities and to bring in key talent.

This acquisition also comes as Qlik is in the process of retiring a prior acquisition, Qlik DataMarket. That 2014 acquisition was originally intended to help clients combine internal business data with external geographic, financial, or political data. But as access to public external data has become easier to support, and simply managing internal data across a wide variety of private sources has become the bigger challenge for business clients, Qlik has made a similar shift through this acquisition.

What to Expect from this Acquisition

Qlik customers with significant SaaS portfolios should be excited to see this acquisition, as it now allows Qlik to develop native analytic products across a variety of marketing, sales, productivity, machine learning, and public cloud platforms. Qlik states that it will start launching products based on the blendr.io acquisition in 2021. Amalgam Insights expects that the combination of iPaaS, data governance, and analytics will allow Qlik to create secure amalgams of data and process automation.

This step of integrating SaaS and cloud data to analytics is necessary for Qlik to deliver on the promise of the “data-driven enterprise” that we have all heard so much about over the past several years. To get beyond the hype, data must be contextualized, analyzed, and used both to accelerate basic rules-based actions and to support decisions based on more complex scenarios, politics, and strategies. Qlik’s acquisition of blendr.io is an important step forward in addition to the Podium Data and Attunity acquisitions in allowing Qlik to support a multi-app, multi-cloud, and multi-data environment where there is no longer the “single source of truth.”

For Blendr.io customers, expect Qlik to continue development on Blendr as a standalone product. It is in Qlik’s best interest to continue the development of Blendr’s iPaaS, as this is a competitive space. Blendr.io competes favorably with the likes of Dell Boomi, Jitterbit, MuleSoft, SnapLogic, Workato, and Zapier. Qlik will be pressured to maintain functionality on par or ahead of its competitors. However, similar to prior acquisitions, expect Blendr to see a name change and an expanded focus on supporting the Qlik portfolio of products. Over time, it would not be surprising to see this “Qlik iPaaS” being more integrated with the Qlik Data Catalyst catalog product and the replication and ingestion components of the Qlik Data Integration platform.

Recommendations

For Qlik customers, this is a good time to start putting pressure on your sales and account team on the types of products you would like to see from a combination of iPaaS integration, governed data, and analytics. Why build it yourself when you can make Qlik do a fair bit of the heavy lifting?

In addition, Qlik customers may want to share their current SaaS portfolios with their account teams to drive the development of the iPaaS. The challenge with managing SaaS-based data is not in connecting one app to one data source: APIs make that task fairly straightforward. The real challenge is that, based on estimates published by security vendors such as Symantec and Netskope, the average enterprise uses over 1,000 discrete apps and data-driven services, which leads to a near-infinite number of potential connections. Companies must get a “starter kit” of connectors to handle the Pareto 20% of issues that represent 80% of their day-to-day work.

For Blendr.io customers, expect to be introduced to the Qlik portfolio of products. For those who have not looked at Qlik in a few years and think of it mainly as the QlikView visual discovery solution, take a look at Qlik’s portfolio across data management, data lake and warehouse management, data replication, and the Qlik Sense data engine which includes more modern geospatial, search, and natural language analytic capabilities. Since Thoma Bravo’s acquisition of Qlik in 2016, Qlik has expanded its portfolio to support data management challenges.

Conclusion

Overall, Amalgam Insights is bullish on this latest acquisition, as it both fills a gap in Qlik’s existing portfolio of data and analytics capabilities and brings in a technology that is still relatively new and flexible for Qlik to integrate into its quickly growing portfolio. With this acquisition, Qlik gets one step closer to becoming an enterprise data solution.

From a market perspective, it is hard not to compare the moves Qlik makes to the moves made by fellow private equity-owned data companies TIBCO (purchased by Vista Equity in September 2014) and Informatica (purchased by Permira and Canada Pension Plan Investment Board in August 2015). Qlik’s focus on expanding data discovery and access across frequently used business data has been fairly consistent across its acquisitions. As these companies race towards the financial timelines and outcomes required by private equity, Amalgam Insights believes that Qlik is well on the path to creating a whole that is more than the sum of its parts.

If you have any additional questions about this acquisition, the current state of the business analytics market, or how to work with Amalgam Insights, please contact us at info@amalgaminsights.com to set up a consultation.

Posted on

Analysis: CoreView Raises $10 Million Series B Round for SaaS Management

Key Stakeholders: CIO, CTO, CFO, Software Asset Managers, IT Asset Managers, IT Procurement Managers, Technology Expense Managers, Sales Operations Managers, Marketing Operations Managers.

Why It Matters: The investment in CoreView comes at a time when SaaS proliferation and management are becoming a core IT problem. CoreView’s leadership position in managing Microsoft 365 and enterprise SaaS portfolios makes it a vendor to consider to solve the SaaS mess.

Top Takeaway: Enterprises and IT suppliers managing large SaaS portfolios either from a financial or operational perspective must find a solution to manage the hundreds or thousands of SaaS apps and services under management or risk both security breaches and financial bloat with millions of dollars at stake.

About the Funding Round

On October 5th, 2020, CoreView raised a $10 million Series B round which was led by Insight Partners. CoreView provides a Software as a Service management platform to secure and optimize Microsoft 365 and additional applications as an augmentation of the Microsoft M365 Admin Center.

CoreView was founded in 2014 in Milan, Italy by a team with experience as Microsoft system integrators to provide governance for Office 365 deployments. In October 2019, CoreView augmented its solution with the acquisition of Alpin, a SaaS management solution used to monitor SaaS activity and manage costs.

With this funding, CoreView is expected to increase both its direct clientele as well as its global reseller and service provider base. Having grown almost three-fold over the past year, CoreView is acquiring this funding at a time when SaaS management is becoming an increasingly important part of IT management.

From Amalgam Insights’ perspective, this funding is interesting for two reasons: the quality of the investor and the growing challenge of managing SaaS.

First, this round was led by Insight Partners, which has a strong history of investing in fast-growing DevOps and data companies in line with CoreView’s enterprise software management needs, including Ambassador, Carbon Relay, JFrog, Resolve, Dremio, and OneTrust. Because this investor has been deeply involved with investments in the future of software development and management, Amalgam Insights believes that Insight Partners provides value to CoreView as an investor.

Second, this funding is coming at a time when SaaS proliferation has become an increasingly challenging problem. This funding indicates where the next wave of growth is going to occur in IT management. After a decade of stating that “There’s an app for that,” companies must now face the challenge of standardizing and optimizing their app environments. Security vendors such as Symantec and Netskope have published estimates that the average enterprise uses between 900 and 1,200 discrete apps and services on a regular basis, which creates a logistical nightmare.

A decade ago, I wrote on the challenges of Bring Your Own Device and the issues of expense and inventory management for these $500 devices. But with the emergence of Bring Your Own App, multiplied by the sheer proliferation of productivity, sales and marketing, and other line-of-business applications, SaaS management was already coming of age as its own complicated challenge for IT as SaaS was growing 20-25% per year as a market. With the challenges of COVID-19, SaaS has only become more important for keeping remote and work-from-home employees connected to key tools and data.

Recommendations

Based on this funding round, Amalgam Insights makes one key recommendation for IT departments: get control of your SaaS portfolio, which is likely scattered across line-of-business admins, expense reports, and the half of SaaS spend tied to enterprise software that is already managed by IT. Even if app administration remains in the hands of line-of-business teams, IT needs to be aware of data governance, data integration and duplication, and zero-trust-based management of least-privilege access across apps. IT still has a job in the SaaS era: Amalgam Insights projects that SaaS will grow from roughly a quarter of all enterprise software spending in 2020 to approximately $300 billion by 2025, tripling in size and becoming half of the enterprise software market.

An additional recommendation for all IT agents, resellers, and service providers is to gain a SaaS management capability as soon as possible. At this point, this means looking at two areas: SaaS Operations Management focused on the governance and configuration of SaaS and SaaS Expense Management focused on the inventory and cost optimization of SaaS. There is some overlap between the two as well as some areas of specialization. From Amalgam Insights’ perspective, CoreView is recommended as both an Operations Management and an Expense Management solution with a specialization in supporting Microsoft 365.

If you have any questions about this research note or on the SaaS Management market, please contact Amalgam Insights at info@AmalgamInsights.com to set up time to speak with our analysts.

Posted on

Is Apple Losing Its Consumer Marketing Touch?

CNBC’s Jessica Bursztynsky just wrote a nice piece, “Apple fails to market the iPhone 12 Pro to the average consumer.”

My take on the article: One of Apple’s traditional strengths has been translating technical capabilities into household tasks. This strength is what allowed the iPhone to take off in the first place when the initial iPhone hardware was inferior to its competitors. As an example, when the iPhone first came out, 3G networks had already been in the United States for five years, yet Apple started with a 2G phone.

The odd part is that the technical capabilities of the iPhone 12 do translate to a more personal phone: take the outdoors home with you, augment your world, get a smarter phone. Its 5 nm chip makes it smarter than any iPhone before it. But Apple didn’t find a way to bring the story together for the iPhone 12 despite having a more vivid, smarter, faster, and better-networked phone. From a technical perspective, the iPhone 12 is a big upgrade, almost a generational improvement.

But Apple fell for the hype of its partners with 5G and 5 nm rather than the personal, high-end, affordable-luxury, game-changer branding that has made Apple a juggernaut. If there is anything that Apple should know by now, it is that all of these technical numbers are practically meaningless to its core audience. Although I’ve joked in the past that technology doesn’t seem to exist before Apple acknowledges that it exists, I don’t actually think that works for 5G, as both the infrastructure and use cases for 5G at the consumer level have not been fully figured out yet.

Just as Bose customers couldn’t have cared less about the audiophile’s perspective of Bose products, Apple customers couldn’t care less about the computing specs compared to the simple question of “Does it work?”

Some basic apps or features on the iPhone 12 taking advantage of the enhanced photos, LIDAR, and 5nm based processing in the background would have been great. If Apple can’t figure out how 40%+ faster helps you, how can anyone else?

It’s also interesting that there was little in the new phone regarding security or working from home. I guess Apple figured it has nailed Work and School from Home despite all the challenges that still exist. But for anybody who has either been moved to a work from home situation or has had the interminable experience of helping your kid with a remote schooling environment, you know there is a lot of work left. Some sort of example of how to make the iPhone a work hub would have been really interesting.

To me, the iPhone 12 launch felt like an old Nokia Symbian phone launch that always focused on specs and hardware superiority. Even BlackBerry, back in the day, had more appeal to the feel and UX of its devices. Ask Nokia how that technical superiority sale turned out in the late 2000s. 

I’m not saying Apple will disappear tomorrow. But the iPhone 12 launch looked like that of a mature technology waiting to be disrupted rather than a technology designed to further enhance your life. This is an interesting time to watch the evolution of the smartphone industry, as augmented reality devices are not ready for the mainstream yet, Huawei is dealing with geopolitical challenges, Samsung continues to produce a variety of smart devices, and Google has revived the Pixel brand with an impressive set of recent device models.

My recommendation: the iPhone 12 is an interesting set of functionalities that still lacks the infrastructure and apps to fully take advantage of what it does. I think this will be a great device to purchase around the same time that a COVID vaccine becomes generally available, probably around Summer of next year. Until then, if you want to get used to the photo and LIDAR capabilities of the phone or are in a city with good 5G coverage, the iPhone 12 is a good starting point.

For more context on 5G, please read our strategic guide on 5G or contact us at info@amalgaminsights.com to set up a strategic consulting session.

Posted on

Amazon Web Services Launches AWS Cost Anomaly Detection, in Beta

If you’re moving into cloud, Amazon launched a service on September 25th called AWS Cost Anomaly Detection within AWS Cost Management to find surges in spend. Part of the product is a machine learning algorithm that tracks your spend, detects anomalies, and makes sure that spend peaks aren’t just part of a cyclical spend change. One of the interesting aspects of this product to me is the flexibility of monitoring spend based on service, account, category, or tag.

AWS Monitor Types for Cost Anomaly Detection

The tagging capability is the most interesting one to me, as tags are how cloud costs are effectively cross-charged to projects, cost centers, geographies, and the other financial categories that are most relevant from an IT expense and financial management perspective. Although the other spend monitoring categories are interesting from a practitioner level and obviously should be used to optimize spend, they will likely be less useful to share with your colleagues.

I’m especially interested in seeing more detail about how the machine learning ends up tracking AWS service spend over time to correct its recommendations. One of the interesting aspects of this service is that you do not actually choose parameters for which anomalies get tracked, as the algorithmic approach picks up every spike. Rather, the service focuses on when it should alert you to changes and anomalies based on the size of the spike. You can then choose to be alerted on a near-real-time, daily, or weekly basis.
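
For those who prefer to set this up programmatically rather than through the console, here is a minimal sketch using boto3’s Cost Explorer client, which exposed anomaly monitors and subscriptions around the time of launch. The monitor name, threshold, and email address are placeholders, and since the service is in beta, the exact parameters are worth verifying against the current AWS documentation.

```python
# Sketch: creating a Cost Anomaly Detection monitor and a daily alert subscription.
# Names, threshold, and email are placeholders; check current AWS docs for details.
import boto3

ce = boto3.client("ce")  # Cost Explorer, which hosts the anomaly detection APIs

monitor = ce.create_anomaly_monitor(
    AnomalyMonitor={
        "MonitorName": "per-service-spend-monitor",
        "MonitorType": "DIMENSIONAL",
        "MonitorDimension": "SERVICE",  # could instead scope by account, tag, or category
    }
)

ce.create_anomaly_subscription(
    AnomalySubscription={
        "SubscriptionName": "daily-spend-alerts",
        "MonitorArnList": [monitor["MonitorArn"]],
        "Subscribers": [{"Type": "EMAIL", "Address": "finops-team@example.com"}],
        "Threshold": 100.0,   # only alert on anomalies above $100 of impact
        "Frequency": "DAILY", # IMMEDIATE and WEEKLY are also supported
    }
)
```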

Given that it’s currently a beta product, I’m betting that the alerts and recommendations aren’t quite fully baked at this point. But even so, this optimization moves cloud towards the state of in-billing period monitoring and optimization that we’re used to doing in wireless and wired spend. Take a look and see how Cost Anomaly Detection starts to shape and optimize your AWS services’ spend.

Of course, this is an AWS-specific service, so there are still opportunities both for other cloud providers to provide similar services as well as for the leading third-party cloud service management providers such as Apptio Cloudability, Cloudcheckr, CloudHealth by VMware, Calero-MDSL, Flexera, Snow Software, Tangoe, and Upland Software to also develop similar capabilities for multi-cloud.

For now, Amalgam Insights recommends taking a look at the documentation and learning how the service works. We are starting to transform IT cost management from a practice of manually tracking cost data on our own to depending on algorithms and machine logic to do the hard number-crunching and swivel-chair work for us. Even if you’re not going back to school to learn the linear algebra, calculus, and neural net designs needed to do data science on your own, you need to have an idea of what can and can’t be done through algorithmic means.

Posted on

Motus Acquires Vision Wireless to Bolster Enterprise Mobility Support

On September 30th, 2020, workforce expense vendor Motus announced the acquisition of Vision Wireless, a wireless expense management company based in Augusta, Georgia in the United States. This purchase demonstrates Motus’ continued focus on enterprise mobility, adding to its September 2019 purchase of Wireless Analytics.

Vision Wireless was founded in 2003 with a focus on wireless expense management and managed mobility services provided to Fortune 1000 and mid-sized enterprises. In recent years, Vision Wireless has taken a greater role in providing thought leadership to the industry at large, including its recent sponsorship of Amalgam Insights’ Technology Expense Management Expo.

Over the past decade, I’ve typically recommended Vision Wireless as an appropriate vendor for North American-based organizations over $1 billion in revenue. Vision Wireless is known for its strong managed service capabilities with white-glove service and a software solution that supports integration with a variety of accounts payable, general ledger, procurement, IT service management, human resources, and mobile device management solutions. Vision Wireless was also working on increasing its automation, self-service, and integration capabilities.

With the acquisition by Motus, Amalgam Insights expects that Vision Wireless’ platform development will accelerate to support Motus’ ambitions of becoming a full-service remote worker solution. Over the past year, Motus has brought its Fixed and Variable Rate methodology to mobile devices to help companies support Total Cost of Ownership and expense analysis across both business and personal (Bring Your Own Device) use cases.

What’s Next?

Vision Wireless customers should expect to see no significant changes in their support in the immediate future. Motus acquired Vision Wireless in no small part to acquire the quality and depth of service that Vision Wireless has provided to its clients for years. At the same time, Vision Wireless customers should also be aware that Motus also has a fleet management solution for organizations that conduct business activities with personal vehicles. For companies seeking to improve their remote worker expense management, this Motus solution may be an opportunity to help support both current working conditions as well as a post-COVID future that maintains a significant remote worker population.

Motus customers gain a high-quality mobility support and managed services team with experience supporting clients such as Aramark, ServiceMaster, and CVS Health. This acquisition will ease Motus’ ability to support integration with a variety of software solutions and platforms, such as ServiceNow, SAP Ariba, Coupa, and leading ERP solutions.

Amalgam Insights is also interested to see how this acquisition affects the market for remaining high-quality managed mobility services and wireless expense solutions that we track such as Advantix, GoExceed, ICOMM, Intratem, mindWireless, Mobichord, MobilSense, Mobile Solutions, Valicom, and vMox.

In the long term, as cars become increasingly connected, the Internet of Things becomes ubiquitous, and the need for remote mobility support only grows, Amalgam Insights believes that Vision Wireless’ long-time expertise in supporting enterprise mobility will prove to be a strong asset for Motus. Amalgam Insights also believes that this is not the end of Motus’ acquisition streak as a Thoma Bravo portfolio company. For Motus to fully unleash its potential as a remote worker reimbursement vendor, Amalgam Insights believes that there is still room for Motus to expand into cloud management, security, home office expenses, and other business categories that are split between corporate and personal responsibilities.

Overall, Amalgam Insights believes that this acquisition represents a strong commitment by Motus to continue expanding its support of enterprise mobility, the acquisition of a strong enterprise client base, and an opportunity to use Vision Wireless’ managed services foundation and platform to help expand its remote work support. This acquisition is indicative of what should be happening in the wireless and telecom expense market: accretive acquisitions that support a bigger picture, such as the future of remote work management, the future of IT management, and the future of employee management. Expect more acquisitions like this in the near future.

Posted on

UPDATE – Quick Take: Is Oracle Buying Tiktok? (Hint: It’s all about the cloud)

Last Updated January 20th, 2021

(Update: As of January 20th, with the presidential inauguration of Joe Biden, it seems unlikely that the Biden administration will continue the pursuit of the US ban on Tiktok. This follows U.S. District Court Judge Carl Nichols’ Dec. 7 ruling that the Commerce Department had “likely overstepped” its authority in placing the ban. An earlier injunction against shutting down Tiktok services, issued on October 30th by Judge Wendy Beetlestone, is currently scheduled to be appealed in the United States Court of Appeals for the Third Circuit in February 2021.)

Key Takeaway: Master tactician Larry Ellison gains a feather in the Oracle Cloud by playing the long game and positioning Tiktok as a significant Oracle Cloud customer. Well played, Mr. Ellison.

As if 2020 hasn’t been weird enough, many of us are finding out that enterprise stalwart Oracle is apparently going to purchase Gen Z (born after 1995) and Gen Alpha (born 2010 or later) social media darling Tiktok.

What? Is this actually happening?

Well, not quite. But to explain, first we need to look at the context.

Last month, President Trump issued an executive order to ban Tiktok in the United States based on security and censorship issues. This was seen both as a move against the Chinese economy and as a way to protect global social media platforms based in the United States such as Facebook and Twitter.

In response, a number of potential suitors showed up with either bids or proposals to support Tiktok in the United States. Microsoft showed interest in purchasing Tiktok to support its Azure cloud and gain a massive source of video content that would be useful across Microsoft’s marketing (Bing), gaming (Xbox, Minecraft), augmented reality (HoloLens), and artificial intelligence (Azure AI) businesses. And at one point, retail giant Walmart was associated with this bid, perhaps in an attempt to fend off Amazon on this digital path. But this bid was rejected on September 13.

Oracle came in after Microsoft, showing interest in Tiktok. At the time, there was massive confusion from the market at large on why Oracle would be interested. But, as someone who has written about the tight relationship between social technologies and the cloud for many years, my immediate thought was that it’s all about the cloud.

Oracle has been forcefully marketing Oracle Cloud Infrastructure as an enterprise solution after making significant investments to improve connectivity and usability. These recent changes have led to significant logo wins including Zoom and 8×8, both of which chose Oracle for its performance and 80% savings on outbound network traffic. The cost of connectivity has traditionally been a weak point for leading cloud providers, both due to a lack of focus on networking and because cloud vendors have wanted to gate data within their own platform and have little to no incentive to make inter-cloud transfers and migrations cheaper and easier. But Oracle’s current market position combined with its prior investments in high performance computing and network performance means that it makes good business sense for Oracle to be the most efficient cloud on a per-node and bandwidth perspective and to attack where other cloud vendors are weak.

Social media and communications vendors are massive cloud customers in their own right. Pinterest has a six-year, $750 million commitment with Amazon Web Services and is easily on pace to spend far more. Lyft has its own $300 million commitment with AWS. And Citrix has a $1 billion commitment with its cloud vendor of choice, presumably Microsoft Azure. The cloud contracts of large and dynamic social and video-centric vendors are enormous. Every cloud provider would be glad to support the likes of Tiktok as a customer, or potentially even as a massive operations writeoff that would be countered by the billions of dollars in revenue Tiktok provides.

And, of course, Tiktok creates a massive amount of data. Similar to Microsoft’s interest in Tiktok, Oracle obviously has both expertise and a large business focused on the storage and analysis of data. Managing Tiktok content, workloads, and infrastructure would provide Oracle with technical insights to video creation trends and management that no other company other than perhaps Alphabet’s Youtube could provide. Over the past couple of years, Oracle has put a lot of effort both into database automation and cloud administration with its Gen2 offering.

In addition to bolstering Oracle’s cloud, Tiktok also could make sense as a tie-in to Oracle’s Marketing Cloud. At a time when large marketing suites are struggling to support new platforms such as Tiktok, what better way to develop support than to own or to access the underlying technology? But wait, does Oracle have access to Tiktok’s code and algorithms?

Apparently not. Current stories suggest that Oracle will be the hosting partner or “Trusted Technology Provider” for Tiktok America while Tiktok parent company ByteDance still maintains a majority ownership of the company. It looks like Oracle has positioned itself to be the cloud provider for a massive social media platform, as the United States alone has over 100 million active users on Tiktok. And the speculation behind Microsoft’s rejected bid is that Microsoft sought to purchase the source code and algorithms of Tiktok, which ByteDance refused to provide.

So, the net-net appears to be that in response to Trump’s executive order, Oracle will gain an anchor client for Oracle Cloud Infrastructure while making some investment into the new Tiktok US organization. Oracle’s reputation for security and tight US government relations is expected to paper over any current concerns about data sovereignty and governance, such as Chinese access to US user data. Current Tiktok investors, such as General Atlantic and Sequoia Capital, may also have stakes in the new US company. This activity effectively puts more money into a Chinese company. Most importantly, this action will allow Tiktok to remain operational in the United States after September 20th, the original due date of the executive order.

Congratulations to Oracle and Larry Ellison on a game well played.