Posted on

Instant Mediocrity: a Business Guide to ChatGPT in the Enterprise

In June of 2023, we are firmly in the midst of the highest of hype levels for Generative AI, as ChatGPT has absorbed the combined hype of the Metaverse, cryptocurrency, and cloud computing. We now face a deluge of self-proclaimed "prompt engineering" experts offering guidance on the AI tools that will supposedly make your life easier. At the same time, we are running into a cohort of technologists who warn that AI is only one step away from achieving full sentience and taking over the world as an apocalyptic force.

In light of this extreme set of hype drivers, the rest of us face some genuine business concerns associated with generative AI. But our issues lie neither in worshipping our new robot overlords nor in a next generation of "digital transformation" in which the AI-driven evolution of our businesses lays off half the staff. Rather, we face more prosaic concerns about how to actually use Generative AI in a business environment and take advantage of the productivity gains that are possible with ChatGPT and other AI tools.

Anybody who has used ChatGPT in their area of expertise has quickly learned that ChatGPT has a lot of holes in its "knowledge" of a subject that prevent it from providing complete answers, timely answers, or productive outputs that can truly replace expert advice. Although Generative AI provides rapid answers with a response rate, cadence, and confidence that mimic human speech, it is often missing either the context or the facts to provide the level of feedback that a colleague would. Rather, what we get is "instant mediocrity": an answer that matches what a mediocre colleague would provide if given a half day, a full day, or a week to reply. If you're a writer, you will quickly notice that the essays and poems ChatGPT writes are often structurally sound but lack the insight and skill needed for a university-level assignment.

And the truth is that instant mediocrity is often a useful level of skill. If one is trying to answer a question that has one of three or four answers, a technology that is mediocre at that skill will probably give you the right answer. If you want to provide a standard answer for structuring a project or setting up a spreadsheet to support a process, a mediocre response is good enough. If you want to remember all of the standard marketing tools used in a business, a mediocre answer is just fine. As long as you don’t need inspired answers, mediocrity can provide a lot of value.

Here are a few things to consider as your organization starts using ChatGPT. Just as when the iPhone launched 16 years ago, you don't really have a choice on whether your company uses ChatGPT; all you can do is figure out how to manage and govern that use. Our recommendations typically fall into one of three major categories: Strategy, Productivity, and Cost. Given the relatively low price of ChatGPT, both as a consumer-grade tool and as an API where current pricing is typically a fraction of the cost of doing similar tasks without AI, the focus here will be on strategy and productivity.

Strategy – Every software company now has a ChatGPT roadmap, and even mid-sized companies typically have hundreds of apps under management. So there will be 200, 500, or however many potential ways for employees to use ChatGPT over the next 12 months. Figure out how GPT is being integrated into the software and whether GPT is being used directly to process data or indirectly to help query, index, or augment data.

Strategy – Identify the value of mediocrity. For the average large enterprise, mediocrity from a query-writing or indexing perspective is often a much higher standard than mediocrity in text autocompletion. Seek mediocrity in tasks where the average online standard is already higher than the average skill within your organization.

Strategy – How will you keep proprietary data out of Gen AI? Most famously, Samsung recently had a scare when it saw how AI tools were echoing and using proprietary information. How are companies ensuring both that they have not put new proprietary data into generative AI tools for potential public use and that their existing proprietary data was not used to train generative AI models? This governance will require AI providers to offer greater visibility into the data sources used to build and train the models we are using today.

Strategy – On a related note, how will you keep commercially used intellectual property from being used by Gen AI? Most intellectual property covered by copyright or patent does not allow for commercial reuse without some form of license. Do companies need to figure out some way of licensing data that is used to train commercial models? Or can models verify that they have not used any copyrighted data? Even if users have relinquished copyright to the specific social networks and websites that they initially wrote for, this does not automatically give OpenAI and other AI providers the same license to use that data for training. And can AIs own copyright? Once large content providers such as music publishers, book publishers, and entertainment studios realize the extent to which their intellectual property is at risk with AI, and somebody starts making millions with AI-enabled content that strongly resembles existing IP, massive lawsuits will ensue. If you are an original content provider, be ready to defend your IP. If you are using AI, be wary of actively commercializing or claiming ownership of AI-enabled work for anything other than parody or stock work that can be easily replaced.

Productivity – Is code enterprise-grade: secure, compliant, and free of private corporate metadata? One of the most interesting new use cases for generative AI is the ability to create working code without having prior knowledge of the programming language. Although generative AI currently cannot create entire applications without significant developer engagement, it can quickly provide specifically defined snippets, functions, and calls that may have been a challenge to explicitly search for or to find on Stack Overflow or in public git repositories. As this use case proliferates, coders need to understand their auto-generated code well enough to check for security issues, performance challenges, appropriate metadata and documentation, and reusability based on corporate service and workload management policies. But this will increasingly allow developers to shift from directly coding every line to editing and proofing the quality of code. In doing so, we may see a renaissance of cleaner, more optimized, and more reusable code for internal applications as the standard for code becomes "instantly mediocre."
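
As a hypothetical illustration of the review pass described above, consider the kind of snippet an AI assistant might plausibly generate and what a reviewer should catch on inspection. The table and column names here are invented for the example; the point is the difference between string-built SQL and a parameterized query.

```python
import sqlite3

# As generated: the query is assembled with string formatting.
# If `username` comes from user input, this is vulnerable to SQL injection.
def get_user_unsafe(conn: sqlite3.Connection, username: str):
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchone()

# After review: the same query with a parameterized placeholder, so the
# database driver escapes the input and the injection path is closed.
def get_user(conn: sqlite3.Connection, username: str):
    query = "SELECT id, email FROM users WHERE name = ?"
    return conn.execute(query, (username,)).fetchone()
```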

Productivity – Quality, not Quantity. There are hundreds of AI-enabled tools out there providing chat, search-based outputs, generative text and graphics, and other AI capabilities. Measure twice and cut once in choosing the tools you use: it's better to find the five tools that matter and that maximize the mediocre output you receive than to wade through the 150 tools that don't.

Productivity – Are employees trained on fact-checking and proofing their Gen AI outputs? Whether employees are creating text, getting sample code, or prompting new graphics and video, the outputs need to be verified against a fact-based source to ensure that the generative AI has not “hallucinated” or autocompleted details that are incorrect. Generative AI seeks to provide the next best word or the next best pixel that is most associated with the prompt that it has been given, but there are no guarantees that this autocompletion will be factual just because it is related to the prompt at hand. Although there is a lot of work being done to make general models more factual, this is an area where enterprises will likely have to build their own, more personalized models over time that are industry, language, and culturally specific. Ideally, ChatGPT and other Generative AI tools are a learning and teaching experience, not just a quick cheat.
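
One lightweight way to operationalize that training is to flag the concrete, checkable details in a generated draft for human verification. The sketch below is a naive illustration, not a complete fact-checking system: it simply surfaces sentences containing numbers, years, or percentages so a reviewer knows where to look first.

```python
import re

def flag_claims_for_review(generated_text: str) -> list:
    """Return sentences containing concrete, checkable details (numbers,
    years, percentages) so a human can verify them against trusted sources."""
    sentences = re.split(r"(?<=[.!?])\s+", generated_text.strip())
    checkable = re.compile(r"\d[\d,.]*%?")  # digits, optionally a percentage
    return [s for s in sentences if checkable.search(s)]

draft = ("The product launched in 2019 and now serves 4,000 customers. "
         "It is widely regarded as easy to use.")
for claim in flag_claims_for_review(draft):
    print("VERIFY:", claim)
```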

Productivity – How will Generative AI help accelerate your process and workflow automation? Currently, automation tends to be a rules-driven set of processes that lead to the execution of a specific action. But generative AI can do a mediocre job of translating intention into a set of directions or a set of tasks that need to be completed. While generative AI may get the order of actions wrong or make other text-based errors that need to be fact-checked by a human, the AI can accelerate the initial discovery and staging of steps needed to complete business processes. Over time, this natural language generation-based approach to process mapping is going to become the standard starting point for process automation. Process automation engineers, workflow engineers, and process miners will all need to learn how prompts can be optimized to quickly define processes.
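
As a minimal sketch of what that starting point might look like, the code below asks a chat model to turn a stated intention into a numbered task list that a process engineer then corrects and refines. It assumes the 2023-era OpenAI Python client and the gpt-3.5-turbo model; the prompt wording and function name are illustrative only.

```python
import openai  # pip install openai (pre-1.0, ChatCompletion-style interface)

openai.api_key = "YOUR_API_KEY"  # placeholder credential

def draft_process_steps(intention: str) -> str:
    """Translate a business intention into an ordered task list for a
    process engineer to review, correct, and refine."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a process analyst. Return a numbered list of steps."},
            {"role": "user", "content": intention},
        ],
    )
    return response["choices"][0]["message"]["content"]

print(draft_process_steps("Onboard a new vendor, from intake form to first payment."))
```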

Cost – What will you need to do to build your own AI models? Although the majority of ChatGPT announcements focus on some level of integration between an existing platform or application and some form of GPT or other generative AI tool, there are exceptions. BloombergGPT provides a model based on all of the financial data that it has collected to help support financial research efforts. Both Stanford University Alpaca and Databricks Dolly have provided tools for building custom large language models. At some point, businesses are going to want to use their own proprietary documents, data, jargon, and processes to build their own custom AI assistants and models. When it comes time for businesses to build their own billion-parameter, billion-word models, will they be ready with the combination of metadata definitions, comprehensive data lake, role definitions, compute and storage resources, and data science operationalization capabilities to support these custom models? And how will companies justify the model creation cost compared to using existing models? Amalgam Insights has some ideas that we’ll share in a future blog. But for now, let’s just say that the real challenge here is not in defining better results, but in making the right data investments now that will empower the organization to move forward and take the steps that thought leaders like Bloomberg have already pursued in digitizing and algorithmically opening up their data with generative AI.

Although we somewhat jokingly call ChatGPT "instant mediocrity," the phrase should be taken seriously in acknowledging both the cadence and the quality of the responses it creates. Mediocrity can actually be a high level of performance by some standards. Getting the response an average 1x'er employee would provide, but instantly, is valuable as long as it is seen for what it is rather than unnecessarily glorified or exaggerated. Treat it as an intern- or student-level output that requires professional review rather than an independent assistant, and it will greatly improve your professional output. Treat it as an expert and you may end up needing legal assistance. (But maybe not from this lawyer.)

Posted on

What To Watch Out For As GPT Leads a New Technological Revolution

2022 was a banner year for artificial intelligence technologies reaching the mainstream. After years of frustration with the likes of Alexa, Cortana, and Siri, and an inability to understand the value of machine learning other than as a vague backend technology for the likes of Facebook and Google, 2022 brought us AI-based tools that were understandable at a consumer level. From our perspective, the most meaningful of these were two products created by OpenAI: DALL-E and ChatGPT, which expanded the concept of consumer AI from a simple search or networking capability to a more comprehensive and creative approach for translating sentiments and thoughts into outputs.

DALL-E (and its successor DALL-E 2) is a system that can create visual images based on text descriptions. The models behind DALL-E look at relationships between existing images and the text metadata used to describe those images. Based on these titles and descriptions, DALL-E uses diffusion models that start with random pixels and iteratively generate images matching the descriptions. This area of research is by no means unique to OpenAI, but opening a creative tool such as DALL-E to the public is novel. Although the outputs are often both interesting and surprisingly different from what one might have imagined, they are not without their issues. For instance, the legal ownership of DALL-E-created graphics is not clear, since OpenAI claims to own the images used, but the images themselves are often based on other copyrighted images. One can imagine that, over time, an artistic sampling model may start to appear, similar to the music industry, where licensing contracts are used to manage the usage of copyrighted material. But this will require greater visibility regarding the lineage of AI-based content creation and the data used to support graphic diffusion. Until this significant legal question is solved, Amalgam Insights believes that the commercial usage of DALL-E will be challenging to manage. This is somewhat reminiscent of the challenges that Napster faced at the end of the 20th century as a technology that both transformed the music industry and had to deal with the challenges of a new digital frontier.

But the technology that has taken over the zeitgeist of technology users is ChatGPT and related use cases associated with the GPT (Generative Pre-Trained Transformer) autoregressive language model trained on 500 billion words across the web, Wikipedia, and books. And it has become the favorite plaything of many a technologist. What makes ChatGPT attractive is its ability to take requests from users asking questions with some level of subject matter specificity or formatting and to create responses in real-time. Here are a couple of examples from both a subject matter and creative perspective.

Example 1: Please provide a blueprint for bootstrapping a software startup.

This is a bit generic and lacks some important details on how to find funding or sell the product, but it is in line with what one might expect to see in a standard web article about how to build a software product. The ending of this answer shows how the autogenerated text is likely drawing on prior web content built for search engine optimization, providing a polite conclusion based on junior high school lessons in writing the standard five-paragraph essay rather than a meaningful conclusion that provides insight. In short, it is basically a status quo, average article with helpful information that should not be overlooked, but it is neither comprehensive nor particularly insightful for anyone who has actually started a business.

A second example of ChatGPT's use is in providing creative structural formats for relatively obscure topics. As you know, I'm an expert in technology expense management with over two decades of experience, and one of the big issues I see is, of course, the lack of poetry associated with this amazing topic. So, again, let's go to ChatGPT.

Example 2: Write a sonnet on the importance of reducing telecom expenses

As a poem, this is pretty good for something written in two seconds. But it's not a sonnet: sonnets are 14 lines, written in iambic pentameter (10-syllable lines split into 5 iambs, an unstressed syllable followed by a stressed syllable) and split into three sections of four lines followed by a two-line section, with a rhyme scheme of ABAB, CDCD, EFEF, GG. So, there's a lot missing there.
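
To make the structural critique concrete, here is a rough, purely illustrative checker for the form described above. Syllable counting in English is genuinely hard, so this uses a crude vowel-group heuristic and skips meter and rhyme, which would need a pronunciation dictionary.

```python
import re

def sonnet_structure_problems(poem: str) -> list:
    """Rough checks for the Shakespearean sonnet form: 14 lines and
    roughly 10 syllables per line. Meter and rhyme checks are omitted."""
    lines = [ln for ln in poem.strip().splitlines() if ln.strip()]
    problems = []
    if len(lines) != 14:
        problems.append(f"expected 14 lines, found {len(lines)}")
    for i, line in enumerate(lines, 1):
        # crude syllable estimate: count runs of consecutive vowels
        syllables = len(re.findall(r"[aeiouy]+", line.lower()))
        if not 8 <= syllables <= 12:  # slack for the rough heuristic
            problems.append(f"line {i} has roughly {syllables} syllables")
    return problems
```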

So, based on these examples, how should ChatGPT be used? First, let's look at what this content reflects. The content here represents the average web and text content associated with the topic. With 500 billion words in the GPT-3 corpus, there is a lot of context to show what should come next for a wide variety of topics. Initial concerns about GPT-3 started with the challenge of answering questions on extremely specific topics outside its training data. So let's consider a topic I worked on in some detail back in my college days, using appropriate academic language to ask a version of Gayatri Spivak's famous (in academic circles) question, "Can the subaltern speak?"

Example 3: Is the subaltern allowed to fully articulate a semiotic voice?

Considering that the language and topic here are fairly specialized, the introductory assumptions are descriptive but not incisive. The answer struggles with the "semiotic voice" aspect of the question in discussing the ability and agency to use symbols from a cultural and societal perspective. Again, the text provides a feeling of context that is necessary, but not sufficient, to answer the question. The focus here is on providing a short summary that introduces the issue before taking the easy way out by telling us what is "important to recognize" without really taking a stand. And, again, the conclusion sounds like something out of an antiseptic human resources manual, asking the reader to consider "different experiences and abilities" rather than the actual question regarding the ability to use symbols, signs, and assumptions. This is probably enough analysis at a superficial level, as the goal here isn't to deeply explore postmodern semiotic theory but to test ChatGPT's response on a specialized topic.

Based on these three examples, one should be careful about counting on ChatGPT to provide a comprehensive or definitive answer to a question. Realistically, we can expect ChatGPT to provide representative content for a topic based on what is on the web. The completeness and accuracy of a ChatGPT answer will depend on how often the topic has been covered online. The more complete an answer is, the more likely it is that the topic has already been covered in detail.

ChatGPT will provide a starting point for a topic and typically provide information that should be included to introduce the topic. Interestingly, this means that ChatGPT is significantly influenced by the preferences that have built online web text over the past decade of content explosion. The quality of ChatGPT outputs seems most impressive to those who treat writing as a factual exercise or content creation channel, while those who look at writing as a channel to explore ideas may find it lacking for now, given its generalized model.

From a topical perspective, ChatGPT will probably have some basic context for whatever text is used in a query. It would be interesting to see the GPT-3 model augmented with specific subject matter texts that could prioritize up-to-date research, coding, policy, financial analysis, or other timely new content either as a product or training capability.

In addition, don’t expect ChatGPT to provide strong recommendations or guidance. The auto-completion that ChatGPT does is designed to show how everyone else has followed up on this topic. And, in general, people do not tend to take strong stances on web-based content or introductory articles.

Fundamentally, ChatGPT will do two things. First, it will make mediocre content ubiquitous. There is no need to hire people to write an "average" post for your website anymore, as ChatGPT and the technologies designed to compete with or augment it will be able to do this easily. If your skillset is writing grammatically sound articles with little to no subject matter expertise or practical guidance, that skill is now obsolete, as status quo, often-repeated content can now be created on command. This also means that there is a huge opportunity to combine ChatGPT with common queries and use cases to create new content on demand. However, in doing so, users will have to be very careful not to plagiarize content unknowingly. This is an area where, just like with DALL-E, OpenAI will have to work on figuring out data lineage, trademark and copyright infringement, and appropriation of credit to support commercial use cases.

ChatGPT also struggles with what are called "hallucinations," where it makes up facts or sources because those words are physically close to the topic discussed in the various websites and books that ChatGPT draws on. ChatGPT is a text generation tool that picks words based on how frequently they show up with other words. Sometimes that result will be extremely detailed and current; other times, it will look very generic and mix up related topics that are often discussed together.

Second, this tool now provides a much stronger starting point for writers seeking to say something new or different. If your point of view is something that ChatGPT can provide in two seconds, it is neither interesting nor new. To stand out, you need to provide greater insight, better perspective, or stronger directional guidance. This is an opportunity to improve your skills or to determine where your professional skills lie. ChatGPT still struggles with timely analysis, directional guidance, practical recommendations beyond surface-level perspectives, and combining mathematical and textual analysis (i.e., doing word problems, math-related case studies, or code review), so there is still an immense amount of opportunity for people to write better.

Ultimately, ChatGPT is a reflection of the history of written text creation, both analog and digital. Like all AI, ChatGPT provides a view of how questions were answered in the past and provides an aggregate composite based on auto-completion. For topics with a basic consensus, such as how to build a product, this tool will be an incredible time saver. For topics with multiple conflicting opinions, ChatGPT will try to play both sides, or all sides, in a neutral manner. And for niche topics, ChatGPT will try to fake an answer at roughly a high school student's level of understanding. Amalgam Insights recommends that all knowledge workers experiment with ChatGPT in their realm of expertise, as this tool and the market of products built on autogenerated text will play an important role in supporting the next generation of tech.

Posted on

May 13: From BI to AI (AWS, Databricks, DataRobot, Dremio, Google, Hugging Face, Mindtech, Predibase, Privacera, Pyramid Analytics, Snowflake, Starburst, ThoughtSpot)

If you would like your announcement to be included in Amalgam Insights’ weekly data and analytics roundups, please email lynne@amalgaminsights.com.

Funding

Hugging Face Raises $100M C Round

Machine learning platform Hugging Face announced May 9 that they had raised $100M in Series C funding. Lux Capital led the round, with participation from Addition, AIX Ventures, a_capital, Betaworks, Coatue, Sequoia, SV Angel, and individual angel investors. Hugging Face will use the funding on R+D, product development, and education.

Predibase Announces $16.25M in Seed and A Round Funding, Emerges From Stealth

Predibase, a low-code machine learning platform, came out of stealth this week and announced $16.25M in Series A and seed funding. Greylock led both rounds, with participation from The Factory and angel investors. The funding will go towards hiring additional engineering and ML talent, as well as the go-to-market strategy and bringing Predibase into general availability.

Pyramid Analytics Closes $120M E Round

Pyramid Analytics, a decision intelligence and augmented analytics platform, announced on May 9 that they had closed $120M in Series E financing. HIG Growth Partners led the round, with participation from existing investors Clal Insurance Enterprises Holdings, General Oriental Investments, and Kingfisher Capital, and new investors JVP, Maor Investments, Sequoia Capital, and Viola Growth. Pyramid will use the money on product development, continued R+D, expanding partnerships globally, and hiring to support all of these efforts.

Launches and Updates

Databricks Strengthens AWS Partnership, Announces PAYGO Lakehouse Offering

On May 10, Databricks announced a pay-as-you-go lakehouse offering on AWS, available now. Customers will be able to launch a lakehouse from the AWS Marketplace whether or not they’ve used Databricks before; they can set up a Databricks account from within AWS, and even consolidate their Databricks usage bills into their AWS billing.

Google Serves Up LaMDA 2 Demos in its AI Test Kitchen

At Google I/O, Google unveiled its new Language Model for Dialog Applications (LaMDA) 2 conversational AI model, along with AI Test Kitchen, an app to demonstrate use cases for LaMDA 2. LaMDA is a generative text model, aiming to produce relevant textual responses based on patterns it recognizes from linguistic input. While no date has been announced for general availability, Google plans to open up LaMDA access to small groups of people.

Mindtech Chameleon Now Generates Diverse “Actors” to Address Bias

Mindtech Global, a synthetic data creation platform, has announced updates to its Chameleon platform. Chameleon 22.1 lets users automatically generate millions of “actors” in virtual worlds, creating privacy-compliant synthetic visual data for training computer vision systems. To address known bias issues, the Chameleon actors now have a range of configuration options, including height, build, age, skin tone, and clothing and hairstyle options.

Privacera Announces Release of Platform 6.3 and PrivaceraCloud 4.3

Privacera, a data access governance company, announced the release of Privacera Platform 6.3 and PrivaceraCloud 4.3. New features include extending Attribute-Based Access Control across all supported data and analytical sources, supplementing existing role-based and tag-based access control mechanisms, along with enhanced support for Google BigQuery, Starburst Enterprise, Databricks, and Snowflake.

ThoughtSpot Expands the Modern Analytics Cloud to Help Companies Dominate the Decade of Data

At their Beyond 2022 event this week, ThoughtSpot announced numerous new capabilities for their cloud analytics platform. Key updates include integrations and connectors for Amazon Redshift Serverless, Snowflake Data Marketplace, Databricks Partner Connect, Dremio, and Starburst Galaxy; templates, code samples, and ThoughtSpot Blocks to accelerate the development process; and automation capabilities to trigger actions based on analytics.

Events

DataRobot AIX 22 Celebrates AI Innovation, June 7-8, 2022

On June 7 and 8, DataRobot will host DataRobot AIX 22, a free virtual event to explore innovation in AI, analytics, and data science. Featured speakers include DataRobot executives Dan Wright, Debanjan Saha, Nenshad Bardoliwalla, and Michael Schmidt. Register for the event at the DataRobot AIX 22 website.

Posted on

April 29: From BI to AI (Akamai, Arize AI, Baseten, Credo AI, Enveil, Exafunction, expert.ai, HPE, Informatica, Linode, RelationalAI, Salesforce, Snowflake, Synthesis AI, Tableau)

If you would like your announcement to be included in Amalgam Insights’ weekly data and analytics roundups, please email lynne@amalgaminsights.com.

Funding

Baseten Raises $12M A Round, Launches Product to Turn Machine Learning Models into Apps

Baseten, a platform that turns machine learning models into web applications, announced on April 26 that they had raised $20M in combined seed and Series A funding. The seed round of $8M was co-led by Greylock and South Park Commons Fund, while the A round was led by Greylock. Baseten also formally launched their product into public beta. Amalgam Insights’ Hyoun Park was quoted in Baseten’s press release announcing the funding and launch.

Enveil Secures $25M B Round

Enveil, a data privacy company, announced that it had closed $25M in Series B financing this week. USAA led the oversubscribed round, with participation from existing investors Bloomberg Beta, Capital One Ventures, C5 Capital, Cyber Mentor Fund, DataTribe, GC&H, In-Q-Tel, Mastercard, and 1843 Capital. Enveil plans to put the funds towards product development, expanding sales, and marketing.

Exafunction Raises $25 Million Series A Funding Led by Greenoaks

Exafunction, a deep learning infrastructure company, announced $25M in Series A financing. Greenoaks led the round, with Founders Fund participating. The funding will go towards R+D, customer training, and performance improvements, in particular around GPU virtualization.

RelationalAI Raises $75M B Round, Bringing Total Funding to $122M

On April 26, RelationalAI, a knowledge graph system builder, announced that they had closed $75M in a Series B funding round. Tiger Global led the round, with participation from existing investors Addition, Madrona Venture Group, and Menlo Ventures. The funding will go towards R+D and go-to-market activities. As part of the transaction, Bob Muglia, former CEO of Snowflake, has joined the RelationalAI board.

Synthesis AI Raises $17M in Series A Funding

On April 28, Synthesis AI, a synthetic data platform, announced that they had closed $17M in Series A financing. New investor 468 Capital led the round, with participation from additional new investors Sorenson Ventures and Strawberry Creek Ventures and existing investors Bee Partners, iRobot Boom Capital, Kubera Venture Capital, and PJC. Synthesis AI will use the funds for hiring, product development, and R+D at the intersection of AI and computer-generated imagery (CGI).

Launches and Updates

Akamai Debuts Linode Managed Database Service

On April 25, Akamai launched a managed database service powered by Linode, which Akamai acquired back in March. The service handles common maintenance and deployment tasks associated with database management, allowing for better performance and uptime. The launch reflects Akamai’s expansion into database management services to go along with its existing networking capabilities.

Arize AI Launches Bias Tracing to Address Algorithmic Bias

Arize AI, a machine learning observability platform, launched Arize Bias Tracing this week. The tool helps data science and machine learning teams monitor models for bias, discover what features and cohorts contribute to bias in a given model, and mitigate the impact of bias on said model.

Credo AI Reveals Responsible AI Governance Platform

Credo AI, a governance solution for AI, launched its Responsible AI platform this week. Key features include a pipeline that ingests assessments from Credo AI Lens and translates them into risk scores across common AI risk areas; a repository for critical governance artifacts; and the ability to assess AI risk and compliance of third-party AI and ML models via a dedicated portal.

Expert.ai Releases New Version of its Platform, Provides “Knowledge Model” AI Accelerators

Expert.ai, a natural language hybrid AI platform, announced the latest version of its platform on April 26. Notable features include a set of "Knowledge Models," which are pre-configured rules-based models that can extract entities, insights, and relationships from text within specific domains. Included domains with this release are finance, life sciences, environmental, social, and governance (ESG), personally identifiable information (PII), and behavioral and emotional traits. In addition, Azure has been added as a deployment environment, Boomi and Qlik can now be used to connect to third-party applications, and custom Python and Java can be used in natural language workflow orchestrations.

Hewlett Packard Enterprise Reveals Machine Learning Development Environment

On April 27, Hewlett Packard Enterprise debuted their HPE Machine Learning Development System. The System consists of HPE’s machine learning platform, HPE Machine Learning Development Environment, along with hardware and software to optimize development, training, and deployment of machine learning models.

Tableau Expands Analytics Embedding Capabilities

At Salesforce TrailblazerDX ’22, Tableau announced additional capabilities around embedding analytics. These include Web Data Connector 3.0, which allows data developers to build connectors from their data to a web application; v3 of Tableau’s Embedding API, so Tableau analytics can be integrated into any application using web components; Embeddable Web Authoring, the ability to edit Tableau visualizations within any application or web portal; Connected Apps for Seamless Authentication, which permits Tableau to be integrated into developers’ applications with proper authentication; and Tableau Actions with Salesforce Flow, the ability to trigger workflows in Flow directly from a Tableau dashboard.

Partnerships

Informatica and Snowflake Integrate Governance Features

Informatica and Snowflake continue to grow their partnership. This week, Informatica announced an integration between Snowflake Data Cloud’s native governance features and Informatica’s Cloud Data Governance and Catalog. The integration will provide a dashboard allowing for easy monitoring of data governance, access controls, and end-to-end lineage.

Posted on

April 8: From BI to AI (Accenture, Ascend.io, Atlassian, Confluent, Databricks, Dataiku, data.world, Deloitte, Elastic, Fivetran, Gamma Soft, Google Cloud, Instaclustr, LightBeam.ai, MongoDB, Monitaur, Neo4j, NetApp, ReadySet, Redis, Starburst, Talend)

If you would like your announcement to be included in Amalgam Insights’ weekly data and analytics roundups, please email lynne@amalgaminsights.com.

Funding

Ascend.io Announces $31 Million Series B Funding Round

On April 6, Ascend.io, a data and analytics engineering automation platform, announced that they had secured a $31M Series B round of financing. Tiger Global led the round, with participation from existing investor Accel and new investor Shasta Ventures. The funding will be used to expand go-to-market efforts and target geographies, as well as broadening the scope of supported cloud platforms.

data.world Announces $50M Series C Funding Round

On April 5, data.world, an enterprise data catalog, announced a $50M Series C funding round. Goldman Sachs' Growth Equity group led the funding round. Additional contributions came from Prologis Ventures, Sandbox Insurtech Ventures, Shasta Ventures, Vopak Ventures, and individual angel investors. Funding will go towards hiring, global geographic expansion, and product development.

ReadySet Launches, Reveals $29M in Early Funding

On April 5, ReadySet, a SQL caching engine, officially launched, revealing $29M in seed round and Series A funding. Index Ventures led the A round, with participation by Amplify Partners and additional angel investors. The funding will go towards product development.

Product Launches and Updates

Welcoming Atlassian Data Lake and Atlassian Analytics

Atlassian announced new capabilities of its Atlassian Platform at its Team 22 event earlier this week. The Atlassian Data Lake will gather data from the various Atlassian apps in one place for convenient querying. At launch, the focus is on data from Jira Software and Jira Service Management, but Atlassian plans to include its other apps soon. As for actually analyzing the data, Atlassian used technology from its Chartio acquisition last year to build Atlassian Analytics, which connects to the Atlassian Data Lake and generates analytics and visualizations both of the data in the data lake and of that data combined with third-party data.

Databricks Makes Delta Live Tables Generally Available

On April 5, Databricks announced that Delta Live Tables was now generally available. Delta Live Tables automates repetitive, time-consuming parts of data pipeline operation and maintenance so that data engineers and analysts can focus on the actual data.

LightBeam.ai Debuts its Data Privacy Automation Platform

LightBeam.ai came out of stealth mode April 6, announcing the general availability of its data privacy automation platform. LightBeam consolidates and automates data compliance processes that are often manually managed at the moment, helping companies abide more strictly by data privacy regulations.

Monitaur Launches GovernML Addition to ML Assurance Platform

Monitaur, an AI governance company, announced the general availability of GovernML on April 6, part of its ML Assurance suite to monitor machine learning models for bias, risk, and other behavioral issues. GovernML creates a system of record for governance across the AI lifecycle, with key features such as policy management, technical monitoring, and human oversight of model performance and results.

Partnerships and Acquisitions

Debuting the Data Cloud Alliance

On April 5, Google Cloud joined with nearly a dozen other companies (Accenture, Confluent, Databricks, Dataiku, Deloitte, Elastic, Fivetran, MongoDB, Neo4j, Redis, and Starburst) to found the Data Cloud Alliance, committing to making data more accessible and mobile across a wide variety of environments. Alliance members will strive to reduce complexity in data environments by providing infrastructure, APIs, and support for data portability and accessibility between platforms, working towards building digital data standards the members will support in common.

NetApp Announces Intent to Acquire Instaclustr

NetApp announced April 7 that it had signed a definitive agreement to acquire Instaclustr, a database and data app deployment service. While NetApp has long been known for its data storage capabilities, its acquisition of Instaclustr is the latest in a series of procurements reflecting a significant expansion of the data management capabilities among its offerings.

Talend Acquires Gamma Soft

On April 7, data integration company Talend announced that it had acquired Gamma Soft, a change data capture company. Combining Gamma Soft’s change data capture capabilities with Talend’s data integration and management functionality will help Talend customers process data changes more quickly.

Posted on

March 18: From BI to AI (Alteryx, Databricks, DataRobot, Dataiku, Domino Data Lab, H2O.ai, Microsoft Azure, Redis, Salesforce, Snowflake, Synthetaic, Tecton)

If you would like your announcement to be included in Amalgam Insights’ weekly data and analytics roundups, please email lynne@amalgaminsights.com.

Featured: Healthcare Data Announcements During HIMSS22

In the wake of Databricks rolling out its healthcare-specific data lakehouse last week, along with the Healthcare Information and Management Systems Society conference (HIMSS22) happening this week, several other enterprises made healthcare data-related announcements in the last few days.

H2O.ai Reveals Portfolio of Healthcare AI Apps

On March 11, H2O.ai announced an expansion of its healthcare data capabilities, offering 40 AI applications within Population Health, Precision Medicine, Public Health, and Intelligent Supply Chain. Notable apps include a COVID-19 hospital occupancy simulator, COVID-19 forecasting, a gene mutation risk assessment app, and a route optimizer app for supply chain support for health manufacturers.

Microsoft Announces General Availability of Azure Health Data Services, Updates to Microsoft Cloud for Healthcare

On March 15, Microsoft announced the general availability of Azure Health Data Services, along with improvements for Microsoft Cloud for Healthcare. Key features of Azure Health Data Services include the ability to securely transfer protected health information (PHI) in the cloud, along with connecting it to other apps within the Microsoft Cloud for Healthcare. Notable relevant updates to Microsoft Cloud for Healthcare include Text Analytics for Health to improve clinical and operational insights by extracting insights from unstructured medical data and transforming it into Fast Healthcare Interoperability Resources (FHIR) format.

Salesforce Announces Improvements to Customer 360 For Health

On March 16, Salesforce announced new capabilities within Salesforce's Customer 360 for Health. Of note on the data side, Salesforce debuted Patient Unified Health Scoring to provide insights into the best courses of action for a given patient, integrated with the Patient Data Platform, which allows medical data to be appropriately connected while respecting HIPAA and other regulations and governance.

Snowflake Launches Healthcare & Life Sciences Data Cloud

On March 17, Snowflake launched its Healthcare and Life Sciences Data Cloud, aiming to eliminate data silos and allow for appropriate sharing and use of sensitive medical data while respecting regulations and governance. Key features include enhanced data governance, the ability to ingest and run analytics on HL7/FHIR messages, and support for analyzing numerous types of unstructured medical data.

Funding

Synthetaic Secures $13M Series A Financing

Synthetaic, an AI-based image classifier, has raised $13M in Series A funding. Lupa Systems led the round, with additional participation from Betaworks, Booz Allen Hamilton, Esri, and TitleTown Tech. The funding will be used for hiring, R+D, and strategic partnerships.

Product Launches and Updates

DataRobot AI Cloud 8.0 Now Available

On March 17, DataRobot debuted AI Cloud 8.0. Key enhancements include support for Automated Time Series in DataRobot’s AI App Builder, the availability of Continuous AI in on-prem environments, and new integrations with Microsoft Active Directory and Scoring Code for Snowflake. AI Cloud 8.0 is available now.

Dataiku Debuts Cloud Stack Accelerator on AWS

On March 16, Dataiku revealed their no-code cloud stack accelerator on AWS. The accelerator is a way to rapidly deploy and manage Dataiku on AWS, bringing together Amazon and Dataiku resources for machine learning projects. Notable capabilities in the partnership include the ability to connect, transform, and analyze datasets hosted on Amazon Redshift within Dataiku; build and scale Dataiku AutoML machine learning models running on Amazon Elastic Kubernetes Service; and include computer vision and text analytics within Dataiku projects using AWS’ machine learning services.

Partnerships

Alteryx Updates Partner Program

Alteryx announced its updated partner program, marking a shift in its go-to-market strategy to emphasize the acceleration partners can provide in implementing analytics projects. Notable changes include three new tiers for partners (Registered, Select, and Premier), standardized benefits that increase with partner-initiated projects, a new role-based training curriculum and certifications, and global guidelines for engagement.

Databricks Introduces Brickbuilder Solutions, Extending Its Partner Program

Databricks established Brickbuilder Solutions, an extension of its partner program. Brickbuilder Solutions includes a number of consultants who have built solutions on the Databricks Lakehouse Platform, helping their clients accelerate their data-driven digital transformation projects.

Tecton and Redis Integrate for Realtime Feature Store Access

Tecton, an enterprise feature store, and Redis, a realtime data platform, announced a partnership late last week. Tecton has integrated its feature store with Redis Enterprise Cloud to provide customers realtime feature serving for high-volume, low-latency use cases such as approving credit card transactions or fraud detection.

Hiring

Domino Data Lab Welcomes Former Microsoft Chief Data Analytics Officer John Kahan as Advisor

Domino Data Lab has welcomed John Kahan as a strategic advisor to its CEO and Board of Directors. Kahan will provide guidance on go-to-market and product development. Most recently, Kahan was Chief Data Analytics Officer at Microsoft, where he held numerous roles in the data space over nearly two decades. Kahan also serves on several other companies' boards and advises predictive analytics company Equinauts, petroleum and renewable energy company US Venture, and the Novartis Foundation on AI in public health, all areas intersecting with markets of interest to Domino.

Posted on

Zoom, Five9 Call Off the Wedding: What’s Next?

Reluctant shareholders have put the kibosh on Zoom’s intention to buy contact-center-as-a-service provider Five9. The deal would have amounted to almost $15 billion. 

But it was an all-stock deal. As it turns out, Five9 shareholders weren’t such fans of that structure. Zoom’s stock has declined 25% since the video conferencing behemoth announced in July it would buy Five9. Those share prices were not shaping up in Five9 investors’ favor. So, on Sept. 30, they voted against Zoom’s proposed, $14.7 billion purchase. As a result, Zoom and Five9 announced they had mutually terminated the acquisition.

Zoom had sought out Five9 for its cloud contact center expertise. Throughout the pandemic, organizations worldwide have relied on Zoom to keep their teams intact through video conferencing. To help users through uncertain times, Zoom has understood that it needs to deliver even more functional and appealing features, and it made good on that at Zoomtopia 2021. Among its new platform announcements was the Video Engagement Center, which contains important contact center capabilities. Notably, though, Zoom debuted that component separately from any Five9 announcements. Did the company see the Five9 shareholder rejection coming two weeks later, or was it already planning to incorporate Five9 into its VEC portfolio? Either way, the answer might not matter much. Zoom and Five9 say they will still work together.

“We will continue to partner with Zoom like we did before, and just as we partner with other UC providers like Microsoft Teams, Nextiva, Mitel, and others,” Five9 told analysts in an Oct. 1 statement. “This allows us to offer customers the choice they often crave when looking at building out their [customer experience] ecosystem.”

Zoom CEO Eric Yuan said his company will “maintain our valued existing contact center partnerships with companies like Five9, Genesys, NICE inContact, Talkdesk, and Twilio.” 

Amalgam Insights believes Five9 did indeed present Zoom with an attractive acquisition target. The 20-year-old Five9 stands out as a pioneer in cloud contact center. It was among the first contact center developers to understand the need for multimodal chat — not just phone conversations, which often frustrate users enduring iffy interactive voice response, but giving agents and customers the ability to communicate over email, social media, chat function, and text. It also homed in on the importance of easy-to-access analytics to help improve the customer experience in real-time. Combining Zoom and Five9 would have added more heft to Zoom’s offerings. Nonetheless, for its part, Five9 is doing just fine on its own as a standalone public company (fluctuating stock prices notwithstanding); it boasts a $10.6 billion market capitalization. While a union between it and Zoom would have created a global giant, both companies can fuel success by partnering with one another — again, as they say they will do. 

Even so, Zoom needs to diversify. The company tripled its value over the past two years, thanks in no small part to COVID-19. Demand for its services led to an extra $50 billion (and counting) in its market cap and spending power. Now is the time for Zoom to prove it can hold onto, and keep powering, its dominant position. For sure, the company made waves at its Zoomtopia 2021 event in mid-September, giving Amalgam Insights analysts reason to predict the video conferencing provider is aware of the mandate in front of it. Zoom debuted much-needed enhancements — from live translation and transcription and Smart Gallery improvements to hot desking and events hosting — that promise to make video conferencing far more than a pandemic-related enabler. Zoom appears to be hyperfocused on innovation.

Still, if it decided to renegotiate a Five9 purchase with cash replacing some stock, the pairing would make a natural fit despite a first failed attempt. If that doesn’t happen, Zoom should keep an eye out for other unified communications and contact center players that would beef up its platform in unique ways. Five9, meanwhile, may be courted by the likes of Salesforce or Adobe as the contact center becomes an even more ferocious battleground for supporting customer centricity. Both of those companies are high on the list of vendors needing to augment their video conferencing platforms with differentiated integrations, and both have deep pockets.

Overall, even if Zoom does not retool its bid, or if Five9 moves on to greener pastures, both Zoom and Five9 must stay trained on the future. Hybrid work represents the next major paradigm for organizations, and it’s a challenging one for them to navigate. They have to accommodate in-office and remote workers, and many of those people will flow between the two modes. This calls for stringent attention to concerns including data protection, yet requires easy-to-use tools and omnipresent support for the shifts between in-person and at-home working. The vendors that make hybrid work simple and smooth are the ones that will prevail. As Zoom continues its mission to “make video communications frictionless and secure,” it must continue to lead both with the innovation and flexibility that made it a surprise hit in 2020. And regardless of whether Five9 is acquired or remains an independent vendor, the demand for omnichannel and preferred channel support will stick. As such, Five9 will keep evolving cutting-edge technologies to improve the state of customer interaction.

Posted on

July 23: From BI to AI (Cube Dev, Dremio, Google Cloud, Julia Computing, Lucata, Palantir, Redpoint, Sisense, Vertica, Zoom)

If you would like your announcement to be included in these data platform-focused roundups, please email lynne@amalgaminsights.com.

Product Launches and Updates

Dremio Launches SQL Lakehouse Service to Accelerate BI and Analytics

On July 21, at Subsurface Live, Dremio debuted Dremio Cloud, a cloud-native, SQL-based data "lakehouse" service. The service marries aspects of data lakes and data warehouses into a SQL lakehouse, enabling high-performance SQL workloads in the cloud and expediting the process of getting started. Dremio Cloud is now available in the AWS Marketplace.

Google Cloud Announces Healthcare Data Engine to Enable Interoperability in Healthcare

On July 22, Google Cloud announced Healthcare Data Engine, now in private preview. Healthcare Data Engine integrates healthcare and life sciences data from multiple sources such as medical records, claims, clinical trials, and research data, enabling a more longitudinal view of patient health along with advanced analytics and AI in a secure environment. With the introduction of Amazon HealthLake last week, it's clear that expanding healthcare and life sciences analytics capabilities continues to be a top priority among data services providers.

Palantir Introduces Foundry for Builders

Dipping a toenail into the waters outside its usual customer base of large, established organizations, Palantir announced the launch of Foundry for Builders, which provides access to the Palantir Foundry platform for startups under a fully managed subscription model. Foundry for Builders is starting off with limited availability; the initial group of startups provided access are all connected to Palantir alumni, with the hope of expanding to other early-stage "hypergrowth" companies down the road.

Redpoint Global Announces In Situ

On July 20, Redpoint announced In Situ, a service that provides data quality and identity resolution. In Situ uses Redpoint’s data management technology to supply identity resolution and data integration services in real time within an organization’s virtual private cloud, without needing to transfer said private data across the internet.

Sisense Announces Sisense Extense Framework

On July 21, Sisense debuted the Sisense Extense Framework, a way to deliver interactive analytics experiences within popular business applications. Initially supported apps include Slack, Salesforce, Google Sheets, Google Slides, and Google Chrome, now available on the Sisense Marketplace. The Sisense Extense Framework will be released more broadly later this year to partners looking to build similar “infusion” apps.

Vertica Announces Vertica 11

On June 20, at Vertica Unify 2021, Vertica announced the Vertica 11 Analytics Platform. Key improvements include broader deployment support, strengthened security, increased analytical performance, and enhanced machine learning capabilities.

Funding

Cube Dev Raises $15.5 Million to Help Companies Build Applications with Cloud Data Warehouses

On July 19, Cube Dev announced that they had raised $15.5M in Series A funding. Decibel led this round, with participation from Bain Capital Ventures, Betaworks and Eniac Ventures. The funding will be used to scale go-to-market activities and accelerate R+D on its first commercial product. Cube Dev also brought aboard Jonathan E. Cowperthwait of npm as Head of Marketing and Jordan Philips of Dashbase as Head of Revenue Operations to support their commercial expansion.

Julia Computing Raises $24M in Series A, Former Snowflake CEO Bob Muglia Joins Board

Julia Computing announced the completion of a $24M Series A funding round on July 19. Dorilton Ventures led the round, with participation from Menlo Ventures, General Catalyst, and HighSage Ventures. Julia Computing will use the funding to further develop JuliaHub, its secure, high-performance cloud platform for scientific and technical modeling, and to grow the Julia ecosystem overall. Bob Muglia, the former CEO of Snowflake, joined the Julia Computing board on the same day.

Lucata Raises $11.9 Million in Series B Funding to Introduce Next-Generation Computing Platform

Lucata, a platform to scale and accelerate graph analytics, AI, and machine learning capabilities, announced July 19 that it had raised $11.9M in Series B funding. Notre Dame, Middleburg Capital Development, Blu Ventures Inc., Hunt Holdings, Maulick Capital, and Varian Capital all participated in the round. The funding will fuel an “aggressive” go-to-market strategy.

Acquisitions

Zoom to Acquire Five9

On July 18, Zoom announced that it had entered into a definitive agreement to acquire Five9, a cloud contact center service provider, for $14.7B in stock. In welcoming Five9 to the Zoom platform, Zoom expects to build a better "customer engagement platform," complementary to its Zoom Phone offering. Later in the week, Zoom also announced the launch of Zoom Apps and Zoom Events, further enhancing the collaboration capabilities of the primary Zoom video communications suite.

Posted on

Salto Raises $42 Million to Reduce Technical Debt of Enterprise Infrastructure

Key Stakeholders: Chief Information Officers, Chief Technology Officers, Vice President/Director/Manager of Platform Engineering, Vice President/Director/Manager of Operations, System Architects, Product Managers, Product Marketing Managers, IT Finance, Software Asset Managers, Sales Operations, Marketing Operations

Why It Matters: As Software as a Service balloons toward a projected $275 billion global market by 2025 and the typical Global 5000 enterprise supports over 1,000 apps on its network, the challenge of SaaS configuration is increasingly linked to employee onboarding and productivity. Just as the battle for enterprise mobility security was a core concern of the 2010s, the battle for SaaS app governance will be a core IT concern of the 2020s.

Key Takeaway: IT departments must coordinate enterprise architects, security and governance teams, and software asset management personnel to ensure that all major SaaS applications considered mission-critical have well-governed configuration testing and management capabilities.

About the Funding Round

On May 19, 2021, application configuration platform Salto announced a $42 million B round led by Accel, with participation from Salesforce Ventures and prior investors Lightspeed Venture Partners and Bessemer Venture Partners. This round comes only seven months after a $27 million A round announced in October 2020.

With this round of funding, Salto is expected to continue developing its solution and rapidly hiring. Salto currently supports Salesforce, NetSuite, HubSpot, Workato, and Zuora. These core SaaS applications are all market leaders, but considering the breadth of additional enterprise applications currently in market, the potential value associated with Salto supporting additional solutions is obvious and massive.

What Does Salto Do and Why Is It Worth So Much?

Salto is a solution for configuring business applications in a repeatable, scalable, and governed fashion at a time when the administration of Software as a Service is becoming increasingly complicated and challenging. Salto uses DevOps-based and software development-based tools and methodologies to help enterprises support SaaS at scale.

This mindset comes from Salto’s founders, Rami Tamir, Benny Shnaider, and Gil Hoffer, who collectively founded Salto in 2019 after previously working at Pentacom, Qumranet, and Ravello. Each company ended up exiting for over $100 million, showing the type of track record that venture capital firms love to see.

Salto’s core technology is maintained as an Open Source project (https://github.com/salto-io/salto) whose toolkit includes

  • Not Another Configuration Language (NaCl… get it?), a structured language to help support and define software configurations
  • A command-line interface with operations commands, including
    • Fetch, which connects to each enterprise application and downloads current configurations for users
    • Deploy, which compares your preferred configurations to existing configurations and then creates an execution plan to reconcile them
  • A Salto extension for the VS Code IDE used to interact with NaCl-based files

The SaaS offering of Salto also supports

  • Environments, which allow for testing a service instance of an application and can be managed through the Fetch and Deploy commands
  • A Git client, which helps users push or pull configuration changes as needed
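
To make the workflow concrete, here is a minimal sketch of how a team might script the fetch-edit-deploy cycle from Python. It assumes only what is described above: that the Salto CLI exposes the Fetch and Deploy operations as "salto fetch" and "salto deploy" subcommands. The workspace path and error handling are hypothetical.

    import subprocess
    import sys

    def run_salto(command: str, workspace: str) -> None:
        """Run a Salto CLI command inside a workspace directory."""
        result = subprocess.run(["salto", command], cwd=workspace)
        if result.returncode != 0:
            sys.exit(f"salto {command} failed with code {result.returncode}")

    if __name__ == "__main__":
        workspace = "./my-salesforce-workspace"  # hypothetical workspace layout
        # Pull the application's current configuration down into NaCl files
        run_salto("fetch", workspace)
        # ... review or edit the generated NaCl files, e.g., in VS Code ...
        # Compare the edited configuration to the live one and apply the plan
        run_salto("deploy", workspace)

Wrapping the CLI this way lets the fetch-and-deploy cycle run inside existing CI/CD pipelines, which is exactly the DevOps-style discipline Salto is bringing to SaaS administration.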

So, why does this matter so much for IT?

Let’s take a step back. We have established that software is one of the greatest force multipliers for human effort in the history of the world. It is nearly impossible to get work done in large enterprises without using at least one complex enterprise software solution, such as an ERP (Enterprise Resource Planning) or CRM (Customer Relationship Management) system.

To add to this complexity, the dominant deployment mode for software is now Software as a Service, which is growing over 25% per year and drives the majority of new software purchases. Amalgam Insights estimates that the average company with 1,000 employees is running 500 applications on its network and that about 10% of those apps are centrally managed through IT as key enterprise data assets and workflow managers. These SaaS applications are being updated constantly, to the point that many vendors have given up on providing formal versions and instead simply provide agile updates. Even vendors with formal versions release new functionality and fixes on a constant basis.

In this era of immense application environments and constant change, companies can easily end up with inconsistent environments across departments and locations as they customize their software deployments with user interface preferences and specific code to match their business needs. Companies need to support their software suites based on business dependencies and make sure that core software solutions are always working for the sake of employee productivity.

Amalgam Insights believes that Salto addresses an important end-user computing management need that has not yet been fully served. At a time when everything from paper to on-premises software to hardware is being replaced by SaaS, companies have been offered SaaS operations management solutions to govern and secure licenses, software asset management to manage the inventory of applications, or SaaS expense management to reduce and optimize spend. However, these three families of SaaS management do not effectively govern and audit the configuration and administration of applications.

Salto uses NaCl to extract the metadata associated with a software configuration to provide users with a consistent taxonomy, text search, and references to make sure that companies understand what happens when they change their software configurations. Seemingly minor access or usability changes may end up unwittingly breaking business processes and interdepartmental collaboration.
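
To illustrate the idea (this is not Salto’s actual implementation), think of the extracted metadata as a reference index: walk the extracted configuration elements, record which elements reference which, and check dependencies before making a change. The file format and element names below are invented for the sketch.

    import json
    from collections import defaultdict
    from pathlib import Path

    def build_reference_index(config_dir: str) -> dict:
        """Map each configuration element to the elements that reference it."""
        referenced_by = defaultdict(list)
        for path in Path(config_dir).glob("*.json"):
            element = json.loads(path.read_text())
            for ref in element.get("references", []):
                referenced_by[ref].append(element["id"])
        return referenced_by

    # Before changing an element, check what depends on it
    # ("salesforce.Lead.Status" is a hypothetical element name):
    index = build_reference_index("./extracted-config")
    for dependent in index.get("salesforce.Lead.Status", []):
        print(f"Changing Lead.Status may break: {dependent}")

A dependency check of this sort is precisely what turns a seemingly minor configuration change into a reviewable, governable event.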

The Value Chain of Salto for Enterprise Environments

The practical result is that Salto reports customers claiming to have accelerated update times by 75%. The resulting productivity gain can be framed in several ways.

First, in terms of (value of the new solution) × (number of employees affected). This value should be based on the average revenue per employee, as employee output is measured by revenue, not compensation.

Second, one can estimate the value of avoiding technical debt and the conflicts created by multiple or broken versions in production.

Third, visibility into the full configuration and interrelationships of each software system can lead to better business process management and accelerated business change. This value may be more difficult to quantify, but it is often noticed at the executive level when businesses are trying to make changes.

Fourth, this level of visibility and auditability can lead to more rapid governance and compliance reporting as well as improved protection against security vulnerabilities related both to application configuration and to the human risks of working on misconfigured applications.


Altogether, the value of Salto quickly adds up to 1% of an employee’s annual productivity, which Amalgam Insights estimates at roughly $3,200 per year.

Hyoun Park, Chief Analyst, Amalgam Insights

It is not unreasonable to think that an employee could quickly lose an hour or two each month from NetSuite or Salesforce configuration issues, either from direct work issues or from the lineage, reporting, and security issues that follow. At the enterprise level, this quickly escalates to over $3 million for every 1,000 employees, making the business case for Salto more obvious.
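
The back-of-the-envelope arithmetic can be reproduced in a few lines. The revenue-per-employee figure below is an assumption implied by the 1%-equals-$3,200 estimate above, not a published Salto number, and 2,000 working hours per year is a standard approximation.

    # Reproduce the productivity estimate above (all inputs are assumptions)
    revenue_per_employee = 320_000   # implied by 1% of productivity = $3,200
    working_hours = 2_000            # standard approximation per year
    hourly_value = revenue_per_employee / working_hours  # $160/hour

    hours_lost_per_month = 1.7       # "an hour or two each month"
    annual_loss = hours_lost_per_month * 12 * hourly_value
    print(f"Per employee: ${annual_loss:,.0f}/year")            # ~$3,264
    print(f"Per 1,000 employees: ${annual_loss * 1_000:,.0f}")  # ~$3.3M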

This is ultimately the case that Salto is making in a SaaS-empowered world, and the case that Accel, Bessemer, Lightspeed, and Salesforce Ventures have backed with nearly $70 million.

Recommendation for Enterprise IT Departments

Amalgam Insights’ key recommendation in light of this announcement is simple: Work with enterprise architects to ensure that all major SaaS applications considered mission-critical have well-governed configuration testing and management capabilities. 

Enterprise SaaS is currently a $110 billion market that will grow to $275 billion in 2025. In light of this growth and the increasing corporate dependence on SaaS to support business processes, companies must have a solution to support effective SaaS configuration management and changes. In this world of ever-changing technical needs, IT must keep up and ensure that SaaS deployments and updates are governed just as more traditional software, hardware, network, data center, and other IT resources are.

Posted on

The CIO’s Best Friend: An Accurate IT Inventory That Underpins Organizational Transformation

Key Stakeholders: Chief Information Officers, Chief Financial Officers, IT Finance Directors and Managers, IT Procurement Directors and Managers, Accounting Directors and Managers, Telecom Expense Directors and Managers, IT Operations, IT Strategy, FinOps Directors and Managers

Why It Matters: IT will bleed cash through incomplete inventory management. Organizations need liquidity to save jobs and to survive a recessionary environment.

Top Takeaway: IT has the tools, capabilities, and obligation to create a solid inventory practice that eliminates wasteful spending while providing employees with the right resources and preparing companies for a future of digital transformation.

This blog marks the final installment in our series on managing IT in the Time of Corona with an intense focus on inventory, the foundational piece of a strong IT management practice. If IT does not have complete insight into the assets and services for which it is responsible, then everything – expenses, optimization activity, equipment assignments, etc. – is inaccurate and creates waste across the board. 

Right now, halfway through a year fraught with the impacts of COVID-19 and quarantine on a hobbled economy, cash flow is king. All organizations and departments are living or dying by this metric. As I’ve repeated in every blog and webinar since mid-April, IT holds the keys to freeing up cash for the rest of the business, and CIOs and CFOs who ignore the potential savings in IT are doing their organizations a disservice. IT has the ethical obligation to conduct the analyses that expose and correct financial waste, especially in light of the first stage of COVID IT, Survival, when corporate controls took a back seat to basic operations.

The IT Rule of 30 shows that every unmanaged IT spend category (network, cloud, telecom, mobility, SaaS, etc.) contains, on average, 30% in waste. March, April, and May were a perfect example of uncontrolled and unmanaged spend environments. Thirty percent of hundreds of thousands or even millions of dollars is a lot of money, more than enough to bolster cash flow and fund salaries. 
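
As a quick illustration of the Rule of 30 at work, the sketch below applies a 30% waste rate to a set of invented category spend figures; substitute your own numbers from the general ledger.

    # Illustrative only: apply the IT Rule of 30 (30% average waste in
    # unmanaged spend categories) to hypothetical annual spend figures
    unmanaged_spend = {
        "telecom": 1_200_000,
        "mobility": 600_000,
        "cloud": 900_000,
        "SaaS": 500_000,
    }
    WASTE_RATE = 0.30
    for category, spend in unmanaged_spend.items():
        print(f"{category}: ${spend * WASTE_RATE:,.0f} recoverable")
    print(f"Total: ${sum(unmanaged_spend.values()) * WASTE_RATE:,.0f}")

Even at this modest scale, the recoverable waste approaches $1 million per year, which is exactly the kind of cash flow the rest of this piece is about.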

And while the pandemic has created the impetus for finding cash, it’s also bringing to light any aspects of IT that have gone unmanaged. Here’s why it matters.

Amalgam Insights estimates that three-quarters of organizations in the United States have frozen their IT budgets because of COVID.

At the same time, employees who started working from home with little notice and few resources bought applications and other tools, often consumer-grade, without IT’s knowledge – and charged the company. This shadow purchasing means IT has a lot of new inventory to account for, track, and rein in wherever possible. Cleaning up known and shadow inventory will undoubtedly uncover significant amounts of unnecessary spending the organization can end or repurpose. With that in mind, let’s look at inventory through the lens of telecom, mobility, IaaS, and SaaS.

The Nitty-Gritty of Managing IT Inventory

Remember, inventory is the bedrock of expense management. It supplies all the information that feeds into invoices, contracts, and services. Without thorough, transparent, and constantly maintained inventory, the rest of an expense management practice becomes an unreliable source of data and a money-squanderer. IT experts want to avoid this and want to be good stewards of the business, but they need to gain operational and financial control of their technology portfolios. Here’s how to get started:

  1. Centralize all service orders. There’s no way for IT to know what it owns unless it knows what has been ordered. The problem is, mobility, cloud, and bring-your-own everything have led to one-off, hidden, obscured, decentralized service ordering. It’s not just about the volume of orders – it’s the number of people who can submit orders using credit cards and corporate email addresses. IT must implement a single source of truth where employees place service requests or document orders so IT can track every inventory component.
  2. Align service orders to inventory. First, IT needs to figure out whether a service order actually got completed. That sounds basic, but incomplete orders are shockingly common. Amalgam Insights finds that IT orders go uncompleted for a variety of reasons, including problems with billing systems, information getting lost in translation, and human mistakes. Before proceeding to step 3, make sure service orders and inventory match.
  3. Categorize and/or tag inventory. Assign what has been ordered to the appropriate business department and make sure that information lives in various systems throughout the organization. That way, all databases contain the same details. Repositories Amalgam Insights recommends linking to IT inventory include: HR systems, Active Directory, general ledger, single sign-on, IT asset and service management, project management, cloud management, and governance.

    IT can communicate with these systems either directly or through tagging. The overarching idea is that integration aligns inventory change and tags to employee and project changes – and nothing gets lost in the shuffle. We’re all aware of the example of an employee who leaves the organization and whose mobile phone ends up unused in a drawer, even while the cell phone carrier charges for data and service, or who sets up a processor-intensive job with five duplicate instances that never get turned off.
  4. Discover what is missing. To truly understand inventory, IT must understand who is creating the inventory. This is a constant challenge. Anyone with a credit card and an employee ID can place orders. To combat this, think not just about what you’re supporting but also about the roles that would order the technology you’re supporting. Inventory is a team effort. Knowing that, identify and speak with your key technology orderers to learn why they are ordering phones, SaaS, and other technology services. Find out why they are adding to the IT budget when IT is trying to control costs and how they are governing the services that are ordered. By doing so, IT will learn what tools it hasn’t been supplying to help employees with their productivity. Managing IT is not about stripping staff of resources – it’s about empowering them with the right platforms and equipment, while getting the most for the money spent.
  5. Track, track, and track again. First, this means tracking all inventory features. It is not enough to know that IT has ordered a phone line or a cloud service or an application – you need to know the associated features.

    Second, IT must track logins and usage for all assets and services. Cloud invoicing is like call accounting on steroids for those with a telecom background. Eliminate any and all zombie services that drain resources long after the owner has left the company or project has been completed. In the time of COVID-19, organizations especially cannot afford to pay for something they do not use.
  6. Cross-check invoices with inventory. This one is pretty basic: Invoices and inventory must match. No exceptions. When it comes to telecom, Amalgam Insights sees 80% of bills with at least one error, so there is a lot of room for mismatches between invoices and inventory. Technology vendor marketing and sales departments like to change service names, tiers, and features frequently to provide better deals, but these frequent customizations pave the way for incorrect charges. (A minimal sketch of what this cross-check can look like follows this list.)

    IT needs to vet all of its large bills (all bills, really, but especially those most prone to problems) hawkishly each month. For its part, public cloud invoicing generally has few inventory errors. But the cloud inventory problem comes into play with shadow IT, or employees placing orders outside of IT’s knowledge (but whose costs still end up on IT’s shoulders). SaaS products, like cloud, feature so much agility that tracking who is buying what and when is hard. At the same time, SaaS companies, like telecom suppliers, change pricing, features, and tiers often. A $10 plan can become a $50 plan, or vice versa, overnight.
  7. Prioritize contracts based on inventory portfolio. This essentially boils down to bringing together two worlds – that of inventory and that of sourcing/procurement. In an ideal scenario, the people in these departments will have a strong working relationship and pool their strengths for the good of the organization.

    Frankly, inventory folks do not care to negotiate contracts and sourcing/procurement managers do not want to track inventory. Melding certain aspects of each unit into the same set of priorities will create benefits including:
  • Replacing obsolete services with new ones
  • Showing where to prioritize negotiations and price discussions
  • Using monthly and quarterly inventory trends to negotiate better rates and services

The biggest takeaway from each of these steps is that inventory is not a standalone effort within IT. It is a team undertaking. Representatives from legal and finance might even need to get involved. So, maintain strong cross-departmental relationships. Keep tabs on who has been ordering, and whether or how ordering has fundamentally changed. Learn each spend subcategory to understand who IT may need to be talking with to pivot or put the kibosh on spending. 

Conclusion: Clean Up Inventories One Category at a Time

Do not tackle every inventory category at once. You’ll quickly become overwhelmed and miss critical details. Start with one bucket and figure out which stage it’s in: Prepare, Herd Cats, or Optimize. 

“Prepare” means getting the inventory across your IT subscriptions up to speed. This includes centralizing service order data, aligning the service orders to inventory, and then categorizing the inventory.

“Herd Cats,” the intermediate stage, requires doing discovery to understand how IT has changed since March, when most organizations in the United States started shifting to remote work. Knowing how resources have been reallocated, added or removed will contribute to contract renewal talks and/or any difficult discussions with vendors. Checking expense management, Single Sign-On, and invoice management solutions will provide guidance on what is missing from the IT view of inventory.

“Optimize” takes place after the first two phases, and ties back to the aforementioned notion of prioritizing contracts, vendors, and spend categories based on inventory portfolio.

This process needs to be conducted methodically, one IT category at a time. There’s a lot of cleanup to do as a result of all the changes caused by COVID. Give the IT department one to three months to accomplish this. I recommended starting this process back in early April to our advisory clients and in late April to our webinar attendees. If you haven’t started yet, clean up the inventory now to take advantage of COVID-related amnesty and goodwill that currently exists across IT vendors.

Get your inventory clean and you’ll save yourself, your colleagues, and the organization a lot of heartache as we all face some hard times. And cheers to you, IT, for doing so much to free up cash, and save and bring back jobs. In many ways, you are unsung heroes. 

***If you are seeking outside guidance on finding solutions to help manage your IT environment, Amalgam Insights is here to help. Click here to schedule a consultation.

***To continue your trek as an IT hero, join us at TEM Expo, which is still available at no cost until August 12 to learn more about how to prepare for COVID IT and take immediate action to cut costs. I especially recommend watching Andi Pringle’s session on The Art of Inventories, Robert Bracco and Dana Risley’s session on IT Financial Management, Shane Hansen’s session on Cash Management, Robert Lee Harris’ session on Cloud Savings Cost Management, and our Executive Panel on the Future of the Technology Expense Management Market.

***And if you’d like to learn more about this topic now, please watch the Amalgam Insights webinar on Inventory.