
This Week in Enterprise Tech, Week 3

This Week in Enterprise Tech, brought to you by the DX Report’s Charles Araujo and Amalgam Insights’ Hyoun Park, explores six big topics for CIOs across innovation, the value of data, strategic budget management, succession planning, and enterprise AI.

1) We start with the City of Birmingham, which is struggling with its SAP-to-Oracle migration. We discuss how this IT project has shifted from the promise of digital transformation to the reality of survival mode, and the danger of mistaking core services for innovation.

Article link: https://www.theregister.com/AMP/2024/02/28/birmingham_city_council_to_spend/

2) We then take a look at Salesforce’s earnings, where Data Cloud is the powerhouse of the quarter and CIOs are proving the value of data with their pocketbooks and the power of the purse. We break down Salesforce’s earnings chart.


3) We see NVIDIA’s success in AI as a sign that CIO budgets are changing. Find out about the new trend of CIO-led budgets that are independent of the traditional IT budget, as well as the framework from Charles’ first book, The Quantum Age of IT, for separating the efficiency bucket from the innovation bucket.

Article link: https://www.wsj.com/articles/corporate-ai-investment-is-surging-to-nvidias-benefit-5611ffc5?mod=djemCIO

4) One of the hottest companies in enterprise software sees a big leadership change, as Frank Slootman steps down from Snowflake and Sridhar Ramaswamy, who joined through the Neeva acquisition, takes over. We discuss why this is a good move to avoid stagnation and how to manage bets on innovation.

Article link: https://www.cnbc.com/amp/2024/02/28/new-snowflake-ceo-says-ai-will-keep-him-busy-for-many-years-to-come.html

5) Continuing the trend of innovation management, we talk about what Apple’s exit from the electric car business means for managing innovative moonshots and what CIOs often miss in setting metrics around leadership and innovation culture.

Article link: https://www.nytimes.com/2024/02/28/business/dealbook/apple-car-project-to-drive-wider-innovation.html?referringSource=articleShare&smid=nytcore-ios-share

6) And finally, we talk about the much-covered Google Gemini AI mistakes. We think the errors themselves fall within the range of issues that we’ve seen from other large language models, but we caution that the phrase “Eliminate Bias” should be a massive red flag for AI projects.

Article link: https://www.theverge.com/2024/2/28/24085445/google-ceo-gemini-ai-diversity-scandal-employee-memo

This Week In Enterprise Tech is hosted by:

Charles Araujo of The DX Report and

Hyoun Park of Amalgam Insights


This Week in Enterprise Tech, Week 2

In Week 2 of TWIET, Charles Araujo and Amalgam Insights’ Hyoun Park take on the following topics and why they matter to the CIO Office.


First, we discuss the emergence of the AI app layer and what this means for enterprise IT organizations. It is not enough to think of AI simply in terms of which models are being used; enterprises must also consider the augmentation, tuning, app interfaces, maintenance, and governance of AI.

Second, we dig into KKR’s $3.8 billion acquisition of VMware’s End User Computing business and what this means both for the EUC business and for VMware customers as a whole, now that the market leader in virtualization is owned by one of the best money makers in the tech industry, Broadcom’s Hock Tan.

Third, we explore NVIDIA’s quarterly earnings by going beyond the obvious growth of data center sales of GPUs. What do the rest of NVIDIA’s sales say about the current state of Cloud FinOps and compute investments in areas such as gaming and smart autos?

And finally, we consider the nature of trust on the internet based on a recent Wired report that explores the use of robots.txt. You probably know this file best as a tool for telling Google which parts of your site not to crawl. But what does it mean as more and more spiders seek to automate the scraping of all your web-accessible intellectual property?
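
For readers who want to poke at this themselves, here is a minimal sketch, using only Python’s standard library, of how to check what a site’s robots.txt allows for different crawlers. The site URL and user-agent names below are illustrative examples, not recommendations.

```python
# Minimal sketch: check what a site's robots.txt allows for a given crawler,
# using only the Python standard library. The site URL and user-agent names
# below are illustrative examples.
from urllib.robotparser import RobotFileParser

SITE = "https://example.com"          # replace with your own site
CRAWLERS = ["Googlebot", "GPTBot"]    # a search crawler and an AI-training crawler

parser = RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()                          # fetches and parses the live robots.txt

for agent in CRAWLERS:
    allowed = parser.can_fetch(agent, f"{SITE}/research/whitepaper.html")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```

Note that robots.txt is purely advisory: nothing forces a spider to honor it, which is exactly why the Wired piece frames it as a question of trust.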


Instant Mediocrity: a Business Guide to ChatGPT in the Enterprise

In June of 2023, we are firmly in the midst of the highest of hype levels for Generative AI, as ChatGPT has taken over the combined hype of the Metaverse, cryptocurrency, and cloud computing. We now face a deluge of self-proclaimed “prompt engineering” experts who can provide guidance on the AI tools that will supposedly make your life easier. At the same time, we are running into a cohort of technologists who warn that AI is only one step away from achieving full sentience and taking over the world as an apocalyptic force.

In light of this extreme set of hype drivers, the rest of us face some genuine business concerns associated with generative AI. But our issues are not in worshipping our new robot overlords or in the next generation of “digital transformation” focused on an AI-driven evolution of our businesses that lays off half the staff. Rather, we face more prosaic concerns regarding how to actually use Generative AI in a business environment and take advantage of the productivity gains that are possible with ChatGPT and other AI tools.

Anybody who has used ChatGPT in their areas of expertise has quickly learned that ChatGPT has a lot of holes in its “knowledge” of a subject that prevent it from providing complete answers, timely answers, or productive outputs that can truly replace expert advice. Although Generative AI provides rapid answers with a response rate, cadence, and confidence that mimic human speech, it is often missing either the context or the facts to provide the level of feedback that a colleague would. Rather, what we get is “instant mediocrity,” an answer that matches what a mediocre colleague would provide if given a half-day, full day, or week to reply. If you’re a writer, you will quickly notice that the essays and poems that ChatGPT writes are often structurally accurate but lack the insight and skill needed to write a university-level assignment.

And the truth is that instant mediocrity is often a useful level of skill. If one is trying to answer a question that has one of three or four answers, a technology that is mediocre at that skill will probably give you the right answer. If you want to provide a standard answer for structuring a project or setting up a spreadsheet to support a process, a mediocre response is good enough. If you want to remember all of the standard marketing tools used in a business, a mediocre answer is just fine. As long as you don’t need inspired answers, mediocrity can provide a lot of value.

Here are a few things to consider as your organization starts using ChatGPT. Just like when the iPhone launched 16 years ago, you don’t really have a choice on whether your company is using ChatGPT or not. All you can do is figure out how to manage and govern the use. Our recommendations typically fall into one of three major categories: Strategy, Productivity, and Cost. Given the relatively low price of ChatGPT both as a consumer-grade tool and as an API, where current pricing is typically a fraction of the cost of doing similar tasks without AI, the focus here will be on strategy and productivity.

Strategy – Every software company now has a ChatGPT roadmap. And even mid-sized companies typically have hundreds of apps under management. So, now there will be 200, 500, or however many potential ways for employees to use ChatGPT over the next 12 months. Figure out how GPT is being integrated into the software and whether GPT is being used directly to process data or indirectly to help query, index, or augment data.

Strategy – Identify the value of mediocrity. For the average large enterprise, mediocrity in query-writing or indexing is often a much higher standard than the mediocrity of text autocompletion. Seek mediocrity in tasks where the average online standard is already higher than the average skill within your organization.

Strategy – How will you keep proprietary data out of Gen AI? Most famously, Samsung recently had a scare when it saw how AI tools were echoing and using proprietary information. How are companies ensuring both that they have not put new proprietary data into generative AI tools for potential public use and that their existing proprietary data was not used to train generative AI models? This governance will require greater visibility from AI providers into the data sources that were used to build and train the models we are using today.
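
As one hedged illustration of what a first guardrail might look like, the sketch below scrubs obvious proprietary markers from a prompt before it leaves the company. The patterns and identifiers are hypothetical; a real deployment would pair this with data loss prevention tooling and contractual controls from the AI provider.

```python
# Minimal sketch of a prompt-scrubbing guardrail: redact obvious proprietary
# markers before a prompt is sent to an external generative AI service.
# The patterns below are hypothetical examples, not a complete policy.
import re

REDACTIONS = {
    r"\bPROJ-[A-Z0-9]{4,}\b": "[REDACTED PROJECT CODE]",   # internal project codes
    r"\b\d{3}-\d{2}-\d{4}\b": "[REDACTED SSN]",            # US Social Security numbers
    r"\b[A-Z]{2}\d{6}\b": "[REDACTED EMPLOYEE ID]",        # example employee ID format
}

def scrub_prompt(prompt: str) -> str:
    """Return a copy of the prompt with known-sensitive patterns masked."""
    for pattern, replacement in REDACTIONS.items():
        prompt = re.sub(pattern, replacement, prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the PROJ-ALPHA9 roadmap owned by employee AB123456."
    print(scrub_prompt(raw))
    # -> Summarize the [REDACTED PROJECT CODE] roadmap owned by employee [REDACTED EMPLOYEE ID].
```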

Strategy – On a related note, how will you keep commercially used intellectual property from being used by Gen AI? Most intellectual property covered by copyright or patent does not allow for commercial reuse without some form of license. Do companies need to figure out some way of licensing data that is used to train commercial models? Or can models verify that they have not used any copyrighted data? Even if users have relinquished copyright for the specific social networks and websites that they initially wrote for, this does not automatically give OpenAI and other AI providers the same license to use that data for training. And can AIs own copyright? Once large content providers such as music publishers, book publishers, and entertainment studios realize the extent to which their intellectual property is at risk with AI, and somebody starts making millions with AI-enabled content that strongly resembles existing IP, massive lawsuits will ensue. If you are an original content provider, be ready to defend your IP. If you are using AI, be wary of actively commercializing or claiming ownership of AI-enabled work for anything other than parody or stock work that can be easily replaced.

Productivity – Is code enterprise-grade: secure, compliant, and free of private corporate metadata? One of the most interesting new use cases for generative AI is the ability to create working code without having prior knowledge of the programming language. Although generative AI currently cannot create entire applications without significant developer engagement, it can quickly provide specifically defined snippets, functions, and calls that may have been a challenge to search for explicitly or to find on Stack Overflow or in Git repositories. As this use case continues to proliferate, coders need to understand their auto-generated code well enough to check for security issues, performance challenges, appropriate metadata and documentation, and reusability based on corporate service and workload management policies. But this will increasingly allow developers to shift from directly coding every line to editing and proofing the quality of code. In doing so, we may see a renaissance of cleaner, more optimized, and more reusable code for internal applications as the standard for code becomes “instantly mediocre.”
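
To make that review step concrete, here is a small, hypothetical before-and-after example of the kind of issue a reviewer should catch in generated database code. The snippet is illustrative only and is not the output of any particular model.

```python
# Hypothetical example of reviewing AI-generated database code before treating
# it as enterprise-grade. The "as generated" version works but builds SQL via
# string formatting, which invites SQL injection; the reviewed version uses a
# bound parameter and documents its intent.
import sqlite3

# --- As generated (plausible but unsafe): ---
def find_employee_unsafe(conn: sqlite3.Connection, name: str):
    query = f"SELECT id, name, title FROM employees WHERE name = '{name}'"
    return conn.execute(query).fetchall()   # injectable via the name argument

# --- After human review (parameterized and documented): ---
def find_employee(conn: sqlite3.Connection, name: str):
    """Look up employees by exact name using a bound query parameter."""
    query = "SELECT id, name, title FROM employees WHERE name = ?"
    return conn.execute(query, (name,)).fetchall()
```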

Productivity – Quality, not quantity. There are hundreds of AI-enabled tools out there to provide chat, search-based outputs, generative text and graphics, and other AI capabilities. Measure twice and cut once in choosing the tools that you use. It’s better to find the five tools that matter and that maximize the mediocre output you receive than to adopt the 150 tools that don’t.

Productivity – Are employees trained on fact-checking and proofing their Gen AI outputs? Whether employees are creating text, getting sample code, or prompting new graphics and video, the outputs need to be verified against a fact-based source to ensure that the generative AI has not “hallucinated” or autocompleted details that are incorrect. Generative AI seeks to provide the next best word or the next best pixel that is most associated with the prompt that it has been given, but there are no guarantees that this autocompletion will be factual just because it is related to the prompt at hand. Although there is a lot of work being done to make general models more factual, this is an area where enterprises will likely have to build their own, more personalized models over time that are industry, language, and culturally specific. Ideally, ChatGPT and other Generative AI tools are a learning and teaching experience, not just a quick cheat.

Productivity – How will Generative AI help accelerate your process and workflow automation? Currently, automation tends to be a rules-driven set of processes that lead to the execution of a specific action. But generative AI can do a mediocre job of translating intention into a set of directions or a set of tasks that need to be completed. While generative AI may get the order of actions wrong or make other text-based errors that need to be fact-checked by a human, the AI can accelerate the initial discovery and staging of steps needed to complete business processes. Over time, this natural language generation-based approach to process mapping is going to become the standard starting point for process automation. Process automation engineers, workflow engineers, and process miners will all need to learn how prompts can be optimized to quickly define processes.
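
As a hedged illustration of that starting point, the sketch below asks a model to draft the steps of a business process for later human review. It assumes the pre-1.0 OpenAI Python SDK; the model name, prompt wording, and example process are our own illustrations, not a recommendation.

```python
# Minimal sketch: ask a generative model to draft the steps of a business
# process so a human can review and correct them. Assumes the pre-1.0 OpenAI
# Python SDK (openai<1.0); model name and prompt wording are illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def draft_process_steps(intention: str) -> str:
    """Return a numbered draft of process steps for a stated business intention."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You draft numbered business process steps for later human review."},
            {"role": "user", "content": f"Draft the steps needed to: {intention}"},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    draft = draft_process_steps("onboard a new vendor, including security review and payment setup")
    print(draft)   # a human still verifies ordering, owners, and compliance gates
```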

Cost – What will you need to do to build your own AI models? Although the majority of ChatGPT announcements focus on some level of integration between an existing platform or application and some form of GPT or other generative AI tool, there are exceptions. BloombergGPT provides a model based on all of the financial data Bloomberg has collected to help support financial research efforts. Both Stanford University’s Alpaca and Databricks’ Dolly have provided tools for building custom large language models. At some point, businesses are going to want to use their own proprietary documents, data, jargon, and processes to build their own custom AI assistants and models. When it comes time for businesses to build their own billion-parameter, billion-word models, will they be ready with the combination of metadata definitions, a comprehensive data lake, role definitions, compute and storage resources, and data science operationalization capabilities needed to support these custom models? And how will companies justify the model creation cost compared to using existing models? Amalgam Insights has some ideas that we’ll share in a future blog. But for now, let’s just say that the real challenge here is not in defining better results, but in making the right data investments now that will empower the organization to move forward and take the steps that thought leaders like Bloomberg have already pursued in digitizing and algorithmically opening up their data with generative AI.
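
For teams that want to experiment before committing to a full custom build, a minimal first step is simply hosting an open model in-house. The sketch below uses the Hugging Face transformers pipeline with a small public model purely as a stand-in; a real custom enterprise model would be trained or tuned on proprietary data with the governance considerations described above.

```python
# Minimal sketch of self-hosting an open text-generation model as a stand-in
# for a future custom enterprise model. Uses the Hugging Face `transformers`
# pipeline API; the small public model named here is only a placeholder.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "Our quarterly expense policy states that"
draft = generator(prompt, max_new_tokens=40, do_sample=True, num_return_sequences=1)

print(draft[0]["generated_text"])  # generic output: a model tuned on company
                                   # documents would ground this in actual policy
```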

Although we somewhat jokingly call ChatGPT “instant mediocrity,” this phrase should be taken seriously in acknowledging both the cadence and the response quality that it creates. Mediocrity can actually be a high level of performance by some standards. Getting a response that an average 1x’er employee could provide, but getting it immediately, is valuable as long as it is seen for what it is rather than unnecessarily glorified or exaggerated. Treat it as an intern or student-level output that requires professional review rather than an independent assistant and it will greatly improve your professional output. Treat it as an expert and you may end up needing legal assistance. (But maybe not from this lawyer.)


Workday AI and ML Innovation Summit: Chasing the Eye of the AI Storm

We are in a time of transformational change as awareness of artificial intelligence (AI) grows during a period of global uncertainty. The labor supply chain is fluctuating quickly, the commodity supply chain is in turmoil, and rising interest rates, geopolitical strife, and currency challenges have put the economy on rocky ground and raised the cost of money. In this environment, where technology dominates business, the global economic foundation is shifting, and the worlds of finance and talent are up for grabs, Workday stepped up to hold its AI and ML Innovation Summit to show a way forward for the customers of its software platform, including the majority of the Fortune 500 that already use Workday as a system of record.

The timing of this summit will be remembered as a moment of rapid AI change, with major new announcements happening daily. OpenAI’s near-daily announcements regarding working with Microsoft, launching ChatGPT, supporting plug-ins, and asking for guidance on AI governance are transforming the general public’s perception of AI. Google and Meta are racing to translate their many years of AI research into products. Generative AI startups already focused on legal, contract, decision intelligence, and revenue intelligence use cases are happy to ride the coattails of this hype. Universities are showing how to build large language models, as Stanford has with Alpaca. And existing machine learning and AI companies such as Databricks are showing how to build custom models based on existing data for a fraction of the cost needed to build GPT.

In the midst of this AI maelstrom, Workday decided to chase the eye of the hurricane and put stakes in the ground on its current approach to innovation, AI, and ML. From our perspective, we were interested both in the executive perspective and in the product innovation associated with this Brave New World of AI.

Enter the Co-CEO – Carl Eschenbach

Workday’s AI and ML Innovation Summit opened with an introduction of the partners and customers present at the event, followed by a conversation between Workday’s Co-CEOs, Aneel Bhusri and Carl Eschenbach, in which Eschenbach talked about his focus on innovation and growth for the company. Eschenbach is not new to Workday, having been on its board during his time at Sequoia Capital, where he also led investments in Zoom, UiPath, and Snowflake. Having seen his work at VMware, Amalgam Insights was interested to see Eschenbach take this role and help Workday evolve its growth strategy from an executive level. From the start, both Bhusri and Eschenbach made it clear that this Co-CEO team is intended to be a temporary arrangement, with Eschenbach taking the reins in 2024 while Bhusri becomes the Executive Chair of Workday.

Eschenbach emphasized in this session that Workday has significant opportunities in providing a full platform solution, and that its international reach requires additional investment both in technology and in go-to-market efforts. Workday partners are essential to the company’s success, and Eschenbach pointed out a recent partnership with Amazon to provide Workday as a private offering that can use Amazon Web Services contract dollars to purchase Workday products once the work is scoped by Workday. Workday executives also mentioned the need for consolidation, which is one of Amalgam Insights’ top themes and predictions for enterprise software in 2023. The trend in tech is shifting toward best-in-suite and strategic partnering opportunities rather than a scattered best-in-breed approach that may sprawl across tens or even hundreds of vendors.

These Co-CEOs also explored what Workday will become over the next three to five years as it takes the next stage of its development, after Bhusri evolved Workday from an HR platform to a broader enterprise software platform. Bhusri sees Workday as a system of record that uses AI to serve customer pain points. He posits that ERP is an outdated term, but that Workday is in practice categorized as a “services ERP” platform when it is positioned as a traditional software vendor. Eschenbach adds that Workday is a management platform across people and finances on a common multi-tenant platform.

From Amalgam Insights’ perspective, this is an important positioning, as Workday is establishing that its focus is on two of the highest-value and highest-cost issues in any company: skills and money. Both must exist in sufficient quantity and quality for companies to survive.

The Future of AI and Where Workday Fits

We then heard from Co-President Sayan Chakraborty, who took the stage to discuss the “Future of Work” across machine learning and generative AI. Because Chakraborty is a member of the National Artificial Intelligence Advisory Committee, the analysts in the audience expected him to have a strong mastery of the issues and challenges Workday faces in AI, and this expectation was confirmed by the ensuing discussion.

Chakraborty started by saying that Workday is monomaniacally focused on machine learning to accelerate work and pointed out that we face a cyclical change in the working-age population across the entire developed world. As we deal with a decline in the percentage of “working-age” adults on a global scale, machine learning is a starting point for addressing structural challenges in labor and work.

To enable these efforts, Chakraborty brought up the technology, data, and application platforms based on a shared object model, starting with the Workday Cloud Platform and including analytics, Workday experience, and machine learning as specific platform capabilities. Chakraborty referenced daily liquidity reporting for FDIC requests as a capability that customers are now asking for in light of banking stresses such as the recent Silicon Valley Bank failure.

Workday has four areas of differentiation in machine learning: data management, autoML (automated machine learning, including feature abstraction), federated learning, and a platform approach. Workday’s stated advantage in data spans quantity, quality associated with a single data model, structure and tenancy, and the amplification of third-party data. As a starting point, this approach allows Workday to support models based on regional or customer-specific data supported by transfer learning. At this point, Chakraborty was asked why Workday has Prism in a world of Snowflake and other analytic solutions capable of scrutinizing data and supporting analytic queries and data enrichment. Prism is currently positioned as an in-platform capability that allows Workday to enrich its data, which is a vital capability as companies face the battle for context across data and analytic outputs.

Amalgam Insights will dig into this in greater detail in our recommendations and suggestions, but at this point we’ll note that this set of characteristics is fairly uncommon at the global software platform level and presents opportunities, based on recent AI announcements, that Workday’s competitors will struggle to execute on.

Workday currently supports federated machine learning at scale out to the edge of Workday’s network, which is part of Workday’s differentiation in providing its own cloud. This ability to push the model out to the edge is increasingly important for supporting geographically specific governance and compliance needs (dubbed by some as the “Splinternet“) as Workday has seen increased demand for supporting regional governance requests leading to separate US and European Union machine learning training teams each working on regionally created data sources.

Chakraborty compared Workday’s platform approach to machine learning, which yields a variety of features, with traditional feature-building approaches in which each feature is built through a separate data generation process. The canonical Workday example is Workday’s Skills Cloud platform, where Workday currently has close to 50,000 canonical skills and 200,000 recognized skills and synonyms scored for skill strength and validity. This Skills Cloud is a foundational differentiator for Workday and one that Amalgam Insights references regularly as an example of a differentiated syntactic and semantic layer of metadata that can provide context to a business trying to understand why and how data is used.

Workday mentioned six core principles for AI and ML, including people and customers, built to ensure that the machine learning capabilities it develops are created through ethical approaches. In this context, Chakraborty also mentioned generative AI and large language models, which are starting to provide human-like outputs across voice, art, and text. He pointed out how a major change in AI occurred in 2006, when NVIDIA opened up its GPUs, which use matrix math to support the constant re-creation of images, to general-purpose computation; once GPUs were used computationally, they made massively large parameter models possible. Chakraborty also pointed to the 2017 Google paper on transformers, which solve problems in parallel rather than sequentially and led to the massive models that could be supported in the cloud. The 1,000x growth in model scale in two years is unprecedented even by tech standards. Models have reached a level of scale where they can address emergent challenges that they have not been trained on. This does not imply consciousness, but it does demonstrate the ability to analyze complex patterns and systems behavior. Amalgam Insights notes that this reflects a common trend in technology, where new approaches often take a number of years to come to market, only to be treated as instant successes once they reach mainstream adoption.

The exponential growth of AI usage was accentuated in March 2023, when OpenAI, Microsoft, Google, and others provided an unending stream of AI-based announcements, including OpenAI’s GPT-4 and GPT plug-ins, Microsoft 365 Copilot and Microsoft Security Copilot, Google providing access to its generative AI Bard, Stanford’s $600 Alpaca generative AI model, and Databricks’ Dolly, which allows companies to build custom GPT-like experiences. This set of announcements, some of which were made during the Workday Innovation Summit, shows the immense nature of Workday’s opportunity as one of the premier enterprise data sources in the world that will be integrated into all of these AI approaches.

Chakraborty pointed out that the weaknesses of GPT include bad results and a lack of explainability in machine learning, bad actors (including IP and security concerns), and the financial, social, and environmental costs that fall under Environmental, Social, and Governance concerns. As with all technology, GPT and other generative AI models take up a lot of energy and resources without any awareness of how to throttle down in a sustainable and still functional manner. From a practical perspective, this means that current AI systems will be challenged to manage uptime as all of these new services attempt to benchmark and define their workloads and resource utilization. These issues are especially problematic in enterprise technology, as the perceived reliability of enterprise software is often based on near-perfect accuracy in calculating traditional data and analytic outputs.

Amalgam Insights noted in our review of ChatGPT that factual accuracy and intellectual property attribution have been largely missing in recent AI technologies that have struggled to understand or contextualize a question based on surroundings or past queries. The likes of Google and Meta have focused on zero-shot learning for casual identification of trends and images rather than contextually specific object identification and topic governance aligned to specific skills and use cases. This is an area where both plug-ins and the work of enterprise software companies will be vital over the course of this year to augment the grammatically correct responses of generative AI with the facts and defined taxonomies used to conduct business.

Amalgam also found it interesting that Chakraborty mentioned that the future of models would include high-quality data and smaller models custom-built to industry and vertical use cases. This is an important statement because the primary discussion in current AI circles is often about how bigger is better and how models compete on having hundreds of billions of parameters to consider. In reality, we have reached the level of complexity where a well-trained model will provide responses that reflect the data that it has been trained on. The real work at this point is on how to better contextualize answers and how to separate quantitative and factual requests from textual and grammatical requests that may be in the same question. The challenge of accurate tone and grammar is very different from the ability to understand how to transform an eigenvector and get accurate quantitative output. Generative AI tends to be good at grammar but is challenged by quantitative and fact-based queries that may have answers that differ from its grammatical autocompletion logic.

Chakraborty pointed out that reinforcement learning has proven to be more useful than either supervised or unsupervised training for machine learning, as it allows models to look at user behavior rather than forcing direct user interaction. This Workday focus both provides efficacy of scale and takes advantage of Workday’s existing platform activities. This combination of reinforcement training and Workday’s ownership of its Skills Cloud will provide a sizable advantage over most of the enterprise AI world in aligning general outputs to the business world.

Amalgam Insights notes here that another challenge of the AI discussion is how to create an ‘unbiased’ approach for training and testing models when the more accurate question is to document the existing biases and assumptions that are being made. The sooner we can move from the goal of being “unbiased” to the goal of accurately documenting bias, the better we will be able to trust the AI we use.

Recommendations for the Amalgam Community on Where Workday is Headed Next

This summit provided Amalgam Insights with a lot of food for thought from Workday’s top executives. The introductory remarks summarized above were followed by insight and guidance on Workday’s product roadmap across both the HR and finance categories where Workday has focused its product efforts, as well as visibility into the go-to-market and positioning approaches that Workday plans to pursue in 2023. Although much of these discussions were held under a non-disclosure agreement, Amalgam Insights will try to use this guidance to help companies understand what is next from Workday and what customers should request. From an AI perspective, Amalgam Insights believes that customers should push Workday in the following areas based on Workday’s ability to deliver and provide business value.

  1. Use the data model to both create and support large language models (LLMs). The data model is a fundamental advantage in setting up machine learning and chat interfaces. Done correctly, this is a way to have a form of Ask Me Anything for the company based on key corporate data and the culture of the organization. This is an opportunity to use trusted data to provide relevant advice and guidance to the enterprise. As one of the largest and most trusted data sources in the enterprise software world, Workday has an opportunity to quickly build, train, and deploy models on behalf of customers, either directly or through partners. With this capability, “Ask Workday” may quickly become the HR and finance equivalent of “Ask Siri.” (A sketch of this grounding pattern follows this list.)
  2. Use Workday’s Skills Cloud as a categorization to analyze the business, similar to cost center, profit center, geographic region, and other standard categories. Workforce optimization is not just about reducing TCO, but aligning skills, predicting succession and future success potential, and market availability for skills. Looking at the long-term value of attracting valuable skills and avoiding obsolete skills is an immense change for the Future of Work. Amalgam Insights believes that Workday’s market-leading Skills Cloud provides an opportunity for smart companies to analyze their company below the employee level and actually ascertain the resources and infrastructure associated with specific skills.
  3. Workday still has room to improve regarding consolidation, close, and treasury management capabilities. In light of the recent Silicon Valley Bank failure and the relatively shaky ground that regional and niche banks currently are on, it’s obvious that daily bank risk is now an issue to take into account as companies check if they can access cash and pay their bills. Finance departments want to consolidate their work into one area and augment a shared version of the truth with individualized assumptions. Workday has an opportunity to innovate in finance, as comprehensive vendors in this space are often outdated or rigidly customized on a per-customer level that does not allow versions to scale out in a financially responsible way, as Workday’s Intelligent Data Core does. And Workday’s direct non-ERP planning competitors mostly lack Workday’s scale, both in its customer base and consultant partner relationships, to provide comprehensive financial risk visibility across macroeconomic, microeconomic, planning, budgeting, and forecasting capabilities. Expect Workday to continue working on making this integrated finance, accounting, and sourcing experience even more integrated over time and to pursue more proactive alerts and recommendations to support strategic decisions.
  4. Look for Workday Extend to be accessed more by technology vendors to create custom solutions. The current gallery of solutions is only a glimpse of the potential of Extend in establishing Workday-based custom apps. It only makes sense for Workday to be a platform for apps and services as it increasingly wins more enterprise data. From an AI perspective, Amalgam Insights would expect to see Workday Extend increasingly working with more plugins (including ChatGPT plugins), data models, and machine learning models to guide the context, data quality, hyperparameterization, and prompts needed for Workday to be an enterprise AI leader. Amalgam Insights also expects this will be a way for developers in the Workday ecosystem to take more advantage of the machine learning and analytics capabilities within Workday that are sometimes overlooked as companies seek to build models and gain insights into enterprise data.
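
To illustrate the “Ask Workday” idea from the first recommendation above, here is a hypothetical sketch of grounding an LLM answer in trusted system-of-record data. The records, prompt, and use of the pre-1.0 OpenAI SDK are illustrative only and do not reflect any actual Workday API; a production build would retrieve from the live HR or finance system with access controls rather than from an in-memory list.

```python
# Hypothetical sketch of an "Ask"-style assistant grounded in trusted
# system-of-record data. Assumes the pre-1.0 OpenAI Python SDK (openai<1.0);
# the records and prompt are illustrative stand-ins.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

# Stand-in for records retrieved from the system of record for the asking user.
RETRIEVED_RECORDS = [
    "PTO balance for employee 4417: 12 days as of March 31.",
    "Company policy: PTO requests over 5 consecutive days need director approval.",
]

def ask_grounded(question: str) -> str:
    """Answer a question using only the retrieved records as context."""
    context = "\n".join(RETRIEVED_RECORDS)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "Answer only from the provided records. If the records "
                        "do not contain the answer, say so."},
            {"role": "user", "content": f"Records:\n{context}\n\nQuestion: {question}"},
        ],
        temperature=0,
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask_grounded("Can I take a two-week vacation next month without extra approval?"))
```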

What To Watch Out For As GPT Leads a New Technological Revolution

2022 was a banner year for artificial intelligence technologies reaching the mainstream. After years of being frustrated with the likes of Alexa, Cortana, and Siri, and of struggling to understand the value of machine learning other than as a vague backend technology for the likes of Facebook and Google, 2022 brought us AI-based tools that were understandable at a consumer level. From our perspective, the most meaningful of these were two products created by OpenAI: DALL-E and ChatGPT, which expanded the concept of consumer AI from a simple search or networking capability to a more comprehensive and creative approach for translating sentiments and thoughts into outputs.

DALL-E (and its successor DALL-E 2) is a system that can create visual images based on text descriptions. The models behind DALL-E look at relationships between existing images and the text metadata that has been used to describe those images. Based on these titles and descriptions, DALL-E uses diffusion models that start with random pixels and converge on generated images matching the descriptions. This area of research is by no means unique to OpenAI, but it is novel to open up a creative tool such as DALL-E to the public. Although the outputs are often both interesting and surprisingly different from what one might have imagined, they are not without their issues. For instance, the legal ownership of DALL-E-created graphics is not clear, since OpenAI claims to own the images, but the images themselves are often based on other copyrighted images. One can imagine that, over time, an artistic sampling model may start to appear, similar to the music industry, where licensing contracts are used to manage the usage of copyrighted material. But this will require greater visibility into the lineage of AI-based content creation and the data used to support graphic diffusion. Until this significant legal question is solved, Amalgam Insights believes that the commercial usage of DALL-E will be challenging to manage. This is somewhat reminiscent of the challenges that Napster faced at the end of the 20th century as a technology that both transformed the music industry and had to deal with the challenges of a new digital frontier.

But the technology that has taken over the zeitgeist of technology users is ChatGPT and related use cases associated with the GPT (Generative Pre-Trained Transformer) autoregressive language model trained on 500 billion words across the web, Wikipedia, and books. And it has become the favorite plaything of many a technologist. What makes ChatGPT attractive is its ability to take requests from users asking questions with some level of subject matter specificity or formatting and to create responses in real-time. Here are a couple of examples from both a subject matter and creative perspective.

Example 1: Please provide a blueprint for bootstrapping a software startup.

This is a bit generic and lacks some important details on how to find funding or sell the product, but it is in line with what one might expect to see in a standard web article about how to build a software product. The ending of this answer shows how the autogenerated text is likely drawing on prior web content built for search engine optimization, and it seeks to provide a polite conclusion based on junior high school lessons in writing the standard five-paragraph essay rather than a meaningful conclusion that provides insight. In short, it is basically a status quo, average article with helpful information that should not be overlooked, but it is neither comprehensive nor particularly insightful for anyone who has ever actually started a business.

A second example of ChatGPT is in providing creative structural formats for relatively obscure topics. As you know, I’m an expert in technology expense management with over two decades of experience and one of the big issues I see is, of course, the lack of poetry associated with this amazing topic. So, again, let’s go to ChatGPT.

Example 2: Write a sonnet on the importance of reducing telecom expenses

As a poem, this is pretty good for something written in two seconds. But it’s not a sonnet: sonnets are 14 lines, written in iambic pentameter (10-syllable lines split into 5 iambs, an unstressed syllable followed by a stressed syllable) and split into three four-line sections followed by a two-line section, with a rhyme scheme of ABAB, CDCD, EFEF, GG. So, there’s a lot missing there.

So, based on these examples, how should ChatGPT be used? First, let’s look at what this content reflects. The content here represents the average web and text content associated with the topic. With 500 billion words in the GPT-3 corpus, there is a lot of context to show what should come next for a wide variety of topics. Initial concerns about GPT-3 have centered on the challenge of answering questions on extremely specific topics that fall outside its training data. But let’s consider a topic I worked on in some detail back in my college days, using appropriate academic language to ask a version of Gayatri Spivak’s famous (in academic circles) question, “Can the subaltern speak?”

Example 3: Is the subaltern allowed to fully articulate a semiotic voice?

Considering that the language and topic here are fairly specialized, the introductory assumptions are descriptive but not incisive. The answer struggles with the “semiotic voice” aspect of the question in discussing the ability and agency to use symbols from a cultural and societal perspective. Again, the text provides a feeling of context that is necessary, but not sufficient, to answer the question. The focus is on a short summary that introduces the issue before taking the easy way out by telling us what is “important to recognize” without really taking a stand. And, again, the conclusion sounds like something out of an antiseptic human resources manual in asking the reader to consider “different experiences and abilities” rather than the actual question regarding the ability to use symbols, signs, and assumptions. This is probably enough of an analysis at a superficial level, as the goal here isn’t to deeply explore postmodern semiotic theory but to test ChatGPT’s response on a specialized topic.

Based on these three examples, one should be careful in counting on ChatGPT to provide a comprehensive or definitive answer to a question. Realistically, we can expect ChatGPT will provide representative content for a topic based on what is on the web. The completeness and accuracy of a ChatGPT topic is going to be dependent on how often the topic has been covered online. The more complete an answer is, the more likely it is that this topic has already been covered in detail.

ChatGPT will provide a starting point for a topic and typically provide information that should be included to introduce the topic. Interestingly, this means that ChatGPT is significantly influenced by the preferences that have built online web text over the past decade of content explosion. The quality of ChatGPT outputs seems to be most impressive to those who treat writing as a factual exercise or content creation channel while those who look at writing as a channel to explore ideas may find it lacking for now based on its generalized model.

From a topical perspective, ChatGPT will probably have some basic context for whatever text is used in a query. It would be interesting to see the GPT-3 model augmented with specific subject matter texts that could prioritize up-to-date research, coding, policy, financial analysis, or other timely new content either as a product or training capability.

In addition, don’t expect ChatGPT to provide strong recommendations or guidance. The auto-completion that ChatGPT does is designed to show how everyone else has followed up on this topic. And, in general, people do not tend to take strong stances on web-based content or introductory articles.

Fundamentally, ChatGPT will do two things. First, it will make mediocre content ubiquitous. There is no need to hire people to write an “average” post for your website anymore, as ChatGPT and other technologies designed either to compete with or to augment it will be able to do this easily. If your skillset is writing grammatically sound articles with little to no subject matter experience or practical guidance, that skill is now obsolete, as status quo and often-repeated content can now be created on command. This also means that there is a huge opportunity to combine ChatGPT with common queries and use cases to create new content on demand. However, in doing so, users will have to be very careful not to plagiarize content unknowingly. This is an area where, just like with DALL-E, OpenAI will have to work on figuring out data lineage, trademark and copyright infringement, and appropriation of credit to support commercial use cases. ChatGPT also struggles with what are called “hallucinations,” where it makes up facts or sources because those words are physically close to the topic discussed in the various websites and books it draws on. ChatGPT is a text generation tool that picks words based on how frequently they show up with other words. Sometimes the result will be extremely detailed and current; other times, it will look very generic and mix up related topics that are often discussed together.

Second, this tool now provides a much stronger starting point for writers seeking to say something new or different. If your point of view is something that ChatGPT can provide in two seconds, it is neither interesting nor new. To stand out, you need to provide greater insight, better perspective, or stronger directional guidance. This is an opportunity to improve your skills or to determine where your professional skills lie. ChatGPT still struggles with timely analysis, directional guidance, practical recommendations beyond surface-level perspectives, and combining mathematical and textual analysis (i.e., doing word problems, math-related case studies, or code review), so there is still an immense amount of opportunity for people to write better.

Ultimately, ChatGPT is a reflection of the history of written text creation, both analog and digital. Like all AI, ChatGPT provides a view of how questions were answered in the past and provides an aggregate composite based on auto-completion. For topics with a basic consensus, such as how to build a product, this tool will be an incredible time saver. For topics that may have multiple conflicting opinions, ChatGPT will try to play both sides or all sides in a neutral manner. And for niche topics, ChatGPT will try to fake an answer at approximately a high school student’s understanding of the topic. Amalgam Insights recommends that all knowledge workers experiment with ChatGPT in their realm of expertise, as this tool and the market of products built on autogenerated text will play an important role in supporting the next generation of tech.