Congratulations to us on 50 Episodes of This Week in Enterprise Tech! And appropriately, this week was a busy one with everything from Snowflake vs. Databricks to Workday and Cisco throwing in their versions of agent management to some surprising research coming from Microsoft Research on how too much AI can atrophy confidence.
This Week in Enterprise Technology, Hyoun Park and Charles Araujo critically assess the biggest tech issues for the CIO office:
Pegasystems Points Out Employee Concerns for Agents
Anthropic Economic Index Measures Enterprise AI Usage
Microsoft Research Discovers AI Atrophies Cognitive Confidence
DOGE Struggles to COBOL Together a Mainframe
The Dissection of Intel Begins …
Broadcom and TSMC have been discussing how to break up Intel along with the Trump administration. For Intel fans, this is tough to watch. Charles and Hyoun toast the end of an era as well as potential CIO repercussions.
AWS is requiring any SaaS eligible for AWS discounts to be 100% hosted on AWS moving forward. Although this seems like a reasonable policy, the challenge is in getting Amazon certification, which can be a time-consuming process and may lead to some lost discounts. Hyoun and Charles provide their caveats to the financially minded CIO and FinOps pro.
Glean has entered the agentic fray with Glean Agents, which use a wide range of data to enable task automation and data analysis. Charles and Hyoun debate whether this announcement does enough to stand out in a crowded data agent market.
Snowflake launched its Cortex Agents in public preview to support the data management use cases that Snowflake is well known for. The granularity of the agents, along with their iterative processing and continuous governance capabilities, helps Cortex Agents stand out.
SAP is teaming up with Databricks to create SAP Business Data Cloud, a joint product designed to improve SAP customer access to the deep metadata within their SAP deployment. Charles and Hyoun debate how this may prevent SAP customers from migrating their data over to Snowflake by working with a competitor that has both the technical capabilities and the head-to-head competitive experience in selling against Snowflake.
Workday Releases the Agent System of Record
Workday announces a system of record for agents to support the inventory and indexing of agents. Hyoun and Charles discuss how Workday is making big claims but not taking full advantage of the breadth of Workday metadata and application functionality across skills, contingent labor, and performance management to truly treat agents as work enablement technologies.
This concept is interesting especially after NVIDIA’s Jensen Huang’s recent claim: “The IT department of every company is going to be the HR department of AI agents in the future.” Workday would prefer that HR remain the HR department for AI agents. But can they pull it off?
Outshift by Cisco Discusses the Internet of Agents
Cisco estimates there will be 20,000 AI agents per company: How will they all work together? Cisco posits the need for an Internet of Agents, an open and secure system that allows different AI agents to communicate and collaborate. Charles and Hyoun discuss some of the challenges of an Internet of Agents, including thinking beyond the API.
Pegasystems Points Out Employee Concerns for Agents
A new Pegasystems survey shows that a large percentage of workers are uncomfortable using agents in the workplace, even as other data shows that executives actually want their workers to use agents and AI. Hyoun and Charles discuss employee concerns and whether executives are ready to train employees and set expectations regarding AI.
Anthropic Economic Index Measures Enterprise AI Usage
Anthropic is launching the Anthropic Economic Index to track the value and use of AI. This is both a website and a research paper. The most interesting finding that got Charles and Hyoun’s attention is that 3/4 of current AI usage is concentrated in 4% of jobs. These metrics serve as a strong reminder of the current maturity of enterprise AI, which leads Charles to an interesting conclusion.
Microsoft Research Discovers AI Atrophies Cognitive Confidence
Microsoft finds that too much AI leads to cognition atrophy. Uh oh. The interesting aspect, as Charles wisely notes in this discussion, is that this study measures the human perception of cognitive quality, leading to concerns about how to stay sharp in an AI world. Hyoun provides some tips and suggestions for a more human approach to AI.
DOGE Struggles to COBOL Together a Mainframe
What happens when you get a former COBOL programmer and a former IT asset auditor and data analyst to look at DOGE?
Elon Musk’s DOGE team may need a crash course in COBOL as they dig deep with their presidentially conferred admin access to Social Security, Medicare, and other key government payments and transactional data. Hyoun and Charles discuss the real challenges of budget and data audits and Charles provides an unconventional solution to DOGE.
Welcome back to This Week in Enterprise Technology, where Hyoun Park and Charles Araujo analyze the latest enterprise technology announcements and how they will affect your business and your bosses’ expectations.
Join TWIET as we guide CIOs and technical managers through the strategic ramifications behind the vendor hype, product innovation, and the avalanches of money going in and out of enterprise tech. As always, this podcast is available in audio, video, and broken up into sections for your benefit.
As always, if you enjoy this, like, subscribe, comment, and get in touch with us. 
Are AI Hallucinations About Being Wrong or Being Creative?
1. How Does Florida’s Pornhub Ban Affect Content Access?
At the beginning of 2025, Florida put in place a new age and ID verification requirement for adult content, leading notorious site Pornhub to leave the state. Behind the shock value, this is a trend in the United States, with 19 states now having specific ID verification requirements for certain types of content. What does this mean for businesses seeking to provide content?
2. 6th Circuit Kills Net Neutrality: IT Investment Concerns
The United States Sixth Circuit Court of Appeals ruled on January 2, 2025 to strike down net neutrality, the principle that content should be treated equally by networks. Now that networks have no legal obligation to treat content equally, what does this mean for software providers and for large enterprises providing content over the Internet? Will networks play favorites? Will hyperscalers need to team up with networks?
3. Google Unveils Project Astra and Gemini 2.0
Despite Google’s undeniable groundbreaking work in AI, Google is finding itself playing catch-up in the enterprise AI world. Google DeepMind has unveiled Project Astra and Gemini 2.0 to enhance generative AI. Astra is intended to act as a multimodal universal assistant using text, speech, and images. The technology is interesting and novel, but Charles and Hyoun debate whether Google will figure out how to productize this technology.
4. Salesforce Announces Agentforce 2.0
Salesforce announced Agentforce 2.0, one of the first 2.0 products in the Agentic AI world. Among other things, Salesforce upgraded its agentic capabilities, included more of Salesforce’s ecosystem directly into the Agentforce offering, and doubled its commitment to AI sales. Hyoun and Charles discuss how the Salesforce AI technology ecosystem stands up in a heated AI market.
5. CIO.com Explores the Conundrum of AI Pricing
CIO.com’s Grant Gross takes on one of the most interesting topics in tech: the conundrum of pricing for AI. Charles and Hyoun explore a varied portfolio of pricing strategies and maturity models, along with a classic Harvard Business Review article, that will shape the future of AI FinOps and cost.
6. Aaron Levie Clarifies the Value of AI Access to PCs
Box CEO Aaron Levie is no stranger to sticking his neck out when it comes to predicting the future of enterprise software. In a recent X post, Levie elucidates the value of AI agents accessing browsers and personal computers from an information access perspective. Charles and Hyoun discuss a future where the agent is more empowered to directly connect users and apps.
AI Agents using browsers and computers is a big deal because you effectively have an API for complete knowledge work.
Right now, for AI to execute complex digital work, you have to go through the APIs of a wide array of services. Imagine a workflow where you want to collect…
7. Bench Shuts Down as Venture Debt Comes Due
Bench was once known for having raised over $110 million to support small and medium business accounting needs and boasted of having over 35,000 US customers. But on December 27, all that changed as venture debt came due and Bench was unable to pay. Hyoun and Charles warn of how this may be a harbinger of the volatility of SaaS solutions in 2025 that have not provided a Plan B to customers.
8. Felicis Shares Its Vision for AI-Driven IT Management
IT ops has long been a time-consuming, demanding, and challenging job to support. Venture capital firm Felicis provides its vision of the future of IT management with a strong assist from AI. Charles and Hyoun are fully on board with this vision, but we point out some of the challenges of taking on current enterprise stalwarts, such as ServiceNow and Atlassian.
9. Ethan Mollick Proposes AI as an Organizational Strategist
In Wired, Wharton professor Ethan Mollick argues that AI can serve as a new organizational management strategist to help connect people, show new relationships between employees, and even help structure the company more optimally. Charles and Hyoun debate AI’s readiness to serve as the strategist, both from a discovery perspective and in terms of whether existing employee management systems are ready to support this vision.
10. Are AI Hallucinations About Being Wrong or Being Creative?
What is an AI hallucination? In this recent New York Times article, scientists including recent Nobel Prize winner David Baker are described as using AI hallucinations in their research when they are using AI to design theoretical or prospective proteins. Is using AI to take a defensible and novel approach a hallucination? Or are we starting to overuse the term hallucination when it comes to AI? Charles and Hyoun dig into the problematic nature of the AI hallucination.
This Week in Enterprise Technology, Hyoun Park and Charles Araujo critically assess last week’s biggest tech news:
AWS Enhances Amazon Connect with Generative AI Tools
AWS Takes on AI Hallucination Challenges
AWS Bedrock Adds Multi-Agent Orchestration and Model Routing
AWS Centralizes AI Efforts with SageMaker
Casey Newton Examines AI Skepticism’s Comforts
Emergence AI Coordinates Multi-Vendor Agents
Exa Redefines Generative Search Experiences
MLCommons Benchmarks LLM Output Risks
South Korea’s Unrest Threatens Global Memory Supply
Werner Vogels on Managing “Simplexity”
Broadcom Adjusts to Minimize VMware Migration Risks
AWS Upgrades Amazon Connect with New Generative AI Features
Amazon Connect has been a successful cloud contact center product, and the contact center has been one of the clearest areas for AI to provide productivity benefits and increase potential revenue transactions. AWS re:Invent was an opportunity to announce the latest generative AI advancements within Connect. Charles and Hyoun discuss the opportunities for contact centers to adopt AI.
AWS Takes on AI Hallucination Challenges
AWS launches Automated Reasoning checks to cross-reference outputs with known facts and enterprise data. Although this is not as novel as AWS claimed, it is a valuable step forward. Hyoun and Charles debate the utility of these Automated Reasoning checks and whether AI hallucinations really matter or are just a sign of AI immaturity and inexperience.
AWS Bedrock Updates: Multi-Agent Collaboration, Model Routing
AWS announced interesting AI management updates for Amazon Bedrock. Both multi-agent management and prompt routing across models will be useful for enterprises seeking to expand the utility and cost structure of AI. Charles and Hyoun wonder if this agent management will fit the bill given the wide variety of agents that are starting to appear in the enterprise.
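The prompt-routing idea is simple to sketch: send cheap, simple prompts to a small model and reserve the expensive model for complex requests. The sketch below is a hypothetical illustration, not AWS's Bedrock API; the model names, prices, and complexity heuristic are all invented assumptions.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    cost_per_1k_tokens: float  # assumed illustrative price, in dollars

SMALL = Model("small-fast-model", 0.0002)
LARGE = Model("large-capable-model", 0.0030)

def estimate_complexity(prompt: str) -> float:
    """Toy heuristic: longer prompts with reasoning keywords score higher."""
    keywords = ("analyze", "compare", "multi-step", "explain why")
    score = min(len(prompt) / 2000, 1.0)
    score += 0.3 * sum(kw in prompt.lower() for kw in keywords)
    return min(score, 1.0)

def route(prompt: str, threshold: float = 0.5) -> Model:
    """Pick the cheaper model unless the prompt looks complex."""
    return LARGE if estimate_complexity(prompt) >= threshold else SMALL

print(route("What time is it in Tokyo?").name)                    # simple prompt
print(route("Analyze and compare multi-step plans " * 50).name)   # complex prompt
```

The real value of this pattern, and the source of Charles and Hyoun's skepticism, is in how well the complexity estimate holds up as agent variety grows.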
AWS Centralizes AI Efforts with SageMaker
AWS created a new umbrella brand that includes data studio, data lake, analytics, and data management capabilities. Hyoun and Charles argue about whether SageMaker, best known as a data science tool, was the right umbrella brand for these data efforts.
Casey Newton Examines AI Skepticism’s Comforts
One of TWIET’s favorite journalists, Casey Newton, weighs in on the false comfort of AI skepticism. Newton argues that the potential harm of AI is being underestimated by those who simply think that AI is full of lies or incompetent. Charles and Hyoun discuss a more realistic path for IT departments to consider as they deploy AI.
Emergence AI Coordinates Multi-Vendor Agents
Startup Emergence AI announced its autonomous multi-agent AI orchestrator. At a time when every enterprise platform seems to be coming out with its own set of agents, Hyoun and Charles think it is about time for a third-party agent orchestration solution to hit the market and get some traction.
Exa Redefines Generative Search Experiences
The MIT Technology Review covered a startup named Exa taking a novel approach to Gen AI-based web searches with the goal of using the web like a database. Charles and Hyoun discuss the scale and results of this approach.
MLCommons Benchmarks LLM Output Risks
MLCommons has released its AILuminate 1.0 benchmarks to describe several categories of harm, including sex crimes, violence, and defamation risks. Hyoun and Charles discuss past challenges regarding model benchmarking and risks.
South Korea’s Unrest Threatens Global Memory Supply
South Korea saw government unrest amid an attempted coup last week. Although we are not expert political scientists, international supply chains do affect our ability to source IT. We discuss the ramifications of South Korea supplying 60% of the global memory chip market and considerations for the CIO in looking at geopolitical strife.
Werner Vogels on Managing “Simplexity”
At AWS re:Invent, Amazon CTO Werner Vogels pointed out both that complexity is inevitable and that there are two types of complexity that are important for technical audiences to consider, including a new concept of “simplexity”. Hyoun is reminded of the Nassim Taleb concept of antifragility while Charles digs deeper into the strategic issues of technical debt.
Broadcom Adjusts to Minimize VMware Migration Risks
Broadcom has had to walk back its initial plans of making its top 2,000 customers all direct and has handed much of that business back to its channel partners. With help from The Register and Canalys, Hyoun and Charles discuss repercussions for tech sourcing.
At Amalgam Insights, we’ve long championed the Technology Lifecycle Management approach to describe the many areas across IT FinOps for reducing costs across hyperscalers, data clouds, Software-as-a-Service, networking, enterprise mobility, and telecom. Usage optimization is not enough. Product rationalization is not enough. Contract negotiations are not enough. And when these opportunities are not bridged, additional opportunities fall by the wayside across the IT environment.
Technology Lifecycle Management
One of the most interesting trends of 2024 has been the FinOps Foundation’s initial steps into software management. As cloud FinOps starts entering the already-established worlds of SaaS management and network management, it will be interesting to see where cloud FinOps’ comfort zone of massive usage and product tradeoffs helps the more static worlds of software and telecom, as well as where FinOps’ traditional struggles with business taxonomy, detailed sub-user chargebacks, and lack of contract detail push against the issues that you, I, and the rest of the IT world have been dealing with for the past 15+ years. Unlike cloud FinOps, software FinOps and telecom FinOps have a long history of multi-user monitoring, usage, and cost management, as well as a long history of vendors avoiding end-user requests for any type of market-wide billing or usage standardization.
For the past several months, I’ve been working on my SmartList of IT FinOps vendors (thank you to all the vendors and customers for your participation!) with an eye toward vendors focused on bridging gaps between cloud FinOps and the rest of the IT world. Out of the 300+ vendors I look at from an IT expense lifecycle perspective, I’ve narrowed the list down to 15 vendors that I think are really doing something different; these will be our Distinguished Vendors. But I am also starting to find some interesting trends that make these distinguished vendors different from the extremely crowded portfolio of options that IT departments face.
Before getting to those trends, let’s define what is not considered differentiated in 2024. Simply ingesting bills is not good enough. Being customer-centric and having attention to detail are table stakes unless you can provide customers who vouch for your approach being quantitatively better than competitors’. Everybody I cover can support tagging. Cross-charging is a standard enterprise ask in RfX activity. Basic dashboards and analytics are standard (and if you haven’t chosen a high-performance analytics solution like Sisense, Sigma, or Qrvey, you’re probably behind in supporting real-time changes). And everyone is competing against 2020s-era design standards for SaaS, especially with the flood of money that has gone into the data- and code-centric Third Wave of FinOps.
First, as expected, top vendors are now often finding that they are the second or third solution chosen at the enterprise level. IT buyers have often focused too much on the “up and to the right” vendor landscape in their purchase of FinOps and IT expense management solutions, with the assumption that these are commoditized solutions rather than the financial window into the pulse of the digital business. Although every business today likes to talk about how it has “digitally transformed” and is “data-driven,” the truth is that companies have not fully embraced data and digital business until the actual cost of data infrastructure, delivery, and data-related customer activities can finally be categorized and calculated. And without FinOps and IT expense solutions, the comparative data simply isn’t there at the bill of materials (BOM) or component level that we take for granted in manufacturing and services. A digital business lacking formal IT FinOps is like a hospital being managed without knowing how many procedures are being done, or a discrete manufacturing company not knowing how many parts it uses.
Second, there is a massive effort to utilize and translate tagging efforts into standard business categories. The FinOps insistence on tagging service elements across infrastructure and platform without alignment to larger corporate or IT frameworks is starting to collapse into its own form of technical debt. The combination of poor data quality, inconsistent or idiosyncratic tagging, poor mapping to existing categories, and lack of integration with existing birthright and foundational enterprise applications has led to the need for better metadata definition and mapping. Of course, the goal here is not simply to create a pristine metadata environment (although I do love well defined taxonomies and ontologies!), but to deeply align IT to business so that product, service, and revenue managers know which technologies are vital to their work.
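The tag-translation effort described above can be sketched as a thin normalization layer between raw cloud tags and a canonical business taxonomy. The tag keys, aliases, and categories below are invented for illustration; the important design choice is flagging unmapped values for remediation rather than silently guessing.

```python
RAW_TO_STANDARD = {
    # idiosyncratic tag values -> canonical business category (assumed examples)
    "mktg": "Marketing", "marketing-prod": "Marketing",
    "data-eng": "Data Platform", "dataeng": "Data Platform",
    "unknown": "Unallocated",
}

def normalize(resource_tags: dict) -> str:
    """Map a resource's cost-center tag to a canonical category,
    routing anything unmapped to a review queue instead of guessing."""
    raw = resource_tags.get("cost_center", "unknown").strip().lower()
    return RAW_TO_STANDARD.get(raw, "Needs Review")

resources = [
    {"cost_center": "Mktg"},
    {"cost_center": "dataeng"},
    {},                           # untagged resource
    {"cost_center": "shadow-it"}, # unmapped value
]
print([normalize(r) for r in resources])
# ['Marketing', 'Data Platform', 'Unallocated', 'Needs Review']
```

In practice this mapping table is the metadata definition work described above: it only pays off when the canonical categories align with the frameworks that product, service, and revenue managers already use.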
Third, data integrations and a unified data layer are increasingly important across vendors. One of the most salient and welcome aspects of the latest wave of FinOps solutions is the data layer that provides visibility across hyperscalers, data clouds such as Databricks, Snowflake, and Cloudera, as well as observability solutions such as Datadog. From a tactical perspective, this data ingestion helps cloud architects and digital business leaders to track costs. But it also helps set up the FinOps practitioner as a central player in managing the business aspects of digital behavior. The cost of accessing, processing, and utilizing data will only increase with the emergence of AI as a new massive spend area within IT. And although this cost is under the direct supervision of the CTO and CFO right now in many cases, it will all eventually filter down to some level of IT FinOps as the complexity of these ecosystems becomes too immense. Many of today’s top solutions are proactively setting themselves up to be this data layer for supporting the future of IT activity.
Fourth, the tradeoffs between defined hyperscalers, defined on-premises services, virtualized services, and containerized services continue to be a challenge and the current answer is often to split each of these categories into a separate management solution. This is not a sustainable approach and Amalgam Insights both expects and encourages market consolidation so that FinOps professionals can see more of their infrastructure and platform investments under the same umbrella.
Fifth, everybody likes to talk about “cloud economics” while ignoring the “economics” part of the puzzle. There is a lot of focus on the mathematical aspects of large scale optimization of usage. And I get it. Techies love working with big problems and algorithmic fixes and finding clever ways to discover null values and aggregating pennies into dollars. This skill set is often what makes technology contributors invaluable both in developing architecture and in root-cause analysis for large enterprise challenges. But it is only a piece of the puzzle when it comes to aligning the financial health of the enterprise with digital and cloud activity. At some point, this optimization needs to be tied to the less exciting tasks of invoice processing, cross-checking billing rates with existing rates and discounts, direct product tradeoffs both within and across hyperscaler providers, comparing contract terms, enforcing contracted KPIs and timeframes, and aligning cloud activity to new accounting rules and tax credits. All of this is part of the microeconomics of cloud. Simply looking at usage optimization makes you a unit-specific beancounter. A true cloud economist needs to understand how the cost of cloud shapes the rest of the business and how it needs to be structured, reinforced, and defined based on all business forces.
Sixth, the cost of IT FinOps is changing rapidly. The first generation of cloud FinOps solutions priced similarly to software management and telecom management vendors, charging a percentage of enterprise spend. The problem is that these approaches started to outgrow their value for clients that grew their cloud spend to $50 million or $100 million and realized they could build their own solution and support it in perpetuity. We have seen corporate efforts become commercial products, such as Capital One Slingshot and Netflix’s NetflixOSS Ice (now supported by Teevity). So, there is a lot of pressure to drive the cost of IT FinOps back down to a flat license rate or user-based pricing once spend reaches an enterprise tier. This pressure is exacerbated by the vendor sprawl across FinOps. For vendors that are product-based, this will continue to be a trend, while companies shifting to a managed services approach will be able to maintain pricing based on the overall maintenance of the FinOps environment across billing, accounting, optimization, procurement, vendor relationships, governance, and IT-business alignment.
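The pricing pressure is simple arithmetic. As a back-of-the-envelope illustration, assume a hypothetical 1% percent-of-spend fee against a hypothetical $500K flat enterprise license (both numbers are invented for the example, not vendor quotes):

```python
PERCENT_OF_SPEND = 0.01        # assumed 1% of managed cloud spend
FLAT_ENTERPRISE_FEE = 500_000  # assumed flat annual license, in dollars

def annual_fee_percent(spend: float) -> float:
    """Fee under a percent-of-spend model."""
    return spend * PERCENT_OF_SPEND

# Spend level where the percent fee overtakes the flat license
crossover = FLAT_ENTERPRISE_FEE / PERCENT_OF_SPEND
print(f"Crossover spend: ${crossover:,.0f}")  # Crossover spend: $50,000,000

for spend in (10e6, 50e6, 100e6):
    pct = annual_fee_percent(spend)
    cheaper = "flat" if pct > FLAT_ENTERPRISE_FEE else "percent"
    print(f"${spend/1e6:.0f}M spend -> percent fee ${pct:,.0f}; cheaper: {cheaper}")
```

Under these assumptions, a client at $100 million in cloud spend is paying double the flat license under percent-of-spend pricing, which is exactly when "build it ourselves" starts to look attractive.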
Those are some of the big trends I’ve been seeing so far over the last several months. There is a lot of nuance here, which I’m looking forward to digging into as I finalize the FinOps SmartList and get to highlight the Distinguished Vendors: the top 5% of companies that stand out both in their results and their differentiation. Is there anything you think I missed? Or do you want to talk more about either your FinOps product or your current FinOps RfP work? If so, get in touch with me at briefings@amalgaminsights.com.
Sports has increasingly become a showcase for back-end business capabilities that have long eschewed the spotlight: analytics, data, accounting, etc…
This recent ESPN article on the Knicks showcases the importance of combining strategic procurement (contract negotiations, KPIs, expiration dates, payment terms, vendor and client responsibilities) with the accounting knowledge to enforce and fully leverage those terms. And the Knicks’ player procurement expert Brock Aller gets a nice glow-up here because of his expertise across these areas in his complex spend category: player contracts and options.
Basketball has increasingly made “cap-ology,” or the management of each team’s salary cap, an important topic, as it often defines the practical limits of how much a professional basketball team can choose to improve. There is a practical lesson here for strategic IT procurement (or really all procurement) professionals on how to structure, reallocate, and maximize IT investment on a fixed budget or within a budget cap. I especially like that laddered rates, date-specific cutoffs and performance terms, and the use of commoditized or overlooked assets to trade for cash or optionality are all mentioned or hinted at here.
Even if I’m not a fan, the resurgence of the New York Knicks is a great case for how procurement and accounting need to work more closely together, ideally with a bridge person, to maximize value.
This Week in Enterprise Tech, brought to you by the DX Report’s Charles Araujo and Amalgam Insights’ Hyoun Park, explores six big topics for CIOs across innovation, the value of data, strategic budget management, succession planning, and enterprise AI.
1) We start with the City of Birmingham, which is struggling with its SAP to Oracle migration. We discuss how this IT project has shifted from the promise of digital transformation to the reality of being in survival mode and the cautions of mistaking core services for innovation.
2) We then take a look at Salesforce’s earnings, where Data Cloud is the powerhouse of the earnings and CIOs are proving the value of data with their pocketbooks and the power of the purse. We break down the following earnings chart.
3) We saw NVIDIA’s success in AI as a sign that CIO budgets are changing. Find out about the new trend of CIO-led budgets that are independent of the traditional IT budget, as well as Charles’ framework of separating the efficiency bucket from the innovation bucket from his first book, The Quantum Age of IT.
4) One of the hottest companies in enterprise software sees a big leadership change, as Frank Slootman steps down from Snowflake and Sridhar Ramaswamy from the Neeva acquisition takes over. We discuss why this is a good move to avoid stagnation and discuss how to deal with bets in innovation.
5) Continuing the trend of innovation management, we talk about what Apple’s exit from the electric car business means in terms of managing innovative moonshots and what CIOs often miss in terms of setting metrics around leadership and innovation culture.
6) And finally, we talk about the much-covered Google Gemini AI mistakes. We think the errors themselves fall within the range of issues that we’ve seen from other large language models, but we caution why the phrase “Eliminate Bias” should be a massive red flag for AI projects.
Today, we are kicking off a new podcast with our Chief Analyst Hyoun Park and The DX Report’s Charles Araujo. Together, we are looking at the biggest events in enterprise technology and discussing how they affect the CIO’s office. We’re planning to bring our decades of experience as market observers, hands-on technical skills, and strategic advisors not only to show what the big stories were, but also the big lessons that IT and other technical executives need to take from these stories.
If you want to learn how to avoid the biggest mistakes that CIOs will make across strategy, succession planning, innovation, budgeting, and integrating AI into existing technology environments, subscribe to our new video and podcast efforts! Check out Week 1 right here.
This week, we discuss the philosophy of fast-rising Zoho, an enterprise application company that has grown over 10x over the past decade to become a leading CRM and analytics software provider on a global basis, based on our recent visit to Zoho’s Analyst Event in McAllen, Texas. Find out how “transnational localism” has supported Zoho’s global rocket-ship growth and what it means for managing your own international team.
We then TWIET about the Apple Vision Pro and how Apple, Meta, Microsoft, and Google have been pushing the boundaries of extended reality over the past decade as well as what this means for enterprise IT organizations based on Apple’s track record.
And finally we confront the complexities of Cloud FinOps and managing cloud costs at a time when layoffs are common in the tech world and IT economics and financial management are becoming increasingly complex.
The past week has been “Must See TV” in the tech world as AI darling OpenAI provided a season of Reality TV to rival anything created by Survivor, Big Brother, or the Kardashians. Although I often joke that my professional career has been defined by the well-known documentaries of “The West Wing,” “Pitch Perfect,” and “Silicon Valley,” I’ve never been a big fan of the reality TV genre as the twists and turns felt too contrived and over the top… until now.
Starting on Friday, November 17th, when The Real Housewives of OpenAI started its massive internal feud, every organization working on an AI project has been watching to see what would become of the overnight sensation that turned AI into a household concept with the massively viral ChatGPT and related models and tools.
So, what the hell happened? And, more importantly, what does it mean for the organizations and enterprises seeking to enter the Era of AI and the combination of generative, conversational, language-driven, and graphic capabilities that are supported with the multi-billion parameter models that have opened up a wide variety of business processes to natural language driven interrogation, prioritization, and contextualization?
The Most Consequential Shake Up In Technology Since Steve Jobs Left Apple
The crux of the problem: OpenAI, the company we all know as the creator of ChatGPT and the technology provider for Microsoft’s Copilots, was fully controlled by another entity, OpenAI, the nonprofit. This nonprofit was driven by a mission of creating general artificial intelligence for all of humanity. The charter starts with: “OpenAI’s mission is to ensure that artificial general intelligence (AGI) – by which we mean highly autonomous systems that outperform humans at most economically valuable work – benefits all of humanity. We will attempt to directly build safe and beneficial AGI, but will also consider our mission fulfilled if our work aids others to achieve this outcome.”
There is nothing in there about making money. Or building a multi-billion dollar company. Or providing resources to Big Tech. Or providing stakeholders with profit other than highly functional technology systems. In fact, further in the charter, it even states that if a competitor shows up with a project that is doing better at AGI, OpenAI commits to “stop competing with and start assisting this project.”
So, that was the primary focus of OpenAI. If anything, OpenAI was built to prevent large technology companies from being the primary force and owner of AI. In that context, four of the six board members of OpenAI decided that OpenAI’s efforts to commercialize technology were in conflict with this mission, especially given the speed of going to market and the shortcuts being made from a governance and research perspective.
As a result, they fired CEO Sam Altman and removed President and COO Greg Brockman, who had been responsible for architecting the resources and infrastructure associated with OpenAI, from the board. That action begat rapid chaos for this 700+ employee organization, which was allegedly about to see an $80 billion valuation.
A Convoluted Timeline For The Real Housewives Of Silicon Valley
Friday: OpenAI’s board fires its CEO and kicks its president Greg Brockman off the board. CTO Mira Murati, who had been informed the night before, is appointed interim CEO. Brockman steps down later that day.
Saturday: Employees are up in arms and several key employees leave the company, leading to immediate action by Microsoft going all the way up to CEO Satya Nadella to basically ask “what is going on? And what are you doing with our $10 billion commitment, you clowns?!” (Nadella probably did not use the word clowns, as he’s very respectful.)
Sunday: Altman comes into the office to negotiate with Microsoft and OpenAI’s investors. Meanwhile, OpenAI announces a new CEO, Emmett Shear, who was previously the CEO of video game streaming company Twitch. Immediately, everyone questions what he’ll actually be managing as employees threaten to quit, refuse to show up to an all-hands meeting, and show Altman overwhelming support on social media. A tumultuous Sunday ends with an announcement by Microsoft that Altman and Brockman will lead Microsoft’s AI group.
Monday: A letter shows up asking the current board to resign, with over 700 employees threatening to quit and move to the Microsoft subsidiary run by Altman and Brockman. Co-signers include board member and OpenAI chief scientist Ilya Sutskever, who cast one of the four board votes to oust Altman in the first place.
Tuesday: The new CEO of OpenAI, Emmett Shear, states that he will quit if the OpenAI board can’t provide evidence of why they fired Sam Altman. Late that night, Sam Altman officially comes back to OpenAI as CEO with a new board consisting initially of Bret Taylor, former co-CEO of Salesforce, Larry Summers (former Secretary of the Treasury), and Adam d’Angelo, one of the former board members who voted to fire Sam Altman. Helen Toner of Georgetown and Tasha McCauley, both seen as effective altruists who were firmly aligned with OpenAI’s original mission, both step down from the board.
Wednesday: Well, that’s today as I’m writing this out. Right now, there are still a lot of questions about the board, the current purpose of OpenAI, and the winners and losers.
Keep In Mind As We Consider This Wild And Crazy Ride
OpenAI was not designed to make money. Firing Altman may have been defensible from OpenAI’s charter perspective to build safe General AI for everyone and to avoid large tech oligopolies. But if that’s the case, OpenAI should not have taken Microsoft’s money. OpenAI wanted to have its cake and eat it as well with a board unused to managing donations and budgets at that scale.
Was firing Altman even the right move? One could argue that productization puts AI into more hands and helps prepare society for an AGI world. To manage and work with superintelligences, one must first integrate AI into one’s life and the work Altman was doing was putting AI into more people’s hands in preparation for the next stage of global access and interaction with superintelligence.
At the same time, the vast majority of current OpenAI employees are on the for-profit side and signed up, at least in part, because of the promise of a stock-based payout. I’m not saying that OpenAI employees don’t also care about ethical AI usage, but even the secondary market for OpenAI at a multi-billion dollar valuation would help pay for a lot of mortgages and college bills. But tanking the vast majority of employee financial expectations is always going to be a hard sell, especially if they have been sold on a profitable financial outcome.
OpenAI is expensive to run: probably well over 2 billion dollars per year, including the massive cloud bill. Any attempt to slow down AI development or reduce access to current AI tools needs to be tempered by the financial realities of covering costs. It is amazing to think that OpenAI’s board was so naïve as to believe they could just get rid of the man who was, in essence, their top fundraiser and revenue officer without worrying about how to cover that gap.
Primary research and go-to-market activities are very different. Normally there is a church-and-state wall between these two areas precisely because they are, to some extent, at odds with each other. The work needed to make new, better, safer, and fundamentally different technology often conflicts with the activity used to sell existing technology. This is a division that has been well established for decades in academia, where patented or protected technologies are monetized by a separate for-profit organization.
The Effective Altruism movement: this is an important catchphrase in the world of AI, and it means more than its dictionary definition. It refers to a specific view of developing artificial general intelligence (superintelligences beyond human capacity) with the goal of supporting a population of 10^58 humans millennia from now. This is one extreme of the AI world, countered by a “doomer” mindset holding that AI will be the end of humanity.
Practically, most of us are in between with the understanding that we have been using superhuman forces in business since the Industrial Revolution. We have been using Google, Facebook, data warehouses, data lakes, and various statistical and machine learning models for a couple of decades that vastly exceed human data and analytic capabilities.
And the big drama question for me: What is Adam d’Angelo still doing on the board as someone who actively caused this disaster to happen? There is no way to get around the fact that this entire mess was due to a board-driven coup, and he was part of the coup. It would be surprising to see him stick around for more than a few months, especially now that Bret Taylor is on board, whose experience and capabilities overlap d’Angelo’s but at greater scale.
The 13 Big Lessons We All Learned about AI, The Universe, and Everything
First, OpenAI needs better governance in several areas: board, technology, and productization.
Once OpenAI started building technologies with commercial repercussions, the delineation between the non-profit work and the technology commercialization needed to become much clearer. This line should have been crystal clear before OpenAI took a $10 billion commitment from Microsoft and should have been advised by a board of directors that had any semblance of experience in managing conflicts of interest at this level of revenue and valuation. In particular, Adam d’Angelo as the CEO of a multi-billion dollar valued company and Helen Toner of Georgetown should have helped to draw these lines and make them extremely clear for Sam Altman prior to this moment.
Investors and key stakeholders should never be completely surprised by a board announcement. The board should only take actions that have previously been communicated to all major stakeholders. Risks need to be defined beforehand when they are predictable. This conflict was predictable and, by all accounts, had been brewing for months. If you’re going to fire a CEO, make sure your stakeholders support you and that you can defend your stance.
“You come at the king, you best not miss.” As Omar said in the famed show “The Wire,” you cannot try to take out the head of an organization unless your follow-up plan is tight.
OpenAI’s copyright challenges feel similar to when Napster first became popular as a file-sharing platform for music. We had to collectively figure out how to avoid digital piracy while maintaining the convenience that Napster provided for finding music and sharing other files. Although the productivity benefits make generative AI worth experimenting with, always make sure that you have a backup process or capability for anything supported with generative AI.
OpenAI and other generative AI firms have also run into challenges regarding the potential copyright issues associated with their models. Although a number of companies are indemnifying clients from damages associated with any outputs associated with their models, companies will likely still have to stop using any models or outputs that end up being associated with copyrighted material.
From Amalgam Insights’ perspective, the challenge with some foundational models is that training data is used to build the parameters or modifiers associated with a model. This means that the copyrighted material is being used to help shape a product or service that is being offered on a commercial basis. Although there is no legal precedent either for or against this interpretation, the initial appearance of this language fits with the common sense definitions of enforcing copyright on a commercial basis. This is why the data collating approach that IBM has taken to generative AI is an important differentiator that may end up being meaningful.
Don’t take money if you’re not willing to accept the consequences. It is a common non-profit mistake to accept funding and simply hope it won’t affect the research. But the moment research is primarily dependent on a single funder, there will always be compromises. Make sure those compromises are expressly delineated in advance and determine whether the research is worth doing under those circumstances.
Licensing nonprofit technologies and resources should not paralyze the core non-profit mission. Universities do this all the time! Somebody at OpenAI, both in the board and at the operational level, should be a genius at managing tech transfer and commercial utilization to help avoid conflicts between the two institutions. There is no reason that the OpenAI nonprofit should be hamstrung by the commercialization of its technology because there should be a structure in place to prevent or minimize conflicts of interest other than firing the CEO.
Second, there are also some important business lessons here.
Startups are inherently unstable. Although OpenAI is an extreme example, there are many other more prosaic examples of owners or boards who are unpredictable, uncontrollable, volatile, vindictive, or otherwise unmanageable in ways that force businesses to close up shop or to struggle operationally. This is part of the reason that half of new businesses fail within five years.
Loyalty matters, even in the world of tech. It is remarkable that Sam Altman was backed by over 90% of his team on a letter saying that they would follow him to Microsoft. This includes employees who were on visas and were not independently rich, but still believed in Sam Altman more than the organization that actually signed their paychecks. Although it never hurts to also have Microsoft’s Kevin Scott and Satya Nadella in your corner and to be able to match compensation packages, this also speaks to the executive responsibility to build trust by creating a better scenario for your employees than others can provide. In this Game of Thrones, Sam Altman took down every contender to the throne in a matter of hours.
Microsoft has most likely pulled off a transaction that ends up being all but an acquisition of OpenAI. It looks like Microsoft will end up with the vast majority of OpenAI’s talent as well as an unlimited license to all technology developed by OpenAI. Considering that OpenAI was about to support a stock offering with an $80 billion market cap, that’s quite the bargain for Microsoft. In particular, Bret Taylor’s ascension to the board is telling, as his work at Twitter was in the best interests of the shareholders of Twitter in accepting and forcing an acquisition that was well in excess of the publicly-held value of the company. Similarly, Larry Summers, as the former president of Harvard University, is experienced in balancing non-profit concerns with the extremely lucrative business of Harvard’s endowment and intellectual property. As this board is expanded to as many as nine members, expect more of a focus on OpenAI as a for-profit entity.
With Microsoft bringing OpenAI closer to the fold, other big tech companies that have made recent investments in generative AI now have to bring those partners closer to the core business. Salesforce, NVIDIA, Alphabet, Amazon, Databricks, SAP, and ServiceNow have all made big investments in generative AI and need to lock down their access to generative AI models, processors, and relevant data. Everyone is betting on their AI strategy to be a growth engine over the next five years and none can afford a significant misstep.
Satya Nadella’s handling of the situation shows why he is one of the greatest CEOs in business history. This weekend could have easily been an immense failure and a stock price toppling event for Microsoft. But in a clutch situation, Satya Nadella personally came in with his executive team to negotiate a landing for OpenAI and to provide a scenario that would be palatable both to the market and for clients. The greatest CEOs have both the strategic skills to prepare for the future and the tactical skills to deal with an immediate crisis. Nadella passes with flying colors on all counts and proves once again that behind the velvet glove of Nadella’s humility and political savvy is an iron fist of geopolitical and financial power that is deftly wielded.
Carefully analyze AI firms that may have similar charters for supporting safe AI, and potentially slowing down or stopping product development for the sake of a higher purpose. OpenAI ran into challenges in trying to interpret its charter, but the charter’s language is pretty straightforward for anyone who did their due diligence and took the language seriously. Assume that people mean what they say. Also, consider that there are other AI firms that have similar philosophies to OpenAI, such as Anthropic, which spun off of OpenAI for reasons similar to the OpenAI board reasoning of firing Sam Altman. Although it is unlikely that Anthropic (or large firms with safety-first philosophies like Alphabet and Meta’s AI teams) will fall apart similarly, the charters and missions of each organization should be taken into account in considering their potential productization of AI technologies.
AI is still an emerging technology. Diversify, diversify, diversify. It is important to diversify your portfolio and make sure that you are able to duplicate experiments on multiple foundation models when possible. The marginal cost of supporting duplicate projects pales in comparison to the need to support continuity and gain greater understanding of the breadth of AI output possibilities. With the variety of large language models, software vendor products, and machine learning platforms on the market, this is a good time to experiment with multiple vendors while designing process automation and language analysis use cases.
Over the past year, Generative AI has taken the world by storm as a variety of large language models (LLMs) appeared to solve a wide variety of challenges based on basic language prompts and questions.
A wide range of market-leading LLMs is currently available.
The biggest question regarding all of these models is simple: how to get the most value out of them. And most users fail because they are unused to the most basic concept of a large language model: they are designed to be linguistic copycats.
As Andrej Karpathy of OpenAI stated earlier this year, “The hottest new programming language is English.”
And we all laughed at the concept for being clever as we started using tools like ChatGPT, but most of us did not take this seriously. If English really is being used as a programming language, what does this mean for the prompts that we use to request content and formatting?
I think we haven’t fully thought out what it means for English to be a programming language, either in terms of how to “prompt” the model to do things correctly, or in how to think about the assumptions an LLM carries as a massive block of text that is otherwise disconnected from the real world, lacking the sensory input or broad-based access to new data that would allow it to “know” current language trends.
Here are 8 core language-based concepts to keep in mind when using LLMs or considering the use of LLMs to support business processes, automation, and relevant insights.
1) Language and linguistics tools are the relationships that define the quality of output: grammar, semantics, semiotics, taxonomies, and rhetorical flourishes. There is a big difference between asking to “write 200 words on Shakespeare” vs. “elucidate 200 words on the value of Shakespeare as a playwright, as a poet, and as a philosopher based on the perspective of Edmund Malone and the English traditions associated with blank verse and iambic pentameter as a preamble to introducing the Shakespeare Theatre Association.”
I have been a critic of the quality that LLMs provide from an output perspective, most recently in my perspective “Instant Mediocrity: A Business Guide to ChatGPT in the Enterprise.” https://amalgaminsights.com/2023/06/06/instant-mediocrity-a-business-guide-to-chatgpt-in-the-enterprise/. But I readily acknowledge that the outputs one can get from LLMs will improve. Expert context will provide better results than prompts that lack subject matter knowledge.
2) Linguistic copycats are limited by the rules of language that are defined within their model. Asking linguistic copycats to provide language formats or usage that are not commonly used online or in formal writing will be a challenge. Poetic structures or textual formats referenced must reside within the knowledge of the texts that the model has seen. However, since Wikipedia is a source for most of these LLMs, a contextual foundation exists to reference many frequently used frameworks.
3) Linguistic copycats are limited by the frequency of vocabulary usage that they are trained on. It is challenging to get an LLM to use expert-level vocabulary or jargon to answer prompts because the LLM will typically settle for the most commonly used language associated with a topic rather than elevated or specific terms.
This propensity to choose the most common language associated with a topic makes it difficult for LLM-based content to sound unique or have specific rhetorical flourishes without significant work from the prompt writer.
4) Take a deep breath and work on this. Linguistic copycats respond to the scope, tone, and role mentioned in a prompt. A recent study found that, across a variety of LLMs, the prompt that provided the best answer for solving a math problem and providing instructions was not a straightforward request such as “Let’s think step by step,” but “Take a deep breath and work on this problem step-by-step.”
Using a language-based perspective, this makes sense. The explanations of mathematical problems that include some language about relaxing or not stressing would likely be designed to be more thorough and make sure the reader was not being left behind at any step. The language used in a prompt should represent the type of response that the user is seeking.
5) Linguistic copycats only respond to the prompt and the associated prompt engineering, custom instructions, and retrieval data that they can access. It is easy to get carried away with the rapid creation of text that LLMs provide and mistake this for something resembling consciousness, but the response being created is a combination of grammatical logic and the computational ability to take billions of parameters into account across possibly a million or more different documents. This ability to access relationships across 500 or more gigabytes of information is where LLMs do truly have an advantage over human beings.
6) Linguistic robots can only respond based on their underlying attention mechanisms that define their autocompletion and content creation responses. In other words, linguistic robots make judgment calls on which words are more important to focus on in a sentence or question and use that as the base of the reply.
For instance, in the sentence “The cat, who happens to be blue, sits in my shoe,” linguistic robots will focus on the subject “cat” as the most important part of this sentence. The cat “happens to be,” implies that this isn’t the most important trait. The cat is blue. The cat sits. The cat is in my shoe. The words include an internal rhyme and are fairly nonsensical. And then the next stage of this process is to autocomplete a response based on the context provided in the prompt.
7) Linguistic robots are limited by a token limit for inputs and outputs. Typically, a token is about four characters, while the average English content word is about 6.5 characters (https://core.ac.uk/download/pdf/82753461.pdf). So, when an LLM talks about supporting 2048 tokens, that works out to about 1260 words, or roughly four pages of text. In general, think of a page of content as about 500 tokens and a minute of discussion as about 200 tokens when judging how much content is being created or entered into an LLM.
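The arithmetic above can be sketched as a small budgeting helper. This is an illustrative sketch only, using the rough heuristics cited in this section (about 4 characters per token, about 6.5 characters per English word, about 500 tokens per page); a real tokenizer such as a BPE-based one will produce different counts for any given text.

```python
# Rough token budgeting based on the heuristics cited above.
# These are estimates, not real tokenizer output.

def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Estimate the token count of a piece of text from its length."""
    return round(len(text) / chars_per_token)

def tokens_to_words(tokens: int, chars_per_word: float = 6.5,
                    chars_per_token: float = 4.0) -> int:
    """Convert a token budget into an approximate English word budget."""
    return round(tokens * chars_per_token / chars_per_word)

def tokens_to_pages(tokens: int, tokens_per_page: int = 500) -> float:
    """Convert a token budget into an approximate page count."""
    return tokens / tokens_per_page

# A 2048-token context works out to roughly 1260 words, or about 4 pages.
print(tokens_to_words(2048))   # ≈ 1260
print(tokens_to_pages(2048))   # ≈ 4.1
```

The practical use of a helper like this is deciding, before sending a long document to an LLM, whether it will fit in the context window or needs to be split into chunks.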
8) Every language is dynamic and evolves over time. LLMs that provide good results today may provide significantly better or worse results tomorrow simply because language usage has changed or because there are significant changes in the sentiment of a word. For instance, the English-language word “trump” acquired political relationships and emotional associations after 2015 that are now standard in 2023 usage. Be aware of these changes across languages and time periods in making requests, as seemingly innocuous and commonly used words can quickly gain new meanings that may not be obvious, especially to non-native speakers.
Conclusion
The most important takeaway of the now-famous Karpathy quote is to take it seriously not only in terms of using English as a programming language to access structures and conceptual frameworks, but also to understand that there are many varied nuances built into the usage of the English language. LLMs often incorporate these nuances even if those nuances haven’t been directly built into models, simply based on the repetition of linguistic, rhetorical, and symbolic language usage associated with specific topics.
From a practical perspective, this means that the more context and expertise provided in asking an LLM for information and expected outputs, the better the answer that will typically be provided. As one writes prompts for LLMs and seeks the best possible response, Amalgam Insights recommends providing the following details in any prompt:
Tone, role, and format: This should include a sentence that shows, by example, the type of tone you want. It should explain who you are or who you are writing for. And it should provide a form or structure for the output (essay, poem, set of instructions, etc…). For example, “OK, let’s go slow and figure this out. I’m a data analyst with a lot of experience in SQL, but very little understanding of Python. Walk me through this so that I can explain this to a third grader.”
Topic, output, and length: Most prompts start with the topic or only include the topic. But it is important to also include perspective on the size of the output. Example, “I would like a step by step description of how to extract specific sections from a text file into a separate file. Each instruction should be relatively short and comprehensible to someone without formal coding experience.”
Frameworks and concepts to incorporate: This can include any commonly known process or structure that is documented, such as an Eisenhower Diagram, Porter’s Five Forces, or the Overton Window. As a simple example, one could ask, “In describing each step, compare each step to the creation of a pizza, wherever possible.”
Combining these three sections together into a prompt should provide a response that is encouraging, relatively easy to understand, and compares the code to creating a pizza.
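As a minimal sketch of how these three sections might be assembled programmatically, the helper below combines them into a single request. The function name and structure are illustrative, not a prescribed API; the example strings mirror the ones used above, and the resulting prompt could be sent to any LLM as one user message.

```python
# Assemble a prompt from the three recommended sections:
# (1) tone, role, and format, (2) topic, output, and length,
# and (3) frameworks and concepts to incorporate.

def build_prompt(tone_role_format: str, topic_output_length: str,
                 frameworks: str) -> str:
    """Combine the three recommended prompt sections into one request."""
    return "\n\n".join([tone_role_format, topic_output_length, frameworks])

prompt = build_prompt(
    "OK, let's go slow and figure this out. I'm a data analyst with a lot "
    "of experience in SQL, but very little understanding of Python. Walk me "
    "through this so that I can explain this to a third grader.",
    "I would like a step by step description of how to extract specific "
    "sections from a text file into a separate file. Each instruction "
    "should be relatively short and comprehensible to someone without "
    "formal coding experience.",
    "In describing each step, compare each step to the creation of a "
    "pizza, wherever possible.",
)
print(prompt)
```

Keeping the three sections as separate arguments makes it easy to swap out the framework or audience while reusing the same topic, which is useful when testing how different framings change an LLM's output.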
In adapting business processes based on LLMs to make information more readily available for employees and other stakeholders, be aware of these biases, foibles, and characteristics associated with prompts as your company explores this novel user interface and user experience.
We are in a time of transformational change as the awareness of artificial intelligence (AI) grows during a time of global uncertainty. The labor supply chain is fluctuating quickly, the commodity supply chain is in turmoil, and the economy is on rocky ground as rising interest rates, a higher cost of money, and geopolitical strife create currency challenges. In a world where technology dominates business, the global economic foundation is shifting, and the worlds of finance and talent are up for grabs, Workday stepped up to hold its AI and ML Innovation Summit to show a way forward for the customers of its software platform, including a majority of the Fortune 500 that already use Workday as a system of record.
The timing of this summit will be remembered as a time of rapid AI change, with new major announcements happening daily. OpenAI’s near-daily announcements regarding working with Microsoft, launching ChatGPT, supporting plug-ins, and asking for guidance on AI governance are transforming the general public’s perception of AI. Google and Meta are racing to translate their many years of research in AI into products. Generative AI startups already focused on legal, contract, decision intelligence, and revenue intelligence use cases are happy to ride the coattails of this hype. Universities are showing how to build large language models such as Stanford’s Alpaca. And existing machine learning and AI companies such as Databricks are showing how to build custom models based on existing data for a fraction of the cost needed to build GPT.
In the midst of this AI maelstrom, Workday decided to chase the eye of the hurricane and put stakes in the ground on its current approach to innovation, AI, and ML. From our perspective, we were interested both in the executive perspective and in the product innovation associated with this Brave New World of AI.
Enter the Co-CEO – Carl Eschenbach
Workday’s AI and ML Innovation Summit commenced with an introduction of the partners and customers present at the event, followed by a conversation between Workday’s Co-CEOs, Aneel Bhusri and Carl Eschenbach, where Eschenbach talked about his focus on innovation and growth for the company. Eschenbach is not new to Workday, having been on its board during his time at Sequoia Capital, where he also led investments in Zoom, UIPath, and Snowflake. Having seen his work at VMware, Amalgam Insights was interested to see Eschenbach take this role and help Workday evolve its growth strategy from an executive level. From the start, both Bhusri and Eschenbach made it clear that this Co-CEO team is intended to be a temporary arrangement, with Eschenbach taking the reins in 2024 while Bhusri becomes the Executive Chair of Workday.
Eschenbach emphasized in this session that Workday has significant opportunities in providing a full platform solution, and its international reach requires additional investment both in technology and go-to-market efforts. Workday partners are essential to the company’s success and Eschenbach pointed out a recent partnership with Amazon to provide Workday as a private offering that can use Amazon Web Service contract dollars to purchase Workday products once the work is scoped by Workday. Workday executives also mentioned the need for consolidation, which is one of Amalgam Insights’ top themes and predictions for enterprise software for 2023. The trend in tech is shifting toward best-in-suite and strategic partnering opportunities rather than a scattered best-in-breed approach that may sprawl across tens or even hundreds of vendors.
These Co-CEOs also explored what Workday was going to become over the next three to five years to take the next stage of its development after Bhusri evolved Workday from an HR platform to a broader enterprise software platform. Bhusri sees Workday as a system of record that uses AI to serve customer pain points. He posits that ERP is an outdated term, but that Workday is currently categorized as a “services ERP” platform in practice when Workday is positioned as a traditional software vendor. Eschenbach adds that Workday is a management platform across people and finances on a common multi-tenant platform.
From Amalgam Insights’ perspective, this is an important positioning as Workday is establishing that its focus is on two of the highest value and highest cost issues in the company: skills and money. Both must exist in sufficient quantities and quality for companies to survive.
The Future of AI and Where Workday Fits
We then heard from Co-President Sayan Chakraborty, who took the stage to discuss the “Future of Work” across machine learning and generative AI. As a member of the National Artificial Intelligence Advisory Committee, Chakraborty was expected by the analysts in the audience to have a strong mastery of the issues and challenges Workday faces in AI, and this expectation was confirmed by the ensuing discussion.
Chakraborty started by saying that Workday is monomaniacally focused on machine learning to accelerate work and points out that we face a cyclical change in the nature of the working age across the entire developed world. As we deal with a decline in the percentage of “working-age” adults on a global scale, machine learning exists as a starting point to support structural challenges in labor structures and work efforts.
To enable these efforts, Chakraborty brought up the technology, data, and application platforms based on a shared object model, starting with the Workday Cloud Platform and including analytics, Workday experience, and machine learning as specific platform capabilities. Chakraborty referenced FDIC requests for daily liquidity reporting as a capability now being asked for in light of banking failures and stresses such as the recent Silicon Valley Bank failure.
Workday has four areas of differentiation in machine learning: data management, autoML (automated machine learning, including feature abstraction), federated learning, as well as a platform approach. Workday’s advantage in data is stated across quantity, quality associated with a single data model, structure and tenancy, and the amplification of third-party data. As a starting point, this approach allows Workday to support models based on regional or customer-specific data supported by transfer learning. At this point, Chakraborty was asked why Workday has Prism in a world of Snowflake and other analytic solutions capable of scrutinizing data and supporting analytic queries and data enrichment. Prism is currently positioned as an in-platform capability that allows Workday to enrich its data, which is a vital capability as companies face the battle for context across data and analytic outputs.
Amalgam Insights will dig into this in greater detail in our recommendations and suggestions, but at this point we’ll note that this set of characteristics is fairly uncommon at the global software platform level and presents opportunities to execute based on recent AI announcements that Workday’s competitors will struggle to execute on.
Workday currently supports federated machine learning at scale out to the edge of Workday’s network, which is part of Workday’s differentiation in providing its own cloud. This ability to push the model out to the edge is increasingly important for supporting geographically specific governance and compliance needs (dubbed by some as the “Splinternet”), as Workday has seen increased demand for supporting regional governance requests, leading to separate US and European Union machine learning training teams each working on regionally created data sources.
Chakraborty compared Workday’s platform-based machine learning approach, which yields a variety of features from shared infrastructure, to traditional feature-building approaches in which each feature requires its own separate data generation process. The canonical Workday example is the Skills Cloud platform, where Workday currently maintains close to 50,000 canonical skills and 200,000 recognized skills and synonyms, each scored for skill strength and validity. The Skills Cloud is a foundational differentiator for Workday and one that Amalgam Insights references regularly as an example of a differentiated syntactic and semantic metadata layer that can provide context to a business trying to understand why and how data is used.
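The Skills Cloud itself is proprietary, but the basic pattern of such a semantic layer, mapping free-text skill mentions to canonical skills with a validity score, can be sketched roughly as follows. All skill names and scores here are invented for illustration and do not reflect Workday’s actual schema.

```python
# Toy sketch of a skills taxonomy layer: free-text skill mentions are
# normalized to canonical skills carrying a strength/validity score.
# Hypothetical data -- not Workday's actual Skills Cloud.

CANONICAL_SKILLS = {
    # synonym (lowercased) -> (canonical skill, validity score 0-1)
    "python": ("Python Programming", 0.95),
    "py": ("Python Programming", 0.70),
    "people management": ("People Leadership", 0.90),
    "managing people": ("People Leadership", 0.85),
}

def normalize_skill(raw: str):
    """Map a raw skill mention to (canonical skill, score), or None if unrecognized."""
    return CANONICAL_SKILLS.get(raw.strip().lower())

def profile_skills(raw_skills):
    """Build a deduplicated canonical profile, keeping the highest score per skill."""
    profile = {}
    for raw in raw_skills:
        match = normalize_skill(raw)
        if match:
            skill, score = match
            profile[skill] = max(profile.get(skill, 0.0), score)
    return profile

print(profile_skills(["Python", "py", "Managing People"]))
# -> {'Python Programming': 0.95, 'People Leadership': 0.85}
```

The design point is that downstream analytics always see the canonical skill, not the raw synonym, which is what makes a skills layer usable as a consistent dimension for analysis.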
Workday mentioned six core principles for AI and ML, including a focus on people and customers, designed to ensure that its machine learning capabilities are developed through ethical approaches. In this context, Chakraborty also discussed generative AI and large language models, which are starting to provide human-like outputs across voice, art, and text. He pointed to a pivotal change around 2006, when NVIDIA opened its GPUs to general-purpose computing, applying the matrix math used to constantly redraw images to broader computational problems and making massively large parameter models possible. Chakraborty also pointed to the 2017 Google paper on transformers (“Attention Is All You Need”), which processed sequences in parallel rather than sequentially and led to the massive models that cloud infrastructure could support. The roughly 1,000x growth in model scale over two years is unprecedented even by tech standards. Models have reached a level of scale where they show emergent capabilities on challenges they have not been explicitly trained on. This does not imply consciousness, but it does demonstrate the ability to analyze complex patterns and systems behavior. Amalgam Insights notes that this reflects a common trend in technology, where new approaches often take years to come to market, only to be treated as instant successes once they reach mainstream adoption.
Chakraborty pointed out that the weaknesses of GPT include bad results and a lack of explainability, bad actors (including IP and security concerns), and the potential Environmental, Social, and Governance costs. As with much new technology, GPT and other generative AI models consume significant energy and resources without any mechanism for throttling down in a way that is both sustainable and still functional. From a practical perspective, this means current AI systems will be challenged to maintain uptime as these new services attempt to benchmark and define their workloads and resource utilization. These issues are especially acute in enterprise technology, where the perceived reliability of enterprise software is often based on near-perfect accuracy in calculating traditional data and analytic outputs.
Amalgam Insights noted in our review of ChatGPT that factual accuracy and intellectual property attribution have been largely missing from recent AI technologies, which have struggled to understand or contextualize a question based on its surroundings or past queries. The likes of Google and Meta have focused on zero-shot learning for casual identification of trends and images rather than contextually specific object identification and topic governance aligned to specific skills and use cases. This is an area where both plug-ins and the work of enterprise software companies will be vital over the course of this year in augmenting the grammatically correct responses of generative AI with the facts and defined taxonomies used to conduct business.
Amalgam Insights also found it interesting that Chakraborty predicted the future of models would include high-quality data and smaller models custom-built for industry and vertical use cases. This is an important statement because the primary discussion in current AI circles is often about how bigger is better and how models compete on having hundreds of billions of parameters. In reality, we have reached a level of complexity where a well-trained model will provide responses that reflect the data it was trained on. The real work now is in better contextualizing answers and in separating quantitative and factual requests from textual and grammatical requests that may arrive in the same question. The challenge of producing accurate tone and grammar is very different from knowing how to transform an eigenvector and return accurate quantitative output. Generative AI tends to be good at grammar but is challenged by quantitative and fact-based queries whose answers may differ from its grammatical autocompletion logic.
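One pragmatic way to act on this separation is to route requests before they reach a generative model: send quantitative and factual questions to a deterministic data layer and reserve the language model for textual work. A crude heuristic router, purely illustrative (real systems would use an intent classifier rather than patterns like these), might look like:

```python
import re

# Crude illustrative router: decide whether a request is primarily
# quantitative (route to a deterministic data/analytics layer) or
# textual (route to a generative language model). Heuristics only;
# the patterns and route names are hypothetical.

QUANT_PATTERNS = [
    r"\bhow (many|much)\b",
    r"\b(sum|total|average|median|count|percent|percentage)\b",
    r"\d",  # any digit suggests a numeric request
]

def route_query(query: str) -> str:
    q = query.lower()
    if any(re.search(p, q) for p in QUANT_PATTERNS):
        return "data_layer"       # deterministic calculation path
    return "language_model"       # grammatical/textual generation path

print(route_query("How many employees joined in Q3?"))    # data_layer
print(route_query("Draft a welcome note for new hires"))  # language_model
```

The point of the sketch is the architecture, not the patterns: keeping arithmetic out of the autocompletion path is what prevents grammatically fluent but numerically wrong answers.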
Chakraborty pointed out that reinforcement learning has proven more useful than either supervised or unsupervised training for Workday’s machine learning, as it allows models to learn from user behavior rather than forcing direct user interaction. This focus both provides efficiency at scale and takes advantage of Workday’s existing platform activity. The combination of reinforcement-based training and Workday’s ownership of its Skills Cloud should provide a sizable advantage over most of the enterprise AI world in aligning general outputs to the business world.
Amalgam Insights notes here that another challenge of the AI discussion is the attempt to create an ‘unbiased’ approach for training and testing models, when the more useful task is to document the existing biases and assumptions being made. The sooner we can move from the goal of being “unbiased” to the goal of accurately documenting bias, the better we will be able to trust the AI we use.
Recommendations for the Amalgam Community on Where Workday is Headed Next
This summit provided Amalgam Insights with a great deal of food for thought from Workday’s top executives. The introductory remarks summarized above were followed by insight and guidance on Workday’s product roadmap across both the HR and finance categories where Workday has focused its product efforts, as well as visibility into the go-to-market and positioning approaches that Workday plans to pursue in 2023. Although many of these discussions were held under a non-disclosure agreement, Amalgam Insights will use this guidance to help companies understand what is next from Workday and what customers should request. From an AI perspective, Amalgam Insights believes customers should push Workday in the following areas based on Workday’s ability to deliver and provide business value.
Use the data model to both create and support large language models (LLMs). The data model is a fundamental advantage in setting up machine learning and chat interfaces. Done correctly, this becomes a form of “Ask Me Anything” for the company, grounded in key corporate data and the culture of the organization, and an opportunity to use trusted data to provide relevant advice and guidance to the enterprise. As one of the largest and most trusted data sources in the enterprise software world, Workday has an opportunity to quickly build, train, and deploy models on behalf of customers, either directly or through partners. With this capability, “Ask Workday” may quickly become the HR and finance equivalent of “Ask Siri.”
Use Workday’s Skills Cloud as a categorization dimension for analyzing the business, similar to cost center, profit center, geographic region, and other standard categories. Workforce optimization is not just about reducing TCO; it is about aligning skills, predicting succession and future success potential, and assessing the market availability of skills. Looking at the long-term value of attracting valuable skills and avoiding obsolete ones represents an immense change for the Future of Work. Amalgam Insights believes that Workday’s market-leading Skills Cloud gives smart companies an opportunity to analyze their business below the employee level and actually ascertain the resources and infrastructure associated with specific skills.
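As a hypothetical illustration of analyzing “below the employee level,” workforce cost can be pivoted from employees onto the skills they carry, turning skill into an analytic dimension like cost center. The employees, costs, and the even-split allocation rule below are all invented for illustration, not a Workday API or methodology.

```python
# Hypothetical sketch: allocate each employee's cost across the skills
# they carry, so the business can be analyzed by skill rather than only
# by employee or cost center. Invented data and allocation rule.

employees = [
    {"name": "A", "cost": 120_000, "skills": ["Python Programming", "Data Analysis"]},
    {"name": "B", "cost": 90_000,  "skills": ["Data Analysis"]},
    {"name": "C", "cost": 150_000, "skills": ["People Leadership", "Data Analysis"]},
]

def cost_by_skill(emps):
    """Split each employee's cost evenly across their skills, then total by skill."""
    totals = {}
    for e in emps:
        share = e["cost"] / len(e["skills"])
        for s in e["skills"]:
            totals[s] = totals.get(s, 0.0) + share
    return totals

print(cost_by_skill(employees))
# -> {'Python Programming': 60000.0, 'Data Analysis': 225000.0, 'People Leadership': 75000.0}
```

Even this toy pivot shows the kind of question a skills dimension unlocks: which skills the organization is actually funding, independent of org chart.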
Workday still has room to improve in consolidation, close, and treasury management capabilities. In light of the recent Silicon Valley Bank failure and the relatively shaky ground that regional and niche banks currently stand on, daily bank risk is now an issue companies must take into account as they check whether they can access cash and pay their bills. Finance departments want to consolidate their work in one place and augment a shared version of the truth with individualized assumptions. Workday has an opportunity to innovate in finance because comprehensive vendors in this space are often outdated or so rigidly customized at the per-customer level that their deployments cannot scale out in a financially responsible way, as the Intelligent Data Core allows. And Workday’s direct non-ERP planning competitors mostly lack Workday’s scale, both in its customer base and in its consultant partner relationships, to provide comprehensive financial risk visibility across macroeconomic, microeconomic, planning, budgeting, and forecasting capabilities. Expect Workday to keep tightening this integrated finance, accounting, and sourcing experience over time and to pursue more proactive alerts and recommendations that support strategic decisions.
Look for Workday Extend to be adopted more widely by technology vendors to create custom solutions. The current gallery of solutions is only a glimpse of Extend’s potential for establishing Workday-based custom apps. It only makes sense for Workday to be a platform for apps and services as it wins more and more enterprise data. From an AI perspective, Amalgam Insights expects Workday Extend to increasingly work with plugins (including ChatGPT plugins), data models, and machine learning models to guide the context, data quality, hyperparameterization, and prompts needed for Workday to be an enterprise AI leader. Amalgam Insights also expects this to become a way for developers in the Workday ecosystem to take greater advantage of the machine learning and analytics capabilities within Workday that are sometimes overlooked as companies seek to build models and gain insights into enterprise data.