Learning Elastic’s Machine Learning Story at Elastic{ON} in Boston

Why would a Data Science and Machine Learning analyst attend the road show of a company best known for search? In early September, Amalgam Insights attended Elastic{ON} in Boston, MA. Prior to the show, my understanding of Elastic was that they were primarily a search engine company. Still, the inclusion of a deep dive into machine learning interested me, and I was also curious to learn more about their security analytics, which were heavily emphasized in the agenda.

In exploring Elastic’s machine learning capabilities, I got a deep dive with Rich Collier, the Senior Principal Solutions Architect and Machine Learning specialist. Elastic acquired Prelert, an incident management company with unsupervised machine learning capabilities, in September 2016 with the goal of incorporating real-time behavioral analytics into the Elastic Stack. In the two years since, integrating Prelert has grown Elastic’s ability to act on time-series data anomalies found in the Elasticsearch data store, and the company now offers an extension to the Elastic Stack called “Machine Learning” as part of its Platinum-level SaaS offerings.

Elastic Machine Learning users no longer have to define rules to identify abnormal time-series data, nor do they even need to code their own models – the Machine Learning extension analyzes the data to understand what “normal” looks like in that context, including what kind of shifts can be expected over different time spans, from point-to-point variation all the way up to seasonal patterns. From that, it learns when to throw an alert on encountering atypical data in real time, whether that data is log data, metrics, analytics, or a sudden upsurge in search requests for “$TSLA.” Learning from the data rather than configuring blunt rules yields more granular precision and fewer alerts on false positives.
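
To make the idea concrete, here is a minimal sketch of seasonality-aware anomaly detection. This illustrates the general technique only – it is not Elastic’s actual implementation, which uses its own unsupervised models – and the sample data, period, and threshold are invented:

```python
import statistics

def detect_anomalies(series, period, threshold=3.0):
    """Flag points that deviate sharply from the seasonal norm.

    `series` is a list of observations at a fixed sampling interval;
    `period` is the number of samples in one seasonal cycle
    (e.g., 24 for hourly data with a daily pattern).
    """
    anomalies = []
    for i, value in enumerate(series):
        # Collect all prior observations at the same phase of the cycle.
        history = [series[j] for j in range(i % period, i, period)]
        if len(history) < 3:
            continue  # not enough history yet to define "normal"
        mean = statistics.mean(history)
        stdev = statistics.stdev(history) or 1e-9
        if abs(value - mean) / stdev > threshold:
            anomalies.append((i, value))
    return anomalies

# A daily pattern repeated four times, with one injected spike.
baseline = [10, 12, 15, 20, 18, 14] * 4
baseline[21] = 90  # anomalous surge
print(detect_anomalies(baseline, period=6))  # → [(21, 90)]
```

The key point mirrors the product’s pitch: “normal” is learned per phase of the cycle from the data itself, rather than encoded as a single hand-tuned rule.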

The configuration for the Machine Learning extension is simple and requires no coding experience; front-line business analysts can customize the settings via pull-down menus and other graphical form fields to suit their needs. To simplify the setup process even further, Elastic offers a list of “machine learning recipes” on their website for several common use cases in IT operations and security analytics; given how graphically oriented the Elastic stack is, I wouldn’t be surprised to see these “recipes” implemented as default configuration options in the future. Doing so would simplify the configuration from several minutes of tweaking individual settings to selecting a common profile in a click or two.

Elastic also stated that one long-term goal is to “operationalize data science for everyone.” At the moment, that’s a fairly audacious claim for a data science platform, let alone a company best known for search and search analytics. One relevant initiative mentioned in the keynote was the debut of the Elastic Common Schema, a common set of fields for ingesting data into Elasticsearch. Standardizing data collection makes it easier to correlate and analyze data through these relational touch points, and it opens up the potential for data science initiatives in the future, possibly through partnerships or acquisitions. But Elastic is not trying to be a general-purpose data science company right now; it is building on its core of search, logging, security, and analytics, and its machine learning ventures are likely to fall within that context. Currently, that offering is anomaly detection on time-series data.
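
To illustrate what a common schema buys you, here is a hypothetical sketch of normalizing a vendor-specific log record onto shared field names so that events from different sources can be correlated on common keys. The target fields echo the flavor of the Elastic Common Schema, but the source record and the mapping are invented for illustration:

```python
def normalize(raw):
    """Map a vendor-specific record onto common field names."""
    return {
        "@timestamp": raw["time"],
        "source.ip": raw["client_addr"],
        "event.category": raw["kind"],
        "message": raw["msg"],
    }

# An invented firewall event in its native, vendor-specific shape.
firewall_event = {
    "time": "2018-09-05T14:03:22Z",
    "client_addr": "10.0.0.7",
    "kind": "network",
    "msg": "connection denied",
}
print(normalize(firewall_event)["source.ip"])  # → 10.0.0.7
```

Once every source is normalized this way, a query like “all events from 10.0.0.7 in the last hour” works the same regardless of which product emitted the record.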

Providing users who aren’t data scientists with the ability to do anomaly detection on time series data may be just one step in one category of data modeling, but having that sort of specialized tool accessible to data and business analysts would help organizations better understand the “periodicity” of typical data. Retailers could track peaks and valleys in sales data to understand purchasing patterns, for example, while security analysts could focus on responding to anomalies without having to define what anomalous data looks like ahead of time as a collection of rules.

Elastic’s focus on making this one specific machine learning tool accessible to non-data-scientists reminded me of Google’s BigQuery ML initiative – take one very specific type of machine learning task and operationalize it for easy use by data and business analysts to address common business queries. Then, once that tool is perfected, move on to building the next one.

Improving the quality of the data acquired and stored in Elasticsearch will be key to improving the user experience. I spoke with Steve Kearns, the Senior Director of Product Management at Elastic, who delivered the keynote speech with a sharp focus on “scale, speed, and relevance” for improving search results. Better search data can be used to optimize the machine learning applied to that data. Because the Machine Learning extension focuses on anomaly detection across time-series data – data Elasticsearch specializes in collecting, such as logs – it can support more accurate analysis and better business results for data-driven organizations.

Overall, it was intriguing to see how machine learning is being incorporated into IT solutions that aren’t directly supporting data science environments. Enabling growth in the use of machine learning tactics effectively spreads the use of data across an organization, bringing companies closer to the advantages of the data-driven ideal. Elastic’s Machine Learning capability potentially opens up a specific class of machine learning to a broader spectrum of Elastic users without requiring them to acquire coding and statistics backgrounds; this positions Elastic as a provider of a specific type of machine learning service today, and makes it more plausible to consider the company a broader machine learning provider in the future.

EPM at a Crossroads: Big Data Solutions

Key Stakeholders: Chief Information Officers, Chief Financial Officers, Chief Operating Officers, Chief Digital Officers, Chief Technology Officers, Accounting Directors and Managers, Sales Operations Directors and Managers, Controllers, Finance Directors and Managers, Corporate Planning Directors and Managers

Analyst-Recommended Solutions: Adaptive Insights, a Workday Company, Anaplan, Board, Domo, IBM Planning Analytics, OneStream, Oracle Planning and Budgeting, SAP Analytics Cloud

In 2018, the Enterprise Performance Management market is at a crossroads. This market has emerged from a foundation of financial planning, budgeting, and forecasting solutions designed to support basic planning and has evolved as the demands for business planning, risk and forecasting management, and consolidation have increased over time. In addition, the EPM market has expanded as companies from the financial consolidation and close markets, business performance management markets, and workflow and process automation markets now play important roles in effectively managing Enterprise Performance.

In light of these challenges, Amalgam Insights is tracking six key areas where Enterprise Performance Management is fundamentally changing: Big Data, Robotic Process Automation, API connectivity, Analytics and Data Science, Vertical Solutions, and Design Thinking for User Experience.

Supporting Big Data for Enterprise Performance Management

Amalgam Insights has identified two key drivers repeatedly mentioned by finance departments seeking to support Big Data in Enterprise Performance Management. First, EPM solutions must support larger stores of data over time to fully analyze financial data and the plethora of additional business data needed to support strategic business analysis. This has become increasingly important as enterprises now face billion-row tables and outgrow the traditional cubes and datamarts used to manage basic financial data. The sheer scale of financial and commerce-related transactional data requires a Big Data approach at the enterprise level to support timely analysis of planning, consolidation, close, risk, and compliance.

In addition, these large data sources need to integrate with other data sources and references to support integrated business planning to align finance planning with sales, supply chain, IT, and other departments. As the CFO is increasingly asked to be not only a financial leader, but a strategic leader, she must have access to all relevant business drivers and have a single view of how relevant sales, support, supply chain, marketing, operational, and third-party data are aligned to financial performance. Each of these departments has its own large store of data that the strategic CFO must also be able to access, allocate, and analyze to guide the business.

New EPM solutions must evolve beyond traditional OLAP cubes toward hybrid data structures that handle the immense scale and variety of data involved. Amalgam notes that EPM solutions focused on large data problems take a variety of relational, in-memory, columnar, cloud computing, and algorithmic approaches to define categories on the fly and to store, structure, and analyze financial data.

To support these large stores of data and manage them effectively from a financial, strategic, and analytic perspective, Amalgam Insights recommends the following companies, which have been innovative in supporting immense and varied planning and budgeting data environments, based on briefings and discussions held in 2018:

  • Adaptive Insights, a Workday Company
  • Anaplan
  • Board
  • Domo
  • IBM Planning Analytics
  • OneStream
  • Oracle Planning and Budgeting
  • SAP Analytics Cloud

Adaptive Insights

Adaptive Insights announced Elastic Hypercube, an in-memory, dynamic caching and scaling solution, in July 2018. Amalgam Insights saw a preview of this technology at Adaptive Live and was intrigued by the efficiency it provides to models: selectively recalculating only the dependent changes as a model is edited, using a dynamic caching approach that consumes memory and computational cycles only when data is being accessed, and supporting both tabular and cube data structures. This data format will also be useful to Adaptive Insights, as a Workday company, in building out the various departmental planning solutions that will be accretive to Workday’s positioning as an HR and ERP solution after Workday’s June acquisition (covered in June in our Market Milestone).

Anaplan

Anaplan’s Hyperblock is an in-memory engine combining columnar, relational, and OLAP approaches. This technology is the basis of Anaplan’s platform and allows Anaplan to support large planning use cases. By developing composite dimensions, Anaplan users can pre-build a broad array of combinations that can be used to repeatably deploy analytic outputs. As noted in our March blog, Anaplan has been growing quickly based on its ability to rapidly support new use cases. In addition, Anaplan has recently filed its S-1 to go public.

Board

Board goes to market both as an EPM and as a general business intelligence solution. Its core technology is the Hybrid Bitwise Memory Pattern (HBMP), a proprietary in-memory data management solution designed to algorithmically map each bit of data and store this map in-memory. In practice, this approach lets many users access and edit information without lagging or processing delays. It also lets Board choose which aspects of the data to keep in-memory or handle dynamically, prioritizing computing assets.

Domo

Domo describes its Adrenaline engine as an “n-dimensional, highly concurrent, exo-scale, massively parallel, and sub-second data warehouse engine” to store business data. This is accompanied by VAULT, Domo’s data lake to support data ingestion and serve as a single store of record for business analysis. Amalgam Insights covered the Adrenaline engine as one of Domo’s “Seven Samurai” in our March report Domo Hajimemashite: At Domopalooza 2018, Domo Solves Its Case of Mistaken Identity. Behind the buzzwords, these technologies allow Domo to provide executive reporting capabilities across a wide range of departmental use cases in near-real time. Although Domo is not a budgeting solution, it is focused on portraying enterprise performance for executive consumption and should be considered for organizations seeking to gain business-wide visibility to key performance metrics.

IBM Planning Analytics

IBM Planning Analytics runs on Cognos TM1 OLAP in-memory cubes. To increase performance, these cubes use sparse memory management, in which missing values are ignored and empty values are not stored. In conjunction with IBM’s practice of caching analytic outcomes in-memory, this allows IBM to outperform standard OLAP approaches, and the approach has been validated at scale by a variety of IBM Planning Analytics clients. Amalgam Insights presented on the value of IBM’s approach at IBM Vision 2017 from both a data perspective and a user interface perspective, the latter of which will be covered in a future blog.
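
As an illustration of the general idea of sparse storage – not TM1’s internal format – a cube can keep only its populated cells, so memory cost tracks the data that actually exists rather than the full dimensional cross-product:

```python
class SparseCube:
    """A toy sparse cube: only populated cells are stored."""

    def __init__(self):
        self._cells = {}  # (dim1, dim2, ...) -> value

    def set(self, coords, value):
        if value:  # empty/zero values are simply not stored
            self._cells[coords] = value
        else:
            self._cells.pop(coords, None)

    def get(self, coords):
        # Missing cells read as empty without occupying memory.
        return self._cells.get(coords, 0)

    def populated(self):
        return len(self._cells)

cube = SparseCube()
cube.set(("2018", "Q3", "Revenue"), 125000)
cube.set(("2018", "Q3", "Travel"), 0)  # empty: costs nothing to "store"
print(cube.get(("2018", "Q4", "Revenue")), cube.populated())  # → 0 1
```

Because planning cubes are typically mostly empty (few account/period/entity combinations ever hold a value), this style of storage is what makes billion-cell models tractable in memory.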

OneStream

OneStream provides in-memory processing and stateless servers to support scale, but its approach to analytic scale is based on virtual cubes and extensible dimensions. These allow organizations to keep building dimensions over time that tie back to a corporate level, and to create logical views of data on top of a larger data store to support specific financial tasks such as budgeting, tax reporting, or financial reporting. OneStream’s approach is focused on financial use rather than general business planning.

Oracle Planning and Budgeting Cloud

Oracle Planning and Budgeting Cloud Service is based on Oracle Hyperion, the market leader in Enterprise Performance Management from a revenue perspective. The Oracle Cloud is built on Oracle Exalogic Elastic Cloud, Oracle Exadata Database Machine, and the Oracle Database, which together give the Planning and Budgeting application a strong in-memory foundation with an algorithmic approach to managing storage, compute, and networking. This approach effectively allows Oracle to support planning models at massive scale.

SAP Analytics Cloud

SAP Analytics Cloud, SAP’s umbrella product for planning and business intelligence, uses SAP HANA, an in-memory columnar relational database, to provide real-time access to data and to accelerate both modeling and analytic outputs based on all relevant transactional data. This approach is part of SAP’s broader HANA strategy to encapsulate both analytic and transactional processing in a single database, effectively making all data reportable, modelable, and actionable. SAP has also recently partnered with Intel to support Optane DC persistent memory for enterprises requiring larger persistent data stores for analytic use.

This blog is part of a multi-part series on the evolution of Enterprise Performance Management and key themes that the CFO office must consider in managing holistic enterprise performance: Big Data, Robotic Process Automation, API connectivity, Analytics and Data Science, Vertical Solutions, and Design Thinking for User Experience. If you would like to set up an inquiry to discuss EPM or provide a vendor briefing on this topic, please contact us at info@amalgaminsights.com to set up time to speak.

Last Blog: EPM at a Crossroads
Next Blog: Robotic Process Automation and Machine Learning in EPM

The “Unlearning” Dilemma in Learning and Development

Key Stakeholders: IT Managers, IT Directors, Chief Information Officers, Chief Technology Officers, Chief Digital Officers, IT Governance Managers, and IT Project and Portfolio Managers.

Top Takeaways: One critical barrier to full adoption is the poorly addressed problem of unlearning. Anytime a new piece of software achieves some goal with a set of motor behaviors that is at odds with some well-established, habitized motor program, the learner struggles, evidences frustration, and is less likely to effectively onboard. Learning scientists can remedy this problem and can help IT professionals build effective training tools.

Introduction

In my lifetime I have seen amazing advances in technology. I remember the days of overhead projectors, typewriters, and white-out. Now our handheld “phone” can project a high-resolution image, convert spoken word into text, and autocorrect errors.

The corporate world is dominated by new technologies that are making our lives easier and our workplaces more effective. Old technologies are being updated regularly, and new innovative, disruptive technologies are replacing them. It is an exciting time. Even so, this fast-paced technological change requires continuous learning and unlearning and this is where we often fall short.

Despite the fact that we all have experience with new technologies that have made our lives easier, and we applaud those advances, a large proportion of us (myself included) fear the introduction of a new technology or the next release of our favorite piece of software. We know that new and improved technologies generally make us more productive and make our lives easier, at least in the long run, but at the same time we dread that new software because the training usually “sucks”.

This is a serious problem. New and improved technology development is time-consuming and expensive. The expectation is that all users will onboard effectively and reap the benefits of the new technology together. When many users actively avoid the onboarding process (yours truly included), the result is poor adoption, underutilization of these powerful tools, and reduced profits. I refer to this as the “adoption gap”.

Why does this “adoption gap” exist and what can we do about it?

There are two reasons for the “adoption gap” and both can be addressed if training procedures are developed that embrace learning science—the marriage of psychology and brain science.

First, all too often a software developer, or someone on their team or an adjacent one, is tasked with developing a training manual or training tool. Although these individuals are amazing at their jobs, they are not experts in training content development and delivery, and they often build ineffective tools. The relevant stories I can tell from my 25-year career as a University Professor are many. I can’t count the number of times I received an email explaining (in paradoxically excruciating yet incomprehensible detail) how a new version of an existing piece of software had changed, or how some new technology was going to be unveiled that would replace a tool I currently used. I was provided with a schedule of “training workshops” to attend, or a link to some unintelligible “training manual”.

Because the training materials were not easy to use and not immediately actionable, my colleagues and I did everything we could to stick with the legacy (and currently workable) solution or to get small-group training from someone who knew how to use the technology. Although this worked for us, it was highly sub-optimal from the perspective of the University and it adversely affected productivity.

When the training tool is ineffective, employees will fail to use the new technology effectively and will quickly revert to the legacy software that works and feels comfortable. I discussed this problem in a recent blog post, where I offered some specific suggestions for improving technology training and development: surveying users, embracing microlearning, using multiple media, and incorporating knowledge checks. Even so, that blog ignores the second major cause of the “adoption gap” and obstacle to IT onboarding: unlearning.

The Importance of Unlearning

In this report I take a deeper dive and explore a problem that is poorly understood in Learning and Development. This is the central problem of unlearning. Simply put, all too often a novel technology or software release will introduce new ways of achieving a goal that are very different from, or at odds with, the old way of achieving that goal.

Consider a typical office setting in which an employee spends several hours each day using some technology (e.g., Photoshop, a word processor, some statistics package). The employee has been using this technology on a daily basis for months, possibly years, and using it has become second nature. The employee has become so proficient with the tool that their behaviors and motor interactions with the technology have become habitized. The employee does not even have to “think” about how to cut and paste, or conduct a simple regression, or add a layer to their project. They have performed these actions so many times that they have developed “muscle memory”. The brain develops “muscle memory” and habits that reduce the working memory and attention load, leaving those valuable resources for more complex problems like interpreting the outcome of a regression or visualizing the finalized Photoshop project that we have in mind.

Now suppose that the new release changes the motor program associated with cutting and pasting, the drop-down menu selections needed to complete a regression, or the button clicks to add, delete, or move project layers. In this case, your habits and muscle memory are telling you one thing, but you have to do something else with the new software. This is challenging, frustrating, and demanding of working memory and attention. One has to use inhibitory control so as not to initiate the habit, instead thinking hard to initiate the new set of behaviors, and one has to do this over and over again until the new behaviors become habitized. This takes time and effort. Many (yours truly included) will abandon the process and fall back on what “works”. Unlearning habits is much more challenging than learning new behaviors.

Key Recommendations to Support Unlearning

This is an area where learning science can be leveraged. An extensive body of psychological and brain science research (much of my own) has been conducted over the past several decades that provides specific guidelines on how to solve the problem of unlearning. Here are a few suggestions for addressing this problem.

Recommendation 1: Identify Technology Changes That Impact High-Frequency “Habits”. When onboarding a new software solution, or when an existing solution is upgraded, the IT team should audit usage to identify high-frequency functionality and monitor users’ behavior. Users should also be encouraged to provide feedback on their interactions with the software and to identify functions that they believe have changed. Of course, IT personnel could be proactive and audit high-frequency behaviors before purchasing new software; this information could guide the purchasing process. IT professionals must understand that although the technology as a whole may improve with each new release, there is often at least one high-frequency task that changes and requires extensive working memory and attentional resources to overcome. Every such instance is a chance for an onboarding failure.

Recommendation 2: Apply Spaced Training and Periodic Testing to Unlearn High-Frequency Habits. Once changed high-frequency habits are identified, unlearning procedures should be incorporated to speed the unlearning and new-learning process. Spaced training and periodic testing can be implemented to speed this process. Details can be found here, but briefly: learning (and unlearning) procedures should be developed that target these habits for change. These training modules should be introduced across training sessions spaced over time (usually hours or days apart). Each training session should be preceded by a testing protocol that identifies areas of weakness requiring additional training. This provides critical information for the learner and allows them to see objective evidence of progress. In short, habits cannot be overcome in a single training session. However, the speed of learning and unlearning can be increased when spaced training and testing procedures are introduced.
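
As a rough illustration of how test-first, spaced sessions might be scheduled, here is a hypothetical sketch; the pass mark, the interval-growth rule, and the dates are invented for illustration, not drawn from a specific study:

```python
from datetime import date, timedelta

def next_session(last_session, interval_days, test_score, pass_mark=0.8):
    """Return (next_date, next_interval) given a test score of 0..1.

    A passing score means the old habit is fading, so the next session
    is spaced further out; a failing score means the weakness needs
    re-drilling, so the learner trains again the next day.
    """
    if test_score >= pass_mark:
        interval_days *= 2  # space sessions out as the new habit takes hold
    else:
        interval_days = 1   # weakness found: retrain tomorrow
    return last_session + timedelta(days=interval_days), interval_days

when, gap = next_session(date(2018, 9, 10), 2, test_score=0.9)
print(when, gap)  # → 2018-09-14 4
```

The expanding intervals capture the core claim above: habits are not overcome in one session, but objective test results can pace the spacing so that effort concentrates on the behaviors that are still weak.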

Recommendation 3: Automate High-Frequency Tasks to Avoid the Need for Unlearning. The obvious solution to the learning and unlearning problem is to minimize the number of motor procedural changes across software releases or with new technology. A straightforward method is to ask employees which tasks they have locked into “muscle memory”. Once these are identified, software developers and experts in UI/UX could work to automate or optimize these processes, with tools such as optimized macros, machine learning-based optimization, or new functionality as the goal. The time saved onboarding users should be significant and the number of users abandoning the process should be minimized. Although aspirational, with the amount of “big data” available to developers and the rich psychological literature on motor behavior, this is a solvable problem. We simply need to recognize the problem that employees have been aware of for decades, and acknowledge that it must be solved.

By taking these recommendations into account, technology onboarding will become more efficient, technology users will become more efficient, and companies will be better positioned to extract maximum value from their investment in new and transformative technologies.

FloQast Supports ASC 606 Compliance by Providing a Multi-Book Close for Accountants

On September 11, 2018, FloQast announced multi-book accounting capabilities designed to help organizations support ASC 606-compliant financial closes by supporting dual reporting on revenue recognition and related expenses. As Amalgam Insights has covered in prior research, the ASC 606/IFRS 15 standards for recognizing revenue on subscription services are currently required for all public companies and will become the standard for private companies at the end of 2019.

Currently, FloQast supports multi-book accounting for Oracle NetSuite and Sage Intacct, two strong mid-market finance solutions that have invested in their subscription billing capabilities. This capability is available to FloQast Business, Corporate, and Enterprise customers at no additional cost. NetSuite’s 2015 acquisition of subscription billing solution Monexa eventually led to the launch of NetSuite SuiteBilling, while Sage Intacct developed its subscription billing capabilities organically in 2015.

Why This Matters For Accounting Teams

Currently, accounting teams compliant with ASC 606 are required to provide two sets of books with each financial close. Organizations seeking to accurately reflect their finances from both a legacy and a current perspective either need to duplicate efforts to provide compliant accounting outputs or use an accounting solution that accurately creates separate sets of close results. By simultaneously creating dual-level close outputs, organizations can avoid the challenge of creating detailed journal entries to explain discrepancies within a single close instance.

Recommendations for Accounting Teams with ASC 606 Compliance Requirements

This announcement has a couple of ramifications for mid-market enterprises and organizations that are either currently supporting ASC 606 as public companies or preparing to support ASC 606 as private companies.

First, Amalgam Insights believes that accounting teams using either Oracle NetSuite or Sage Intacct should adopt FloQast as a relatively low-cost solution to the challenge of duplicate ASC 606 closes. Currently, this functionality is most relevant to Oracle NetSuite and Sage Intacct customers with significant ASC 606 accounting challenges. To understand why, consider the basic finances of this decision.

Amalgam Insights estimates that, based on FloQast’s current pricing of $125 per month for business accounts or $150 per month for corporate accounts, FloQast will pay for itself in productivity gains in any accounting department that spends four or more man-hours per month creating duplicate closes. This return is in addition to the existing ROI associated with financial close that Amalgam Insights has previously tracked for FloQast customers in our Business Value Analysis, in which we found that FloQast customers interviewed saw a 647% ROI in their first year of deployment by accelerating and simplifying their close workflows and improving team visibility into the current status of financial close.
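
As a back-of-the-envelope check on that break-even claim: the $125-per-month price comes from the announcement, while the loaded hourly cost of an accountant is my assumption for illustration.

```python
monthly_price = 125    # FloQast business-tier price, per the text above
hourly_cost = 31.25    # assumed loaded cost per accountant hour

# Hours of duplicate-close work per month at which the tool pays for itself.
breakeven_hours = monthly_price / hourly_cost
print(breakeven_hours)  # → 4.0
```

At a higher assumed hourly cost, the break-even point drops below four hours, which is why the estimate is conservative for most finance departments.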

Second, accounting teams should generally expect to support multi-book accounting for the foreseeable future. Although ASC 606 is now a current standard, financial analysts and investors seeking to conduct historical analysis of a company for investment, acquisition, or partnership will want a consistent “apples-to-apples” comparison of finances across multiple years. Until your organization has three full years of audited financial statements under ASC 606, it will likely have to maintain multiple sets of books. Given that most public organizations started using ASC 606 in 2018, this means having a plan for multiple sets of books until 2020. For private organizations, this may mean an additional year or two, given that mandatory compliance starts at the end of 2019. Companies that avoid preparing for the reality of dual-level closes over the next couple of years will spend significant accountant hours on easily avoidable work.

If you would like to learn more about FloQast, the Business Value Analysis, or the current vendor solution options for financial close management, please contact Amalgam Insights at info@amalgaminsights.com.

AMALGAM INSIGHTS ISSUES “VENDOR SMARTLIST” OF SOFTWARE SOLUTIONS THAT HELP BUILD BETTER LEADERSHIP BRAINS

BOSTON, September 11, 2018 — A new report from industry analyst firm Amalgam Insights cites nine leaders in software solutions designed to effectively train the hard skills, people skills, and situational awareness of leadership.

The Vendor SmartList™, authored by Amalgam Insights Research Fellow Todd Maddox, Ph.D., takes on added significance given the increasing number of news reports about inappropriate conduct by corporate leaders or the lack of trained behavioral skills among a company’s employees.

“Companies must continuously educate their workers, top to bottom, about the ‘what,’ ‘how,’ and ‘feel’ of effective leadership by leveraging brain science to ensure that their behavior doesn’t land them in trouble,” Maddox says. “The companies in our report have succeeded in mapping these systems to the leadership processes needed in today’s business environment.”

Maddox lists nine companies as leaders, with comments about each company’s strength:
• CrossKnowledge: “Path to Performance uses digital solutions that can combine with facilitator training and a dedicated dashboard to provide effective leadership training paths.”
• Development Dimensions International: “DDI uses a broad range of online and offline solutions, tools for practice and simulation, and ‘what if’ scenarios to train leadership skills that meet tomorrow’s business challenges.”
• Fuse Universal: “(Its) use of science and analytics help clients build content that reflects their company DNA, vision and brand.”
• Grovo: “Its use of microlearning content and training programs that power modern learning and improve business outcomes.”
• Learning Technologies Group: “Its broad portfolio of providers, combined with its emphasis on rich scenario-based training and the importance of business outcomes.”
• Rehearsal: “The use of video role-play training that helps leaders improve their people skills through practice, coaching and collaboration.”
• Skillsoft: “Its combination of content and delivery tools that effectively train hard skills, people skills and situational awareness in leadership.”
• TalentQuest: “The use of personality and behavioral science to empower leaders to better manage, coach and develop their people.”
• Valamis: “Its use of technology and data science to build a system for each client that delivers learning aligned with the client’s business goals.”

Maddox cautions, however, that companies deploying any of the systems listed above, or others, must be prepared to do so on an ongoing basis. “One-off training does not work,” he notes.

The report is available for download at https://amalgaminsights.com/product/vendor-smartlist-2018s-best-leadership-training.

Webinar On Demand: Optimizing Leadership Training and Development by Leveraging Learning Science

On September 6, 2018, Amalgam Insights Learning Scientist Research Fellow Todd Maddox, Ph.D., presented a webinar focused on “The What, How, and Feel of Leadership Brain Training”.

By watching this talk, you can bring back to your organization a better understanding of how psychology and brain science can be leveraged to provide a roadmap for successfully training leaders and managers at all levels. In this era of digital transformation, where organizations rely increasingly on cross-functional and deeply collaborative teams, leadership is becoming more distributed and employees are taking on leadership roles much earlier in their careers.

Combine this with some of the recent corporate crises (#metoo, unconscious bias, discrimination) and effective leadership training becomes even more important. The overriding aim of this talk is to examine leadership training and development from a learning science perspective—the marriage of psychology and brain science—and to identify procedures that optimize leadership training.

To watch this webinar, view it in the embedded player below or click through to watch “The What, How, and Feel of Leadership Brain Training.”

Google Grants $9 Million in Google Cloud Platform Credits to Kubernetes Project

Tom Petrocelli, Amalgam Insights Research Fellow

Kubernetes has, in the span of a few short years, become the de facto orchestration software for containers. As recently as two years ago, more than a half-dozen orchestration tools were vying for the top spot; now there is only Kubernetes. Even the Linux Foundation’s other orchestrator project, CloudFoundry Diego, is starting to give way to Kubernetes. Part of the success of Kubernetes can be attributed to the support of Google. Kubernetes emerged out of Google, and the company has continued to bolster the project even as it fell under the auspices of the Linux Foundation’s CNCF.

On August 29, 2018, Google announced that it is giving $9M in Google Cloud Platform (GCP) credit to the CNCF Kubernetes project. This is being hailed by both Google and the CNCF as an announcement of major support. $9M is a lot of money, even if it is credits. However, let’s unpack this announcement a bit more and see what it really means.

Please register or log into your Free Amalgam Insights Community account to read more.

Data Science Platforms News Roundup, August 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Anaconda, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta.

Please register or log into your Free Amalgam Insights Community account to read more.

VMware Purchases CloudHealth Technologies to support Multicloud Enterprises and Continue Investing in Boston


Vendors and Solutions Mentioned: VMware, CloudHealth Technologies, Cloudyn, Microsoft Azure Cloud Cost Management, Cloud Cruiser, HPE OneSphere, Nutanix Beam, Minjar, Botmetric

Key Stakeholders: Chief Financial Officers, Chief Information Officers, Chief Accounting Officers, Chief Procurement Officers, Cloud Computing Directors and Managers, IT Procurement Directors and Managers, IT Expense Directors and Managers

Key Takeaway: Best-of-Breed vendors continue to emerge, new technologies are invented, existing services continue to evolve, vendors pursue new and innovative pricing and delivery models, cloud computing remains easy to procure, and IaaS spend doubles every three years. As a result, cloud computing management will only increase in complexity, and the need for Cloud Service Management will only grow. VMware has made a wise choice in buying into a rapidly growing market and now has a greater opportunity to support and augment complex, decentralized, and hybrid IT environments.

About the Announcement

On August 27, 2018, VMware announced a definitive agreement to acquire CloudHealth Technologies, a Boston-based startup company focused on providing a cloud operations and expense management platform that supports enterprise accounts across Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Please register or log into your Free Amalgam Insights Community account to read more.

Code-Free to Code-Based: The Power Spectrum of Data Science Platforms

Codeless to Code-Based

The spectrum of code-centricity on data science platforms ranges from “code-free” to “code-based.” Data science platforms frequently boast that they provide environments that require no coding yet remain code-friendly. Where a given platform falls along this spectrum affects who can successfully use a given data science platform, and what tasks they are…

Please register or log into your Free Amalgam Insights Community account to read more.