Webinar On Demand: Optimizing Leadership Training and Development by Leveraging Learning Science

On September 6, 2018, Amalgam Insights Learning Scientist Research Fellow Todd Maddox, Ph.D., presented a webinar focused on “The What, How, and Feel of Leadership Brain Training.”

By watching this talk, you can bring back to your organization a better understanding of how psychology and brain science can be leveraged to provide a roadmap for successfully training leaders and managers at all levels. In this era of digital transformation, where organizations rely increasingly on cross-functional, deeply collaborative teams, leadership is becoming more distributed and employees are taking on leadership roles much earlier in their careers.

Combine this with some of the recent corporate crises (#metoo, unconscious bias, discrimination) and effective leadership training becomes even more important. The overriding aim of this talk is to examine leadership training and development from a learning science perspective—the marriage of psychology and brain science—and to identify procedures that optimize leadership training.

To watch this webinar, use the embedded viewer below or click through to watch “The What, How, and Feel of Leadership Brain Training.”

Google Grants $9 Million in Google Cloud Platform Credits to Kubernetes Project

Tom Petrocelli, Amalgam Insights Research Fellow

Kubernetes has, in the span of a few short years, become the de facto orchestration software for containers. Just two years ago, more than a half-dozen orchestration tools were vying for the top spot; now there is only Kubernetes. Even the Linux Foundation’s other orchestrator project, CloudFoundry Diego, is starting to give way to Kubernetes. Part of the success of Kubernetes can be attributed to the support of Google. Kubernetes emerged out of Google, and the company has continued to bolster the project even after it came under the auspices of the Linux Foundation’s CNCF.

On August 29, 2018, Google announced that it is giving $9M in Google Cloud Platform (GCP) credits to the CNCF Kubernetes project. Both Google and the CNCF are hailing this as a major show of support, and $9M is a lot of money, even if it is in credits. However, let’s unpack this announcement a bit more and see what it really means.

Data Science Platforms News Roundup, August 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Anaconda, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta.

VMware Purchases CloudHealth Technologies to Support Multicloud Enterprises and Continue Investing in Boston

Vendors and Solutions Mentioned: VMware, CloudHealth Technologies, Cloudyn, Microsoft Azure Cloud Cost Management, Cloud Cruiser, HPE OneSphere, Nutanix Beam, Minjar, Botmetric

Key Stakeholders: Chief Financial Officers, Chief Information Officers, Chief Accounting Officers, Chief Procurement Officers, Cloud Computing Directors and Managers, IT Procurement Directors and Managers, IT Expense Directors and Managers

Key Takeaway: As best-of-breed vendors continue to emerge, new technologies are invented, existing services evolve, vendors pursue new and innovative pricing and delivery models, cloud computing remains easy to procure, and IaaS doubles every three years as a spend category, cloud computing management will only grow in complexity and the need for Cloud Service Management will only increase. VMware has made a wise choice in buying into a rapidly growing market and now has a greater opportunity to support and augment complex, decentralized, and hybrid IT environments.

About the Announcement

On August 27, 2018, VMware announced a definitive agreement to acquire CloudHealth Technologies, a Boston-based startup company focused on providing a cloud operations and expense management platform that supports enterprise accounts across Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

Code-Free to Code-Based: The Power Spectrum of Data Science Platforms

The spectrum of code-centricity on data science platforms ranges from “code-free” to “code-based.” Data science platforms frequently boast that they provide both environments that require no coding and environments that are code-friendly. Where a platform falls along this spectrum affects who can successfully use it, and what tasks they can perform at what level of complexity and difficulty. Codeless interfaces supply drag-and-drop simplicity and relatively quick answers to straightforward questions at the expense of customizability and power. Code-based interfaces require specialized coding, data, and statistics skills, but supply the flexibility and power to answer cutting-edge questions.

Codeless and hybrid code environments furnish end users who may lack a significant coding and statistics background with some level of data science capabilities. If a problem is relatively simple, such as a straightforward clustering question to identify customer personas for marketing, graphical interfaces provide the ability to string together data workflows from a pool of available algorithms without needing to know Python or other coding languages. Even for data scientists who do know how to code, pulling together relatively simple models in a drag-and-drop GUI can be faster than coding them manually; it also avoids typos and reduces time spent debugging technicalities, freeing the modeler to focus on the logic itself without distraction.
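
For a sense of what a drag-and-drop clustering node does under the hood, here is a minimal k-means sketch in plain Python. The toy customer data, the choice of k=2, and the deterministic initialization are purely illustrative; a real platform node wraps a far more robust implementation of the same idea:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its cluster."""
    centroids = points[:k]  # deterministic init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # Assign the point to the closest centroid (squared distance).
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            clusters[nearest].append(p)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two obvious "personas" in toy (visits, spend) data: low vs. high spenders.
data = [(1.0, 2.0), (1.5, 1.8), (1.2, 2.2), (8.0, 8.5), (8.3, 8.0), (7.9, 9.0)]
centroids, clusters = kmeans(data, k=2)
```

A codeless platform hides all of this behind a single canvas node with a “number of clusters” setting; the value of the GUI is that an analyst never has to write or debug the loop above.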

Answering a more advanced question may require some level of custom coding. Your data workflow may be constructed in a hybrid manner, composed of some pre-built models connected to nodes that can include bespoke code. This permits more adaptability of models, and makes them more powerful than those restricted solely to what a given data science platform supplies out of the box. However, even if a data science platform includes the option to include custom code in a hybrid model, taking advantage of this feature requires somebody with coding knowledge to create the code.
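
Such a hybrid workflow can be pictured as a chain of nodes, most prebuilt and one bespoke. The node names, the toy data, and the `custom_score` logic below are hypothetical, purely for illustration of the pattern:

```python
# Each "node" is just a function from rows to rows; a platform's
# drag-and-drop canvas wires prebuilt nodes like these together.
def clean(rows):
    # Prebuilt node: drop records with missing values.
    return [r for r in rows if all(v is not None for v in r.values())]

def normalize(rows):
    # Prebuilt node: scale 'spend' into the 0..1 range.
    top = max(r["spend"] for r in rows)
    return [{**r, "spend": r["spend"] / top} for r in rows]

def custom_score(rows):
    # Bespoke code node: domain-specific logic no prebuilt node offers.
    return [{**r, "score": r["spend"] * (2 if r["region"] == "EU" else 1)}
            for r in rows]

def run_pipeline(rows, nodes):
    # Execute the nodes in order, passing the data along the chain.
    for node in nodes:
        rows = node(rows)
    return rows

data = [
    {"spend": 40, "region": "EU"},
    {"spend": 100, "region": "US"},
    {"spend": None, "region": "EU"},  # incomplete record, dropped by clean()
]
result = run_pipeline(data, [clean, normalize, custom_score])
```

The point of the sketch is the shape, not the arithmetic: the first two nodes come from the platform’s palette, while `custom_score` is the one place someone on the team still has to write code.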

If the problem being addressed is complex enough, sharper coding, statistics, and data skills are necessary to create appropriately tailored models. At this level of complexity, a code-centric interactive development environment is necessary so that the data scientist can put their advanced skills into model construction and customization.

Data science platforms can equip data science users and teams with multiple interfaces for creating machine learning models. Which interfaces are included says a fair bit about the kind of end users a given platform aims to serve best, and the level of skill expected of the various members of your data science team. A fully inclusive data science platform includes both a GUI environment for data analysts to construct simple workflows (and for project managers and line-of-business stakeholders to understand what a model is doing from a high-level perspective) and a proper coding environment for data scientists to code more complex custom models.

Microsoft Loves Linux and FOSS Because of Developers

Tom Petrocelli, Amalgam Insights Research Fellow

For much of the past 30 years, Microsoft was famous for its hostility toward Free and Open Source Software (FOSS). They reserved special disdain for Linux, the Unix-like operating system that first emerged in the 1990s. Linux arrived on the scene just as Microsoft was beginning to batter Unix with Windows NT. The Microsoft leadership at the time, especially Steve Ballmer, viewed Linux as an existential threat. They approached Linux with an “us versus them” mentality that was, at times, rabid.

It’s not news that times have changed, and Microsoft has changed with them. Instead of looking to destroy Linux and FOSS, Microsoft CEO Satya Nadella has embraced them.

Microsoft has begun to meld with the FOSS community, creating Linux-Windows combinations that were unthinkable in the Ballmer era.

In just the past few years, Microsoft has taken a series of concrete steps to embrace Linux and the FOSS community.

Oracle GraphPipe: Expediting and Standardizing Model Deployment and Querying

On August 15, 2018, Oracle announced the availability of GraphPipe, a network protocol designed to transmit machine learning data between remote processes in a standardized manner, with the goal of simplifying the machine learning model deployment process. The spec is now available on Oracle’s GitHub, along with clients and servers that implement the spec for Python and Go (with a Java client to come), as well as a TensorFlow plugin that allows remote models to be included inside TensorFlow graphs.

Oracle’s goal with GraphPipe is to standardize the process of model deployment regardless of the frameworks utilized in the model creation stage.

Oracle Autonomous Transaction Processing Lowers Barriers to Entry for Data-Driven Business

I recently wrote a Market Milestone report on Oracle’s launch of Autonomous Transaction Processing, the latest in a string of Autonomous Database announcements by Oracle, following the Autonomous Data Warehousing announcement and the initial Autonomous Database announcement late last year.

This string of announcements takes advantage of Oracle’s investments in infrastructure, distributed hardware, data protection and security, and index optimization to create a new set of database services that seek to automate basic support and optimization capabilities. These announcements matter because, as transactional and data-centric business models continue to proliferate, both startups and enterprises should seek a data infrastructure that will remain optimized, secure, and scalable over time without becoming cost- and resource-intensive. With Oracle Autonomous Transaction Processing, Oracle provides its solution for an enterprise-grade data foundation for this next generation of businesses.

One of Amalgam Insights’ key takeaways in this research is the analyst estimate that Oracle ATP could reduce the cost of cloud-based transactional database management by 65% compared to similar services managed on Amazon Web Services. Frankly, companies that need to support net-new transactional databases that must be performant and scalable to support Internet of Things, messaging, and other new data-driven businesses should consider Oracle ATP and should do due diligence on Oracle Autonomous Database Cloud for reducing long-term Total Cost of Ownership. This estimate is based on the costs of a 10 TB Oracle database on a reserved instance on Amazon Web Services vs. a similar database on the Oracle Autonomous Database Cloud.

One of the most interesting aspects of the Autonomous Database in general, and one that Oracle will need to further explain, is how to guide companies with existing transactional databases and data warehouses to an Autonomous environment. It is no secret that every enterprise IT department is its own special environment driven by a combination of business rules, employee preferences, governance, regulation, security, and business continuity expectations. At the same time, IT is used to automation and rapid processing of some aspects of technology management, such as threat management and logs for patching and other basic transactions. But considering IT’s need for extreme customization, how does IT gain enough visibility into the automated decisions made in indexing and ongoing optimization?

At this point, Amalgam Insights believes that Oracle is pushing a fundamental shift in database management that will likely lead to the automation of manual technical management tasks. This change will be especially helpful for net-new databases, where organizations can use the Autonomous Database Cloud to help establish business rules for data access, categorization, and optimization. This is likely a no-brainer decision, especially for Oracle shops that are strained in their database management resources and seeking to handle more data for new transaction-based business needs or machine learning.

For established database workloads, enterprises will have to think about how, or whether, to transfer existing enterprise databases to the Autonomous Database Cloud. Although enterprises will likely gain some initial performance improvements and potentially reduce the support costs associated with large databases, they will also likely spend time double-checking the decisions and lineage associated with the Autonomous Database, both in test and in deployment settings. Amalgam Insights would expect Autonomous Database management to lead to indexing, security, and resource management decisions that may be more optimal than human-led decisions, but with a logic that may not be fully transparent to IT departments that have strongly defined and governed business rules and processes.

Although Amalgam Insights is convinced that Oracle Autonomous Database is the beginning of a new stage of Digitized and Automated IT, we also believe that a next step for Oracle Autonomous Database Cloud will be to create governance, lineage, and audit packages to support regulated industries, legislative demands, and documentation describing the business rules behind the Autonomous logic. Amalgam Insights expects that Oracle would want to keep specific algorithms and automation logic as proprietary trade secrets. But without some level of documentation that is traceable and auditable, large enterprises will have to conduct significant work on their own to determine whether they can transfer large databases to Oracle Autonomous Database Cloud, which Amalgam Insights would expect to be an important part of Oracle’s business model and cloud revenue projections going forward.

To read the full report with additional insights and details on the Oracle Autonomous Transaction Processing announcement, please download the full report on Oracle’s launch of Autonomous Transaction Processing, available at no cost for a limited time.

Area9: Leveraging Brain and Computer Science to Build an Effective Adaptive Learning Platform

I recently received an analyst briefing from Nick Howe, the Chief Learning Officer at Area9 Learning, which offers an adaptive learning solution. Although Area9 Learning was founded in 2006, I have known about area 9 since the 1980s, and it was first “discovered” in 1909. How is that possible?

In 1909, the German anatomist Korbinian Brodmann developed a numbering system for mapping the cerebral cortex based on the organization of cells (called cytoarchitecture). Brodmann area 9, or BA9, includes the prefrontal cortex (a region of the brain right behind the forehead), which is a critical structure in the brain’s cognitive skills learning system and functionally serves working memory and attention.

The cognitive skills learning system, prefrontal cortex (BA9), working memory and attention are critical for many aspects of learning, especially hard skills learning.

Growing Your Data Science Team: Diversifying Beyond Unicorns

If your organization already has a data scientist, but your data science workload has grown beyond their capacity, you’re probably thinking about hiring another data scientist. Perhaps even a team of them. But cloning your existing data scientist isn’t the best way to grow your organization’s capacity for doing data science.

Why not simply hire more data scientists? First, many of the tasks listed above fall well outside the core competency of data scientists’ statistical work, and other roles (some of which likely already exist in your organization) can perform them much more efficiently. Second, data scientists who can perform all of these tasks well are a rare find; hoping to find their clones in sufficient numbers on the open market is a losing proposition. Third, though your organization’s data science practice continues to expand, the amount of time your original domain expert can spend with the data scientist on a growing pool of data science projects does not; it’s time to start delegating some tasks to operational specialists.
