In early January, I started researching the service mesh market. To oversimplify, a service mesh is a way of providing the network services necessary for enterprise applications deployed using a microservices architecture. Since most microservices architectures are deployed within containers and, most often, managed and orchestrated using Kubernetes, service mesh technology will have a major impact on the adoption of these technologies.
As I began writing the original paper, I quickly realized that an explanation of service mesh technology was necessary to understand the dynamics of the service mesh market. Covering both a primer on service mesh and a market guide turned out to be too much for one paper; it was unbearably long. As a result, the paper was split into two papers, a Technical Guide and a Market Guide.
The Technical Guide is a quick primer on service mesh technology and how it is used to enhance microservices architectures, especially within the context of containers and Kubernetes. The Market Guide outlines the structure of the market for service mesh products and open source projects, discusses many of the major players, and addresses the current Istio versus Linkerd controversy. The latter is actually a non-issue that has taken on more importance than it should, given the nascence of the market.
The Technical Guide will be released next week, just prior to Cloud Foundry Summit. Even though service mesh companies seem to be focused on Kubernetes, anytime there is a microservices architecture, there will be a service mesh. This is true for microservices implemented using Cloud Foundry containers.
The Market Guide will be published roughly a month later, before Red Hat Summit and KubeCon + CloudNativeCon Europe, both of which I will be attending. Most of the vendors discussed in the Market Guide will be present at one conference or the other. If you are attending either, read the report beforehand so that you know who to talk to.
A service mesh is a necessary part of emerging microservices architectures. These papers will hopefully get you started on your journey to deploying one.
Note: Vendors interested in leveraging this research for commercial usage are invited to contact Lisa Lincoln (firstname.lastname@example.org).
Note: If you missed Parts I and II of this blog series, catch up and read Part I: The Problem, and Part II: The Brain Science. This is part of a four-blog series exploring the psychology and brain science behind the potential for extended reality tools to disrupt corporate Learning & Development.
xR Applications in Corporate L&D
The key ingredient of xR technology in corporate L&D is the experiential and immersive nature of the technology, which provides rich, coordinated contextual cues that lead to a sense of “presence”. You are either in a real-world experience augmented with information (Augmented Reality, or AR), or you are transported into a new virtual world (Virtual Reality, or VR). In both cases, experiential learning systems are engaged in synchrony with cognitive, behavioral, and emotional learning systems in the brain. I elaborate below.
I woke up last Tuesday (March 12, 2019) to find an interesting announcement in my inbox. NGINX, the software networking company, well known for its NGINX web server/load balancer, was being acquired by f5. f5 is best known for its network appliances which implement network security, load balancing, etc. in data centers.
The deal was described as creating a way to “bridge NetOps to DevOps.” That’s a good way to characterize the value of this acquisition. Networking has begun to evolve, or perhaps devolve, from the data center into the container cluster. Network services that used to be the domain of centralized network devices, especially appliances, may be found in small footprint software that runs in containers, often in a Kubernetes pod. It’s not that centralized network resources don’t have a place – you wouldn’t be able to manage the infrastructure that container clusters run on without them. Instead, both network appliances and containerized network resources, such as a service mesh, will be present in microservices architectures. By combining both types of network capabilities, f5 will be able to sell a spectrum of network appliances and software tailored toward different types of architectures. This includes the emerging microservices architectures that are quickly becoming mainstream. With NGINX, f5 will be well positioned to meet the network needs of today and of the future.
The one odd thing about this acquisition is that f5 already has an in-house project, Aspen Mesh, to commercialize very similar software. Aspen Mesh sells an Istio/Envoy distribution that extends the base features of the open source software. There is considerable overlap between Aspen Mesh and NGINX, at least in terms of capabilities; both provide software to enable a service mesh and to supply services to virtual networks. Sure, NGINX has market share (and brain share), but $670M is a lot of money when you already have something in hand.
NGINX and f5 say that they see the products as complementary, allowing f5 to build a continuum of offerings for different needs and scales. In this regard, I would agree with them. Aspen Mesh and NGINX address the same problems but in different ways. By combining NGINX with Aspen Mesh, f5 can cover more of the market.
Given the vendor support of Istio/Envoy in the market, it’s hard to imagine f5 just dropping Aspen Mesh. At present, f5 plans to operate NGINX separately but that doesn’t mean they won’t combine NGINX with Aspen Mesh in the future. Some form of coexistence is necessary for f5 to leverage all the investments in both brands.
The open source governance question may be a problem. There is nervousness within the NGINX community about its future. NGINX is based on its own open source project, one not controlled by any other vendors. The worry is that the NGINX community will run into the same issues that the Java and MySQL communities did after they were acquired by Oracle, which included changes to licensing and disputes over what constituted the open source software versus the enterprise, hence proprietary, software. f5 will have to reassure the NGINX community or risk a fork of the project or, worse, the community jumping ship to other projects. For Oracle, that is what led to MariaDB and a new rival to MySQL.
NGINX will give f5 both the opportunity and the technology to address emerging architectures that its current product lines cannot. Aspen Mesh will still need time to grow before it can capture the brain share and revenue that NGINX already has. For a mainstream networking company like f5, this acquisition gets it into the game more quickly, generates revenue immediately, and does so in a manner that is closer to its norm. This makes a lot of sense.
Now that the first acquisition has happened, the big question will be “who are the next sellers and the next buyers?” I would predict that we will see more deals like this one. We will have to wait and see.
Amalgam Insights Brain Science Research Fellow Todd Maddox has released new research on the Rehearsal website focused on the role of collaborative video-based practice in teaching people skills (also known as soft skills).
At IBM Think in February, IBM made several announcements around the expansion of Watson’s availability and capabilities, framing these announcements as the launch of “Watson Anywhere.” This piece is intended to provide guidance to data analysts, data scientists, and analytic professionals seeking to implement machine learning and artificial intelligence capabilities and evaluating the capabilities of IBM Watson’s AI and machine learning services for their data.
IBM declared that Watson is now available “anywhere” – both on-prem and in any cloud configuration, whether private, public, singular, multi-cloud, or a hybrid cloud environment. Data that needs to remain in place for privacy and security reasons can now have Watson microservices act on it where it resides. The obstacle of cloud vendor lock-in can be avoided by simply bringing the code to the data instead of vice versa. This ubiquity is made possible via a connector from IBM Cloud Private for Data that makes these services available via Kubernetes containers. New Watson services that will be available via this connector include Watson Assistant, IBM’s virtual assistant, and Watson OpenScale, an AI operation and automation platform.
Watson OpenScale is an environment for managing AI applications that puts IBM’s Trust and Transparency principles into practice around machine learning models. It builds trust in these models by providing explanations of how said models come to the conclusions that they do, permitting visibility into what’s seen as a “black box” by making their processes auditable and traceable. OpenScale also claims the ability to automatically identify and mitigate bias in models, suggesting new data for model retraining. Finally, OpenScale also provides monitoring capabilities of AI in production, validating ongoing model accuracy and health from a central management console.
Watson Assistant lets organizations build conversational bot interfaces into applications and devices. When interacting with end users, it can perform searches of relevant documentation, ask the user for further clarification, or redirect the user to a person for sufficiently complex queries. Its availability as part of Watson Anywhere permits organizations to implement and run virtual assistants in clouds outside of the IBM Cloud.
These new services join other Watson services currently available via the IBM Cloud Private for Data connector including Watson Studio and Watson Machine Learning, IBM’s programs for creating and deploying machine learning models. Additional Watson services being made available for Watson Anywhere later this year include Watson Knowledge Studio and Watson Natural Language Understanding.
In addition, IBM also announced IBM Business Automation with Watson, a future AI capability that will permit businesses to further automate existing work processes by analyzing patterns in workflows for commonly repeated tasks. Currently, this capability is available via limited early access; general availability is anticipated later in 2019.
Organizations seeking to analyze data “in place” have a new option with Watson services now accessible outside of the IBM Cloud. Data that must remain where it is for security and privacy reasons can now have Watson analytics processes brought to it via a secure container, whether that data resides on-prem or in any cloud, not just the IBM cloud. This opens up Watson to enterprises in regulated industries like finance, government, and healthcare, as well as to departments where governance and auditability are core requirements, such as legal and HR.
With the IBM Cloud Private for Data connector enabling Watson Anywhere, companies now have a net-new reason to consider IBM products and services in their data workflow. While Amazon and Azure dominate the cloud market, Watson’s AI and machine learning tools are generally easier to use out of the box. For companies who have made significant commitments to other cloud providers, Watson Anywhere represents an opportunity to bring more user-friendly data services to their data residing in non-IBM clouds.
Companies concerned about the “explainability” of machine learning models, particularly in regulated industries or for governance purposes, should consider using Watson OpenScale to monitor models in production. Because OpenScale can provide visibility into how models behave and make decisions, concerns about “black box models” can be mitigated with the ability to automatically audit a model, trace a given iteration, and explain how the model determined its outcomes. This transparency boosts the ability for line of business and executive users to understand what the model is doing from a business perspective, and justify subsequent actions based on that model’s output. For a company to depend on data-driven models, those models need to prove themselves trustworthy partners to those driving the business, and explainability bridges the gap between the model math and the business initiatives.
Finally, companies planning for long-term model usage need to consider how they plan to support model monitoring and maintenance. Longevity is a concern for machine learning models in production: model drift, the gradual loss of accuracy as production data diverges from the data a model was trained on, reflects real-world changes that your company needs to be aware of. How do companies ensure that model performance and accuracy are maintained over the long haul? What parameters determine when a model requires retraining, or should be taken out of production? Consistent monitoring and maintenance of operationalized models is key to their ongoing dependability.
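The monitoring-and-retraining question above can be made concrete with a few lines of code. This is an illustrative sketch, not a Watson OpenScale feature: the window size, accuracy threshold, and function names are assumptions chosen for the example.

```python
from collections import deque

def make_drift_monitor(window=200, threshold=0.85):
    """Track a model's rolling accuracy in production and flag possible retraining.
    The window size and accuracy threshold here are illustrative assumptions."""
    recent = deque(maxlen=window)

    def record(prediction, actual):
        recent.append(1 if prediction == actual else 0)
        accuracy = sum(recent) / len(recent)
        # Only raise the flag once a full window of observations has accumulated
        needs_review = len(recent) == window and accuracy < threshold
        return accuracy, needs_review

    return record

# Hypothetical usage with a tiny window so drift shows up quickly
record = make_drift_monitor(window=4, threshold=0.75)
record("approve", "approve")
record("approve", "approve")
record("approve", "deny")
accuracy, needs_review = record("approve", "deny")  # rolling accuracy falls to 0.5
```

In practice the threshold and window would be tuned per model, and the flag would feed a retraining or rollback workflow rather than a simple boolean, but the shape of the decision is the same: compare recent observed performance against an agreed-upon floor.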
Note: If you missed Part I of this blog series, catch up and read Part I: The Problem. This is part of a four-blog series exploring the psychology and brain science behind the potential for extended reality tools to disrupt corporate Learning & Development.
Four Dissociable Learning Systems in the Brain
The human brain comprises at least four distinct learning systems. A schematic of the learning systems is provided in the figure below.
The cognitive skills learning system in the brain has evolved to obtain knowledge and facts. Cognitive skill learning tends to involve processing text and is limited by the learner’s working memory span and attention span. It requires focus and mental repetition for long-term memory storage. The cognitive skills learning system encompasses the prefrontal cortex, hippocampus, and associated medial temporal lobe structures in the brain. The ultimate goal of this system is to transfer knowledge from short-term memory in the prefrontal cortex to long-term memory in the hippocampus and medial temporal lobes. Processing in this system is adversely affected by stress, pressure, and anxiety. This system is slow to develop, not reaching maturity until individuals are in their 20s, and begins to decline in middle age. This is another reason why xR tools that broadly engage more than just the cognitive system are so effective.
The behavioral skills learning system in the brain has evolved to learn behaviors. It is one thing to know what to do, but it is completely different (and mediated by a different brain region) to know how to do it. Behavioral skills are learned by doing. Processing in this system is optimized when behavior is interactive and is followed in real-time (literally within milliseconds) by corrective feedback. Real-time video role play or xR with real-time feedback are ideal for behavioral skills training. Behaviors that are rewarded will be more likely to occur in the future, and behaviors that are punished will be less likely to occur in the future. Interestingly, this system does not rely on working memory and attention. In fact, there is strong scientific evidence that “overthinking it” hinders behavioral skills learning. Behavioral skill learning is mediated by the basal ganglia and gradual, incremental dopamine-mediated changes in behavior. The ultimate goal of this system is to use incremental, dopamine-mediated learning in the basal ganglia to train direct neural connections between sensory regions and motor regions in the brain that drive behavior.
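The incremental, dopamine-mediated learning described above is often modeled with a simple reinforcement-learning update rule. The sketch below is a standard textbook illustration, not a claim about any particular training product; the learning rate and reward values are assumptions.

```python
def update_action_value(value, reward, learning_rate=0.1):
    # One incremental update: shift the estimate toward the received reward.
    # The prediction error (reward - value) is the quantity often linked to
    # dopamine signaling in the basal ganglia.
    return value + learning_rate * (reward - value)

value = 0.0  # initial estimate of a behavior's worth
for _ in range(50):
    value = update_action_value(value, reward=1.0)  # behavior rewarded each time
# value creeps toward 1.0 gradually rather than jumping there in a single step
```

The point of the sketch is the shape of the curve: each rewarded repetition nudges the value estimate only slightly, which is why behavioral skills demand many repetitions with immediate feedback rather than a single study session.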
The emotional learning system in the brain has evolved to facilitate the development of emotional and social context and a nuanced understanding of oneself, others, and emotionally charged situations that involve conflict, stress, pressure and anxiety. The detailed processing characteristics of this system are less well understood than the cognitive and behavioral skills learning systems, but socio-emotional processing strongly affects both cognitive and behavioral skills learning, facilitates situational awareness, and builds upon experiential learning systems described next. The critical brain regions are the amygdala and other limbic structures.
As Einstein so eloquently stated, experience is at the heart of all learning. It is also the key ingredient in xR training. The experiential learning system has evolved to represent the sensory aspects of an experience, whether visual, auditory, tactile or olfactory. This is distinct from the socio-emotional aspects but when combined with emotional processing a rich contextual representation emerges. Every experience is unique and has some emotional valence to it. In that sense, the emotional and experiential systems go hand in hand, and both add rich context to cognitive and behavioral skills learning. The critical brain regions associated with experiential learning differ as a function of the sensory input. Visual representations are formed in the occipital lobes and auditory representations are formed in the temporal lobes. Tactile representations are formed in the parietal lobes and olfactory information is represented in the piriform cortex and olfactory bulb.
If you enjoyed this blog, please share it on your social networks! And if you would like to learn more about how to use this information to support your corporate learning efforts, please email us at email@example.com to speak with me.
As part of Amalgam Insights’ coverage of the Technology Expense Management market, we provide the following guidance to sourcing, procurement, and operations professionals seeking to better understand how to manage technology expenses.
In immature or monopoly markets where one dominant vendor provides technology services, vendor management challenges are limited. Although buyers can potentially purchase services outside of the corporate umbrella, enterprises can typically work both with the vendor and with corporate compliance efforts to consolidate spend. However, vendor management becomes increasingly challenging in a world where multiple vendors provide similar but not equivalent technology services. To effectively optimize services across multiple vendors, organizations must be able to manage all relevant spend in a single location.
In Telecom Expense Management, this practice has been a standard for over a decade as companies manage AT&T, Vodafone, Verizon, and many other telecom carriers with a single solution. For Software-as-a-Service, a number of solutions are starting to emerge that solve this challenge. And with Infrastructure-as-a-Service, this challenge is only starting to emerge in earnest as Microsoft Azure and Google Cloud Platform rise up as credible competitors to Amazon Web Services.
To effectively manage sourcing and vendor management in complex technology categories, Amalgam suggests starting with the following contractual steps:
Align vendor and internal Service Level Agreements. There is no reason that any vendor should provide a lower level of service than the IT department has committed to the enterprise and other commercial partners.
Define bulk and tiered discounts for all major subcategories of spend within a vendor contract. Vendors are typically willing to discount for any usage category where a business buys in bulk, but there is no reason for them to simply hand over discounts without being asked. This step sounds simple, but typically requires a basic understanding of service and usage categories to identify relevant categories.
Avoid “optional” fees. For instance, on telecom bills, there are a number of carrier fees that are included in the taxes, surcharges, and fees part of the bill. These charges are negotiable and will vary from vendor to vendor. Ensure that the enterprise is negotiating all fees that are carrier-based, rather than assuming that these fees are all mandatory government taxes or surcharges which can’t be avoided.
Renegotiate contracts as necessary, not just based on your “scheduled” contract dates. There is no need to constantly renegotiate contracts just for the sake of getting every last dime, but companies should seek to renegotiate if significantly increasing the size of their purchase or significantly changing the shape of their technology portfolio. For instance, an Amazon contract may not be well-suited for a significant usage increase of a service due to business demand.
Embed contract agreements and terms into service order invoice processing and service management areas. It is not uncommon to see elegant contract negotiations go to waste because the terms are not enforced during operational, financial, or service management. Structure the business relationship to support the contract, then place relevant contract terms within other processes and management solutions so that these terms are readily available to all stakeholders, not just the procurement team.
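To make the bulk and tiered discount step above concrete, here is a minimal sketch of how tiered pricing changes total spend. All tier boundaries and rates below are hypothetical, not actual carrier pricing.

```python
def tiered_cost(units, tiers):
    """Total spend under tiered pricing.
    tiers: list of (upper_bound, unit_price) pairs ordered by bound;
    every rate here is a hypothetical example, not a quoted price."""
    cost, prev_bound = 0.0, 0
    for bound, price in tiers:
        if units <= prev_bound:
            break
        billable = min(units, bound) - prev_bound  # usage falling in this tier
        cost += billable * price
        prev_bound = bound
    return cost

# Hypothetical data tiers: first 1,000 GB at $5/GB, next 4,000 at $4, beyond at $3
tiers = [(1000, 5.00), (5000, 4.00), (float("inf"), 3.00)]
flat_cost = 6000 * 5.00            # no negotiated tiers: $30,000
tiered = tiered_cost(6000, tiers)  # with negotiated tiers: $24,000
```

Even modest negotiated tiers compound quickly at enterprise volumes, which is why identifying the right service and usage subcategories before the negotiation matters.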
Effective vendor and contract management is an important starting point to support each subsequent element of the technology lifecycle to enforce effective management at scale. In future blogs, we will cover best practices for inventory, invoice, service order, and lifecycle management across telecom, mobility, network, SaaS, and IaaS spend.
Key Stakeholders: Chief Learning Officers, Chief Human Resource Officers, Chief People Officers, Chief Talent Officers, Learning & Development Directors and Managers, Corporate Trainers, Content and Learning Product Managers, Hiring Directors, Hiring Managers, Human Resource Directors, Human Resource Managers.
Why It Matters: A major goal of corporate Learning and Development (L&D) is to build scalable tools that facilitate hard and behavioral (soft and technical) skills mastery. Mastery is most effectively achieved through experiential learning and repetition that engages multiple learning systems in the brain in synchrony and facilitates the development of situational awareness.
Top Takeaway: Compared to traditional learning tools, extended reality (xR) technologies, such as virtual and augmented reality, speed the development of mastery and expertise through repeated experiential learning that broadly engages multiple learning systems in the brain in synchrony and is scalable.
“Learning is an experience. Everything else is just information” – Albert Einstein
This is a powerful quote from Albert Einstein and is supported by learning science—the marriage of psychology and brain science. As I elaborate below, experiential learning is effective because it engages multiple learning systems in the brain in synchrony.
Taken a step further, if one can obtain multiple related, but distinct experiences, then one can begin to develop mastery, expertise, and broad-based situational awareness. It is one thing to have knowledge and behavioral skills, but it is another to be able to apply that knowledge and behavior under time or social pressure, when you are well or poorly rested, or with a team that is new or familiar to you.
When attempting to master hard skills, such as the ability to identify the signs of harassment in the workplace, it is important to experience multiple signs of harassment and from different points of view, such as that of the harasser, the target of the harassment, or a bystander.
When attempting to master people skills, such as the verbal and non-verbal communication skills needed to be an effective leader, it is important to gain experience with multiple verbal and non-verbal skills and from different points of view, such as that of the manager or employee, an interview or performance review setting, or a large contentious team meeting. When attempting to master a technical skill, such as the ability to maintain and run a large piece of equipment, it is important to gain experience with multiple aspects of the equipment and under different scenarios, such as routine maintenance, emergency maintenance under time pressure, or a situation in which you are training or supervising a new technician. In each of these examples, multiple related but distinct experiences provide not only the opportunity for learning, but also for developing situational awareness, a hallmark of mastery and expertise.
The need for experience-based learning and repetition to build mastery and expertise provides the foundational principles for the effectiveness of extended reality (xR) technologies in Learning and Development (L&D) in general, and corporate L&D in particular. xR technologies include virtual reality (VR) in which the learner is immersed in a completely new virtual environment and augmented reality (AR) in which the learner is in a combined real and virtual environment where digital information is overlaid onto the learner’s field of view. These technologies are grounded in experience-based learning and offer repeatable, broad-based practice that builds situational awareness and is scalable.
This report focuses on a learning science evaluation of the potential for xR technologies to disrupt corporate L&D. xR technologies have the potential to improve the quality and quantity of training, and to speed learning and enhance retention in all aspects of corporate learning. This follows because xR technologies broadly engage multiple learning systems in the brain in synchrony, especially experiential learning systems, and allow training to be repeated many times to enhance situational awareness and facilitate the development of mastery and expertise.
The Importance of Training in the Corporate Sector
The pace of change in the corporate sector is such that high-quality training is a necessity. Employees must constantly obtain new hard skills, whether learning rules and regulations in the workplace, definitions of appropriate and inappropriate behavior, or how to interpret data science applications. Employees must continually acquire and refine their people (aka soft) skills to be more effective communicators, collaborators, and leaders. Whether it is growing concerns with automation, the extensive data suggesting that diversity increases revenue and workplace harmony, the #metoo movement, or, more likely, the combination of all three, employees and management must acknowledge the need for effective people skills training in organizations large and small.
Employees are constantly gaining new behavioral and technical skills, such as learning new digital technologies, upskilling on an existing piece of software, or learning to use and maintain a piece of equipment. With each of these classes of skills (hard, people, and technical), employees must not only become proficient; the goal is to obtain mastery and expertise. Mastery and expertise lead to situational awareness: the ability to perform effectively under any condition, whether routine or non-routine involving stress, pressure, or anxiety, and the ability to anticipate the future. Whereas one can have a catalog of facts and a repertoire of behaviors, in the end one has to extract the appropriate information and engage the appropriate behavior in each distinct situation. Experience and repetition drive hard, people, and technical skills toward situational awareness.
In Part 2, we’ll explore the brain science in greater detail and go over four distinct learning systems that affect learning.
On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Amazon, Anaconda, Cambridge Semantics, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, Elastic, Google, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta, TROVE.
- H2O.ai Collaborates with Alteryx to Advance Data Science Workflows
- H2O.ai Advances Leading Data Science and Machine Learning Platforms
- H2O.ai and Kx Partnership Provides Faster Insights on Time Series Data
- H2O.ai Teams Up with Intel to Drive an AI Transformation in the Enterprise
At H2O World in San Francisco, H2O.ai made several important announcements. Partnerships with Alteryx, Kx, and Intel will extend Driverless AI’s accessibility, capabilities, and speed, while improvements to Driverless AI, H2O, Sparkling Water, and AutoML focused on expanding support for more algorithms and heavier workloads. Amalgam Insights covered H2O.ai’s H2O World announcements.
At IBM Think in San Francisco, IBM announced the expansion of Watson’s availability “anywhere” – on-prem, and in any cloud configuration, whether private or public, singular or multi-cloud. Data no longer has to be hosted on the IBM Cloud to use Watson on it – instead, a connector from IBM Cloud Private for Data permits organizations to bring various Watson services to data that cannot be moved for privacy and security reasons. Update: Amalgam Insights now has a more in-depth evaluation of IBM Watson Anywhere.
Databricks’ $250 Million Funding Supports Explosive Growth and Global Demand for Unified Analytics; Brings Valuation to $2.75 Billion
Databricks has raised $250M in a Series E funding round, bringing its total funding to just shy of $500M. The funding round raises Databricks’ valuation to $2.75B in advance of a possible IPO. Microsoft joins this funding round, reflecting its continuing commitment to the Azure Databricks collaboration between the two companies. This continued increase in valuation and financial commitment demonstrates that investors are satisfied with Databricks’ vision and execution.