
Revealing the Learning Science for Improving IT Onboarding

Key Stakeholders: IT Managers, IT Directors, Chief Information Officers, Chief Technology Officers, Chief Digital Officers, IT Governance Managers, and IT Project and Portfolio Managers.

Top Takeaways: Information technology is innovating at an amazing pace. These technologies hold the promise of increased effectiveness, efficiency, and profits. Unfortunately, the training tools developed to onboard users are often constructed as an afterthought and are ineffective. Technologies with great potential are underutilized because users are poorly trained on them. Learning scientists can remedy this problem and can help IT professionals build effective training tools. Onboarding will become more efficient, and profits will follow in short order.

IT Onboarding Has an Adoption Problem

In my lifetime I have seen a number of amazing new technologies emerge. I remember the first handheld calculators and the first “flip” phones. Fast forward to 2018, and the majority of Americans now carry some of the most sophisticated technology in existence in the palms of their hands.

In the corporate world technology is everywhere. Old technologies are updated regularly, and innovative, disruptive new technologies are created every day. With every update or new technology comes a major challenge that is being poorly addressed.

How do we get users to adopt the technology, and to use it efficiently and effectively?

This is a training problem. As a learning scientist, I find it remarkable that so much time and money is allocated to developing these amazing technologies while training is treated as an afterthought. This is a recipe for poor adoption, underutilization of powerful tools, and reduced profits.

The IT Onboarding Issue

I experienced this dozens of times in my 25-year career as a university professor. I routinely received emails outlining in excruciating detail a new set of rules, regulations, or policies. The email would be laced with threats for non-compliance but offer poorly designed instructions on how to achieve compliance. The usual approach was to ignore the instructions in the email and instead use the grapevine to identify which buttons to click, and in what order, to achieve compliance. I also received emails explaining (again, usually in excruciating detail) how a new version of existing software had changed, or how some new technology was going to be unveiled that would replace a tool I currently used. I was provided with a schedule of “training workshops” to attend, or a link to some unintelligible “training manual.” I either found a way to use the old technology in secret, made a formal request to continue using the “old” technology to avoid having to learn the new one, or asked the “tech” guy to come to my office and show me how to do the five things I needed the new technology to do. I would take copious notes and save them for future reference.

If I got an email detailing a new technology that did not affect my current system, I deleted it with relief.

My story is common. It suggests a broken and ineffective system, and it all centers on a lack of quality training.

This is not restricted to the university setting. I have a colleague who builds mobile apps for large pharmacy chains. The mobile app reminds patients to refill prescriptions, lets pharmacists adjust prescriptions, and offers several other features. It is a great offering and adds value for his clients. As with most IT, he rolls out a new release every few months. His main struggles are determining what features to improve, what new features to add, and how to effectively onboard users.

He gets some guidance on how to improve an existing feature or what new features to add, but he often complains that these suggestions sound more like a client’s personal preference than objectively determined customer needs. With respect to training, he admits that the training manual is an afterthought, and he is frustrated because his team is constantly fielding questions that are answered in the manual.

The result: “Improved” or new features no one wants, and difficulty training users of the app.

Experts in IT development should not be expected to have the skills to build an effective training manual, but they do need to understand that onboarding is critical, and effective onboarding requires an effective training tool.

Key Recommendations for Improving IT Onboarding

This is where learning science should be leveraged. An extensive body of psychological and brain science research (much of it my own) conducted over the past several decades provides specific guidelines on how to train users effectively. Here are some suggestions, derived from learning science, for improving IT training and development.

Recommendation 1 – Survey Your Users: All too often technology is “improved” or a new feature is released and few clients see its value. Users know what they want. They know what problems they need to solve and want tools for solving them. Users should be surveyed to determine what features they like, what features could be improved, and what problems they would like solved with a new feature. The simplest, cheapest, and most effective way to do this is to ask them via an objective survey. Don’t just ask the CEO; ask the users.
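
To make this concrete, here is a minimal sketch of how survey responses might be tallied to rank needs objectively; the response data and category names are hypothetical.

```python
from collections import Counter

# Hypothetical survey responses: each user lists the problems they most
# want the technology to solve (free-form answers mapped to a controlled
# vocabulary during survey cleanup).
responses = [
    ["refill reminders", "insurance lookup"],
    ["refill reminders", "dose scheduling"],
    ["insurance lookup", "refill reminders"],
]

# Tally how many users voiced each need.
demand = Counter(need for user in responses for need in user)

# Rank needs by demand so the roadmap reflects users, not just the CEO.
for need, votes in demand.most_common():
    print(f"{need}: requested by {votes} of {len(responses)} users")
```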

Recommendation 2 – Develop Step-by-Step Instructions for Each Feature and Problem it Solves: Although your new technology may have a large number of features and can solve a large number of problems, most technology users take advantage of a small handful of features to solve a small number of problems. Step-by-step instructions should be developed that train the user on each specific feature and how it helps them solve a specific problem. If I only use five features to solve two main problems, I should be able to train on those features within the context of these two problems. This approach will be fast and effective.

Recommendation 3 – Training Content Should be Grounded in Microlearning: Microlearning is an approach to training that leverages the brain science of learning. Attention spans are short and working memory capacity is limited. People learn best when training content comes in small chunks (5–10 minutes), each focused on a single coherent concept. If you need to utilize two independent features to solve a specific problem, training will be most effective if you train on each feature separately, then train on how to use the two features in concert to solve the problem.

Recommendation 4 – Develop Training Materials in Multiple Mediums: Some learners prefer to read, some prefer to watch videos, and some prefer to listen. Training materials should be available in as many of these mediums as possible.

Recommendation 5 – Incorporate Knowledge Checks: One of the best ways to train the brain is to test our knowledge of material we have learned. Testing a learner’s knowledge of the steps needed to solve a problem, or of the features of your app, requires cognitive effort to retrieve that information from memory. This process strengthens information already learned and identifies areas of weakness in the learner’s knowledge, cueing them to revisit the training material.
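
Here is a minimal sketch of such a knowledge check, assuming a simple question-and-answer format; the features, questions, and answers are hypothetical.

```python
# Hypothetical knowledge check: each app feature maps to a prompt and
# its correct answer, so a wrong answer flags which module to revisit.
questions = {
    "export report": ("Which menu holds the export option?", "file"),
    "share dashboard": ("Which button shares a dashboard?", "share"),
}

def run_knowledge_check(answers):
    """Compare a learner's answers to the key; return features to revisit."""
    weak_areas = []
    for feature, (_prompt, correct) in questions.items():
        if answers.get(feature, "").strip().lower() != correct:
            weak_areas.append(feature)  # retrieval failed: cue a review
    return weak_areas

# A learner who misremembers one step is cued back to that module only.
print(run_knowledge_check({"export report": "File", "share dashboard": "view"}))
# -> ['share dashboard']
```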

How to Implement These Recommendations & Representative Vendors

Now that we have identified the problem and offered some solutions, key stakeholders need to know how to implement these recommendations. Several avenues are available: key stakeholders can work with internal learning and development teams, as well as with corporate communications from an internal marketing perspective. We at Amalgam provide guidance on the best solutions and how to implement them.

Solutions are available from companies such as: Accord LMS, Agylia Group, Axonify, Cornerstone, Crossknowledge, Degreed, Docebo, EdCast, Expertus, Fivel, GamEffective, Grovo, Litmos, Lumesse, MindTickle, Mzinga, PageUp, Pathgather, PeopleFluent, Qstream, Reflektive, Saba, Salesforce, SAP SuccessFactors, Skillsoft, Talentquest, TalentSoft, Thought Industries, Totara Learn, and Zunos. Over the coming months, I will be researching each of these and other offerings in greater detail and will be writing about how they support IT education.


Cloud Foundry Provides an Open Source Vision for Enterprise Software Development

From April 18-20, Amalgam Insights attended Cloud Foundry Summit 2018 in our hometown of Boston, MA. Both Research Fellow Tom Petrocelli and Founder Hyoun Park attended as we explored the current positioning of Cloud Foundry as an application development platform in light of the ever-changing world of technology. The timing of Cloud Foundry Summit this year coincided with Pivotal’s IPO, which made this Summit especially interesting. Through our attendance of keynote sessions, panels, and the analyst program, we took away several key lessons.

First, the conflict between Cloud Foundry and Kubernetes is disappearing as each solution has found its rightful place in the DevOps world. My colleague Tom Petrocelli goes into more detail in explaining how the perceived conflict between Cloud Foundry’s initial containerization efforts and Kubernetes is not justified. This Summit made clear that Cloud Foundry is a practical solution focused on supporting enterprise-grade applications, one that abstracts away both infrastructure and scale. Amalgam takes the stance that this conflict in software abstraction should never have existed in the first place. Practitioners have been creating an artificial conflict that the technology solutions themselves are seeking to clarify and ameliorate.

At the event, Cloud Foundry also announced heavy-hitting partners. Alibaba Cloud is now a Gold member of the Cloud Foundry Foundation. With this announcement, Cloud Foundry goes to China and joins the fastest-growing cloud in the world. This announcement mirrors Red Hat’s announcement of Alibaba becoming a Red Hat Certified Cloud and Service Provider last October and sets up an interesting showdown in China as developers choose between Cloud Foundry and OpenStack to build China’s future of software.

In addition, the Cloud Foundry Foundation announced Cloud.gov as the 8th certified provider of Cloud Foundry. This step forward will allow federal agencies to use a certified and FedRAMP Authorized Cloud Foundry platform. The importance of this announcement was emphasized in an Air Force-led session on Project Kessel Run, which is focused on agile software for the Air Force. This session showed how Cloud Foundry accelerated the ability to execute in an environment where the average project took over 8 years to complete due to challenges such as the need to ask for Congressional approval on a yearly basis. By using Cloud Foundry, the Air Force has identified opportunities to build applications in a couple of months and get these tools directly to soldiers, creating a culture that is #AgileAF (which obviously stands for “Agile Air Force”). The role of Cloud Foundry in accelerating one of the most challenging and heavily governed application development environments in the world accentuated its value in effectively enabling the vaunted goal of digital transformation.

From a technical perspective, the announcement that really grabbed our attention was the demonstration of cfdev, which provides a full Cloud Foundry development experience on a local laptop or workstation using native hypervisors. This will make adoption far easier for developers seeking to quickly develop and debug applications, as well as to build test and sandbox environments for Cloud Foundry.

Overall, this event demonstrated the evolution of Cloud Foundry. The themes of Cloud Foundry as a government enabler and its move to China were front and center throughout the event. Combined with the Pivotal IPO and Cloud Foundry’s ability to work with Kubernetes, it is hard to deny Cloud Foundry’s progress as an enterprise-grade, global solution for accelerating application development and as a partner to other appropriate technologies.


Lynne Baer: Clarifying Data Science Platforms for Business

My name is Lynne Baer, and I’ll be covering the world of data science software for Amalgam Insights. I’ll investigate data science platforms and apps to solve the puzzle of getting the right tools to the right people and organizations.

“Data science” is on the tip of every executive’s tongue right now. The idea that new business initiatives (and improvements to existing ones) can be found in the data a company is already collecting is compelling. Perhaps your organization has already dipped its toes in the data discovery and analysis waters – your employees may be managing your company’s data in Informatica, or performing statistical analysis in Statistica, or experimenting with Tableau to transform data into visualizations.

But what is a Data Science Platform? Right now, if you’re looking to buy software for your company to do data science-related tasks, it’s difficult to know which applications will actually suit your needs. Do you already have a data workflow you’d like to build on, or are you looking to the structure of an end-to-end platform to set your data science initiative up for success? How do you coordinate a team of data scientists to take better advantage of the resources they’ve already created? Do you have coders in-house who can work with a platform designed for people writing in Python, R, Scala, or Julia? Are there more user-friendly tools out there your company can use if you don’t? What do you do if some of your data requires tighter security protocols around it? Or if some of your data models themselves are proprietary and/or confidential?

All of these questions are part and parcel of the big one: How can companies tell what makes a good data science platform for their needs before investing time and money? Are traditional enterprise software vendors like IBM, Microsoft, SAP, and SAS dependable in this space? What about companies like Alteryx, H2O.ai, KNIME, and RapidMiner? Other popular platforms under consideration should also include Anaconda, Angoss (recently acquired by Datawatch), Domino, Databricks, Dataiku, MapR, Mathworks, Teradata, and TIBCO. And then there are new startups like Sentenai, focused on streaming sensor data, and slightly more established companies like Cloudera looking to expand from their existing offerings.

Over the next several months, I’ll be digging deeply to answer these questions, speaking with vendors, users, and investors in the data science market. I would love to speak with you, and I look forward to continuing this discussion. And if you’ll be at Alteryx Inspire in June, I’ll see you there.


The Software Abstraction Disconnect is Silly

Tom Petrocelli, Amalgam Insights Research Fellow

Over the past two weeks, I’ve been to two conferences, each run by an open source community. The first was the CloudFoundry Summit in Boston, followed by KubeCon+CloudNativeCon Europe 2018 in Copenhagen. At both, I found passionate and vibrant communities of sysops, developers, and companies. For those unfamiliar with CloudFoundry and Kubernetes, they are open source technologies that abstract software infrastructure to make it easier for developers and sysops to deliver applications more quickly.

Both serve similar communities and have generally similar goals. There is some overlap – CloudFoundry has its own container and container orchestration capability – but the two technologies are mostly complementary. It is possible, for example, to deploy CloudFoundry as a Kubernetes cluster and to use CloudFoundry to deploy Kubernetes. I met with IT professionals who are doing one or both of these. The same is true for OpenStack and CloudFoundry (and Kubernetes, for that matter). OpenStack is used to abstract the hardware infrastructure, in effect creating a cloud within a data center. It is a tool used by sysops to provision hardware as easily scalable resources, creating a private cloud. So, like CloudFoundry does for software, OpenStack helps to manage resources more easily so that a sysop doesn’t have to do everything by hand. CloudFoundry and OpenStack are clearly complementary: sysops use OpenStack to create resources in the form of a private cloud; developers then use CloudFoundry to pull together private and public cloud resources into a platform they deploy applications to. Kubernetes can be found in any of those places.

Fake News, Fake Controversies

Why, then, is there constant tension between the communities and adopters of these technologies? It’s as if carpenters had hammer people and saw people who argued over which was better. According to my carpenter friends, they don’t. The foundations and vendors avoid this type of talk, but these kinds of discussions are happening at the practitioner and contributor level all the time. During KubeCon+CloudNativeCon Europe 2018, I saw a number of tweets that, in essence, asked: “Why is Cloud Foundry Executive Director Abby Kearns speaking at KubeCon?” They questioned what one had to do with the other. Why not question what peanut butter and jelly have to do with each other?

Since each of these open source projects (and the products based on them) has a different place in a modern hybrid cloud infrastructure, how is it that very smart people are being so short-sighted? Clearly, there is a problem in these communities that limits their point of view. One theory lies in what it takes to proselytize these projects within an organization and the wider community. To put it succinctly, to get corporate buy-in and widespread adoption, community members have to become strongly focused on their specific project. So focused that some put on blinders and can no longer see the big picture. In fact, in order to sell the world on something that seems radical at first, you trade real vision for tunnel vision.

People become invested in what they do, and that’s good for these types of community-developed technologies. They require a commitment to a project that can’t be driven by any one company and may not pan out. It turns toxic when the separate communities become so ensconced in their own little corner of the tech world that they can’t see the big picture. The very nature of these projects defies an overriding authority that demands that everyone get along, so they don’t always.

It’s time to get some perspective, to see the big picture. We have an embarrassment of technology abstraction riches. It’s time to look up from individual projects and see the wider world. Your organizations will love you for it.


KubeCon+CloudNativeCon Europe 2018 Demonstrates The Breadth and Width of Kubernetes


Standing in the main expo hall of KubeCon+CloudNativeCon Europe 2018 in Copenhagen, the richness of the Kubernetes ecosystem is readily apparent. There are booths everywhere, addressing all the infrastructure needs for an enterprise cluster. There are meetings everywhere for the open source projects that make up the Kubernetes and Cloud Native base of technology. The keynotes are full. What was a 500-person conference in 2015 is now, 3 years later, a 4,300-person conference, even though it’s not in one of the hotbeds of American technology such as San Francisco or New York City.

What is amazing is how much Kubernetes has grown in such a short amount of time. It was only a little more than a year ago that Docker released its Kubernetes competitor, Swarm. While Swarm still exists, Docker also supports, and is arguably betting its future on, Kubernetes.

Kubernetes came out of Google, but that doesn’t really explain why it expanded like the early universe after the Big Bang. Google is not the market leader in the cloud space – it’s one of the top vendors but not the top vendor – and wouldn’t have provided enough market pull to drive the Kubernetes engine this hot. Google is also not a major enterprise infrastructure software vendor the way IBM, Microsoft, or even Red Hat and Canonical are.

Kubernetes benefited from the first-mover effect. It was early to market with container orchestration, fully open source, and extensively tested in Google’s own environment. Docker Swarm, on the other hand, was too closely tied to Docker, the company, to appease the open source gods.

Now, Kubernetes finds itself like a new college graduate: all grown up but needing to prepare for the real world. The basics are all in place and it is mature, but an enormous amount of refinement remains, and holes need to be filled before it can be a common part of every enterprise software infrastructure. KubeCon+CloudNativeCon shows that this work is well underway. The focus now is on security, monitoring, network improvement, and scalability. There doesn’t seem to be a lot of concern about stability or basic functionality.

Kubernetes has eaten the container world and didn’t get indigestion. That’s rare and wonderful.


Market Milestone: ServiceNow Buys VendorHawk and SaaS Management Comes of Age

Industry: Software Asset Management

Key Stakeholders: CIO, CFO, Chief Digital Officer, Chief Technology Officer, Chief Mobility Officer, IT Asset Directors and Managers, Procurement Directors and Managers, Accounting Directors and Managers
Why It Matters: Software-as-a-Service (SaaS) is now a strategic IT component. As enterprise SaaS doubles in market size over the next three years, this complex spend category will continue to expand beyond the ability to manage it manually.
Top Takeaways: With this acquisition, ServiceNow will have a cutting-edge & converged Software Asset Management solution for both SaaS and on-premises applications in 2019. Enterprise organizations managing over $25,000 a month in SaaS spend should consider an enterprise SaaS vendor management solution to optimize licenses, de-duplicate vendor categories, and gain enterprise-grade governance.

With ServiceNow’s acquisition of VendorHawk, the era of SaaS Vendor Management is emerging.

ServiceNow Acquires VendorHawk

On April 25th, 2018, ServiceNow announced its acquisition of SaaS Vendor Management solution VendorHawk in an all-cash transaction scheduled to close in April. This acquisition highlights the increasingly strategic role of SaaS from an IT service management perspective and validates the need for Software Asset Management solutions to support SaaS. It also continues a string of acquisitions ServiceNow has made over the past year, including:

• Qlue, an artificial intelligence framework for customer service
• Telepathy, a design firm focused on massive adoption of applications
• SkyGiraffe, a no-code mobile app development platform used to make all ServiceNow applications mobile-friendly

The VendorHawk acquisition falls in line with these acquisitions in that VendorHawk will help enterprises support the widespread adoption of SaaS.
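
To ground the takeaway above, here is a minimal sketch of the license-utilization and vendor-overlap math a SaaS vendor management solution automates; every vendor name, seat count, and price below is hypothetical.

```python
from collections import Counter

# Hypothetical SaaS portfolio: (vendor, category, seats purchased,
# seats active in the last 90 days, price per seat per month).
portfolio = [
    ("VendorA", "e-signature", 500, 320, 20.0),
    ("VendorB", "e-signature", 200, 150, 25.0),
    ("VendorC", "chat",        400, 390, 8.0),
]

# Idle-license waste: seats paid for but not used.
waste = sum((bought - active) * price
            for _vendor, _cat, bought, active, price in portfolio)
print(f"Idle-license spend: ${waste:,.2f} per month")

# De-duplication candidates: categories served by more than one vendor.
categories = Counter(category for _vendor, category, *_ in portfolio)
overlaps = [c for c, n in categories.items() if n > 1]
print("Overlapping categories to consolidate:", overlaps)
```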


The Adoption Gap in Learning & Development: How Learning Science Can Bridge the Divide


Key Stakeholders: Chief Learning Officers, Chief Human Resource Officers, Learning and Development Directors and Managers, Corporate Trainers, Content and Learning Product Managers

Key Takeaways: L&D vendors offer a vast array of innovative functionality and technology for their clients. Unfortunately, clients are overwhelmed by the breadth of offerings and want guidance on what technology to use when, all in the interest of increasing adoption and the effectiveness of learning. An extensive body of learning science research exists that should be leveraged to provide clients with this much-needed guidance. This approach will reduce the existing adoption gap and improve performance.


Managing IT Complexity through Infrastructure as Code (IaC)

Tom Petrocelli, Amalgam Insights Contributing Analyst

(Note: This blog is an excerpt from Tom Petrocelli’s current report: Infrastructure as Code: Managing Hybrid Infrastructure at Scale)

Key Stakeholders: CIO, Sysops, System Admins, Network Admins, Storage Admins, IT Operations Managers

Why It Matters: New software architectures continue to add complexity to IT infrastructure management. At the same time, organizations expect IT to deploy applications faster. New tools are needed for IT operations to perform in this environment.

Top Takeaways: Infrastructure as Code, or IaC, offers a path to faster and less error-prone management of new software infrastructure at enterprise scale. IaC abstracts the myriad ways IT professionals interact with systems into simple, plain-text code files.

Infrastructure as Code

Today, IT continues to virtualize infrastructure even further with container clusters. Containers often fulfill the same role as a server, even though they do not require an entire stack, including an operating system. Like a server, they are a unit of computing that houses services that comprise an application. Unlike a server, containers often contain a single-purpose service called a microservice. Microservices architectures lead to a large number of containers, within virtual servers, running on physical or cloud servers. For large enterprises, this new model expands the number of virtual, physical, and cloud devices under management, adding complexity to the infrastructure.

Managing tens of thousands of heterogeneous nodes, where only a few thousand fairly homogeneous ones existed before, represents a massive challenge to IT. This is further compounded by the presence of cloud services (often more than one) alongside on-premises servers. To add to the challenge, new development methodologies have increased the pace of modern software development, which constantly alters the IT infrastructure.

To cope with this greatly enlarged management burden, IT managers and professionals are increasingly turning to Infrastructure as Code (IaC). IaC is part management technique and part toolset. The philosophy behind IaC is to write code that defines the desired state of the infrastructure. While this could be carried out using shell scripts or homegrown programs, IT practitioners are increasingly turning to purpose-built tools that allow infrastructure to be defined as a program (i.e., code) and then executed by automation servers, often with the help of local commands and agents on the physical and virtual servers, or service calls to cloud services.
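
To illustrate that philosophy without tying it to any particular vendor’s tool, here is a minimal sketch of desired-state reconciliation in plain Python; the resource model and actions are hypothetical stand-ins for what real IaC tools do through agents and service calls.

```python
# Desired state, declared as data -- the "code" in Infrastructure as Code.
desired = {
    "web-01": {"packages": {"nginx"}, "port": 443},
    "web-02": {"packages": {"nginx"}, "port": 443},
}

# Observed state, as an agent or cloud service call might report it.
observed = {
    "web-01": {"packages": {"nginx"}, "port": 443},  # already converged
    "web-02": {"packages": set(), "port": 80},       # has drifted
}

def plan(desired, observed):
    """Compute only the actions needed to converge observed state on desired."""
    actions = []
    for host, spec in desired.items():
        current = observed.get(host, {"packages": set(), "port": None})
        for pkg in spec["packages"] - current["packages"]:
            actions.append(f"{host}: install {pkg}")
        if current["port"] != spec["port"]:
            actions.append(f"{host}: move port {current['port']} -> {spec['port']}")
    return actions

# Idempotent by construction: a converged host yields no actions.
for action in plan(desired, observed):
    print(action)
```

Running the plan against an already converged fleet produces no actions, which is the property that makes declarative IaC safer than imperative shell scripts.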

Key Infrastructure as Code Functions and Challenges

While provisioning, configuration, and code deployment may be the most common functions of IaC, it is hardly limited to such a small set of capabilities. IaC can accomplish most of what sysops, network administrators, and other IT operations professionals have to do by hand, via shell scripts, or through management consoles; the full report details these capabilities.

While there are some clear advantages to DevOps, there are also some issues with the approach. Some of the problems are technical, but many are social or managerial. A mixture of IT silo politics and skill deficits may lead to a toxic DevOps team environment that no amount of technology can overcome. However, problems associated with IaC itself are relatively straightforward and can be managed with training, support, and planning; the standout issues for IaC are detailed in the full report.

Key IaC Vendors

There are a number of vendors offering products in the IaC space. While they all offer the basic functions of provisioning, updating, and configuration, many have a number of other features as well. No product offers every feature, so it is important to choose a vendor based on the automation priorities of the organization.



Conclusion

As enterprise IT infrastructure has evolved from a single, simple mainframe to highly distributed, hybrid cloud, multi-cloud, microservices architectures, managing a data center has become terribly complex. Along the way, the tools available to sysops and admins have likewise evolved into entire management platforms, the so-called single pane of glass.


Skillsoft Perspectives 2018 – A Vision For The Future of Corporate Learning

Key Stakeholders: CHRO, Chief Learning Officers, Talent Acquisition Directors and Managers, Learning & Development Directors, Training Managers, Corporate Education Managers, LMS Managers

On April 11th – 13th, Amalgam Insights attended Skillsoft Perspectives, where analysts Hyoun Park and W. Todd Maddox, Ph.D., looked at the latest advances in Skillsoft’s platforms, key narratives from Skillsoft’s customers and partners, and opportunities for pushing the future of corporate learning forward. Amalgam spoke with key executives and customers while attending keynote sessions and presenting as well.


Cloudera Analyst Conference Makes The Case for Analytic & AI Insights at Scale

On April 9th and 10th, Amalgam Insights attended Cloudera’s fifth Industry Analyst and Influencer Conference (which I’ll self-servingly refer to as the Analyst Conference, since I attended as an industry analyst) in Santa Monica. Cloudera sought to make the case that it is evolving beyond the market offerings it is currently best known for – a Hadoop distribution and commercial data lake – to become a machine learning and analytics platform. In doing so, Cloudera was extremely self-aware of its need to progress beyond the role of multi-petabyte storage at scale to a machine learning solution.