
Tangoe Makes Its Case as a Change Agent at Tangoe LIVE 2018


Key Stakeholders: CIO, CFO, Controllers, Comptrollers, Accounting Directors and Managers, IT Finance Directors and Managers, IT Expense Management Directors and Managers, Telecom Expense Management Directors and Managers, Enterprise Mobility Management Directors and Managers, Digital Transformation Managers, Internet of Things Directors and Managers

On May 21st and 22nd, Amalgam Insights attended and presented at Tangoe LIVE 2018, the global user conference for Tangoe. Tangoe is the largest technology expense management vendor with over $38 billion in technology spend under management.

Tangoe is in an interesting liminal position: it has achieved market dominance in its core area of telecom expense management, where Amalgam estimates that Tangoe manages as much technology spend as its five largest competitors combined (Cass, Calero, Cimpl, MDSL, and Sakon). At the same time, Tangoe is being asked by its 1200+ clients how to better manage a wide variety of spend categories, including Software-as-a-Service, Infrastructure-as-a-Service, IoT, managed print, contingent labor, and outsourced contact centers. Because Tangoe occupies this middle space, Amalgam finds it an interesting vendor to watch. In addition, Tangoe’s average client is a 10,000-employee company with roughly $2 billion in revenue, placing Tangoe’s focus squarely on the large, global enterprise.

In this context, Amalgam analyzed both Tangoe’s themes and its presentations. First, Tangoe LIVE started with a theme of “Work Smarter” with the goal of helping clients better understand changes in telecom, mobility, cloud, IoT, and other key IT areas.

Tangoe Themes and Keynotes

CEO Bob Irwin kicked off the keynotes by describing Tangoe’s customer base as 1/3rd fixed telephony, 1/3rd mobile-focused, and 1/3rd combined fixed and mobile telecom. This starting point shows Tangoe’s potential to grow simply by focusing on existing clients: in theory, Tangoe could double its revenue and spend under management without adding another client. This makes Tangoe difficult to measure, as much of its potential growth is based on execution rather than the market penetration that most of Tangoe’s competitors are trying to achieve. Irwin also brought up three key disruptive trends of Globalization, Uberization (real-time and localized shared services), and Digital Transformation. On the last point, Irwin noted that 65% of Tangoe customers were currently going through some level of Digital Transformation.

Next, Chief Product Officer Ivan Latanision took the stage. Latanision is relatively new to Tangoe, with prior experience in clinical development, quality management, e-commerce, and financial services that should prove useful in supporting Tangoe’s product portfolio and the challenges of enterprise governance. In his keynote, Latanision introduced the roadmap to move Tangoe clients to one of three platforms: Rivermine, Asentinel, or the new Atlas platform being developed by Tangoe. This initial migration will be based on whether the client is looking for deeply configurable workflows, process automation, or net-new deployments with a focus on broad IT management, respectively. Based on the rough timeline provided, Amalgam expects that this three-platform strategy will be used for the next 18-24 months and that, starting sometime in 2020, Tangoe will converge customers onto the new Atlas platform.

SVP of Product Mark Ledbetter followed up with a series of demos to show the new Tangoe Atlas platform. These demos were run by product managers Amy Ramsden, Cynthia Flynn, and Chris Molnar and demonstrated how Atlas would work from a user interface, asset and service template, and cloud perspective. Amalgam’s initial take was that this platform represented a significant upgrade over Tangoe’s CMP platform and was on par, from a front-end perspective, with similar efforts that Amalgam has seen.

Chris Molnar Presents Cloud Management

Product Management guru Michele Wheeler also came out to thank the initial customers who have been testing Atlas in a production environment. Amalgam believes this is an important step forward for Tangoe in that it shows that Atlas is a workable environment with a legitimate future as a platform. The roadmap for Tangoe customers has been somewhat of a mystery in the past because of the wide variety of platforms that needed to be integrated and aggregated from Tangoe’s prior M&A activity. Chief of Operations Tom Flynn completed the operational presentation by providing updates on Tangoe’s service delivery and upgrades. Flynn previously acted as Tangoe’s Chief Administrative Officer and General Counsel, which makes this shift in responsibility interesting: Flynn now gets to focus on service delivery and process management rather than the governance, risk management, and compliance issues where he previously guided Tangoe.

Thought Leadership Keynotes at Tangoe LIVE

Tangoe also featured two celebrity keynotes focused on the importance of innovation and preparing for the future. Amalgam believes these keynotes were well-suited to the fundamental challenge that much of the audience faced in figuring out how to expand their responsibilities beyond the traditional view of supporting network, telecom, and mobility spend to the larger world of holistic enterprise technology spend. Innovation expert Stephen Shapiro showed the audience how to think outside of the boundaries that we lock ourselves into by looking at other departments, industries, and categories for new solutions. Business advisor and industry analyst Maribel Lopez followed up on this theme the next day by framing our current challenge of technology as both the Best of Times and the Worst of Times in determining how technology can effectively empower workers. By providing a starting point for achieving the goal of digital mastery, Lopez challenged the audience to take on the challenge of controlling technology and maximizing its value rather than becoming overwhelmed by the variety, scale, and scope of technology.

Amalgam’s Role at Tangoe LIVE

Amalgam Insights also provided two presentations at Tangoe LIVE: “Budgeting and Forecast for Technology Trends” and “The Convergence of Usage Management and IoT,” which was co-presented with Tangoe’s Craig Riegelhaupt.

In the Budgeting and Forecast presentation, Amalgam shared key trends that will affect telecom and mobility budgeting over the rest of 2018 and 2019 including:
• USTelecom’s Petition for Forbearance to the FCC, which would allow large carriers to immediately raise the resale rate of circuits by 15% and potentially allow them to stop reselling circuits to smaller carriers altogether
• Universal Service Fee variances and the potential to avoid these costs by changing carriers or procuring circuits more effectively
• The 2 GB limit for rate plans and how this will affect rate plan optimization
• The T-Mobile/Sprint Merger and its potential effects on enterprise telephony
• The wild cards of cloud and IoT as expense categories
• Why IoT Management is the ultimate Millennial Reality Check

Amalgam’s Hyoun Park speaks at Tangoe Live

Amalgam also teamed up with Tangoe to dig more deeply into the Internet of Things and why IoT will be a key driver for transforming mobility management over the next five years. By providing representative use cases, connectivity options, and key milestone metrics, this presentation provided Tangoe end users with an introductory guide to how their mobility management efforts needed to change to support IoT.

Tangoe’s Center of Excellence and Amalgam Insights’ Role

At Tangoe LIVE, Tangoe also announced the launch of the Tangoe Center of Excellence, which will provide training courses on topics including inventory management, expense management, usage management, soft skills, and billing disputes, with a focus on providing guidance from industry experts.

As a part of this launch, Tangoe announced that Amalgam Insights is a part of Tangoe’s COE Advisory Board. In this role, we look forward to helping Tangoe educate its customer base on the key market trends associated with managing digital transformation and creating an effective “manager of managers” capability to support the quickly expanding world of enterprise technology. These classes will start in the Fall of 2018, and Amalgam is excited to support this effort to teach best practices to enterprise IT organizations.

Conclusion

Tangoe LIVE lived up to its billing as the largest Telecom and Technology Expense Management user show. But even more importantly, Tangoe was able to show its Atlas platform and provide a framework for users that are being pushed to support change, transformation, and new spend categories. Amalgam believes that the greatest challenge over the rest of this decade for TEM managers is how they will support the emerging trends of cloud, IoT, outsourcing, and managed services, as all of these spend categories require management and optimization. At this show, Tangoe made its case that it intends to be a platform solution to support this future of IT management and to support its customers across invoice, expense, service order, and usage management.

Although Tangoe’s official theme for this show was “Work Smarter,” Amalgam believes that the true theme, based on keynotes, roadmap, sessions, and end user questions, was “Prepare for Change,” with the tacit assumption of doing this with Tangoe. In facing this challenge head-on and providing clear guidance and advisory support, Tangoe made a compelling case at Tangoe LIVE for managing the future of IT.


AMALGAM INSIGHTS MEDIA ALERT: Starbucks is closing its stores. Is It Enough?

For release 8:30 AM, May 28, 2018

For more information:

Hyoun Park, Amalgam Insights, hyoun@amalgaminsights.com, 415.754.9686
Steve Friedberg, MMI Communications, steve@amalgaminsights.com, 484.550.2900

AMALGAM INSIGHTS MEDIA ALERT:  Starbucks is closing its stores and doing awareness training for all of its employees this week.  Learning researcher asks, “Is it enough?”

WHAT:  Starbucks says it’s closing its 8,000 stores tomorrow, May 29, for what it calls “a conversation and learning session on race, bias and the building of a diverse welcoming company.”

Todd Maddox, Ph.D., an Amalgam Insights Learning Scientist/Research Fellow, applauds the company’s commitment to ongoing training, saying that approach may work, but warns that unless the training is continuous, Starbucks runs the risk of backsliding:

“My hope is that the company utilizes training content that focuses on true behavior change, as opposed to simply teaching people to identify inappropriate behavior. I also hope that Starbucks goes beyond training during the onboarding process, and incorporates it as a regular, ongoing part of employee training. The brain is hardwired to forget and requires refreshers to consolidate information in long-term memory.


“Just as sexual harassment prevention and many other people skills are about behavior, so is unconscious (racial) bias and all other aspects of appropriate interaction. People skills matter in all facets of society and corporate life. It is time to embrace the science of learning and work to address these shortcomings with effective training.”

 

WHO:  Todd Maddox, Ph.D. has more than 200 published articles, 10,000 citations, and $10 million in external research funding in his 25+ years researching the brain basis of behavior.  He’s been quoted in Forbes, CBS Radio, Training Journal, Chief Learning Officer, and other publications on topics such as the use of virtual reality in workplace sexual harassment avoidance training.

 

Todd’s available for comment on this or other topics; if you’d like to speak with him, please contact steve@amalgaminsights.com.

 


Monitoring Containers: What’s Inside YOUR Cluster?

Tom Petrocelli, Amalgam Insights Research Fellow

It’s not news that there is a lot of buzz around containers. As companies begin to widely deploy microservices architectures, containers are the obvious choice with which to implement them. As companies deploy container clusters into production, however, an issue has to be dealt with immediately: container architectures have a lot of moving parts. The whole point of microservices is to break apart monolithic components into smaller services. This means that what was once a big process running on a resource-rich server is now multiple processes spread across one or many servers. On top of the architecture change, a container cluster usually encompasses a variety of containers that are not application code. These include security, load balancing, network management, web servers, etc. Entire frameworks, such as NGINX Unit 1.0, may be deployed as infrastructure for the cluster. Services that used to be centralized in a network are now incorporated into the application itself as part of the container network.

Because an “application” is now really a collection of smaller services running in a virtual network, there’s a lot more that can go wrong. The more containers, the more opportunities for misbehaving components. For example:

  • Network issues. No matter how the network is actually implemented, there are opportunities for typical network problems to emerge including deadlocked communication and slow connections. Instead of these being part of monolithic network appliances, they are distributed throughout a number of local container clusters.
  • Apps that are slow and make everything else slower. Poor performance of a critical component in the cluster can drag down overall performance. With microservices, the entire app can be waiting on a service that is not responding quickly.
  • Containers that are dying and respawning. A container can crash, which may cause an orchestrator such as Kubernetes to respawn it. A badly behaving container may do this multiple times (a minimal detection sketch appears below).

These are just a few examples of the types of problems that a container cluster can have that negatively affect a production system. None of these are new to applications in general. Applications and services can fail, lock up, or slow down in other architectures; there are simply a lot more parts in a container cluster, creating more opportunities for problems to occur. In addition, typical application monitoring tools aren’t necessarily designed for container clusters. There are events that traditional application monitoring will miss, especially issues with the containers and Kubernetes themselves.
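As a concrete illustration of the kind of container-level signal that traditional application monitoring tends to miss, here is a minimal sketch of pulling restart counts from the Kubernetes API with the official Python client. This is not from the article; the namespace and restart threshold are illustrative assumptions.

    # Minimal sketch (not from the article): flag containers that keep respawning,
    # using the official Kubernetes Python client (pip install kubernetes).
    # The namespace and restart threshold are illustrative assumptions.
    from kubernetes import client, config

    RESTART_THRESHOLD = 3  # assumed cutoff for a "badly behaving" container

    def flag_restarting_containers(namespace="default"):
        config.load_kube_config()  # use load_incluster_config() when running inside the cluster
        v1 = client.CoreV1Api()
        for pod in v1.list_namespaced_pod(namespace).items:
            for status in pod.status.container_statuses or []:
                if status.restart_count >= RESTART_THRESHOLD:
                    print(f"{pod.metadata.name}/{status.name} restarted "
                          f"{status.restart_count} times")

    if __name__ == "__main__":
        flag_restarting_containers()

A purpose-built monitoring product would collect this kind of signal continuously rather than on demand, but the sketch shows how close to the orchestrator the relevant data lives.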

To combat these issues, a generation of products and open source projects is emerging that are retrofitted or purpose-built for container clusters. In some cases, app monitoring has been extended to include containers (New Relic comes to mind). New companies, such as LightStep, have also entered the market for application monitoring, but with containers in mind from the outset. Just as exciting are the open source projects that are gaining steam. Prometheus (for application monitoring), OpenTracing (network tracing), and Jaeger (transaction tracing) are some of the open source projects that help gather data about the functioning of a cluster.
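To give a rough sense of what instrumenting a service for Prometheus looks like in practice, here is a minimal sketch using the prometheus_client Python library. This is an illustration rather than anything described in the article; the metric names, port, and simulated workload are assumptions.

    # Minimal sketch (not from the article): exposing app-level metrics for
    # Prometheus to scrape, using the prometheus_client library
    # (pip install prometheus-client). Metric names, port, and the simulated
    # workload are illustrative assumptions.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    REQUESTS = Counter("orders_requests_total", "Requests handled by the orders service")
    LATENCY = Histogram("orders_request_latency_seconds", "Request latency in seconds")

    @LATENCY.time()  # records how long each call takes
    def handle_request():
        REQUESTS.inc()
        time.sleep(random.uniform(0.01, 0.2))  # stand-in for real work

    if __name__ == "__main__":
        start_http_server(8000)  # serves /metrics for Prometheus to scrape
        while True:
            handle_request()

Because each service exposes its own metrics endpoint inside the cluster, the monitoring data travels with the containers rather than sitting in a centralized appliance.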

What makes these projects and products interesting is that they place monitoring components in the clusters, close to the applications’ components, and take advantage of container and Kubernetes APIs. This placement helps sysops to have a more complete view of all the parts and interactions of the container cluster. Information that is unique to containers and Kubernetes is available alongside traditional application and network monitoring data.

As IT departments start to roll scalable container clusters into production, knowing what is happening within is essential. Thankfully, the ecosystem for monitoring is evolving quickly, driven equally by companies and open source communities.


Outsourcing Core IT When You Are Core IT

Today, I provided a quick presentation on the BrightTALK channel on Outsourcing Core IT when you ARE Core IT. It turns out that one of the takeaways from this webinar is about your risk of being outsourced based on your IT model. First, your fear of and approach to outsourcing really depend on whether you are part of operational IT, enabling IT, or transformational IT:

Operational IT is a cost-driven, commoditized, and miserable IT environment that does not realize that technology can provide a strategic advantage. In this world, a smartphone, a server running an AI algorithm, and a pile of paper clips are potentially all of equal value and defined by their cost. (Hint: This is not a great environment to work in. You are at risk of being outsourced!)

Enabling IT identifies that technology can affect revenues, rather than just overhead costs. This difference is not trivial, as the average US employee in 2012 made about $47,000 per year, but companies brought in over $280,000 per employee each year. The difference between calculating the value of tech at roughly $20 an hour versus roughly $140 an hour for each employee is immense!
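For readers who want the arithmetic behind those hourly figures, here is a quick back-of-the-envelope sketch; the 2,080-hour work year is an assumption on my part rather than a figure stated in the webinar.

    # Back-of-the-envelope math behind the "$20 vs. $140 an hour" framing.
    # Assumption (not stated in the post): a ~2,080-hour full-time work year.
    HOURS_PER_YEAR = 52 * 40

    avg_employee_pay = 47_000       # average US employee pay cited for 2012
    revenue_per_employee = 280_000  # revenue brought in per employee per year

    print(avg_employee_pay / HOURS_PER_YEAR)      # ~22.6 -> "roughly $20 an hour"
    print(revenue_per_employee / HOURS_PER_YEAR)  # ~134.6 -> "roughly $140 an hour"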

And finally, there is Transformational IT, where the company is literally betting itself on new Digital Transformation and IT efforts are focused on moving fast.

These are three different modes of IT, each requiring its own approach to IT outsourcing. To learn more, check out the embedded video below to watch the entire 25-minute webinar. I look forward to your thoughts and opinions and hope this helps you on your career path. If you have any questions, please reach out to me at hyoun @ amalgaminsights . com!


EPM at a Crossroads Part 1: Why Is EPM So Confusing?

This blog is the first of a multi-blog series explaining the challenges of Enterprise Performance Management aka Financial Performance Management, Business Performance Management, Corporate Performance Management, Financial Planning and Analysis, and Planning, Budgeting, and Forecasting. Frankly, this list of names alone really helps explain a lot of the confusion. But one of the strangest aspects of this constant market renaming is that there are software and service vendors that literally provide Performance Management for Enterprises (so, literal Enterprise Performance Management) that refuse to call themselves this because of the confusion created by other industry analysts.

This blog series and the subsequent report seek to be a cure to this disease by exploring and defining Enterprise Performance Management more carefully and specifically so that companies can start using common-sense terms to describe their products once more.

Key Stakeholders: CFO, Chief Accounting Officer, Controllers, Finance Directors and Managers, Treasury Managers, Financial Risk Managers, Accounting Directors and Managers

Why It Matters: The need for true Enterprise Performance Management (EPM) has increased with the emergence of the strategic CFO and demands for cross-departmental business visibility. But naming confusion across the market has made EPM a muddled term. Amalgam Insights is stepping up to fix this term and more accurately define the tasks and roles of EPM.

So, Why is Enterprise Performance Management So Confusing?

The market of Enterprise Performance Management has been a confusing one to follow over the past few years. During this decade, the functions of enterprise planning and budgeting have quickly evolved as the role of the CFO has transformed from chief beancounter to a chief strategist who has eclipsed the Chief Operating Officer and Chief Revenue Officer as the most important C-Level officer in the enterprise. The reason for this is simple: a company only goes as far as money allows it to.

The emergence of the strategic CFO has led to four interesting trends in the Finance Office.

  1. Financial officers now need to access non-financial data to support strategic needs and to track business progress in areas that are not purely financial in nature. This means that the CFO is looking for better data, improved integration between key data sources and applications, and more integrated planning between multiple departments.
  2. “Big Data” also becomes a big deal for the CFO, who must integrate relevant supply chain and manufacturing data, audio & video documents, external weather and government data, and other business descriptors that go outside of the traditional accounting entries used to build the balance sheet, income statement, and other traditional financial documents. To integrate the gigantic data volumes and non-traditional data formats associated with Big Data, planning tools must be scalable and flexible. This may mean going beyond the traditional OLAP cube to support semi-structured and unstructured data that provides business guidance.

  3. A quantitative approach to business management also requires a data-driven and analytic view of the business. After all, what is the point of data if it is not aligned and contextualized to the business? Behind the buzzwords, this means that the CFO must upgrade her tech skills and review college math including statistics and possibly combinatorics and linear algebra. In addition, the CFO needs to track all aspects of strategic finance, including consolidation & close, treasury management, risk management, and strategic business modeling capabilities.

  4. Finally, the CFO must think beyond data and consider the workflows associated with each key financial and operational task. This means the CFO is now also the Chief Process Officer, a role once assigned to the Chief Operating Officer, because of the CFO’s role in governance and strategy. Although accountants are not considered masters of operational management and supply chain, the modern CFO is slowly being cornered into this environment.

These drivers of strategy, data, and workflows force CFOs outside of their accounting and finance backgrounds to be true business managers. And in this context, it makes sense that Enterprise Performance Management is breaking up into multiple different paths to support the analytics, machine learning, workflows, document management, strategy, qualitative data, and portfolio management challenges that the modern strategic CFO faces.

Finance and accounting are evolving quickly in a world with new business models, massive data capacities, and complex analytical tools. But based on this fragmentation, it is difficult to imagine at first glance how any traditional planning and budgeting solution could effectively support all of these new tasks. So, in response, “EPM” solutions have started to go down four different paths. In upcoming reports, I’ll explain the paths that modern EPM solutions are taking to solve the strategic CFO’s dilemmas.


Market Milestone: Oracle Builds Data Science Gravity By Purchasing DataScience.com

Industry: Data Science Platforms

Key Stakeholders: IT managers, data scientists, data analysts, database administrators, application developers, enterprise statisticians, machine learning directors and managers, current DataScience.com customers, current Oracle customers

Why It Matters: Oracle released a number of AI tools in Q4 2017, but until now, it lacked a data science platform to support complete data science workflows. With this acquisition, Oracle now has an end-to-end platform to manage these workflows and support collaboration among teams of data scientists and business users, and it joins other major enterprise software companies in being able to operationalize data science.

Top Takeaways: Oracle acquired DataScience.com to retain customers with data science needs in-house rather than risk losing their data science-based business to competitors. However, Oracle has not yet defined a timeline for rolling out the unified data science platform, or its future availability on the Oracle Cloud.

Oracle Acquires DataScience.com

On May 16, 2018, Oracle announced that it had agreed to acquire DataScience.com, an enterprise data science platform that Oracle expects to add to the Oracle Cloud environment. With Oracle’s debut of a number of AI tools last fall, this latest acquisition telegraphs Oracle’s intent to expedite its entrance into the data science platform market by buying its way in.

Oracle is reviewing DataScience.com’s existing product roadmap and will supply guidance in the future, but it intends to provide a single unified data science platform in concert with Oracle Cloud Infrastructure and its existing SaaS and PaaS offerings, empowering customers with a broader suite of machine learning tools and a complete workflow.


4 Key Business & Enterprise Recommendations for Google Duplex

Everybody is talking about Google Duplex, announced earlier this week at Google I/O. Based on previous interactions with IVRs from calling vendors for customer support, Duplex is an impressive leap forward in natural language AI and offers hope of making some clerical tasks easier to complete. Duplex will be tested further by a limited number of users in Google Assistant this summer, refining its ability to complete specific tasks: getting holiday hours for a business, making restaurant reservations, and scheduling appointments at a hair salon.

So what does this mean for most businesses?



Revealing the Learning Science for Improving IT Onboarding

Key Stakeholders: IT Managers, IT Directors, Chief Information Officers, Chief Technology Officers, Chief Digital Officers, IT Governance Managers, and IT Project and Portfolio Managers.

Top Takeaways: Information technology is innovating at an amazing pace. These technologies hold the promise of increased effectiveness, efficiency, and profits. Unfortunately, the training tools developed to onboard users are often constructed as an afterthought and are ineffective. Technologies with great potential are underutilized because users are poorly trained on them. Learning scientists can remedy this problem and can help IT professionals build effective training tools. Onboarding will become more efficient, and profits will follow in short order.

IT Onboarding Has an Adoption Problem

In my lifetime I have seen a number of amazing new technologies emerge. I remember the first handheld calculators and the first “flip” phones. Fast forward to 2018 and the majority of Americans now carry some of the most sophisticated technology that exists in their palm.

In the corporate world technology is everywhere. Old technologies are being updated regularly, and new innovative, disruptive technologies are being created every day. With every update or new technology comes a major challenge that is being poorly addressed.

How do we get users to adopt the technology, and to use it efficiently and effectively?

This is a problem in training. As a learning scientist I find it remarkable that so much time and money is allocated to developing these amazing technologies, but training is treated as an afterthought. This is a recipe for poor adoption, underutilization of these powerful tools, and reduced profits.

The IT Onboarding Issue

I experienced this dozens of times in my 25-year career as a University Professor. I routinely received emails outlining in excruciating detail a new set of rules, regulations or policies. The email would be laced with threats for non-compliance but would offer only poorly designed instructions on how to achieve compliance. The usual approach was to ignore the instructions in the email and instead to use the grapevine to identify which buttons to click, and in what order, to achieve compliance. I also received emails explaining (again usually in excruciating detail) how a new version of existing software had changed, or how some new technology was going to be unveiled that would replace a tool that I currently used. I was provided with a schedule of “training workshops” to attend, or a link to some unintelligible “training manual”. I either found a way to use the old technology in secret, made a formal request to continue using the “old” technology to avoid having to learn the new one, or asked the “tech” guy to come to my office and show me how to do the 5 things that I needed the new technology to do. I would take copious notes and save them for future reference.

If I got an email detailing a new technology that did not affect my current system, I deleted it with relief.

My story is common; it suggests a broken and ineffective system, and it all centers on the lack of quality training.

This is not restricted to the University setting. I have a colleague who builds mobile apps for large pharmacy chains. The mobile app reminds patients to refill prescriptions, allows pharmacists to adjust prescriptions, and offers several other features. It is a great offering and adds value for his clients. As with most IT, he rolls out a new release every few months. His main struggles are determining what features to improve, what new features to add, and how to effectively onboard users.

He gets some guidance on how to improve an existing feature, or what new features to add, but he often complains that these suggestions sound more like a client’s personal preference and are not driven by objectively-determined customer needs. With respect to training, he admits that the training manual is an afterthought and he is frustrated because his team is constantly fielding questions that are answered in the manual.

The result: “Improved” or new features no one wants, and difficulty training users of the app.

Experts in IT development should not be expected to have the skills to build an effective training manual, but they do need to understand that onboarding is critical and requires an effective training tool.

Key Recommendations for Improving IT Onboarding

This is where learning science should be leveraged. An extensive body of psychological and brain science research (much of it my own) has been conducted over the past several decades that provides specific guidelines on how to effectively train users. Here are some suggestions for improving IT training and development that derive from learning science.

Recommendation 1 – Survey Your Users: All too often technology is “improved” or a new feature is released and few clients see its value. Users know what they want. They know what problems they need to solve and want tools for solving those problems. Users should be surveyed to determine what features they like, what features they feel could be improved, and what problems they would like solved with a new feature. The simplest, cheapest and most effective way to do this is to ask them via an objective survey. Don’t just ask the CEO, ask the users.

Recommendation 2 – Develop Step-by-Step Instructions for Each Feature and Problem it Solves: Although your new technology may have a large number of features and can solve a large number of problems, most technology users take advantage of a small handful of features to solve a small number of problems. Step-by-step instructions should be developed that train the user on each specific feature and how it helps them solve a specific problem. If I only use five features to solve two main problems, I should be able to train on those features within the context of these two problems. This approach will be fast and effective.

Recommendation 3 – Training Content Should be Grounded in Microlearning: Microlearning is an approach to training that leverages the brain science of learning. Attention spans are short and working memory capacity is limited. People learn best when training content comes in small chunks (5 – 10 minutes), each focused on a single coherent concept. If you need to utilize two independent features to solve a specific problem, training will be most effective if you train on each feature separately, then train how to use those two features in concert to solve the specific problem.

Recommendation 4 – Develop Training Materials in Multiple Mediums: Some learners prefer to read, some prefer to watch videos, and some prefer to listen. Training materials should be available in as many of these mediums as possible.

Recommendation 5 – Incorporate Knowledge Checks: One of the best ways to train our brain is to test our knowledge of material that we have learned. Testing a learner’s knowledge of the steps needed to solve some problem, or their knowledge of the features of your app requires the learner to use cognitive effort to extract and retrieve that information from memory. This process strengthens information already learned and can identify areas of weakness in their knowledge. This would cue the learner to revisit the training material.

How to Implement These Recommendations & Representative Vendors

Now that we have identified the problem and offered some solutions, key stakeholders need to know how to implement these recommendations. Several avenues are available. Key stakeholders can work with corporate learning and internal learning and development solutions, as well as with corporate communications from an internal marketing perspective. We at Amalgam provide guidance on the best solutions and how to implement them.

Solutions are available from companies such as: Accord LMS, Agylia Group, Axonify, Cornerstone, Crossknowledge, Degreed, Docebo, EdCast, Expertus, Fivel, GamEffective, Grovo, Litmos, Lumesse, MindTickle, Mzinga, PageUp, Pathgather, PeopleFluent, Qstream, Reflekive, Saba, Salesforce, SAP SuccessFactors, Skillsoft, Talentquest, TalentSoft, Thought Industries, Totara Learn, and Zunos. Over the coming months, I will be researching each of these and other offerings in greater detail and will be writing about how they support IT education.


Cloud Foundry Provides an Open Source Vision for Enterprise Software Development

From April 18-20, Amalgam Insights attended Cloud Foundry Summit 2018 in our hometown of Boston, MA. Both Research Fellow Tom Petrocelli and Founder Hyoun Park attended as we explored the current positioning of Cloud Foundry as an application development platform in light of the ever-changing world of technology. The timing of Cloud Foundry Summit this year coincided with Pivotal’s IPO, which made this Summit especially interesting. Through our attendance of keynote sessions, panels, and the analyst program, we took away several key lessons.

First, the conflict between Cloud Foundry and Kubernetes is disappearing as each solution has found its rightful place in the DevOps world. My colleague Tom Petrocelli goes into more detail in explaining how the perceived conflict between Cloud Foundry’s initial containerization efforts and Kubernetes is not justified. This summit made it very clear that Cloud Foundry is a practical solution focused on supporting enterprise-grade applications that abstracts both infrastructure and scale. Amalgam takes the stance that this conflict in software abstraction should never have existed in the first place; practitioners have been creating an artificial conflict that the technology solutions are seeking to clarify and ameliorate.

At the event, Cloud Foundry also announced heavy-hitting partners. Alibaba Cloud is now a Gold member of the Cloud Foundry Foundation. With this announcement, Cloud Foundry goes to China and joins the fastest growing cloud in the world. This announcement mirrors Red Hat’s announcement of Alibaba becoming a Red Hat Certified Cloud and Service Provider last October and leads to an interesting showdown in China as developers choose between Cloud Foundry and OpenStack to build China’s future of software.

In addition, Cloud Foundry Foundation announced Cloud.gov as the 8th certified provider of Cloud Foundry. This step forward will allow federal agencies to use a certified and FedRAMP Authorized Cloud Foundry platform. The importance of this announcement was emphasized in an Air Force-led session on Project Kessel Run, which is focused on agile software for the Air Force. This session showed how Cloud Foundry accelerated the ability to execute in an environment where the average project took over 8 years to complete due to challenges such as the need to ask for Congressional approval on a yearly basis. By using Cloud Foundry, the Air Force has identified opportunities to build applications in a couple of months and get these tools directly to soldiers to create a culture that is #AgileAF (which obviously stands for “Agile Air Force”). The role of Cloud Foundry in accelerating one of the most challenging and heavily governed application development environments in the world accentuated the value of Cloud Foundry in effectively enabling the vaunted goal of digital transformation.

From a technical perspective, the announcement that really grabbed our attention was the demonstration of cfdev in providing a full Cloud Foundry development experience on a local laptop or workstation on native hypervisors. This will make adoption far easier for developers seeking to quickly develop and debut applications as well as to help developers build test and sandbox environments for Cloud Foundry.

Overall, this event demonstrated the evolution of Cloud Foundry. The themes of Cloud Foundry as both a government enabler and the move to China were front and center throughout this event. Combined with the Pivotal IPO and Cloud Foundry’s ability to work with Kubernetes, it is hard to deny Cloud Foundry’s progress as an enterprise-grade, global solution for accelerating application development and as a partner to other appropriate technologies.


Lynne Baer: Clarifying Data Science Platforms for Business

My name is Lynne Baer, and I’ll be covering the world of data science software for Amalgam Insights. I’ll investigate data science platforms and apps to solve the puzzle of getting the right tools to the right people and organizations.

“Data science” is on the tip of every executive’s tongue right now. The idea that new business initiatives (and improvements to existing ones) can be found in the data a company is already collecting is compelling. Perhaps your organization has already dipped its toes in the data discovery and analysis waters – your employees may be managing your company’s data in Informatica, or performing statistical analysis in Statistica, or experimenting with Tableau to transform data into visualizations.

But what is a Data Science Platform? Right now, if you’re looking to buy software for your company to do data science-related tasks, it’s difficult to know which applications will actually suit your needs. Do you already have a data workflow you’d like to build on, or are you looking to the structure of an end-to-end platform to set your data science initiative up for success? How do you coordinate a team of data scientists to take better advantage of resources they’ve already created? Do you have coders in-house already who can work with a platform designed for people writing in Python, R, Scala, or Julia? Are there more user-friendly tools out there your company can use if you don’t? What do you do if some of your data requires tighter security protocols around it? Or if some of your data models themselves are proprietary and/or confidential?

All of these questions are part and parcel of the big one: How can companies tell what makes a good data science platform for their needs before investing time and money? Are traditional enterprise software vendors like IBM, Microsoft, SAP, and SAS dependable in this space? What about companies like Alteryx, H2O.ai, KNIME, RapidMiner? Other popular platforms under consideration should also include Anaconda, Angoss (recently acquired by Datawatch), Domino, Databricks, Dataiku, MapR, Mathworks, Teradata, and TIBCO. And then there are new startups like Sentenai, focused on streaming sensor data, and slightly more established companies like Cloudera looking to expand from their existing offerings.

Over the next several months, I’ll be digging deeply to answer these questions, speaking with vendors, users, and investors in the data science market. I would love to speak with you, and I look forward to continuing this discussion. And if you’ll be at Alteryx Inspire in June, I’ll see you there.