As we head into 2020, “AI (Artificial Intelligence) for Good” is becoming an increasingly common phrase. Individuals and organizations with AI skillsets (including data management, data integration, statistical analysis, machine learning, algorithmic model development, and application deployment skills) have put significant effort into pursuing ethical AI.
Amalgam Insights believes that these efforts have largely been piecemeal and inadequate: given the breadth and potential repercussions of AI on business outcomes, they do not meet common-sense standards for companies to credibly state that they are pursuing, documenting, and practicing true ethical AI. This is not due to a lack of interest, but to a couple of key considerations. First, AI is a relatively new capability in the enterprise IT portfolio that often lacks formal practices and guidelines and has been managed as a “skunkworks” or experimental project. Second, businesses have treated AI as a purely technical practice rather than a business practice, and in skipping straight to technical development have made a number of assumptions that would typically not be made for more mature technical capabilities and projects.
In the past, Amalgam Insights has provided frameworks to help organizations take the next step to AI through our BI to AI progression.
To pursue a more ethical model of AI, Amalgam Insights believes that AI efforts need to be analyzed through three key lenses:
- Executive Design
- Technical Development
- Operational Deployment
Figure 2: Amalgam’s Three Key Areas for Ethical AI
In each of these areas, businesses must ask the right questions and adequately prepare for the deployment of ethical AI. In this framework, AI is not just a set of machine learning algorithms to be utilized, but an enabler to effectively augment problem-solving for appropriate challenges.
Over the next week, Amalgam Insights will explore 12 areas of bias across these three categories. The goal is a straightforward framework that companies can use to guide their AI initiatives and to enforce a consistent set of ethical guidelines supporting governance across the executive, technical, and operational aspects of initiating, developing, and deploying AI.
In our next blog, we will explore Executive Design with a focus on the five key questions that executives must consider as they begin exploring the use of AI within their enterprise.
This year’s KubeCon+CloudNativeCon was, to say the least, an experience. Normally sunny San Diego treated conference-goers to torrential downpours. The unusual weather turned the block party event into a bit of a sog. My shoes are still drying out. The record crowds – this year’s attendance was 12,000, up from 8,000 last year in Seattle – made navigating the show floor a challenge for many attendees.
Despite the weather and the crowds, this was an exciting KubeCon+CloudNativeCon. On display was the maturation of the Kubernetes and container market. Both the technology and the best practices discussions were less about “what is Kubernetes” and more about “how does this fit into my architecture?” and “how enterprise-ready is this stuff?” This shift from the “what” to the “how” is a sign that Kubernetes is heading quickly to the mainstream. There were other indicators at KubeCon+CloudNativeCon that, to me, show Kubernetes maturing into a real enterprise technology.
First, the makeup of the Kubernetes community is clearly changing. Two years ago, almost every company at KubeCon+CloudNativeCon was some form of digital-forward company like Lyft or cloud technology vendor such as Google or Red Hat. Now, there are many more traditional companies on both the IT and vendor side. Vendors such as HPE, Oracle, Intel, and Microsoft, mainstays of technology for the past 30 years, are here in force. Industries like telecommunications (drawn by the promise of edge computing), finance, manufacturing, and retail are much more visible than they were just a short time ago. While microservices and Kubernetes are not yet as widely deployed as more traditional n-tier architectures and classic middleware, the mainstream is clearly interested.
Another indicator of the changes in the Kubernetes space is the prominence of security in the community. Not only are there more security vendors than ever, but we are seeing more keynote time given to security practices. Security is, of course, a major component of making Kubernetes enterprise-ready. Without solid security practices and technology, Kubernetes will never be acceptable to a broad swath of large to mid-sized businesses. That said, there is still much more that needs to be done with Kubernetes security. The good news is that the community is working on it.
Finally, there is clearly more attention being paid to operating Kubernetes in a production environment. That’s most evident in the proliferation of tracing and logging technologies, from both new and older companies, on display on the show floor and mainstage. Policy management was also an important area of discussion at the conference. These are all examples of the type of infrastructure that operations teams will need to manage Kubernetes at scale, and a sign that the community is thinking seriously about what happens after deployment.
It certainly helps that a lot of basic issues with Kubernetes have been solved, but there is still more work to do. There are difficult challenges that need attention. How to migrate existing stateful apps originally written in Java and based on n-tier architectures is still mostly an open question. Storage is another area that needs more innovation, though there’s serious work underway in that space. Despite the need for continued work, the progress seen at KubeCon+CloudNativeCon NA 2019 points to a future where Kubernetes is a major platform for enterprise applications. 2020 will be another pivotal year for Kubernetes, containers, and microservices architectures. It may even be the year of mainstream adoption. We’ll be watching.
Key Stakeholders: Chief Information Officer, Chief Financial Officer, Chief Accounting Officer, Controllers, IT Directors and Managers, Enterprise Mobility Directors and Managers, Networking Directors and Managers, Software Asset Directors and Managers, Cloud Service Directors and Managers, and other technology budget holders responsible for telecom, network, mobility, SaaS, IaaS, and IT asset and service expenses.
Why It Matters: The race for IT spend management consolidation continues. The financial management of IT is increasingly seen as a strategic advantage for managing the digital supply chain across network, telecom, wireless, cloud, software, and service portfolios.
Top Takeaway: The new combined business, with over 800 employees, 3,500 customers, and an estimated 2 million devices and $20 billion under management, serves both as legitimate competition for market leader Tangoe and as an attractive potential acquisition for larger IT management vendors.
[Disclaimer: Amalgam Insights has worked with Calero and MDSL. Amalgam Insights has provided end-user inquiries to both Calero and MDSL customers. Amalgam Insights has provided consulting services to investors and advisors involved in this acquisition.]
Amalgam Insights’ Research Fellow Tom Petrocelli has just published a groundbreaking Market Landscape on Continuous Integration and Continuous Delivery titled “The 2020 Guide to Continuous Integration and Continuous Delivery: Process, Projects, and Products.”
This Market Landscape provides guidance on the processes, projects, products, and vendors that allow leading software development departments to effectively support continuous integration and delivery across their application portfolio. It is recommended for Software Engineering Directors and IT Executives making “buy versus build” decisions and designing CI/CD workflows.
In a recently published Market Milestone, Todd Maddox, Ph.D., Learning Scientist and Research Fellow for Amalgam Insights, evaluated CrossKnowledge’s CK CONNECT Solution Suite from a neuroscience perspective.
Maddox argues that people (aka soft) skills training is an important part of any organization’s Learning & Development strategy, but that most people skills training solutions are ineffective because they engage only the cognitive learning system in the brain, instead of engaging a combination of the brain’s cognitive, behavioral, and emotional learning systems. CK CONNECT meets these challenges and provides the competitive advantage that organizations need to attract, train, and retain the best talent.
For more information, read the full Market Milestone available on the CrossKnowledge website at no cost.
Organizations are more vulnerable than ever to cybersecurity threats. Global annual cybersecurity costs are predicted to grow from $3 trillion in 2015 to $6 trillion annually by 2021. To stay safe, organizations must train their employees to identify cybersecurity threats and to avoid them. To address this, global spending on cybersecurity products and services is projected to exceed $1 trillion from 2017 to 2021.
Unfortunately, cybersecurity training is particularly challenging because cybersecurity is more about training behavioral “intuition” and situational awareness than it is about training a cognitive, analytic understanding. It is one thing to know “what” to do, but it is another (and mediated by completely different systems in the brain) to know “how” to do it, and to know how to do it under a broad range of situations.
Regrettably, knowing what to do and what not to do does not translate into actually doing or not doing. To train cybersecurity behaviors, the learner must be challenged through behavioral simulation. They must be presented with a situation, generate an appropriate or inappropriate response, and receive real-time, immediate feedback on the correctness of their behavior. Real-time, interactive feedback is the only way to effectively engage the behavioral learning system in the brain. This system learns through gradual, incremental, dopamine-mediated changes in the strength of muscle memories that reside in the striatum of the brain. Critically, the behavioral learning system is distinct from the cognitive learning system, meaning that knowing “what” to do has no effect on learning “how” to do it.
Cybersecurity behavioral training must be broad-based, with the goal of training situational awareness. Cybersecurity attackers are creative, with each attack often having a different look and feel. Simulations must mimic this variability so that they elicit different experiences and emotions. This is how you engage the experiential centers in the brain that represent the sensory aspects of an interaction (e.g., sight and sound) and the emotional centers in the brain that build situational awareness. By utilizing a broad range of cybersecurity simulations that engage experiential and emotional centers in different ways, the learner trains cybersecurity behaviors that generalize and transfer to multiple settings. Ideally, it is also useful to align the difficulty of each simulation to the user’s performance. This personalized approach will be more effective and will speed learning relative to a one-size-fits-all approach.
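To make the idea of performance-aligned difficulty concrete, the adaptation rule can be as simple as a “staircase” procedure: step difficulty up after a correct response and down after a miss. The sketch below is purely illustrative (not any vendor’s implementation), and all names and parameter values are hypothetical:

```python
# Illustrative "staircase" rule for adapting simulation difficulty to
# learner performance. All function names and values are hypothetical.

def next_difficulty(current, was_correct, step=1, floor=1, ceiling=10):
    """Raise difficulty after a correct response, lower it after a miss."""
    if was_correct:
        return min(current + step, ceiling)
    return max(current - step, floor)

def run_session(responses, start=3):
    """Walk a sequence of correct/incorrect responses through the rule,
    returning the difficulty level used after each response."""
    level, history = start, []
    for correct in responses:
        level = next_difficulty(level, correct)
        history.append(level)
    return history
```

For example, a learner who answers correctly twice and then misses would move from level 3 up to 5 and back down to 4, keeping the simulation near the edge of their ability, which is where incremental, feedback-driven learning is most effective.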
If your organization is worried about cybersecurity threats and is looking for a cybersecurity training tool, a few considerations are in order. First and foremost, do not settle for a training solution that focuses only on providing learners with knowledge and information about cybersecurity. This “what”-focused approach will be ineffective at teaching the appropriate behavioral responses to cybersecurity threats and will leave your organization vulnerable. Instead, focus on solutions that are grounded in simulation training, preferably with content and delivery that is broad-based enough to train situational awareness. Solutions that personalize the difficulty of each simulation are a bonus, as they will speed learning and long-term retention of cybersecurity behaviors.