
Azure Advancements Announced at Microsoft Inspire 2018

Last week, Microsoft Inspire took place, which meant that Microsoft made a lot of new product announcements regarding the Azure cloud. In general, Microsoft is both looking up at Amazon, which it trails from a market share perspective, and trying to keep its current #2 place in the Infrastructure as a Service world ahead of the rapidly growing Google Cloud Platform as well as IBM and Oracle. Microsoft Azure is generally regarded as a market-leading cloud platform, along with Amazon, that provides storage, computing, and security and is moving towards analytics, networking, replication, hybrid synchronization, and blockchain support.

Key functionalities that Microsoft has announced include:
Continue reading Azure Advancements Announced at Microsoft Inspire 2018


The Learning Science of Situational Awareness and Patient Safety

An Interactive Webinar with Qstream CEO, Rich Lanchantin

On Wednesday, July 17, 2018, Amalgam’s Learning Scientist and Research Fellow, Todd Maddox, Ph.D. and Qstream’s CEO, Rich Lanchantin conducted a live, interactive webinar focused on the critically important topic of situational awareness and patient safety. Achieving the highest quality in patient care requires a careful balance between efficiency and situational awareness in order to prevent medical errors. Healthcare professionals must be able to observe the current environment while also keeping the various potential outcomes top of mind in order to avoid unnecessary complications.

In the webinar, we discussed the learning science—the marriage of psychology and brain science—of situational awareness and patient safety and showed how to most effectively help clinicians learn and retain information, which results in long-term behavior change. We focused specifically on the challenges faced in optimally training the “what”, “feel” and “how” learning systems in the brain that mediate situational awareness, and how the Qstream platform effectively recruits each of these learning systems.

A replay of the webinar is available for all interested parties at the following link. Simply click the “Webcasts & Slideshare” button to find the webcast. You will also see a second webcast that I recorded with Rich Lanchantin focused on “The Psychology of Hard vs. Soft Skills Training”. Enjoy!

If you are interested in retaining Todd Maddox, the most-cited researcher in corporate learning, for a webinar, speaking engagement, or workshop, please contact us at sales@amalgaminsights.com.


The Adoption Gap in Learning and Development: Brandon Hall Hosts A Podcast with Amalgam Insights’ Todd Maddox

On Thursday, July 19, 2018, Brandon Hall Group released a podcast discussion between Amalgam Insights’ Learning Scientist and Research Fellow, Todd Maddox, Ph.D., and Brandon Hall’s COO, Rachel Cooke. The podcast focused on the “adoption gap” in Learning & Development that results when users are presented with a large number of tools and technologies, but little if any guidance on what tool to use when.

Todd and Rachel discuss the importance of leveraging learning science—the marriage of psychology and brain science—to provide best practices for mapping tools onto learning problems in the most effective manner. They also discuss the challenges faced in optimally training hard skills and people (aka soft) skills.

A replay of the podcast is available for all interested parties at the following link.

If you are interested in retaining Todd Maddox, the most-cited researcher in corporate learning, for a podcast, webinar, speaking engagement, or workshop, please contact us at sales@amalgaminsights.com.


Market Milestone: Informatica and Google Cloud Partner to Open Up Data, Metadata, Processes, and Applications as Managed APIs

[This Research Note was co-written by Hyoun Park and Research Fellow Tom Petrocelli]

Key Stakeholders: Chief Information Officers, Chief Technical Officers, Chief Digital Officers, Data Management Managers, Data Integration Managers, Application Development Managers

Why It Matters: This partnership demonstrates how Informatica’s integration Platform as a Service brings Google Cloud Platform’s Apigee products and Informatica’s machine-learning-driven connectivity into a single solution.

Key Takeaway: This joint Informatica-Google API management solution provides customers with a single solution that provides data, process, and application integration as well as API management. As data challenges evolve into workflow and service management challenges, this solution bridges key gaps for data and application managers and demonstrates how Informatica can partner with other vendors as a neutral third-party to open up enterprise data and support next-generation data challenges.
Continue reading Market Milestone: Informatica and Google Cloud Partner to Open Up Data, Metadata, Processes, and Applications as Managed APIs


The Brain Science View on Why Microlearning Is Misused & Misapplied in Enterprise Learning Environments

Microlearning is taking the Learning and Development world by storm. Although many incorrectly identify microlearning as simply short duration training sessions, leaders in the field define microlearning as an approach to training that focuses on conveying information about a single, specific idea. The goal is to isolate the idea that is to be trained and then to focus all of the training effort on explaining that single idea with engaging and informative content. For example, with respect to sexual harassment, one might watch a brief piece of video content focused on the qualities of an inclusive leader, or ways to identify the symptoms of hostility in the workplace. The information would be presented in an engaging format that stimulates knowledge acquisition in the learner. The microlearning training goal is clear: train one idea succinctly with engaging content, and with as few “extras” as possible.

The Psychological and Brain Science of Microlearning: Training the Hard Skills of People Skills

Learning science—the marriage of psychology and brain science—suggests that microlearning is advantageous for at least two reasons. First, the emphasis on training a single idea as concisely and succinctly as possible increases the likelihood that the learner will remain engaged and attentive during the whole microlearning session. Put another way, the likelihood that the learner’s attention span will be exceeded is low.

Second, because microlearning aims to eliminate any ancillary information that is not directly relevant to the target idea, the cognitive machinery (i.e., working memory and executive attention) available to process the information can focus on the idea to be learned, with minimal effort expended on filtering out irrelevant information that could lead the learner astray. The learner’s cognitive load is focused entirely on the idea to be trained.

Because microlearning techniques are targeted at working memory, executive attention, and attention span in general, microlearning strongly affects processing in the cognitive skills learning system. The cognitive skills learning system in the brain recruits the prefrontal cortex, a region of cortex directly behind the forehead that mediates the learning of hard skills. These include learning rules and regulations, new software, and critical skills such as math and coding. Hard skill learning requires focused attention and the ability to process and rehearse the information. One learns by reading, watching, and listening, and information is ultimately retained through mental repetitions.

Thus, microlearning is optimal for hard skill training. Microlearning can revolutionize, and appears to be revolutionizing, online eLearning of hard skills.

The Psychological and Brain Science of Microlearning and People Skills Training

I showed in a recent article that online eLearning approaches to corporate training use the same one-size-fits-all delivery platform and procedures when training hard skills and people (aka soft) skills. Although generally effective for hard skills training, especially when tools like microlearning are incorporated, this one-size-fits-all approach is only marginally effective at training people skills because people skills are ultimately behavioral skills. People skills are about behavior. They are what we do, how we do it, and our intent. These are the skills that one needs for effective interpersonal communication and interaction, for showing genuine empathy, embracing diversity, and avoiding situations in which unconscious biases drive behavior.

Behavioral skill learning is not mediated by the cognitive skills learning system in the brain, but rather by the behavioral skills learning system. Whereas the cognitive skills learning system recruits the prefrontal cortex and relies critically on working memory and executive attention, the behavioral skills learning system recruits the basal ganglia, a subcortical brain structure that does not rely on working memory and executive attention for learning. Rather, the basal ganglia learn behaviors gradually and incrementally via dopamine-mediated error-correction learning. When the learner generates a behavior that is followed in real time, literally within hundreds of milliseconds, by feedback that rewards the behavior, dopamine is released, and that behavior becomes incrementally more likely to occur the next time the learner is in the same context. On the other hand, when the learner generates a behavior that is followed in real time by feedback that punishes the behavior, dopamine is not released, and that behavior becomes incrementally less likely to occur the next time the learner is in the same context.
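
To make the incremental, feedback-driven nature of this learning concrete, here is a minimal toy sketch in Python. It illustrates error-correction learning in general rather than modeling the basal ganglia, and the behaviors, learning rate, and reward rule are hypothetical placeholders: a behavior followed immediately by rewarding feedback becomes slightly more likely on the next trial, while an unrewarded behavior becomes slightly less likely.

```python
import random

def update_tendency(tendency, rewarded, learning_rate=0.1):
    """Nudge a behavior's tendency toward 1 if rewarded, toward 0 if not."""
    target = 1.0 if rewarded else 0.0
    return tendency + learning_rate * (target - tendency)

def choose(behaviors):
    """Pick a behavior with probability proportional to its current tendency."""
    names = list(behaviors)
    weights = [behaviors[name] for name in names]
    return random.choices(names, weights=weights, k=1)[0]

# Hypothetical behaviors a learner might produce when an angry client calls.
behaviors = {"interrupt the client": 0.5, "listen, then de-escalate": 0.5}

for trial in range(200):
    action = choose(behaviors)
    rewarded = action == "listen, then de-escalate"  # immediate feedback
    behaviors[action] = update_tendency(behaviors[action], rewarded)

print(behaviors)  # the rewarded behavior's tendency drifts toward 1.0
```

Over many trials the rewarded behavior dominates, which is the point: the change is gradual, driven by doing and immediate feedback rather than by reading or rehearsal.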

People skills are learned by doing and involve physical repetitions.

Microlearning: The Hard Skills of People Skills

So how effective is microlearning for people skills training? The answer is that microlearning is very effective for early epochs of people skills training when the focus is on learning the hard skills of people skills. It is also effective when learning to identify good and bad people skills. For example, you might be learning the definition of empathy, being shown a demonstration of unconscious bias, or learning some of the advantages of a diverse workplace. In these cases, microlearning is very useful because you are gaining a cognitive understanding of various aspects of people skills.

When microlearning content is grounded in rich scenario-based training, its effectiveness is enhanced. This follows because scenario-based training engages emotional learning centers in the brain that affect both hard and people skills learning. Rich scenarios allow learners to “see themselves” in the training, which primes the system for behavior change.

Microlearning: The Behavioral Skills of People Skills

Microlearning approaches are effective for training hard skills and, when supplemented with rich scenarios, for engaging emotion centers, but people skills are ultimately behavioral skills. The ultimate goal is behavior change. All of the cognitive skills training is in the service of preparing the learner for effective behavior change.

How effective is microlearning for behavioral skills learning and for effective behavior change?

The behavioral science is clear. Behavioral skills training is optimized when you train the learner on multiple different behaviors across multiple different settings. Ideally, the learner has no idea what is coming next. They could be placed in a routine situation, such as a weekly team meeting, or a non-routine situation in which an angry client is on the phone and the learner has only a few minutes to de-escalate the situation. In other words, if there are multiple leadership situations to train, such as leading an effective meeting, giving an effective performance review, or demonstrating active listening skills, then generalization, transfer, and long-term behavior change are most effective if you randomly present the learner with these leadership settings. This teaches the leader to “think on their feet” and to be confident that they can handle any situation at any time. Put another way, it is optimal to train several people skill “ideas” simultaneously and in a random order. You don’t want to focus on one idea and just train it, then switch to another and just train it.

You also want to incorporate a broad set of environmental contexts. Although the context is not central to the skill to be trained, including a broad range of contexts leads to more robust behavior change. For example, during leadership training in which effective performance reviews are being trained, it would be ideal for the office setting to change across scenarios from modern, to retro, to minimalist. Similarly, it is best to practice with a range of employees who differ in age, gender, ethnicity, and so on. The broader-based the training, the better.
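
As a concrete illustration of the kind of schedule described above, the short Python sketch below interleaves every skill/context combination into a single shuffled sequence, so the learner never knows which skill or setting comes next; a blocked schedule would instead group all scenarios for one skill together. The specific skills, office settings, and employee profiles are hypothetical placeholders.

```python
import random
from itertools import product

# Hypothetical skills and contexts, used only to illustrate randomized,
# interleaved scheduling across varied environments.
skills = ["lead a team meeting", "give a performance review",
          "practice active listening", "de-escalate an angry client"]
office_settings = ["modern", "retro", "minimalist"]
employee_profiles = ["new hire", "veteran engineer", "remote contractor"]

def interleaved_schedule(seed=None):
    """Shuffle every skill/setting/profile combination into one sequence."""
    rng = random.Random(seed)
    scenarios = list(product(skills, office_settings, employee_profiles))
    rng.shuffle(scenarios)  # the learner never knows what is coming next
    return scenarios

for skill, setting, profile in interleaved_schedule(seed=42)[:5]:
    print(f"Train: {skill} | office: {setting} | counterpart: {profile}")
```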

Summary

Microlearning is one of the most important advances in corporate training in decades. Microlearning directly addresses the need for continuous on-the-job learning. Microlearning’s focus on a single idea with as little ancillary information as possible is advantageous for hard and cognitive skill learning. It effectively recruits the cognitive machinery of working memory and attention and focuses these resources on the idea to be trained. It is efficient in both time and performance.

On the other hand, microlearning is less effective for behavioral skill learning. Behavioral skills are learned by recruiting the basal ganglia and its dopamine-mediated incremental learning system. Behaviors are learned most effectively, and with greater generalization and transfer, when ancillary information is present and varies from training epoch to training epoch. This leads to robust behavioral skill development that is less context-sensitive and more broad-based. It facilitates an ability to “think on one’s feet” and builds the confidence necessary to feel prepared in any situation.

As I have outlined in recent research reports, microlearning represents one of the many exciting new tools and technologies available to L&D clients. That said, one size does not fit all, and different tools and technologies are optimal for different learning problems. Learning scientists are needed to map the appropriate tool onto the appropriate learning problem.

 


What Wall Street is missing regarding Broadcom’s acquisition of CA Technologies: Cloud, Mainframes, & IoT

(Note: This blog contains significant contributions from long-time software executive and Research Fellow Tom Petrocelli)

On July 11, Broadcom ($AVGO) announced an agreement to purchase CA for $18.9 billion. If this acquisition goes through, this will be the third largest software acquisition of all time behind only Microsoft’s $26 billion acquisition of LinkedIn and Facebook’s $19 billion acquisition of WhatsApp. And, given CA’s focus, I would argue this is the largest enterprise software acquisition of all time, since a significant part of LinkedIn’s functionality is focused on the consumer level.

But why did Broadcom make this bet? The early reviews have shown confusion with headlines such as:
Broadcom deal to buy CA makes little sense on the surface
4 Reasons Broadcom’s $18.9B CA Technologies Buy Makes No Sense
Broadcom Buys CA – Huh?

All of these articles basically home in on the fact that Broadcom is a hardware company and CA is a software company, which leads to the conclusion that these two companies have nothing to do with each other. But to truly understand why Broadcom and CA can fit together, let’s look at the context.

In November 2017, Broadcom purchased Brocade for $5.5 billion to build out its presence in the data center and networking markets. This acquisition expanded on Broadcom’s strengths in supporting mobile and connectivity use cases by extending Broadcom’s solution set beyond the chip and into actual connectivity.

Earlier this year, Broadcom had tried to purchase Qualcomm for over $100 billion. Given Broadcom’s lack of cash on hand, this would have been a debt-based purchase with the obvious goal of rolling up the chip market. When the United States blocked this acquisition in March, Broadcom was likely left with a whole lot of money ready to deploy that needed to be used or lost and no obvious target.

So, add these two together and Broadcom had both the cash to spend and a precedent for showing that it wanted to expand its value proposition beyond the chip and into larger integrated solutions for two little trends called “the cloud,” especially private cloud, and “the Internet of Things.”

Now, in that context, take a look at CA. CA’s bread and butter comes from its mainframe solutions, which make up over $2 billion in revenue per year. Mainframes are large computers that handle high-traffic and dedicated workloads and increasingly need to be connected to more data sources, “things,” and clients. Although CA’s mainframe business is a legacy business, that legacy is focused on some of the biggest enterprise computational processing needs in the world. Thus, this is an area that a chipmaker would be interested in supporting over time. The ability to potentially upsell or replace those workloads over time with Broadcom computing assets, either through custom mainframe processors or through private cloud data centers, could add some predictability to the otherwise cyclical world of hardware manufacturing. Grab enterprise computing workloads at the source and then custom build to their needs.

This means that there’s a potential hyperscale private cloud play here as well for Broadcom by bringing Broadcom’s data center networking business together with CA’s server management capabilities, which end up looking at technical monitoring issues from both a top-down and a bottom-up perspective.

CA also is strong in supporting mobile development, developer operations (DevOps), API management, IT Operations, and service level management in its enterprise solutions business, which earned $1.75 billion in annual revenue over the past year. On the mobile side, this means that CA is a core toolset for building, testing, and monitoring the mobile apps and Internet of Things applications that will be running through Broadcom’s chips. To optimize computing environments, especially in mobile and IoT edge environments where computing and storage resources are limited, applications need to be optimized for the available hardware. If Broadcom is going to take over the IoT chip market over time, its chips need to support relevant app workloads.

I would also expect Broadcom to increase investment in CA’s Internet of Things and mobile app development departments once Broadcom completes this transaction. Getting CA’s dev tools closer to silicon can only help performance and help Broadcom provide out-of-the-box IoT solutions. This acquisition may even push Broadcom into the solutions and services market, which would blow the minds of hardware analysts and market observers but would also be a natural extension of Broadcom’s current acquisitions to move through the computing value stack.

From a traditional OSI perspective, this acquisition feels odd because Broadcom is skipping multiple layers between its core chip competency and CA’s core competency. But the Brocade acquisition helps close the gaps even after spinning off Ruckus Wireless, Lumina SDN, and data center networking businesses. Broadcom is focused on processing and guiding workloads, not on transport and other non-core activities.

So, between mainframe, private cloud, mobile, and IoT markets, there are a number of adjacencies between Broadcom & CA. It will be challenging to knit together all of these pieces accretively. But because so much of CA’s software is focused on the monitoring, testing, and security of hardware and infrastructure, this acquisition isn’t quite as crazy as a variety of pundits seem to think. In addition, the relative consistency of CA’s software revenue compared to the highs and lows of chip building may also provide some benefits to Broadcom by providing predictable cash flow to manage debt payments and to fund the next acquisition that Hock Tan seeks to hunt down.

All this being said, this is still very much an acquisition out of left field. I’ll be fascinated to see how this transaction ends up. It is somewhat reminiscent of Oracle’s 2009 acquisition of Sun to bring hardware and software together. This does not necessarily create confidence in the acquisition, since hardware/software mergers have traditionally been tricky, but it doesn’t disprove the synergies that do exist. In addition, Oracle’s move points out that Broadcom seems to have skipped the step of purchasing a relevant casing, device, or server company. Could this be a future acquisition to bolster existing investments and push further into the world of private cloud?

A key challenge and important point that my colleague Tom Petrocelli brings up is that CA and Broadcom sell to very different customers. Broadcom has been an OEM-based provider while CA sells directly to IT. As a result, Broadcom will need to be careful in maintaining CA’s IT-based direct and indirect sales channels and would be best served to keep CA’s go-to-market teams relatively intact.

Overall, the Broadcom acquisition of CA is a very complex puzzle with several potential options:

1. The diversification efforts will work to smooth out Broadcom’s revenue over time and provide more predictable revenues to support Broadcom’s continuing growth through acquisition. This will help their stock in the long run and provide financial benefit.
2. Broadcom will fully integrate the parts of CA that make the most sense for it to have, especially the mobile security and IoT product lines, and sell or spin off the rest to help pay for the acquisition. Although the Brocade spinoffs occurred prior to that acquisition, there are no forces that prevent Broadcom from spinning off non-core CA assets and products, especially those that are significantly outside the IoT and data center markets.
3. In a worst case scenario, Broadcom will try to impose its business structure on CA, screw up the integration, and kill a storied IT company over time through mismanagement. Note that Amalgam Insights does not recommend this option.

But there is some alignment here and it will be fascinating to see how Broadcom takes advantage of CA’s considerable IT monitoring capabilities, takes advantage of CA’s business to increase chip sales, and uses CA’s cash flow to continue Broadcom’s massive M&A efforts.


“Walking a Mile in My Shoes” With Skillsoft’s Leadership Development Program: A Market Milestone

In a recently published Market Milestone, Todd Maddox, Ph.D., Learning Scientist and Research Fellow for Amalgam Insights, evaluated Skillsoft’s Leadership Development Program (SLDP) from a learning science perspective. This involves evaluating the content as well as the learning design and delivery. Amalgam’s overall evaluation is that SLDP content is highly effective. The content is engaging and well-constructed, with a nice mix of high-level commentary from subject matter experts, dramatic and pragmatic storytelling from a consistent cast of characters faced with real-world problems, and a mentor to guide the leader-in-training through the process. Each course is approximately one hour in length and consists of short 5-to-10-minute video segments built with single-concept microlearning in mind.

From a learning design and delivery standpoint, the offering is also highly effective. Brief, targeted, 5-to-10-minute content is well-suited to the working memory and attentional resources available to the learner. Each course begins with a brief reflective question that primes the cognitive system in preparation for the subsequent learning and activates existing knowledge, thus providing a rich context for learning. The Program is grounded in a storytelling, scenario-based training approach with a common set of characters and a “mentor” who guides the training. This effectively recruits the cognitive skills learning system in the brain while simultaneously activating emotion and motivation centers. This draws the learner into the situation, and they begin to see themselves as part of the story. This “walk a mile in my shoes” experience increases information retention and primes the learner for experiential behavior change.

For more information, read the full Market Milestone on the Skillsoft website.


Amalgam Provides 4 Big Recommendations for Self-Service BI Success

 

Recently, my colleague Todd Maddox, Ph.D., the most-cited analyst in the corporate training world, and I were looking at the revolution of self-service BI, which has allowed business analysts and scientists to explore and analyze their own data quickly and easily. At this point, any BI solution lacking a self-service option should not be considered a general business solution.

However, businesses still struggle to teach and onboard employees on self-service solutions, because self-service represents a new paradigm for administration and training, including the brain science challenges of training for IT. In light of these challenges, Dr. Maddox and I have the following four recommendations for better BI adoption.

  1. Give every employee a hands-on walkthrough. If Self-Service is important enough to invest in, it is important enough to train as well. This doesn’t have to be long, but even 15-30 minutes spent on having each employee understand how to start accessing data is important.
  2. Drive a Culture of Curiosity. Self-Service BI is only as good as the questions that people ask. In a company where employees are either set in their ways or not focused on continuous improvement, Self-Service BI just becomes another layer of shelfware.

    Maddox adds: The “shelfware” comment is spot on. I was a master of putting new technology on the shelf! If what I have now works for my needs, then I need to be convinced, quickly and efficiently, that this new approach is better. I suggest asking users what they want to use the software for. If you can put users into one of 4 or 5 bins of business use cases, then you can customize the training and onboard users more quickly and effectively.

  3. Build short training modules for key challenges in each department. This means that departmental managers need to commit to recording, say, 2-3 short videos that cover the basics for self-service. Service managers might be looking for missed SLAs while sales managers look for close rates and marketing managers look for different categories of pipeline. But across these areas, the point is to provide a basic “how-to” so that users can start looking for the right answers (see the sketch after this list).

    Maddox adds: Businesses are strongly urged to include 4 or 5 knowledge check questions for each video. Knowledge testing is one of the best ways to reinforce training. It also provides quick insights into which aspects of your video are effective and which are not. Train by testing!

  4. Analytics knowledge must become readily available. As users start using BI, they need to figure out the depth and breadth of what is possible with BI: formulas, workflows, regression, and other basic tools. This might range from a simple aggregation of useful YouTube videos to a formal program developed in a corporate learning platform.
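
To show how recommendations 2 through 4 might fit together operationally, here is a hedged sketch in Python; the departments, metrics, module titles, and question counts are hypothetical placeholders rather than features of any particular BI or learning platform.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TrainingModule:
    title: str
    video_minutes: int
    knowledge_check_questions: int = 5  # "train by testing"

@dataclass
class DepartmentPlan:
    department: str
    key_metrics: List[str]
    modules: List[TrainingModule] = field(default_factory=list)

# Hypothetical department-level use-case bins with short, targeted modules.
plans = [
    DepartmentPlan("Service", ["missed SLAs"],
                   [TrainingModule("Finding overdue tickets", 8)]),
    DepartmentPlan("Sales", ["close rates"],
                   [TrainingModule("Building a close-rate dashboard", 10)]),
    DepartmentPlan("Marketing", ["pipeline by category"],
                   [TrainingModule("Segmenting pipeline reports", 7)]),
]

for plan in plans:
    for module in plan.modules:
        print(f"{plan.department}: '{module.title}' "
              f"({module.video_minutes} min video, "
              f"{module.knowledge_check_questions} knowledge checks)")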

By taking these training tips from one of the top BI influencers and the top-cited training analyst on the planet, we hope you are better equipped to support self-service BI at scale for your business.


Data Science Platforms News Roundup, June 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Anaconda, Cloudera, Databricks, Dataiku, Datawatch, Domino, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta.

Continue reading Data Science Platforms News Roundup, June 2018


Destroying the CEO Myth: Redefining The Power Dynamics of Managing DevOps

Tom Petrocelli, Amalgam Insights Research Fellow

I am constantly asked the question “What does one have to do to implement DevOps?”, or some variant. Most people who ask this question say they have spent time searching for an answer. The pat answers they encounter are typically either technology-based (“buy these products and achieve DevOps magic”) or management-based, such as “create a DevOps culture.” Both are vague, flippant, and decidedly unhelpful.

 

My response is twofold. First, technology and tools follow management and culture. Tools do not make culture, and a technology solution without management change is a waste. So, change the culture and management first. Unfortunately, that’s the hard part. When companies talk about changing culture for DevOps, they often mean implementing multifunction teams or something less than that. Throwing disparate disciplines into an unregulated melting pot doesn’t help. These teams can end up as dysfunctional as any other management or project structure. Team members will bicker over implementation and try to protect their hard-won territory.

 

As the old adage goes, “everything old is new again,” and so-called DevOps culture is no different. Multi-functional teams are just a flavor of matrix management, which has been tried over and over for years. They suffer from the same problems. Team members have to serve two masters, and managers act like a group of dogs with one tree among them. Trying to please both the project leader and their functional management creates inherent conflicts.

 

Another view of creating DevOps culture is what I think of as the “CEO Buy-in Approach.” Whenever there is new thinking in IT, there always seems to be advocacy for a top-down approach that starts with the CEO or CIO “buying in” to the concept. After that, magic happens and everyone holds hands and sings together. Except that they don’t. This approach is heavy-handed and reflects an unrealistic view of how companies, especially large companies, operate. If simply ordering people to work well together was all it took, there would be no dysfunctional companies or departments.

 

A variation on this theme advocates picking a leader (or two, if you have two-in-the-box leadership) to make everyone work together happily. Setting aside the difficulty of finding people with broad enough experience to lead multi-disciplinary teams, this leads to what I have always called “The Product Manager Problem.”

 

The problem that all new product managers face is the realization that they have all the responsibility and none of the power to accomplish their mission.

 

That’s because responsibility for the product concentrates in one person, the product manager, and all other managers can diffuse their responsibility across many products or functions.

 

Having a single leader responsible for making multi-functional teams work creates a lack of individual accountability. The leader, not the team, is held accountable for the project while the individual team members are still accountable to their own managers. This may work when the managers and project team leaders all have great working relationships. In that case, you don’t need a special DevOps structure. Instead, a model that creates a separate project team leader or leaders enables team dysfunction and the ability to maintain silos through lack of direct accountability. You see this when you have a Scrum Master, Product Owner, or Release Manager who has all the responsibility for a project.

 

The typical response to this criticism of multi-functional teams (and the no-power Product Manager) is that leaders should be able to influence and cajole the team, despite having no real authority. This is ridiculous and refuses to accept that individual managers and the people who work for them are motivated to maintain their own power. Making the boss look good works well when the boss is signing your evaluation and deciding on your raise. Sure, project and team leaders can be made part of the evaluation process, but really, who has the real power here? The functional manager in control of many people and resources, or the leader of one small team?

 

One potential solution to the DevOps cultural conundrum is collective responsibility. In this scheme, all team members benefit or are hurt by the success of the project. Think of this as the combined arms combat team model. In the Army, multi-functional combined arms teams are put together for specific missions. The team is held responsible for the overall mission. They are responsible collectively and individually. While the upper echelons hold the combined arms combat team responsible for the mission, the team leader has the ability to hold individuals accountable.

 

Can anyone imagine an Army or Marine leader being let off the hook for mission failure because one of their people didn’t perform? Of course not, but they also have mechanisms for holding individual soldiers accountable for their performance.

 

In this model, DevOps teams would collectively be held responsible for on-time completion of the entire project, as would the entire management chain. Individual team members would have much of their evaluation based on this, and the team leader would have the power to remediate nonperformance, including removing a team member who is not doing their job (i.e., firing them). The leader would also have the ability to train up and fill the role of one function with another if a person performing a role wasn’t up to snuff or had to be removed. It would still be up to the “chain of command” to provide a reasonable mission with appropriate resources.

 

Ultimately, anyone on the team could rise up and lead this or another team, no matter their specialty. There would be nothing holding back an operations specialist from becoming the Scrum Master. If they could learn the job, they could get it.

 

The very idea of a specialist would lose power, allowing team members to develop talents no matter their job title.

 

I worked in this model years ago, and it was successful and rewarding. Everyone helped everyone else and had a stake in the outcome. People learned each other’s jobs so they could help out when necessary, learning new skills in the process. It wasn’t called DevOps, but it’s how it operated. It’s not a radical idea, but there is a hitch: silo managers would either lose power or even cease to exist. There would be no Development Manager or Security Manager. Team members would win, the company would win, but not everyone would feel like this model works for them.

 

This doesn’t mean that all silos would go away. There will still be operations and security functions that maintain and monitor systems. The security and ops people who work on development projects just wouldn’t report into them. They would only be responsible to the development team but with full power (and resources) to make changes in production systems.

 

Without collective responsibility, free of influence from functional managers, DevOps teams will never be more than a fresh coat of paint on rotting wood. It will look pretty but underneath, it’s crumbling.