Oracle Autonomous Transaction Processing Lowers Barriers to Entry for Data-Driven Business

I recently wrote a Market Milestone report on Oracle’s launch of Autonomous Transaction Processing, the latest in a string of Autonomous Database announcements from Oracle, following the Autonomous Data Warehouse launch and the initial Autonomous Database announcement late last year.

This string of announcements takes advantage of Oracle’s investments in infrastructure, distributed hardware, data protection and security, and index optimization to create a new set of database services that seek to automate basic support and optimization tasks. These announcements matter because, as transactional and data-centric business models continue to proliferate, both startups and enterprises need a data infrastructure that will remain optimized, secure, and scalable over time without becoming cost and resource intensive. With Autonomous Transaction Processing, Oracle offers its answer: an enterprise-grade data foundation for this next generation of businesses.

One of Amalgam Insights’ key takeaways in this research is the analyst estimate that Oracle ATP could reduce the cost of cloud-based transactional database management by 65% compared to similar services managed on Amazon Web Services. Frankly, companies that need to support net-new transactional databases that must be performant and scalable for Internet of Things, messaging, and other new data-driven businesses should consider Oracle ATP and should do due diligence on Oracle Autonomous Database Cloud for reducing long-term Total Cost of Ownership. This comparison is based on the cost of a 10 TB Oracle database on an Amazon Web Services reserved instance versus a similar database on Oracle Autonomous Database Cloud.

One of the most interesting aspects of the Autonomous Database in general, and one that Oracle will need to further explain, is how to guide companies with existing transactional databases and data warehouses to an Autonomous environment. It is no secret that every enterprise IT department is its own special environment driven by a combination of business rules, employee preferences, governance, regulation, security, and business continuity expectations. At the same time, IT is accustomed to automation and rapid processing in some aspects of technology management, such as threat management, log processing, patching, and other basic transactions. But given IT’s need for extreme customization, how does IT gain enough visibility into the automated decisions made in indexing and ongoing optimization?

At this point, Amalgam Insights believes that Oracle is pushing a fundamental shift in database management that will likely lead to the automation of manual technical management tasks. This change will be especially helpful for net-new databases, where organizations can use the Autonomous Database Cloud to help establish business rules for data access, categorization, and optimization. This is likely a no-brainer decision, especially for Oracle shops that are strained for database management resources and seeking to handle more data for new transaction-based business needs or machine learning.

For established database workloads, enterprises will have to think about how, or whether, to transfer existing enterprise databases to the Autonomous Database Cloud. Although enterprises will likely gain some initial performance improvements and potentially reduce the support costs associated with large databases, they will also likely spend time double-checking the decisions and lineage associated with the Autonomous Database, both in test and in deployment settings. Amalgam Insights expects that Autonomous Database management will lead to indexing, security, and resource management decisions that may be more optimal than human-led decisions, but with a logic that may not be fully transparent to IT departments that have strongly defined and governed business rules and processes.

Although Amalgam Insights is convinced that Oracle Autonomous Database is the beginning of a new stage of Digitized and Automated IT, we also believe that a next step for Oracle Autonomous Database Cloud will be to create governance, lineage, and audit packages that support regulated industries and legislative demands and document the business rules behind Autonomous logic. Amalgam Insights expects that Oracle will want to keep specific algorithms and automation logic as proprietary trade secrets. But without some level of traceable and auditable documentation, large enterprises will have to conduct significant work on their own to determine whether they can transfer large databases to Oracle Autonomous Database Cloud, which Amalgam Insights expects to be an important part of Oracle’s business model and cloud revenue projections going forward.

To read additional insights and details on the Oracle Autonomous Transaction Processing announcement, please download the full report on Oracle’s launch of Autonomous Transaction Processing, available at no cost for a limited time.

Area9: Leveraging Brain and Computer Science to Build an Effective Adaptive Learning Platform

I recently received an analyst briefing from Nick Howe, the Chief Learning Officer at Area9 Learning, which offers an adaptive learning solution. Although Area9 Learning was founded in 2006, I have known about area 9 since the 1980s, and it was first “discovered” in 1909. How is that possible?

In 1909, the German anatomist Korbinian Brodmann developed a numbering system for mapping the cerebral cortex based on the organization of cells (called cytoarchitecture). Brodmann area 9, or BA9, sits within the prefrontal cortex (a region of the brain right behind the forehead), a critical structure in the brain’s cognitive skills learning system that functionally serves working memory and attention.

The cognitive skills learning system, prefrontal cortex (BA9), working memory and attention are critical for many aspects of learning, especially hard skills learning.

Please register or log into your Free Amalgam Insights Community account to read more.

Growing Your Data Science Team: Diversifying Beyond Unicorns

A herd of cloned data scientist unicorns

If your organization already has a data scientist, but your data science workload has grown beyond their capacity, you’re probably thinking about hiring another data scientist. Perhaps even a team of them. But cloning your existing data scientist isn’t the best way to grow your organization’s capacity for doing data science.

Why not simply hire more data scientists? First, many of the tasks a solo data scientist takes on are actually well outside the core competency of data scientists’ statistical work, and other roles (some of which likely already exist in your organization) can perform these tasks much more efficiently. Second, data scientists who can perform all of these tasks well are a rare find; hoping to find their clones in sufficient numbers on the open market is a losing proposition. Third, though your organization’s data science practice continues to expand, the amount of time your original domain expert can spend with the data scientist on a growing pool of data science projects does not; it’s time to start delegating some tasks to operational specialists.

What does a data science team look like? Most organizations start out with a line of business director recognizing the need for a data scientist to find answers in the mass of company data, hiring that data scientist, and then providing that data scientist with some guidance on what they want to learn from their data – a data science team of two people. The data scientist engages with the data to understand it, gets it into shape for analysis, extracts relevant features, creates a machine learning model, and then turns that model into the desired type of output so that the line of business director can act on the results. But to do data science at a larger scale, companies need to build out these skillsets across multiple roles within the organization, evolving over time to construct a team of specialists.

Growing the Team

The first person you’re likely to add to your baseline data science team of a domain expert and a data scientist is a data analyst or business analyst. It may be a cliche that the majority of time spent on the technical aspects of doing data science involves data preparation, but it’s a key part of data analyst work. Gathering the data and ensuring both that it’s the right kind of data to answer the questions asked and that it’s in a properly structured form for analysis is core data analyst work, as sketched below. If the model output is destined for a report or a dashboard, the data analyst can take care of that work as well, leaving data scientists more time to focus on their core competency: model creation.
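To make that division of labor concrete, here is a minimal, hypothetical sketch of the kind of preparation work a data analyst might own before handing an analysis-ready table to the data scientist; the file names and columns are placeholders, not a reference to any specific dataset.

```python
# Hypothetical sketch of routine data preparation a data analyst might own:
# gather raw exports, fix types, join sources, and reshape into an
# analysis-ready table. File and column names are placeholders.
import pandas as pd

orders = pd.read_csv("raw_orders.csv", parse_dates=["order_date"])
customers = pd.read_csv("crm_export.csv")

prepared = (
    orders.merge(customers, on="customer_id", how="left")       # combine the two sources
          .dropna(subset=["order_total"])                       # drop rows unusable for modeling
          .assign(order_month=lambda df: df["order_date"].dt.to_period("M"))
          .groupby(["customer_id", "order_month"], as_index=False)
          .agg(monthly_spend=("order_total", "sum"),
               order_count=("order_total", "count"))
)

prepared.to_csv("prepared_orders.csv", index=False)  # handed off for model building
```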

As a project grows in complexity, it’s likely that the domain expert who recognized the strategic need for initiating data science projects will need to offload the day-to-day management of those projects on the business side to a departmental operations manager or project manager. That line-of-business manager will be responsible for keeping everybody on the project on the same page – you’re now up to a data science team of four people, but each of them is able to work more effectively within their specialty.

The project manager is also likely to be the key person interfacing with IT to request necessary compute infrastructure and other resources. If your organization has multiple data science projects running at the same time, especially once it is doing enough data science to require multiple data scientists and then multiple data science teams, those teams will need to share available technical resources, and an IT manager will need to manage the availability of those resources and determine how they are shared.

If the desired output of a model is not a report or dashboard but an API, a microservice, or functionality embedded in an app, data science teams will frequently add a software engineer specifically to bring a given model into production in those forms. Often, models are created in Python or R but need to be translated to a different language to work within the context of other software; a software engineer can usually do this better and faster than a data scientist can, as in the sketch below.
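As a hypothetical illustration of the production work that typically shifts to the software engineer, the sketch below wraps a trained scikit-learn model in a small Flask API; the model file, feature names, and endpoint are placeholders rather than a prescribed pattern.

```python
# Hypothetical sketch: exposing a data scientist's trained model as a small
# REST API, the kind of productionization work a software engineer takes on.
# "churn_model.joblib" and the feature names are placeholders.
from flask import Flask, jsonify, request
import joblib

app = Flask(__name__)
model = joblib.load("churn_model.joblib")  # model trained and saved by the data scientist
FEATURES = ["tenure_months", "monthly_spend", "support_tickets"]

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    row = [[payload[name] for name in FEATURES]]  # order inputs the way the model expects
    churn_probability = model.predict_proba(row)[0][1]
    return jsonify({"churn_probability": float(churn_probability)})

if __name__ == "__main__":
    app.run(port=8080)
```

In practice the model is often re-implemented in the host application’s language for performance, but even a thin wrapper like this moves production concerns out of the data scientist’s notebook.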

By the time your organization has multiple data scientists doing numerous data science projects across a number of data science teams, the need for a dedicated data science manager is clear. A data science manager manages data scientists, keeps track of your organization’s data science projects from a broader perspective, and is able to provide guidance and resources to your data scientists that the original two-person team didn’t have access to.

As data science work continues to scale across the organization, some companies are finding value in adding data engineers to the team: individuals with a background in big data management who design more efficient data workflows across distributed systems, writing specialized software that brings together various frameworks to create the data pipeline. They may come from IT, or they may come from your organization’s software development department; they may even be a data scientist who has found that their niche is really the management of the data pipeline.

A modern, fully scaled-out data science team consists of about eight people: a data scientist, a data analyst, a manager or director-level domain expert who sees the strategic need for data science to answer specific questions and serves as the executive champion, a project manager from the line-of-business side to keep the team updated, a software engineer to generate coded outputs (whether batch analytics, real-time code, or something else), an IT manager to appropriately provision hardware resources, a data science manager who has oversight across multiple data scientists’ work across the organization, and a data engineer to control the data pipeline. Remember your original two-person team? Your data scientist was doing specialized work across at least five domains: data preparation (data analyst work), project management (project manager), provisioning compute resources (often just running models on their company laptop, maybe outsourcing to the cloud if they have the budget – IT work), putting models into production (either a data analyst or a software engineer depending on the outputs), and probably managing multiple data science initiatives from multiple different departments. No wonder we keep referring to full-stack data scientists as unicorns!

Recommendations

If your organization is considering hiring its first data scientist, the business executive expressing the need to hire a data scientist for a given initiative will need to work directly with them. The data scientist will know how to manipulate the data, but may not have domain expertise in the particular area of study, and will not be aware of company-specific nuances for which the business executive can provide valuable context.

Remember that solo data scientists perform tasks that can often be delegated to other roles when time is short. Data analysts can do data prep and reporting work just as easily as data scientists can. Software engineers can translate models into custom apps faster. Data engineers focus on optimizing the data pipeline for multiple data scientists and data science projects. And project managers keep everyone in the loop, getting the whole process out of the data scientist’s head and into an accessible shared resource that can be used as a model to standardize the process for future data science projects and make them more efficient and repeatable.

When your organization’s data science workload has ramped up enough that your project needs are more than a solo data scientist can support, before hiring another data scientist, look to other departments that may already exist in your organization to offload key tasks from your data scientist that aren’t “analyze the data to create a working model.” For example, if your organization has been collecting significant data for a while, you likely have data analysts in-house performing some level of analytics on that data. They can do some of the prep and reporting work, allowing your data scientist to spend their time on the higher value-added activity of model creation.

Consider your desired outputs. What form does your organization require these outputs to take? Do the results of a model need to appear in a report or dashboard? Or do those results need to be made available programmatically via an API, as a microservice, or embedded into an app? The business demands on the outputs determine the necessary form, and which role you can pass the task of generating those outputs to (data analysts for reports and dashboards, software engineers for coded options). For a model to be useful, it needs to be put into production.

Infrastructure as Code Provides Advantages for Proactive Compliance

Tom Petrocelli, Amalgam Insights Research Fellow

Companies struggle with all types of compliance issues. Failure to comply with government regulations, such as Dodd-Frank, EPA, or HIPAA, is a significant business risk for many companies. Internally mandated compliance presents problems as well. Security and cost control policies are just as vital as other forms of regulation since they protect the company from reputational, financial, and operational risks.
Continue reading “Infrastructure as Code Provides Advantages for Proactive Compliance”

Cloud Vendors Race to Release Continuous Integration and Continuous Deployment Tools

Tom Petrocelli, Amalgam Insights Research Fellow
Development organizations continue to feel increasing pressure to produce better code more quickly. To help accomplish that faster-better philosophy, a number of methodologies have emerged that help organizations quickly merge individual code, test it, and deploy it to production. While DevOps is actually a management methodology, it is predicated on an integrated pipeline that drives code from development to production deployment smoothly. To achieve these goals, companies have adopted continuous integration and continuous deployment (CI/CD) tool sets. These tools, from companies such as Atlassian and GitLab, help developers merge individual code into the deployable code bases that make up an application and then push it out to test and production environments.

Cloud vendors have lately been releasing their own CI/CD tools to their customers. In some cases, these are extensions of existing tools, such as Microsoft Visual Studio Team Services on Azure. Google’s recently announced Cloud Build, as well as AWS CodeDeploy and CodePipeline, are CI/CD tools developed specifically for their cloud environments. Cloud CI/CD tools are rarely all-encompassing and often rely on other open source or commercial products, such as Jenkins or Git, to achieve a full CI/CD pipeline.

These products represent more than just new entries into an increasingly crowded CI/CD market. They are clearly part of a longer-term strategy by cloud service providers to become so integrated into the DevOps pipeline that moving to a new vendor or adopting a multi-cloud strategy would be much more difficult. Many developers start with a single cloud service provider in order to explore cloud computing and deploy their initial applications. Adopting the cloud vendor’s CI/CD tools embeds the cloud vendor deeply in the development process. The cloud service provider is no longer sitting at the end of the development pipeline; it is integrated into and vital to the development process itself. Even where the cloud service provider’s CI/CD tools support hybrid cloud deployments, they are always designed for the cloud vendor’s own offerings. Google Cloud Build and Microsoft Visual Studio Team Services certainly follow this model.

There is danger here for commercial vendors of CI/CD products outside these cloud vendors. They are now competing with native products integrated into the sales and technical environment of the cloud vendor. Purchasing products from a cloud vendor is as easy as buying anything else from the cloud portal, and these native tools are immediately aware of the services the cloud vendor offers. No fuss, no muss.

This isn’t a problem for companies committed to a particular cloud service provider. Using native tools designed for the primary environment offers better integration, less work, and an ease of use that is hard to achieve with external tools. The cost of these tools is often utility-based and, hence, elastic based on the amount of work product flowing through the pipeline. The trend toward native cloud CI/CD tools also helps explain Microsoft’s purchase of GitHub. GitHub, while cloud agnostic, will be much more powerful when completely integrated into Azure – for Microsoft customers anyway.

Building tools that strongly embed a particular cloud vendor into the DevOps pipeline is clearly strategic even if it promotes monoculture. There will be advantages for customers as well as cloud vendors. It remains to be seen if the advantages to customers overcome the inevitable vendor lock-in that the CI/CD tools are meant to create.

Data Science Platforms News Roundup, July 2018

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Anaconda, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta.

New SAS Viya Release Fueled by AI Capabilities, Allowing Customers a Look Under the Hood of Machine Learning Techniques

SAS Viya’s latest release addresses key concerns that have been top-of-mind in the data science world lately. First, providing transparency around recommendations from complex machine learning models using open source interpretability techniques such as PDP, LIME, and ICE. Second, keeping customers’ personal information private by automatically identifying and tagging such data (particularly in light of GDPR and similar legislation). The embrace of open source extends to the ability to use Python and R models within Viya as well.
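SAS has not published the internals of Viya’s interpretability features, but as a rough illustration of what a LIME-style local explanation looks like in open source Python, here is a minimal sketch using the lime package against a hypothetical churn classifier; the data and feature names are synthetic placeholders, not anything from Viya.

```python
# Illustrative sketch of a LIME-style local explanation in open-source Python.
# This is not SAS Viya's implementation; the data and features are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
features = ["tenure_months", "monthly_spend", "support_tickets", "num_products"]
X = rng.normal(size=(500, len(features)))
# Synthetic label: short tenure plus many support tickets tends to mean churn (1).
y = (X[:, 2] - X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=features, class_names=["stay", "churn"], mode="classification"
)

# Explain one prediction: which features pushed this customer toward churn or stay?
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
for rule, weight in explanation.as_list():
    print(f"{rule}: {weight:+.3f}")
```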

Machine Learning in Google BigQuery

Google announced the beta of BigQuery ML, a new capability within its BigQuery cloud data warehouse that lets data analysts use simple SQL extensions to employ two machine learning modeling techniques (linear regression and binary logistic regression) to analyze data residing in BigQuery. BigQuery ML can be accessed from within BigQuery, and also via external tools such as Jupyter notebooks and BI tools; Looker announced its support of BigQuery ML the day it debuted, and other Google Cloud Platform partners are likely to follow suit, though again, the capability remains in beta. I provide recommendations for organizations considering testing out the BigQuery ML capabilities in an earlier piece.

H2O.ai and Google Cloud Announce Collaboration to Drive Enterprise AI Adoption

H2O.ai announced a partnership with Google Cloud that brings H2O.ai’s H2O-3, Sparkling Water, and Driverless AI to the Google Cloud Platform. The partnership provides the entire H2O.ai suite on GCP, allowing customers to bring automated machine learning and AI capabilities to their data in Google Cloud in an accelerated timeframe.

Databricks Survey Gets To The Heart of the AI Dilemma: Nearly 90% of Organizations Investing in AI, Very Few Succeeding

A Databricks-commissioned survey by IDC reveals that the vast majority of large organizations pursuing AI initiatives run into significant trouble along the way. Companies commonly encounter obstacles in timeliness (timely data aggregation and preparation is a challenge because data is siloed and inconsistent; AI projects take more than six months to deploy into production, and only a third of those projects succeed anyway), complexity (nearly 90% of large organizations have invested in multiple machine learning tools), and collaboration (data science at scale is a “team sport,” but communication is scattered). The solution: an “end-to-end” platform uniting data preparation, machine learning, and collaboration capabilities – a data science platform, in other words.

Domino Data Lab Partners With SAS to Accelerate Data Science Work in the Cloud

Domino users can now run SAS Analytics for Containers in the public cloud on AWS while using Domino as the orchestration layer. The ability to shift on-prem SAS Analytics compute work to the cloud as necessary can provide much-needed flexibility around speed and cost for “spiky” workloads while letting data scientists using Domino treat these SAS containers like any other model. I covered the Domino-SAS partnership in more detail earlier this month.

DataRobot Acquires Automated Machine Learning Startup Nexosis

DataRobot announced the acquisition of Nexosis, an automated machine learning company. Nexosis’ primary offering is an automated machine learning platform called Axon, which amasses multiple data sources to produce actionable insights. Though the details of the acquisition remain undisclosed, DataRobot continues to push its vision of automating AI development as the way to accelerate the deployment of machine learning and AI initiatives.


Finally, a reminder that I’m currently taking briefings for Amalgam Insights’ Vendor SmartList for the Data Science Platforms space. If you’d like to learn more about this research initiative, or set up a briefing with Amalgam Insights for potential inclusion, please email me at lynne@amalgaminsights.com.

Domino Deploys SAS Analytics Into a Model-Driven Cloud

The announcement: On July 10, Domino Data Lab announced a partnership with SAS that will let Domino users run SAS Analytics for Containers in the public cloud on AWS while using Domino’s data science platform as the orchestration layer for infrastructure provisioning and management. This partnership will allow SAS customers to use Domino as an orchestration layer to access multiple SAS environments for model building, deploy multiple SAS applications on AWS, track each SAS experiment in detail, and reproduce prior work.

What does this mean?

Domino customers with SAS Analytics workloads currently running on-prem will now be able to deploy those workloads to the public cloud on AWS by using SAS Analytics for Containers via the Domino platform. Domino plans to follow up with support for Microsoft Azure and Google Cloud Platform to further enable enterprises to offload containerized SAS workloads in the cloud. By running SAS Analytics for Containers via Domino, Domino users will be able to track, provide feedback on, and reproduce their containerized SAS experiments the same way they do so with other experiments they’ve constructed using Python, R, or other tools within Domino.

This partnership was driven by multiple joint SAS and Domino customers that have well-established SAS Analytics workloads in production on-prem. As these workloads spike, spinning up additional on-prem resources is more of a pain point than spinning up similar resources in the cloud. Being able to push the workload up to the cloud provides more flexibility around load balancing, speed, and cost than on-prem servers can usually provide. Accessing these additional resources via Domino provides the added convenience of being able to do so from within a single data science platform environment, and permits data scientists to treat the containerized SAS Analytics like any other model.

Large enterprise clients will often have data science workloads distributed across multiple languages: Python, R, and SAS among them. Each language has its strengths and weaknesses – in particular, SAS code is frequently written in the context of regulated environments. With the above-mentioned SAS workloads in production, the goal is to provide cloud resources that support data scientists working in their language of choice. This partnership is intended to let SAS customers use Domino as a data science enabler in conjunction with their existing SAS investments.

Recommendations

In general, enterprises with established on-premises SAS workloads that are also working on modern analytic modeling and data science projects should consider Domino Data Lab. SAS-using enterprises adopting Domino will be able to deploy their SAS Analytics workloads to the public cloud on AWS. Shifting this workload to cloud services provides more flexibility around speed and cost than on-prem servers can typically provide to support peak demand.

Amalgam Insights believes that data science platforms able to operationalize a variety of different languages and dedicated workloads will provide an advantage to companies needing to bridge gaps between traditional and modern systems. Large organizations in particular are likely to have departments with multiple language requirements. This Domino partnership represents a step in this direction, with its ability to support both the traditional SAS workloads embedded in many large enterprises and the Python and R used in many modern analytics and data science projects. Given that SAS still handles a significant portion of the analytics market’s workload, Domino supporting SAS in this manner demonstrates a mature approach that treats established SAS Analytics models as valuable and usable resources.

Domino’s partnership with SAS represents a full business partnership including product, engineering, and go-to-market efforts. Amalgam Insights believes that the Domino-SAS partnership is an important step in providing scalability for existing on-premises SAS workloads and gives data science-savvy organizations with dedicated SAS workloads the opportunity to integrate some of their most important enterprise analytics with modern data science approaches while providing consistent support for scale, lineage, and governance across all experiments.

Google BigQuery ML Extends the Power of (Some) Modeling to Data Analysts

Last week at Google Next ‘18, Google announced a new beta capability in their BigQuery cloud data warehouse: BigQuery ML, which lets data analysts apply simple machine learning models to data residing in BigQuery data warehouses.

Data analysts know databases and SQL, but generally don’t have a lot of experience building machine learning models in Python or R. An additional issue is the expense, time, and possible regulatory violations involved in moving data out of storage in order to send it through machine learning models. BigQuery ML aims to address these problems by letting data analysts push data through linear regression models (to predict a numeric value) or binary logistic regression models (to classify a value into one of two categories, such as “high” or “low”), using simple extensions of SQL on Google databases, run in place.

Animation (courtesy Google): running a linear regression model with Google BigQuery ML in two steps, model creation followed by prediction.
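For readers who want to see those two steps concretely, here is a minimal sketch run from a Jupyter notebook through the google-cloud-bigquery Python client; the dataset, table, and column names are hypothetical placeholders, and since BigQuery ML is in beta its SQL surface may still change.

```python
# Minimal sketch of the two BigQuery ML steps above, run via the
# google-cloud-bigquery client. Dataset, table, and column names are
# hypothetical; BigQuery ML syntax is in beta and may change.
from google.cloud import bigquery

client = bigquery.Client()  # assumes credentials and a GCP project are configured

# Step 1: model creation. Training runs in place on data already in BigQuery;
# the column aliased as "label" is what the linear regression predicts.
client.query("""
    CREATE OR REPLACE MODEL `demo.price_model`
    OPTIONS (model_type = 'linear_reg') AS
    SELECT price AS label, square_feet, bedrooms
    FROM `demo.home_sales`
""").result()  # .result() blocks until the training job finishes

# Step 2: prediction. ML.PREDICT scores new rows with the trained model.
rows = client.query("""
    SELECT predicted_label AS predicted_price, square_feet, bedrooms
    FROM ML.PREDICT(MODEL `demo.price_model`,
                    (SELECT square_feet, bedrooms FROM `demo.new_listings`))
""").result()

for row in rows:
    print(row.predicted_price, row.square_feet, row.bedrooms)
```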

Though BigQuery ML is in beta (which has a flexible definition given that this is Google) and is currently limited to just the two predictive model types mentioned, those models cover common business queries, such as predicting the cost of something or the likelihood of customer churn. Google Cloud may be the underdog in the cloud storage market compared to AWS or Azure, but for companies and departments trying to do relatively simple modeling of the scenarios mentioned above, BigQuery ML puts a fair bit of power in the hands of data analysts.

Recommendations for Organizations Considering BigQuery ML

If you’re interested in testing out BigQuery ML, you’ll first need data stored in Google BigQuery on Google Cloud. It is accessible on the Google Cloud Platform Free tier, but charges will apply beyond specific storage and processing limits. Once your data has been loaded into BigQuery, you can access it and process it through BigQuery’s web interface, via the command line, or via BigQuery’s REST API. It can also be accessed via external tools such as Jupyter notebooks or BI platforms. Looker has already announced its integration with BigQuery ML, and that it looks forward to integrating it into Looker Blocks. Look for other Google Cloud Platform partners to add similar functionality in the near future.

If you’re a data analyst looking to try out BigQuery ML, you’ll want to brush up on your statistics knowledge of linear regression and (binary) logistic regression to understand the results of your models; it’s time to dust off your stats textbook.

Finally, BigQuery ML is in beta – given that Google has put up warnings on the documentation pages that this product “might be changed in backward-incompatible ways and is not subject to any SLA or deprecation policy,” treat it as the testing ground that it is and don’t put it into production just yet.


Amalgam Insights Releases List of 2018’s Best Enablement Solution Vendors for “Building Better Sales Brains”

Optimized sales training requires combining three critical aspects: effective sales enablement, people skills, and situational awareness

July 31, 2018 08:30 ET | Source: Amalgam Insights

BOSTON, July 31, 2018 — It turns out that the traits common to the best salespeople aren’t necessarily innate; research indicates they can be scientifically coached to be top producers. A new report, issued today from Amalgam Insights, evaluates the technology vendors who are leading the industry in optimizing this kind of training.

Todd Maddox, the most-cited researcher in corporate learning and Learning Science Research Fellow at Amalgam Insights, authored this report. This research uniquely identifies learning science approaches to optimizing sales training, which Maddox says requires a combination of effective sales enablement, people skills, and situational awareness.

The companies recognized in this research are (in alphabetical order): Allego, Brainshark, CrossKnowledge, Gameffective, Highspot, JOYai, Lessonly, Mindtickle, myTrailhead, Qstream, Seismic, and Showpad.

“Companies are continually looking for a competitive edge; leveraging technology to develop a top-tier sales force is a key area for improvement,” Maddox says. “If you want sales enablement and training tools that are highly effective, they must be grounded in learning science – the marriage of psychology and brain science. Many sales teams and sales-focused vendors have access to these tools. What is needed is an effective way to leverage these tools to maximum advantage,” he added.

Maddox’s report notes that successful companies work to map sales processes to three distinct learning systems in the brain:

  • The cognitive skills learning system is about knowing facts (the “what”) and is directly linked to sales enablement;
  • The behavioral skills learning system is about behavior (the “how”) and is directly linked to people skills training; and
  • The emotional learning system is about “reading” people and situations (the “feel”) and is directly linked to situational awareness.

“When these three traits are combined, salespeople become far more effective in understanding both the tangible and intangible needs of their customers and can craft and deliver solutions that continually produce solid results for their companies,” Maddox said.

Maddox’s report, “2018’s Best Sales Enablement Solutions for Building Better Sales Brains,” is available for Sales VPs, Directors, and Managers at www.amalgaminsights.com.

About Amalgam Insights:
Amalgam Insights (www.amalgaminsights.com) is a consulting and strategy firm focused on the transformative value of Technology Consumption Management. AI believes that all businesses must fundamentally reimagine their approach to data, design, cognitive augmentation, pricing, and technology usage to remain competitive. AI provides marketing and strategic support to enterprises, vendors, and institutional investors for conducting due diligence in Technology Consumption Management.

For more information:
Hyoun Park
Amalgam Insights
hyoun@amalgaminsights.com
415.754.9686

Steve Friedberg
MMI Communications
steve@mmicomm.com
484.550.2900

Azure Advancements Announced at Microsoft Inspire 2018

Last week, Microsoft Inspire took place, which meant that Microsoft made a lot of new product announcements regarding the Azure cloud. In general, Microsoft is both looking up at Amazon and trying to catch up from a market share perspective, while trying to keep its current #2 place in the Infrastructure as a Service world ahead of the rapidly growing Google Cloud Platform as well as IBM and Oracle. Microsoft Azure is generally regarded, along with Amazon, as a market-leading cloud platform that provides storage, computing, and security, and it is moving towards analytics, networking, replication, hybrid synchronization, and blockchain support.

Key functionalities that Microsoft has announced include:
Continue reading “Azure Advancements Announced at Microsoft Inspire 2018”