Data Science and Machine Learning News Roundup, May 2019

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Amazon, Anaconda, Cambridge Semantics, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, Elastic, Google, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta, TROVE.

Domino Data Lab Champions Expert Data Scientists While Outpacing Walled-Garden Data Science Platforms

Domino announced key updates to its data science platform at Rev 2, its annual data science leader summit. For data science managers, the new Control Center provides information on what an organization’s data science team members are doing, helping managers address any blocking issues and prioritize projects appropriately. The Experiment Manager’s new Activity Feed gives data scientists better tools for organizing and tracking their experiments. The Compute Grid and Compute Engine, built on Kubernetes, will make it easier for IT teams to install and administer Domino, even in complex hybrid cloud environments. Finally, the beta Domino Community Forum will allow Domino users to share best practices with each other, as well as submit feature requests and feedback to Domino directly. With governance becoming a top priority across data science practices, Domino’s improvements around monitoring and experiment repeatability will make this important capability easier for its users to deliver.

Informatica Unveils AI-Powered Product Innovations and Strengthens Industry Partnerships at Informatica World 2019

At Informatica World, Informatica publicized a number of key partnerships, both new and enhanced. Most of these partnerships involve additional support for cloud services, including storage, both data warehouses (Amazon Redshift) and data lakes (Azure, Databricks). Informatica also announced a new Tableau Dashboard Extension that makes Informatica Enterprise Data Catalog available from within the Tableau platform. Finally, Informatica and Google Cloud are broadening their existing partnership by making Intelligent Cloud Services available on Google Cloud Platform, and providing increased support for Google BigQuery and Google Cloud Dataproc within Informatica. Amalgam Insights attended Informatica World and provides a deeper assessment of Informatica’s partnerships, as well as CLAIRE-ity on Informatica’s AI initiatives.

Microsoft delivers new advancements in Azure from cloud to edge ahead of Microsoft Build conference

Microsoft announced a number of new Azure Machine Learning and Azure AI capabilities. Azure Machine Learning has been integrated with Azure DevOps to provide “MLOps” capabilities that enable reproducibility, auditability, and automation of the full machine learning lifecycle. This marks a notable step toward making the machine learning lifecycle more governable and compliant with regulatory needs. Azure Machine Learning also has a new visual drag-and-drop interface to facilitate codeless machine learning model creation, making the process of building machine learning models more user-friendly. On the Azure AI side, Azure Cognitive Services launched Personalizer, which provides users with specific recommendations to inform their decision-making process. Personalizer is part of the new “Decisions” category within Azure Cognitive Services; other Decisions services include Content Moderator, an API to assist in the moderation and review of text, images, and videos; and Anomaly Detector, an API that ingests time-series data and chooses an appropriate anomaly detection model for that data. Finally, Microsoft added a “cognitive search” capability to Azure Search, which allows customers to apply Cognitive Services algorithms to search results of their structured and unstructured content.
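For teams looking to try Anomaly Detector, the call pattern is a single REST request carrying a batch of time-series points. The sketch below is a minimal example assuming the batch detection endpoint, request shape, and response key (“isAnomaly”) as documented when the service launched; the resource name and key are placeholders, and the exact path and field names should be verified against current Azure documentation.

```python
# Minimal sketch: batch anomaly detection against the Azure Anomaly Detector
# REST API. The endpoint path, request shape, and "isAnomaly" response key are
# assumptions based on the service's launch-era documentation -- verify first.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # placeholder
API_KEY = "<your-key>"  # placeholder

# Daily series with a spike on May 7th; the service picks a detection model for it.
values = [120, 118, 121, 119, 122, 117, 240, 121, 118, 120, 119, 122, 118, 121]
series = [
    {"timestamp": f"2019-05-{day:02d}T00:00:00Z", "value": v}
    for day, v in enumerate(values, start=1)
]

response = requests.post(
    f"{ENDPOINT}/anomalydetector/v1.0/timeseries/entire/detect",
    headers={"Ocp-Apim-Subscription-Key": API_KEY},
    json={"series": series, "granularity": "daily"},
)
response.raise_for_status()

for point, is_anomaly in zip(series, response.json()["isAnomaly"]):
    if is_anomaly:
        print(f"Anomaly at {point['timestamp']}: {point['value']}")
```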

Microsoft and General Assembly launch partnership to close the global AI skills gap

Microsoft also announced a partnership with General Assembly to address the dearth of qualified data workers, with the goal of training 15,000 workers by 2022 for various artificial intelligence and machine learning roles. The two companies will found an AI Standards Board to create standards and credentials for artificial intelligence skills. In addition, Microsoft and General Assembly will develop scalable training solutions for Microsoft customers, and establish an AI Talent network to connect qualified candidates to AI jobs. This continues the trend of major enterprises building internal training programs to bridge the data skills gap.

Data Science and Machine Learning News Roundup, April 2019

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Amazon, Anaconda, Cambridge Semantics, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, Elastic, Google, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta, TROVE.

Alteryx Acquires ClearStory Data to Accelerate Innovation in Data Science and Analytics

Alteryx acquired ClearStory Data, an analytics solution for complex and unstructured data with a focus on automating Big Data profiling, discovery, and data modeling. This acquisition reflects Alteryx’s interest in expanding its native capabilities to include more in-house data visualization tools. ClearStory Data brings a visual focus on data prep, blending, and dashboarding through its Interactive Storyboards, which complements Alteryx’s ongoing augmentation of internal visualization capabilities throughout the workflow, such as Visualytics.

Dataiku Announces the Release of Dataiku Lite Edition

Dataiku released two new versions of its machine learning platform, Dataiku Free and Dataiku Lite, targeted towards small and medium businesses. Dataiku Free will allow teams of up to three users to work together simultaneously; it is available both on-prem and on AWS and Azure. Dataiku Lite will provide support for Hadoop and job scheduling beyond the capabilities of Dataiku Free. Since Dataiku already partners with over 1000 small and medium businesses, creating versions of its existing platform that are more financially accessible to such organizations lowers a significant barrier to entry, and positions smaller companies to grow their nascent data science practices within the Dataiku family.

DataRobot Celebrates One Billion Models Built on Its Cloud Platform

DataRobot announced that as of mid-April, its customers had built one billion models on its automated machine learning platform. Vice President of Product Management Phil Gurbacki noted that DataRobot customers build more than 2.5 million models per day. Given that the majority of models created are never successfully deployed – a common theme cited this month at both Enterprise Data World and at last week’s Open Data Science Conference – it seems likely that DataRobot customers do not currently have one billion models operationalized. If the percentage of deployed models is significantly higher than the norm, though, this would certainly boost DataRobot in potential customers’ eyes, and serve to further legitimize AutoML software solutions as plausible options.
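As a quick sanity check on the scale of those figures (assuming, purely for illustration, a constant build rate):

```python
# Back-of-the-envelope check: at the stated build rate, how long would it take
# to accumulate one billion models? (Assumes a constant rate for illustration.)
total_models = 1_000_000_000
models_per_day = 2_500_000

days = total_models / models_per_day
print(f"~{days:.0f} days, or about {days / 365:.1f} years, at the current rate")
```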

Microsoft, SAS, TIBCO Continue Investments in AI and Data Skills Training

Microsoft announced a new partnership with OpenClassrooms to train students for the AI job marketplace via online coursework and projects. Given an estimate projecting that 30% of AI and data jobs will go unfilled by 2022, OpenClassrooms’ recruitment of 1,000 promising candidates seems like just the beginning of a much-needed effort to address the skills gap.

SAS provided more details on the AI education initiatives they announced last month. First, they launched SAS Viya for Learners, which will allow academic institutions to access SAS AI and machine learning tools for free. A new SAS machine learning course and two new Coursera courses will provide access to SAS Viya for Learners for those wanting to learn AI skills without being affiliated with a traditional academic institution. SAS also expanded on the new certifications they plan to offer: three SAS specialist certifications in machine learning, natural language and computer vision, and forecasting and optimization. Classroom and online options will be available for pursuing these certifications.

Meanwhile, TIBCO continued expanding its partnerships with educational institutions in Asia to broaden analytics knowledge in the region. Most recently, it has augmented its existing partnership with Singapore Polytechnic to train 1000 students in analytics and IoT skillsets by 2020. Other analytics education partnerships TIBCO has announced in the last year include Yuan Ze University in Taiwan, Asia Pacific University of Technology and Innovation in Malaysia, and BINUS University in Indonesia.

The big picture: existing data science degree programs and machine learning and AI bootcamps are not providing a large enough volume of highly-skilled job candidates quickly enough to fill many of these data-centric positions. Expect to hear more about additional educational efforts forthcoming from data science, machine learning, and AI vendors.

Quick AI Insights at #MSBuild in an Overstuffed Tech Event Week

We are in the midst of one of the most packed tech event weeks in recent memory. This week alone, Amalgam Insights is tracking *six* different events:

This means many of this week’s announcements will be directly comparable. For instance, Google, Microsoft, Red Hat, SAP, and ServiceNow should all have a variety of meaty DevOps and platform access announcements. Google, Microsoft, SAP, and possibly IBM and ServiceNow should have interesting new AI announcements. ServiceNow and Red Hat will both undoubtedly be working to one-up each other when it comes to revolutionizing IT. We’ll be providing some insights and giving you an idea of what to look forward to.

Please register or log into your Amalgam Insights Community account to read more.

How is Salesforce Taking on AI: a look at Einstein at Salesforce World Tour Boston

On April 3rd, Amalgam Insights attended Salesforce World Tour 2019 in Boston. Salesforce users may know this event as an opportunity to meet with their account managers and catch up with new functionalities and partners without having to fly to San Francisco and navigate through the colossus that is Dreamforce.

Salesforce also uses this tour as an opportunity to present analysts with the latest and greatest changes in its offerings. Amalgam Insights was interested in learning more about Salesforce’s current positioning from a data perspective, including the vendor’s acquisition of MuleSoft as well as the progress of both Einstein Analytics and the Einstein Platform in providing value-added insights and artificial intelligence to Salesforce clients.

Please register or log into your Amalgam Insights Community account to read more.

Enterprise Data World 2019: Data Science Will Take Over The World! … Eventually.

Amalgam Insights attended Enterprise Data World, a conference focused on data management, in late March. Though the conference tracks covered a wide variety of data practices, our primary interest was in the sessions on the AI and Machine Learning track. We came away with the impression that the data management world is starting to understand and support some of the challenges that organizations face when trying to get complex data initiatives off the ground, but that the learning process will continue to have growing pains.

Data Strategy Bootcamp

I began my time at Enterprise Data World with the Data Strategy Bootcamp on Monday. Often, organizations focus on getting smaller data projects done quickly in a tactical fashion at the expense of consciously developing their broader data strategy. The bootcamp addressed how to incorporate these “quick wins” into the bigger picture, and delved into the details of what a data strategy should include and what the process of building one looks like. For people in data analyst and data scientist roles, understanding and contributing to your organization’s data strategy is important because well-documented and properly-managed data means data analysts and data scientists can spend more of their time doing analytics and building machine learning models. The “data scientists spend 80% of their time cleaning and preparing data” figure continues to circulate without measurable improvement. To build a successful data strategy, organizations will need to identify data-centric business goals that align the organization’s data strategy with its business strategy, assess the organization’s maturity and capabilities across its data ecosystem, and determine long-term goals and “quick wins” that will provide measurable progress towards those goals.

Getting Started with Data Science, Machine Learning, and Artificial Intelligence Initiatives

Actually getting started on data science, machine learning, and artificial intelligence initiatives remains a point of confusion for many organizations looking to expand beyond the basic data analytics they’re currently doing. Kristin Serafin and Lizzie Westin of FINRA, and Vinay Seth Mohta of Manifold, led sessions discussing how to turn talk about machine learning and artificial intelligence into action in your organization, and how to do so in a way that can scale up quickly. Key takeaways: your organization needs to understand its data to know what questions it wants answered that require a machine learning approach; it needs to understand what tools are necessary to move forward; it needs to understand who already has pertinent data capabilities within the organization, and who is best positioned to improve their skills in the necessary manner; and it needs to obtain buy-in from relevant stakeholders.

Data Job Roles

Data job roles were discussed in multiple sessions; I attended one from the perspective of how analytical jobs themselves are evolving, and one from the perspective of analytical career development. Despite the hype, not everyone is a data scientist, even if they may perform some tasks that are part of a data science pipeline! Data engineers are the difference between data scientists’ experiments sitting in silos and making it into production, where they can affect your company. Data analysts aren’t going anywhere – yet. (Though Michael Stonebraker, in his keynote Tuesday morning, stated that he believed data science would eventually replace BI, pending upskilling a sufficient number of data workers.) And data scientists spend 80% of their time doing data prep instead of building machine learning models; they’d like to do more of the latter, and because they’re an expensive asset, the business needs them to be doing less prep and more building as well.

By the same token, there are so many different specialties across the data environment, and the tool landscape is incredibly large. No one will know everything; even relatively low-level people will need to provide leadership in their particular roles to bridge the much-bemoaned gap between IT and Business. So how can data people do that? They’ll need to learn to talk about their initiatives and accomplishments in business terms – increasing revenue, decreasing cost, managing risk. By doing this, data strategy can be tied to business strategy, and this barrier to success can be surmounted.

Data Integration at Scale

Michael Stonebraker’s keynote highlighted the growing need for people with data science capabilities, but the real meat of his talk centered around how to support complex data science initiatives: doing data integration at scale. One example: General Electric’s procurement system problem. Clearly, the ideal number of procurement systems in any company is “one.” Given mergers and acquisitions, over time, GE had accumulated *75* procurement systems. They could save $100M if they could bring together all of these systems, with all of the information on the terms and conditions negotiated with each vendor via each of these systems. But this required a rather complex data integration process. Once that was done, the same process remained for dealing with their supplier databases, their customer databases, and a whole host of other data. Machine learning can help with this – once there are sufficient people with machine learning skills to address these large problems. But doing data integration at scale will remain a significant challenge for enterprises for now, with machine learning skills remaining relatively costly and rare, data accumulation continuing to grow exponentially, and third-party data increasingly being brought in to supplement existing analyses.
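To make the integration challenge concrete, the sketch below shows the kind of fuzzy record matching that sits at the heart of merging dozens of procurement systems, using only the Python standard library. The supplier names are invented, and production entity-resolution pipelines (including the machine-learning-driven approaches Stonebraker described) are far more sophisticated than this.

```python
# Toy supplier matching across two procurement systems via string similarity.
# Invented data; real entity resolution at GE scale needs much more than this.
from difflib import SequenceMatcher

system_a = ["General Electric Co.", "Acme Industrial Supply", "Baker Hughes Inc"]
system_b = ["GENERAL ELECTRIC COMPANY", "Acme Indust. Supply LLC", "Baker-Hughes, Incorporated"]

def similarity(a: str, b: str) -> float:
    """Normalized similarity between two supplier names, ignoring case."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

# Pair each record in system A with its best candidate in system B.
for name_a in system_a:
    best = max(system_b, key=lambda name_b: similarity(name_a, name_b))
    score = similarity(name_a, best)
    status = "likely match" if score > 0.6 else "needs review"
    print(f"{name_a!r} -> {best!r} ({score:.2f}, {status})")
```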

Knowledge Graphs and Semantic AI

A number of sessions discussed knowledge graphs and their importance for supporting both data management and data science tasks. Knowledge graphs provide a “semantic” layer over standard relational databases – they prioritize documenting the relationships between entities, making it easier to understand how different parts of your organization’s data are interrelated. Because having a knowledge graph about your organization’s data provides natural-language context around data relationships, it can make machine learning models based on that data more “explainable” due to the additional human-legible information available for interpretation and understanding. Another example: if you’re trying to perform a search, most results rely on exact matches. Having a knowledge graph makes it simple to pull up “related” results based on the relationships documented in that knowledge graph.
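As a toy illustration of that “related results” point, a small graph of typed relationships turns finding related entities into a one-hop traversal rather than an exact-match lookup. The entities and relationship names below are invented, and the sketch assumes the networkx library.

```python
# Toy knowledge graph: typed edges between entities let us surface "related"
# results that exact-match search would miss. All entities here are invented.
import networkx as nx

kg = nx.MultiDiGraph()
kg.add_edge("Customer Churn Model", "Billing Database", relation="trained_on")
kg.add_edge("Billing Database", "Finance Department", relation="owned_by")
kg.add_edge("Revenue Forecast Report", "Billing Database", relation="built_from")
kg.add_edge("Customer Churn Model", "Data Science Team", relation="maintained_by")

def related(entity: str):
    """Entities one hop away from `entity`, with the relationship linking them."""
    outgoing = [(nbr, d["relation"]) for _, nbr, d in kg.out_edges(entity, data=True)]
    incoming = [(src, d["relation"]) for src, _, d in kg.in_edges(entity, data=True)]
    return outgoing + incoming

for neighbor, relation in related("Billing Database"):
    print(f"Billing Database <-[{relation}]-> {neighbor}")
```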

Data Access, Control, and Usage

My big takeaway from Scott Taylor’s Data Architecture session: data should be a shared, centralized asset for your entire organization; it must be 1) accessible by its consumers, 2) in the format they require, 3) via the method they require, 4) only if they have permission to access it (security), and 5) used in a way that abides by governance standards and laws. Data scientists care about this because they need data to do their job, and any hurdle in accessing usable data makes it more likely they’ll avoid using official methods to access the data. Nobody has three months to wait for a data requisition from IT’s data warehouses to be turned around anymore; instead, “I’ll just use this data copy on my desktop” – or more likely these days, in a cloud-hosted data silo. Making centralized access easy to use makes data users much more likely to comply with data usage and access policies, which helps secure data properly, govern its use appropriately, and prevent data silos from forming.

Digging a bit more into the security and governance aspects mentioned above, it’s surprisingly easy to identify individuals in a set of anonymized data. In separate presentations, Matt Vogt of Immuta demonstrated this with a dataset of anonymized NYC taxi data, even as more and more information was redacted from it, and Jeff Jonas of Senzing took it further in his keynote – as context accumulates around data, it gets easier to make inferences, even when your data is far from clean. With GDPR on the table, and CCPA coming into effect in nine months, how data workers can use data, ethically and legally, will shift, significantly affecting data workflows. Both the use of data and the results provided by black-box machine learning models will be challenged.
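To see how little it can take to re-identify someone, consider the toy example below: joining “anonymized” trip records to a single piece of outside knowledge on two quasi-identifiers. The data is invented and far simpler than the taxi dataset used in the actual demonstrations, but the mechanism is the same.

```python
# Toy re-identification: an "anonymized" trip table joined to outside knowledge
# on quasi-identifiers (pickup time and zone). All data here is invented.
import pandas as pd

anonymized_trips = pd.DataFrame({
    "pickup_time": ["2019-03-25 08:05", "2019-03-25 08:05", "2019-03-25 17:40"],
    "pickup_zone": ["Midtown", "SoHo", "Midtown"],
    "fare": [14.50, 9.25, 22.00],
})

# Auxiliary knowledge, e.g. a public post: "Grabbed a cab in Midtown at 8:05 today."
outside_knowledge = pd.DataFrame({
    "person": ["Alice"],
    "pickup_time": ["2019-03-25 08:05"],
    "pickup_zone": ["Midtown"],
})

# Two quasi-identifiers are enough to single out Alice's trip -- and her fare.
reidentified = outside_knowledge.merge(
    anonymized_trips, on=["pickup_time", "pickup_zone"], how="left"
)
print(reidentified)
```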

Recommendations

Data scientists and machine learning practitioners should familiarize themselves with the broader data management ecosystem. Practitioners understand why dirty data is problematic, given that they spend most of their work hours cleaning data before they can do the actual machine learning model-building, but there are numerous tools available to help with this process, some of which can obviate the need to repeat a cleaning job that has already been done once. As enterprise data catalogs become more common, they will prevent data scientists from spending hours on duplicative work when someone else has already cleaned the dataset they were planning to use and made it available for the organization’s use.

Data scientists and data science managers should also learn how to communicate the business value of their data initiatives when speaking to business stakeholders. From a technical point of view, making a model more accurate is an achievement in and of itself, but framing it from a business standpoint shows what that improved accuracy or speed means for the business as a whole. Maybe your 1% improvement in model accuracy means you save your company tens of thousands of dollars by more accurately targeting potential customers who are ready to buy your product – that’s what will get the attention of your line-of-business partners.

Data science directors and Chief Data or Chief Analytics Officers should approach building their organization’s data strategy and culture with the long-term view in mind. Aligning your data strategy with the organization’s business strategy is crucial to your organization’s success. Rather than having the two departments tugging the rope in opposite directions, develop an understanding of each other’s needs and capabilities and apply that knowledge to keep everyone focused on the same goal.

Chief Data Officers and Chief Analytics Officers should understand their organization’s capabilities by assessing both the data capabilities and capacity available by individual and the general maturity of each data practice area (such as Master Data Management, Data Integration, Data Architecture, etc.). Knowing the availability of both technical and people-based resources is necessary to develop a scalable set of data processes for your organization that delivers consistent results no matter which data scientist or analyst is in charge of executing the process for any given project.

As part of developing their organization’s data strategy, Chief Data Officers and Chief Analytics Officers must work with their legal department to develop rules and processes for accumulating, storing, accessing, and using data appropriately. As laws like GDPR and the California Consumer Privacy Act start being enforced, data access and usage will come under much more scrutiny; companies not adhering to the letter of those laws will find themselves fined heavily. Data scientists and data science managers who are working on projects that involve sensitive or personal data should talk to their general counsel to ensure they remain on the right side of the law.

At IBM Think, Watson Expands “Anywhere”

At IBM Think in February, IBM made several announcements around the expansion of Watson’s availability and capabilities, framing these announcements as the launch of “Watson Anywhere.” This piece is intended to provide guidance to data analysts, data scientists, and analytic professionals seeking to implement machine learning and artificial intelligence capabilities and evaluating the capabilities of…

Please register or log into your Amalgam Insights Community account to read more.

Data Science and Machine Learning News Roundup, February 2019

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Amazon, Anaconda, Cambridge Semantics, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, Elastic, Google, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta,…

Please register or log into your Amalgam Insights Community account to read more.

Four Key Announcements from H2O World San Francisco

Last week at H2O World San Francisco, H2O.ai announced a number of improvements to Driverless AI, H2O, Sparkling Water, and AutoML, as well as several new partnerships for Driverless AI. The updates provide incremental improvements across the platform, while the partnerships reflect H2O.ai expanding its audience and capabilities. This piece is intended to provide guidance…

Please register or log into your Amalgam Insights Community account to read more.

Data Science and Machine Learning News Roundup, January 2019

On a monthly basis, I will be rounding up key news associated with the Data Science Platforms space for Amalgam Insights. Companies covered will include: Alteryx, Amazon, Anaconda, Cambridge Semantics, Cloudera, Databricks, Dataiku, DataRobot, Datawatch, Domino, Elastic, Google, H2O.ai, IBM, Immuta, Informatica, KNIME, MathWorks, Microsoft, Oracle, Paxata, RapidMiner, SAP, SAS, Tableau, Talend, Teradata, TIBCO, Trifacta, TROVE.

Cloudera and Hortonworks Complete Planned Merger

In early January, Cloudera and Hortonworks completed their planned merger. With this, Cloudera becomes the default machine learning ecosystem for Hadoop-based data, while providing Hortonworks customers an easy pathway to expanded machine learning and analytics capabilities.

Study: 89 Percent of Finance Teams Yet to Embrace Artificial Intelligence

A study conducted by the Association of International Certified Professional Accountants (AICPA) and Oracle revealed that 89% of organizations have not deployed AI to their finance groups. Although a correlation exists between companies with revenue growth and companies that are using AI, the key takeaway is that artificial intelligence is still in the early adopter phase for most organizations.

Gartner Magic Quadrant for Data Science and Machine Learning Platforms

In late January, Gartner released its Magic Quadrant for Data Science and Machine Learning Platforms. New to the Data Science and Machine Learning MQ this year are both DataRobot and Google – two machine learning offerings with completely different audiences and scope. DataRobot offers an automated machine learning service targeted towards “citizen data scientists,” while Google’s machine learning tools, though part of Google Cloud Platform, are more of a DIY data pipeline targeted towards developers. By contrast, I find it curious that Amazon’s SageMaker machine learning platform – and its own collection of task-specific machine learning tools, despite their similarity to Google’s – failed to make the quadrant, given this quadrant’s large umbrella.

While data science and machine learning are still emerging markets, the contrasting demands that citizen data scientists and cutting-edge developers place on these technologies warrant splitting the next Data Science and Machine Learning Magic Quadrant into separate reports targeted to the considerations of each of these audiences. In particular, the continued growth of automated machine learning technologies will likely drive such a split, as citizen data scientists pursue a “good enough” solution that provides quick results.

CES 2019 Ramifications for Enterprise IT

Vendors and Organizations Mentioned: IBM, Ose, WindRiver, Velodyne, UV Partners, TDK Corporation, Chirp Microsystems, Qualcomm, Intel, Zigbee Alliance, Thread Group, Impossible Foods

The CES (Consumer Electronics Show) is traditionally known as the center of consumer technology. Run by the CTA (Consumer Technology Association) in Las Vegas, this show brings out enormous volumes of new technology ranging from smart cars to smart homes to smart sports equipment to smart… well, you get the picture. But within all of these announcements were a number that affect the enterprise IT world and the definition of IT, and that will be important for tech professionals to think about in 2019. Amalgam Insights went through hundreds of different technology press releases and announcements to find the most important ones that will affect your professional career.

Come along with me as we look at Quantum Computing, Gender Equality, Autonomous Vehicles, Disinfected Smartphones, Low Power Virtual Reality, Neural Net Chips, Internet of Things Interoperability, and, yes, the Impossible Burger.

Quantum Computing

On January 8th, 2019, IBM announced IBM Q System One, the “first integrated universal approximate quantum computing system” designed for commercial use. From a practical perspective, this will allow R&D departments to actually have their own quantum computers. Today, the vast majority of quantum computing work is done through remote access to quantum computers or quantum computing emulators, which limits experimenters’ ability to customize and configure their computing environments.

To create a quantum computing system, IBM had to bring together hardware that provided high-quality and low-error rate qubits, cryogenic equipment to cool the hardware and quantum activity, as well as the electronics, firmware, and traditional computing capabilities needed to support a quantum environment. Of course, IBM is not new to quantum computing and has been a market leader in this emerging category.

Quantum computing fundamentally matters because we are running up against the physical limits of materials science that allow microprocessors to get smaller and faster, which we typically sum up as Moore’s Law. In addition, quantum computing potentially allows both for more secure encryption and for the ability to quickly decrypt extremely secure technologies, depending on whether one takes a white-hat or black-hat approach. The ramifications mean that security organizations need to start understanding quantum computing now, whether to stay ahead of black-hat quantum computing efforts or to provide white-hat security answers.

Gender Equality at CES

At CES, a woman-designed sex toy originally given an innovation award (Warning: may not be Safe For Work) had its award revoked. The Ose vibrator designed by Lora DiCarlo was entered in the robotics and drone category based on its design by a robotics lab at Oregon State University and eight patents pending for a variety of robotic and biomimicry capabilities.

The product was undoubtedly risque. But CES has previously allowed virtual reality pornography to be shown within the show as well as other anatomical simulations designed for sex.

Given that CES has historically allowed other exhibitors to present similar products and objects, the revoking of this award looks biased. This is an important lesson that the answer to providing a gender-equal environment is not necessarily to simply remove all sexual content. The goal is to eliminate harassment and abuse while providing equal opportunity across gender. As long as sex is a part of consumer technology, CES needs to provide equal opportunity for all genders to present.

Autonomous Vehicles

There were a number of announcements associated with Lidar sensors and edge computing innovations. Two that got Amalgam Insights’ attention included:

WindRiver’s integration of its Chassis automotive software with its TitaniumCloud virtualization software. This announcement hints at the need for the car, as a computing system, to be integrated with the cloud. This integration will be important as car manufacturers seek to upgrade car capabilities. As we continue to think about the car both as an autonomous data center of its own and as a set of computing and processing workloads that need to be upgraded on a regular basis, we will need to consider how the operational technologies associated with autonomous vehicles and other “Things” integrate with carrier-grade and public clouds.

Velodyne announced an end-to-end Lidar solution that includes both a hemispherical Lidar sensor called VelaDome and its Velia software. This launch reflects the need for hardware components and software to be integrated in the vehicle world, just as they are in the appliances and virtual machines we often use in the world of IT. This is another data point showing how autonomous vehicles are coming closer to our world of IT, both in creating integrated solutions and in requiring IT-like support in the future.

Disinfected Smartphones

UV Partners announced a new product called the UV Angel Aura Clean & Charge, which combines wireless charging with ultraviolet light disinfection. This product matters because, quite frankly, mobile phones tend to be filthy. That’s what happens when people hold them for hours a day and rarely wash or disinfect them. So, this device will be useful for germophobes.

But there is also the practical benefit of being able to clean phone surfaces more easily. This may lead to being able to use the phone to detect biological matter or changes more effectively, without additional dirt and biocontaminants in the way. This could make phones or other sensors more accurate in trying to detect trace elements or compounds, and increase the functionality of both phones and “Things” as a result.

Low Power Virtual Reality

TDK Corporation announced its work with Qualcomm, through its group company Chirp Microsystems, to improve controller tracking for mobile virtual reality and augmented reality headsets. Most importantly, the tracking system used for these devices draws only a few milliwatts, a small fraction of a standard smartphone battery’s power budget, compared to several hundred milliwatts for a standard optical tracking system. With this primary technology in development, both AR and VR experiences become more usable simply because they will take significantly less power to support.
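To put those power figures in perspective, a rough calculation against an assumed smartphone battery shows why milliwatts matter; the battery capacity and the specific wattages below are my assumptions for illustration, not figures from TDK or Qualcomm.

```python
# Rough illustration: tracking power draw versus an assumed 3,000 mAh, 3.7 V
# smartphone battery (~11.1 Wh). All numbers are assumptions, for scale only.
battery_wh = 3.0 * 3.7          # 3,000 mAh at 3.7 V ~= 11.1 Wh of stored energy
ultrasonic_tracking_w = 0.005   # "several milliwatts"
optical_tracking_w = 0.3        # "several hundred milliwatts"

for label, watts in [("ultrasonic", ultrasonic_tracking_w), ("optical", optical_tracking_w)]:
    hours = battery_wh / watts
    print(f"{label} tracking alone could run for ~{hours:,.0f} hours on that battery")
```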

This change may not sound exciting, but Amalgam Insights believes that one of the key challenges to the adoption of AR and VR is simply the battery life needed to use these applications for any extended amount of time. This breakthrough could significantly extend the life of AR and VR apps.

Artificial Intelligence

Intel made a number of chip announcements. Amalgam Insights is not a hardware analyst firm, so most of the mobile and laptop-based announcements are beyond our coverage. But the announcement that got our attention was the Intel Nervana Neural Network Processor. This chip, developed with Facebook, is designed to accelerate inference for neural networks and will drive higher-performance machine learning and artificial intelligence efforts.

At a time when every chip player is trying to get ahead with GPUs and TPUs, Intel is making its mark by focusing on accelerating inference, which is a necessary part of the “intelligence” of AI. Amalgam Insights looks forward to seeing how the Nervana processor is made available for commercial use and as a cloud-based capability for the enterprise world.

Internet of Things Interoperability

The Zigbee Alliance and Thread Group announced the completion of the Dotdot 1.0 specification, which will improve interoperability across smart home devices and networks made by different vendors. By providing a standard application layer that works across a wide variety of vendors and runs on an IP networking standard, Dotdot brings a level of standardization to application-level configuration, testing, and certification.

This standard is an important step forward for companies working on Smart Home devices or related Smart Office devices and seeking a common way to ensure that new devices will be able to communicate with existing device investments. Amalgam Insights looks forward to seeing how this standard revolutionizes Smart Buildings and the Future of Work.

And, the Impossible Burger

The belle of the ball, so to speak, at CES was the Impossible Burger 2.0, a soy-based protein held together by heme with iron and protein content similar to beef.

So, this is very cool, but why is this relevant to IT? First, this burger reminds us that food is now tech. Think about both how interesting and weird this is. A company has made custom proteins to build a new type of food designed to replace the taste and role of beef. Or at least that’s where they are today.

Meanwhile in the IT world, identity is increasingly based on biometrics: eyes, fingerprints, facial recognition. It is only a matter of time before either protein or DNA profiles are added to this mix. There will undoubtedly be some controversies and hiccups as this happens, but it is almost inevitable given the types of sensors we have and the evolution of DNA technologies like CRISPR that rapidly sequence and cut up DNA.

So, as we get better at replicating the nutrition and texture of meat with plant-based proteins at the same time that our physical bodies are increasingly used to provide access to our accounts… yes, this gets weird. But we’re probably five to ten years away from being hacked by some combination of these technologies as the DNA, protein, and biometric worlds keep coming closer and closer together.

For now, this is just cool to watch. And the Impossible Burger 2.0 sounds like a great vegan alternative to a burger. But putting the pieces together, identity in 2030 is going to be extremely difficult to manage.