June 3: From BI to AI (bodo.ai, Gigasheet, Incorta, One AI, Oracle, Rockset, Saturn Cloud)
If you would like your announcement to be included in Amalgam Insights’ weekly data and analytics roundups, please email email@example.com.
Funding
Gigasheet, a no-code analytics platform, announced that they had secured $7M in Series A funding. Participants in the round included Accomplice, Argon, Founder Collective, and REV, along with individual investors. The funds will go toward filling out the product roadmap and expanding their planned enterprise offering.
One AI, a natural language processing provider, announced that they had raised $8M in seed funding from angel investors. With the funding, One AI emerged from stealth and launched their NLP-as-a-Service offering. Their Language Skills API provides NLP models for specific business use cases, including conversation and article summarization, clustering and text analytics, and emotion and sentiment extraction. Developers can use these models to transform unstructured text into structured data.
Launches and Updates
Incorta, a real-time analytics platform, debuted new capabilities this week. Among the new features is a native Delta Sharing integration, allowing Incorta customers to share operational data securely and more quickly. Incorta also launched several data apps that acquire operational data from source systems and prepare it for analysis, with prebuilt business schemas and dashboards for Oracle EBS, Oracle ERP and EPM Clouds, NetSuite, SAP, and others.
Analytics platform Rockset announced a new integration with Oracle this week, allowing developers to run search, aggregations, and joins on data from Oracle databases in real time. Rockset ingests change data capture (CDC) streams from Oracle, enabling low-latency analytical queries on fresh data.
Data science and machine learning platform Saturn Cloud and parallel data compute platform bodo.ai have launched a partnership. Bodo.ai software running within Saturn Cloud resources will let data scientists scale their model prototypes to “petabyte-scale parallel processing production” without tuning or re-coding their models for scale.