Keeping pace with news and developments in the real-time analytics and AI market can be a daunting task. Fortunately, we have you covered with a summary of the items our staff comes across each week. And if you prefer it in your inbox, sign up here!
StreamNative, founded by the creators of Apache Pulsar, introduced Lakestream, a new architectural paradigm for lakehouse-native streaming. The impetus for the solution is that traditional streaming systems push interoperability up to the protocol layer, creating data silos for each protocol. In contrast, Lakestream pushes interoperability down to the storage and catalog layers, achieving unification through a shared lakehouse-native storage foundation and unified metadata catalog. Streams become first-class lakehouse primitives alongside tables.
As a result, Lakestream unifies data streaming and the lakehouse. The approach is analogous to the lakehouse paradigm itself, which demonstrated that data warehouses and data lakes don’t need to be separate. Lakestream makes the same argument for streaming: streaming systems and the lakehouse don’t need to be separate either.
In practice, a Kafka topic and an Iceberg table can be the same object: no movement, no connectors, no waiting. The Lakestream architecture that makes this possible is built on three layers: cloud-native stream storage that writes directly to object storage in open formats (Iceberg, Delta Lake); a Lakestream Catalog that federates with Databricks Unity Catalog, Snowflake Horizon Catalog, and AWS S3 Tables; and stateless protocol servers that let Kafka, Pulsar, and other protocols all write to the same underlying storage.
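The "topic and table are the same object" idea can be sketched as two read paths over one shared store: a streaming view that consumes records in offset order and a tabular view that scans the same files with a filter. This is a toy model for illustration only; the class and method names are invented, not StreamNative APIs.

```python
# Toy model: a streaming view and a tabular view read the SAME files
# in a shared object store, so no copy or connector sits between them.
# All names here are illustrative, not StreamNative's actual API.

class SharedStorage:
    """Stands in for object storage holding open-format data files."""
    def __init__(self):
        self.files = []  # each "file" is a batch of records

    def append_batch(self, records):
        self.files.append(list(records))

class TopicView:
    """Kafka-style view: consume records in arrival order with offsets."""
    def __init__(self, storage):
        self.storage = storage

    def consume(self, from_offset=0):
        flat = [r for batch in self.storage.files for r in batch]
        return list(enumerate(flat))[from_offset:]

class TableView:
    """Iceberg-style view: scan the same files as rows, with a filter."""
    def __init__(self, storage):
        self.storage = storage

    def scan(self, predicate=lambda row: True):
        return [r for batch in self.storage.files
                for r in batch if predicate(r)]

storage = SharedStorage()
storage.append_batch([{"id": 1, "v": "a"}, {"id": 2, "v": "b"}])
storage.append_batch([{"id": 3, "v": "c"}])

topic = TopicView(storage)
table = TableView(storage)
print(topic.consume(from_offset=1))       # stream reads after an offset
print(table.scan(lambda r: r["id"] > 1))  # table filter over the same bytes
```

The point of the sketch is that neither view owns the data: both are stateless readers over one storage layer, which is the property that removes the movement and connector steps.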
In other company news, StreamNative announced the launch of Ursa For Kafka (UFK). The solution is a native Apache Kafka service that puts the Lakestream vision into practice.
Real-time analytics news in brief
BMC Software announced new innovations with purpose-built AI embedded in the mainframe tools that operators and developers use daily. New mainframe AI capabilities and innovations include zAdviser Enterprise Application Analysis, an AI-powered mainframe development productivity insights platform that combines source code analysis, BMC AMI DevX telemetry, and development productivity data into a single AI-generated narrative intelligence report.
The company also announced BMC AMI Assistant, which is now pervasive across the BMC AMI portfolio. It brings Knowledge Hub and Knowledge Expert Chat into the workflow, making institutional knowledge accessible from sources such as runbooks, tickets, log files, and prior resolutions. The company today also expanded upon recently introduced agentic AI innovations for the Control-M solution with a new Control-M Archive Service that automatically archives job logs and output data to provide a cloud-native, long-term repository for auditing, regulatory compliance, and post-execution analysis.
AWS announced Amazon S3 Files, a new file system that seamlessly connects any AWS compute resource with Amazon Simple Storage Service (Amazon S3). Built using Amazon EFS, S3 Files gives users the performance and simplicity of a file system with the scalability, durability, and cost-effectiveness of S3. Using the solution, users no longer need to duplicate data or cycle it between object storage and file system storage. S3 Files maintains a view of the objects in a bucket and intelligently translates file system operations into efficient S3 requests on the user’s behalf.
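The translation S3 Files performs, presenting a file hierarchy over flat objects, can be sketched roughly as follows. This is a conceptual toy, not the AWS implementation; all class and method names are hypothetical.

```python
# Illustrative sketch of the core idea: present file-system calls over a
# flat object namespace by translating paths into keys and directory
# listings into key-prefix queries. Not the actual AWS S3 Files design.

class ObjectBucket:
    def __init__(self):
        self.objects = {}  # key -> bytes

class FileFacade:
    """Maps file-system operations onto object keys in one bucket."""
    def __init__(self, bucket):
        self.bucket = bucket

    def write(self, path, data):
        # A file write becomes a PUT of one object under the path-as-key.
        self.bucket.objects[path.lstrip("/")] = data

    def read(self, path):
        # A file read becomes a GET of that object.
        return self.bucket.objects[path.lstrip("/")]

    def listdir(self, dirpath):
        # A directory listing becomes a LIST with a key prefix.
        prefix = dirpath.strip("/") + "/"
        names = {k[len(prefix):].split("/")[0]
                 for k in self.bucket.objects if k.startswith(prefix)}
        return sorted(names)

fs = FileFacade(ObjectBucket())
fs.write("/logs/app/today.txt", b"hello")
fs.write("/logs/app/yesterday.txt", b"bye")
print(fs.listdir("/logs/app"))  # ['today.txt', 'yesterday.txt']
```

Because both views resolve to the same objects, nothing is duplicated or cycled between stores, which is the property the announcement highlights.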
Blaize announced the planned launch of Blaize AI Services, a platform designed to help AI infrastructure providers and enterprises deploy production-ready, application-level AI services without building the underlying AI stack from scratch. The solution combines modular APIs, hybrid computing, and Forward Deployed Engineering into a single platform that makes AI easier to operationalize and scale. The platform intelligently decomposes high-level tasks and schedules the components across Blaize accelerators and GPUs based on cost, power, and performance targets.
Broadcom announced the next evolution of its enterprise automation solution, Automic Automation v26. This release combines open MCP-based LLM connectivity, native Python and text-to-workflow DevEx, and enhanced zero-downtime upgrades to safely operationalize AI at enterprise scale. Designed to supercharge AI adoption in production without sacrificing trust, Automic Automation v26 wraps a governed orchestration layer around every AI interaction so organizations gain reliability, auditability, and control over agentic execution.
C3 AI announced the general availability of C3 Code, which combines autonomous agentic coding with the full depth of the C3 Agentic AI Platform. Key capabilities of C3 Code include 40+ enterprise AI applications and packages, a C3 AI type system that offers a unified abstraction layer connecting enterprise data across sources, pre-built production domain AI algorithms, full-stack application generation, parallel and sequential agent execution, governed deployment, and more.
Cloudera announced significant advancements to its hybrid data and AI platform. These updates help enterprises lower infrastructure costs and accelerate analytics and AI across their entire data estate. Key features include guaranteed operational stability and the ability to modernize seamlessly by providing simultaneous updates to on-premises and cloud deployments. The update also introduces new capabilities to enhance performance, flexibility, and data collaboration across modern data architectures.
CoChat introduced a centralized workspace where employees can communicate, share AI chats, deploy assistants, and generate automated workflows. The platform is designed to bring greater coordination, visibility, and AI fluency, while mitigating tool sprawl and security risks for businesses. The solution currently supports access to hundreds of models through its gateway infrastructure and includes roughly 70 integrations across popular business and technical tools, including Slack, Discord, Salesforce, GitHub, GitLab, Intercom, Typeform, Google Drive, Grafana, PostHog, and more.
DBmaestro announced the launch of its Model Context Protocol (MCP) server. The solution exposes the company’s database release automation, source control, CI/CD orchestration, and compliance capabilities to AI agents and enterprise copilots. Teams can now use natural language to trigger real DBmaestro workflows, such as automating project creation, multi-environment deployments, package management, and release orchestration. All of this can be accomplished while preserving enterprise governance, role-based access control, and full auditability.
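In general, an MCP server of this kind exposes workflows as named tools that an agent invokes with structured arguments, with the server enforcing which tools exist. The sketch below shows that dispatch pattern in miniature; the tool names and payload shapes are invented for illustration and are not DBmaestro's actual API.

```python
# Conceptual sketch of MCP-style tool dispatch: the server advertises a
# catalog of named tools, and an agent invokes one with JSON arguments.
# Tool names and payloads here are hypothetical, not DBmaestro's API.

import json

def create_project(name):
    return {"status": "created", "project": name}

def deploy(project, environments):
    return {"status": "deployed", "project": project,
            "environments": environments}

# The registry doubles as an allowlist: only advertised tools can run.
TOOLS = {"create_project": create_project, "deploy": deploy}

def handle_request(raw):
    """Dispatch a JSON tool call, rejecting unregistered tools."""
    req = json.loads(raw)
    tool = TOOLS.get(req["tool"])
    if tool is None:
        return {"error": f"unknown tool: {req['tool']}"}
    return tool(**req["args"])

print(handle_request(
    '{"tool": "deploy", "args": {"project": "billing", '
    '"environments": ["qa", "prod"]}}'
))
```

The allowlist-style registry is where governance hooks naturally attach: role checks and audit logging can wrap `handle_request` without touching the tools themselves.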
Dremio announced that Apache Iceberg V3 support is now available in Dremio Cloud. Iceberg V3 is designed to support more diverse and complex data types, offer greater control over schema evolution, and deliver performance enhancements for large-scale, high-concurrency environments. Dremio’s V3 integration advances handling of semi-structured data, row-level changes, and schema evolution, with full support in Dremio Cloud, including the VARIANT data type for JSON, deletion vectors for faster CDC (change data capture), and improved schema evolution.
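Deletion vectors are worth a moment of explanation: rather than rewriting a data file to remove rows, a delete records the positions of deleted rows, and readers filter them out at scan time. The toy below illustrates the idea only; it is a simplification, not PyIceberg or Dremio code.

```python
# Toy illustration of the deletion-vector idea in Iceberg V3: a delete
# marks row positions in a bitmap-like set instead of rewriting the data
# file, and the reader applies the mask at scan time. Simplified sketch.

data_file = ["row0", "row1", "row2", "row3", "row4"]

# Deletion vector: positions in this file whose rows are logically deleted.
deletion_vector = {1, 3}

def scan(rows, deleted_positions):
    """Return only rows whose position is not marked deleted."""
    return [row for pos, row in enumerate(rows)
            if pos not in deleted_positions]

print(scan(data_file, deletion_vector))  # ['row0', 'row2', 'row4']
```

This is why deletion vectors speed up CDC workloads: frequent small deletes become cheap metadata writes instead of file rewrites.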
Lucidworks announced the launch of its Model Context Protocol (MCP) server, designed to help enterprises seamlessly and securely connect AI agents to crucial enterprise data with little custom integration required. Lucidworks MCP key features include proprietary retrieval quality, enterprise security controls, self-hosted deployment options for data sovereignty and compliance requirements, and an API-first architecture to support custom AI applications and integrations.
OpenNebula Systems announced the release of OpenNebula 7.2, a major update designed to support production-scale sovereign clouds and AI factories. The new release introduces deeper automation, hardware-rooted security, and high-performance orchestration capabilities for GPU-accelerated systems and high-speed networking to address the operational requirements of AI, HPC, and regulated cloud environments. To that end, the release integrates NVIDIA Fabric Manager, enabling optimized orchestration of NVSwitch and NVLink interconnects for large multi-GPU topologies. OpenNebula 7.2 is also validated with NVIDIA Spectrum-X networking platforms, supporting low-latency Ethernet fabrics optimized for large-scale AI clusters.
OrbitronAI announced the launch of NovaOS, a platform designed to support the deployment and management of AI agents in regulated industries. The system introduces a structured approach to AI operations, focusing on auditability, human oversight, and compliance. NovaOS acts as a control layer on top of existing enterprise systems, allowing organizations to manage how AI agents are deployed and operated without replacing current infrastructure.
Nasuni unveiled its expanded brand and product strategy, including new Active Everywhere and AI Activate offerings. Resilio Active Everywhere v6 (preview in Q2, GA in Q3 2026) enhances integration with the Nasuni platform, delivering edge teams LAN-speed access to governed file data without WAN optimization appliances or proprietary caching hardware. AI Activate (invite-only preview in Q2, GA in Q4 2026) extends the value of the Nasuni platform via the Model Context Protocol (MCP) to AI agents and large language models with the same governed, permission-aware access that enterprise teams rely on every day.
Nutanix announced new capabilities for the Nutanix Cloud Platform (NCP) solution designed to help organizations operate reliably as AI workloads expand and cloud environments grow more complex. The updates include the Nutanix Agentic AI solution, NKP Metal (in early access), Nutanix Unified Storage (NUS) 5.3, and an updated Nutanix Data Lens 2.0 solution that is generally available now. Additionally, Nutanix and MongoDB announced a certified integration between Nutanix Database Service and MongoDB Ops Manager that is built on MongoDB’s third-party backup integration model.
Redpanda announced the general availability of four new components in Redpanda Connect: an Amazon DynamoDB change data capture (CDC) input, an Oracle CDC input, and both a processor and an output for Salesforce. Designed to run efficiently in any Kubernetes environment, these connectors enable enterprises to bypass the fragile middleware and heavy infrastructure traditionally required to stream changes from their most critical systems into the Redpanda platform.
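What a CDC input actually emits, and what a downstream consumer does with it, can be shown in a few lines: a stream of insert/update/delete change events is replayed against a keyed table to reconstruct current state. The event shape below is illustrative only, not Redpanda Connect's wire format.

```python
# Minimal sketch of consuming a CDC stream: change events from a source
# database are applied in order to materialize the table's current state.
# The event schema here is invented for illustration.

def apply_cdc(events):
    """Replay a change stream into a keyed table."""
    table = {}
    for ev in events:
        if ev["op"] in ("insert", "update"):
            table[ev["key"]] = ev["value"]
        elif ev["op"] == "delete":
            table.pop(ev["key"], None)
    return table

stream = [
    {"op": "insert", "key": "u1", "value": {"name": "Ada"}},
    {"op": "insert", "key": "u2", "value": {"name": "Alan"}},
    {"op": "update", "key": "u1", "value": {"name": "Ada L."}},
    {"op": "delete", "key": "u2", "value": None},
]
print(apply_cdc(stream))  # {'u1': {'name': 'Ada L.'}}
```

Ordering matters here: the same events applied out of order would yield a different table, which is why CDC connectors go to such lengths to preserve per-key ordering.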
Snowflake made several open source and interoperability-related announcements at this week’s Iceberg Summit. The announcements included expanded support for the latest open table format. To that end, Snowflake is delivering broad production-ready support for Apache Iceberg V3. The company also announced continued investments in open source projects like Apache Polaris and the Open Semantic Interchange.
Partnerships, collaborations, and more
Rafay Systems announced that Argentum AI selected the Rafay Platform to power its expanding AI infrastructure business. The partnership enables Argentum to deliver differentiated, customized compute environments to the world’s largest AI operators, including hyperscalers, neoclouds, and enterprise-scale GPU offtakers, through a single unified software orchestration layer.
Rebellions announced a collaboration with SK Telecom (SKT) and Arm to develop AI inference infrastructure designed to support sovereign AI and telecommunications-focused AI data centers. Through this collaboration, the companies plan to develop an AI server combining Arm AGI CPU, the first Arm-designed data center CPU, with Rebellions’ AI chips. The system will be validated in SKT’s AI data center environment before expanding to global markets.
SambaNova announced the next phase of its collaboration with Intel that focuses on a heterogeneous hardware solution, which combines GPUs for prefill, Intel Xeon 6 processors as both host and “action” CPUs, and SambaNova RDUs for decode to deliver premium inference for the most demanding Agentic AI applications. The design will be made available in H2 2026 to enterprises, cloud providers, and sovereign AI programs that want to run coding agents and other agentic workloads at scale.
Virtana announced AI Factory Observability for Nutanix Agentic AI environments, extending system-aware observability across Nutanix Cloud Infrastructure and Nutanix Enterprise AI. The solution expands AI Factory Observability from Nutanix Cloud Infrastructure into Nutanix Enterprise AI, extending visibility and control from the infrastructure layer into AI platforms and model-driven workloads.
If your company has real-time analytics news, send your announcements to ssalamone@rtinsights.com.
In case you missed it, here are our most recent weekly real-time analytics news roundups: