Real-time Analytics News for the Week Ending March 21

In this week’s real-time analytics news: NVIDIA and its partners made numerous AI-related announcements at this week’s annual GTC event.

Mar 22, 2026

Keeping pace with news and developments in the real-time analytics and AI market can be a daunting task. Fortunately, we have you covered with a summary of the items our staff comes across each week. And if you prefer it in your inbox, sign up here!

NVIDIA and its partners made numerous AI-related announcements at this week’s annual GTC event held in San Jose, CA. Top NVIDIA announcements include:

  • The introduction of its Agent Toolkit, which is an open-source platform for building autonomous AI agents. The toolkit provides the models, the runtime, the security framework, and the optimization libraries that AI agents need to operate autonomously inside organizations. Each is optimized for NVIDIA hardware.
  • The introduction of NVIDIA Dynamo 1.0, which is open-source software for generative and agentic inference at scale. Together with the NVIDIA Blackwell platform, Dynamo 1.0 enables cloud providers, AI innovators, and global enterprises to deliver high-performance AI inference.
  • The launch of the NVIDIA Vera CPU, which is a processor purpose-built for the age of agentic AI and reinforcement learning. The company claims Vera is a new class of CPU that delivers higher AI throughput, responsiveness, and efficiency for large-scale AI services such as coding assistants, as well as consumer and enterprise agents.
  • The introduction of the NVIDIA Vera Rubin platform, which brings together the NVIDIA Vera CPU, NVIDIA Rubin GPU, NVIDIA NVLink 6 Switch, NVIDIA ConnectX-9 SuperNIC, NVIDIA BlueField-4 DPU, and NVIDIA Spectrum-6 Ethernet switch, as well as the newly integrated NVIDIA Groq 3 LPU.
  • The release of the Vera Rubin DSX AI Factory Reference Design and Omniverse DSX Digital Twin Blueprint with broad industry support. The new NVIDIA Vera Rubin DSX AI Factory reference design provides a guide for building codesigned AI infrastructure that delivers maximum tokens per watt and accelerated time to first production. The NVIDIA Omniverse DSX Blueprint, now generally available with the NVIDIA Vera Rubin AI Factory reference design, powers digital twins for large-scale AI factory design and simulation.
  • The introduction of the NVIDIA NemoClaw stack for the OpenClaw agent platform. The solution lets users install NVIDIA Nemotron models and the newly announced NVIDIA OpenShell runtime in a single command, adding privacy and security controls to make self-evolving, autonomous AI agents, or claws, more trustworthy, scalable, and accessible to the world.

Major partner announcements at NVIDIA GTC

IBM announced an expanded collaboration with NVIDIA to help enterprises operationalize AI at scale. Specifically, IBM and NVIDIA are collaborating on an open-source integration to increase performance and reduce the cost of extracting intelligence from massive enterprise datasets. For this work, IBM watsonx.data’s SQL engine Presto is accelerated by NVIDIA cuDF to enable faster query execution on large datasets.

Other collaborative efforts include:

  • IBM and NVIDIA are addressing data silo and access issues with Docling from IBM and NVIDIA Nemotron open models. The combination is designed to make intelligent document extraction available at enterprise scale.
  • IBM and NVIDIA are also deepening their partnership across cloud and enterprise consulting to advance clients’ enterprise AI adoption. IBM plans to offer NVIDIA Blackwell Ultra GPUs on IBM Cloud in early Q2 2026 for large-scale training, high-throughput inferencing, and AI reasoning. This technology will also be integrated across Red Hat AI Factory with NVIDIA and VPC servers with enterprise-grade compliance and data residency controls.

Microsoft made multiple announcements that combine accelerated computing with cloud-scale engineering to bring advanced AI capabilities to customers. Key conference news includes:

  • Expanded Microsoft Foundry capabilities to build, deploy, and operate production-ready AI agents on NVIDIA accelerators and open NVIDIA Nemotron models.
  • New Azure AI infrastructure optimized for inference-heavy, reasoning-based workloads, making Azure the first hyperscale cloud to power next-generation NVIDIA Vera Rubin NVL72 systems.
  • Deeper integration across Microsoft Foundry, Microsoft Fabric, and NVIDIA Omniverse libraries and open frameworks to support Physical AI systems from simulation to real‑world operations.

Oracle and NVIDIA jointly announced expanded AI capabilities on Oracle Cloud Infrastructure (OCI) that help redefine scalable AI performance, accelerate vector database operations, and simplify enterprise AI deployment using cloud-native services. To that end, Oracle introduced a next-generation OCI Supercluster powered by the NVIDIA Vera Rubin platform, including NVIDIA Rubin GPUs, NVIDIA Vera CPUs, NVIDIA BlueField-4 DPUs, sixth-generation NVLink, NVIDIA ConnectX-9 SuperNICs, and NVIDIA Spectrum-X Ethernet switches, purpose-built to accelerate next-generation training and high-throughput inference workloads.

Oracle also announced that Oracle AI Database can now use NVIDIA AI infrastructure and NVIDIA cuVS to accelerate large-scale embedding generation and vector index creation, helping reduce time-to-value for AI-driven applications.

Additional partner announcements at the conference

Akamai Technologies unveiled the first global-scale implementation of the NVIDIA AI Grid reference design. By integrating NVIDIA AI infrastructure into Akamai’s infrastructure and leveraging intelligent workload orchestration across its network, Akamai intends to move the industry beyond isolated AI factories toward a unified, distributed grid for AI inference. To that point, the Akamai Inference Cloud implementation of NVIDIA AI Grid intelligently routes AI workloads across its edge, regional, and core footprint to balance latency, cost, and performance.

Anaconda unveiled a major expansion of its partnership with NVIDIA, bringing NVIDIA’s Nemotron 2 and Nemotron 3 models to Anaconda’s AI Catalyst. Additionally, Anaconda announced it has integrated NVIDIA technology within the full enterprise AI stack, from GPU-accelerated Python environments to open models for agentic AI. This includes official support for DGX Spark, an increased release cadence, ongoing framework support updates for CUDA, and more.

Cisco announced a major expansion of its Secure AI Factory with NVIDIA, giving customers a framework for deploying AI across their entire infrastructure. Enterprises, neoclouds, sovereign clouds, and service providers can now move AI from pilot to full-scale production without stitching together disconnected systems, compressing deployment timelines from months to weeks and embedding security from the start.

Cognite announced the integration of NVIDIA’s NV-Tesseract family of models into the Cognite AI and Data Platform to operationalize foundational forecasting models for heavy industry. This integration leverages the context-rich data within Cognite’s platform to power NVIDIA’s advanced time-series AI, delivering unprecedented predictive accuracy for critical manufacturing processes. The integration connects the Industrial Knowledge Graph within Cognite Data Fusion to NVIDIA’s NV-Tesseract models packaged as NVIDIA NIM microservices.

Crusoe announced a deepened strategic collaboration with NVIDIA spanning models, inference, and physical infrastructure. Specifically, Crusoe announced support for the NVIDIA Vera CPU, NVIDIA Nemotron 3 Super, and NVIDIA Nemotron 3 VoiceChat; added support for the NVIDIA Omniverse DSX Blueprint; and contributed the Crusoe Managed Inference tokenizer to the NVIDIA Dynamo open-source framework.

DDN announced several new technologies aimed at solving a key challenge in large-scale AI systems: keeping massive GPU clusters continuously fed with data for training and inference. This includes new capabilities that accelerate AI inference with KV cache loading, enabled by deep integration with NVIDIA Dynamo and GPUDirect Storage. Other announcements include IndustrySync Pipelines and DDN Horizon, which is orchestration software designed to turn GPU clusters into revenue-ready AI-as-a-service platforms.

Dell Technologies announced the Dell AI Data Platform with NVIDIA advancements that help enterprises discover and activate enterprise data while delivering extreme storage performance to power AI applications and autonomous AI agents. Specifically, the Dell AI Data Platform with NVIDIA activates enterprise data for AI while maintaining security, governance, and best-in-class performance at scale. 

EnterpriseDB (EDB) announced expanded integrations with NVIDIA cuDF for Apache Spark that accelerate Postgres on NVIDIA AI infrastructure, delivering the performance, economics, and operational predictability required for enterprise-grade agentic AI deployments. The enhanced capabilities are delivered through EDB Postgres AI (EDB PG AI), EDB’s sovereign deployment option for secure enterprise operations. 

Everpure (formerly Pure Storage) announced Evergreen//One for FlashBlade//EXA and the upcoming beta of Everpure Data Stream to help organizations reduce cost and complexity barriers that stall enterprise AI projects. Evergreen//One (EG1) for AI now extends across FlashBlade//EXA, providing the massive performance, scalability, and throughput required for large-scale training and inference. Complementing this, the Everpure Data Stream Beta, which will launch later in 2026, accelerates time-to-result by eliminating the friction of manual data movement with a direct, automated pipeline from data ingestion to inference.

GMI Cloud announced an ongoing global initiative to architect and deploy sovereign AI Factories for countries worldwide. As the critical backbone of these initial buildouts, GMI Cloud is bringing significant capacity of the newly announced NVIDIA Vera Rubin NVL72 online, establishing a standard for national-scale artificial intelligence deployments. This initiative for sovereign AI Factory buildouts is already underway.

Hammerspace announced the general availability of its new AI Data Platform (AIDP) solution. AIDP is a turnkey approach that provides seamless access to distributed enterprise datasets. It overcomes data gravity by continuously identifying the data that matters, orchestrating it efficiently to GPUs, and enabling processing where it’s most optimal, whether that’s local GPU resources near the data or centralized GPUs at scale.

Hitachi Vantara announced new capabilities across the Hitachi iQ portfolio, including enhanced AI blueprints and multi-agent coordination in Hitachi iQ Studio, expanded NVIDIA AI infrastructure options, and deeper data integration to support agentic AI in on-premises and virtualized environments. Together, these enhancements position Hitachi iQ as a comprehensive, enterprise-ready AI solution, enabling customers to build and manage AI agents within their own environments.

HPE announced the HPE AI Grid, an end-to-end solution built on the NVIDIA reference architecture to securely connect AI factories and distributed inference clusters across regional and far‑edge sites. The HPE AI Grid enables service providers to deploy and operate thousands of distributed inference sites, turning AI installations into a single intelligent system.

In other HPE news, the company announced innovations to the NVIDIA AI Computing by HPE portfolio focused on large-scale AI factories and supercomputers that enable customers to scale, deploy efficiently, and gain faster time-to-insight. The full-stack AI solutions with NVIDIA include tightly integrated compute, GPUs, networking, liquid cooling, software, and services designed for at-scale and sovereign environments.

Lenovo unveiled the new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption, reduce time-to-first-token (TTFT), and deliver measurable business results across personal, enterprise, and cloud environments. This next phase of Hybrid AI execution expands the Lenovo Hybrid AI Advantage with NVIDIA from device to data center to gigawatt-scale AI cloud deployments.

MinIO announced that MinIO AIStor will support object data stores for the NVIDIA STX reference architecture. Designed with the NVIDIA STX rack-scale reference architecture, AIStor delivers a unified, high-performance datastore that powers the full AI lifecycle, from large-scale model training to enterprise RAG and real-time agentic inference.

NetApp announced two new innovations aimed at helping enterprises overcome key barriers to scaling AI. NetApp introduced next-gen EF-Series storage systems built for performance-intensive workloads like AI and HPC, and launched NetApp AI Data Engine (AIDE), a new AI data platform stack co-engineered with NVIDIA to help enterprises discover, govern, and activate data across the AI pipeline.

Nutanix announced the Nutanix Agentic AI solution, a full software stack purpose-built to help customers accelerate adoption of agentic AI for business transformation. The solution integrates with NVIDIA AI Enterprise at the Agent Builder layer and orchestrates the NVIDIA-certified ecosystem of AI factories for supported configurations. Nutanix and NVIDIA are also working together to build the foundation for autonomous agents in the enterprise through integration with the NVIDIA Agent Toolkit.

Starburst announced optimizations for the NVIDIA Vera CPU. Specifically, Starburst delivers hybrid, federated, and governed data access at inference speed directly where the data lives, across lakes, warehouses, and operational systems, without movement or duplication. Starburst customers will gain access to breakthrough query performance, lower-latency AI inference, and significant cost efficiencies once Vera is available later in 2026.

VAST Data announced the availability of VAST Foundation Stacks, a new open-source library that augments and extends NVIDIA AI Blueprints into production-ready pipeline templates, enabling organizations to deploy and operate NVIDIA-powered pipelines natively on the VAST AI Operating System.

VDURA announced three major advances at NVIDIA GTC 2026: the availability of Remote Direct Memory Access (RDMA) capability, the upcoming first phase of its Context-Aware Tiering technology planned for later this year, and optimized infrastructure configurations for the VDURA Data Platform built on AMD EPYC Turin processors and NVIDIA ConnectX-7 high-speed networking adapters. Together, these advances offer users more performance from their GPU infrastructure and greater storage efficiency.

WEKA announced the general availability of its enterprise-ready NeuralMesh AI Data Platform (AIDP), which delivers composable, high-performance infrastructure optimized for AI Factory deployments. Based on the NVIDIA AI Data Platform reference design, the solution is an end-to-end system that accelerates the delivery of AI-ready data to AI factories.

ZEDEDA unveiled its Edge Intelligence Platform, providing a solution that can create, deploy, and secure edge AI at scale. With the release of the Edge Intelligence Platform, enterprises can now orchestrate the complete edge AI lifecycle in a single control plane and API, defining agent behavior, versioning models, optimizing inference, and managing distributed edge infrastructure.

Real-time analytics news in brief

Snowflake announced the research preview of Project SnowWork, a new autonomous enterprise AI platform designed to help business users massively accelerate workflows. Acting as a proactive AI partner, Project SnowWork empowers individuals and teams to simply ask for what they need and have Project SnowWork securely complete multi-step tasks based on conversational prompts. Currently, Project SnowWork can:

  • Plan and autonomously execute simple or complex multi-step workflows across governed Snowflake data to deliver finished outputs.
  • Generate analysis with recommended actions, turning insights into prioritized next steps tailored to each business role.
  • Securely orchestrate data, AI, and enterprise systems to complete tasks end-to-end, reducing backlogs and accelerating decisions.

BMC announced new AI innovations for the Control-M solution that advance the orchestration foundation enterprises need in an AI-powered world. The solution helps teams build, run, and manage workflows more efficiently. The Control-M solution also expands its AI orchestration integrations, making it easier for organizations to orchestrate AI agents and AI-powered tasks alongside data pipelines, applications, and operational workflows. New integrations include solutions from CrewAI, LangGraph, and Snowflake Cortex.

Boomi announced new capabilities within the Boomi Enterprise Platform. The Boomi Enterprise Platform now adds new semantic context to help AI agents operate on grounded business realities, expands governed SAP data movement with change data capture, enhances transparency and oversight across agentic workflows, and introduces a dedicated European platform instance for localized data control.

Franz Inc. announced AllegroGraph v8.5, with an enhanced AI-powered Natural Language Query interface. The new release combines knowledge graphs, vector embeddings, and neuro-symbolic reasoning to provide the semantic layer needed for AI agents to interpret data meaningfully and deliver more accurate, explainable results. New capabilities in AllegroGraph v8.5 include optimized Natural Language Query (NLQ), expanded MCP support, faster vector processing, enhanced observability, and production-ready AI semantic graph infrastructure.

Kioxia America announced the development of its Super High IOPS SSD, a new type of SSD enabling the GPU to directly access high-speed flash memory as an expansion to High Bandwidth Memory (HBM) in AI systems. The new Super High IOPS SSD, the KIOXIA GP Series, is purpose-built to meet the growing performance demands of AI and high-performance computing, providing larger GPU-accessible memory capacity and faster data access for AI workloads.

Kore.ai announced the launch of its Agent Management Platform (AMP), a unified command center designed to govern, monitor, and manage AI agents and AI systems across the enterprise. The solution provides enterprises with a single operational layer to manage AI systems across frameworks, clouds, and development environments, including LangGraph, CrewAI, AutoGen, Google ADK, AWS AgentCore, Microsoft Foundry, Salesforce Agentforce, and proprietary systems. It also consolidates AI observability, governance enforcement, performance monitoring, and value measurement.

Orange Business announced the launch of Live Intelligence Studio, a new capability within its Live Intelligence platform designed to help organizations step into the Agentic AI era. The platform enables enterprises and public sector organizations to design, deploy, and govern autonomous AI agents securely within its trusted infrastructure. By leveraging Live Intelligence Studio, customers benefit from AI agents capable of handling intricate tasks in a secure and compliant environment.

R Systems announced the launch of EXIQO, an AI Studio designed to enable enterprises to scale production-grade agentic AI across business and technology functions. EXIQO combines enterprise context, embedded guardrails, and human oversight to deliver governed, enterprise-grade execution at scale. The solution is built to accelerate engineering velocity. It brings together 1,400+ AI-native engineers, R Systems’ proprietary OptimaAI Suite, and a governed execution methodology.

ThoughtSpot announced the launch of Spotter for Industries, delivering domain-specific intelligence to organizations operating in highly specialized sectors. The solution extends ThoughtSpot’s agentic analyst, Spotter, with deep industry context, allowing it to understand the language of each specific industry. This ensures every insight is grounded in a trusted analytics foundation that delivers deterministic (consistent and repeatable) results and turns complex business data into industry-specific insights, strategic recommendations, and immediate action.

Partnerships, collaborations, and more

Accenture and Databricks announced a strategic expansion of their partnership. To that end, the companies are launching the Accenture Databricks Business Group, with a focus on helping clients adopt Databricks as their core data and AI platform. Together, the companies will help clients leverage Databricks’ newest innovations, including Lakebase for serverless Postgres databases built for AI, Genie to let any employee chat with their data, and Agent Bricks for high-quality agents built on enterprise data.

InfluxData and Amazon Web Services (AWS) announced added support for clusters of up to 15 nodes on Amazon Timestream for InfluxDB 3, along with a one-click migration path from InfluxDB 3 Core (open source) to Enterprise. For developers building real-time systems, the 15-node cluster support removes a major scaling ceiling, allowing teams to run high-concurrency workloads without dashboards or alerts slowing down, ingest millions of data points per second while still querying in real time, and more.

StorMagic announced a partnership with HiveRadar to deliver a fully integrated edge computing solution designed for mobility, resilience, and high performance. The joint offering combines StorMagic’s SvHCI software and HiveRadar’s Portable Edge Data Center (P-EDC) and is purpose-built for organizations that require secure computing in remote, mobile, and off-grid environments.

If your company has real-time analytics news, send your announcements to ssalamone@rtinsights.com.

Salvatore Salamone

Salvatore Salamone is a physicist by training who writes about science and information technology. During his career, he has been a senior or executive editor at many industry-leading publications including High Technology, Network World, Byte Magazine, Data Communications, LAN Times, InternetWeek, Bio-IT World, and Lightwave, The Journal of Fiber Optics. He also is the author of three business technology books.
