- Aria Networks announces the general availability of its Deep Networking solution, a fundamentally new approach combining hardened SONiC, end-to-end telemetry, and intelligent agents across every layer of the stack – designed from the ground up for the AI factory era to maximize Model Flop Utilization and token efficiency.
- Aria Networks Raises $125M to Build Networks that Think – Backed by Sutter Hill Ventures, Atreides Management, Valor Equity Partners, and Eclipse Ventures.
- Gavin Baker of Atreides Management joins Aria Networks' board, alongside Stefan Dyckerhoff of Sutter Hill Ventures and the founding team.
Today, Aria Networks announces the general availability of the Networks that Think – the world’s first AI-native network built from the ground up to maximize Token Efficiency. At the core of the network is Deep Networking, a fundamentally different approach to how networks operate.
This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20260407939233/en/

Token Efficiency is the defining metric of the AI factory era and the single best proxy for whether an AI cluster is delivering on its investment. It directly relates to Model Flop Utilization (MFU) and cost per token: gains in MFU and reductions in cost per token translate directly into revenue. And as tokens become the currency of intelligence, we empower operators to become the lowest-cost producers in the market, turning infrastructure efficiency into a competitive advantage.
The network is at the center of this equation, not merely as a bottleneck but as a potential multiplier. When the network underperforms, it drags down every other component in the stack. When it is optimized, it lifts them all. While the network comprises only 10-15% of the total cluster cost, its impact is substantial. A mere 1% improvement in MFU recoups the entire cost of the network.
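The economics behind that claim can be sketched with a simple back-of-envelope calculation. All figures below are illustrative assumptions, not Aria's published data:

```python
# Back-of-envelope check of the "1% MFU improvement recoups the
# network" claim, using purely hypothetical numbers.

cluster_cost = 500e6                          # total AI cluster cost, USD (assumption)
network_share = 0.12                          # network as ~10-15% of cluster cost
network_cost = cluster_cost * network_share   # $60M

baseline_mfu = 0.40                           # illustrative training MFU
improved_mfu = 0.41                           # one additional MFU point
relative_gain = (improved_mfu - baseline_mfu) / baseline_mfu  # ~2.5% more tokens

# Token output, and hence revenue, scales with MFU, so the improvement
# adds `relative_gain` of lifetime revenue. The claim holds whenever:
#   relative_gain * lifetime_revenue >= network_cost
breakeven_revenue = network_cost / relative_gain
print(f"Break-even lifetime revenue: ${breakeven_revenue / 1e9:.1f}B")
# -> $2.4B, i.e. ~4.8x the cluster cost over its lifetime, for a
#    single MFU point to pay for the entire network.
```

Under these assumed figures, even one MFU point represents tens of millions of dollars of effective capacity over a cluster's life, which is why the network's 10-15% cost share understates its leverage.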
Suboptimal network performance prevents the full realization of gains from every other infrastructure investment: in training, it affects how quickly gradients are synchronized; in disaggregated inference, it affects how efficiently KV caches are transferred and how seamlessly jobs are scheduled across thousands of xPUs. Inference clusters in particular are growing larger and more complex, introducing bigger networking challenges – not just for the backend, but also for the frontend.
AI factories seek solutions that let them produce tokens more efficiently, at the lowest cost, enabling both the fastest production and the cheapest consumption of intelligence.
Aria was built to unlock this leverage. Deep Networking is our answer, a fundamentally different approach that turns the network from a constraint into a competitive advantage.
Legacy networking solutions treat telemetry as an afterthought and rely on static configurations designed for a different era. Deep Networking changes that. It is built on five pillars, all of which must be present to deliver the desired outcome:
- AI-optimized hardware and hardened SONiC. Aria's switch platform, built from the ground up on AI-native SONiC, delivers leading 800GbE and 1.6T switching in liquid-cooled and air-cooled form factors.
- Fine-grained, end-to-end telemetry. 100–10,000x finer resolution than traditional tools, collected across switches, transceivers, and hosts in a single unified view.
- Intelligent agents at every layer. Specialized agents evaluate signals, extract insights, and take action at the appropriate resolution – from the switching ASIC all the way up to cloud orchestration.
- Networking expertise built in. Every agent and every decision is grounded in deep networking domain knowledge – the system doesn’t just see data, it understands what it means.
- Continuous updates. New capabilities are delivered seamlessly and continuously, keeping the network at the forefront of performance for every new workload.
The combination of these five elements creates a flywheel: the more workloads the system sees, the smarter it gets – delivering a seamlessly optimized network.
Deep Networking is not just a technology architecture; it is a set of outcomes that operators experience from day one:
- Seamless, automatic network fine-tuning. The platform continuously fine-tunes every aspect of the networking fabric for the specific cluster it serves, without manual intervention – across routing, load balancing, congestion management, and failover – eliminating the manual, error-prone workflows that slow down traditional deployments.
- Intent-based configuration. Operators express what they need, and the platform configures the fabric accordingly.
- Real-time, adaptive performance optimization. The system doesn’t wait for a ticket or a threshold breach. It continuously evaluates network state and takes action in real time to keep accelerators productive and every token flowing. The network adapts to each workload, each topology, each failure condition, automatically.
- Agentic partnership with operators. Operators gain fine-grained telemetry data at their fingertips, are alerted to issues as they arise, can ask questions about any alert in natural language, and collaborate with Aria’s agents to devise strategies for resolving issues or optimizing performance. This is not a black box, it is a partnership.
- Embedded Field Deployment Engineers. Aria’s FDEs are not a professional services add-on. They are an extension of the Aria solution itself, embedded directly within the customer’s team, managing the full lifecycle from architecture to performance tuning, co-developing alongside them, and integrating the Aria network within the full AI factory stack.
These outcomes give operators a critical advantage: they can remain accelerator-agnostic and extract more value from their accelerator investments. Most importantly, they can scale their clusters linearly while maintaining peak MFU.
Alongside today’s platform announcement, Aria is pleased to announce that Gavin Baker, Managing Partner and CIO at Atreides Management LP, has joined the company’s board of directors, reinforcing the conviction and strategic partnership behind Aria’s mission. Together with Mansour Karam, Subhachandra Chandra, and Stefan Dyckerhoff, he brings decades of networking expertise combined with a cutting-edge AI infrastructure focus. Aria Networks is also pleased to announce that Atreides Management, Valor Equity Partners, and Eclipse Ventures join Sutter Hill Ventures as investors in Aria Networks.
Ethernet has become the dominant fabric for new AI back-end deployments, driven by its openness, ubiquity, and multi-vendor scalability. Liquid cooling adoption is projected to reach 76% of AI servers this year as rack densities quickly approach 1MW. And the transition to 1.6T is accelerating faster than 800G ever did, with over 22 million ports expected to ship by 2027. Aria's switch platform addresses all three shifts, delivering leading 800GbE and 1.6T switching in liquid-cooled and air-cooled form factors with no vendor lock-in.
Aria Networks is poised to redefine how AI infrastructure is built, deployed, and optimized at scale. As demand for high-performance AI continues to accelerate, Aria Networks remains committed to pushing the boundaries of network intelligence by helping customers unlock greater efficiency, maximize accelerator performance, and drive down the cost of innovation in the AI factory era.
Aria Networks already has customer orders in hand and is actively deploying. For product inquiries, please contact sales@arianetworks.com.
About Aria Networks
The networking industry was built for a different era. Aria Networks was built for this one. Founded in 2025 and headquartered in Palo Alto, Aria Networks is building the networking company for the AI era, from scratch, with AI at the center of everything. Aria Networks’ approach, Deep Networking, combines hardened SONiC, end-to-end telemetry, intelligent agents, deep domain context, and continuous cloud-delivered updates to maximize token efficiency. Aria Networks is backed by Sutter Hill Ventures, Atreides Management, Valor Equity Partners, and Eclipse Ventures.
To learn more, follow us on LinkedIn or visit arianetworks.com.
Supporting quotes:
“The network has become a key obstacle in AI infrastructure. Deep Networking changes that – and the economics prove it: a 10% gain in tokens per second is a 10% gain in revenue. What this team has built and shipped in such a short time is extraordinary.” — Mansour Karam, Founder & CEO, Aria Networks
“Networking is one of the most consequential bets in the AI infrastructure stack – and Aria is getting it right. They identified a real problem, built a differentiated solution around a metric that operators actually care about, and they already have customer orders in hand. Deep Networking is a category-defining approach, and I'm excited to help Aria deliver it.” — Gavin Baker, Managing Partner & CIO, Atreides Management LP
“Aria has done something rare – identified a measurable problem and built a differentiated solution that customers are already ordering. Our conviction is grounded in the technology, the momentum, and above all, the founders.” — Stefan Dyckerhoff, Founding Investor and Board Member, Aria Networks
“Proprietary fabrics are a thing of the past. With its 1.6Tbps launch, combined with a telemetry-centric software architecture, Aria Networks is proving that the highest-performance AI networks on the planet are being built on a foundation of open, scalable Ethernet such as Broadcom's Tomahawk 6 switch series.” — Hasan Siraj, Vice President of Product Marketing, Core Switching Group, Broadcom
“In my experience architecting high performance fabrics for AI clusters, the biggest bottleneck is the 'blind spots' in the network. I'm personally impressed with how Aria Networks is moving beyond simple detection into true predictive orchestration. By leveraging microsecond level telemetry, they don't just alert you to congestion; they deliver the intelligence to anticipate and prevent it in an inherently bursty traffic environment. It's a powerful shift to an active pilot ensuring maximum efficiency across the entire AI fabric.” — Prakash Sripathy, Vice President, Supermicro
“As AI infrastructure evolves, efficiency and utilization are becoming as critical as scale, placing new demands on the network for visibility, predictability, and control. To meet these demands, AMD is committed to enabling customer choice through an open ecosystem. The AMD Pensando™ Pollara 400 AI NIC, deployed with Aria Networks, helps customers achieve improved performance, deeper insight, and enhanced control over AI network infrastructure.” — Shane Corban, Sr. Director of Product Management, Networking Technology and Solutions Group, AMD
“San Francisco Compute is building a GPU marketplace that allows customers to sell back unused capacity while securing long term offtake agreements for neoclouds. Aria Networks understands that AI has fundamentally changed how data center networks are built. We're excited to partner with them to securely deliver precisely allocated compute to our customers.” — Eric Park, CTO, San Francisco Compute
“Delivering peak performance per dollar is critical to our approach at Positron. Aria Networks is the first networking company we've seen that applies that focus on optimizing Cluster MFU (Model Flop Utilization) and MBU (Model Bandwidth Utilization), which are directly proportional to performance per dollar. Given our focus on AI infrastructure efficiency, the ability to optimize both across the entire stack makes them a natural partner. Aria is the networking platform that makes our hardware perform the way it was designed to.” — Thomas Sohmers, Founder and CTO, Positron
“Managing AI clusters at scale exposed a critical blind spot: the networking layer. Without visibility into the fabric, performance issues remain unresolved, costs spiral fast, and traditional tools simply can't keep up. An AI-first, autonomous approach finally gives us control and measurable savings.” — Dali Kilani, Founder and CTO, Blackfuel.ai
“Amphenol is pleased to partner with Aria Networks to address AI Factories’ most pressing needs: (1) optimized cluster MFU (Model Flop Utilization), through Aria’s Deep Networking approach that critically includes fine-grained telemetry from optical transceivers and other cables and connections and (2) delivering a wide range of connectivity options, which is especially critical with 1.6Tbps Ethernet and 200Gig SERDES speeds.” — Brian Kirk, CTO, Amphenol
“AI clusters demand a ground-up rethinking of the networking stack. Aria Networks meets this need with a clean-slate approach – a team that started with a blank sheet of paper to build what AI operators actually need: Deep Networking, designed to maximize cluster utilization and token efficiency, not just move packets.” — Sameh Boujelbene, Vice President, Dell'Oro Group
“With the launch of its 102.4Tbps portfolio, Aria is solving the AI connectivity bottleneck. By pairing the scale of standard Ethernet with the raw performance of Broadcom's Tomahawk 6 and the power of its telemetry-centric software architecture, Aria is providing the definitive blueprint for the next generation of AI clusters.” — Dylan Patel, Founder & CEO, SemiAnalysis
View source version on businesswire.com: https://www.businesswire.com/news/home/20260407939233/en/
Contacts
Media Contact:
Clifford Yeung
cyeung@wireside.com
Wireside Communications