Ventures in the News

New 'AI Pods' Bring Low-Latency Compute to Smaller US Cities

January 15, 2026

Source: datacenterknowledge.com

Hunter Newby, co-CEO of IXP.us, was quoted in this Data Center Knowledge article on the QAI Moon partnership bringing AI compute to smaller US cities.


by Shane Snider

Modular infrastructure firm Moonshot Energy, QumulusAI, and Connected Nation Internet Exchange Points (IXP.us) have unveiled plans to design and deploy QAI Moon Pods at 25 sites across the US, eventually scaling to 125 cities.

The collaboration combines the firms' carrier-neutral interconnection, modular AI infrastructure, and GPU-as-a-Service to create a scalable national infrastructure for inference and AI workloads, reducing latency and extending AI compute capabilities beyond the reach of traditional hyperscale data centers.

The first deployment will begin by July 2026 on the Wichita State University campus in Kansas, with expansion to 25 other cities expected to follow. The companies have identified up to 125 potential sites at US university research campuses and municipalities.

"We are building internet exchange points and AI models – this is not a data center play, and we're not building megawatts or gigawatts," IXP.us co-CEO Hunter Newby told Data Center Knowledge, adding that the AI pod buildouts take several months, not years.

Modular Approach

The 2,000 kW modular units are designed and built by Moonshot, while GPU infrastructure is handled by QumulusAI. The companies say the AI pods will offer low-latency AI compute at the edge without the constraints of being attached to a large hyperscale data center.

"This partnership represents the physical convergence of power, compute and interconnection at the exact point where AI demand is moving," Ethan Ellenberg, CEO of Moonshot, said in a statement.

Steven Dickens, CEO and analyst at HyperFrame Research, told Data Center Knowledge that the collaboration addresses a need in underserved areas.

"This is bringing inference to the edge and closer," he said. "We're building out these huge training data centers, and they're fantastic for what they're used for – training. But we're probably going to see 10 times the volume of GPUs deployed into inference over the next few years. That's going to computer vision, that's going to smart retail, that's going to smart manufacturing and healthcare. For all these use cases, you need access to the GPUs with good round-trip latency."

He added: "There is a raft of different industries that are going to need AI and they're going to need it locally."

Inference Market Play

The companies saw an opportunity to address inference demand in areas without access to costly data center GPU capacity.

"All workloads are increasingly inference-driven, latency-sensitive, and distributed, but the infrastructure hasn't kept pace," Mike Maniscalo, CEO of QumulusAI, said. "This partnership allows us to place GPU compute directly at the network edge, where data moves and decisions happen... we're building a national platform that makes high-performance AI compute practical, scalable, and economically viable beyond hyperscale data centers."
