WHAT TO EXPECT

The data movement track is designed for the technologists building groundbreaking solutions to overcome AI data bottlenecks.

Explore the technical design hub of the AI Infra Summit, with deep dives from those creating novel memory approaches, high-performance interconnect architectures, and blisteringly fast storage solutions.

For systems, memory, storage, and networking engineers and architects helping data transfer reach the speed of light.

How Will You Benefit?

EXPAND YOUR DATA INFRA NETWORK

Connect with leading architects, engineers, and infrastructure providers focused on novel memory design approaches, optimized storage performance, and high-speed interconnects in AI systems.

KEEP PACE WITH INFRA ADVANCES

Hear from those building hyperscale systems about the architectural strategies and product developments aimed at reducing data transfer bottlenecks.

SOLVE PRODUCTION-SCALE DATA MOVEMENT CHALLENGES

Examine how end users are addressing bottlenecks across memory, storage, and networking in real-world AI deployments, including performance trade-offs and operational considerations.

FAQs

How is this track different from other infrastructure events?

Traditional events focus on individual components like networking or storage. This track treats data movement as a system-level constraint in AI, examining how memory, storage, and interconnects work together in real deployments.

What topics do the sessions cover?

Sessions focus on:

  • Interconnect limits in AI clusters, including scale-up vs scale-out trade-offs and where systems break first  

  • The shift to co-packaged optics and next-gen switching for AI factories  

  • The AI memory wall, including HBM constraints and memory hierarchy design  

  • How storage architectures are evolving for inference workloads  

What practical takeaways can I expect?

You’ll gain practical insights into how to:

  • Improve GPU utilisation and reduce idle compute  

  • Optimise data pipelines between storage, memory, and compute  

  • Reduce latency in distributed training and inference  

  • Rethink system architecture for AI-scale workloads  

You’ll see real architectures and performance insights directly from hyperscalers, chipmakers, and system builders, covering interconnect, memory, and storage at scale. The focus is on where systems break, how they’re optimised, and what trade-offs are made in production environments, rather than on theoretical designs.