HPC centers are at an inflection point. For decades, batch scheduling systems have been the backbone of supercomputing — reliable, predictable, and optimized for throughput. But as scientific discovery increasingly depends on tightly coupled simulation and AI/ML workflows, that same reliability has become a constraint. At HPSFCon 2026, Vanessa Sochat of Lawrence Livermore National Laboratory made the case for a fundamental reimagining of how HPC centers operate.
Her talk, “Converged Computing and Agentic Workflows,” introduced a vision where HPC centers evolve from static resource providers to dynamic, intelligent discovery engines. The catalyst is the convergence of HPC and cloud — not just in infrastructure, but in operating philosophy. Through the Flux Framework, Sochat demonstrated how HPC schedulers can integrate directly with Kubernetes, enabling ephemeral, sovereign “MiniClusters” that deliver cloud-like automation without sacrificing HPC performance.
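The MiniCluster pattern is realized through the Flux Operator, which lets Kubernetes schedule an ephemeral Flux instance as a custom resource. A minimal sketch of what such a manifest might look like follows; the API version, image, and field values here are illustrative assumptions based on the Flux Operator's public documentation, not details presented in the talk:

```yaml
# Hypothetical MiniCluster manifest (sketch, not from the talk).
# The Flux Operator watches for this custom resource and bootstraps
# a short-lived Flux instance across the requested pods.
apiVersion: flux-framework.org/v1alpha2
kind: MiniCluster
metadata:
  name: demo-minicluster
spec:
  size: 4                     # number of pods, each running a Flux broker
  containers:
    - image: ghcr.io/flux-framework/flux-restful-api:latest  # assumed image
      command: flux run -N 4 hostname                        # job to execute
```

Applied with `kubectl apply -f`, a resource like this stands up a sovereign Flux cluster inside Kubernetes, runs the workload, and can be torn down when the job completes, which is the cloud-like ephemerality Sochat describes without giving up HPC-grade scheduling.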
Central to this approach is agentic orchestration — where intelligent systems manage resource allocation, workflow routing, and environment provisioning. This isn’t a future concept. National initiatives like the DOE’s Genesis program are already demanding these capabilities.
What makes this talk compelling is its practicality. Rather than abstract principles, Sochat presented concrete integrations and real-world deployments — showing how the tools to begin this transition exist today. The Flux Framework’s role as connective tissue between HPC schedulers and cloud-native ecosystems positions it as critical infrastructure for the next generation of scientific computing.
View the full playlist from HPSFCon 2026: https://www.youtube.com/playlist?list=PLRKq_yxxHw29oZTboj6fmdYhQMWHUaj4u.