This episode examines Liquid AI’s LFM2.5-1.2B model family, optimized for edge deployment with sub-gigabyte memory footprints across NPU and CPU hardware; NVIDIA’s release of four open physical-AI models, including Isaac GR00T N1.6 and Cosmos Transfer 2.5, alongside the OSMO orchestration and Isaac Lab-Arena simulation frameworks; and AMD’s Helios rack-scale architecture, which delivers three AI exaflops per rack with projected thousand-fold performance gains by 2027. The briefing also covers quantization-aware training at INT4 precision, robot-policy evaluation in simulation-first workflows, and datacenter GPU roadmaps extending to yottaflop-scale infrastructure requirements.
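As background for the quantization-aware training topic, here is a minimal sketch of the core idea: "fake quantization," where the forward pass rounds weights to the INT4 grid so the network learns to tolerate the rounding error. The function name, scale choice, and toy values are illustrative assumptions, not details of Liquid AI's actual pipeline.

```python
def fake_quant_int4(weights, scale):
    """Simulate INT4 quantization: round each weight to the 16-level
    signed grid [-8, 7], then dequantize back to float."""
    out = []
    for w in weights:
        q = max(-8, min(7, round(w / scale)))  # quantize and clip to INT4 range
        out.append(q * scale)                  # dequantize ("fake" quantization)
    return out

# In quantization-aware training, the forward pass uses these fake-quantized
# weights; the backward pass typically passes gradients straight through the
# rounding step (the straight-through estimator), so training stays in float.
weights = [0.31, -0.82, 0.05, 1.2]
scale = max(abs(w) for w in weights) / 7  # simple symmetric per-tensor scale
print(fake_quant_int4(weights, scale))
```

At inference time the same grid lets the weights be stored as 4-bit integers plus a per-tensor (or per-group) scale, which is how sub-gigabyte footprints become feasible for a 1.2B-parameter model.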