Thank you. Let me make a couple of comments, repeating points I made earlier. Data centers worldwide are going full steam to modernize the entire computing stack with accelerated computing and generative AI. Hopper demand remains strong, and the anticipation for Blackwell is incredible. Let me highlight the top five things about our company.

First, accelerated computing has reached the tipping point. CPU scaling is slowing, so developers must accelerate everything possible. Accelerated computing starts with CUDA-X libraries, and new libraries open new markets for NVIDIA. We released many new libraries, including CUDA-accelerated Polars, pandas and Spark, the leading data science and data processing libraries; cuVS for vector databases, which is incredibly hot right now; Aerial and Sionna for 5G wireless base stations, a whole world of data centers we can now go into; Parabricks for gene sequencing; and AlphaFold2 for protein structure prediction, which is now CUDA-accelerated. We are at the beginning of our journey to modernize a trillion dollars' worth of data centers from general-purpose computing to accelerated computing. That's number one.

Number two, Blackwell is a step-function leap over Hopper. Blackwell is an AI infrastructure platform, not just a GPU. It also happens to be the name of our GPU, but it's an AI infrastructure platform. As we reveal more of Blackwell and sample systems to our partners and customers, the extent of Blackwell's leap becomes clear. The Blackwell vision took nearly five years and seven one-of-a-kind chips to realize: the Grace CPU; the Blackwell dual GPU in a CoWoS package; the ConnectX DPU for east-west traffic; the BlueField DPU for north-south and storage traffic; the NVLink switch for all-to-all GPU communications; and Quantum and Spectrum-X, so that both InfiniBand and Ethernet can support the massive burst traffic of AI. Blackwell AI factories are building-sized computers. NVIDIA designed and optimized the Blackwell platform full stack, end to end: chips, systems, networking, even structured cables, power and cooling, and mountains of software, all to make it fast for customers to build AI factories. These are very capital-intensive infrastructures. Customers want to deploy them as soon as they get their hands on the equipment and deliver the best performance and TCO. Blackwell provides three to five times more AI throughput in a power-limited data center than Hopper.

The third is NVLink. This is a very big deal: its all-to-all GPU switch is game-changing. The Blackwell system lets us connect 144 GPUs in 72 GB200 packages into one NVLink domain, with an aggregate NVLink bandwidth of 259 terabytes per second in one rack. To put that in perspective, that's about 10 times higher than Hopper. 259 terabytes per second makes sense, because you need to boost the training of multi-trillion-parameter models on trillions of tokens, and that amount of data naturally needs to move from GPU to GPU. For inference, NVLink is vital for low-latency, high-throughput large language model token generation. We now have three networking platforms: NVLink for GPU scale-up, Quantum InfiniBand for supercomputing and dedicated AI factories, and Spectrum-X for AI on Ethernet. NVIDIA's networking footprint is much bigger than before.

Fourth, generative AI momentum is accelerating. Generative AI frontier model makers are racing to scale to the next AI plateau to increase model safety and IQ. We're also scaling to understand more modalities, from text, images and video to 3D, physics, chemistry and biology. Chatbots, coding AIs and image generators are growing fast, but they're just the tip of the iceberg. Internet services are deploying generative AI for large-scale recommenders, ad targeting and search systems. AI startups are consuming tens of billions of dollars of CSPs' cloud capacity yearly, and countries are recognizing the importance of AI and investing in sovereign AI infrastructure. NVIDIA AI and NVIDIA Omniverse are opening up the next era of AI: general robotics.

And fifth, the enterprise AI wave has started, and we're poised to help companies transform their businesses. The NVIDIA AI Enterprise platform consists of NeMo, NIMs, NIM Agent Blueprints and AI Foundry, which our ecosystem partners, the world's leading IT companies, use to help customer companies customize AI models and build bespoke AI applications. Enterprises can then deploy on the NVIDIA AI Enterprise runtime. At $4,500 per GPU per year, NVIDIA AI Enterprise is an exceptional value for deploying AI anywhere, and NVIDIA's software TAM can be significant as the CUDA-compatible GPU install base grows from millions to tens of millions. As Colette mentioned, NVIDIA software will exit the year at a $2 billion run rate.

Thank you all for joining us today.
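[Editor's note] The bandwidth and pricing figures quoted in the remarks lend themselves to a quick sanity check. This is a minimal sketch, not company guidance: the 1.8 TB/s per-GPU NVLink bandwidth (the published fifth-generation NVLink figure) and the illustrative 10-million-GPU install base are assumptions not stated in the call; the 144-GPU domain and $4,500 per GPU per year are quoted above.

```python
def nvlink_domain_bandwidth_tb_s(num_gpus: int, per_gpu_tb_s: float) -> float:
    """Aggregate NVLink bandwidth of a scale-up domain, in TB/s."""
    return num_gpus * per_gpu_tb_s


def software_revenue_usd(install_base_gpus: int, price_per_gpu_per_year: float) -> float:
    """Annual software revenue if every installed GPU carried a license."""
    return install_base_gpus * price_per_gpu_per_year


# 144 GPUs in one NVLink domain, assuming 1.8 TB/s of NVLink bandwidth per GPU
bandwidth = nvlink_domain_bandwidth_tb_s(144, 1.8)
print(f"Aggregate NVLink bandwidth: {bandwidth:.1f} TB/s")  # 259.2, matching the quoted 259 TB/s

# $4,500 per GPU per year across a hypothetical 10-million-GPU install base
revenue = software_revenue_usd(10_000_000, 4_500)
print(f"Illustrative software revenue: ${revenue / 1e9:.0f}B per year")  # $45B per year
```

The point of the arithmetic is that both headline numbers are simple products: the 259 TB/s rack figure falls directly out of 144 GPUs times per-GPU link bandwidth, and a tens-of-millions install base at per-GPU pricing is what makes the software TAM "significant" relative to the current $2 billion run rate.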