## **October 22–23, 2025**
#### **October 21:** Co-Located Events
### **San Francisco, CA**
#### #PyTorchCon
### **Step into the Future of AI at PyTorch Conference 2025**
Join us for PyTorch Conference 2025, October 22–23, 2025, in San Francisco: the world's premier event dedicated to the framework powering today's most groundbreaking AI innovations. Connect with AI pioneers, researchers, developers, and startup founders through deep-dive technical sessions, panels, and workshops covering AI from bare metal all the way up to the application and agent layers. Our program features keynotes from visionary AI leaders, interactive sessions on scaling and benchmarking models, and special tracks focused on AI safety and ethical development.
Building on last year's success, we've expanded to include a dedicated Measuring Intelligence Summit, Open Agent Summit, and AI Infra Summit on October 21, plus the launch of PyTorch training and certification.
Whether you're an experienced ML engineer, researcher, or developer, PyTorch Conference 2025 is your gateway to the future of AI. Join the community that's creating the AI revolution, not just witnessing it.
##### **Tuesday Events**
* **Measuring Intelligence Summit**
* **Open Agent Summit**
* **AI Infra Summit**
* **Startup Showcase**
* **PyTorch Associate Training**
### Featured Speakers
### Jeremy Howard
#### Founding Researcher
Jeremy Howard is a data scientist, researcher, developer, educator, and entrepreneur. He created ULMFiT, the AI system at the heart of all of today's major language models, including ChatGPT and Google Gemini. Jeremy is the founding CEO of Answer.AI, a new kind of AI R&D lab which creates practical end-user products based on foundational research breakthroughs. He is also the co-founder of fast.ai, a research institute dedicated to making deep learning more accessible, an Honorary Professor at the University of Queensland, and a Digital Fellow at Stanford University. Previously, Jeremy was a Distinguished Research Scientist at the University of San Francisco, where he was the founding chair of the Wicklow Artificial Intelligence in Medical Research Initiative.
Jeremy is a co-founder of the global Masks4All movement, including leading the largest evidence review of masks, published in the Proceedings of the National Academy of Sciences and the most-read paper of all time on preprints.org. He wrote the first article to push for public mask use in the English-speaking world, in the Washington Post, along with articles in The Guardian, The Atlantic, The Conversation, and the Sydney Morning Herald, and he appeared on most of the major national US TV channels, including Good Morning America and Nightline.
He co-authored the book Deep Learning for Coders with Fastai and PyTorch, which has 5 stars on Amazon. Google's Director of Research, Peter Norvig, said "This is one of the best sources for a programmer to become proficient in deep learning." The book is based on Jeremy's free online course, which is the world's longest-running course on AI and deep learning. He also created the fastai software library, one of the world's most popular deep learning frameworks.
Jeremy was the founding CEO of Enlitic, which was the first company to apply deep learning to medicine, and was selected as one of the world's top 50 smartest companies by MIT Tech Review two years running. He was the President and Chief Scientist of the data science platform Kaggle, where he was the top-ranked participant in international machine learning competitions two years running. He was the founding CEO of two successful Australian startups (FastMail and Optimal Decisions Group, purchased by LexisNexis). Before that, he spent 8 years in management consulting at McKinsey & Co and A.T. Kearney. Jeremy has invested in, mentored, and advised many startups, and contributed to many open source projects. His talk on TED.com, "The wonderful and terrifying implications of computers that can learn", has over 2.5 million views.
### Percy Liang
#### Associate Professor of Computer Science
Percy Liang is an Associate Professor of Computer Science at Stanford University (B.S. from MIT, 2004; Ph.D. from UC Berkeley, 2011) and the director of the Center for Research on Foundation Models (CRFM). He is currently focused on making foundation models (in particular, language models) more accessible through open-source and understandable through rigorous benchmarking. In the past, he has worked on many topics centered on machine learning and natural language processing, including robustness, interpretability, human interaction, learning theory, grounding, semantics, and reasoning. He is also a strong proponent of reproducibility through the creation of CodaLab Worksheets. His awards include the Presidential Early Career Award for Scientists and Engineers (2019), IJCAI Computers and Thought Award (2016), an NSF CAREER Award (2016), a Sloan Research Fellowship (2015), a Microsoft Research Faculty Fellowship (2014), and paper awards at ACL, EMNLP, ICML, COLT, ISMIR, CHI, UIST, and RSS.
### Nathan Lambert
#### Senior Research Scientist
Nathan is a machine learning researcher who works on building, understanding, and advocating for open language models and other responsible autonomous systems.
He is currently a post-training lead at the Allen Institute for AI.
### Animashree (Anima) Anandkumar
#### Bren Professor of Computing and Mathematical Sciences
Professor Anandkumar's research interests are in the areas of large-scale machine learning, non-convex optimization, and high-dimensional statistics. In particular, she has been spearheading the development and analysis of tensor algorithms for machine learning. Tensor decomposition methods are embarrassingly parallel and scalable to enormous datasets. They are guaranteed to converge to the global optimum and yield consistent estimates for many probabilistic models such as topic models, community models, and hidden Markov models. More generally, Professor Anandkumar has been investigating efficient techniques to speed up non-convex optimization, such as escaping saddle points efficiently.
### Jim Fan
#### Director of Robotics & Distinguished Research Scientist
I am a Senior Research Scientist at NVIDIA and Lead of the AI Agents Initiative. My mission is to build **generally capable agents across physical worlds (robotics) and virtual worlds (games, simulation)**. I share insights about AI research and industry extensively on **Twitter/X** and **LinkedIn**. You are welcome to follow me!
My research explores the bleeding edge of multimodal foundation models, reinforcement learning, computer vision, and large-scale systems. I obtained my Ph.D. degree at Stanford Vision Lab, advised by Prof. Fei-Fei Li. Previously, I interned at OpenAI (w/ Ilya Sutskever and Andrej Karpathy), Baidu AI Labs (w/ Andrew Ng and Dario Amodei), and MILA (w/ Yoshua Bengio). I graduated as the Valedictorian of Class 2016 and received the Illig Medal at Columbia University.
I spearheaded **Voyager** (the first AI agent that plays Minecraft proficiently and bootstraps its capabilities continuously), **MineDojo** (open-ended agent learning by watching 100,000s of Minecraft YouTube videos), **Eureka** (a 5-finger robot hand doing extremely dexterous tasks like pen spinning), and **VIMA** (one of the earliest multimodal foundation models for robot manipulation). MineDojo won the Outstanding Paper Award at NeurIPS 2022. My works have been widely featured in news media, such as New York Times, Forbes, MIT Technology Review, TechCrunch, The WIRED, VentureBeat, etc.
_Fun fact: I was OpenAI's very first intern in 2016_. During that summer, I worked on **World of Bits**, an agent that perceives the web browser in pixels and outputs keyboard/mouse control. That was way before LLMs became a thing at OpenAI. Good old times!
### Dawn Song
#### Professor
Dawn Song is a Professor in the Department of Electrical Engineering and Computer Science at UC Berkeley. Her research interest lies in deep learning and security. She has studied diverse security and privacy issues in computer systems and networks, in areas ranging from software security, networking security, database security, distributed systems security, and applied cryptography to the intersection of machine learning and security. She is the recipient of various awards including the MacArthur Fellowship, the Guggenheim Fellowship, the NSF CAREER Award, the Alfred P. Sloan Research Fellowship, the MIT Technology Review TR-35 Award, the George Tallman Ladd Research Award, the Okawa Foundation Research Award, the Li Ka Shing Foundation Women in Science Distinguished Lecture Series Award, Faculty Research Awards from IBM, Google, and other major tech companies, and Best Paper Awards at top conferences. She obtained her Ph.D. degree from UC Berkeley. Prior to joining UC Berkeley as a faculty member, she was an Assistant Professor at Carnegie Mellon University from 2002 to 2007.
### Sergey Levine
#### Associate Professor
I am an Associate Professor in the Department of Electrical Engineering and Computer Sciences at UC Berkeley. In my research, I focus on algorithms that can enable autonomous agents to acquire complex behaviors through learning, especially general-purpose methods that could enable any autonomous system to learn to solve any task. Applications of such methods include robotics, as well as a range of other domains that require autonomous decision making.
### Ion Stoica
#### Professor of Computer Science, UC Berkeley; Director of Sky Computing Lab; and Co-founder of Anyscale, Databricks, and Conviva Networks
Ion Stoica is a Professor in the EECS Department and holds the Xu Bao Chancellor Chair at the University of California, Berkeley. He is the Director of the Sky Computing Lab and the Executive Chairman of Databricks and Anyscale. He is currently doing research on AI systems and cloud computing, and his work includes numerous open-source projects such as SkyPilot, vLLM, Chatbot Arena, Ray, and Apache Spark. He is a Member of the National Academy of Engineering, an Honorary Member of the Romanian Academy, and an ACM Fellow. He also co-founded three companies: Anyscale (2019), Databricks (2013), and Conviva (2006).
### Dr. Sharon Zhou
#### Vice President of Artificial Intelligence
Dr. Sharon Zhou is the VP of AI at AMD and was the founder and CEO of Lamini, an AI startup that won last year's VentureBeat Gen AI Startup Award and has been recognized as a Forbes Cloud 100 Rising Star. As a former faculty member at Stanford, she led a 50+ person Generative AI research group and published award-winning papers in generative AI. Sharon teaches some of the most popular AI courses on Coursera, including Fine-tuning LLMs and Diffusion Models, reaching millions of developers and executives. She earned her PhD in AI from Stanford, where she was advised by Dr. Andrew Ng. Before her PhD, she worked as an AI product manager at Google. She received her bachelor's degree in computer science and Classics from Harvard. Additionally, Sharon has served as an AI advisor in Washington, D.C., and has been featured in MIT Technology Review's 35 Under 35 list.
### Eric P. Xing
#### President, MBZUAI; Professor of Computer Science, Carnegie Mellon University; and Chief Scientist, GenBio AI
Professor Eric P. Xing is the President of the Mohamed bin Zayed University of Artificial Intelligence, Professor of Computer Science at Carnegie Mellon University, and Chief Scientist at GenBio AI. His main research interests are the development of machine learning and statistical methodology, and large-scale computational systems and architectures, for solving problems involving automated learning, reasoning, and decision-making in artificial, biological, and social systems. Prof. Xing has served on the editorial boards of leading scientific journals including the Journal of the American Statistical Association, Annals of Applied Statistics, PLOS Computational Biology, IEEE Transactions on Pattern Analysis and Machine Intelligence, Machine Learning Journal, and Journal of Machine Learning Research. He has been elected a Fellow of AAAI, ACM, ASA, IEEE, and IMS.
### Sarah Guo
#### Investor, Founder
### Jerry Liu
#### CEO & Co-founder
Jerry is the co-founder and CEO of LlamaIndex, the data framework for building LLM applications. Before this, he spent his career at the intersection of ML, research, and startups. He led the ML monitoring team at Robust Intelligence, did self-driving AI research at Uber ATG, and worked on recommendation systems at Quora.
### Simon Mo
#### Lead, vLLM & PhD Student
Currently, I'm a PhD student at the Berkeley Sky Computing Lab, working on machine learning systems and cloud infrastructure. I am advised by Prof. Joseph Gonzalez and Prof. Ion Stoica. My latest focus is building an end-to-end stack for LLM inference on your own infrastructure:
* vLLM runs LLM inference efficiently.
Previous exploration includes:
* Conex: builds, pushes, and pulls containers fast.
* SkyATC: orchestrates LLMs across multiple clouds and scales them to zero.
I previously worked on _Model Serving Systems_ at Anyscale.
* Ray takes your Python code and scales it to thousands of cores.
* Ray Serve empowers data scientists to own their end-to-end inference APIs.
Before Anyscale, I was an undergraduate researcher at the UC Berkeley RISELab.
### Noam Brown
#### Research Scientist
I am a Research Scientist at OpenAI working on multi-step reasoning, self-play, and multi-agent AI.
I previously worked at FAIR (Meta), where my teammates and I developed CICERO, the first AI to achieve human-level performance in the strategy game Diplomacy.
I have also applied my research to making the first AI to defeat top humans in no-limit poker. With my CMU advisor, I created Libratus and Pluribus, which defeated top human poker professionals in Human vs. Machine competitions. Libratus received the Marvin Minsky Medal for Outstanding Achievements in AI. Pluribus was on the cover of Science Magazine and was a runner-up for Scienceâs Breakthrough of the Year for 2019. I was also named one of MIT Tech Reviewâs 35 Innovators Under 35.
I received a PhD in computer science from Carnegie Mellon. Before CMU, I worked at the Federal Reserve Board in the International Financial Markets section, where I researched algorithmic trading in financial markets. Before that, I worked in algorithmic trading.
### Dylan Patel
#### Founder, CEO, and Chief Analyst
Dylan is the Founder, CEO, and Chief Analyst of SemiAnalysis, the preeminent authority on all things AI and semiconductors. Through Dylan's unwavering commitment to excellence, he has built the firm from the ground up as the recognized authority on everything from the semiconductor supply chain to the cloud ecosystem, machine learning models, and all things in between. Since 2020, SemiAnalysis has transformed from a solo venture into a cohesive and focused team providing breaking news and in-depth analysis of the most strategic, complex, and escalating challenges in the semiconductor industry.
### Chip Huyen
#### Storyteller
Chip Huyen runs Tep Studio at the intersection of AI, education, and storytelling. Previously, she was with Snorkel AI and NVIDIA, founded an AI infrastructure startup (acquired), and taught Machine Learning Systems Design at Stanford.
She was a core developer of NeMo, NVIDIA's generative AI framework.
Her first English book, Designing Machine Learning Systems (2022), is an Amazon bestseller in AI and has been translated into 10+ languages. Her new book, AI Engineering (2025), has been the most-read book on the O'Reilly platform since its launch.
### Mark Saroufim
#### Software Engineer
Mark Saroufim is a PyTorch Engineer at Meta working on inference, compilers and community.
### Matt White
#### Executive Director
Matt White is the Executive Director of the PyTorch Foundation and GM of AI at the Linux Foundation. He is also the Director of the Generative AI Commons, an open community initiative focused on advancing responsible generative AI under the LF AI & Data Foundation. Matt has nearly 30 years of experience in applied research and standards in AI and data across the telecom, media, and gaming industries. He began his career programming expert systems in the telecom industry and since 2012 has focused on machine learning and research at the intersection of AI, simulations, and multi-sensory learning. Matt is the Co-founder and Chair of the Open Metaverse Foundation at the Linux Foundation. He is also a Chair at the Metaverse Standards Forum, runs the Silicon Valley Generative AI paper reading group, and is a co-organizer at the GenAI Collective.
### Sebastian Raschka, PhD
#### LLM Research Engineer
Sebastian Raschka, PhD, has been working in machine learning and AI for more than a decade. In addition to being a researcher, Sebastian has a strong passion for education. He is known for his bestselling books on machine learning with Python and his contributions to open source.
Sebastian is a Staff Research Engineer at Lightning AI, focusing on implementing and training large language models. Before his industry experience, Sebastian was an assistant professor in the Department of Statistics at the University of Wisconsin–Madison, where he focused on deep learning research. You can learn more about Sebastian at https://sebastianraschka.com.
### Joe Spisak
#### Product Director, Superintelligence Labs
Joe Spisak is a Product Director in Meta's Superintelligence Labs (MSL), where he leads product efforts across PyTorch and Meta's Agentic platform. He has over a decade of experience building AI platforms, with leadership roles at Meta, Google, and Amazon. Joe helped make PyTorch the world's leading open-source AI framework and guided its transition to the Linux Foundation in partnership with industry leaders. He spearheaded the open-source strategy for Llama, making cutting-edge models broadly accessible and enabling the community to scale AI in an open and collaborative way. Joe is also an active angel investor and advisor to next-generation AI startups such as Anthropic, General Reasoning, ReflectionAI, and many others.
### Lysandre Debut
#### Chief Open Source Officer
Lysandre is the Chief Open-Source Officer at Hugging Face, where he ensures that the ecosystem is as well supported as possible throughout the ML lifecycle with open-source tools. He was the first open-source employee at Hugging Face and has spent the past six years there working on transformers and the entire stack of Hugging Face open-source libraries.
### Daniel Han
#### Cofounder
I helped fix bugs in Llama, Mistral, Gemma, Phi, Qwen, and more. We have over 10 million monthly downloads and collaborate with large model labs to release more accurate and bug-free models. Our GitHub package has >40K stars and is used by NASA, the UN, and many others for RL, GRPO, finetuning, inference, and more!
### Robert Nishihara
#### Co-founder
Robert Nishihara is one of the co-founders of Anyscale and one of the co-creators of Ray, the leading open source framework for scalable AI. He did his PhD in machine learning and distributed systems in the computer science department at UC Berkeley. Before that, he majored in math at Harvard.
### Adam Jones
#### Member of Technical Staff
**Adam Jones** is working to make the transition to advanced AI systems go well. He's currently a member of technical staff at Anthropic, helping develop the Model Context Protocol.
Previously, he led AI safety talent programs at BlueDot Impact, including the large-scale AI Safety Fundamentals courses. He also advised the UK Government's Department for Science, Innovation and Technology (DSIT) on AI safety policy.
Outside of work, he enjoys making things! This includes writing blog articles (mainly about AI safety) and building popular open-source tools: his AWS Email Simulator has over 1 million downloads, his YouTube thumbnail-hiding browser extension serves 30,000+ weekly users, and his Airtable MCP server has spurred 60 forks. Finally, he also contributes media to Wikimedia Commons, with his work appearing in textbooks, academic papers, and YouTube videos with millions of views.
When not making new things, he enjoys co-operative board games, video games, sunny weather, and playing capture the flag on Hampstead Heath.
### Yuxin Wu
#### Co-founder
I'm building large multimodal models at Moonshot AI. We're hiring top talent in China and the US.
Prior to this, I worked at Google Brain on foundation models, and at Facebook AI Research on computer vision. I have expertise in research and infrastructure for deep learning and computer vision.
My previous works at FAIR have received Best Paper Honorable Mention in ECCV 2018, Best Paper Nomination in CVPR 2020, and Mark Everingham Prize in ICCV 2021. I also created detectron2, one of the most popular Facebook AI projects.
### Lianmin Zheng
#### Member of Technical Staff
### Lianmin Zheng
#### Member of Technical Staff
I am a member of technical staff at xAI. I lead the inference team responsible for building efficient and reliable infrastructure to run Grok models. My expertise includes machine learning systems, large language models, compilers, and distributed systems.
Previously, I completed my Ph.D. at UC Berkeley, where I was advised by Ion Stoica and Joseph E. Gonzalez. I obtained my B.S. degree from ACM Honored Class, Shanghai Jiao Tong University. I was honored to receive the Meta PhD Fellowship and the a16z Open Source AI Grant (twice) in recognition of my innovative research and impactful open-source projects. I also co-founded the non-profit organization LMSYS.org to advance open research on large models. We have developed open models with millions of downloads, crowdsourced platforms with millions of users, and systems that are orders of magnitude faster.
### Vincent Weisser
#### CEO & Co-Founder
Hey, I'm Vincent. I'm building Prime Intellect to commoditize compute and AGI in order to advance scientific progress and ensure the benefits of AGI are distributed broadly. I'm interested in solving all diseases and automating scientific discovery.
You can contact me at mail[at]vincentweisser.com or via X.
### Bowen Peng
#### Chief Scientist
Bowen Peng is currently the Chief Scientist at Nous Research, where he leads fundamental research on transformers and generative models. With a Master's degree from Mila, his research spans distributed training, transformers, and multimodal generative models. He is the lead author of YaRN (Yet Another RoPE Extension Method), a widely adopted innovation in long-context large language models, and of DeMo (Decoupled Momentum Optimization). Bowen's work focuses on rethinking architectures for multimodal data and large-scale distributed training, and on exploring new paradigms that push beyond today's transformer-dominated landscape.
### Yutaro Yamada
#### Research Scientist
### Chi Wang
#### Senior Staff Research Scientist
### Dmytro (Dima) Dzhulgakov
#### CTO, Co-Founder
### Yineng Zhang
#### Inference Lead
Yineng Zhang serves as the Inference Lead of SGLang and is a core contributor to FlashInfer. He led critical releases enabling inference for models such as Llama 3 and DeepSeek V3, as well as deployments on GB200 NVL72. He is also a selected member of LMSYS Org.
### Thomas Neff
#### Head of Systems Research & Engineering
Thomas leads Systems Research & Engineering at Luma AI, where he works on making large-scale training and inference as efficient as possible, from low-level optimizations to higher-level improvements that scale training and inference workloads to thousands of GPUs.
He has contributed significantly to every product shipped at Luma, including 3D reconstruction and rendering, text-to-3D (Genie), and text-to-video (Dream Machine, Ray2, and more).
### Hongpeng Guo
#### Research Scientist
Hongpeng Guo is a Research Scientist at ByteDance Seed, where he works on developing large scale post-training and reinforcement-learning infrastructures. His research interests encompass large-scale machine learning systems in general. He earned a B.Eng. in Computer Engineering from the University of Hong Kong and a Ph.D. in Computer Science from the University of Illinois at Urbana-Champaign.
### Aakanksha Chowdhery
#### Researcher (Reflection AI) & Adjunct Professor (Stanford University)
I am pushing the frontier of agentic LLMs by leveraging RL techniques to enable autonomous self-improving agents, especially in software engineering at the startup Reflection AI.
At Stanford, I am co-teaching CS329A (Self-Improving AI agents) in Fall/Winter 2025 and I am the Program Chair for MLSys 2026.
Before this, I was the technical lead of the 540B PaLM model and a lead researcher on Gemini at Google, working on pre-training, scaling, and finetuning of large language models. I was also a core contributor to PaLM-E, Med-PaLM, and the Pathways project at Google. Prior to joining Google, I was the technical lead for several interdisciplinary research initiatives at Microsoft Research and Princeton University across machine learning and distributed systems.
I completed my PhD in Electrical Engineering at Stanford University and was awarded the Paul Baran Marconi Young Scholar Award for the outstanding scientific contributions of my dissertation in the field of communications and the Internet.
**Selected Honors & Awards:** Outstanding Paper Award MLSys 2023, Outstanding Paper Award MLSys 2022, Paul Baran Marconi Young Scholar Award 2012.
### Peter Salanki
#### Chief Technology Officer
Peter Salanki is the Chief Technology Officer of CoreWeave, where he spearheads the development of innovative cloud solutions tailored for high-performance workloads. Originally from Sweden, Peter was recruited at just 18 to become Director of Engineering at Bahnhof AB before moving to the United States to lead Solutions Architecture for Procera Networks. Peter joined CoreWeave in 2019 and has played a pivotal role in scaling CoreWeaveâs infrastructure to meet the demands of diverse clients, from AI startups to Fortune 500 companies, making it the worldâs first AI-specific cloud service and reshaping the AI infrastructure landscape.
### Nitin Perumbeti
#### Chief Technology Officer
### Jeff Gehlhaar
#### Senior Vice President of Engineering
Jeff Gehlhaar is Senior Vice President of Engineering at Qualcomm, leading Qualcomm's AI software products. Jeff's team focuses on the Qualcomm AI stack, including all the quantization, optimization, runtime, accuracy debugging, performance analysis, and AI visualization tools across Qualcomm's Snapdragon portfolio of SoCs. This includes support for the open source runtimes LiteRT, ExecuTorch, and ONNX Runtime.
Previously, Jeff led the team responsible for research into hardware and software for spiking neural networking applications. During his 34 years at Qualcomm, Jeff has worked on a variety of projects encompassing software for embedded systems, hardware bring-up, and software engineering. Jeff led the software research team developing LTE and LTE-Advanced networking protocols. He is a holder of several patents, and a graduate of the Jacobs School of Engineering.
### Anastasios Angelopoulos
#### Researcher
Anastasios is a postdoctoral scholar at UC Berkeley with Ion Stoica. Previously, he completed his PhD at UC Berkeley under the supervision of Michael I. Jordan and Jitendra Malik. Before that, he studied electrical engineering at Stanford University.
### Luca Antiga
#### CTO
CTO @ Lightning AI, founder of Orobix and Tensorwerk, early PyTorch core contributor, and Manning author (Deep Learning with PyTorch). PhD in Bioengineering.
### Schedule at a Glance
| Day | Program |
|---|---|
| **Tuesday, October 21** | Measuring Intelligence Summit, AI Infra Summit, Open Agent Summit, PyTorch Training & Certification + Startup Showcase |
| **Wednesday, October 22** | Keynotes, Breakout Sessions, Expo Hall, Poster Sessions + Flare Party + Sponsor Booth Crawl |
| **Thursday, October 23** | Keynotes, Breakout Sessions + Expo Hall |
### **THE ONSITE EXPERIENCE**
### **2024 Session Recordings**
View the entire playlist: https://events.linuxfoundation.org/pytorch-conference/