Hardware Accelerators for Machine Learning (CS 217), Stanford University, Winter 2026

This course explores the design, programming, and performance of modern AI accelerators. Students will develop the intuition needed to make system-level trade-offs when designing energy-efficient accelerators. Rather than using a traditional waterfall design flow, which starts by studying the application to be accelerated, we begin by constructing ...

Traditional deep neural networks (DNNs) rely on regularly structured inputs such as vectors, images, or sequences; this reliance on regularity makes them difficult to use in domains where data is not regularly structured. The course will explore acceleration and hardware trade-offs for both training and inference of these models. We will also examine the impact of parameters including batch size, precision, sparsity, and compression on the design-space trade-offs between efficiency and accuracy (illustrated by a short sketch at the end of this page).

The course was previously offered in Winter 2023; lecture slides from the Fall 2018 offering are available.

Ardavan Pedram is currently a member of technical staff at Cerebras Systems and an adjunct professor at Stanford University, where he directs the PRISM project. He organized and taught the first course on hardware accelerators for machine learning (CS 217) in Fall 2018 with Professor Olukotun in the Stanford Computer Science department.

Related record in the Stanford Digital Repository: Hardware acceleration for fluid flow simulation. Abstract: Over the past 35 years, the speed of fluid flow simulations has reflected the increase in transistor densities predicted by Moore's law.
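
As a concrete illustration of the precision-for-efficiency trade-off mentioned in the course description above, here is a minimal sketch (not from the CS 217 materials) of per-tensor int8 weight quantization in NumPy. The function names (quantize_int8, dequantize) and parameters are illustrative assumptions, not part of the course.

```python
import numpy as np

# Illustrative sketch only: symmetric per-tensor int8 quantization of a
# weight matrix, showing the storage savings vs. quantization error
# trade-off. Not taken from the CS 217 course materials.

def quantize_int8(w: np.ndarray):
    """Quantize float32 weights to int8 with a single per-tensor scale."""
    scale = np.max(np.abs(w)) / 127.0              # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float32 tensor from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, at the cost of rounding error.
print("bytes fp32:", w.nbytes, "bytes int8:", q.nbytes)
print("mean abs quantization error:", float(np.mean(np.abs(w - w_hat))))
```

Lowering precision shrinks memory footprint and bandwidth (and typically energy per operation), while the introduced rounding error is one source of the accuracy cost weighed in the design space the course discusses.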