Shih-Chieh Hsu
[intermediate/advanced] Real-Time Artificial Intelligence for Science and Engineering
Summary
Artificial Intelligence (AI) applications have revolutionized diverse research domains and industries by harnessing specialized coprocessors for real-time inference and high-throughput computing. This transformation is exemplified in various fields, from autonomous vehicles employing ASICs for low-latency operations to the Large Hadron Collider utilizing FPGAs for rare event detection and LIGO leveraging GPUs for gravitational wave detection. To meet the demanding requirements of nanosecond to millisecond latencies within heterogeneous computing environments, researchers have developed and scaled specialized software tools and inference-as-a-service computing models. Advanced techniques such as model compression, quantization, and pruning have emerged to optimize custom hardware for inference acceleration. The adoption of specialized software tools like Triton Inference Server and TensorRT has significantly reduced computational complexity and costs, paving the way for streamlined AI workflow development across a wide range of applications.
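The compression techniques named above can be sketched in a few lines of NumPy. This is an illustrative toy only, assuming magnitude-based pruning and rounding to a signed fixed-point grid (in the spirit of an ap_fixed<total_bits, int_bits> type); it is not the implementation used by hls4ml or any other tool discussed in the lecture:

```python
import numpy as np

def prune_weights(w, sparsity=0.5):
    """Magnitude pruning: zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < threshold, 0.0, w)

def quantize_fixed_point(w, total_bits=8, int_bits=1):
    """Round to a signed fixed-point grid, saturating at the representable range."""
    frac_bits = total_bits - int_bits
    scale = 2.0 ** frac_bits
    lo, hi = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
    return np.clip(np.round(w * scale), lo, hi) / scale

# Compress a small weight vector: prune half the weights, then quantize to 8 bits.
weights = np.array([0.1, -0.02, 0.5, -0.7])
compressed = quantize_fixed_point(prune_weights(weights))
```

Pruned-and-quantized weights like these are what make fully-unrolled, on-chip FPGA inference feasible: zeroed weights cost no logic, and narrow fixed-point multipliers are far cheaper than floating-point ones.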
Syllabus
This lecture will provide a comprehensive overview of AI implementation challenges in various latency and throughput regimes faced by the scientific community, along with available tools and resources. It will cover state-of-the-art model compression techniques, including pruning and quantization, for optimizing AI models in high-performance FPGA applications. The lecture will introduce the inference-as-a-service model, exploring client-server workflow construction and GPU-based hardware acceleration for deep learning inference. Practical tutorials using the hls4ml library and SONIC framework will be offered, focusing on converting pre-trained ML models into FPGA firmware for extreme low-latency inference and deploying as-a-service approaches in large-scale data processing. Attendees will gain insights into AI’s role in data-driven discovery and learn to apply advanced techniques for optimizing AI models in demanding scientific applications.
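The client-server decoupling at the heart of the inference-as-a-service model can be illustrated with a minimal, standard-library-only Python sketch. The toy model, endpoint path, and JSON payload shapes here are illustrative stand-ins, not the actual Triton Inference Server or SONIC protocol; the point is only the pattern, in which the server owns the accelerator and clients send inputs over the network:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def model(x):
    """Stand-in for an accelerated model; the server alone touches the coprocessor."""
    return [2.0 * v + 1.0 for v in x]

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        inputs = json.loads(self.rfile.read(length))["inputs"]
        body = json.dumps({"outputs": model(inputs)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

def serve():
    """Start the inference server on an ephemeral port in a background thread."""
    server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

def infer(port, inputs):
    """Client side: ship inputs to the server, receive predictions back."""
    req = Request(f"http://127.0.0.1:{port}/v2/models/toy/infer",
                  data=json.dumps({"inputs": inputs}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.loads(resp.read())["outputs"]

server = serve()
print(infer(server.server_address[1], [1.0, 2.0]))  # → [3.0, 5.0]
server.shutdown()
```

Because the client only sees a network endpoint, the server can be re-backed by GPUs, FPGAs, or ASICs without any change to client code, and many lightweight clients can share one pool of accelerators.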
References
hls4ml: https://fastmachinelearning.org/hls4ml/
T. Aarrestad et al., "Fast convolutional neural networks on FPGAs with hls4ml", Mach. Learn.: Sci. Technol. 2 045015 (2021) (arXiv:2101.05108).
H. Borras et al., "Open-source FPGA-ML codesign for the MLPerf Tiny Benchmark", 3rd MLBench workshop at MLSys 2022 (arXiv:2206.11791).
SONIC: https://github.com/fastmachinelearning/SonicCMS
J. Krupa et al., "GPU coprocessors as a service for deep learning inference in high energy physics", Mach. Learn.: Sci. Technol. 2 035005 (2021) (arXiv:2007.10359).
H. Zhao et al., "Graph Neural Network-based Tracking as a Service", CTD 2023 (arXiv:2402.09633).
Pre-requisites
Basic knowledge of machine learning and neural networks.
Short bio
Shih-Chieh Hsu is a Professor of Physics and Adjunct Professor of Electrical and Computer Engineering at the University of Washington. He directs the NSF HDR Institute for Accelerated AI Algorithms for Data-Driven Discovery (A3D3 – https://a3d3.ai). With degrees from National Taiwan University and UC San Diego, Dr. Hsu specializes in experimental particle physics, focusing on dark matter searches, neutrino measurements, and AI applications in data-intensive research. His work utilizes the Large Hadron Collider and incorporates real-time AI for rapid data analysis and decision-making across multiple scientific disciplines. He has received recognition for his innovative research, mentorship, and contributions to real-time AI applications in scientific discovery.