# 23. About Workshops @ Kempner
Welcome to the Workshops @ Kempner section of our Computing Handbook. This section supports continuous learning and professional development through a variety of interactive sessions. Our workshops equip participants with practical skills and knowledge while encouraging active engagement and collaboration. Whether you are a beginner building new skills or a seasoned professional deepening your expertise, these training programs offer valuable opportunities to advance your proficiency and contribute effectively to your field.
## 23.1. List of Workshops
| Workshop Name | Description |
|---|---|
|  | Overview of how to access and use the Kempner AI cluster. |
|  | Introduction to key concepts in distributed computing. |
|  | Reviews parallelization techniques, including Distributed Data Parallelism (DDP), Model Parallelism (MP), Tensor Parallelism (TP), Pipeline Parallelism (PP), and Fully Sharded Data Parallelism (FSDP). Provides hands-on examples for each approach (a minimal DDP sketch appears after this table). |
|  | Provides a practical, interactive way to learn about transformers by building a simple language model. |
|  | Provides hands-on training on hosting and running inference for large language models that don’t fit into a single GPU’s memory. |
|  | Describes how to implement spike sorting algorithms for neural data on an HPC cluster, using a comprehensive pipeline and interactive examples. |
|  | Demonstrates how to optimize machine learning workflows for efficient, reproducible training on an AI cluster. |
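
As a taste of the parallelization material, here is a minimal sketch of Distributed Data Parallelism (DDP) in PyTorch. It is not the workshop's own code: the model, batch shapes, and hyperparameters are placeholders, and the script assumes a `torchrun` launch on a node with one or more GPUs.

```python
# Minimal DDP sketch (placeholder model and data).
# Launch with: torchrun --nproc_per_node=<num_gpus> ddp_sketch.py
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)
    device = f"cuda:{local_rank}"

    # Each rank holds a full copy of the model; DDP averages gradients across ranks.
    model = torch.nn.Linear(128, 10).to(device)
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for step in range(10):
        # Random placeholder batch; in practice a DistributedSampler shards the dataset per rank.
        x = torch.randn(32, 128, device=device)
        y = torch.randint(0, 10, (32,), device=device)

        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()  # gradients are all-reduced across ranks during backward
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

The workshop itself walks through DDP and the other techniques (MP, TP, PP, FSDP) in much greater depth, including how to request the appropriate resources on the cluster.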