Course Outline

Overview of the Chinese AI GPU Ecosystem

  • Comparison of Huawei Ascend, Biren, and Cambricon MLU
  • Comparing the CUDA programming model with CANN, the Biren SDK, and BANGPy
  • Industry trends and vendor ecosystems

Preparing for Migration

  • Assessing your CUDA codebase
  • Identifying target platforms and SDK versions
  • Installing toolchains and setting up the environment
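Assessing a CUDA codebase usually starts with an inventory of how much CUDA-specific surface area it exposes. A minimal sketch of such an audit is shown below; the regex patterns cover only a few common constructs (runtime API calls, triple-chevron launches, device qualifiers) and are illustrative, not exhaustive.

```python
import re
from collections import Counter
from pathlib import Path

# Illustrative patterns for gauging porting effort; a real audit would
# also cover cuBLAS/cuDNN calls, driver API usage, streams, and more.
CUDA_PATTERNS = {
    "runtime_api": re.compile(r"\bcuda[A-Z]\w+\s*\("),    # e.g. cudaMalloc(
    "kernel_launch": re.compile(r"<<<.*?>>>"),            # triple-chevron launches
    "device_qualifier": re.compile(r"__(global|device|shared)__"),
}

def assess_cuda_usage(root: str) -> Counter:
    """Count occurrences of each pattern across all .cu files under root."""
    counts = Counter()
    for path in Path(root).rglob("*.cu"):
        text = path.read_text(errors="ignore")
        for name, pattern in CUDA_PATTERNS.items():
            counts[name] += len(pattern.findall(text))
    return counts
```

The resulting counts give a rough effort estimate per migration target: kernel launches and device qualifiers need rewriting per platform, while runtime API calls can often be wrapped behind an abstraction layer.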

Code Translation Techniques

  • Porting CUDA memory access and kernel logic
  • Mapping compute grid and thread models
  • Options for automated versus manual translation
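Mapping the compute grid boils down to one invariant: CUDA's global index `blockIdx.x * blockDim.x + threadIdx.x` enumerates a flat range of work items. The sketch below models that mapping in plain Python so the translation to a target SDK's launch model can be checked independently of any hardware; the function names are illustrative.

```python
def cuda_global_ids(grid_dim: int, block_dim: int):
    """Yield (block, thread, global_id) exactly as a 1-D CUDA launch would."""
    for block in range(grid_dim):
        for thread in range(block_dim):
            yield block, thread, block * block_dim + thread

def flat_to_cuda(global_id: int, block_dim: int):
    """Invert the mapping: recover (blockIdx.x, threadIdx.x) from a flat id."""
    return divmod(global_id, block_dim)
```

When a target platform exposes a flat work-item range rather than a block/thread hierarchy, `flat_to_cuda` shows how the original kernel's per-block logic (e.g. shared-memory tiling) can still be reconstructed from the flat id.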

Platform-Specific Implementations

  • Utilizing Huawei CANN operators and custom kernels
  • Understanding the Biren SDK conversion pipeline
  • Rebuilding models with BANGPy (Cambricon)

Cross-Platform Testing and Optimization

  • Profiling execution on each target platform
  • Comparing memory tuning and parallel execution
  • Performance tracking and iterative improvement
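For cross-platform comparisons, a uniform timing harness helps keep measurements consistent across targets. The sketch below is deliberately simple: it reports median wall-clock time after warmup. Vendor profilers shipped with each SDK report device-side timings and should be preferred for deep tuning; this is only a first-order comparison tool.

```python
import time
from statistics import median

def profile(fn, *args, warmup: int = 3, runs: int = 10) -> float:
    """Median wall-clock seconds for fn(*args), after warmup iterations.

    Warmup runs absorb one-time costs (JIT, kernel compilation, caches);
    the median damps outliers from OS scheduling noise.
    """
    for _ in range(warmup):
        fn(*args)
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append(time.perf_counter() - start)
    return median(samples)
```

Running the same harness against each backend's implementation of a workload gives comparable numbers for the iterative-improvement loop described above.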

Managing Mixed GPU Environments

  • Hybrid deployments involving multiple architectures
  • Fallback strategies and device detection
  • Implementing abstraction layers for code maintainability
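One common device-detection pattern is to probe for each vendor's Python binding and fall back down a preference list. The sketch below assumes hypothetical package names ("torch_npu" for Ascend, "torch_mlu" for Cambricon); check your installed SDK for the actual module names before relying on them.

```python
import importlib.util

# Preference-ordered probes: (backend name, importable module).
# Module names are assumptions for illustration, not guaranteed SDK names.
BACKEND_PROBES = [
    ("ascend", "torch_npu"),   # assumed Huawei Ascend PyTorch plugin
    ("mlu", "torch_mlu"),      # assumed Cambricon PyTorch plugin
    ("cuda", "torch"),         # NVIDIA via stock PyTorch
    ("cpu", "numpy"),          # last-resort software fallback
]

def detect_backend(probes=BACKEND_PROBES) -> str:
    """Return the first backend whose binding is importable, else 'cpu'."""
    for name, module in probes:
        if importlib.util.find_spec(module) is not None:
            return name
    return "cpu"
```

An abstraction layer can then dispatch allocation, copy, and launch calls on the detected backend, keeping application code free of per-vendor branches.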

Case Studies and Best Practices

  • Porting vision and NLP models to Ascend or Cambricon
  • Retrofitting inference pipelines on Biren clusters
  • Handling version mismatches and API gaps
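API gaps between SDK versions are often bridged with feature-detection shims: prefer the newer call when present, emulate it with the older one otherwise. The sketch below uses hypothetical function names (`memcpy_async`, `memcpy`) purely to show the pattern; they do not correspond to any specific vendor API.

```python
def make_copy_fn(sdk_module):
    """Return a host-to-device copy callable regardless of SDK version.

    Feature-detection shim: if the (hypothetical) asynchronous copy
    exists, use it directly; otherwise wrap the older synchronous call
    behind the same signature.
    """
    if hasattr(sdk_module, "memcpy_async"):
        return sdk_module.memcpy_async          # newer API, if available
    def _sync_fallback(dst, src):
        sdk_module.memcpy(dst, src)             # older synchronous API
        return None                             # no stream handle to return
    return _sync_fallback
```

Centralizing such shims in one module keeps version checks out of model code and makes it obvious what must be revisited when the SDK is upgraded.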

Summary and Next Steps

Requirements

  • Experience in programming with CUDA or GPU-based applications
  • Understanding of GPU memory models and compute kernels
  • Familiarity with AI model deployment or acceleration workflows

Audience

  • GPU programmers
  • System architects
  • Porting specialists

Duration: 21 Hours
