Migrating CUDA Applications to Chinese GPU Architectures Training Course
Chinese GPU architectures such as Huawei Ascend, Biren, and Cambricon MLUs provide alternatives to CUDA designed for China's domestic AI and HPC markets.
This instructor-led training (online or at your location) is targeted at advanced GPU programmers and infrastructure specialists who want to migrate and optimize existing CUDA applications for deployment on Chinese hardware platforms.
By the end of this course, participants will be able to:
- Determine the compatibility of current CUDA workloads with Chinese chip alternatives.
- Migrate CUDA codebases to environments such as Huawei CANN, Biren SDK, and Cambricon BANGPy.
- Analyze performance differences and pinpoint optimization opportunities across various platforms.
- Overcome practical challenges in cross-architecture support and deployment.
Course Format
- Interactive lectures and discussions.
- Hands-on labs for code translation and performance comparison.
- Guided exercises focusing on multi-GPU adaptation strategies.
Customization Options
- To request a customized training based on your platform or CUDA project, please contact us to arrange the details.
Course Outline
Overview of Chinese AI GPU Ecosystem
- Comparison of Huawei Ascend, Biren, Cambricon MLU
- CUDA vs CANN, Biren SDK, and BANGPy models
- Industry trends and vendor ecosystems
Preparing for Migration
- Assessing your CUDA codebase
- Identifying target platforms and SDK versions
- Toolchain installation and environment setup
Code Translation Techniques
- Porting CUDA memory access and kernel logic
- Mapping compute grid/thread models
- Automated vs manual translation options
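The core of the translation work above is re-expressing CUDA's grid/block/thread model in the target platform's compute model. As a minimal sketch (plain Python emulating CUDA-style indexing on the host; no vendor API is shown, since CANN operators and BANGPy kernels each use their own models), this is the indexing logic a port has to preserve:

```python
# Illustrative only: emulates CUDA's global-index computation in Python
# to show the logic that must be re-mapped when porting a kernel.

def saxpy_kernel(block_idx, block_dim, thread_idx, a, x, y, out):
    """One 'thread' of a CUDA-style SAXPY: out[i] = a * x[i] + y[i]."""
    i = block_idx * block_dim + thread_idx  # CUDA: blockIdx.x * blockDim.x + threadIdx.x
    if i < len(x):                          # bounds guard, as in a real CUDA kernel
        out[i] = a * x[i] + y[i]

def launch(grid_dim, block_dim, kernel, *args):
    """Emulates a <<<grid_dim, block_dim>>> launch as nested loops."""
    for b in range(grid_dim):
        for t in range(block_dim):
            kernel(b, block_dim, t, *args)

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0, 20.0, 30.0, 40.0, 50.0]
out = [0.0] * len(x)
launch(2, 4, saxpy_kernel, 2.0, x, y, out)  # 8 threads cover 5 elements
print(out)  # [12.0, 24.0, 36.0, 48.0, 60.0]
```

Automated translators handle mechanical rewrites like this one; manual porting is typically needed where the target platform has no direct equivalent of a CUDA construct.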
Platform-Specific Implementations
- Using Huawei CANN operators and custom kernels
- Biren SDK conversion pipeline
- Rebuilding models with BANGPy (Cambricon)
Cross-Platform Testing and Optimization
- Profiling execution on each target platform
- Memory tuning and parallel execution comparisons
- Performance tracking and iteration
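Real profiling uses each vendor's own tools, but a portable first pass at performance tracking can be a simple best-of-N wall-clock harness run unchanged on every platform. A minimal sketch (the `workload` function is a hypothetical stand-in for a kernel launch or inference call):

```python
# Minimal cross-platform timing harness: best-of-N wall-clock timing is
# less noisy than a single run and works identically on any backend.
import time

def benchmark(fn, *args, repeats=5):
    """Return the best wall-clock time in seconds over `repeats` runs."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

def workload(n):
    # Hypothetical stand-in; replace with a real kernel launch or inference call.
    return sum(i * i for i in range(n))

t = benchmark(workload, 100_000)
print(f"best of 5: {t:.6f} s")
```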
Managing Mixed GPU Environments
- Hybrid deployments with multiple architectures
- Fallback strategies and device detection
- Abstraction layers for code maintainability
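One common shape for such an abstraction layer is a backend registry with runtime device detection and a guaranteed CPU fallback. A sketch under stated assumptions (every class and method name below is hypothetical, not a real vendor API):

```python
# Sketch of a thin backend-abstraction layer: application code calls one
# interface, and the registry picks the first backend whose runtime is
# actually present, falling back to CPU when none is.
from abc import ABC, abstractmethod

class Backend(ABC):
    name = "base"
    @abstractmethod
    def is_available(self) -> bool: ...
    @abstractmethod
    def matmul(self, a, b): ...

class AscendBackend(Backend):
    name = "ascend"
    def is_available(self):
        return False  # in practice: probe for the vendor runtime/driver
    def matmul(self, a, b):
        raise NotImplementedError  # would dispatch to the vendor library

class CPUBackend(Backend):
    name = "cpu"
    def is_available(self):
        return True  # always usable as the last-resort fallback
    def matmul(self, a, b):
        # Naive pure-Python matmul: correct but slow.
        return [[sum(x * y for x, y in zip(row, col))
                 for col in zip(*b)] for row in a]

def select_backend(candidates):
    """Return the first available backend; keep CPU last in the list."""
    for backend in candidates:
        if backend.is_available():
            return backend
    raise RuntimeError("no usable backend")

backend = select_backend([AscendBackend(), CPUBackend()])
print(backend.name)                          # cpu (no NPU runtime present here)
print(backend.matmul([[1, 2]], [[3], [4]]))  # [[11]]
```

Keeping the dispatch in one place like this means platform-specific code stays behind a single interface, which is what makes mixed-architecture deployments maintainable.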
Case Studies and Best Practices
- Porting vision/NLP models to Ascend or Cambricon
- Retrofitting inference pipelines on Biren clusters
- Handling version mismatches and API gaps
Summary and Next Steps
Requirements
- Experience programming with CUDA or GPU-based applications
- Understanding of GPU memory models and compute kernels
- Familiarity with AI model deployment or acceleration workflows
Audience
- GPU programmers
- System architects
- Porting specialists
Related Courses
Developing AI Applications with Huawei Ascend and CANN
21 Hours
The Huawei Ascend series is a collection of AI processors tailored for efficient inference and training tasks.
This instructor-led live training (delivered online or at your location) targets intermediate-level AI engineers and data scientists aiming to create and refine neural network models with the Huawei Ascend platform and CANN toolkit.
Upon completion, participants will be able to:
- Establish and configure the CANN development environment.
- Create AI applications using MindSpore and CloudMatrix workflows.
- Tune performance on Ascend NPUs with custom operators and tiling techniques.
- Deploy models in both edge and cloud settings.
Course Format
- Interactive lectures and discussions.
- Practical use of Huawei Ascend and the CANN toolkit through sample applications.
- Guided exercises centered on model creation, training, and deployment.
Customization Options for the Course
- If you wish to tailor this course based on your specific infrastructure or datasets, please contact us to arrange a customized session.
Deploying AI Models with CANN and Ascend AI Processors
14 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI compute stack for deploying and optimizing AI models on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI developers and engineers who wish to deploy trained AI models efficiently to Huawei Ascend hardware using the CANN toolkit and tools such as MindSpore, TensorFlow, or PyTorch.
By the end of this training, participants will be able to:
- Understand the CANN architecture and its role in the AI deployment pipeline.
- Convert and adapt models from popular frameworks to Ascend-compatible formats.
- Use tools like ATC, OM model conversion, and MindSpore for edge and cloud inference.
- Diagnose deployment issues and optimize performance on Ascend hardware.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work using CANN tools and Ascend simulators or devices.
- Practical deployment scenarios based on real-world AI models.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
AI Inference and Deployment with CloudMatrix
21 Hours
CloudMatrix is Huawei’s comprehensive AI development and deployment platform tailored to support scalable, production-ready inference pipelines.
This instructor-led live training (online or in-person) targets beginner to intermediate-level AI professionals aiming to deploy and monitor AI models using the CloudMatrix platform, integrated with CANN and MindSpore.
By the end of this course, participants will be able to:
- Utilize CloudMatrix for model packaging, deployment, and serving.
- Convert and optimize models for Ascend chipsets.
- Establish pipelines for real-time and batch inference tasks.
- Monitor deployments and fine-tune performance in production environments.
Course Format
- Interactive lecture and discussion sessions.
- Hands-on experience with CloudMatrix using practical deployment scenarios.
- Guided exercises focused on conversion, optimization, and scaling techniques.
Customization Options for the Course
- To request a customized training based on your AI infrastructure or cloud environment, please contact us to arrange the details.
GPU Programming on Biren AI Accelerators
21 Hours
Biren AI Accelerators are advanced GPUs tailored for artificial intelligence and high-performance computing tasks, supporting extensive training and inference processes.
This instructor-led live training (online or at your site) is targeted at intermediate to advanced developers looking to develop and fine-tune applications using Biren’s proprietary GPU technology. Practical comparisons will be made with CUDA-based environments.
By the conclusion of this course, attendees will be able to:
- Grasp the architecture and memory structure of Biren GPUs.
- Configure the development environment and utilize Biren’s programming framework.
- Convert and enhance CUDA-style code for use with Biren platforms.
- Implement performance optimization and debugging strategies.
Course Format
- Engaging lectures and discussions.
- Practical application of the Biren SDK in sample GPU tasks.
- Guided exercises centered on porting and optimizing performance.
Customization Options for the Course
- To request a customized training session based on your specific application stack or integration requirements, please contact us to arrange the details.
Cambricon MLU Development with BANGPy and Neuware
21 Hours
Cambricon MLUs (Machine Learning Units) are specialized AI chips designed to enhance performance in both inference and training tasks within edge computing and data center environments.
This instructor-led live training session (conducted either online or at your location) is tailored for intermediate developers looking to create and deploy AI models using the BANGPy framework alongside Neuware SDK on Cambricon MLU hardware.
Upon completion of this course, participants will be able to:
- Establish and configure development environments for BANGPy and Neuware.
- Create and refine Python- and C++-based models specifically for Cambricon MLUs.
- Deploy these models onto edge devices and data centers that utilize Neuware runtime.
- Incorporate ML workflows with features optimized for MLU acceleration.
Course Format
- Engaging lectures combined with interactive discussions.
- Practical hands-on experience using BANGPy and Neuware for both development and deployment tasks.
- Guided exercises centered on optimization, integration, and testing processes.
Customization Options
- If you require a customized training session based on your specific Cambricon device model or use case, please reach out to us for further arrangements.
Introduction to CANN for AI Framework Developers
7 Hours
CANN (Compute Architecture for Neural Networks) is Huawei’s AI computing toolkit used to compile, optimize, and deploy AI models on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at beginner-level AI developers who wish to understand how CANN fits into the model lifecycle from training to deployment, and how it works with frameworks like MindSpore, TensorFlow, and PyTorch.
By the end of this training, participants will be able to:
- Understand the purpose and architecture of the CANN toolkit.
- Set up a development environment with CANN and MindSpore.
- Convert and deploy a simple AI model to Ascend hardware.
- Gain foundational knowledge for future CANN optimization or integration projects.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs with simple model deployment.
- Step-by-step walkthrough of the CANN toolchain and integration points.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
CANN for Edge AI Deployment
14 Hours
Huawei's Ascend CANN toolkit enables powerful AI inference on edge devices such as the Ascend 310. CANN provides essential tools for compiling, optimizing, and deploying models where compute and memory are constrained.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI developers and integrators who wish to deploy and optimize models on Ascend edge devices using the CANN toolchain.
By the end of this training, participants will be able to:
- Prepare and convert AI models for Ascend 310 using CANN tools.
- Build lightweight inference pipelines using MindSpore Lite and AscendCL.
- Optimize model performance for limited compute and memory environments.
- Deploy and monitor AI applications in real-world edge use cases.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab work with edge-specific models and scenarios.
- Live deployment examples on virtual or physical edge hardware.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Understanding Huawei’s AI Compute Stack: From CANN to MindSpore
14 Hours
Huawei’s AI stack — from the low-level CANN SDK to the high-level MindSpore framework — offers a tightly integrated AI development and deployment environment optimized for Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at beginner-level to intermediate-level technical professionals who wish to understand how the CANN and MindSpore components work together to support AI lifecycle management and infrastructure decisions.
By the end of this training, participants will be able to:
- Understand the layered architecture of Huawei’s AI compute stack.
- Identify how CANN supports model optimization and hardware-level deployment.
- Evaluate the MindSpore framework and toolchain in relation to industry alternatives.
- Position Huawei's AI stack within enterprise or cloud/on-prem environments.
Format of the Course
- Interactive lecture and discussion.
- Live system demos and case-based walkthroughs.
- Optional guided labs on model flow from MindSpore to CANN.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Optimizing Neural Network Performance with CANN SDK
14 Hours
CANN SDK (Compute Architecture for Neural Networks) is Huawei’s AI compute foundation that allows developers to fine-tune and optimize the performance of deployed neural networks on Ascend AI processors.
This instructor-led, live training (online or onsite) is aimed at advanced-level AI developers and system engineers who wish to optimize inference performance using CANN’s advanced toolset, including the Graph Engine, TIK, and custom operator development.
By the end of this training, participants will be able to:
- Understand CANN's runtime architecture and performance lifecycle.
- Use profiling tools and Graph Engine for performance analysis and optimization.
- Create and optimize custom operators using TIK and TVM.
- Resolve memory bottlenecks and improve model throughput.
Format of the Course
- Interactive lecture and discussion.
- Hands-on labs with real-time profiling and operator tuning.
- Optimization exercises using edge-case deployment examples.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
CANN SDK for Computer Vision and NLP Pipelines
14 Hours
The CANN SDK (Compute Architecture for Neural Networks) provides powerful deployment and optimization tools for real-time AI applications in computer vision and NLP, especially on Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at intermediate-level AI practitioners who wish to build, deploy, and optimize vision and language models using the CANN SDK for production use cases.
By the end of this training, participants will be able to:
- Deploy and optimize CV and NLP models using CANN and AscendCL.
- Use CANN tools to convert models and integrate them into live pipelines.
- Optimize inference performance for tasks like detection, classification, and sentiment analysis.
- Build real-time CV/NLP pipelines for edge or cloud-based deployment scenarios.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on lab with model deployment and performance profiling.
- Live pipeline design using real CV and NLP use cases.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Building Custom AI Operators with CANN TIK and TVM
14 Hours
CANN TIK (Tensor Instruction Kernel) and Apache TVM enable advanced optimization and customization of AI model operators for Huawei Ascend hardware.
This instructor-led, live training (online or onsite) is aimed at advanced-level system developers who wish to build, deploy, and tune custom operators for AI models using CANN’s TIK programming model and TVM compiler integration.
By the end of this training, participants will be able to:
- Write and test custom AI operators using the TIK DSL for Ascend processors.
- Integrate custom ops into the CANN runtime and execution graph.
- Use TVM for operator scheduling, auto-tuning, and benchmarking.
- Debug and optimize instruction-level performance for custom computation patterns.
Format of the Course
- Interactive lecture and demonstration.
- Hands-on coding of operators using TIK and TVM pipelines.
- Testing and tuning on Ascend hardware or simulators.
Course Customization Options
- To request a customized training for this course, please contact us to arrange the details.
Performance Optimization on Ascend, Biren, and Cambricon
21 Hours
The leading AI hardware platforms in China — Ascend, Biren, and Cambricon — provide distinctive acceleration and profiling tools tailored for large-scale AI workloads.
This instructor-led training session (conducted either online or on-site) is designed for advanced-level engineers specializing in AI infrastructure and performance. The goal is to enhance their ability to optimize model inference and training processes across various Chinese AI chip platforms.
Upon completion of this course, participants will be able to:
- Evaluate models using the Ascend, Biren, and Cambricon platforms.
- Detect system limitations and inefficiencies in memory and compute resources.
- Implement optimizations at the graph level, kernel level, and operator level.
- Refine deployment pipelines to boost throughput and reduce latency.
Course Format
- An interactive lecture combined with discussions.
- Practical use of profiling and optimization tools on each platform.
- Guided exercises centered around real-world tuning scenarios.
Customization Options for the Course
- If you require a customized training session based on your specific performance environment or model type, please contact us to arrange this.