Course Outline

Introduction to Huawei CloudMatrix

  • Overview of the CloudMatrix ecosystem and deployment workflow.
  • Supported models, formats, and deployment modes.
  • Typical use cases and supported chipsets.

Preparing Models for Deployment

  • Exporting models from training frameworks such as MindSpore, TensorFlow, and PyTorch.
  • Utilizing ATC (Ascend Tensor Compiler) for format conversion.
  • Understanding static versus dynamic shape models.
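The conversion step above is typically an ATC invocation along the lines of `atc --model=model.onnx --framework=5 --output=model --soc_version=Ascend310` (framework code 5 denotes ONNX in the CANN documentation; check the flags against your CANN version). A practical consequence of compiling with a static shape is that clients must pad variable-sized inputs before sending them. A minimal, hypothetical padding helper, not part of any Huawei API:

```python
# Static-shape models compiled with ATC expect a fixed input size,
# so variable-length batches must be padded (or bucketed) client-side.
# `pad_batch` is an illustrative helper, not a CloudMatrix/ATC function.

def pad_batch(batch, target_len, pad_value=0.0):
    """Pad every sequence in `batch` to `target_len` for a static-shape model."""
    padded = []
    for seq in batch:
        if len(seq) > target_len:
            raise ValueError(f"length {len(seq)} exceeds static shape {target_len}")
        padded.append(list(seq) + [pad_value] * (target_len - len(seq)))
    return padded

batch = [[1.0, 2.0], [3.0, 4.0, 5.0]]
print(pad_batch(batch, 4))
```

Dynamic-shape models avoid this padding at the cost of extra graph-compilation overhead at runtime.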

Deploying to CloudMatrix

  • Creating services and registering models.
  • Deploying inference services via the user interface or command-line interface (CLI).
  • Managing routing, authentication, and access control.
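One common pattern behind the access-control bullet above is signing each request with a shared secret. The sketch below uses Python's standard `hmac` module to illustrate the general technique only; the actual CloudMatrix authentication scheme may differ:

```python
import base64
import hashlib
import hmac

# Illustrative request-signing helper for token-based access control.
# This is a generic HMAC-SHA256 pattern, NOT the CloudMatrix auth API.

def sign_request(secret_key: bytes, body: bytes) -> str:
    """Return a base64-encoded HMAC-SHA256 signature for a request body."""
    digest = hmac.new(secret_key, body, hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

token = sign_request(b"shared-secret", b'{"input": [1, 2, 3]}')
print(token)  # sent by the client, recomputed and compared by the gateway
```

The serving gateway recomputes the signature over the received body and rejects the request when the values differ.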

Serving Inference Requests

  • Distinguishing between batch and real-time inference flows.
  • Implementing data preprocessing and postprocessing pipelines.
  • Integrating CloudMatrix services with external applications.
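The preprocessing/postprocessing bullet above can be sketched as a three-stage pipeline. Here `postprocess` applies a softmax and picks the top-1 label; in a real deployment the model call in the middle would be a request to the CloudMatrix endpoint, which this stdlib-only sketch leaves out:

```python
import math

# Framework-agnostic sketch of an inference pipeline:
# preprocess -> (deployed model) -> postprocess.

def preprocess(pixels, mean=127.5, scale=1 / 127.5):
    """Normalize raw 0-255 pixel values to roughly [-1, 1]."""
    return [(p - mean) * scale for p in pixels]

def softmax(logits):
    """Convert raw model outputs into probabilities."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def postprocess(logits, labels):
    """Return the top-1 label and its probability."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

label, prob = postprocess([0.2, 2.1, -1.0], ["cat", "dog", "fish"])
print(label)  # dog
```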

Monitoring and Performance Tuning

  • Tracking deployment logs and requests.
  • Managing resource scaling and load balancing.
  • Optimizing latency and throughput.
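Latency and throughput tuning starts with measuring both. A minimal, hypothetical benchmarking harness (the `infer` stub stands in for a real call to the deployed service):

```python
import statistics
import time

def infer(payload):
    """Stand-in for a real inference request to the service."""
    time.sleep(0.001)  # simulate ~1 ms of service time
    return payload

def benchmark(n_requests=100):
    """Measure per-request latency percentiles and overall throughput."""
    latencies = []
    start = time.perf_counter()
    for i in range(n_requests):
        t0 = time.perf_counter()
        infer(i)
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "p50_ms": statistics.median(latencies) * 1000,
        "p99_ms": sorted(latencies)[int(0.99 * len(latencies))] * 1000,
        "throughput_rps": n_requests / elapsed,
    }

print(benchmark(50))
```

Tail percentiles (p99) usually matter more than the mean for user-facing services, since occasional slow requests dominate perceived latency.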

Integration with Enterprise Tools

  • Connecting CloudMatrix with OBS and ModelArts.
  • Utilizing workflows and model versioning.
  • Implementing CI/CD for model deployment and rollback.
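The versioning-and-rollback idea above can be illustrated with a toy in-memory registry; this is a hypothetical sketch of the pattern, not the ModelArts or CloudMatrix API:

```python
# Toy model registry illustrating version history and rollback,
# the core mechanism behind CI/CD-driven deploy/rollback flows.

class ModelRegistry:
    def __init__(self):
        self._versions = []  # ordered history of deployed versions

    def deploy(self, version):
        """Record a new version as the live deployment."""
        self._versions.append(version)
        return version

    def current(self):
        """Return the currently live version, or None."""
        return self._versions[-1] if self._versions else None

    def rollback(self):
        """Revert to the previously deployed version."""
        if len(self._versions) < 2:
            raise RuntimeError("no earlier version to roll back to")
        self._versions.pop()
        return self.current()

reg = ModelRegistry()
reg.deploy("v1")
reg.deploy("v2")
print(reg.rollback())  # v1
```

In a real pipeline the deploy and rollback steps would be CI/CD jobs that retarget the serving endpoint, with the version history kept in the model repository rather than in memory.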

End-to-End Inference Pipeline

  • Deploying a complete image classification pipeline.
  • Benchmarking and validating accuracy.
  • Simulating failover scenarios and system alerts.
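Validating accuracy, as in the pipeline exercise above, usually reduces to comparing service predictions against held-out ground truth. A minimal top-1 accuracy check (a generic sketch, with hypothetical labels):

```python
# Compare predicted labels from the deployed classifier against
# ground-truth labels and report top-1 accuracy.

def top1_accuracy(predictions, ground_truth):
    """Fraction of predictions that match the ground-truth labels."""
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction/label count mismatch")
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

preds = ["dog", "cat", "dog", "fish"]
truth = ["dog", "cat", "cat", "fish"]
print(top1_accuracy(preds, truth))  # 0.75
```

Comparing this number before and after conversion (e.g. against the original framework's output on the same samples) catches accuracy regressions introduced by the deployment toolchain.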

Summary and Next Steps

Requirements

  • A solid understanding of AI model training workflows.
  • Practical experience with Python-based machine learning frameworks.
  • Basic familiarity with cloud deployment concepts.

Audience

  • AI operations teams.
  • Machine learning engineers.
  • Cloud deployment specialists working with Huawei infrastructure.

Duration

  • 21 hours
