Docs: Supplement NPU training script samples and documentation instructions #1169
base: main
Conversation
Feng0w0 commented Dec 30, 2025
- Add NPU training script samples for the current five model categories, stored under examples/xxx/model_training/special/npu_scripts, and provide training startup instructions in GPU_Support.md.
- Replace the hardcoded "npu:0" with get_device_name.
Summary of Changes
gemini-code-assist commented: This pull request expands and refines the project's support for Ascend NPU devices, focusing on training workflows. It introduces a collection of NPU-optimized training scripts for various models, improves the underlying device management code for more dynamic NPU detection, and updates the user documentation with clear guidance on both NPU inference and training configurations, making it easier for users to leverage NPU hardware.
Code Review
This pull request introduces support for NPU training by adding example scripts and updating documentation. It also refactors device name handling to replace hardcoded values. My review identified a few critical issues. The new logic for determining the device for VRAM checks is flawed and can cause errors when running on systems with both CUDA and NPU devices. Additionally, one of the new NPU training scripts contains a path typo that will cause it to fail. I've also pointed out a couple of minor issues in the documentation and another training script. Please address the critical issues to ensure the correctness of the changes.
```diff
  def check_free_vram(self):
-     device = self.computation_device if self.computation_device != "npu" else "npu:0"
+     device = self.computation_device if not IS_NPU_AVAILABLE else get_device_name()
```
The current logic for determining the device is incorrect. If IS_NPU_AVAILABLE is true, it will always use get_device_name() to get an NPU device, even if self.computation_device is set to a CUDA device. This will cause getattr(torch, self.computation_device_type).mem_get_info(device) to fail when self.computation_device_type is 'cuda' but device is an NPU device string.
The logic should only use get_device_name() when the computation device is specifically 'npu' to resolve it to a full device name like 'npu:0', while preserving other device specifications like 'cuda' or 'npu:1'.
```diff
- device = self.computation_device if not IS_NPU_AVAILABLE else get_device_name()
+ device = get_device_name() if self.computation_device == "npu" else self.computation_device
```
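To make the failure mode concrete, here is a minimal, self-contained sketch of the corrected resolution. `get_device_name` and `IS_NPU_AVAILABLE` come from this PR; the stand-in definition below, including the assumption that `get_device_name()` returns a string like `"npu:0"`, is illustrative only:

```python
# Minimal sketch (not the PR's actual module): the corrected resolution
# expands only the bare "npu" alias. get_device_name() is a stand-in for
# the PR's helper, assumed to return a fully qualified NPU device string
# such as "npu:0"; note the fix no longer consults IS_NPU_AVAILABLE.
import torch


def get_device_name() -> str:
    return "npu:0"  # stand-in for the PR's helper


def resolve_device(computation_device: str) -> str:
    # Leave "cuda", "cuda:1", "npu:1", etc. untouched so the later
    # mem_get_info call targets the backend named by computation_device_type.
    return get_device_name() if computation_device == "npu" else computation_device


# The mixed-backend case the review flags: with an NPU present, the original
# `not IS_NPU_AVAILABLE` branch rewrote "cuda:0" into "npu:0", after which
# torch.cuda.mem_get_info("npu:0") raises. The corrected logic keeps "cuda:0".
device = resolve_device("cuda:0")  # -> "cuda:0"
assert resolve_device("npu") == "npu:0"
if torch.cuda.is_available():
    free, total = getattr(torch, "cuda").mem_get_info(device)
    print(f"{device}: {free} bytes free of {total}")
```

The same one-line pattern applies verbatim to the get_vram change reviewed below.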
```diff
  def get_vram(self):
-     device = self.device if self.device != "npu" else "npu:0"
+     device = self.device if not IS_NPU_AVAILABLE else get_device_name()
```
The current logic for determining the device is incorrect. If IS_NPU_AVAILABLE is true, it will always use get_device_name() to get an NPU device, even if self.device is set to a CUDA device. This will cause getattr(torch, self.device_type).mem_get_info(device) to fail when self.device_type is 'cuda' but device is an NPU device string.
The logic should only use get_device_name() when the device is specifically 'npu' to resolve it to a full device name like 'npu:0', while preserving other device specifications like 'cuda' or 'npu:1'.
```diff
- device = self.device if not IS_NPU_AVAILABLE else get_device_name()
+ device = get_device_name() if self.device == "npu" else self.device
```
```sh
  --task "sft:data_process"

accelerate launch examples/qwen_image/model_training/train.py \
  --dataset_base_path "./models/train/Qwen-Image-LoRA-splited-cache" \
```
There appears to be a typo in the dataset_base_path. The first part of the script saves the processed data to ./models/train/Qwen-Image-Edit-2509-LoRA-splited-cache, but the second part attempts to load from ./models/train/Qwen-Image-LoRA-splited-cache. The path is missing -Edit-2509, which will cause the script to fail.
```diff
- --dataset_base_path "./models/train/Qwen-Image-LoRA-splited-cache" \
+ --dataset_base_path "./models/train/Qwen-Image-Edit-2509-LoRA-splited-cache" \
```
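Since a mismatch like this only surfaces after the training launch, a small pre-flight check can fail fast. The sketch below is illustrative and not part of the PR; check_cache_path is a hypothetical helper:

```python
# Illustrative pre-flight check (hypothetical, not part of the PR): verify
# the cache directory written by the data_process stage exists before the
# training stage starts, so a mistyped --dataset_base_path fails fast.
import sys
from pathlib import Path


def check_cache_path(dataset_base_path: str) -> None:
    cache_dir = Path(dataset_base_path)
    if not cache_dir.is_dir():
        sys.exit(
            f"dataset_base_path '{cache_dir}' does not exist; make sure it "
            "matches the output path of the data_process stage."
        )


check_cache_path("./models/train/Qwen-Image-Edit-2509-LoRA-splited-cache")
```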
```markdown
### Training
NPU startup script samples have been added for each type of model,the scripts are stored in the `examples/xxx/special/npu_scripts`, for examples `examples/wanvideo/model_training/special/npu_scripts/Wan2.2-T2V-A14B-NPU.sh`.
```
There are grammatical errors in this sentence: 'for examples' should be 'for example', and the comma splice in 'model,the scripts' should be repaired.
```diff
- NPU startup script samples have been added for each type of model,the scripts are stored in the `examples/xxx/special/npu_scripts`, for examples `examples/wanvideo/model_training/special/npu_scripts/Wan2.2-T2V-A14B-NPU.sh`.
+ NPU startup script samples have been added for each type of model; the scripts are stored in `examples/xxx/special/npu_scripts`, for example `examples/wanvideo/model_training/special/npu_scripts/Wan2.2-T2V-A14B-NPU.sh`.
```
```markdown
#### Parameters for specific models
| Model | Parameter | Note |
|-------|-----------|------|
| Wan 14B系列 | --initialize_model_on_cpu | The 14B model needs to be initialized on the CPU |
```
The model name 'Wan 14B系列' is in Chinese, but this is an English document. It should be translated to 'Wan 14B series' for consistency.
```diff
- | Wan 14B系列 | --initialize_model_on_cpu | The 14B model needs to be initialized on the CPU |
+ | Wan 14B series | --initialize_model_on_cpu | The 14B model needs to be initialized on the CPU |
```
```diff
@@ -0,0 +1,17 @@
+ # This example is tested on 8*A100
```
The comment indicates that this script was tested on '8*A100', which are NVIDIA GPUs. Since this script is intended for NPU training, this comment is misleading. Please update it to reflect the correct NPU testing environment.
```diff
- # This example is tested on 8*A100
+ # This example is for NPU training.
```