Conversation

@Feng0w0 Feng0w0 commented Dec 30, 2025

  1. Add NPU training script samples for the current five model categories, stored under examples/xxx/model_training/special/npu_scripts, and provide training startup instructions in the document GPU_Support.md.
  2. Replace hardcoded "npu:0" with get_device_name().

@gemini-code-assist gemini-code-assist bot commented

Summary of Changes

Hello @Feng0w0, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly expands and refines the project's support for Ascend NPU devices, particularly focusing on training workflows. It introduces a new collection of NPU-optimized training scripts for various models, ensuring better performance and compatibility. Concurrently, the underlying device management code has been improved for more dynamic NPU detection, and the user documentation has been thoroughly updated to provide clear guidance on both NPU inference and training configurations, making it easier for users to leverage NPU hardware.

Highlights

  • NPU Training Scripts: Added a comprehensive set of NPU-specific training script samples for various model categories (Flux, Qwen, WanVideo, Z-Image) under examples/xxx/model_training/special/npu_scripts. These scripts include optimized environment variables and model-specific parameters for efficient NPU training.
  • Improved NPU Device Handling: Enhanced the core device handling logic by replacing hardcoded 'npu:0' references with dynamic get_device_name() calls and IS_NPU_AVAILABLE checks in diffsynth/core/vram/layers.py and diffsynth/diffusion/base_pipeline.py. This makes NPU device detection and memory management more robust; a minimal sketch of the pattern follows these highlights.
  • Updated Documentation for NPU: The English and Chinese GPU_Support.md documentation has been updated to include detailed instructions for NPU training. This covers the location of new training scripts, recommended environment variables (PYTORCH_NPU_ALLOC_CONF, CPU_AFFINITY_CONF), and specific parameters for certain models, alongside an updated inference example.
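
For orientation, here is a minimal sketch of the availability check and device-name resolution pattern described above. The torch_npu import is the real Ascend plugin, but IS_NPU_AVAILABLE and get_device_name() are reconstructions of the PR's helpers, not the actual diffsynth code:

```python
# Sketch only: mirrors the IS_NPU_AVAILABLE / get_device_name() pattern
# described in this PR; the real helpers live in diffsynth and may differ.
import importlib.util
import torch

IS_NPU_AVAILABLE = False
if importlib.util.find_spec("torch_npu") is not None:
    import torch_npu  # noqa: F401 -- importing registers the "npu" backend on torch
    IS_NPU_AVAILABLE = torch.npu.is_available()

def get_device_name() -> str:
    # Resolve the bare "npu" alias to a concrete device such as "npu:0"
    # instead of hardcoding the index. Only meaningful when an NPU is present.
    return f"npu:{torch.npu.current_device()}"
```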

Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
|---------|---------|-------------|
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for NPU training by adding example scripts and updating documentation. It also refactors device name handling to replace hardcoded values. My review identified a few critical issues. The new logic for determining the device for VRAM checks is flawed and can cause errors when running on systems with both CUDA and NPU devices. Additionally, one of the new NPU training scripts contains a path typo that will cause it to fail. I've also pointed out a couple of minor issues in the documentation and another training script. Please address the critical issues to ensure the correctness of the changes.


```diff
 def check_free_vram(self):
-    device = self.computation_device if self.computation_device != "npu" else "npu:0"
+    device = self.computation_device if not IS_NPU_AVAILABLE else get_device_name()
```

critical

The current logic for determining the device is incorrect. If IS_NPU_AVAILABLE is true, it will always use get_device_name() to get an NPU device, even if self.computation_device is set to a CUDA device. This will cause getattr(torch, self.computation_device_type).mem_get_info(device) to fail when self.computation_device_type is 'cuda' but device is an NPU device string.

The logic should only use get_device_name() when the computation device is specifically 'npu' to resolve it to a full device name like 'npu:0', while preserving other device specifications like 'cuda' or 'npu:1'.

Suggested change
```diff
-    device = self.computation_device if not IS_NPU_AVAILABLE else get_device_name()
+    device = get_device_name() if self.computation_device == "npu" else self.computation_device
```
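
To make the failure mode concrete, here is a standalone sketch of the corrected lookup. The mem_get_info call is the real torch API; get_device_name() is the assumed helper from the sketch earlier in this thread, and the function name free_vram_bytes is illustrative:

```python
# Sketch of the corrected resolution, assuming get_device_name() returns a
# fully qualified NPU device string such as "npu:0".
import torch

def free_vram_bytes(computation_device: str, computation_device_type: str) -> int:
    # Expand only the bare "npu" alias; explicit devices such as "cuda",
    # "cuda:1", or "npu:1" pass through unchanged, so the backend module
    # looked up below always matches the device string.
    device = get_device_name() if computation_device == "npu" else computation_device
    # e.g. torch.cuda.mem_get_info("cuda:0") -> (free_bytes, total_bytes);
    # handing torch.cuda an "npu:0" string is exactly the mixed-device bug
    # described above.
    free_bytes, _total = getattr(torch, computation_device_type).mem_get_info(device)
    return free_bytes
```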


```diff
 def get_vram(self):
-    device = self.device if self.device != "npu" else "npu:0"
+    device = self.device if not IS_NPU_AVAILABLE else get_device_name()
```

critical

The current logic for determining the device is incorrect. If IS_NPU_AVAILABLE is true, it will always use get_device_name() to get an NPU device, even if self.device is set to a CUDA device. This will cause getattr(torch, self.device_type).mem_get_info(device) to fail when self.device_type is 'cuda' but device is an NPU device string.

The logic should only use get_device_name() when the device is specifically 'npu' to resolve it to a full device name like 'npu:0', while preserving other device specifications like 'cuda' or 'npu:1'.

Suggested change
```diff
-    device = self.device if not IS_NPU_AVAILABLE else get_device_name()
+    device = get_device_name() if self.device == "npu" else self.device
```

```bash
  --task "sft:data_process"

accelerate launch examples/qwen_image/model_training/train.py \
  --dataset_base_path "./models/train/Qwen-Image-LoRA-splited-cache" \
```

critical

There appears to be a typo in the dataset_base_path. The first part of the script saves the processed data to ./models/train/Qwen-Image-Edit-2509-LoRA-splited-cache, but the second part attempts to load from ./models/train/Qwen-Image-LoRA-splited-cache. The path is missing -Edit-2509, which will cause the script to fail.

Suggested change
```diff
-  --dataset_base_path "./models/train/Qwen-Image-LoRA-splited-cache" \
+  --dataset_base_path "./models/train/Qwen-Image-Edit-2509-LoRA-splited-cache" \
```

````markdown
```

### Training
NPU startup script samples have been added for each type of model,the scripts are stored in the `examples/xxx/special/npu_scripts`, for examples `examples/wanvideo/model_training/special/npu_scripts/Wan2.2-T2V-A14B-NPU.sh`.
````

medium

There is a grammatical error in this sentence. 'for examples' should be 'for example'.

Suggested change
```diff
-NPU startup script samples have been added for each type of model,the scripts are stored in the `examples/xxx/special/npu_scripts`, for examples `examples/wanvideo/model_training/special/npu_scripts/Wan2.2-T2V-A14B-NPU.sh`.
+NPU startup script samples have been added for each type of model,the scripts are stored in the `examples/xxx/special/npu_scripts`, for example `examples/wanvideo/model_training/special/npu_scripts/Wan2.2-T2V-A14B-NPU.sh`.
```

```markdown
#### Parameters for specific models
| Model | Parameter | Note |
|-----------|---------------------------|-------------------|
| Wan 14B系列 | --initialize_model_on_cpu | The 14B model needs to be initialized on the CPU |
```

medium

The model name 'Wan 14B系列' is in Chinese, but this is an English document. It should be translated to 'Wan 14B series' for consistency.

Suggested change
```diff
-| Wan 14B系列 | --initialize_model_on_cpu | The 14B model needs to be initialized on the CPU |
+| Wan 14B series | --initialize_model_on_cpu | The 14B model needs to be initialized on the CPU |
```
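
Taken together, a hypothetical npu_scripts launcher might look like the sketch below. The environment variable names come from the updated GPU_Support.md and --initialize_model_on_cpu comes from the table above, but the values, dataset path, and the wanvideo train.py path are illustrative assumptions, not the PR's actual sample:

```bash
#!/bin/bash
# Hypothetical sketch in the style of the new npu_scripts samples;
# values and paths below are illustrative, not copied from the PR.

# NPU allocator and CPU-affinity tuning recommended in GPU_Support.md
# (these specific values are assumptions; check the docs for the real ones).
export PYTORCH_NPU_ALLOC_CONF=expandable_segments:True
export CPU_AFFINITY_CONF=1

# --initialize_model_on_cpu: required for the Wan 14B series per the table above.
accelerate launch examples/wanvideo/model_training/train.py \
  --dataset_base_path "./data/example_dataset" \
  --initialize_model_on_cpu
```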

```diff
@@ -0,0 +1,17 @@
+# This example is tested on 8*A100
```

medium

The comment indicates that this script was tested on '8*A100', which are NVIDIA GPUs. Since this script is intended for NPU training, this comment is misleading. Please update it to reflect the correct NPU testing environment.

Suggested change
```diff
-# This example is tested on 8*A100
+# This example is for NPU training.
```
