
Conversation

HydrogenSulfate (Collaborator) commented Aug 13, 2025

fix eta computation code

Summary by CodeRabbit

  • Bug Fixes
    • Improved ETA accuracy in training/validation progress logs by adapting calculations to recent step intervals, reducing misleading estimates early in runs.
    • Consistent behavior across both backends, providing more reliable remaining-time estimates without changing any public interfaces.

Copilot AI review requested due to automatic review settings August 13, 2025 09:57
HydrogenSulfate changed the title from "pd/pt: fix eta" to "fix(pt/pd): fix eta computation" Aug 13, 2025
coderabbitai bot (Contributor) commented Aug 13, 2025

📝 Walkthrough

Walkthrough

Adjusted ETA calculation in log_loss_valid for two training modules to use a dynamic divisor based on min(disp_freq, display_step_id - start_step). No public APIs changed.

Changes

Cohort / File(s) Summary
ETA computation update
deepmd/pd/train/training.py, deepmd/pt/train/training.py
Changed the ETA divisor in log_loss_valid from disp_freq to min(disp_freq, display_step_id - start_step). No other logic or signatures changed.
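
For context, a minimal standalone sketch of the corrected formula (the function name eta_seconds and the numbers in the example are invented for illustration; in the actual code the values come from the trainer's attributes):

    def eta_seconds(num_steps, display_step_id, start_step, disp_freq, train_time):
        # Divide by the number of steps actually timed in the current display
        # interval rather than by the nominal disp_freq.
        timed_steps = min(disp_freq, display_step_id - start_step)
        return int((num_steps - display_step_id) / timed_steps * train_time)

    # Example: 10,000 total steps, disp_freq = 100, first display at step 1
    # after train_time = 2.0 s of wall time.
    # Old divisor: (10000 - 1) / 100 * 2.0 ≈ 200 s   (far too optimistic)
    # New divisor: (10000 - 1) / 1   * 2.0 = 19998 s (matches the per-step cost)
    print(eta_seconds(10_000, 1, 0, 100, 2.0))  # 19998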

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs

  • deepmd/deepmd-kit#4806: Also adjusts timing/ETA logic in log_loss_valid by changing how timed steps are accumulated.
  • deepmd/deepmd-kit#4725: Modifies ETA handling within log_loss_valid, including denominator usage and logger integration.

Suggested labels

Python

Suggested reviewers

  • njzjz

coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
deepmd/pd/train/training.py (1)

921-924: ETA fix is correct; add a defensive guard and a tiny readability improvement

Using min(disp_freq, display_step_id - start_step) correctly accounts for partial display intervals and stabilizes ETA early and near the end. To be extra safe against misconfiguration (e.g., disp_freq accidentally set to 0) and to improve readability, compute the interval once and guard it to be at least 1.

Apply this diff:

-                    eta = int(
-                        (self.num_steps - display_step_id)
-                        / min(self.disp_freq, display_step_id - self.start_step)
-                        * train_time
-                    )
+                    interval = max(1, min(self.disp_freq, display_step_id - self.start_step))
+                    eta = int((self.num_steps - display_step_id) / interval * train_time)

Additional note:

  • Consider asserting disp_freq > 0 at config parse time to prevent modulo-by-zero in display condition and future regressions.
  • Optional: align average training-time accounting with PT’s approach (track timed_steps and add min(disp_freq, display_step_id - start_step) each time) to avoid skew in the last, shorter interval.
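
As a rough sketch of that optional alignment (timed_steps and total_train_time are placeholder names here and may not match the PD trainer's actual attributes), the accumulation inside the display branch could look like:

    # Sketch only: steps actually covered by train_time in this display window.
    interval = max(1, min(self.disp_freq, display_step_id - self.start_step))

    # ETA from the steps that were actually timed.
    eta = int((self.num_steps - display_step_id) / interval * train_time)

    # Accumulate timed steps so the average per-step time stays correct even
    # when the first or last window is shorter than disp_freq.
    self.timed_steps += interval
    self.total_train_time += train_time
    mean_step_time = self.total_train_time / self.timed_steps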
deepmd/pt/train/training.py (1)

1004-1007: ETA denominator fix looks good; guard the interval and improve readability

This change fixes ETA spikes when the first/last display window is shorter than disp_freq. For robustness and clarity, compute a guarded interval once and reuse it.

Apply this diff:

-                    eta = int(
-                        (self.num_steps - display_step_id)
-                        / min(self.disp_freq, display_step_id - self.start_step)
-                        * train_time
-                    )
+                    interval = max(1, min(self.disp_freq, display_step_id - self.start_step))
+                    eta = int((self.num_steps - display_step_id) / interval * train_time)

Note:

  • You already maintain timed_steps consistently with the same min(...) logic below; this keeps ETA and average-time metrics conceptually aligned across PT/PD.
  • As a separate hardening step, consider validating disp_freq > 0 at config load to avoid modulo-by-zero in the display condition.
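
Purely as an illustration of the boundary cases the max(1, ...) guard covers (the step values and disp_freq below are arbitrary):

    def guarded_interval(disp_freq, display_step_id, start_step):
        # Steps covered by the current display window, never below 1.
        return max(1, min(disp_freq, display_step_id - start_step))

    print(guarded_interval(100, 1, 0))    # 1   -> first display right after a (re)start
    print(guarded_interval(100, 500, 0))  # 100 -> steady state: a full disp_freq window
    print(guarded_interval(0, 500, 0))    # 1   -> disp_freq misconfigured to 0 no longer
                                          #        divides by zero in the ETA (the modulo in
                                          #        the display condition still needs the
                                          #        config-time check suggested above)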
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR, between commits cefce47 and e23e6b6.

📒 Files selected for processing (2)
  • deepmd/pd/train/training.py (1 hunks)
  • deepmd/pt/train/training.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (29)
  • GitHub Check: Test Python (6, 3.9)
  • GitHub Check: Test Python (5, 3.9)
  • GitHub Check: Test Python (1, 3.12)
  • GitHub Check: Test Python (1, 3.9)
  • GitHub Check: Test Python (4, 3.9)
  • GitHub Check: Test Python (6, 3.12)
  • GitHub Check: Test Python (2, 3.9)
  • GitHub Check: Test Python (3, 3.12)
  • GitHub Check: Test Python (4, 3.12)
  • GitHub Check: Test Python (5, 3.12)
  • GitHub Check: Test Python (3, 3.9)
  • GitHub Check: Test Python (2, 3.12)
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build wheels for cp310-manylinux_aarch64
  • GitHub Check: Build wheels for cp311-win_amd64
  • GitHub Check: Build wheels for cp311-macosx_arm64
  • GitHub Check: Build wheels for cp311-manylinux_x86_64
  • GitHub Check: Build wheels for cp311-macosx_x86_64
  • GitHub Check: Analyze (python)
  • GitHub Check: Analyze (c-cpp)
  • GitHub Check: Build C++ (cpu, cpu)
  • GitHub Check: Build C library (2.14, >=2.5.0,<2.15, libdeepmd_c_cu11.tar.gz)
  • GitHub Check: Build C library (2.18, libdeepmd_c.tar.gz)
  • GitHub Check: Build C++ (clang, clang)
  • GitHub Check: Build C++ (cuda, cuda)
  • GitHub Check: Build C++ (rocm, rocm)
  • GitHub Check: Test C++ (false)
  • GitHub Check: Test C++ (true)
  • GitHub Check: Build C++ (cuda120, cuda)

Copilot AI (Contributor) left a comment

Copilot encountered an error and was unable to review this pull request. You can try again by re-requesting a review.

njzjz added this pull request to the merge queue Aug 13, 2025
Merged via the queue into deepmodeling:devel with commit 7601889 Aug 13, 2025
51 checks passed
njzjz linked an issue Aug 14, 2025 that may be closed by this pull request
HydrogenSulfate deleted the fix_eta branch October 10, 2025 06:09

Development

Successfully merging this pull request may close these issues:

  • [BUG] eta is incorrect in step 1

2 participants