fix: Delay updating schedule until after fields have been initialized #2317
Conversation
Commits:
- To test the race condition in action (this doesn't work because we're doing a bunch of stuff in the pre-test initialization, so we miss our window)
- This catches the bug in borgbase#2316
- Forgot about shared state. Damn singletons... Also use event times in case we want to check those later
- This should fix borgbase#2316
- It's our own updates that are causing this, so we can just set a flag for ourselves and check it
- This wasn't causing any issues, I don't think, but it was confusing
checked -> toggled I think is an improvement, but not strictly related
m3nu left a comment:
🔍 Code Review
PR: #2317 - fix: Delay updating schedule until after fields have been initialized
Branch: debug-scheduler-race → master
Changes: +106 / -12 lines across 6 files
📋 Summary
This PR correctly fixes a race condition where schedule fields were being reset to 0 during UI initialization, causing spurious catchup runs. The fix uses a flag to defer writing to the profile until after all fields have been populated from the profile.
✅ What's Good
- Root cause correctly identified and addressed
- Clean implementation of the `loaded` signal in `MainWindow` for coordinating initialization
- Good test coverage with parameterized test cases covering multiple scenarios
- Test fixture design (`window_load`) allows fine-grained control over initialization order
🔍 Review Details
7 inline comment(s) added.
| Severity | Count |
|---|---|
| 🔴 Error | 0 |
| 🟡 Warning | 2 |
| 🔵 Info | 2 |
| 💡 Suggestion | 2 |
| ✅ Good | 1 |
📊 Verdict: COMMENT 💬
Overall this is a solid fix for a tricky race condition. The suggestions below are minor and mostly about code style and cleanup.
🤖 Reviewed by Claude Code
The code under review:
```python
    next suitable backup time.
    """
    profile = BackupProfileModel.get_or_none(id=profile_id)
    logger.debug('Profile: %s, %d %d', str(profile), profile.schedule_fixed_hour, profile.schedule_fixed_minute)
```
🔵 [debug-logging] Debug statement for review
This debug logging statement appears to be added for debugging purposes. Consider removing it before merging or ensure it's at the appropriate log level for production.
Note: If you want to keep debug logging, this is fine - just wanted to confirm it's intentional for ongoing diagnostics rather than leftover debug code.
The code under review:
```python
        # unknown schedule mode
        raise ValueError("Unknown schedule mode '{}'".format(profile.schedule_mode))

    logger.debug('Last run time: %s', last_time)
```
🔵 [debug-logging] Debug statement for review
Same as above - another debug logging statement. If these are intended to help with future schedule debugging, that's perfectly fine. Just flagging in case they were temporary.
The code under review:
```python
    super().__init__(parent)
    self.app = QApplication.instance()
    self.setupUi(self)
    self.hasPopulatedScheduleFields = False
```
🟡 [naming-convention] Non-PEP8 variable name
The codebase uses snake_case for instance variables. Consider renaming to _has_populated_schedule_fields for consistency with Python conventions and to indicate it's a private implementation detail.
Suggested fix:
```python
self._has_populated_schedule_fields = False
```

The code under review:
```python
    # Listen for events
    self.app.profile_changed_event.connect(self.populate_from_profile)

def on_scheduler_change(self, _):
```
💡 [code-clarity] Consider adding a brief comment
Good fix! The logic correctly moves the profile updates outside the for-loop so all fields are saved regardless of which radio button is checked. A brief comment explaining why the flag check is needed could help future maintainers understand the race condition being prevented.
Example:
```python
def on_scheduler_change(self, _):
    # Prevent writing default values back to profile during initial field population
    if not self._has_populated_schedule_fields:
        return
```

The code under review:
```python
    SchemaVersion,
]
```
🟡 [documentation] Missing docstring
The load_window function should have a docstring explaining its purpose since it's used as a fixture helper. The existing comment above is good but could be converted to a proper docstring.
Suggested fix:
```python
def load_window(qapp: vorta.application.VortaApp):
    """
    Reload the main window of the given application.

    Used to repopulate fields after loading mock data in tests.
    """
    qapp.main_window.deleteLater()
    del qapp.main_window
    qapp.main_window = MainWindow(qapp)
```

The code under review:
```python
    # We also reload because app init does that (via `set_borg_details_result`)
    qapp.scheduler = VortaScheduler()
    qapp.scheduler.reload_all_timers()
    window_load()
```
💡 [resource-management] Consider cleaning up original scheduler
Replacing qapp.scheduler creates a new VortaScheduler instance, but the original scheduler may still have active timers or connections. Consider explicitly cleaning up the original scheduler first, or verify that the garbage collector handles this correctly.
Potential improvement:
```python
# Clean up existing scheduler resources
old_scheduler = qapp.scheduler
# ... any cleanup if needed ...
qapp.scheduler = VortaScheduler()
```
This might already be handled by the test teardown, just flagging for consideration.
The code under review:
```python
class MainWindow(MainWindowBase, MainWindowUI):
    loaded = Signal()
```
✅ [good-practice] Clean signal implementation
Nice addition of the `loaded` signal! This provides a clean way to coordinate actions that need to happen after the window is fully initialized. The placement at the end of `__init__`, after `set_icons()`, ensures all setup is complete before the signal fires.
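For illustration, the coordination pattern can be sketched without Qt. The `Signal` class below is a deliberately tiny stand-in for Qt's signal class, and all names are hypothetical; in real Qt code the signal is declared at class level and listeners connect after construction, so the constructor-callback here exists only to make the single-file sketch runnable:

```python
class Signal:
    """Tiny stand-in for a Qt signal (illustration only)."""

    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self):
        for slot in self._slots:
            slot()


class MainWindow:
    def __init__(self, on_loaded=None):
        self.loaded = Signal()
        if on_loaded is not None:
            self.loaded.connect(on_loaded)
        self._populate_fields()  # all setup happens first ...
        self.loaded.emit()       # ... and the signal fires last

    def _populate_fields(self):
        self.fields_ready = True


seen = []
win = MainWindow(on_loaded=lambda: seen.append('loaded'))
print(seen)  # ['loaded'] -- the slot ran only after initialization finished
```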
Resolved conflicts in test files:
- tests/unit/conftest.py: Combined PR's `window_load` fixture with upstream's signal disconnect and keyring setting fixes
- tests/network_manager/test_darwin.py: Used upstream's `pytestmark` approach for skipping non-darwin tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Description
This is a fix for #2316. It changes the schedule field handling so that updates to the schedule fields don't propagate to the profile until after the fields are initialized (that is, loaded from the profile).
This resolves a bug where schedule fields were briefly reset to 0 because all field values were copied to the profile after only the first (or second) field had been updated from the profile. That premature write raced with the initial schedule check and caused spurious "catch-up" runs: Vorta calculated the next run to be at midnight (00:00) regardless of the user's settings, and since midnight is the very beginning of the day, a catch-up run was always generated.
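The deferral mechanism can be sketched as follows; the class and field names below are simplified, Qt-free stand-ins, not Vorta's actual widget code:

```python
class ScheduleWidget:
    """Sketch of the guard-flag pattern: change handlers refuse to
    write back to the profile until populate_from_profile() finishes."""

    def __init__(self, profile):
        self.profile = profile
        self._has_populated_schedule_fields = False
        self.hour = 0    # UI fields start at widget defaults
        self.minute = 0

    def populate_from_profile(self):
        # In Qt, each assignment below fires a change signal immediately.
        self.hour = self.profile['schedule_fixed_hour']
        self.on_scheduler_change()
        self.minute = self.profile['schedule_fixed_minute']
        self.on_scheduler_change()
        self._has_populated_schedule_fields = True  # writes are safe from now on

    def on_scheduler_change(self):
        if not self._has_populated_schedule_fields:
            return  # still initializing; don't clobber the profile with defaults
        self.profile['schedule_fixed_hour'] = self.hour
        self.profile['schedule_fixed_minute'] = self.minute


profile = {'schedule_fixed_hour': 18, 'schedule_fixed_minute': 30}
w = ScheduleWidget(profile)
w.populate_from_profile()
print(profile)  # unchanged: {'schedule_fixed_hour': 18, 'schedule_fixed_minute': 30}

w.hour = 7               # a genuine user edit after initialization
w.on_scheduler_change()
print(profile['schedule_fixed_hour'])  # 7
```

Without the flag, the first `on_scheduler_change()` call would write the still-default `minute = 0` back into the profile, which is exactly the half-initialized state the scheduler then raced against.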
I also modified the `init_db` fixture in tests slightly to support deferring the window load to the test itself. Unless a test uses the newly introduced `window_load` fixture, `init_db` functions as it did before.

Related Issue
Issue #2316
Motivation and Context
Solves a bug where fixed-time schedules would always generate a catch-up run if the application was started before the scheduled time, which also caused the scheduled runs to not take place.
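To see why zeroed fields always trigger a catch-up: with hour and minute briefly reset to 0, the "next" scheduled run computes to midnight, which has already passed by the time the startup check runs. A minimal sketch of that logic (hypothetical helper functions, not Vorta's scheduler code):

```python
from datetime import datetime, timedelta


def next_fixed_run(after, hour, minute):
    """Next occurrence of a fixed daily schedule at hour:minute after `after`."""
    candidate = after.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= after:
        candidate += timedelta(days=1)
    return candidate


def needs_catchup(last_run, now, hour, minute):
    """A catch-up is due if a scheduled run fell between last_run and now."""
    return next_fixed_run(last_run, hour, minute) <= now


now = datetime(2024, 3, 2, 8, 0)        # app opened at 8 AM
last_run = datetime(2024, 3, 1, 18, 0)  # last backup ran at 6 PM the day before

# Correct fields (18:00): next run is today at 6 PM, still ahead -> no catch-up.
print(needs_catchup(last_run, now, 18, 0))  # False

# Fields briefly reset to 0 during init: "next" run is last midnight,
# already in the past -> spurious catch-up.
print(needs_catchup(last_run, now, 0, 0))   # True
```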
How Has This Been Tested?
I added a unit test specifically to test this bug, initially failing and then passing after the changes. I also manually tested as follows:
- Used `faketime` to simulate re-opening the app the next morning
- Used `faketime` to simulate re-opening the app again at 10pm that same day

As expected, there was no catch-up run when opening the app in the morning, and it scheduled the 6pm run as anticipated. Opening the app at (simulated) 10pm did do a catch-up run, as it was past the scheduled run time.
I also ran all other tests, which remain passing, except for the `test_scheduler_create_backup` test, which fails on `main` on my machine too.

Types of changes
Checklist:
I provide my contribution under the terms of the license of this repository and I affirm the Developer Certificate of Origin.