Conversation


@cincodenada cincodenada commented Dec 17, 2025

Description

This is a fix for #2316. It changes the schedule field handling so that updates to the schedule fields don't propagate to the profile until after the fields are initialized (that is, loaded from the profile).

This resolves a bug where we briefly reset schedule fields to 0: the change handler copied all field values to the profile after only the first (or second) field had been loaded from it. That transient state raced with the initial schedule check, so Vorta calculated the next run to be at midnight (00:00) regardless of the user's settings. Since midnight is the very beginning of the day, this always generated a spurious "catchup" run.
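The pre-fix race can be shown in a minimal, Qt-free sketch (the names here are hypothetical stand-ins, not Vorta's actual code): the change handler fires after each field is populated and unconditionally copies every widget value back to the profile, so the profile briefly holds the default 0 for fields not yet loaded.

```python
# Sketch of the pre-fix race (hypothetical names; no Qt involved).
saved = {'hour': 18, 'minute': 30}   # values stored in the profile
profile = dict(saved)
fields = {'hour': 0, 'minute': 0}    # widget defaults before population

def on_scheduler_change():
    # Pre-fix handler: copies ALL widget values to the profile,
    # even while some still hold their defaults.
    profile.update(fields)

snapshots = []
for name in ('hour', 'minute'):
    fields[name] = saved[name]       # populating a field fires the handler
    on_scheduler_change()
    snapshots.append(dict(profile))

# snapshots[0] is the transient corrupt state a concurrent schedule check
# could read: {'hour': 18, 'minute': 0}; only snapshots[1] restores 18:30.
```

A schedule check that reads the profile between the two handler calls sees a partially-zeroed time, which is exactly the window the fix closes.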

I also modified the init_db fixture in tests slightly to support deferring the window load to the test itself. Unless a test uses the newly-introduced window_load fixture, init_db functions as it did before.
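The deferred-load idea can be sketched without Qt or pytest (FakeApp and the function names below are hypothetical illustrations, not Vorta's actual API): instead of building the window during fixture setup, the fixture hands the test a callable, so the test decides when initialization happens.

```python
class FakeApp:
    """Hypothetical stand-in for the application object."""
    def __init__(self):
        self.main_window = None

    def create_main_window(self):
        self.main_window = object()  # would be MainWindow(self) in Vorta

def make_window_loader(qapp):
    """Return a callable the test invokes once its mock data is in place,
    mirroring the window_load fixture's deferred initialization."""
    def load():
        qapp.create_main_window()
        return qapp.main_window
    return load

# Usage: the window does not exist until the test asks for it.
app = FakeApp()
window_load = make_window_loader(app)
assert app.main_window is None   # setup finished, window still deferred
window = window_load()           # test loads mock data first, then calls this
assert window is app.main_window
```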

Related Issue

Issue #2316

Motivation and Context

Fixes a bug where fixed-time schedules would always generate a catch-up run if the application was started before the scheduled time, which also prevented the scheduled runs themselves from taking place.

How Has This Been Tested?

I added a unit test specifically for this bug, which failed before the changes and passes after them. I also tested manually as follows:

  • Set a fixed-time schedule to run at 6pm
  • Ran a backup to start the schedule
  • Used the Linux utility faketime to simulate re-opening the app the next morning
  • Used faketime to simulate re-opening the app again at 10pm that same day

As expected, there was no catch-up run when opening the app in the morning, and it scheduled the 6pm run as anticipated. Opening the app at (simulated) 10pm did do a catch-up run, as it was past the scheduled run time.
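The manual scenario can be sanity-checked with a small datetime sketch (my own simplification, not Vorta's actual scheduling code): a catch-up run is due when the next scheduled occurrence after the last run is already in the past.

```python
from datetime import datetime, time, timedelta

def next_run_after(moment, scheduled):
    """First occurrence of the fixed time strictly after `moment`."""
    candidate = datetime.combine(moment.date(), scheduled)
    if candidate <= moment:
        candidate += timedelta(days=1)
    return candidate

def needs_catchup(last_run, now, scheduled=time(18, 0)):
    return next_run_after(last_run, scheduled) <= now

last_run = datetime(2025, 12, 16, 18, 5)  # backup ran at 6:05pm
# Next morning at 9am: the 6pm slot hasn't passed yet, so no catch-up.
assert not needs_catchup(last_run, datetime(2025, 12, 17, 9, 0))
# Same day at 10pm: the 6pm run was missed, so a catch-up run is due.
assert needs_catchup(last_run, datetime(2025, 12, 17, 22, 0))
# The bug: fields reset to 0 made the schedule look like 00:00, so a
# morning start always looked like a missed midnight run.
assert needs_catchup(last_run, datetime(2025, 12, 17, 9, 0), scheduled=time(0, 0))
```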

I also ran the full test suite, which still passes, apart from the test_scheduler_create_backup test, which fails on main on my machine as well.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)

Checklist:

  • I have read the CONTRIBUTING guide.
  • My code follows the code style of this project.
  • My change requires a change to the documentation.
  • I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • All new and existing tests passed.

I provide my contribution under the terms of the license of this repository and I affirm the Developer Certificate of Origin.

  • To test the race condition in action
  • This doesn't work because we're doing a bunch of stuff in the pre-test initialization, so we miss our window
  • Forgot about shared state. Damn singletons...
  • Also use event times in case we want to check those later
  • It's our own updates that are causing this, so we can just set a flag for ourselves and check it
  • This wasn't causing any issues I don't think, but it was confusing
@cincodenada cincodenada changed the title Debug scheduler race Delay updating schedule until after fields have been initialized Dec 17, 2025
@cincodenada cincodenada changed the title Delay updating schedule until after fields have been initialized fix: Delay updating schedule until after fields have been initialized Dec 17, 2025
@cincodenada cincodenada force-pushed the debug-scheduler-race branch 2 times, most recently from 3cf214f to 0c65ae4 Compare December 17, 2025 07:25
checked -> toggled is, I think, an improvement, but not strictly related

@m3nu m3nu left a comment


🔍 Code Review

PR: #2317 - fix: Delay updating schedule until after fields have been initialized
Branch: debug-scheduler-race → master
Changes: +106 / -12 lines across 6 files

📋 Summary

This PR correctly fixes a race condition where schedule fields were being reset to 0 during UI initialization, causing spurious catchup runs. The fix uses a flag to defer writing to the profile until after all fields have been populated from the profile.
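In miniature, the deferral flag described above looks like this (a Qt-free sketch with hypothetical names, not Vorta's real classes): change handlers return early until population is complete, and only then do field edits write through to the profile.

```python
class ScheduleTab:
    """Sketch of the deferral-flag pattern; not Vorta's actual class."""
    def __init__(self, profile):
        self.profile = profile
        self.fields = {'hour': 0, 'minute': 0}
        self._has_populated_schedule_fields = False

    def populate_from_profile(self):
        for name in self.fields:
            self.fields[name] = self.profile[name]
            self.on_scheduler_change(None)   # fires once per field, as in Qt
        self._has_populated_schedule_fields = True

    def on_scheduler_change(self, _):
        if not self._has_populated_schedule_fields:
            return                            # still initializing: no write-back
        self.profile.update(self.fields)

tab = ScheduleTab({'hour': 18, 'minute': 30})
tab.populate_from_profile()
assert tab.profile == {'hour': 18, 'minute': 30}  # never clobbered to 0
```

After population, a genuine user edit propagates normally, because the flag is set exactly once at the end of initialization.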

✅ What's Good

  • Root cause correctly identified and addressed
  • Clean implementation of loaded signal in MainWindow for coordinating initialization
  • Good test coverage with parameterized test cases covering multiple scenarios
  • Test fixture design (window_load) allows fine-grained control over initialization order

🔍 Review Details

7 inline comment(s) added.

Severity counts:

  • 🔴 Error: 0
  • 🟡 Warning: 2
  • 🔵 Info: 2
  • 💡 Suggestion: 2
  • ✅ Good: 1

📊 Verdict: COMMENT 💬

Overall this is a solid fix for a tricky race condition. The suggestions below are minor and mostly about code style and cleanup.


🤖 Reviewed by Claude Code

next suitable backup time.
"""
profile = BackupProfileModel.get_or_none(id=profile_id)
logger.debug('Profile: %s, %d %d', str(profile), profile.schedule_fixed_hour, profile.schedule_fixed_minute)

🔵 [debug-logging] Debug statement for review

This debug logging statement appears to be added for debugging purposes. Consider removing it before merging or ensure it's at the appropriate log level for production.

Note: If you want to keep debug logging, this is fine - just wanted to confirm it's intentional for ongoing diagnostics rather than leftover debug code.

# unknown schedule mode
raise ValueError("Unknown schedule mode '{}'".format(profile.schedule_mode))

logger.debug('Last run time: %s', last_time)

🔵 [debug-logging] Debug statement for review

Same as above - another debug logging statement. If these are intended to help with future schedule debugging, that's perfectly fine. Just flagging in case they were temporary.

super().__init__(parent)
self.app = QApplication.instance()
self.setupUi(self)
self.hasPopulatedScheduleFields = False

🟡 [naming-convention] Non-PEP8 variable name

The codebase uses snake_case for instance variables. Consider renaming to _has_populated_schedule_fields for consistency with Python conventions and to indicate it's a private implementation detail.

Suggested fix:

self._has_populated_schedule_fields = False

# Listen for events
self.app.profile_changed_event.connect(self.populate_from_profile)

def on_scheduler_change(self, _):

💡 [code-clarity] Consider adding a brief comment

Good fix! The logic correctly moves the profile updates outside the for-loop so all fields are saved regardless of which radio button is checked. A brief comment explaining why the flag check is needed could help future maintainers understand the race condition being prevented.

Example:

def on_scheduler_change(self, _):
    # Prevent writing default values back to profile during initial field population
    if not self._has_populated_schedule_fields:
        return

SchemaVersion,
]



🟡 [documentation] Missing docstring

The load_window function should have a docstring explaining its purpose since it's used as a fixture helper. The existing comment above is good but could be converted to a proper docstring.

Suggested fix:

def load_window(qapp: vorta.application.VortaApp):
    """
    Reload the main window of the given application.
    
    Used to repopulate fields after loading mock data in tests.
    """
    qapp.main_window.deleteLater()
    del qapp.main_window
    qapp.main_window = MainWindow(qapp)

# We also reload because app init does that (via `set_borg_details_result`)
qapp.scheduler = VortaScheduler()
qapp.scheduler.reload_all_timers()
window_load()

💡 [resource-management] Consider cleaning up original scheduler

Replacing qapp.scheduler creates a new VortaScheduler instance, but the original scheduler may still have active timers or connections. Consider explicitly cleaning up the original scheduler first, or verify that the garbage collector handles this correctly.

Potential improvement:

# Clean up existing scheduler resources
old_scheduler = qapp.scheduler
# ... any cleanup if needed ...
qapp.scheduler = VortaScheduler()

This might already be handled by the test teardown, just flagging for consideration.



class MainWindow(MainWindowBase, MainWindowUI):
loaded = Signal()

✅ [good-practice] Clean signal implementation

Nice addition of the loaded signal! This provides a clean way to coordinate actions that need to happen after the window is fully initialized. The placement at the end of __init__ after set_icons() ensures all setup is complete before the signal fires.

Resolved conflicts in test files:
- tests/unit/conftest.py: Combined PR's window_load fixture with
  upstream's signal disconnect and keyring setting fixes
- tests/network_manager/test_darwin.py: Used upstream's pytestmark
  approach for skipping non-darwin tests

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@m3nu m3nu merged commit 2a3f2aa into borgbase:master Jan 1, 2026
7 of 10 checks passed