sweepbatcher: harden AddSweep against ctx closure#1130
hieblmi wants to merge 1 commit into lightninglabs:master
Conversation
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request addresses a race condition in the Batcher.AddSweep method.
Code Review
This pull request enhances the AddSweep method in the Batcher to ensure that context cancellation or batcher shutdown errors are prioritized when internal operations fail. It introduces a helper method addSweepExitErr to unify this logic and adds a unit test to verify the behavior. Feedback suggests simplifying the repetitive error handling blocks by re-assigning the error variable and streamlining the ctx.Done() case, as the helper function is guaranteed to return an error in that scenario.
@claude review this
Code Review: sweepbatcher: harden AddSweep against ctx closure

Overview

This PR fixes a real shutdown race in AddSweep.

Issues

1. Dead code in the final case:

```go
case <-ctx.Done():
	err := b.addSweepExitErr(ctx)
	if err != nil {
		return err
	}
	return ctx.Err() // ← unreachable
```

Inside the `<-ctx.Done()` case, `addSweepExitErr` is guaranteed to return a non-nil error, so the trailing `return ctx.Err()` is unreachable.

2. The fix covers the guarded setup calls, but not the presigned path:

```go
if sweep.presigned && !fullyConfirmed {
	err := ensurePresigned(ctx, sweeps, b.presignedHelper, ...)
	if err != nil {
		// no addSweepExitErr check here
		return fmt.Errorf(...)
	}
}
```

The presigned path is less common, but it carries the same potential to leak backend errors during shutdown.

3. Regression test doesn't run against the real Postgres backend

The new test runs only against mocks, while the suite's runTests helper supports both backends:

```go
func runTests(t *testing.T, testFn func(t *testing.T, store testStore, batcherStore testBatcherStore)) {
	t.Run("mocks", ...)
	t.Run("loopdb", ...) // real Postgres
}
```

Since the original failure was triggered specifically by Postgres returning `driver: bad connection`, covering the real backend would strengthen the regression test.

4. Unnamed context in the mock fetcher:

```go
func (f *cancelingSweepFetcher) FetchSweep(context.Context, lntypes.Hash, wire.OutPoint) (*SweepInfo, error) {
```

The context is unnamed and discarded. A one-line comment explaining that the mock simulates Postgres returning an error after the caller cancels would improve readability for future maintainers.

Minor observations
Summary

The fix is correct and addresses the root cause. Three items worth addressing before merge: the unreachable return, the unguarded presigned path, and Postgres-backed test coverage.
@claude review this again
Code Review: sweepbatcher: harden AddSweep against ctx closure

Overview

This PR correctly fixes a real shutdown race: when a caller cancels their context during AddSweep, backend-specific errors could escape instead of a plain cancellation or shutdown error.

Issues

1. Misleading error wrapping when substituting the exit error (minor-to-medium)

The current pattern for every error site is:

```go
if err != nil {
	exitErr := b.addSweepExitErr(ctx)
	if exitErr != nil {
		err = exitErr
	}
	return fmt.Errorf("someOperation failed: %w", err)
}
```

When the exit error is substituted, it is still wrapped as "someOperation failed: context canceled", which misattributes the cancellation to the failed operation.

The cleaner pattern (which also removes the need to re-assign `err`):

```go
if err != nil {
	if exitErr := b.addSweepExitErr(ctx); exitErr != nil {
		return exitErr
	}
	return fmt.Errorf("someOperation failed: %w", err)
}
```

This is also what the final `<-ctx.Done()` case effectively does.
@claude review this again
Code Review: sweepbatcher: harden AddSweep against ctx closure

Overview

This PR fixes a real, observable shutdown race in AddSweep.

Correctness

The coverage of error sites is complete.
Priority order in the substituted exit errors is preserved.

Minor observation — the final case:

```go
case <-ctx.Done():
	return b.addSweepExitErrIfAny(ctx)
```

Because this case only fires once the context is done, the helper is guaranteed to return a non-nil error here, so the single return suffices.

Possible nit: unchecked race between successful operations. The guards are only installed after failing calls. If a context cancellation happens between two successful calls, it is not observed until a later call fails.

Test Coverage

The new test installs a fetcher mock that cancels the caller's context and then returns a backend error:

```go
func (f *cancelingSweepFetcher) FetchSweep(...) (*SweepInfo, error) {
	f.cancel()
	return nil, driver.ErrBadConn
}
```

This is clean and correctly asserts both expected outcomes.
The test follows project conventions.

One improvement worth considering: the other four guarded error paths are not exercised by a similar regression test.

Style / Minor
Summary

The fix is correct, well-scoped, and directly addresses the production-observed bug. The test provides meaningful regression coverage. No blocking issues. The one optional improvement is adding a comment to the test about which code path it covers.

Approved with the optional suggestion above.
This PR fixes a shutdown race in `Batcher.AddSweep` where caller cancellation during setup could surface backend-specific errors, such as Postgres returning `driver: bad connection`, instead of a normal cancellation/shutdown error.

`AddSweep` now re-checks the batcher/caller shutdown state when setup calls fail and returns `context.Canceled` or `ErrBatcherShuttingDown` when appropriate. A regression test covers cancellation during sweep fetching so driver-level errors do not escape during normal shutdown.