File: internal/service/auth_service.go (lines 262, 808, 809, 811, 812, 845, 854)
Project: tinyauth
Severity: BUG • Confidence: high • Slug: other-race-condition
Finding
lockdownMode (lines 808-847) writes auth.lockdownCtx and auth.lockdownCancelFunc (lines 811-812) BEFORE acquiring auth.loginMutex. Because RecordLoginAttempt spawns 'go auth.lockdownMode()' (line 262), and the 'is lockdown already active' check (lines 259-261) runs before the goroutine does, two near-simultaneous RecordLoginAttempt calls that both observe lockdown==nil will each launch a lockdownMode goroutine. The two goroutines race on the unsynchronized lockdownCtx/lockdownCancelFunc field writes, both set auth.lockdown, each sleeps on its own timer, and each eventually sets auth.lockdown=nil. As a result, ClearRateLimitsTestingOnly's call to auth.lockdownCancelFunc() (line 854) may cancel the wrong context, and whichever goroutine's timer fires first clears the lockdown prematurely while the other is still 'active.' This is a genuine data race the Go race detector would flag, and it breaks the state-machine correctness the threat model explicitly ranks above the rate-limit constants.
Recommendation
Move the ctx/cancel field assignments inside the mutex-protected section. After taking the lock in lockdownMode, add a re-check (if auth.lockdown != nil && auth.lockdown.Active { return }, unlocking before the return) so the second goroutine bails out before overwriting state. Consider replacing the goroutine spawn with a sync.Once-style guard so only one lockdown can ever be active.
Revalidation
Verdict: true-positive
Confirmed. lockdownMode (line 808) executes auth.lockdownCtx = ctx and auth.lockdownCancelFunc = cancel at lines 811-812 BEFORE acquiring auth.loginMutex at line 814; that is a Go data race the race detector would flag. The 'is lockdown already active' check in RecordLoginAttempt (lines 259-261) sits under loginMutex, but the goroutine spawned at line 262 sets auth.lockdown only after re-acquiring the mutex inside lockdownMode (lock at line 814, assignment at line 818). So two near-simultaneous RecordLoginAttempt calls (separate critical sections) can both observe lockdown==nil before either goroutine sets it, launching two lockdownMode goroutines. Each goroutine captures time.Until(auth.lockdown.ActiveUntil) from a struct the other goroutine may have already overwritten, then runs an independent timer; the earlier goroutine's timer can clear the lockdown while the later goroutine's window has not yet elapsed. ClearRateLimitsTestingOnly (line 854) calls auth.lockdownCancelFunc(), which holds whichever value was written last without synchronization, so it can cancel the wrong context. Practical exploitability in production is limited (both timers run with the same LoginTimeout, so they usually drift only by microseconds), but the data race is real and so is the multi-goroutine activation. Severity BUG is appropriate.