fix(import): remove rate limit from chunk endpoints — closes #113 regression #119
fabiodalez-dev wants to merge 2 commits into main from
Conversation
The v0.5.9.6 fix (PR #109) removed the rate limit from the progress GET endpoints, but left `RateLimitMiddleware(100, 60)` on the chunk POST routes. For fast imports without scraping (local TSV file → DB inserts only), each 10-row chunk can complete in under 370 ms, yielding more than 100 req/min — enough to trigger a 429 mid-import. The chunk loop is inherently rate-limited by its own sequential design (the JS client awaits each response before sending the next) and is bounded by session + CSRF + a session-scoped `import_id`. The `/prepare` POST endpoints (which validate and persist the file) keep their existing limits — those are the actual abuse vectors, not the chunk loop. This reopens and closes #113: the 429 persisted in v0.5.9.6 for the very reason the original reporter described (>500 books on a fast server).
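The sequential design mentioned above can be sketched roughly as follows. This is a minimal illustration, not the project's actual client code; `runImport` and `sendChunk` are hypothetical names, and the transport is injected so the shape of the loop is visible on its own:

```javascript
// Hypothetical sketch of the client-side import chunk loop (the real
// client's code is not part of this PR; names and shapes are assumptions).
// Because each chunk request is awaited before the next one is sent,
// at most one request is ever in flight: the loop self-rate-limits,
// with its request rate bounded by server processing time per chunk.
async function runImport(sendChunk, totalRows, chunkSize = 10) {
  let requests = 0;
  for (let offset = 0; offset < totalRows; offset += chunkSize) {
    // Blocks until the server finishes processing this chunk.
    await sendChunk(offset, chunkSize);
    requests += 1;
  }
  return requests;
}

// In the real client, sendChunk would POST to /admin/libri/import/chunk
// with the session-scoped import_id and the CSRF token.
```

With no concurrency, the only way to exceed a fixed per-minute limit is for the server itself to answer faster than the limit allows, which is exactly the fast-server case this PR fixes.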
📝 Walkthrough
Removed the rate limiting middleware from the import chunk endpoints.

Changes: removal of rate limiting from import endpoints.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
🚥 Pre-merge checks: ✅ 5 passed
@coderabbitai full review

✅ Actions performed: Full review triggered.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@app/Routes/web.php`:
- Around lines 1271-1276: The current comment says the abuse vector is the
POST `/prepare` endpoint and implicitly suggests a RateLimitMiddleware, but in
fact neither `/admin/libri/import/upload` nor
`/admin/libri/import/librarything/prepare` uses RateLimitMiddleware; correct
the comment to remove the reference to a nonexistent protection and instead
document the actual defenses: `AdminAuthMiddleware` + `CsrfMiddleware` on all
import endpoints, keeping the note on why no rate limit is applied to the
sequential chunks (session + CSRF gating and sequential chunk processing).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: 4774bc3c-c376-4a0b-8771-19d1813d2ef8
📒 Files selected for processing (1)
app/Routes/web.php
Summary
- Removed `RateLimitMiddleware(100, 60)` from `POST /admin/libri/import/chunk` (CSV)
- Removed `RateLimitMiddleware(100, 60)` from `POST /admin/libri/import/librarything/chunk` (LibraryThing)

The v0.5.9.6 fix (PR #109) removed the rate limit from the GET progress endpoints but left it on the chunk POST endpoints. On a fast server (no network scraping, local TSV → DB inserts), each 10-row chunk completes in <370 ms, yielding >100 chunk requests/min — enough to trigger the 429 the reporter @ctariel still observed after v0.5.9.6.
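A quick back-of-the-envelope check of the >100 requests/min claim, using only the ~370 ms per-chunk figure stated in this description (the exact latency will vary per server):

```javascript
// Sequential loop: one request per chunk, each completing in ~370 ms
// on a fast server (figure from the PR description; an assumption here).
const chunkMs = 370;
const reqPerMin = Math.floor(60_000 / chunkMs);
console.log(reqPerMin); // 162, comfortably above the old 100 req / 60 s limit
```

Any per-chunk latency below 600 ms pushes the sequential loop past the 100-per-minute ceiling, so removing the limit (rather than tuning it) is the robust fix.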
The chunk loop is sequential by design (the JS `while` loop awaits each response), scoped to a session-bound `import_id`, and already gated by `AdminAuthMiddleware` + `CsrfMiddleware`. The `POST /prepare` endpoints (which validate and persist the uploaded file) keep their existing limits.

Test plan
Closes #113 (regression that survived the v0.5.9.6 fix)
Summary by CodeRabbit
Release notes