feat: complete edge transport streaming phases 0-3 (#45)
📝 Walkthrough
This PR adds and exposes route-level request/response buffering and compression policies (including a compression threshold, a content-type allowlist, and a streaming-response idle timeout) across the config model, compile, validation, and core routing layers; threads these policies through request-body preparation, forwarding, compression decisions, timeout/limit enforcement, and runtime shutdown logic; and adds extensive integration tests plus a dedicated plan document for edge no-buffering transport. It also bumps the Rust MSRV to 1.94.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (HTTP/1.1/2/3)
    participant Proxy as rginx Proxy
    participant Upstream as Upstream Server
    participant Config as Route Config / Runtime
    Client->>Proxy: send request (method, headers, body)
    Proxy->>Config: look up Route, read request_buffering/response_buffering/compression/streaming_timeout
    alt request_buffering == Off
        Proxy->>Upstream: stream request body upstream (wrapped in MaxBytesBody, limit enforced mid-stream)
        Upstream-->>Proxy: return chunked response
        alt response_buffering == Off
            Proxy->>Client: forward chunks immediately (preserving Transfer-Encoding)
        else
            Proxy->>Proxy: collect the response, then compress/forward
        end
    else
        Proxy->>Proxy: collect request body (replayable), failover possible
        Proxy->>Upstream: forward the complete request body
    end
    note right of Proxy: on an upstream request-body-limit error,<br/>walk the error chain and return 413
    Proxy->>Client: return final response (possibly gzip), applying the compression policy
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Pull request overview
This PR completes phases 0–3 of the “edge transport streaming” work by introducing route-level transport policies (buffering/compression/idle timeout), wiring them through the proxy + response pipeline for HTTP/1.1/2/3, and adding substantial regression coverage around streaming, reload/restart drain, upgrades, compression, and config validation.
Changes:
- Add route transport policy fields (request/response buffering, compression policy/settings, streaming response idle timeout) to the core/config model and validate/compile pipelines, and expose them via `rginx check`.
- Rework proxy request-body handling to decouple streaming body limits from buffering/replayability, and disable peer failover when request buffering is off.
- Extend tests for streaming responses across protocols, drain behavior during reload/restart, upgraded tunnels, compression policy behavior, and config validation.
Reviewed changes
Copilot reviewed 44 out of 44 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| scripts/nginx_compare/launch.py | Make nginx-compare warmup/benchmark more robust by aggregating request failures and warning instead of hard-failing. |
| docker/nginx-compare/Dockerfile | Bump Docker toolchain arg to Rust 1.94.1. |
| crates/rginx-runtime/src/bootstrap/shutdown.rs | Make background/admin/health/ocsp tasks optional and improve shutdown/abort join handling. |
| crates/rginx-runtime/src/bootstrap/mod.rs | Spawn admin/health/ocsp tasks into Option<JoinHandle<_>> to support optional task lifecycle. |
| crates/rginx-runtime/src/bootstrap/listeners.rs | Track incremental joins via joined_tasks to support partial join progress across shutdown paths. |
| crates/rginx-http/src/timeout/tests.rs | Add unit tests for the new MaxBytesBody behavior. |
| crates/rginx-http/src/timeout/mod.rs | Re-export MaxBytesBody and RequestBodyLimitError. |
| crates/rginx-http/src/timeout/body.rs | Implement MaxBytesBody and RequestBodyLimitError for streaming request-body size enforcement. |
| crates/rginx-http/src/state/tests.rs | Update state test snapshots for new route transport fields. |
| crates/rginx-http/src/router.rs | Update router tests to populate new route transport fields. |
| crates/rginx-http/src/proxy/tests/peer_selection.rs | Update tests for new peer_failover_enabled field in prepared requests. |
| crates/rginx-http/src/proxy/request_body.rs | Add request buffering policy handling, typed mid-stream body-limit enforcement, and failover gating. |
| crates/rginx-http/src/proxy/mod.rs | Plumb RouteBufferingPolicy and timeout body types into proxy module surface. |
| crates/rginx-http/src/proxy/forward/mod.rs | Wire request buffering + route streaming idle timeout through forward path and improve payload-too-large handling for streamed bodies. |
| crates/rginx-http/src/proxy/forward/error.rs | Add helper to identify “request body too large” errors (currently message-based). |
| crates/rginx-http/src/proxy/clients/mod.rs | Improve upstream error reporting by formatting full source chains. |
| crates/rginx-http/src/handler/tests.rs | Update handler tests for new compression options plumbing + new route fields. |
| crates/rginx-http/src/handler/dispatch.rs | Compute per-route compression/buffering options, pass buffering + idle-timeout to proxy forwarding, and thread compression options to response finalization. |
| crates/rginx-http/src/compression.rs | Add ResponseCompressionOptions and enforce compression policy vs response buffering/content-types/min-bytes. |
| crates/rginx-core/src/lib.rs | Re-export new route policy types. |
| crates/rginx-core/src/config/tests.rs | Update config tests for new route fields. |
| crates/rginx-core/src/config/route.rs | Extend Route with transport policy fields and add RouteBufferingPolicy/RouteCompressionPolicy enums. |
| crates/rginx-core/src/config.rs | Re-export new route policy types from core config module. |
| crates/rginx-config/src/validate/tests.rs | Add/extend validation tests for compression/idle-timeout/policy constraints and request-buffering limit requirements. |
| crates/rginx-config/src/validate/route.rs | Validate route-level transport policy combinations and value constraints. |
| crates/rginx-config/src/validate.rs | Add global validation that request_buffering=On requires request body limits (legacy or all explicit listeners). |
| crates/rginx-config/src/model.rs | Add config model enums/fields for route buffering/compression/idle-timeout. |
| crates/rginx-config/src/compile/tests.rs | Update compile tests to include new route transport policy fields and add a compile test for defaults/overrides. |
| crates/rginx-config/src/compile/route.rs | Compile route transport policy configs into core Route fields with normalization. |
| crates/rginx-app/tests/upgrade.rs | Add reload regression to ensure upgraded tunnels survive config reload. |
| crates/rginx-app/tests/streaming_download.rs | New end-to-end regression for streaming download delivery timing + route streaming idle timeout. |
| crates/rginx-app/tests/reload/streaming_flow.rs | New reload/restart regression ensuring inflight streaming responses drain correctly. |
| crates/rginx-app/tests/reload.rs | Register new streaming reload flow tests and expose wait_for_http_ready passthrough. |
| crates/rginx-app/tests/phase1.rs | Add end-to-end regression for request_buffering=Off streaming uploads + mid-stream body-limit enforcement. |
| crates/rginx-app/tests/http3.rs | Add HTTP/3 streaming-response regression (no full buffering before first chunk). |
| crates/rginx-app/tests/http2.rs | Add HTTP/2 streaming-response regression (no full buffering before first chunk). |
| crates/rginx-app/tests/failover.rs | Add regression that request_buffering=Off disables failover for idempotent requests. |
| crates/rginx-app/tests/compression.rs | Add regressions for buffering-off no-compress, force compression with buffering-on, and custom content-type allowlist behavior. |
| crates/rginx-app/tests/check.rs | Add regression ensuring rginx check prints route transport policy summary details. |
| crates/rginx-app/src/main.rs | Extend check output with aggregated route transport policy counts. |
| clippy.toml | Bump Clippy MSRV baseline to 1.94.1. |
| README.md | Document the new architecture plan and update stated Rust version to 1.94+. |
| Cargo.toml | Bump workspace rust-version to 1.94. |
| ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md | Add the staged delivery plan document for the edge transport streaming initiative. |
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/rginx-http/src/handler/dispatch.rs (1)

406-416: 🛠️ Refactor suggestion | 🟠 Major
Bundle the route-execution parameters of build_route_response into a context object.
The function currently takes 9 parameters, exceeding the configured Clippy threshold (8) and triggering the too_many_arguments lint. Consider gathering the route-execution parameters such as action, request_buffering, and streaming_response_idle_timeout into a single context struct; this unblocks CI and also improves maintainability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rginx-http/src/handler/dispatch.rs` around lines 406 - 416, wrap the route-execution parameters of build_route_response (at least action: RouteAction, request_buffering: rginx_core::RouteBufferingPolicy, streaming_response_idle_timeout: Option<Duration>, and any other route-execution parameters) in a new context struct (e.g., RouteExecutionContext), replace those scattered parameters with the single struct in the build_route_response signature, and construct the context at every call site; leave the remaining parameters (such as request, state, active, listener, client_address, request_id) unchanged, adjust type references (e.g., ListenerRequestContext<'_>, ActiveState) to match the new struct, and update the affected unit/integration tests and call chain so everything compiles and the too_many_arguments warning is resolved.

crates/rginx-http/src/proxy/request_body.rs (1)
197-207: ⚠️ Potential issue | 🟡 Minor
Fix the Clippy warning: too many function parameters.
The pipeline reports clippy::too_many_arguments: this function has 9 parameters, exceeding the default limit of 8. Consider bundling the related parameters into a config struct.
🔧 Suggested fix
Create a config struct that groups the related parameters:

```rust
struct PrepareRequestBodyConfig<'a> {
    upstream_name: &'a str,
    method: &'a Method,
    headers: &'a HeaderMap,
    body_timeout: Duration,
    max_replayable_request_body_bytes: usize,
    max_request_body_bytes: Option<usize>,
    request_buffering: RouteBufferingPolicy,
    grpc_web_mode: Option<&'a GrpcWebMode>,
}
```

Alternatively, add an #[allow(clippy::too_many_arguments)] attribute above the function to suppress the warning (if the current design is considered reasonable).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rginx-http/src/proxy/request_body.rs` around lines 197 - 207, The function prepare_request_body currently has 9 parameters and triggers clippy::too_many_arguments; fix by introducing a config struct (e.g., PrepareRequestBodyConfig<'a> containing upstream_name, method, headers, body_timeout, max_replayable_request_body_bytes, max_request_body_bytes, request_buffering, grpc_web_mode) and change prepare_request_body signature to accept this single config plus the body (HttpBody), updating all call sites to construct and pass PrepareRequestBodyConfig; alternatively, if refactoring is undesirable, add #[allow(clippy::too_many_arguments)] above the prepare_request_body function to suppress the lint.
🧹 Nitpick comments (5)
scripts/nginx_compare/launch.py (1)
321-329: Keep the exception handling consistent with benchmark_named_curl.
Static analysis flags the bare except Exception on line 323 (BLE001). benchmark_named_curl suppresses the same warning on line 366 with a # noqa: BLE001 comment. In addition, recording the first exception would help debug warmup-phase failures.
♻️ Suggested improvement

```diff
 def warmup_named_curl(
     *,
     url: str,
     flags: list[str],
     headers: list[str],
     body: bytes | None,
     requests: int,
     concurrency: int,
     timeout_secs: float,
     grpc_mode: str | None = None,
 ) -> None:
     warmup_concurrency = min(concurrency, requests, 16)
     failed_requests = 0
+    first_error: Exception | None = None
     with concurrent.futures.ThreadPoolExecutor(max_workers=warmup_concurrency) as executor:
         futures = [
             executor.submit(
                 run_curl_request,
                 url=url,
                 flags=flags,
                 headers=headers,
                 body=body,
                 timeout_secs=timeout_secs,
                 grpc_mode=grpc_mode,
             )
             for _ in range(requests)
         ]
         for future in concurrent.futures.as_completed(futures):
             try:
                 future.result()
-            except Exception:
+            except Exception as exc:  # noqa: BLE001 - warmup resilience
                 failed_requests += 1
+                if first_error is None:
+                    first_error = exc
     if failed_requests:
         print(
-            f"[nginx-compare] warning: warmup failed {failed_requests}/{requests} requests for {url}",
+            f"[nginx-compare] warning: warmup failed {failed_requests}/{requests} requests for {url}; "
+            f"first error: {first_error!r}",
             file=sys.stderr,
         )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@scripts/nginx_compare/launch.py` around lines 321 - 329, the except Exception: block in the warmup loop should match benchmark_named_curl's pattern: suppress the BLE001 warning with a trailing "# noqa: BLE001" and capture/log the first exception for debugging; update the try/except around future.result() to store the first caught exception (e.g., first_exc) and then include that exception in the stderr print for the warmup warning (referencing failed_requests, requests, and url) while keeping the noqa comment on the except line.

crates/rginx-runtime/src/bootstrap/listeners.rs (1)
418-432: The incremental-join logic is correct, but be aware of the behavior on error.
The current implementation returns early when joining a task fails; at that point the joined_tasks cursor stays at the index of the failed task and the tasks vector is not cleared. If the group is subsequently dropped, the un-joined JoinHandles are dropped too, so those tasks become detached and keep running in the background. If this is intended (remaining tasks may detach on failure), add a comment documenting the design; if all tasks must be reclaimed, consider continuing to join, or aborting, the remaining tasks in the error branch.
💡 Optional: add a comment documenting the intent

```diff
 /// Waits for all worker tasks in a listener group to finish.
+///
+/// On error, remaining task handles are left in the group. If the group is
+/// subsequently dropped, those tasks become detached and continue running.
 async fn join_listener_worker_group(group: &mut ListenerWorkerGroup) -> Result<()> {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rginx-runtime/src/bootstrap/listeners.rs` around lines 418 - 432, the current join_listener_worker_group function can return early on a join failure leaving group.tasks uncleared and joined_tasks at the failure index, which detaches remaining JoinHandles; update join_listener_worker_group (the loop that awaits (&mut group.tasks[worker_index]).await and the error mapping) so that on Err it either (A) continues attempting to join the remaining handles before returning the error, or (B) explicitly aborts/awaits and clears all remaining JoinHandles and resets group.joined_tasks to 0 before returning the error; if the early-return behavior is intentional, add a clear comment in join_listener_worker_group explaining that failing to join leaves remaining tasks detached by design.
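Option (A) above — keep joining the remaining handles and only then report the first failure — can be sketched with plain std threads. This is a standalone analogue under stated assumptions, not the crate's actual code: the real function joins tokio JoinHandles inside join_listener_worker_group.

```rust
use std::thread;

// Drain every handle before surfacing an error, so no worker is left
// detached; only the first failure is reported.
fn join_all_workers(
    handles: Vec<thread::JoinHandle<Result<(), String>>>,
) -> Result<(), String> {
    let mut first_error: Option<String> = None;
    for handle in handles {
        match handle.join() {
            Ok(Ok(())) => {}
            // The worker ran but reported its own error.
            Ok(Err(e)) => {
                first_error.get_or_insert(e);
            }
            // The worker panicked; join() itself failed.
            Err(_) => {
                first_error.get_or_insert_with(|| "worker panicked".to_string());
            }
        }
    }
    first_error.map_or(Ok(()), Err)
}

fn main() {
    let handles: Vec<_> = (0..3)
        .map(|i| {
            thread::spawn(move || {
                if i == 1 {
                    Err(format!("worker {i} failed"))
                } else {
                    Ok(())
                }
            })
        })
        .collect();
    // All three handles are joined even though worker 1 fails.
    assert_eq!(join_all_workers(handles), Err("worker 1 failed".to_string()));
    assert_eq!(join_all_workers(Vec::new()), Ok(()));
    println!("all workers drained");
}
```

The same shape carries over to async: loop over the handles, record the first error, and return it only after the loop completes.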
crates/rginx-http/src/timeout/tests.rs (1)

285-324: Consider adding a boundary case for "exactly at the limit".
The tests cover "below the limit" and "over the limit", but not yet the branch most prone to an off-by-one regression: a cumulative byte count exactly equal to the limit. Since the implementation relies on > and remaining-quota arithmetic, adding an == limit case would lock this in.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rginx-http/src/timeout/tests.rs` around lines 285 - 324, add a boundary test where cumulative bytes equal the limit to guard against off-by-one errors: create a new async test (e.g., max_bytes_body_allows_frames_exactly_at_limit) that constructs a StreamBody with frames whose combined length equals the MaxBytesBody limit, wrap it with Box::pin(MaxBytesBody::new(..., 8)), then poll_frame via poll_fn twice and assert both frames succeed (using .into_data() comparisons) just like in max_bytes_body_allows_frames_within_limit and max_bytes_body_errors_when_limit_is_exceeded to verify the implementation of MaxBytesBody::poll_frame correctly allows the exact == limit case.
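The off-by-one risk is easy to demonstrate with a standalone accumulator — an analogue of the MaxBytesBody logic described above, not the crate's type: the correct comparison rejects only when the running total strictly exceeds the limit, so a frame that lands exactly on the limit must pass.

```rust
/// Minimal byte-limit accumulator mirroring the `>` comparison the review
/// describes: consuming a frame fails only when the cumulative total
/// strictly exceeds the limit.
struct ByteLimit {
    limit: u64,
    seen: u64,
}

impl ByteLimit {
    fn new(limit: u64) -> Self {
        Self { limit, seen: 0 }
    }

    /// Returns Err once the running total goes past the limit.
    fn consume(&mut self, frame_len: u64) -> Result<(), String> {
        self.seen += frame_len;
        if self.seen > self.limit {
            Err(format!(
                "request body exceeded configured limit of {}",
                self.limit
            ))
        } else {
            Ok(())
        }
    }
}

fn main() {
    // Exactly at the limit: 5 + 3 == 8 must be accepted.
    let mut at_limit = ByteLimit::new(8);
    assert!(at_limit.consume(5).is_ok());
    assert!(at_limit.consume(3).is_ok());

    // One byte past the limit must be rejected.
    let mut over_limit = ByteLimit::new(8);
    assert!(over_limit.consume(5).is_ok());
    assert!(over_limit.consume(4).is_err());
}
```

An `== limit` test pins this branch down: if someone later changes `>` to `>=`, the exact-limit case fails first.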
crates/rginx-app/tests/streaming_download.rs (1)

101-245: Extract these chunked-HTTP test helpers into a shared support module.
The read_http_head_and_pending, read_http_chunk, and spawn_scripted_chunked_response_server logic already largely duplicates crates/rginx-app/tests/reload/streaming_flow.rs, and crates/rginx-app/tests/phase1.rs has a similar head-parsing implementation. Once a protocol detail changes, it is easy to fix only one copy and let the remaining integration tests drift apart.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rginx-app/tests/streaming_download.rs` around lines 101 - 245, extract the duplicated chunked-HTTP test helpers into a shared tests support module: move read_http_head_and_pending, read_http_chunk, spawn_scripted_chunked_response_server (and write_chunk/write_chunk_result/connect_http_client if needed) into a common tests/support/mod.rs and declare these functions pub, keeping their std::net/IO type signatures unchanged and exporting them to the other tests; then, in the test files that previously used these helpers (e.g., streaming_download.rs, reload/streaming_flow.rs, phase1.rs), replace them with use crate::support::<function_name> or mod support;, fix any path/visibility compile errors, and delete the duplicated implementations from the original files.
crates/rginx-app/tests/http2.rs (1)

223-227: The timing assertion may be flaky in CI environments.
The assertion requires the first chunk to arrive within 500ms, but under high CI load, TLS handshakes and TCP connection setup can cause intermittent failures.
Suggestion: if CI instability shows up later, consider loosening the threshold or adjusting the assertion strategy.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/rginx-app/tests/http2.rs` around lines 223 - 227, the test currently asserts started.elapsed() is below Duration::from_millis(500), which fails intermittently under high CI load; make the timeout looser or configurable and rewrite the assertion for stability, for example by replacing the hard-coded Duration::from_millis(500) with a constant or a value read from an environment variable (such as RGINX_TEST_TIMEOUT_MS) defaulting to something more generous (e.g., 1000ms), or by using retry/wait-loop assertion logic (checking that started.elapsed() stays within the allowed maximum, retrying at short intervals); the identifiers involved are started and Duration::from_millis(...) inside the test function containing the assertion.
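The env-var override suggested here boils down to a small parsing helper. RGINX_TEST_TIMEOUT_MS is the review's proposed name, not an existing variable; the fallback logic is separated from the env lookup so it is easy to test:

```rust
use std::time::Duration;

/// Turn an optional raw env value into a cutoff, falling back to a
/// generous 1000ms default when the value is unset or unparsable.
fn cutoff_from(raw: Option<&str>) -> Duration {
    let millis = raw
        .and_then(|value| value.parse::<u64>().ok())
        .unwrap_or(1000);
    Duration::from_millis(millis)
}

/// Resolve the first-chunk cutoff for timing-sensitive tests.
fn first_chunk_cutoff() -> Duration {
    cutoff_from(std::env::var("RGINX_TEST_TIMEOUT_MS").ok().as_deref())
}

fn main() {
    // Unset: fall back to the default.
    assert_eq!(cutoff_from(None), Duration::from_millis(1000));
    // Set to a valid value: use it.
    assert_eq!(cutoff_from(Some("2500")), Duration::from_millis(2500));
    // Garbage: fall back instead of panicking the test harness.
    assert_eq!(cutoff_from(Some("not-a-number")), Duration::from_millis(1000));
    println!("cutoff now: {:?}", first_chunk_cutoff());
}
```

A CI job can then export the variable to widen the cutoff without touching the test source.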
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md`:
- Around line 284-285: There is a typo in the document: change the phrase "响缩放在后面" to "压缩放在后面". Locate the sentence containing the original wording (e.g., the full sentence "响缩放在后面,是因为它天然依赖响应传输策略", i.e., "compression comes later because it naturally depends on the response transport policy") and replace it with "压缩放在后面,是因为它天然依赖响应传输策略"; also check the document for other identical or similar mistakes and keep the surrounding wording and punctuation consistent.
In `@Cargo.toml`:
- Line 15: rust-version = "1.94" in Cargo.toml is inconsistent with msrv = "1.94.1" in clippy.toml, which triggers a CI warning; unify the two versions (suggested: "1.94.1", or whichever MSRV the project settles on) by updating the rust-version field in Cargo.toml or the msrv field in clippy.toml so that rust-version and msrv match exactly (check and update the rust-version and msrv definitions where they appear in Cargo.toml and clippy.toml respectively).
In `@crates/rginx-app/tests/http3.rs`:
- Around line 648-710: The test leaves the HTTP/3 driver task and QUIC
connection open on early returns; wrap the main client interaction (the connect
+ send_request + recv_data sequence that uses endpoint, connection_handle,
send_request, request_stream and driver_task) into a let result = async { ...
}.await block (or similar try/finally pattern) so you can capture its Result,
then always run a teardown section that calls connection_handle.close(...),
endpoint.close(...), and ensures driver_task is finished (await with timeout
then driver_task.abort() if needed) before returning result; this centralizes
cleanup for both success and error paths and prevents leaked tasks/connections.
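The capture-result-then-teardown shape can be shown with a synchronous analogue; the real test would wrap the async client interaction in an async block and use tokio timeouts, and the closure contents here are illustrative:

```rust
/// Run `body`, then unconditionally run `teardown`, and only then surface
/// the body's result — cleanup fires on both the success and error paths.
fn run_with_teardown<T, E>(
    body: impl FnOnce() -> Result<T, E>,
    teardown: impl FnOnce(),
) -> Result<T, E> {
    let result = body();
    teardown(); // e.g. connection.close(), endpoint.close(), driver abort
    result
}

fn main() {
    let mut closed = false;
    let ok: Result<u32, String> = run_with_teardown(|| Ok(42), || closed = true);
    assert_eq!(ok, Ok(42));
    assert!(closed);

    let mut closed_on_err = false;
    let err: Result<u32, String> = run_with_teardown(
        || Err("handshake failed".to_string()),
        || closed_on_err = true,
    );
    assert!(err.is_err());
    // Cleanup ran despite the early failure — no leaked connection.
    assert!(closed_on_err);
}
```

In the async version, `body` becomes `async { ... }.await` captured into a `result` binding and the teardown section runs before `result` is returned.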
In `@crates/rginx-config/src/compile/route.rs`:
- Around line 88-95: The code silently casts compression_min_bytes from u64 to
usize using `as usize` which can truncate on 32-bit targets; replace the
`.map(|value| value as usize)` with a fallible conversion using
`usize::try_from(value)` and propagate a configuration error on overflow (e.g.,
return Err with a clear message) so compilation of the route fails when
`compression_min_bytes` cannot fit into `usize`; update the surrounding function
that constructs the route config (the code block that sets
`compression_min_bytes` and returns the compiled route) to handle the Result
from `try_from` and return the configuration error accordingly.
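The suggested fallible conversion reads roughly like this — the function name and error type are illustrative, not the crate's actual API:

```rust
/// Convert a configured u64 byte threshold into usize, failing loudly
/// instead of silently truncating on 32-bit targets.
fn compression_min_bytes_to_usize(value: u64) -> Result<usize, String> {
    usize::try_from(value).map_err(|_| {
        format!("compression_min_bytes {value} does not fit into usize on this target")
    })
}

fn main() {
    // Typical thresholds fit everywhere.
    assert_eq!(compression_min_bytes_to_usize(1024), Ok(1024));

    // On a 32-bit target, a value above u32::MAX would be the overflow
    // this guard exists to catch; on 64-bit it still fits.
    #[cfg(target_pointer_width = "32")]
    assert!(compression_min_bytes_to_usize(u64::MAX).is_err());
    #[cfg(target_pointer_width = "64")]
    assert_eq!(compression_min_bytes_to_usize(u64::MAX), Ok(usize::MAX));
}
```

Propagating the Err as a compile-time configuration error means a bad threshold is rejected at `rginx check` time rather than truncated at runtime.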
In `@crates/rginx-http/src/proxy/forward/error.rs`:
- Around line 29-33: The downstream_request_body_too_large_error function
currently detects oversized bodies by string-matching candidate.to_string(),
which is brittle; change it to first walk the error chain and check concrete
error types (e.g., reuse or call the existing limit/error-typed helper function
you have for request-size or limit errors) to determine a 413 condition, and
only if no typed match is found fall back to the current contains("request body
exceeded configured limit of") string check; update the loop in
downstream_request_body_too_large_error to attempt type-based matching before
the string-based fallback.
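Typed-first matching with a string fallback can be sketched in isolation. BodyLimitExceeded below is a stand-in (the crate's RequestBodyLimitError would play this role), and the fallback string mirrors the message-based check the comment describes:

```rust
use std::error::Error;
use std::fmt;

/// Stand-in for a typed body-limit error.
#[derive(Debug)]
struct BodyLimitExceeded {
    limit: usize,
}

impl fmt::Display for BodyLimitExceeded {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "request body exceeded configured limit of {}", self.limit)
    }
}

impl Error for BodyLimitExceeded {}

/// Walk the source chain twice: prefer a typed downcast, and only fall
/// back to the brittle message match when no typed error is found.
fn is_body_too_large(err: &(dyn Error + 'static)) -> bool {
    // Pass 1: typed matching over the whole source chain.
    let mut current: Option<&(dyn Error + 'static)> = Some(err);
    while let Some(candidate) = current {
        if candidate.downcast_ref::<BodyLimitExceeded>().is_some() {
            return true;
        }
        current = candidate.source();
    }
    // Pass 2: message-based fallback, kept only for foreign error types.
    let mut current: Option<&(dyn Error + 'static)> = Some(err);
    while let Some(candidate) = current {
        if candidate
            .to_string()
            .contains("request body exceeded configured limit of")
        {
            return true;
        }
        current = candidate.source();
    }
    false
}

fn main() {
    let typed = BodyLimitExceeded { limit: 1024 };
    assert!(is_body_too_large(&typed));

    let unrelated = std::io::Error::other("connection reset");
    assert!(!is_body_too_large(&unrelated));
}
```

The typed pass keeps the 413 decision stable even if the error message wording changes later.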
In `@crates/rginx-http/src/proxy/request_body.rs`:
- Around line 211-215: Merge the nested ifs into a single conditional to satisfy
clippy::collapsible_if: replace the “if let Some(max_request_body_bytes) { if
body.size_hint().lower() > max_request_body_bytes as u64 { ... } }” pattern with
a single combined condition that destructures the Option and checks the size in
one line (e.g. an if-let with an additional boolean guard using
body.size_hint().lower()), keeping the same error return
PrepareRequestError::payload_too_large(max_request_body_bytes) and using
body.size_hint().lower() for the comparison.
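One way to collapse the nested check without let-chains is Option::is_some_and; a toy version of the guard (the function wrapper is illustrative — in the real code this stays an inline `if`):

```rust
/// Single-expression form of the nested
/// `if let Some(max) = limit { if lower > max { ... } }` check:
/// returns true when a known lower bound already exceeds the limit.
fn body_exceeds_limit(size_hint_lower: u64, max_request_body_bytes: Option<usize>) -> bool {
    // usize -> u64 is lossless on all supported targets.
    max_request_body_bytes.is_some_and(|max| size_hint_lower > max as u64)
}

fn main() {
    assert!(body_exceeds_limit(2048, Some(1024))); // over the limit
    assert!(!body_exceeds_limit(512, Some(1024))); // under the limit
    assert!(!body_exceeds_limit(u64::MAX, None)); // no limit configured
}
```

The error return (`PrepareRequestError::payload_too_large(...)`) then lives in a single branch, which satisfies clippy::collapsible_if.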
In `@crates/rginx-http/src/timeout/body.rs`:
- Around line 281-288: The size_hint() implementation only clamps hint.upper()
which can violate SizeHint's invariant if hint.lower() exceeds the remaining
bytes; update size_hint() (the method on the wrapper that calls
self.inner.size_hint()) to compute remaining =
self.max_request_body_bytes.saturating_sub(self.bytes_read) as u64 and then
clamp both hint.lower() and hint.upper() to at most remaining (use
hint.set_lower(min(original_lower, remaining)) and
hint.set_upper(min(original_upper, remaining))), so lower <= upper is preserved
and set_upper() cannot panic.
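The invariant (lower ≤ upper) and the clamping of both bounds can be checked with plain integers — SizeHint is modeled here as a (lower, Option<upper>) pair rather than the http_body type:

```rust
/// Clamp both ends of a (lower, upper) size hint to the remaining byte
/// budget, so lower <= upper always holds afterwards.
fn clamp_hint(lower: u64, upper: Option<u64>, remaining: u64) -> (u64, u64) {
    let clamped_lower = lower.min(remaining);
    // An unknown upper bound becomes the remaining budget itself.
    let clamped_upper = upper.map_or(remaining, |u| u.min(remaining));
    (clamped_lower, clamped_upper)
}

fn main() {
    // The problematic case from the review: lower (10) exceeds remaining (4).
    // Clamping only the upper bound would yield lower=10 > upper=4.
    assert_eq!(clamp_hint(10, Some(20), 4), (4, 4));

    // Hint already within budget: untouched.
    assert_eq!(clamp_hint(2, Some(3), 8), (2, 3));

    // Unknown upper bound: capped at the remaining budget.
    assert_eq!(clamp_hint(0, None, 8), (0, 8));

    // The invariant holds even with an exhausted budget.
    let (lo, hi) = clamp_hint(100, Some(200), 0);
    assert!(lo <= hi);
}
```

In the real size_hint() impl, `remaining` would be `max_request_body_bytes.saturating_sub(bytes_read) as u64`, applied to both set_lower and set_upper.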
In `@crates/rginx-runtime/src/bootstrap/shutdown.rs`:
- Around line 90-97: In join_admin_task_after_abort, the current timeout path
only inspects the JoinError and can swallow the admin task's own io::Error;
change the await handling to inspect both layers (the JoinHandle::await ->
Result<std::io::Result<()>, JoinError>): if the join returns Ok(Err(e)) log the
inner io::Error (e) similarly to the normal path, and if the join returns
Err(join_err) and !join_err.is_cancelled() log the join_err; update the
match/if-let on task.await in join_admin_task_after_abort to cover both
Ok(Err(e)) and Err(join_err) cases and emit appropriate tracing::warn! messages.
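The two-layer result can be modeled without tokio; JoinFailure below is a hypothetical stand-in for tokio's JoinError, with the cancelled case kept quiet as the comment suggests:

```rust
use std::io;

/// Stand-in for tokio's JoinError: the task was cancelled or it panicked.
#[derive(Debug)]
enum JoinFailure {
    Cancelled,
    Panicked,
}

/// Classify the nested result of awaiting a task that itself returns
/// io::Result<()>. Produces a log-worthy message for every failure mode,
/// and None for clean exits or expected cancellation after an abort.
fn classify(outcome: Result<io::Result<()>, JoinFailure>) -> Option<String> {
    match outcome {
        // Task finished and succeeded: nothing to log.
        Ok(Ok(())) => None,
        // Task finished but reported its own I/O error: surface it.
        Ok(Err(e)) => Some(format!("admin task failed: {e}")),
        // Cancellation is the expected result of an abort: stay quiet.
        Err(JoinFailure::Cancelled) => None,
        // Anything else from the join layer (e.g. a panic) is log-worthy.
        Err(failure) => Some(format!("admin task join failed: {failure:?}")),
    }
}

fn main() {
    assert_eq!(classify(Ok(Ok(()))), None);
    assert_eq!(classify(Err(JoinFailure::Cancelled)), None);
    let msg = classify(Ok(Err(io::Error::other("bind failed")))).unwrap();
    assert!(msg.contains("bind failed"));
    let msg = classify(Err(JoinFailure::Panicked)).unwrap();
    assert!(msg.contains("Panicked"));
}
```

In join_admin_task_after_abort, each Some(...) branch would map to a tracing::warn! call, covering both the Ok(Err(e)) and non-cancelled Err(join_err) layers.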
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 3ee3439d-1468-486d-bcfb-cf1001135f55
📒 Files selected for processing (44)
ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md, Cargo.toml, README.md, clippy.toml, crates/rginx-app/src/main.rs, crates/rginx-app/tests/check.rs, crates/rginx-app/tests/compression.rs, crates/rginx-app/tests/failover.rs, crates/rginx-app/tests/http2.rs, crates/rginx-app/tests/http3.rs, crates/rginx-app/tests/phase1.rs, crates/rginx-app/tests/reload.rs, crates/rginx-app/tests/reload/streaming_flow.rs, crates/rginx-app/tests/streaming_download.rs, crates/rginx-app/tests/upgrade.rs, crates/rginx-config/src/compile/route.rs, crates/rginx-config/src/compile/tests.rs, crates/rginx-config/src/model.rs, crates/rginx-config/src/validate.rs, crates/rginx-config/src/validate/route.rs, crates/rginx-config/src/validate/tests.rs, crates/rginx-core/src/config.rs, crates/rginx-core/src/config/route.rs, crates/rginx-core/src/config/tests.rs, crates/rginx-core/src/lib.rs, crates/rginx-http/src/compression.rs, crates/rginx-http/src/handler/dispatch.rs, crates/rginx-http/src/handler/tests.rs, crates/rginx-http/src/proxy/clients/mod.rs, crates/rginx-http/src/proxy/forward/error.rs, crates/rginx-http/src/proxy/forward/mod.rs, crates/rginx-http/src/proxy/mod.rs, crates/rginx-http/src/proxy/request_body.rs, crates/rginx-http/src/proxy/tests/peer_selection.rs, crates/rginx-http/src/router.rs, crates/rginx-http/src/state/tests.rs, crates/rginx-http/src/timeout/body.rs, crates/rginx-http/src/timeout/mod.rs, crates/rginx-http/src/timeout/tests.rs, crates/rginx-runtime/src/bootstrap/listeners.rs, crates/rginx-runtime/src/bootstrap/mod.rs, crates/rginx-runtime/src/bootstrap/shutdown.rs, docker/nginx-compare/Dockerfile, scripts/nginx_compare/launch.py
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@crates/rginx-app/tests/http2.rs`:
- Around line 172-173: The test's "first-chunk timing" is measuring from before
the request (so it includes TLS/connection overhead) and uses a 900ms sleep
that's too close to the 1s cutoff, making CI flaky; change the timing to start
when the response headers are received (i.e., begin measuring after the code
that reads/observes the response HEADERS) and then measure the interval until
write_chunk/first data chunk arrives, and increase the artificial delay for the
second chunk (the thread::sleep call before write_chunk(&mut stream,
b"second\n")) to a clearly larger value than the cutoff (e.g., >1500ms) so the
assertion about "early first chunk vs buffering" is based solely on
header-to-first-data timing and is robust; update any assertions that compare
against the cutoff to use the new measured interval instead of the request-start
timestamp.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 45d94c1b-73dc-43b0-aa6f-7b1739a1fc20
📒 Files selected for processing (24)
ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md, Cargo.toml, README.md, crates/rginx-app/src/main.rs, crates/rginx-app/tests/http2.rs, crates/rginx-app/tests/http3.rs, crates/rginx-app/tests/reload/streaming_flow.rs, crates/rginx-app/tests/streaming_download.rs, crates/rginx-app/tests/support/mod.rs, crates/rginx-config/src/compile/route.rs, crates/rginx-config/src/load.rs, crates/rginx-http/src/compression.rs, crates/rginx-http/src/handler/dispatch.rs, crates/rginx-http/src/handler/grpc.rs, crates/rginx-http/src/proxy/forward/error.rs, crates/rginx-http/src/proxy/forward/mod.rs, crates/rginx-http/src/proxy/grpc_web/codec.rs, crates/rginx-http/src/proxy/request_body.rs, crates/rginx-http/src/timeout/body.rs, crates/rginx-http/src/timeout/tests.rs, crates/rginx-http/src/tls/ocsp/mod.rs, crates/rginx-runtime/src/bootstrap/listeners.rs, crates/rginx-runtime/src/bootstrap/shutdown.rs, scripts/nginx_compare/launch.py
✅ Files skipped from review due to trivial changes (5)
- crates/rginx-config/src/load.rs
- crates/rginx-http/src/tls/ocsp/mod.rs
- crates/rginx-http/src/handler/grpc.rs
- crates/rginx-http/src/proxy/grpc_web/codec.rs
- scripts/nginx_compare/launch.py
🚧 Files skipped from review as they are similar to previous changes (8)
- README.md
- crates/rginx-http/src/timeout/tests.rs
- crates/rginx-config/src/compile/route.rs
- crates/rginx-http/src/proxy/forward/error.rs
- Cargo.toml
- crates/rginx-http/src/compression.rs
- crates/rginx-http/src/proxy/request_body.rs
- crates/rginx-app/tests/streaming_download.rs