
feat: complete edge transport streaming phases 0-3#45

Merged
vansour merged 4 commits into main from feat/edge-transport-streaming-phases
Apr 15, 2026

Conversation

Owner

@vansour vansour commented Apr 14, 2026

Summary

  • complete edge transport streaming phases 0-3 for request/response streaming, buffering, compression, and timeout handling
  • wire route-level streaming response idle timeout through the proxy forwarding path and keep streaming responses unbuffered across HTTP/1.1, HTTP/2, and HTTP/3 paths
  • expand transport-focused regression coverage for streaming downloads, reload/restart drain behavior, upgraded tunnels, compression, and config validation
  • add the staged delivery plan document for edge transport streaming work

Testing

  • cargo fmt
  • cargo test -p rginx-http
  • cargo test -p rginx --test streaming_download --test reload --test upgrade --test http2 --test http3
  • cargo test -p rginx --no-run

Copilot AI review requested due to automatic review settings April 14, 2026 15:47
@chatgpt-codex-connector

You have reached your Codex usage limits for code reviews. You can see your limits in the Codex usage dashboard.

@coderabbitai

coderabbitai bot commented Apr 14, 2026

Warning

Rate limit exceeded

@vansour has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 23 minutes and 45 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 23 minutes and 45 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 870e7c28-b1e0-4f66-b987-bb7b44eaa8ce

📥 Commits

Reviewing files that changed from the base of the PR and between 5083a27 and 76f8350.

📒 Files selected for processing (6)
  • crates/rginx-app/tests/http2.rs
  • crates/rginx-app/tests/reload/streaming_flow.rs
  • crates/rginx-app/tests/streaming_download.rs
  • crates/rginx-app/tests/support/mod.rs
  • crates/rginx-http/src/proxy/forward/error.rs
  • crates/rginx-http/src/proxy/request_body.rs
📝 Walkthrough

Walkthrough

This PR adds and exposes route-level request/response buffering and compression policies (including a compression size threshold and content-type allowlist, plus a streaming-response idle timeout) across the config model, compilation, validation, and core routing. These policies are threaded through request-body preparation, forwarding, compression decisions, timeout/limit enforcement, and runtime shutdown logic, with extensive new integration tests and a dedicated planning document for unbuffered edge transport. The Rust MSRV is also raised to 1.94.

Changes

Cohort / File(s) Summary
Architecture plan document
ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md
Add a dedicated planning document defining the unbuffered edge-transport goals, staged plan, and configuration recommendations (documentation only).
Version and toolchain
Cargo.toml, clippy.toml, README.md, docker/nginx-compare/Dockerfile
Raise the repository's declared minimum Rust version / Clippy MSRV / Docker build argument from 1.85 to 1.94.1 and update the README accordingly.
Config model and validation
crates/rginx-config/src/model.rs, .../validate.rs, .../validate/route.rs, .../validate/tests.rs
Add optional RouteBufferingPolicyConfig/RouteCompressionPolicyConfig fields to LocationConfig, plus cross-field validation (non-zero values, force being incompatible with buffering off, request-body limit requirements on listeners, etc.) and related tests.
Config compilation
crates/rginx-config/src/compile/route.rs, .../compile/tests.rs
Map the config model onto the core route enums (compile helpers), normalize content types (lowercase/trim), convert the compression threshold, and extend test coverage.
Core route type extensions
crates/rginx-core/src/config/route.rs, .../config.rs, .../lib.rs, .../config/tests.rs
Add request_buffering, response_buffering, compression, compression_min_bytes, compression_content_types, and streaming_response_idle_timeout fields to Route; add and export the RouteBufferingPolicy and RouteCompressionPolicy enums.
Compression decision and processing pipeline
crates/rginx-http/src/compression.rs, .../handler/dispatch.rs, .../handler/tests.rs
Introduce ResponseCompressionOptions, make the compression decision depend on the route's buffering/compression/allowlist/min-bytes settings, change the finalize path to pass the options through, and update tests.
Request-body preparation and forwarding
crates/rginx-http/src/proxy/request_body.rs, .../proxy/forward/mod.rs, .../proxy/forward/error.rs, .../proxy/clients/mod.rs, .../proxy/mod.rs
Choose the streaming or replayable path based on request_buffering, decouple failover availability from buffering, extract "request body limit exceeded" information from the downstream error chain and return 413 directly, and improve upstream error-chain formatting.
Streaming request-body size limits
crates/rginx-http/src/timeout/body.rs, .../timeout/mod.rs, .../timeout/tests.rs
Add MaxBytesBody and RequestBodyLimitError to detect and report request-body overruns mid-stream, with related unit tests.
Runtime tasks and graceful shutdown
crates/rginx-runtime/src/bootstrap/listeners.rs, .../mod.rs, .../shutdown.rs
Wrap background task handles in Option<JoinHandle>, join worker tasks in cursor order, and improve the presence/abort/join logic of graceful shutdown along with its tests.
App commands and checks
crates/rginx-app/src/main.rs, crates/rginx-app/tests/check.rs
Add route transport-policy statistics to the rginx check summary output, with a check test verifying the summary.
End-to-end and integration tests
crates/rginx-app/tests/* (many files, e.g. compression.rs, http2.rs, http3.rs, failover.rs, phase1.rs, streaming_download.rs, upgrade.rs, reload/*)
Add/update extensive integration coverage: response compression behavior, HTTP/2/3 streaming forwarding, request limits and failover semantics for streaming uploads, long-lived connection/upgrade behavior across reload/restart, and idle timeouts.
Test support utilities
crates/rginx-app/tests/support/mod.rs
Add TCP/HTTP helpers (connect, read headers, parse chunked encoding, scripted chunked-response server) shared by multiple streaming tests.
Small fixes / style and helpers
Various files (e.g. crates/rginx-http/src/handler/grpc.rs, .../proxy/clients/mod.rs)
Replace some expressions with .is_multiple_of(...), improve error-chain formatting, and apply small refactors and test adaptations.
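The buffering decision threaded through request-body preparation can be sketched as follows. RouteBufferingPolicy mirrors the enum this PR adds; the rest of the names and shapes are hypothetical, not the actual rginx code:

```rust
// Hypothetical sketch of the route-level request-buffering decision.
// When buffering is off, the body streams straight through and peer
// failover is disabled, because an already-sent body cannot be replayed.
#[derive(Clone, Copy)]
enum RouteBufferingPolicy {
    On,
    Off,
}

struct PreparedBody {
    streaming: bool,
    // Failover requires a replayable (fully buffered) body.
    failover_enabled: bool,
}

fn prepare_request_body(policy: RouteBufferingPolicy) -> PreparedBody {
    match policy {
        // Buffer and keep the body replayable so a second peer can be tried.
        RouteBufferingPolicy::On => PreparedBody {
            streaming: false,
            failover_enabled: true,
        },
        // Stream straight through; bytes cannot be replayed once forwarded.
        RouteBufferingPolicy::Off => PreparedBody {
            streaming: true,
            failover_enabled: false,
        },
    }
}

fn main() {
    let streamed = prepare_request_body(RouteBufferingPolicy::Off);
    assert!(streamed.streaming && !streamed.failover_enabled);
}
```

This matches the summary above: failover availability is decoupled from buffering only in the sense that the limit check no longer requires buffering, while failover itself still needs a replayable body.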

Sequence Diagram(s)

sequenceDiagram
  participant Client as Client (HTTP/1.1/2/3)
  participant Proxy as rginx Proxy
  participant Upstream as Upstream Server
  participant Config as Route Config / Runtime

  Client->>Proxy: Send request (method, headers, body)
  Proxy->>Config: Look up Route; read request_buffering/response_buffering/compression/streaming_timeout
  alt request_buffering == Off
    Proxy->>Upstream: Forward request body as a stream (wrapped in MaxBytesBody, limit enforced mid-stream)
    Upstream-->>Proxy: Return chunked response
    alt response_buffering == Off
      Proxy->>Client: Forward chunks immediately (preserving Transfer-Encoding)
    else
      Proxy->>Proxy: Collect the response, then compress/forward
    end
  else
    Proxy->>Proxy: Collect the request body (replayable); failover possible
    Proxy->>Upstream: Forward the complete request body
  end
  note right of Proxy: If the upstream returns a request-body-limit error,\nparse the error chain and return 413
  Proxy->>Client: Return the final response (possibly gzip), applying the compression policy

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

🐰 At the edge I lightly bound, streaming beats and compression sound,
First fragments arrive, limits known on the spot,
Reloads don't break us, reconnects fear not,
Docs pave the way, with tests at my side,
A happy hop — transport's a smoother ride 🥕🐇

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 44.35% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The title accurately reflects the main changes: completing phases 0-3 of the streaming work, covering request/response streaming, buffering, compression, and timeout handling.
Description check ✅ Passed The description clearly covers all major aspects of the change: the completed streaming phases, route-level timeout configuration, expanded test coverage, and the plan document, consistent with the actual changes.




Copilot AI left a comment


Pull request overview

This PR completes phases 0–3 of the “edge transport streaming” work by introducing route-level transport policies (buffering/compression/idle timeout), wiring them through the proxy + response pipeline for HTTP/1.1/2/3, and adding substantial regression coverage around streaming, reload/restart drain, upgrades, compression, and config validation.

Changes:

  • Add route transport policy fields (request/response buffering, compression policy/settings, streaming response idle timeout) to core/config model + validate/compile pipelines and expose via rginx check.
  • Rework proxy request-body handling to decouple streaming body limits from buffering/replayability and to disable peer failover when request buffering is off.
  • Extend tests for streaming responses across protocols, drain behavior during reload/restart, upgraded tunnels, compression policy behavior, and config validation.

Reviewed changes

Copilot reviewed 44 out of 44 changed files in this pull request and generated 4 comments.

File Description
scripts/nginx_compare/launch.py Make nginx-compare warmup/benchmark more robust by aggregating request failures and warning instead of hard-failing.
docker/nginx-compare/Dockerfile Bump Docker toolchain arg to Rust 1.94.1.
crates/rginx-runtime/src/bootstrap/shutdown.rs Make background/admin/health/ocsp tasks optional and improve shutdown/abort join handling.
crates/rginx-runtime/src/bootstrap/mod.rs Spawn admin/health/ocsp tasks into Option<JoinHandle<_>> to support optional task lifecycle.
crates/rginx-runtime/src/bootstrap/listeners.rs Track incremental joins via joined_tasks to support partial join progress across shutdown paths.
crates/rginx-http/src/timeout/tests.rs Add unit tests for the new MaxBytesBody behavior.
crates/rginx-http/src/timeout/mod.rs Re-export MaxBytesBody and RequestBodyLimitError.
crates/rginx-http/src/timeout/body.rs Implement MaxBytesBody and RequestBodyLimitError for streaming request-body size enforcement.
crates/rginx-http/src/state/tests.rs Update state test snapshots for new route transport fields.
crates/rginx-http/src/router.rs Update router tests to populate new route transport fields.
crates/rginx-http/src/proxy/tests/peer_selection.rs Update tests for new peer_failover_enabled field in prepared requests.
crates/rginx-http/src/proxy/request_body.rs Add request buffering policy handling, typed mid-stream body-limit enforcement, and failover gating.
crates/rginx-http/src/proxy/mod.rs Plumb RouteBufferingPolicy and timeout body types into proxy module surface.
crates/rginx-http/src/proxy/forward/mod.rs Wire request buffering + route streaming idle timeout through forward path and improve payload-too-large handling for streamed bodies.
crates/rginx-http/src/proxy/forward/error.rs Add helper to identify “request body too large” errors (currently message-based).
crates/rginx-http/src/proxy/clients/mod.rs Improve upstream error reporting by formatting full source chains.
crates/rginx-http/src/handler/tests.rs Update handler tests for new compression options plumbing + new route fields.
crates/rginx-http/src/handler/dispatch.rs Compute per-route compression/buffering options, pass buffering + idle-timeout to proxy forwarding, and thread compression options to response finalization.
crates/rginx-http/src/compression.rs Add ResponseCompressionOptions and enforce compression policy vs response buffering/content-types/min-bytes.
crates/rginx-core/src/lib.rs Re-export new route policy types.
crates/rginx-core/src/config/tests.rs Update config tests for new route fields.
crates/rginx-core/src/config/route.rs Extend Route with transport policy fields and add RouteBufferingPolicy/RouteCompressionPolicy enums.
crates/rginx-core/src/config.rs Re-export new route policy types from core config module.
crates/rginx-config/src/validate/tests.rs Add/extend validation tests for compression/idle-timeout/policy constraints and request-buffering limit requirements.
crates/rginx-config/src/validate/route.rs Validate route-level transport policy combinations and value constraints.
crates/rginx-config/src/validate.rs Add global validation that request_buffering=On requires request body limits (legacy or all explicit listeners).
crates/rginx-config/src/model.rs Add config model enums/fields for route buffering/compression/idle-timeout.
crates/rginx-config/src/compile/tests.rs Update compile tests to include new route transport policy fields and add a compile test for defaults/overrides.
crates/rginx-config/src/compile/route.rs Compile route transport policy configs into core Route fields with normalization.
crates/rginx-app/tests/upgrade.rs Add reload regression to ensure upgraded tunnels survive config reload.
crates/rginx-app/tests/streaming_download.rs New end-to-end regression for streaming download delivery timing + route streaming idle timeout.
crates/rginx-app/tests/reload/streaming_flow.rs New reload/restart regression ensuring inflight streaming responses drain correctly.
crates/rginx-app/tests/reload.rs Register new streaming reload flow tests and expose wait_for_http_ready passthrough.
crates/rginx-app/tests/phase1.rs Add end-to-end regression for request_buffering=Off streaming uploads + mid-stream body-limit enforcement.
crates/rginx-app/tests/http3.rs Add HTTP/3 streaming-response regression (no full buffering before first chunk).
crates/rginx-app/tests/http2.rs Add HTTP/2 streaming-response regression (no full buffering before first chunk).
crates/rginx-app/tests/failover.rs Add regression that request_buffering=Off disables failover for idempotent requests.
crates/rginx-app/tests/compression.rs Add regressions for buffering-off no-compress, force compression with buffering-on, and custom content-type allowlist behavior.
crates/rginx-app/tests/check.rs Add regression ensuring rginx check prints route transport policy summary details.
crates/rginx-app/src/main.rs Extend check output with aggregated route transport policy counts.
clippy.toml Bump Clippy MSRV baseline to 1.94.1.
README.md Document the new architecture plan and update stated Rust version to 1.94+.
Cargo.toml Bump workspace rust-version to 1.94.
ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md Add the staged delivery plan document for the edge transport streaming initiative.
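The per-route compression gating applied in crates/rginx-http/src/compression.rs (policy vs. response buffering, content-type allowlist, minimum size) can be sketched roughly as follows; the function name and shape are illustrative, not the actual ResponseCompressionOptions API:

```rust
// Illustrative sketch of compression gating: unbuffered (streaming)
// responses are never compressed, and buffered ones must match the
// content-type allowlist and meet the minimum-size threshold.
fn should_compress(
    response_buffered: bool,
    content_type: &str,
    body_len: usize,
    min_bytes: usize,
    allowlist: &[&str],
) -> bool {
    if !response_buffered {
        // Never buffer a streaming response just to compress it.
        return false;
    }
    if body_len < min_bytes {
        return false;
    }
    // The PR normalizes allowlist entries to lowercase at compile time;
    // normalize the response's content type the same way before matching.
    let ct = content_type.to_ascii_lowercase();
    allowlist.iter().any(|allowed| ct.starts_with(allowed))
}

fn main() {
    let allow = ["text/html", "application/json"];
    assert!(should_compress(true, "text/html; charset=utf-8", 2048, 1024, &allow));
    assert!(!should_compress(false, "text/html", 2048, 1024, &allow));
    assert!(!should_compress(true, "image/png", 2048, 1024, &allow));
}
```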


Comment thread crates/rginx-http/src/proxy/forward/error.rs Outdated
Comment thread crates/rginx-http/src/proxy/forward/mod.rs Outdated
Comment thread crates/rginx-http/src/timeout/body.rs Outdated
Comment thread crates/rginx-http/src/handler/dispatch.rs

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
crates/rginx-http/src/handler/dispatch.rs (1)

406-416: 🛠️ Refactor suggestion | 🟠 Major

Collect the route-execution parameters of build_route_response into a context object.

The function currently takes 9 parameters, exceeding the Clippy-configured threshold (8) and triggering the too_many_arguments warning. Consider consolidating the route-execution parameters such as action, request_buffering, and streaming_response_idle_timeout into a single context struct, which unblocks CI and improves maintainability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-http/src/handler/dispatch.rs` around lines 406 - 416, wrap the
route-execution parameters of build_route_response (at least action: RouteAction,
request_buffering: rginx_core::RouteBufferingPolicy, streaming_response_idle_timeout:
Option<Duration>, and any other route-execution-related parameters) into a new
context struct (e.g., RouteExecutionContext); replace the scattered parameters in
the build_route_response signature with this single parameter and construct the
context at every call site. Keep the remaining parameters (request, state, active,
listener, client_address, request_id) unchanged, adjust type references (e.g.,
ListenerRequestContext<'_>, ActiveState) to match the new struct, and update the
related unit/integration tests and call chain so everything compiles and the
too_many_arguments warning is resolved.
crates/rginx-http/src/proxy/request_body.rs (1)

197-207: ⚠️ Potential issue | 🟡 Minor

Fix the Clippy warning: too many function arguments

The pipeline reports clippy::too_many_arguments: this function has 9 parameters, exceeding the default limit of 8. Consider wrapping the related parameters in a configuration struct.

🔧 Suggested fix

A configuration struct can encapsulate the related parameters:

struct PrepareRequestBodyConfig<'a> {
    upstream_name: &'a str,
    method: &'a Method,
    headers: &'a HeaderMap,
    body_timeout: Duration,
    max_replayable_request_body_bytes: usize,
    max_request_body_bytes: Option<usize>,
    request_buffering: RouteBufferingPolicy,
    grpc_web_mode: Option<&'a GrpcWebMode>,
}

Alternatively, add a #[allow(clippy::too_many_arguments)] attribute above the function to suppress this warning (if the current design is considered reasonable).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-http/src/proxy/request_body.rs` around lines 197 - 207, The
function prepare_request_body currently has 9 parameters and triggers
clippy::too_many_arguments; fix by introducing a config struct (e.g.,
PrepareRequestBodyConfig<'a> containing upstream_name, method, headers,
body_timeout, max_replayable_request_body_bytes, max_request_body_bytes,
request_buffering, grpc_web_mode) and change prepare_request_body signature to
accept this single config plus the body (HttpBody), updating all call sites to
construct and pass PrepareRequestBodyConfig; alternatively, if refactoring is
undesirable, add #[allow(clippy::too_many_arguments)] above the
prepare_request_body function to suppress the lint.
🧹 Nitpick comments (5)
scripts/nginx_compare/launch.py (1)

321-329: Keep the exception-handling pattern consistent with benchmark_named_curl.

Static analysis flags the except Exception on line 323 (BLE001). benchmark_named_curl suppresses this warning with a # noqa: BLE001 comment on line 366. In addition, recording the first exception helps debug warmup-phase failures.

♻️ Suggested improvement
 def warmup_named_curl(
     *,
     url: str,
     flags: list[str],
     headers: list[str],
     body: bytes | None,
     requests: int,
     concurrency: int,
     timeout_secs: float,
     grpc_mode: str | None = None,
 ) -> None:
     warmup_concurrency = min(concurrency, requests, 16)
     failed_requests = 0
+    first_error: Exception | None = None
     with concurrent.futures.ThreadPoolExecutor(max_workers=warmup_concurrency) as executor:
         futures = [
             executor.submit(
                 run_curl_request,
                 url=url,
                 flags=flags,
                 headers=headers,
                 body=body,
                 timeout_secs=timeout_secs,
                 grpc_mode=grpc_mode,
             )
             for _ in range(requests)
         ]
         for future in concurrent.futures.as_completed(futures):
             try:
                 future.result()
-            except Exception:
+            except Exception as exc:  # noqa: BLE001 - warmup resilience
                 failed_requests += 1
+                if first_error is None:
+                    first_error = exc

     if failed_requests:
         print(
-            f"[nginx-compare] warning: warmup failed {failed_requests}/{requests} requests for {url}",
+            f"[nginx-compare] warning: warmup failed {failed_requests}/{requests} requests for {url}; "
+            f"first error: {first_error!r}",
             file=sys.stderr,
         )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/nginx_compare/launch.py` around lines 321 - 329, The except
Exception: block in the warmup loop should match benchmark_named_curl's pattern:
suppress the BLE001 warning with a trailing "# noqa: BLE001" and capture/log the
first exception for debugging; update the try/except around future.result() to
store the first caught exception (e.g., first_exc) and then include that
exception in the stderr print for the warmup warning (referencing
failed_requests, requests, and url) while keeping the noqa comment on the except
line.
crates/rginx-runtime/src/bootstrap/listeners.rs (1)

418-432: The incremental join logic is correct, but note the behavior on errors.

The current implementation returns early when a task fails to join; at that point the joined_tasks cursor stays at the failed task's index and the tasks vector is not cleared. If the group is subsequently dropped, the un-joined JoinHandles are dropped too, leaving the corresponding tasks detached and still running in the background.

If this is intended (remaining tasks may detach on failure), consider adding a comment documenting the design intent. If all tasks must be reclaimed, consider continuing to join or aborting the remaining tasks in the error branch.

💡 Optional: add a comment documenting the design intent
 /// Waits for all worker tasks in a listener group to finish.
+///
+/// On error, remaining task handles are left in the group. If the group is
+/// subsequently dropped, those tasks become detached and continue running.
 async fn join_listener_worker_group(group: &mut ListenerWorkerGroup) -> Result<()> {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-runtime/src/bootstrap/listeners.rs` around lines 418 - 432, The
current join_listener_worker_group function can return early on a join failure
leaving group.tasks uncleared and joined_tasks at the failure index, which
detaches remaining JoinHandles; update join_listener_worker_group (the loop that
awaits (&mut group.tasks[worker_index]).await and the error mapping) so that on
Err it either (A) continues attempting to join the remaining handles before
returning the error, or (B) explicitly aborts/awaits and clears all remaining
JoinHandles and resets group.joined_tasks to 0 before returning the error; if
the early-return behavior is intentional, add a clear comment in
join_listener_worker_group explaining that failing to join leaves remaining
tasks detached by design.
crates/rginx-http/src/timeout/tests.rs (1)

285-324: Consider adding a boundary case where the total exactly equals the limit.

The tests cover "below the limit" and "above the limit", but not "cumulative bytes exactly equal to the limit", the branch most prone to off-by-one regressions. Since the implementation relies on > and remaining-budget arithmetic, adding an == limit case would make the coverage more robust.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-http/src/timeout/tests.rs` around lines 285 - 324, Add a
boundary test where cumulative bytes equal the limit to guard against off-by-one
errors: create a new async test (e.g.,
max_bytes_body_allows_frames_exactly_at_limit) that constructs a StreamBody with
frames whose combined length equals the MaxBytesBody limit, wrap it with
Box::pin(MaxBytesBody::new(..., 8)), then poll_frame via poll_fn twice and
assert both frames succeed (using .into_data() comparisons) just like in
max_bytes_body_allows_frames_within_limit and
max_bytes_body_errors_when_limit_is_exceeded to verify the implementation of
MaxBytesBody::poll_frame correctly allows the exact == limit case.
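The suggested boundary case can be sketched with a toy byte limiter in place of the project's MaxBytesBody; the stand-in only demonstrates the exact-at-limit behavior the review wants locked down:

```rust
// Toy stand-in for a streaming byte limiter: accepts chunks until the
// cumulative size would EXCEED the limit. The invariant under test is
// that a total exactly equal to the limit is still allowed.
struct ByteLimiter {
    limit: u64,
    seen: u64,
}

impl ByteLimiter {
    fn new(limit: u64) -> Self {
        Self { limit, seen: 0 }
    }

    /// Returns Err once the running total would exceed the limit.
    fn accept(&mut self, chunk_len: u64) -> Result<(), &'static str> {
        let next = self.seen.saturating_add(chunk_len);
        if next > self.limit {
            return Err("request body exceeded configured limit");
        }
        self.seen = next;
        Ok(())
    }
}

fn main() {
    let mut limiter = ByteLimiter::new(8);
    assert!(limiter.accept(4).is_ok());
    assert!(limiter.accept(4).is_ok()); // exactly at the limit: allowed
    assert!(limiter.accept(1).is_err()); // one byte over: rejected
}
```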
crates/rginx-app/tests/streaming_download.rs (1)

101-245: Move these chunked-HTTP test helpers into the shared support module.

The read_http_head_and_pending, read_http_chunk, and spawn_scripted_chunked_response_server logic largely duplicates crates/rginx-app/tests/reload/streaming_flow.rs, and crates/rginx-app/tests/phase1.rs has a similar head-parsing implementation. Once the protocol details change, it is easy to fix only one copy, after which the integration tests start drifting apart.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-app/tests/streaming_download.rs` around lines 101 - 245, extract
the duplicated chunked-HTTP test helpers into a shared tests support module: move
read_http_head_and_pending, read_http_chunk, and spawn_scripted_chunked_response_server
(plus write_chunk/write_chunk_result/connect_http_client if needed) into a common
tests/support/mod.rs and declare those functions pub, keeping their std::net/IO
type signatures unchanged so other tests can use them; then replace the local
copies in the test files that used these helpers (streaming_download.rs,
reload/streaming_flow.rs, phase1.rs) with use crate::support::<function_name> or
mod support; fix any path/visibility compile errors and delete the duplicated
implementations.
crates/rginx-app/tests/http2.rs (1)

223-227: The timing assertion may be flaky in CI environments

The assertion requires the first chunk to arrive within 500ms, but under heavy CI load, TLS handshakes, TCP connection setup, and similar factors can cause intermittent failures.

Suggestion: if CI flakiness shows up later, consider relaxing the threshold or adjusting the assertion strategy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@crates/rginx-app/tests/http2.rs` around lines 223 - 227, the test in
tests/http2.rs asserts that started.elapsed() is below Duration::from_millis(500),
which can fail intermittently under heavy CI load; relax the timeout or make it
configurable and rewrite the assertion for stability — for example, replace the
hard-coded Duration::from_millis(500) with a constant or a value read from an
environment variable (such as RGINX_TEST_TIMEOUT_MS) with a more forgiving default
(e.g., 1000ms), or use retry/wait-loop assertion logic (check that started.elapsed()
stays within the allowed maximum, retrying at short intervals); the identifiers
involved are started and Duration::from_millis(...) inside the test function
containing the assertion.
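The env-configurable deadline suggested above could look roughly like this; RGINX_TEST_TIMEOUT_MS is a name proposed by the review, not an existing knob:

```rust
use std::env;
use std::time::Duration;

// Sketch of the suggested change: derive the first-chunk deadline from an
// environment variable, falling back to a more forgiving default than the
// hard-coded 500ms. The parsing is factored out so it is testable without
// touching the process environment.
fn first_chunk_deadline(raw: Option<&str>) -> Duration {
    raw.and_then(|value| value.parse::<u64>().ok())
        .map(Duration::from_millis)
        .unwrap_or(Duration::from_millis(1_000)) // relaxed default: 1000ms
}

fn main() {
    // In the test, this would replace the hard-coded 500ms bound:
    // assert!(started.elapsed() < deadline);
    let deadline = first_chunk_deadline(env::var("RGINX_TEST_TIMEOUT_MS").ok().as_deref());
    assert!(deadline >= Duration::from_millis(1));
}
```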
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md`:
- Around line 284-285:
The document contains a typo: the phrase "响缩放在后面" should read "压缩放在后面" ("compression comes later"). Locate the sentence containing the original wording (e.g., the full sentence "响缩放在后面,是因为它天然依赖响应传输策略") and replace it with "压缩放在后面,是因为它天然依赖响应传输策略"; also check the document for other identical or similar mistakes and keep the surrounding wording and punctuation consistent.

In `@Cargo.toml`:
- Line 15: rust-version = "1.94" in Cargo.toml is inconsistent with msrv = "1.94.1"
in clippy.toml, causing a CI warning; unify the two versions (suggested: "1.94.1",
or whatever MSRV the project decides on) by updating the rust-version field in
Cargo.toml or the msrv field in clippy.toml so that rust-version and msrv match
exactly (check and update the rust-version and msrv definitions in Cargo.toml and
clippy.toml respectively).

In `@crates/rginx-app/tests/http3.rs`:
- Around line 648-710: The test leaves the HTTP/3 driver task and QUIC
connection open on early returns; wrap the main client interaction (the connect
+ send_request + recv_data sequence that uses endpoint, connection_handle,
send_request, request_stream and driver_task) into a let result = async { ...
}.await block (or similar try/finally pattern) so you can capture its Result,
then always run a teardown section that calls connection_handle.close(...),
endpoint.close(...), and ensures driver_task is finished (await with timeout
then driver_task.abort() if needed) before returning result; this centralizes
cleanup for both success and error paths and prevents leaked tasks/connections.

In `@crates/rginx-config/src/compile/route.rs`:
- Around line 88-95: The code silently casts compression_min_bytes from u64 to
usize using `as usize` which can truncate on 32-bit targets; replace the
`.map(|value| value as usize)` with a fallible conversion using
`usize::try_from(value)` and propagate a configuration error on overflow (e.g.,
return Err with a clear message) so compilation of the route fails when
`compression_min_bytes` cannot fit into `usize`; update the surrounding function
that constructs the route config (the code block that sets
`compression_min_bytes` and returns the compiled route) to handle the Result
from `try_from` and return the configuration error accordingly.
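The fallible conversion being requested can be sketched as follows; the error type (a plain String) and function name are illustrative, not the actual compile-path code:

```rust
// Sketch: fail route compilation with a clear error instead of silently
// truncating `compression_min_bytes` via `as usize` on 32-bit targets.
fn compile_compression_min_bytes(value: Option<u64>) -> Result<Option<usize>, String> {
    value
        .map(|v| {
            usize::try_from(v).map_err(|_| {
                format!("compression_min_bytes {v} does not fit into usize on this target")
            })
        })
        .transpose() // Option<Result<..>> -> Result<Option<..>>
}

fn main() {
    assert_eq!(compile_compression_min_bytes(Some(1024)), Ok(Some(1024)));
    assert_eq!(compile_compression_min_bytes(None), Ok(None));
    // On a 64-bit target every u64 fits, so the error path only fires on
    // 32-bit targets — exactly the silent-truncation case the review flags.
}
```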

In `@crates/rginx-http/src/proxy/forward/error.rs`:
- Around line 29-33: The downstream_request_body_too_large_error function
currently detects oversized bodies by string-matching candidate.to_string(),
which is brittle; change it to first walk the error chain and check concrete
error types (e.g., reuse or call the existing limit/error-typed helper function
you have for request-size or limit errors) to determine a 413 condition, and
only if no typed match is found fall back to the current contains("request body
exceeded configured limit of") string check; update the loop in
downstream_request_body_too_large_error to attempt type-based matching before
the string-based fallback.
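A minimal sketch of type-first matching with a string fallback, using a stand-in for the project's RequestBodyLimitError:

```rust
use std::error::Error;
use std::fmt;

// Stand-in for the typed error added in crates/rginx-http/src/timeout/body.rs.
#[derive(Debug)]
struct RequestBodyLimitError {
    limit: usize,
}

impl fmt::Display for RequestBodyLimitError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "request body exceeded configured limit of {} bytes", self.limit)
    }
}

impl Error for RequestBodyLimitError {}

// Walk the error chain, preferring a typed downcast and keeping the
// message match only as a fallback for errors that lose their type.
fn is_body_too_large(err: &(dyn Error + 'static)) -> bool {
    let mut candidate = Some(err);
    while let Some(current) = candidate {
        if current.downcast_ref::<RequestBodyLimitError>().is_some() {
            return true; // typed match: robust against message changes
        }
        if current.to_string().contains("request body exceeded configured limit of") {
            return true; // brittle fallback, as noted by the review
        }
        candidate = current.source();
    }
    false
}

fn main() {
    let err = RequestBodyLimitError { limit: 1024 };
    assert!(is_body_too_large(&err));
    let other = std::io::Error::new(std::io::ErrorKind::Other, "connection reset");
    assert!(!is_body_too_large(&other));
}
```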

In `@crates/rginx-http/src/proxy/request_body.rs`:
- Around line 211-215: Merge the nested ifs into a single conditional to satisfy
clippy::collapsible_if: replace the “if let Some(max_request_body_bytes) { if
body.size_hint().lower() > max_request_body_bytes as u64 { ... } }” pattern with
a single combined condition that destructures the Option and checks the size in
one line (e.g. an if-let with an additional boolean guard using
body.size_hint().lower()), keeping the same error return
PrepareRequestError::payload_too_large(max_request_body_bytes) and using
body.size_hint().lower() for the comparison.
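The collapsed condition could be written, for example, with Option::is_some_and; a sketch, not the exact project code:

```rust
// clippy::collapsible_if fix: fold the Option check and the size
// comparison into one condition instead of nesting two ifs.
fn exceeds_limit(size_hint_lower: u64, max_request_body_bytes: Option<usize>) -> bool {
    // Equivalent to:
    //   if let Some(max) = max_request_body_bytes {
    //       if size_hint_lower > max as u64 { ... }
    //   }
    max_request_body_bytes.is_some_and(|max| size_hint_lower > max as u64)
}

fn main() {
    assert!(exceeds_limit(2048, Some(1024)));
    assert!(!exceeds_limit(512, Some(1024)));
    assert!(!exceeds_limit(2048, None)); // no limit configured
}
```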

In `@crates/rginx-http/src/timeout/body.rs`:
- Around line 281-288: The size_hint() implementation only clamps hint.upper()
which can violate SizeHint's invariant if hint.lower() exceeds the remaining
bytes; update size_hint() (the method on the wrapper that calls
self.inner.size_hint()) to compute remaining =
self.max_request_body_bytes.saturating_sub(self.bytes_read) as u64 and then
clamp both hint.lower() and hint.upper() to at most remaining (use
hint.set_lower(min(original_lower, remaining)) and
hint.set_upper(min(original_upper, remaining))), so lower <= upper is preserved
and set_upper() cannot panic.
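The clamping logic, sketched with plain (lower, upper) bounds in place of http_body's SizeHint type:

```rust
// Clamp BOTH bounds of a body size hint to the remaining byte budget so the
// lower <= upper invariant survives. Clamping only the upper bound (as the
// review notes) can leave lower > upper when the inner body promises more
// bytes than the budget allows.
fn clamp_size_hint(
    lower: u64,
    upper: Option<u64>,
    max_request_body_bytes: usize,
    bytes_read: usize,
) -> (u64, Option<u64>) {
    let remaining = max_request_body_bytes.saturating_sub(bytes_read) as u64;
    let clamped_lower = lower.min(remaining);
    // An unknown upper bound becomes "at most the remaining budget".
    let clamped_upper = Some(upper.unwrap_or(u64::MAX).min(remaining));
    (clamped_lower, clamped_upper)
}

fn main() {
    // Inner body claims at least 100 bytes, but only 10 remain in the
    // budget: without clamping the lower bound, lower (100) > upper (10)
    // would violate the invariant.
    assert_eq!(clamp_size_hint(100, Some(200), 50, 40), (10, Some(10)));
    assert_eq!(clamp_size_hint(5, None, 50, 40), (5, Some(10)));
}
```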

In `@crates/rginx-runtime/src/bootstrap/shutdown.rs`:
- Around line 90-97: In join_admin_task_after_abort, the current timeout path
only inspects the JoinError and can swallow the admin task's own io::Error;
change the await handling to inspect both layers (the JoinHandle::await ->
Result<std::io::Result<()>, JoinError>): if the join returns Ok(Err(e)) log the
inner io::Error (e) similarly to the normal path, and if the join returns
Err(join_err) and !join_err.is_cancelled() log the join_err; update the
match/if-let on task.await in join_admin_task_after_abort to cover both
Ok(Err(e)) and Err(join_err) cases and emit appropriate tracing::warn! messages.
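The two-layer inspection, transposed from tokio::task::JoinHandle to std::thread so the sketch is self-contained (tokio's JoinError additionally exposes is_cancelled(), which this stand-in omits):

```rust
use std::io;
use std::thread;

// Joining a task yields two layers of failure: the join itself can fail
// (panic/cancellation) and the task's own io::Result can fail. Both layers
// deserve a log line instead of swallowing the inner io::Error.
fn describe_join(result: thread::Result<io::Result<()>>) -> &'static str {
    match result {
        Ok(Ok(())) => "clean exit",
        Ok(Err(_io_err)) => "task finished with io::Error", // log the inner error
        Err(_join_err) => "task panicked",                  // log the join error
    }
}

fn main() {
    let ok = thread::spawn(|| io::Result::Ok(())).join();
    assert_eq!(describe_join(ok), "clean exit");

    let inner_err =
        thread::spawn(|| Err(io::Error::new(io::ErrorKind::Other, "bind failed"))).join();
    assert_eq!(describe_join(inner_err), "task finished with io::Error");
}
```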

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 3ee3439d-1468-486d-bcfb-cf1001135f55

📥 Commits

Reviewing files that changed from the base of the PR and between 993e666 and da8680e.

📒 Files selected for processing (44)
  • ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md
  • Cargo.toml
  • README.md
  • clippy.toml
  • crates/rginx-app/src/main.rs
  • crates/rginx-app/tests/check.rs
  • crates/rginx-app/tests/compression.rs
  • crates/rginx-app/tests/failover.rs
  • crates/rginx-app/tests/http2.rs
  • crates/rginx-app/tests/http3.rs
  • crates/rginx-app/tests/phase1.rs
  • crates/rginx-app/tests/reload.rs
  • crates/rginx-app/tests/reload/streaming_flow.rs
  • crates/rginx-app/tests/streaming_download.rs
  • crates/rginx-app/tests/upgrade.rs
  • crates/rginx-config/src/compile/route.rs
  • crates/rginx-config/src/compile/tests.rs
  • crates/rginx-config/src/model.rs
  • crates/rginx-config/src/validate.rs
  • crates/rginx-config/src/validate/route.rs
  • crates/rginx-config/src/validate/tests.rs
  • crates/rginx-core/src/config.rs
  • crates/rginx-core/src/config/route.rs
  • crates/rginx-core/src/config/tests.rs
  • crates/rginx-core/src/lib.rs
  • crates/rginx-http/src/compression.rs
  • crates/rginx-http/src/handler/dispatch.rs
  • crates/rginx-http/src/handler/tests.rs
  • crates/rginx-http/src/proxy/clients/mod.rs
  • crates/rginx-http/src/proxy/forward/error.rs
  • crates/rginx-http/src/proxy/forward/mod.rs
  • crates/rginx-http/src/proxy/mod.rs
  • crates/rginx-http/src/proxy/request_body.rs
  • crates/rginx-http/src/proxy/tests/peer_selection.rs
  • crates/rginx-http/src/router.rs
  • crates/rginx-http/src/state/tests.rs
  • crates/rginx-http/src/timeout/body.rs
  • crates/rginx-http/src/timeout/mod.rs
  • crates/rginx-http/src/timeout/tests.rs
  • crates/rginx-runtime/src/bootstrap/listeners.rs
  • crates/rginx-runtime/src/bootstrap/mod.rs
  • crates/rginx-runtime/src/bootstrap/shutdown.rs
  • docker/nginx-compare/Dockerfile
  • scripts/nginx_compare/launch.py

Comment thread ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md Outdated
Comment thread Cargo.toml Outdated
Comment thread crates/rginx-app/tests/http3.rs
Comment thread crates/rginx-config/src/compile/route.rs
Comment thread crates/rginx-http/src/proxy/forward/error.rs Outdated
Comment thread crates/rginx-http/src/proxy/request_body.rs Outdated
Comment thread crates/rginx-http/src/timeout/body.rs
Comment thread crates/rginx-runtime/src/bootstrap/shutdown.rs

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@crates/rginx-app/tests/http2.rs`:
- Around line 172-173: The test's "first-chunk timing" is measuring from before
the request (so it includes TLS/connection overhead) and uses a 900ms sleep
that's too close to the 1s cutoff, making CI flaky; change the timing to start
when the response headers are received (i.e., begin measuring after the code
that reads/observes the response HEADERS) and then measure the interval until
write_chunk/first data chunk arrives, and increase the artificial delay for the
second chunk (the thread::sleep call before write_chunk(&mut stream,
b"second\n")) to a clearly larger value than the cutoff (e.g., >1500ms) so the
assertion about "early first chunk vs buffering" is based solely on
header-to-first-data timing and is robust; update any assertions that compare
against the cutoff to use the new measured interval instead of the request-start
timestamp.
ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI (base), Organization UI (inherited)

Review profile: CHILL

Plan: Pro

Run ID: 45d94c1b-73dc-43b0-aa6f-7b1739a1fc20

📥 Commits

Reviewing files that changed from the base of the PR and between da8680e and 5083a27.

📒 Files selected for processing (24)
  • ARCHITECTURE_EDGE_TRANSPORT_STREAMING_PLAN.md
  • Cargo.toml
  • README.md
  • crates/rginx-app/src/main.rs
  • crates/rginx-app/tests/http2.rs
  • crates/rginx-app/tests/http3.rs
  • crates/rginx-app/tests/reload/streaming_flow.rs
  • crates/rginx-app/tests/streaming_download.rs
  • crates/rginx-app/tests/support/mod.rs
  • crates/rginx-config/src/compile/route.rs
  • crates/rginx-config/src/load.rs
  • crates/rginx-http/src/compression.rs
  • crates/rginx-http/src/handler/dispatch.rs
  • crates/rginx-http/src/handler/grpc.rs
  • crates/rginx-http/src/proxy/forward/error.rs
  • crates/rginx-http/src/proxy/forward/mod.rs
  • crates/rginx-http/src/proxy/grpc_web/codec.rs
  • crates/rginx-http/src/proxy/request_body.rs
  • crates/rginx-http/src/timeout/body.rs
  • crates/rginx-http/src/timeout/tests.rs
  • crates/rginx-http/src/tls/ocsp/mod.rs
  • crates/rginx-runtime/src/bootstrap/listeners.rs
  • crates/rginx-runtime/src/bootstrap/shutdown.rs
  • scripts/nginx_compare/launch.py
✅ Files skipped from review due to trivial changes (5)
  • crates/rginx-config/src/load.rs
  • crates/rginx-http/src/tls/ocsp/mod.rs
  • crates/rginx-http/src/handler/grpc.rs
  • crates/rginx-http/src/proxy/grpc_web/codec.rs
  • scripts/nginx_compare/launch.py
🚧 Files skipped from review as they are similar to previous changes (8)
  • README.md
  • crates/rginx-http/src/timeout/tests.rs
  • crates/rginx-config/src/compile/route.rs
  • crates/rginx-http/src/proxy/forward/error.rs
  • Cargo.toml
  • crates/rginx-http/src/compression.rs
  • crates/rginx-http/src/proxy/request_body.rs
  • crates/rginx-app/tests/streaming_download.rs

Comment thread crates/rginx-app/tests/http2.rs Outdated
Comment thread crates/rginx-app/tests/support/mod.rs Outdated
@vansour vansour merged commit 633bdf4 into main Apr 15, 2026
6 checks passed
@vansour vansour deleted the feat/edge-transport-streaming-phases branch April 15, 2026 04:00
