
Copy #6

Open

Ironboxplus wants to merge 69 commits into main from copy

Conversation

@Ironboxplus
Owner

Description

Motivation and Context

Closes #XXXX

Relates to #XXXX

How Has This Been Tested?

Checklist

  • I have read the CONTRIBUTING document.
  • I have formatted my code with go fmt or prettier.
  • I have added appropriate labels to this PR (or, if I lack permissions or a needed label does not exist, noted this in the description for a maintainer to handle).
  • I have requested review from the relevant code authors using the "Request review" feature, when applicable.
  • I have updated the related repositories accordingly (if needed).

@Ironboxplus force-pushed the copy branch 11 times, most recently from 95869e3 to 6d65605 (January 2, 2026 15:24)
@Ironboxplus force-pushed the main branch 2 times, most recently from 367ea09 to 6a81227 (January 2, 2026 15:43)
@Ironboxplus force-pushed the copy branch 7 times, most recently from 875a9cc to c36d3c5 (January 4, 2026 09:16)
@Ironboxplus force-pushed the copy branch 6 times, most recently from 0c8fbb5 to a1ccfd9 (January 7, 2026 13:59)
@Ironboxplus force-pushed the copy branch 2 times, most recently from 8b9dbb2 to ee97856 (January 8, 2026 16:00)
cyk added 30 commits (April 5, 2026 00:31)
- Add Go module cache
- Add frontend download cache with commit SHA tracking
- Add Docker layer cache (registry-based)
- Cache will invalidate when frontend repo updates
Restore the SyncClosers reference-count check on link cache hits:
- On a cache hit, if the file handle is already closed, delete the cache entry and fetch the link again
- When RequireReference is true, use SetTypeWithExpirable to bind the cache entry to the file handle's lifetime
- Links that need no reference counting keep the default TTL, preserving multi-client reuse
- Restore the for loop that handles the race where singleflight returns an already-closed handle
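The cache-hit re-check described above can be sketched as follows. This is a minimal illustration only: the `link`, `linkCache`, and `getLink` names are hypothetical stand-ins for the project's real types, the map-backed cache replaces the real TTL cache, and SyncClosers, singleflight, and SetTypeWithExpirable are not modeled, only the retry shape around a possibly-closed handle.

```go
package main

import (
	"fmt"
	"sync"
)

// link is a hypothetical stand-in for a cached download link whose
// backing file handle may be closed out from under the cache.
type link struct {
	url    string
	closed bool // true once the backing file handle has been closed
}

// linkCache is a minimal map-backed cache standing in for the real one.
type linkCache struct {
	mu sync.Mutex
	m  map[string]*link
}

func newLinkCache() *linkCache { return &linkCache{m: map[string]*link{}} }

func (c *linkCache) get(key string) (*link, bool) {
	c.mu.Lock()
	defer c.mu.Unlock()
	l, ok := c.m[key]
	return l, ok
}

func (c *linkCache) del(key string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	delete(c.m, key)
}

func (c *linkCache) set(key string, l *link) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.m[key] = l
}

// getLink re-checks liveness on a cache hit: a stale entry whose handle is
// already closed is deleted and the link is fetched again. The outer for
// loop also covers the race where a concurrent fetch (singleflight in the
// real code) hands back a handle that was closed in the meantime.
func getLink(c *linkCache, key string, fetch func() *link) *link {
	for {
		if l, ok := c.get(key); ok {
			if !l.closed {
				return l // live cache hit
			}
			c.del(key) // handle closed: drop the stale entry
		}
		if l := fetch(); !l.closed {
			c.set(key, l)
			return l
		}
		// fetched handle lost a race with Close; retry
	}
}

func main() {
	c := newLinkCache()
	c.set("file", &link{url: "stale", closed: true})
	got := getLink(c, "file", func() *link { return &link{url: "fresh"} })
	fmt.Println(got.url) // the stale closed entry has been replaced
}
```

The point of the loop is that neither a cache hit nor a fresh fetch is trusted until the handle is confirmed live, which is exactly the race the restored for loop guards against.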
…irectoryTree

- Add srcBasePath parameter to preCreateDirectoryTree to properly track the
  current source directory during recursion. The old code used t.SrcActualPath
  (the top-level path) for all recursion levels, causing incorrect op.List paths
  when maxDepth > 0. This was latent with maxDepth=1 (second pass returned early
  at maxDepth=0 before the bug triggered) but would break if maxDepth is raised.

- Remove the 50ms time.Sleep added per MakeDir call. Drivers that enforce QPS
  limits (e.g. 115, 115_open) already call d.WaitLimit(ctx) -- a token-bucket
  rate.Limiter with burst=1 -- inside their own MakeDir implementation, so
  op.MakeDir naturally blocks at the correct per-driver rate. An unconditional
  sleep would penalise all other drivers (S3, WebDAV, etc.) with no benefit.
…lity

Extract the core recursion logic from preCreateDirectoryTree into a pure
helper preCreateDirTreeFn that accepts makeDir and listSrc as injected
function parameters. The method wrapper passes the real op.MakeDir /
op.List closures unchanged, so production behavior is identical.

This makes the function unit-testable without a real storage driver or
database. Added 11 tests covering:
- empty / file-only objs → no MakeDir calls
- flat dirs at maxDepth=0 → correct dst paths, no List calls
- srcBasePath regression: recursive List must use subdirSrcPath not the
  fixed t.SrcActualPath (the original bug that was latent at maxDepth=1)
- maxDepth boundary: recursion stops exactly at the configured depth
- context cancellation (immediate and mid-recursion)
- MakeDir error non-fatal: remaining dirs still processed
- List error non-fatal: other subdirs still recursed
- mixed file+dir objects: only dirs trigger MakeDir
- context timeout
