
channel support#21

Merged
newbe36524 merged 7 commits into main from syncToAzure
Feb 15, 2026

Conversation


@newbe36524 newbe36524 commented Feb 15, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added channel mapping support to organize versions by release channels (stable, beta, etc.)
    • Implemented smart upload optimization that skips unchanged files using hash comparison
    • Added reporting of skipped files during Azure upload operations
    • Enabled custom channel mapping configuration via JSON parameter
  • Documentation

    • Added comprehensive proposal and implementation plan documentation for channel support in index generation


coderabbitai bot commented Feb 15, 2026

📝 Walkthrough

Adds channel-based organization to Azure Blob Storage index generation. Introduces channel mapping from version strings, groups versions by channel, generates channels object in index.json, and implements MD5-based upload optimization to skip unchanged blobs. Also updates the sync workflow to trigger on syncToAzure branch pushes.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Workflow Configuration**<br>`.github/workflows/sync-azure-storage.yml` | Added a push event trigger on the syncToAzure branch alongside the existing workflow_call trigger. |
| **Build Schema**<br>`.nuke/build.schema.json` | Added a ChannelMapping property (string type) to the NukeBuild schema for custom channel mapping in JSON format. |
| **Build Parameters**<br>`nukeBuild/Build.cs` | Introduced a ChannelMapping CLI parameter (string, default empty) with a Parameter attribute for custom channel mapping configuration. |
| **Configuration Models**<br>`nukeBuild/AzureStorageConfiguration.cs` | Added a SkippedBlobs property to AzureBlobPublishResult to track blobs skipped during upload due to matching content hashes. |
| **Core Adapter Logic**<br>`nukeBuild/Adapters/AzureBlobAdapter.cs` | Implemented channel mapping support with a ParseChannelMapping helper; added ExtractChannelFromVersion for version-to-channel parsing; introduced GroupVersionsByChannel and BuildChannelsObject for index organization; added public VersionGroup and ChannelInfo classes; enabled MD5-based conditional uploads to skip unchanged blobs; configured camelCase JSON naming; enhanced logging for channel operations. |
| **Build Orchestration**<br>`nukeBuild/Build.AzureStorage.cs` | Updated AzureBlobAdapter instantiation to pass the ChannelMapping parameter in both the publish and index generation paths; added conditional logging of skipped blob counts after successful uploads. |
| **Documentation & Proposal**<br>`openspec/changes/archive/2026-02-15-indexjson-channel-support/proposal.md`, `openspec/changes/archive/2026-02-15-indexjson-channel-support/tasks.md` | Comprehensive proposal and multi-phase implementation plan documenting the channel support strategy, version-to-channel mapping logic, channels object structure, Nuke configuration integration, testing phases, and deployment considerations. |

Sequence Diagram

```mermaid
sequenceDiagram
    actor Build
    participant AzureBlobAdapter
    participant VersionParser
    participant Grouper
    participant Uploader
    participant AzureBlob as Azure Blob Storage

    Build->>AzureBlobAdapter: GenerateIndexFromBlobsAsync(versions, channelMappingJson)
    AzureBlobAdapter->>AzureBlobAdapter: ParseChannelMapping(channelMappingJson)

    loop For each version
        AzureBlobAdapter->>VersionParser: ExtractChannelFromVersion(version, customMapping)
        VersionParser-->>AzureBlobAdapter: channel
    end

    AzureBlobAdapter->>Grouper: GroupVersionsByChannel(versions)
    Grouper-->>AzureBlobAdapter: groupedByChannel

    AzureBlobAdapter->>AzureBlobAdapter: BuildChannelsObject(groupedByChannel)
    AzureBlobAdapter->>AzureBlobAdapter: Construct index.json with channels

    AzureBlobAdapter->>Uploader: Upload index.json
    loop For each blob
        Uploader->>Uploader: Compute local MD5
        Uploader->>AzureBlob: Get remote ContentHash
        alt Hash matches
            Uploader->>Uploader: Skip upload (record as skipped)
        else Hash differs
            Uploader->>AzureBlob: Upload blob
        end
    end

    AzureBlobAdapter-->>Build: PublishResult (with SkippedBlobs)
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐰 Channels bloom like spring flowers bright,
Versions grouped in Azure's light,
MD5 hashes skip with glee,
Index JSON, organized spree! 🌱✨

🚥 Pre-merge checks: ✅ 2 passed | ❌ 2 failed (1 warning, 1 inconclusive)

❌ Failed checks

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 50.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |
| Title check | ❓ Inconclusive | The title 'channel support' is vague and overly generic: it references a real aspect of the changes but lacks specificity about what component is being modified or the scope of the change. | Consider a more descriptive title such as 'Add channel support to Azure index generation' or 'Implement multi-channel indexJSON support'. |

✅ Passed checks

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Merge Conflict Detection | ✅ Passed | No merge conflicts detected when merging into main. |



@newbe36524 newbe36524 merged commit 06034e7 into main Feb 15, 2026
4 of 5 checks passed
@newbe36524 newbe36524 deleted the syncToAzure branch February 15, 2026 11:28

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@nukeBuild/Adapters/AzureBlobAdapter.cs`:
- Around line 105-138: Compute the local MD5 hash (using MD5.Create() and
md5.ComputeHashAsync on the file stream) before checking
blobClient.ExistsAsync(), then use that localHash for the pre-upload comparison
against properties.Value.ContentHash (remoteHash) and, when calling
blobClient.UploadAsync, pass a BlobUploadOptions with HttpHeaders.ContentHash
set to localHash so the hash is persisted for both small and large uploads;
update the existing logic around localHash, remoteHash, shouldUpload, and the
UploadAsync call (references: blobClient, GetPropertiesAsync, remoteHash,
localHash, UploadAsync, BlobUploadOptions.HttpHeaders.ContentHash) so
skip/upload behavior remains correct and consistent for all file sizes.

In `@openspec/changes/archive/2026-02-15-indexjson-channel-support/proposal.md`:
- Around line 1-128: Reformat the proposal document (proposal.md) to the
required section structure by replacing the current free-form sections with the
five mandated headings: "Why" (motivation/problem), "What Changes" (list
concrete code & config changes such as updates to
AzureBlobAdapter.GenerateIndexFromBlobsAsync,
AzureBlobAdapter.GenerateIndexOnlyAsync, and Build.cs ReleaseChannel usage), "UI
Design Changes" (if any consumer/renderer changes; note current files that do
not need edits), "Code Flow Changes" (describe new version-to-channel mapping
logic, channels object construction, and where to add helper methods in
AzureBlobAdapter), and "Impact" (outline affected files and
backward-compatibility). Ensure each section is concise, uses the unique
symbols/method names from the diff (GenerateIndexFromBlobsAsync,
GenerateIndexOnlyAsync, ReleaseChannel, AzureBlobAdapter.cs, Build.cs), and
preserve the implementation details from the original proposal (version parsing,
channels object format, Nuke config) while removing other headings.
🧹 Nitpick comments (2)
nukeBuild/Adapters/AzureBlobAdapter.cs (2)

408-415: Consider downgrading full index.json logs to verbose/debug.

The JSON can be large and noisy at info level; this might bloat CI logs.


426-544: Consolidate channel classification to avoid drift.

ExtractChannelFromVersion overlaps with SemverExtensions.GetChannel and ChannelClassification. Consider reusing one core classifier and layering custom mapping on top.

Comment on lines +105 to +138
```csharp
// Check if blob exists and compare hashes to skip unnecessary uploads
bool shouldUpload = true;
if (await blobClient.ExistsAsync())
{
    var properties = await blobClient.GetPropertiesAsync();
    var remoteHash = properties.Value.ContentHash;

    // Calculate local file hash
    byte[] localHash;
    await using (var stream = File.OpenRead(filePath))
    using (var md5 = MD5.Create())
    {
        localHash = await md5.ComputeHashAsync(stream);
    }

    // Compare hashes
    if (remoteHash != null && localHash.SequenceEqual(remoteHash))
    {
        Log.Information("Skipping {File} (unchanged, hash: {Hash})", fileName, Convert.ToHexString(localHash)[..8]);
        shouldUpload = false;
        result.SkippedBlobs.Add(blobClient.Uri.ToString());
    }
}

if (shouldUpload)
{
    Log.Information("Uploading: {File} -> {Container}/{Blob}", fileName, options.ContainerName, blobName);

    await using var stream = File.OpenRead(filePath);
    await blobClient.UploadAsync(stream, overwrite: true);
    var blobUrl = blobClient.Uri.ToString();
    result.UploadedBlobs.Add(blobUrl);
    Log.Information("Upload successful: {Url}", blobUrl);
}
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Azure SDK UploadAsync ContentHash Content-MD5 automatic computation persistence

💡 Result:

  • Azure Storage does not “magically” persist an MD5 for every upload. What happens depends on which REST operation the SDK ends up using:

    • If the upload is a single Put Blob request and you omit Content-MD5, the service generates an MD5 and it shows up later in blob properties / listings as Content-MD5. [2]
    • If the upload is a block upload (Put Block / Put Block List), the service generally does not compute/persist a whole-blob MD5 for you; you must compute it client-side and set it explicitly (via the blob property header x-ms-blob-content-md5 / SDK “content hash” header). [1]
  • UploadAsync in Azure SDK can switch between those paths. The .NET SDK’s Upload/UploadAsync may use PutBlob for small uploads and PutBlock/PutBlockList for larger uploads, so you can see “sometimes MD5 exists, sometimes it’s null” behavior unless you always set it yourself. [3]

  • To persist an MD5 using the SDK, you must supply it (the SDK won’t reliably compute-and-store it for you across all upload strategies). For ADLS Gen2 / DataLake, this is done by setting HttpHeaders.ContentHash in upload options. [5]

Sources: Microsoft Learn REST Put Blob (service MD5 behavior) [2]; Microsoft Q&A explanation of PutBlob vs PutBlock/PutBlockList MD5 persistence [1]; Azure SDK upload method behavior (PutBlob vs blocks) [3]; DataLake UploadAsync + ContentHash example [5].


🌐 Web query:

Azure.Storage.Blobs BlobUploadOptions BlobHttpHeaders ContentHash documentation

💡 Result:

In Azure.Storage.Blobs (.NET):

  • BlobUploadOptions.HttpHeaders lets you set standard blob HTTP headers (including content headers) as part of an upload. [1]
  • BlobHttpHeaders.ContentHash is the blob’s MD5 (as a byte[]). If you specify it on upload, the service validates the transmitted content; if the MD5 doesn’t match, the request fails with HTTP 400 (Bad Request). [2]
  • When reading blob properties, BlobProperties.ContentHash is returned (as byte[]) when the blob has an MD5 and the response is for the full blob, so clients can validate integrity. [3]

Example (upload with MD5 + content type):

```csharp
using Azure.Storage.Blobs.Models;
using System.Security.Cryptography;

byte[] md5 = MD5.HashData(dataBytes); // 16 bytes

var options = new BlobUploadOptions
{
    HttpHeaders = new BlobHttpHeaders
    {
        ContentType = "application/octet-stream",
        ContentHash = md5
    }
};

await blobClient.UploadAsync(new BinaryData(dataBytes), options);
```

Sources: [1] [2] [3]


🌐 Web query:

Azure Blob Storage UploadAsync automatically compute Content-MD5 hash

💡 Result:

UploadAsync in the Azure .NET v12 Blob SDK does not reliably/always compute and set the blob Content-MD5 for you.

  • Upload/UploadAsync may upload as a single Put Blob for small payloads, but switches to Put Block / Put Block List for larger uploads. (azure.github.io)
  • For block uploads, the service won’t compute/populate Content-MD5 automatically; you’re expected to compute it client-side and set it yourself. (learn.microsoft.com)

What to do (compute it yourself and set it on upload)

In Azure.Storage.Blobs, set BlobHttpHeaders.ContentHash (MD5 bytes) via BlobUploadOptions:

```csharp
using System.Security.Cryptography;
using Azure.Storage.Blobs;
using Azure.Storage.Blobs.Models;

await using var stream = File.OpenRead(path);
byte[] md5 = MD5.HashData(stream);
stream.Position = 0;

await blobClient.UploadAsync(stream, new BlobUploadOptions
{
    HttpHeaders = new BlobHttpHeaders { ContentHash = md5 }
});
```

(Equivalent tooling example: AzCopy/Storage Explorer “Put MD5” computes MD5 client-side and stores it to Content-MD5.) (learn.microsoft.com)


Set ContentHash on upload to ensure hash persistence across all file sizes.

Azure's UploadAsync uses different strategies based on file size: single Put Blob for small uploads (auto-computes MD5) and Put Block / Put Block List for larger uploads (does not auto-persist MD5). Since the current code omits ContentHash, the hash comparison skip logic will only work for small files and fail inconsistently for large files.

Move the hash calculation before the existence check and pass it via BlobUploadOptions.HttpHeaders.ContentHash to ensure it's always persisted:

🔧 Suggested fix
```diff
-                bool shouldUpload = true;
-                if (await blobClient.ExistsAsync())
-                {
-                    var properties = await blobClient.GetPropertiesAsync();
-                    var remoteHash = properties.Value.ContentHash;
-
-                    // Calculate local file hash
-                    byte[] localHash;
-                    await using (var stream = File.OpenRead(filePath))
-                    using (var md5 = MD5.Create())
-                    {
-                        localHash = await md5.ComputeHashAsync(stream);
-                    }
+                // Calculate local file hash (used for compare + upload headers)
+                byte[] localHash;
+                await using (var hashStream = File.OpenRead(filePath))
+                using (var md5 = MD5.Create())
+                {
+                    localHash = await md5.ComputeHashAsync(hashStream);
+                }
+
+                bool shouldUpload = true;
+                if (await blobClient.ExistsAsync())
+                {
+                    var properties = await blobClient.GetPropertiesAsync();
+                    var remoteHash = properties.Value.ContentHash;

                     // Compare hashes
                     if (remoteHash != null && localHash.SequenceEqual(remoteHash))
                     {
                         Log.Information("Skipping {File} (unchanged, hash: {Hash})", fileName, Convert.ToHexString(localHash)[..8]);
                         shouldUpload = false;
                         result.SkippedBlobs.Add(blobClient.Uri.ToString());
                     }
                 }

                 if (shouldUpload)
                 {
                     Log.Information("Uploading: {File} -> {Container}/{Blob}", fileName, options.ContainerName, blobName);
-
-                    await using var stream = File.OpenRead(filePath);
-                    await blobClient.UploadAsync(stream, overwrite: true);
+                    await using var stream = File.OpenRead(filePath);
+                    var uploadOptions = new BlobUploadOptions
+                    {
+                        HttpHeaders = new BlobHttpHeaders { ContentHash = localHash }
+                    };
+                    await blobClient.UploadAsync(stream, uploadOptions);
                     var blobUrl = blobClient.Uri.ToString();
                     result.UploadedBlobs.Add(blobUrl);
                     Log.Information("Upload successful: {Url}", blobUrl);
                 }
```

Comment on lines +1 to +128
# Add channel support to indexJSON generation

## Overview

Add channel (release channel) support to the HagiCode Desktop Nuke build system, so that the generated indexJSON includes a `channels` field distinguishing versions across release channels (such as beta, stable, canary).

## Background

The Nuke build system of the HagiCode Desktop project currently has no channel support when generating indexJSON. The reference file `/home/newbe36524/repos/newbe36524/pcode/artifacts/azure-index.json` shows the channel-aware data structure.

**Important**: this proposal concerns the build system that **generates** indexJSON, not the application logic that **reads** it (the latter is already implemented).

## Problem

### Analysis of the current implementation

**GenerateIndexOnlyAsync** (AzureBlobAdapter.cs:108-143):
- Generates a simple index structure with only `version`, `channel`, `createdAt`, and `files`
- The `channel` field is a single string value, not an object structure
- Cannot support multi-channel version management

**GenerateIndexFromBlobsAsync** (AzureBlobAdapter.cs:278-361):
- Lists all files from Azure Blob Storage
- Groups them by version prefix (e.g. "0.1.0-beta.11")
- Generates a `versions` array containing version and file information
- **Missing** the `channels` object structure

### Expected data structure

```json
{
  "updatedAt": "2026-02-15T05:45:05.2931068Z",
  "versions": [
    {
      "version": "0.1.0-beta.11",
      "files": ["hagicode-0.1.0-beta.11-linux-x64-nort.zip", ...],
      "assets": [...]
    }
  ],
  "channels": {
    "beta": {
      "latest": "0.1.0-beta.11",
      "versions": ["0.1.0-beta.11", "0.1.0-beta.10", ...]
    },
    "stable": {
      "latest": "1.0.0",
      "versions": ["1.0.0", "0.9.0", ...]
    }
  }
}
```
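Given this structure, a consumer can resolve a channel's newest build in one lookup while staying tolerant of older indexes that lack a channels object. A minimal TypeScript sketch (types inferred from the JSON above; not the project's actual http-index-source.ts code):

```typescript
// Shapes inferred from the expected index.json structure; field names assumed.
interface ChannelInfo { latest: string; versions: string[]; }
interface IndexJson {
  updatedAt: string;
  versions: { version: string; files: string[] }[];
  channels?: Record<string, ChannelInfo>; // optional for backward compatibility
}

// Resolve the newest version for a channel, falling back to the flat
// versions array when the channels object is absent (pre-channels indexes).
function latestForChannel(index: IndexJson, channel: string): string | undefined {
  const info = index.channels?.[channel];
  if (info) return info.latest;
  return index.versions[0]?.version;
}

const index: IndexJson = {
  updatedAt: "2026-02-15T05:45:05Z",
  versions: [{ version: "0.1.0-beta.11", files: [] }],
  channels: { beta: { latest: "0.1.0-beta.11", versions: ["0.1.0-beta.11"] } },
};
console.log(latestForChannel(index, "beta")); // "0.1.0-beta.11"
```

The optional `channels` property is what makes the addition backward compatible: older clients ignore it, and newer clients degrade gracefully when it is missing.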

## Solution

### Core strategy

1. **Version-to-channel mapping**: determine the channel (e.g. beta, stable, canary) from the version string
2. **channels object generation**: create an object with `latest` and `versions` for each channel
3. **Build configuration extension**: add channel definitions and mapping rules to the Nuke configuration

### Implementation points

1. **Version parsing logic**:
   - Extract the channel identifier from the version string (e.g. "0.1.0-beta.11")
   - Support common channel naming conventions: beta, stable, canary, alpha, dev
   - Allow custom channel mapping rules

2. **channels object construction**:
   - Group all versions by channel
   - Select the latest version per channel (based on semantic version ordering)
   - Generate a versions array containing all versions in that channel

3. **Nuke configuration parameters**:
   - Reuse the existing `ReleaseChannel` parameter (Build.cs:59)
   - Add optional channel mapping configuration
   - Support generating multiple channels at once

4. **Backward compatibility**:
   - Preserve the existing index structure (updatedAt, versions)
   - Add the channels field as an optional addition
   - Ensure older clients continue to work
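The parsing and grouping points above can be sketched as follows. This TypeScript version is an assumption about the shape of the logic, not the actual C# helpers (ExtractChannelFromVersion and GroupVersionsByChannel in AzureBlobAdapter.cs), and both the prerelease heuristics and the semver comparison are deliberately simplified:

```typescript
// Sketch of version-to-channel classification; heuristics assumed.
function extractChannel(version: string, custom?: Map<string, string>): string {
  const m = version.match(/-([a-zA-Z]+)/); // prerelease tag, e.g. "beta" in 0.1.0-beta.11
  const tag = m ? m[1].toLowerCase() : "";
  if (custom?.has(tag)) return custom.get(tag)!;
  if (["beta", "alpha", "canary", "dev"].includes(tag)) return tag;
  return "stable"; // no recognized prerelease tag -> stable
}

// Naive numeric comparison for picking "latest"; a real implementation
// would use full SemVer precedence rules.
function compareSemver(a: string, b: string): number {
  const num = (v: string) => v.split(/[-.]/).map(p => parseInt(p, 10) || 0);
  const [pa, pb] = [num(a), num(b)];
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    if ((pa[i] ?? 0) !== (pb[i] ?? 0)) return (pa[i] ?? 0) - (pb[i] ?? 0);
  }
  return 0;
}

function groupByChannel(versions: string[]): Record<string, { latest: string; versions: string[] }> {
  const channels: Record<string, { latest: string; versions: string[] }> = {};
  for (const v of versions) {
    const ch = extractChannel(v);
    (channels[ch] ??= { latest: v, versions: [] }).versions.push(v);
  }
  for (const info of Object.values(channels)) {
    info.versions.sort((a, b) => compareSemver(b, a)); // newest first
    info.latest = info.versions[0];
  }
  return channels;
}

const grouped = groupByChannel(["0.1.0-beta.10", "0.1.0-beta.11", "1.0.0"]);
console.log(grouped.beta.latest);   // "0.1.0-beta.11"
console.log(grouped.stable.latest); // "1.0.0"
```

Layering the custom mapping on top of built-in conventions, as in `extractChannel`, keeps the default behavior predictable while still allowing the ChannelMapping parameter to override it.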

## Scope of impact

### Files that need changes

- **nukeBuild/Adapters/AzureBlobAdapter.cs**
  - `GenerateIndexFromBlobsAsync`: add channels object generation logic
  - New version-to-channel mapping method
  - New channel object construction method

- **nukeBuild/Build.cs**
  - Use the existing `ReleaseChannel` parameter
  - May need additional channel configuration parameters

- **nukeBuild/Build.AzureStorage.cs**
  - Update logging and validation logic to support channels

### Files that do not need changes

- **src/main/package-sources/http-index-source.ts**: channel parsing already implemented
- **src/main/version-manager.ts**: the Version interface already includes a channel field
- **src/renderer/** frontend files: use existing logic

## Implementation plan

See `tasks.md`.

## Success criteria

1. indexJSON contains a correct `channels` object
2. Each channel includes a `latest` version and a `versions` array
3. Version-to-channel mapping follows the expected rules
4. The generated indexJSON matches the azure-index.json reference format
5. Backward compatibility: clients still work when channels is absent

## Risks and mitigations

### Risks

- Inconsistent version naming may cause versions to be classified into the wrong channel
- Increased version-management complexity when releasing to multiple channels simultaneously

### Mitigations

- Provide explicit version-to-channel mapping rules
- Support custom channel mapping configuration
- Thoroughly test a range of version naming patterns

⚠️ Potential issue | 🟡 Minor

Reformat proposal to required section structure.

Please reorganize this doc into: Why, What Changes, UI Design Changes (if applicable), Code Flow Changes (if applicable), and Impact.

Based on learnings: "Applies to openspec/changes/*/proposal.md : Structure proposal.md with Why, What Changes, UI Design Changes (if applicable), Code Flow Changes (if applicable), and Impact sections".

