Conversation
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough

Major v2.0.0 migration: bumped module path to pkg.akt.dev/node/v2; added Oracle, Epochs, BME, and Wasm modules; CosmWasm contracts (Wormhole/Pyth); upgrade/migration logic; expanded tests; and CI/build changes for the wasm toolchain.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant PF as Price Feeder
    participant PC as Pyth Contract
    participant VM as Wormhole Contract
    participant OM as Oracle Module
    participant Chain
    PF->>PF: fetch external price
    PF->>PC: submit VAA (UpdatePriceFeed)
    PC->>VM: query VerifyVAA
    VM->>VM: parse & verify VAA (guardian sigs)
    VM-->>PC: parsed VAA (valid)
    PC->>PC: extract price, validate
    PC->>OM: MsgAddPriceEntry
    OM->>Chain: persist price & emit event
    Chain-->>PF: block confirmation
```

```mermaid
sequenceDiagram
    participant UM as Upgrade Handler
    participant WC as Contract Store
    participant MM as Migrator
    participant OC as Oracle Module
    participant Chain
    UM->>WC: storeAndInstantiateWormhole()
    WC->>Chain: StoreCode(wormhole.wasm)
    WC->>Chain: InstantiateContract(wormhole)
    WC-->>UM: wormhole_address
    UM->>WC: storeAndInstantiatePyth(wormhole_address)
    WC->>Chain: StoreCode(pyth.wasm)
    WC->>Chain: InstantiateContract(pyth)
    WC-->>UM: pyth_address
    UM->>OC: RegisterOracleSource(pyth_address)
    UM->>MM: RunMigrations (denom & module)
    MM->>Chain: migrate KV/state
    UM->>Chain: Emit EventUpgradeComplete
```
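The store-then-instantiate ordering in the upgrade diagram can be sketched with stand-in types. Everything below (`chain`, `storeAndInstantiate`, the `contract-N` address format) is illustrative only, not the real wasmd keeper API:

```go
package main

import "fmt"

// chain is a toy in-memory stand-in for the wasm code/contract stores.
type chain struct {
	codes     map[uint64]string
	contracts map[string]uint64
	nextCode  uint64
}

// storeCode mimics StoreCode: it assigns the next code ID to the blob.
func (c *chain) storeCode(name string) uint64 {
	c.nextCode++
	c.codes[c.nextCode] = name
	return c.nextCode
}

// instantiate mimics InstantiateContract: it derives an address from the code ID.
func (c *chain) instantiate(codeID uint64) string {
	addr := fmt.Sprintf("contract-%d", codeID)
	c.contracts[addr] = codeID
	return addr
}

// storeAndInstantiate mirrors the two-step flow in the diagram:
// StoreCode, then InstantiateContract, returning the new address.
func storeAndInstantiate(c *chain, wasmName string) string {
	id := c.storeCode(wasmName)
	return c.instantiate(id)
}

func main() {
	c := &chain{codes: map[uint64]string{}, contracts: map[string]uint64{}}
	wormhole := storeAndInstantiate(c, "wormhole.wasm")
	// Pyth is instantiated second because it needs the wormhole address
	// (for VAA verification) in its init message.
	pyth := storeAndInstantiate(c, "pyth.wasm")
	fmt.Println(wormhole, pyth) // contract-1 contract-2
}
```

The key property the sketch preserves is the dependency order: the Pyth contract cannot be instantiated until the Wormhole address exists.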
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Actionable comments posted: 10
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
x/market/client/rest/rest.go (1)
13-138: ⚠️ Potential issue | 🟠 Major

Remove the REST endpoint file entirely—it is completely unused dead code.

Verification confirms that `x/market/client/rest/rest.go` is not imported or referenced anywhere in the codebase. The `RegisterRoutes` function and all handler implementations are never called. The market module has migrated to gRPC endpoints via `RegisterGRPCGatewayRoutes` and no longer needs REST API support. Files with only commented-out code and zero functionality should be deleted rather than maintained.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/market/client/rest/rest.go` around lines 13 - 138, This file contains dead REST endpoints (RegisterRoutes, listOrdersHandler, listBidsHandler, listLeasesHandler, getOrderHandler, getBidHandler, getLeaseHandler) with only commented-out code and is not referenced anywhere; delete x/market/client/rest/rest.go entirely and remove any leftover references to these symbols if present so the codebase only uses the gRPC gateway (RegisterGRPCGatewayRoutes).

x/take/module.go (1)

109-112: ⚠️ Potential issue | 🟡 Minor

Comment references "provider" instead of "take".
The comment on line 109 says "provider module name" but this is the take module's Name function.
Proposed fix
```diff
-// Name returns the provider module name
+// Name returns the take module name
 func (AppModule) Name() string { return types.ModuleName }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/take/module.go` around lines 109 - 112, The comment above AppModule.Name wrongly says "provider module name"; update the doc comment to refer to the take module instead. Locate the AppModule.Name method and change the comment from something like "Name returns the provider module name" to "Name returns the take module name" (or similar accurate wording) so the comment matches the function (AppModule.Name) and the module identity (types.ModuleName).

x/cert/module.go (2)

86-91: ⚠️ Potential issue | 🟡 Minor

Inconsistent error message references "provider" instead of "cert".
The panic message on line 89 still references "provider grpc routes" but this is the cert module.
Proposed fix
```diff
 func (AppModuleBasic) RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux) {
 	err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx))
 	if err != nil {
-		panic(fmt.Sprintf("couldn't register provider grpc routes: %s", err.Error()))
+		panic(fmt.Sprintf("couldn't register cert grpc routes: %s", err.Error()))
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/cert/module.go` around lines 86 - 91, In AppModuleBasic.RegisterGRPCGatewayRoutes, the panic message wrongly references "provider grpc routes"; update the error message to correctly reference the cert module (e.g., "couldn't register cert grpc routes") when types.RegisterQueryHandlerClient returns an error, so the panic includes the correct module name and the error details; ensure you keep err.Error() or use fmt.Errorf to include err when calling panic.
152-154: ⚠️ Potential issue | 🟡 Minor

Comment references "audit" instead of "cert".
The comment on line 152 says "audit module" but this is the cert module's ExportGenesis function.
Proposed fix
```diff
-// ExportGenesis returns the exported genesis state as raw bytes for the audit
-// module.
+// ExportGenesis returns the exported genesis state as raw bytes for the cert
+// module.
 func (am AppModule) ExportGenesis(ctx sdk.Context, cdc codec.JSONCodec) json.RawMessage {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/cert/module.go` around lines 152 - 154, The function comment above AppModule.ExportGenesis incorrectly refers to the "audit module"; update the comment text to reference the correct module name ("cert module") and/or remove the wrong module name so the docstring accurately describes AppModule.ExportGenesis for the cert module.

app/upgrades.go (1)

37-37: ⚠️ Potential issue | 🟡 Minor

Typo: "unitialize" should be "initialize".
Minor typo in the error message.
📝 Proposed fix
```diff
-	return fmt.Errorf("unable to unitialize upgrade `%s`: %w", name, err)
+	return fmt.Errorf("unable to initialize upgrade `%s`: %w", name, err)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/upgrades.go` at line 37, The error message in the return fmt.Errorf call contains a typo: change the string "unable to unitialize upgrade `%s`: %w" to "unable to initialize upgrade `%s`: %w" in the code that returns fmt.Errorf (the call using name and err) so the error reads correctly; update the literal in upgrades.go where that fmt.Errorf is constructed.

x/market/keeper/keeper.go (1)

559-565: ⚠️ Potential issue | 🟡 Minor

Duplicate `defer` for iterator close.

There are two identical `defer` statements to close the same iterator. The second one (lines 563-565) is redundant and should be removed.

🧹 Proposed fix
```diff
 func (k Keeper) WithBids(ctx sdk.Context, fn func(types.Bid) bool) {
 	store := ctx.KVStore(k.skey)
 	iter := storetypes.KVStorePrefixIterator(store, keys.BidPrefix)
 	defer func() {
 		_ = iter.Close()
 	}()
-	defer func() {
-		_ = iter.Close()
-	}()
-
 	for ; iter.Valid(); iter.Next() {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/market/keeper/keeper.go` around lines 559 - 565, Remove the redundant duplicate defer that calls iter.Close(): there are two identical defer blocks both calling _ = iter.Close(), so delete the second defer (the later one) and keep a single defer that closes iter to avoid double-declaring the same cleanup; locate the duplicate around the iterator handling in keeper.go (the defer wrapping _ = iter.Close()) and remove the extra block.

x/escrow/handler/handler.go (1)

12-13: ⚠️ Potential issue | 🟡 Minor

Incorrect doc comment: refers to "deployment" instead of "escrow".
The comment says `"deployment" type messages` but this handler is for escrow messages.

📝 Proposed fix

```diff
-// NewHandler returns a handler for "deployment" type messages
+// NewHandler returns a handler for "escrow" type messages
 func NewHandler(keeper keeper.Keeper, authzKeeper AuthzKeeper, bkeeper BankKeeper) baseapp.MsgServiceHandler {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/escrow/handler/handler.go` around lines 12 - 13, Update the doc comment for NewHandler to accurately describe the handler's purpose for escrow messages (not "deployment"); locate the NewHandler function signature (NewHandler(keeper keeper.Keeper, authzKeeper AuthzKeeper, bkeeper BankKeeper) baseapp.MsgServiceHandler) and change the comment to state it returns a handler for "escrow" type messages (or more precise module-specific wording) so the documentation matches the implementation.

x/audit/module.go (2)

94-100: ⚠️ Potential issue | 🟡 Minor

Incorrect module name in doc comment.
The doc comment on line 94 says "provider module" but this method is for the audit module. The panic message on line 98 correctly says "audit", so only the doc comment needs updating.
Proposed fix
```diff
-// RegisterGRPCRoutes registers the gRPC Gateway routes for the provider module.
+// RegisterGRPCRoutes registers the gRPC Gateway routes for the audit module.
 func (AppModuleBasic) RegisterGRPCRoutes(clientCtx client.Context, mux *runtime.ServeMux) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/audit/module.go` around lines 94 - 100, Update the doc comment for the RegisterGRPCRoutes method on AppModuleBasic to correctly refer to the audit module (not the provider module); locate the RegisterGRPCRoutes function and replace "provider module" in its top comment with "audit module" so the comment matches the implementation and existing panic message.
86-92: ⚠️ Potential issue | 🟡 Minor

Incorrect module name in panic message.
The panic message on line 90 says "couldn't register provider grpc routes" but this is the audit module. This appears to be a copy-paste error.
Proposed fix
```diff
 func (AppModuleBasic) RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux) {
 	err := types.RegisterQueryHandlerClient(context.Background(), mux, types.NewQueryClient(clientCtx))
 	if err != nil {
-		panic(fmt.Sprintf("couldn't register provider grpc routes: %s", err.Error()))
+		panic(fmt.Sprintf("couldn't register audit grpc routes: %s", err.Error()))
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/audit/module.go` around lines 86 - 92, The panic in AppModuleBasic.RegisterGRPCGatewayRoutes uses the wrong module name ("provider") — update the panic message to reference the audit module; specifically, in the RegisterGRPCGatewayRoutes function where types.RegisterQueryHandlerClient is called and the error is handled, change the fmt.Sprintf message from "couldn't register provider grpc routes: %s" to something like "couldn't register audit grpc routes: %s" so it correctly identifies the audit module.

x/escrow/module.go (1)

86-91: ⚠️ Potential issue | 🟡 Minor

Inconsistent error message: still references "provider" instead of "escrow".
The comment on line 86 was updated to reference the "escrow module", but the error message on line 90 still says "provider grpc routes". This should be updated for consistency.
Proposed fix
```diff
 // RegisterGRPCGatewayRoutes registers the gRPC Gateway routes for the escrow module.
 func (AppModuleBasic) RegisterGRPCGatewayRoutes(clientCtx client.Context, mux *runtime.ServeMux) {
 	err := v1.RegisterQueryHandlerClient(context.Background(), mux, v1.NewQueryClient(clientCtx))
 	if err != nil {
-		panic(fmt.Sprintf("couldn't register provider grpc routes: %s", err.Error()))
+		panic(fmt.Sprintf("couldn't register escrow grpc routes: %s", err.Error()))
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/escrow/module.go` around lines 86 - 91, The panic message in AppModuleBasic.RegisterGRPCGatewayRoutes still says "couldn't register provider grpc routes" which is inconsistent with the escrow module; update the error string to reference "escrow" (e.g., "couldn't register escrow grpc routes" or "couldn't register escrow module grpc routes") so the message matches the function and comment in RegisterGRPCGatewayRoutes.

app/modules.go (1)

228-234: ⚠️ Potential issue | 🟡 Minor

Duplicate `authzmodule.NewAppModule` registration in simulation modules.

The `authzmodule.NewAppModule` appears twice in `appSimModules` (lines 228-234 and 248-254). This will register the same module twice, which could cause issues during simulation.

🛠️ Proposed fix

Remove the duplicate registration at lines 248-254:

```diff
 		app.interfaceRegistry,
 	),
-	authzmodule.NewAppModule(
-		app.cdc,
-		app.Keepers.Cosmos.Authz,
-		app.Keepers.Cosmos.Acct,
-		app.Keepers.Cosmos.Bank,
-		app.interfaceRegistry,
-	),
 	gov.NewAppModule(
```

Also applies to: 248-254
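To see why a duplicated entry matters, here is a minimal stand-in (the `simModule` type and `initGenesisAll` helper are hypothetical, not the SDK's module manager): the manager walks the module list in order, so a module listed twice is visited twice.

```go
package main

import "fmt"

// simModule is a toy stand-in for an entry in a simulation module list.
type simModule struct{ name string }

// initGenesisAll counts how many times each module would be initialized;
// the real module manager would run InitGenesis at each visit.
func initGenesisAll(mods []simModule) map[string]int {
	calls := map[string]int{}
	for _, m := range mods {
		calls[m.name]++
	}
	return calls
}

func main() {
	mods := []simModule{{"bank"}, {"authz"}, {"authz"}, {"gov"}} // duplicate authz
	fmt.Println(initGenesisAll(mods)["authz"]) // 2: authz visited twice
}
```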
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/modules.go` around lines 228 - 234, There is a duplicate registration of authzmodule.NewAppModule inside the appSimModules list; remove the redundant call so the simulation modules include only one authzmodule.NewAppModule. Locate the two occurrences of authzmodule.NewAppModule that pass app.cdc, app.Keepers.Cosmos.Authz, app.Keepers.Cosmos.Acct, app.Keepers.Cosmos.Bank, app.interfaceRegistry and delete the second/duplicate entry (keep the first), ensuring appSimModules no longer registers the same module twice.
🟡 Minor comments (12)
x/wasm/handler/server.go-20-38 (1)
20-38: ⚠️ Potential issue | 🟡 Minor

Fix the constructor comment; the nil-guard pattern is inconsistent across handler implementations.

The comment incorrectly references "staking" instead of "wasm". While a nil guard for `req` is defensible, none of the `UpdateParams` implementations across the codebase (x/wasm, x/take, x/oracle, x/market, x/deployment) include one in handler/server.go files. Nil/empty validation is consistently applied only in query handlers (keeper/grpc_query.go), suggesting a deliberate architectural separation. This may warrant module-wide review if the pattern should change.

✅ Minimal fix (comment only)

```diff
-// NewMsgServerImpl returns an implementation of the akash staking MsgServer interface
+// NewMsgServerImpl returns an implementation of the akash wasm MsgServer interface
 // for the provided Keeper.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/wasm/handler/server.go` around lines 20 - 38, Update the top-of-file constructor comment for NewMsgServerImpl to reference the wasm module (replace "staking" with "wasm") so it correctly describes the returned MsgServer; do not add or remove nil-request guards in UpdateParams (leave the existing logic in msgServer.UpdateParams as-is) since nil/empty validation is handled elsewhere.

x/oracle/handler/server.go (1)

20-38: ⚠️ Potential issue | 🟡 Minor

Fix the constructor comment and add a nil request guard.

The comment incorrectly references "staking" instead of "oracle", and `UpdateParams` can panic if `req` is nil.

Suggested fix

```diff
 import (
 	"context"

 	sdk "github.com/cosmos/cosmos-sdk/types"
 	govtypes "github.com/cosmos/cosmos-sdk/x/gov/types"
+	"google.golang.org/grpc/codes"
+	"google.golang.org/grpc/status"

 	types "pkg.akt.dev/go/node/oracle/v1"

 	"pkg.akt.dev/node/v2/x/oracle/keeper"
 )

-// NewMsgServerImpl returns an implementation of the akash staking MsgServer interface
+// NewMsgServerImpl returns an implementation of the akash oracle MsgServer interface
 // for the provided Keeper.
 func NewMsgServerImpl(k keeper.Keeper) types.MsgServer {
 	return &msgServer{
 		keeper: k,
 	}
 }

 func (ms msgServer) UpdateParams(goCtx context.Context, req *types.MsgUpdateParams) (*types.MsgUpdateParamsResponse, error) {
+	if req == nil {
+		return nil, status.Error(codes.InvalidArgument, "invalid request")
+	}
 	if ms.keeper.GetAuthority() != req.Authority {
 		return nil, govtypes.ErrInvalidSigner.Wrapf("invalid authority; expected %s, got %s", ms.keeper.GetAuthority(), req.Authority)
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/handler/server.go` around lines 20 - 38, Update the NewMsgServerImpl comment to reference the oracle MsgServer (not staking) and add a nil-request guard at the start of UpdateParams: check if req == nil and return an appropriate error (e.g., sdkerrors.Wrap or fmt.Errorf) before accessing req.Authority or req.Params; keep the authority check using ms.keeper.GetAuthority() and existing SetParams call on msgServer/UpdateParams/MsgUpdateParams to avoid panics when callers pass nil.

x/oracle/keeper/keeper.go (1)

23-83: ⚠️ Potential issue | 🟡 Minor

Fix stale comments referencing other modules.

The comments mention “deployment store,” “x/mint,” and “x/take” in the oracle keeper. Please align them with the oracle module to avoid confusion.

✏️ Example cleanup

```diff
-// Keeper of the deployment store
+// Keeper of the oracle store
 ...
-// NewKeeper creates and returns an instance of take keeper
+// NewKeeper creates and returns an instance of oracle keeper
 ...
-// GetAuthority returns the x/mint module's authority.
+// GetAuthority returns the oracle module's authority.
 ...
-// SetParams sets the x/take module parameters.
+// SetParams sets the oracle module parameters.
 ...
-// GetParams returns the current x/take module parameters.
+// GetParams returns the current oracle module parameters.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/keeper.go` around lines 23 - 83, Update stale comments and docstrings in the keeper code to reference the oracle module instead of other modules: change the top-line comment for the keeper struct from "Keeper of the deployment store" to something like "Keeper of the oracle store", update the NewKeeper comment to "creates and returns an instance of oracle keeper", change the GetAuthority comment to mention the oracle module (or the correct module account) rather than "x/mint", and rename any "x/take" mentions in SetParams/GetParams comments to "x/oracle" (or simply "oracle module parameters"); ensure comments adjacent to functions Codec, StoreKey, NewQuerier, SetParams, GetParams and the keeper struct are consistent with the oracle module terminology.

_run/.envrc (1)

3-9: ⚠️ Potential issue | 🟡 Minor

Error messages don't reset terminal color.
The error messages set the terminal color to red but don't reset it afterward, which may cause subsequent terminal output to remain red.
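A small demonstration of the difference (the messages and variable names below are hypothetical, not the actual `.envrc` contents): only the variant that appends `\033[0m` carries its own reset, so later output returns to the default color.

```shell
# A red message without a trailing reset leaves the color attribute set;
# appending \033[0m (equivalent to `tput sgr0`) restores terminal defaults.
msg_no_reset="$(printf '\033[31m%s' 'grpcurl is not installed')"
msg_reset="$(printf '\033[31m%s\033[0m' 'grpcurl is not installed')"

# Check which message ends with the ESC[0m reset sequence.
reset_seq="$(printf '\033[0m')"
case "$msg_reset" in
  *"$reset_seq") echo "reset present" ;;   # this branch fires
  *)             echo "reset missing" ;;
esac
```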
🔧 Proposed fix to reset terminal color
```diff
 if ! has grpcurl ; then
-	echo -e "\033[31mgrpcurl is not installed"; exit 1
+	echo -e "\033[31mgrpcurl is not installed\033[0m"; exit 1
 fi

 if ! has tqdm ; then
-	echo -e "\033[31mtqdm is not installed. https://github.com/tqdm/tqdm"; exit 1
+	echo -e "\033[31mtqdm is not installed. https://github.com/tqdm/tqdm\033[0m"; exit 1
 fi
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/.envrc` around lines 3 - 9, The error messages that set red color via "\033[31m" in the checks for "has grpcurl" and "has tqdm" don't reset the terminal color; update both echo statements used in the blocks that test "has grpcurl" and "has tqdm" so they append a reset sequence (e.g., "\033[0m" or use "tput sgr0") after the message to restore the terminal color before exiting, keeping the same wording and exit behavior.

x/wasm/genesis.go (1)

20-21: ⚠️ Potential issue | 🟡 Minor

Fix comment: references "deployment" instead of "wasm".
The comment incorrectly references the "deployment module" - this should be "wasm module".
Proposed fix
```diff
-// DefaultGenesisState returns default genesis state as raw bytes for the deployment
-// module.
+// DefaultGenesisState returns default genesis state as raw bytes for the wasm
+// module.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/wasm/genesis.go` around lines 20 - 21, Update the top-of-file comment for DefaultGenesisState to reference the wasm module instead of the deployment module: locate the comment above the DefaultGenesisState function in genesis.go and replace "deployment module" with "wasm module" so the docstring correctly reads that it returns the default genesis state for the wasm module.

x/wasm/genesis.go (1)

44-45: ⚠️ Potential issue | 🟡 Minor

Fix comment: references "deployment" instead of "wasm".
The comment incorrectly references the "deployment module" - this should be "wasm module".
Proposed fix
```diff
-// ExportGenesis returns genesis state for the deployment module
+// ExportGenesis returns genesis state for the wasm module
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/wasm/genesis.go` around lines 44 - 45, The top-level comment above ExportGenesis incorrectly says "deployment module" instead of "wasm module"; update the comment for the ExportGenesis function to reference the wasm module (e.g., change "deployment module" to "wasm module") so the documentation matches the package purpose and the function ExportGenesis.

x/wasm/simulation/genesis.go (1)

9-16: ⚠️ Potential issue | 🟡 Minor

Fix copy-paste artifacts: variable name and comment reference wrong modules.
The variable `takeGenesis` and the comment "for supply" appear to be copied from another module. These should reference the wasm module for clarity.

Proposed fix

```diff
-// RandomizedGenState generates a random GenesisState for supply
+// RandomizedGenState generates a random GenesisState for wasm
 func RandomizedGenState(simState *module.SimulationState) {
-	takeGenesis := &types.GenesisState{
+	wasmGenesis := &types.GenesisState{
 		Params: types.DefaultParams(),
 	}
-	simState.GenState[types.ModuleName] = simState.Cdc.MustMarshalJSON(takeGenesis)
+	simState.GenState[types.ModuleName] = simState.Cdc.MustMarshalJSON(wasmGenesis)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/wasm/simulation/genesis.go` around lines 9 - 16, Update the copy-paste artifacts in RandomizedGenState: change the top comment to reference the wasm module (e.g., "RandomizedGenState generates a random GenesisState for wasm") and rename the local variable takeGenesis to a clear wasm-specific name such as wasmGenesis or genesisState; ensure the variable still holds types.GenesisState with Params: types.DefaultParams() and is marshaled into simState.GenState[types.ModuleName] as before.
_run/common-commands.mk (1)

29-47: ⚠️ Potential issue | 🟡 Minor

Exit status check captures `read` result, not the pipeline result.

On line 37, `$$?` captures the exit status of the `read` command, not the `curl | jq` pipeline. Additionally, if `jq` returns `"null"` or an empty string (e.g., when the node isn't ready), the `-lt 1` comparison may produce a shell error rather than evaluating to false.

🛠️ Proposed fix with robust validation

```diff
 akash-node-ready: SHELL=$(BASH_PATH)
 akash-node-ready:
 	@( \
 		max_retry=15; \
 		counter=0; \
 		while [[ $$counter -lt $$max_retry ]]; do \
-			read block < <(curl -s $(AKASH_NODE)/status | jq -r '.result.sync_info.latest_block_height' 2> /dev/null); \
-			if [[ $$? -ne 0 || $$block -lt 1 ]]; then \
+			block=$$(curl -s $(AKASH_NODE)/status 2>/dev/null | jq -r '.result.sync_info.latest_block_height' 2>/dev/null); \
+			if [[ -z "$$block" || "$$block" == "null" || ! "$$block" =~ ^[0-9]+$$ || "$$block" -lt 1 ]]; then \
 				echo "unable to get node status. sleep for 1s"; \
 				((counter++)); \
 				sleep 1; \
 			else \
 				echo "latest block height: $${block}"; \
 				exit 0; \
 			fi \
 		done; \
 		exit 1 \
 	)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/common-commands.mk` around lines 29 - 47, The akash-node-ready target's check uses $$? immediately after read, which only reflects read's exit status and not the curl|jq pipeline, and comparing $$block with -lt 1 can fail when jq returns empty/null; change the logic in the akash-node-ready target to capture the pipeline output into a variable (e.g., assign block with command substitution from curl ... | jq -r ...), check the pipeline exit status (use set -o pipefail or inspect curl and jq exit codes) and then validate that the block value is a non-empty integer (e.g., regex or grep '^[0-9]\+$') before doing a numeric comparison against 1; update the loop to treat non-numeric/empty values as failure and retry rather than causing a shell error.

go.mod (1)

3-3: ⚠️ Potential issue | 🟡 Minor

Update Go version to latest available.
Go 1.25.4 is valid but outdated as of February 2026. Consider upgrading to Go 1.25.7 (latest 1.25 patch) or Go 1.26.0 (latest stable).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@go.mod` at line 3, Update the Go version declaration "go 1.25.4" in the go.mod file to a current release (e.g., "go 1.25.7" or "go 1.26.0"); after changing the "go" directive, run go mod tidy and all relevant tests/builds to ensure module resolution and compatibilities are OK and address any minor breakages reported by the toolchain.

x/wasm/keeper/keeper.go (1)

27-95: ⚠️ Potential issue | 🟡 Minor

Fix copy‑paste doc comments (module names are incorrect).

The comments currently reference deployment/take/mint, but this is the wasm keeper.

✍️ Suggested comment fixes

```diff
-// Keeper of the deployment store
+// Keeper of the wasm store
 ...
-// NewKeeper creates and returns an instance of take keeper
+// NewKeeper creates and returns an instance of the wasm keeper
 ...
-// GetAuthority returns the x/mint module's authority.
+// GetAuthority returns the module authority (x/gov module account).
 ...
-// SetParams sets the x/take module parameters.
+// SetParams sets the wasm module parameters.
 ...
-// GetParams returns the current x/take module parameters.
+// GetParams returns the current wasm module parameters.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/wasm/keeper/keeper.go` around lines 27 - 95, Update the incorrect copy-paste comments to reference the wasm module and its responsibilities: change the file-level comment above type keeper to say "Keeper of the wasm module store" (or similar), update the NewKeeper comment to "NewKeeper creates and returns an instance of the wasm keeper", change the Codec comment to "Codec returns keeper codec for wasm", change StoreKey/NewQuerier comments if present to reference wasm, and update GetAuthority/SetParams/GetParams comments to mention the wasm module (e.g., "GetAuthority returns the wasm module authority", "SetParams sets the wasm module parameters", "GetParams returns the current wasm module parameters"). Ensure each comment sits immediately above the matching symbols: keeper, NewKeeper, Codec, StoreKey, NewQuerier, GetAuthority, SetParams, GetParams.

x/wasm/module.go (1)

110-181: ⚠️ Potential issue | 🟡 Minor

Fix copy‑paste comments to reference the wasm module.

These comments currently mention provider/staking/take, which is misleading.

✍️ Suggested comment fixes

```diff
-// Name returns the provider module name
+// Name returns the wasm module name
 ...
-// GenerateGenesisState creates a randomized GenState of the staking module.
+// GenerateGenesisState creates a randomized GenState of the wasm module.
 ...
-// RegisterStoreDecoder registers a decoder for take module's types.
+// RegisterStoreDecoder registers a decoder for wasm module's types.
 ...
-// WeightedOperations doesn't return any take module operation.
+// WeightedOperations doesn't return any wasm module operation.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/wasm/module.go` around lines 110 - 181, The file contains several incorrect copy‑paste comments that reference other modules (provider/staking/take) around AppModule methods; update the comments to accurately describe the wasm module behavior for functions like Name, IsOnePerModuleType, IsAppModule, QuerierRoute, RegisterServices, BeginBlock, EndBlock, InitGenesis, ExportGenesis, ConsensusVersion, GenerateGenesisState, ProposalMsgs, RegisterStoreDecoder and WeightedOperations so they mention the wasm module (e.g., "wasm module" or "this module") and correct descriptions (e.g., "BeginBlock performs no-op", "EndBlock returns no validator updates", "GenerateGenesisState creates randomized genesis for wasm module", etc.). Ensure each comment is concise and consistent with the corresponding method semantics and remove references to provider/staking/take.

x/oracle/module.go (1)

110-181: ⚠️ Potential issue | 🟡 Minor

Fix copy‑paste comments to reference the oracle module.

These comments currently mention provider/staking/take, which is misleading.

✍️ Suggested comment fixes

```diff
-// Name returns the provider module name
+// Name returns the oracle module name
 ...
-// GenerateGenesisState creates a randomized GenState of the staking module.
+// GenerateGenesisState creates a randomized GenState of the oracle module.
 ...
-// RegisterStoreDecoder registers a decoder for take module's types.
+// RegisterStoreDecoder registers a decoder for oracle module's types.
 ...
-// WeightedOperations doesn't return any take module operation.
+// WeightedOperations doesn't return any oracle module operation.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/module.go` around lines 110 - 181, Update misleading copy‑paste comments to reference the oracle module: change the comment above Name() from "provider module name" to "oracle module name", ensure comments for GenerateGenesisState, RegisterStoreDecoder, and WeightedOperations no longer mention "staking" or "take" (e.g., use "oracle module" or "oracle module's types/operations"), and scan the other comments (IsOnePerModuleType, IsAppModule, QuerierRoute, RegisterServices, BeginBlock, EndBlock, InitGenesis, ExportGenesis, ConsensusVersion, ProposalMsgs) to make sure they consistently mention the oracle module where appropriate and correct any remaining copy‑paste wording.
🧹 Nitpick comments (15)
wasmvm.go (1)
3-3: Avoid baking repo-local rpath into production binaries.
`-Wl,-rpath,${SRCDIR}/.cache/lib` hardcodes a build-cache location into the binary, which is brittle outside the repo and can break releases/containers. Consider gating this behind a build tag or dev-only build flags, or move the path wiring to the build pipeline (e.g., env/config) so production artifacts don't depend on `.cache`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wasmvm.go` at line 3, The cgo LDFLAGS line currently embeds a repo-local rpath (-Wl,-rpath,${SRCDIR}/.cache/lib) into the binary; remove or conditionalize this hardcoded rpath in the wasmvm.go cgo directive and instead supply the runtime library path via a dev-only build tag or external build flag/environment variable. Specifically, replace the direct -Wl,-rpath usage with one of: (a) move the cgo directive into a separate file with a build tag like // +build dev so production builds omit it, or (b) read an environment/config variable at build time and inject the rpath via the build pipeline (go build -ldflags or CI wrapper) so the main wasmvm.go (the cgo LDFLAGS directive) no longer bakes in ${SRCDIR}/.cache/lib. Ensure the symbol to edit is the cgo LDFLAGS comment at the top of the wasmvm.go file.

x/market/client/rest/rest.go (1)

9-9: Remove the commented import.

The commented import serves no purpose since all code that would use it is also commented out. Remove it entirely rather than leaving it as a comment.
🧹 Proposed cleanup
```diff
- // "pkg.akt.dev/node/v2/x/market/query"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/market/client/rest/rest.go` at line 9, Remove the unused commented import line "// \"pkg.akt.dev/node/v2/x/market/query\"" from rest.go; locate the import block in the file (the commented line appears near the top of x/market/client/rest/rest.go) and delete that commented import so the import section contains only active imports and no leftover commented code.

x/escrow/query/querier.go (1)
3-12: Consider removing commented-out dead code.

This entire file consists of commented-out code that appears to be a placeholder (`NewQuerier` returns `nil`). If the querier is no longer needed in v2, consider removing this file entirely rather than maintaining dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/escrow/query/querier.go` around lines 3 - 12, This file contains only commented-out imports and a stub NewQuerier function; remove the dead code by deleting the file (querier.go) entirely if the querier is not required in v2, or alternatively restore a real implementation by uncommenting and implementing NewQuerier and required imports (codec.LegacyAmino, sdk types, keeper.Keeper) if the querier should exist; prefer deletion to avoid keeping commented placeholders.

x/wasm/simulation/decoder.go (1)
1-17: Remove misleading commented-out placeholder code.

This file contains only a package declaration and commented-out code that references `types.Minter` and `types.MinterKey`, which are mint module types, not wasm types. This appears to be copy-pasted boilerplate that doesn't apply to the wasm module.

Consider either:
- Implementing the actual wasm decoder with appropriate wasm types, or
- Removing this file entirely until the decoder is needed
Leaving misleading commented code adds confusion for future maintainers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/wasm/simulation/decoder.go` around lines 1 - 17, Remove the misleading commented-out boilerplate in NewDecodeStore that references mint module symbols (types.Minter, types.MinterKey); either delete this decoder.go file entirely if no wasm-specific simulation decoder is needed, or replace the commented block with a real wasm decoder implementation using actual wasm module types and keys (update the NewDecodeStore function and any referenced symbols accordingly) so the file no longer contains cross-module placeholder code.

x/oracle/simulation/genesis.go (1)
9-15: Clarify “randomized” behavior (or actually randomize).

The function currently uses `DefaultParams()` only, so the “randomized” wording is misleading. Either randomize params or adjust the comment to reflect deterministic behavior.

💡 Possible clarification
```diff
-// RandomizedGenState generates a random GenesisState for supply
+// RandomizedGenState generates a default GenesisState for simulation.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/simulation/genesis.go` around lines 9 - 15, The function RandomizedGenState is misleading because it always uses DefaultParams(); either make it actually random or change the comment to reflect deterministic behavior. Fix option A: implement randomized params by creating a new helper (e.g., RandomParams or RandomizedParams) that uses simState.Rand to produce varied types.Params values and assign them to takeGenesis.Params before marshalling. Fix option B: if you want deterministic behavior, change the function name/comment from "RandomizedGenState" to indicate default/deterministic genesis and keep using types.DefaultParams(); update the docstring accordingly. Ensure you update references to RandomizedGenState or the params helper (types.DefaultParams, takeGenesis, simState.GenState) consistently.

x/oracle/simulation/decoder.go (1)
3-17: Remove or implement the commented-out decoder.

Leaving a fully commented implementation makes maintenance harder and can mislead readers. If this is intended, wire it in; otherwise delete the file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/simulation/decoder.go` around lines 3 - 17, The file contains a fully commented decoder; either restore and export it or delete the file. To restore: re-enable the NewDecodeStore(codec.Codec) func so it returns func(kvA, kvB kv.Pair) string, import bytes, fmt and use the passed-in cdc to MustUnmarshal kvA.Value/kvB.Value into types.Minter variables and return fmt.Sprintf("%v\n%v", minterA, minterB) when bytes.Equal(kvA.Key, types.MinterKey) (panic for default), ensuring NewDecodeStore, types.Minter, and types.MinterKey are referenced and imports cleaned. Alternatively, remove the commented block/file entirely if the decoder is not needed.

make/cosmwasm.mk (1)
1-7: Fail fast when the contract directory is missing.

A quick preflight check gives a clearer error than a Docker failure on an empty mount.
🧩 Suggested guard
```diff
 build-contract-%:
+	@test -d "$(ROOT_DIR)/contracts/$*" || (echo "contract not found: $(ROOT_DIR)/contracts/$*" >&2; exit 1)
 	mkdir -p $(AKASH_DEVCACHE)/cosmwasm/$*
 	docker run --rm -v "$(ROOT_DIR)/contracts/$*":/code \
 		-v "$(AKASH_DEVCACHE)/cosmwasm/$*":/target \
 		--mount type=volume,source=registry_cache,target=/usr/local/cargo/registry \
 		$(COSMWASM_OPTIMIZER_IMAGE)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@make/cosmwasm.mk` around lines 1 - 7, The build-contract-% target should fail fast if the contract source directory is missing; add a preflight check in the build-contract-% recipe that verifies the directory "$(ROOT_DIR)/contracts/$*" exists and is non-empty and, if not, prints a clear error and exits non-zero. Update the build-contract-% target (in cosmwasm.mk) to perform this test before mkdir and docker run so you get an informative message rather than a Docker empty-mount failure.

make/init.mk (1)
70-74: Add guard to prevent empty `WASMVM_MOD` from silently breaking version resolution.

The `WASMVM_MOD` value is used directly to extract `WASMVM_VERSION`, which is later used in download URLs in `make/setup-cache.mk`. If the grep returns empty, the resulting version variable would be malformed, causing silent failures at build time. Adding `grep -m1` and an early error check improves robustness:

Suggested guard
```diff
-WASMVM_MOD := $(shell $(GO) list -m -f '{{ .Path }}' all | grep github.com/CosmWasm/wasmvm)
-WASMVM_VERSION := $(shell $(GO) list -mod=readonly -m -f '{{ .Version }}' $(WASMVM_MOD))
+WASMVM_MOD := $(shell $(GO) list -m -f '{{ .Path }}' all | grep -m1 'github.com/CosmWasm/wasmvm')
+ifeq ($(strip $(WASMVM_MOD)),)
+$(error "wasmvm module not found in go.mod/go.work")
+endif
+WASMVM_VERSION := $(shell $(GO) list -mod=readonly -m -f '{{ .Version }}' $(WASMVM_MOD))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@make/init.mk` around lines 70 - 74, The Makefile currently assigns WASMVM_MOD and then computes WASMVM_VERSION directly, which can silently break if WASMVM_MOD is empty; update the WASMVM_MOD assignment to use grep -m1 to pick the first match and add an explicit guard right after (check if WASMVM_MOD is empty) that prints a clear error and exits non‑zero; then only compute WASMVM_VERSION when WASMVM_MOD is nonempty so downstream targets (e.g., make/setup-cache.mk) won’t get a malformed version.

tests/upgrade/workers_test.go (1)
123-134: Drop the second UnpackAny to avoid redundant decoding.
You already unpack into `sdk.AccountI` and type-assert to `sdk.ModuleAccountI`; the second UnpackAny into `macc` is unnecessary and can be removed.

♻️ Suggested cleanup
```diff
 macc, ok := acc.(sdk.ModuleAccountI)
 require.True(t, ok)
-
-err = encCfg.InterfaceRegistry.UnpackAny(qResp.Account, &macc)
-require.NoError(t, err)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/upgrade/workers_test.go` around lines 123 - 134, The second call to encCfg.InterfaceRegistry.UnpackAny(qResp.Account, &macc) is redundant because you already UnpackAny into var acc (sdk.AccountI) and then type-asserted it to macc (sdk.ModuleAccountI); remove the redundant UnpackAny line and keep the initial UnpackAny into acc, the type assertion macc,ok and the require checks so the test still validates decoding and the module account type.

make/setup-cache.mk (1)
62-80: Prevent stale wasmvm libs when `WASMVM_VERSION` changes.

The verification marker doesn’t encode the version, so a version bump can keep old libs “verified.” Consider versioning the marker (or storing a version file as a prerequisite).

♻️ One way to version the marker
```diff
-.PHONY: wasmvm-libs-verify
-wasmvm-libs-verify:
-	@$(MAKE) -s $(AKASH_DEVCACHE_LIB)/.wasmvm_verified
+WASMVM_VERIFIED := $(AKASH_DEVCACHE_LIB)/.wasmvm_verified_$(WASMVM_VERSION)
+
+.PHONY: wasmvm-libs-verify
+wasmvm-libs-verify:
+	@$(MAKE) -s $(WASMVM_VERIFIED)
-$(AKASH_DEVCACHE_LIB)/.wasmvm_verified: $(patsubst %, $(AKASH_DEVCACHE_LIB)/%,$(WASMVM_LIBS)) $(AKASH_DEVCACHE_LIB)/wasmvm_checksums.txt
+$(WASMVM_VERIFIED): $(patsubst %, $(AKASH_DEVCACHE_LIB)/%,$(WASMVM_LIBS)) $(AKASH_DEVCACHE_LIB)/wasmvm_checksums.txt
 	cd $(AKASH_DEVCACHE_LIB) && sha256sum -c --ignore-missing wasmvm_checksums.txt
 	@touch $@
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@make/setup-cache.mk` around lines 62 - 80, The verification marker .wasmvm_verified should be tied to WASMVM_VERSION so old artifacts aren't considered verified after a version bump; change the marker to include the version (e.g. use .wasmvm_verified-$(WASMVM_VERSION) or create a wasmvm_version file and make .wasmvm_verified depend on it) and update the targets that reference $(AKASH_DEVCACHE_LIB)/.wasmvm_verified (the rule for $(AKASH_DEVCACHE_LIB)/.wasmvm_verified, wasmvm-libs-verify, and wasmvm-libs) so the checksum verification and the touch use the versioned marker/filename and the new prerequisite (wasmvm_checksums.txt plus version) to force re-verification when WASMVM_VERSION changes; ensure the patsubst and touch calls reference the new marker name and that any rm -f lines clean the old versioned marker if necessary.

Makefile (1)
24-26: Duplicate variable definitions.

Several variables are defined twice:

- `GOMOD` is defined on line 24 and again on line 35 with the same default value
- `BUILD_TAGS` is defined on line 25 but unconditionally overwritten on line 41
- `GORELEASER_STRIP_FLAGS` is defined on line 26 and again on line 45

The second definitions on lines 35, 41, and 45 will override the first ones. If this is intentional (e.g., line 25 is legacy), consider removing the unused definitions to avoid confusion.
♻️ Suggested cleanup
```diff
-BUILD_TAGS ?= osusergo,netgo,hidraw,ledger
-GORELEASER_STRIP_FLAGS ?=
-
 ifeq ($(IS_MAINNET), true)
 ifeq ($(IS_PREREL), false)
 IS_STABLE := true
 endif
 endif
-GOMOD ?= readonly
-
 ifneq ($(UNAME_OS),Darwin)
```

Also applies to: 35-45
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Makefile` around lines 24 - 26, Remove the duplicate Makefile variable definitions to avoid accidental overrides: keep a single authoritative definition for GOMOD, BUILD_TAGS, and GORELEASER_STRIP_FLAGS and delete the redundant declarations (the later re-definitions that override the earlier ones); if the later values are the intended defaults, delete the earlier lines, otherwise move the intended values into the single kept declaration for each variable (GOMOD, BUILD_TAGS, GORELEASER_STRIP_FLAGS) and ensure BUILD_TAGS is not unconditionally overwritten by removing or conditionalizing the second assignment.

_run/node/Makefile (1)
1-5: Consider adding stub targets or excluding this file from minphony lint.
If checkmake runs on `_run/node/Makefile`, the missing `all`/`clean`/`test` phony targets may keep warning. Adding stubs or excluding this snippet from that rule would silence lint.
Verify each finding against the current code and only fix it if needed. In `@_run/node/Makefile` around lines 1 - 5, This Makefile triggers checkmake minphony warnings because common phony targets like all, clean, and test are missing; either add stub phony targets (e.g., .PHONY: all clean test and empty rule bodies for all, clean, test) alongside the existing clean-node, or mark this file to be skipped by the linter (e.g., update the linter exclude list) so checkmake/minphony ignores it; update the file containing the phony list or the lint config accordingly and ensure you reference the existing clean-node target when adding the stubs to avoid duplicate definitions.

x/market/simulation/operations.go (1)
229-293: Consider removing or restoring the commented-out simulation code.
`SimulateMsgCloseLease` contains ~60 lines of commented-out implementation. If this code is no longer needed, consider removing it to improve readability. If it's intended to be restored, consider tracking it with a TODO or issue reference.
Verify each finding against the current code and only fix it if needed. In `@x/market/simulation/operations.go` around lines 229 - 293, The SimulateMsgCloseLease function contains a large commented-out simulation implementation (around the MsgCloseLease flow); either remove the dead-comment block to clean up the file or restore it properly and mark it with a TODO/issue reference. If removing: delete the commented lines inside SimulateMsgCloseLease so the function only returns the NoOpMsg using v1.ModuleName and types.MsgCloseLease. If restoring: uncomment and ensure symbols used (SimulateMsgCloseLease, types.NewMsgCloseLease/MsgCloseLease, moduletestutil.MakeTestEncodingConfig, simtestutil.GenSignedMockTx, app.SimDeliver) compile and add a TODO comment linking an issue that explains why the simulation is active. Ensure no remaining large commented blocks remain in the function.

app/app.go (2)
137-143: Silent failure on type assertion for wasm options.

If `appOpts.Get("wasm")` returns a non-nil value that is not `[]wasmkeeper.Option`, the type assertion will silently fail and `wasmOpts` will remain empty. Consider logging a warning when the type doesn't match.

♻️ Proposed improvement
```diff
 var wasmOpts []wasmkeeper.Option
 if val := appOpts.Get("wasm"); val != nil {
-	if vl, valid := val.([]wasmkeeper.Option); valid {
+	if vl, valid := val.([]wasmkeeper.Option); valid {
 		wasmOpts = append(wasmOpts, vl...)
+	} else {
+		logger.Warn("wasm option provided but type assertion failed", "type", fmt.Sprintf("%T", val))
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/app.go` around lines 137 - 143, The type assertion for wasm options can silently fail leaving wasmOpts empty; update the block that calls appOpts.Get("wasm") to check the assertion boolean and when val != nil and the assertion to []wasmkeeper.Option is false, emit a warning (e.g., app.Logger().Warn or fmt.Printf) indicating the unexpected type and include the actual type via fmt.Sprintf("%T", val) and the key "wasm", otherwise append the asserted slice to wasmOpts as before; ensure you reference wasmOpts, appOpts.Get("wasm"), and wasmkeeper.Option so reviewers can locate the change.
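A self-contained sketch of the guarded retrieval pattern this comment asks for; `Option`, `getWasmOpts`, and the `warn` callback are hypothetical stand-ins for `wasmkeeper.Option`, the `appOpts.Get("wasm")` result, and the app logger:

```go
package main

import "fmt"

// Option is a hypothetical stand-in for wasmkeeper.Option.
type Option struct{ name string }

// getWasmOpts asserts the type of a raw option value and surfaces a
// warning instead of silently dropping a mistyped one.
func getWasmOpts(val interface{}, warn func(string)) []Option {
	if val == nil {
		return nil
	}
	opts, ok := val.([]Option)
	if !ok {
		warn(fmt.Sprintf("wasm option provided but type assertion failed: %T", val))
		return nil
	}
	return opts
}

func main() {
	var warned string
	warn := func(msg string) { warned = msg }

	good := getWasmOpts([]Option{{name: "a"}}, warn)
	fmt.Println(len(good), warned == "") // valid slice passes through

	getWasmOpts("not-a-slice", warn)
	fmt.Println(warned != "") // mistyped value now produces a warning
}
```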
158-170: Config overrides after `ReadNodeConfig` may surprise operators.

The hardcoded values for `MemoryCacheSize`, `SmartQueryGasLimit`, and `ContractDebugMode` are applied after reading the node config, effectively overriding any custom settings from the config file. This is likely intentional for security, but consider documenting this behavior or adding a log message.

The values themselves are reasonable:
- 100 MB memory cache is conservative
- 3M gas limit for queries is appropriate
- Debug mode disabled is correct for production
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/app.go` around lines 158 - 170, The code currently unconditionally overrides values from wasm.ReadNodeConfig by setting wasmConfig.MemoryCacheSize, wasmConfig.SmartQueryGasLimit, and wasmConfig.ContractDebugMode after loading the config; change this to only apply these hardcoded defaults when the loaded config has zero/unspecified values (e.g., if wasmConfig.MemoryCacheSize == 0 then set to 100, etc.), and add a concise log message noting when you override each field (use the same variables and functions: wasm.ReadNodeConfig, wasmConfig.MemoryCacheSize, wasmConfig.SmartQueryGasLimit, wasmConfig.ContractDebugMode) so operators are not surprised by silent overrides.
app/types/app.go
Outdated
```go
wOpts := make([]wasmkeeper.Option, 0, len(wasmOpts)+1)

wOpts = append(wOpts, wasmkeeper.WithMessageHandlerDecorator(
	app.Keepers.Akash.Wasm.NewMsgFilterDecorator(),
))

wOpts = append(wOpts, wasmOpts...)

// The last arguments can contain custom message handlers and custom query handlers
// if we want to allow any custom callbacks
wasmCapabilities := wasmkeeper.BuiltInCapabilities()
wasmCapabilities = append(wasmCapabilities, "akash")

wasmKeeper := wasmkeeper.NewKeeper(
	cdc,
	runtime.NewKVStoreService(app.keys[wasmtypes.StoreKey]),
	app.Keepers.Cosmos.Acct,
	app.Keepers.Cosmos.Bank,
	*app.Keepers.Cosmos.Staking,
	distrkeeper.NewQuerier(app.Keepers.Cosmos.Distr),
	app.Keepers.Cosmos.IBC.ChannelKeeper,
	app.Keepers.Cosmos.IBC.ChannelKeeper,
	app.Keepers.Cosmos.IBC.ChannelKeeperV2,
	app.Keepers.Cosmos.Transfer,
	bApp.MsgServiceRouter(),
	bApp.GRPCQueryRouter(),
	wasmDir,
	wasmConfig,
	wasmtypes.VMConfig{},
	wasmCapabilities,
	authtypes.NewModuleAddress(govtypes.ModuleName).String(),
	wOpts...,
)
app.Keepers.Cosmos.Wasm = &wasmKeeper
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, let's find the go.mod file and check wasmd version
fd go.mod -type f
```

Repository: akash-network/node
Length of output: 232
🏁 Script executed:
```shell
# Once we find go.mod, let's check the wasmd dependency version
head -100 go.mod | grep -i wasmd
```

Repository: akash-network/node
Length of output: 96
🌐 Web query:
wasmd v0.61.6 wasmkeeper NewKeeper function signature
💡 Result:
In wasmd v0.61.6 (x/wasm/keeper), the NewKeeper function signature is: [1]
func NewKeeper(
cdc codec.Codec,
storeService corestoretypes.KVStoreService,
accountKeeper types.AccountKeeper,
bankKeeper types.BankKeeper,
stakingKeeper types.StakingKeeper,
distrKeeper types.DistributionKeeper,
ics4Wrapper types.ICS4Wrapper,
channelKeeper types.ChannelKeeper,
channelKeeperV2 types.ChannelKeeperV2,
portSource types.ICS20TransferPortSource,
router MessageRouter,
_ GRPCQueryRouter,
homeDir string,
nodeConfig types.NodeConfig,
vmConfig types.VMConfig,
availableCapabilities []string,
authority string,
opts ...Option,
) Keeper

Sources: [1] (pkg.go.dev)
Fix parameter 7 in wasmkeeper.NewKeeper call: pass ics4Wrapper instead of ChannelKeeper.
The wasmd v0.61.6 signature expects parameter 7 to be types.ICS4Wrapper, but the code passes app.Keepers.Cosmos.IBC.ChannelKeeper (which is types.ChannelKeeper). Parameter 8 is correctly set to ChannelKeeper. Passing the wrong interface type at position 7 will cause a compilation error or type mismatch.
Additionally, verify that parameter 14 (wasmConfig) matches the expected types.NodeConfig type and that parameter 6's use of distrkeeper.NewQuerier() is intentional rather than passing the keeper directly.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/types/app.go` around lines 471 - 505, The call to wasmkeeper.NewKeeper is
passing app.Keepers.Cosmos.IBC.ChannelKeeper as the 7th argument but the wasmd
v0.61.6 signature requires an ICS4Wrapper there; replace the 7th parameter with
your IBC ICS4Wrapper instance (e.g., the ICS4Wrapper provided by your IBC
keeper, not ChannelKeeper) when constructing wasmKeeper, ensure the 8th argument
remains ChannelKeeper, and confirm wasmConfig matches the expected NodeConfig
type and that using distrkeeper.NewQuerier(...) for parameter 6 was intentional
versus passing the Distr keeper directly.
```rust
use cosmwasm_schema::write_api;
use akt_pyth_oracle::msg::{ExecuteMsg, InstantiateMsg, QueryMsg};
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check the Cargo.toml for the crate name and lib configuration
echo "=== Cargo.toml in contracts/price-oracle ==="
cat contracts/price-oracle/Cargo.toml
echo ""
echo "=== Checking for lib name override ==="
grep -A 5 "^\[lib\]" contracts/price-oracle/Cargo.toml || echo "No [lib] section found"
echo ""
echo "=== Searching for akt_pyth_oracle references ==="
rg "akt_pyth_oracle|akt-pyth-oracle" --max-count 20
```

Repository: akash-network/node
Length of output: 1163
Fix crate path for message imports—this will cause a compile error.
Line 2 imports akt_pyth_oracle::msg, but the crate is named price-oracle (Rust crate name price_oracle). No lib name override or dependency provides akt_pyth_oracle, so this import will fail at compile time.
🔧 Proposed fix
-use akt_pyth_oracle::msg::{ExecuteMsg, InstantiateMsg, QueryMsg};
+use price_oracle::msg::{ExecuteMsg, InstantiateMsg, QueryMsg};

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 use cosmwasm_schema::write_api;
-use akt_pyth_oracle::msg::{ExecuteMsg, InstantiateMsg, QueryMsg};
+use price_oracle::msg::{ExecuteMsg, InstantiateMsg, QueryMsg};
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/price-oracle/src/bin/schema.rs` around lines 1 - 3, The import path
uses the wrong crate name: replace the use statement that references
akt_pyth_oracle (use akt_pyth_oracle::msg::{ExecuteMsg, InstantiateMsg,
QueryMsg};) with the correct crate name for this project (the Rust crate name
price_oracle) so the line imports from price_oracle::msg::{ExecuteMsg,
InstantiateMsg, QueryMsg}; ensuring the module symbols ExecuteMsg,
InstantiateMsg, and QueryMsg resolve and the schema generator builds
successfully.
contracts/pyth/src/contract.rs.bak
Outdated
```rust
let config = CONFIG.load(deps.storage)?;

// Check if sufficient fee was paid (CosmWasm 3.x uses Uint256 for coin amounts)
let sent_amount = info
    .funds
    .iter()
    .find(|coin| coin.denom == "uakt")
    .map(|coin| coin.amount)
    .unwrap_or_else(Uint256::zero);

if sent_amount < config.update_fee {
    return Err(ContractError::InsufficientFunds {
        required: config.update_fee.to_string(),
        sent: sent_amount.to_string(),
    });
}
```
Price updates are unauthenticated. Any sender paying the fee can set the price, which is risky for an oracle. If updates must be trusted, restrict to admin/allowlist or verify signed oracle data.
🔒 Suggested guard (if updates are permissioned)
let config = CONFIG.load(deps.storage)?;
+ // Only admin (or allowlisted feeders) may publish prices
+ if info.sender != config.admin {
+ return Err(ContractError::Unauthorized {});
+ }
+
// Check if sufficient fee was paid (CosmWasm 3.x uses Uint256 for coin amounts)
 let sent_amount = info

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/price-oracle/src/contract.rs` around lines 106 - 122, The update
handler currently only checks payment (using CONFIG, config.update_fee,
info.funds and sent_amount) and allows any sender to set the price; restrict
updates by adding an authorization check: after loading CONFIG, verify
info.sender is allowed (compare against an admin/owner field in CONFIG or
consult an ALLOWED_UPDATERS storage item) or validate a signed price payload
(accept a signature and signer pubkey and verify the signature before applying
the update); if unauthorized return a new ContractError variant (e.g.,
UnauthorizedUpdater) instead of proceeding. Ensure you update CONFIG/storage
types and error enum accordingly and perform the check before accepting
funds/setting the price.
contracts/pyth/src/contract.rs.bak
Outdated
```rust
// Check staleness
let current_time = env.block.time.seconds() as i64;
if current_time - publish_time > MAX_STALENESS {
    return Err(ContractError::StalePriceData {
        current_time,
        publish_time,
    });
}
```
Reject future publish_time to avoid freezing updates. A future timestamp passes the staleness check and then blocks subsequent updates until chain time catches up.
🕒 Suggested validation
let current_time = env.block.time.seconds() as i64;
+ if publish_time > current_time {
+ return Err(ContractError::InvalidPriceData {
+ reason: format!(
+ "Publish time {} is in the future (current {})",
+ publish_time, current_time
+ ),
+ });
+ }
 if current_time - publish_time > MAX_STALENESS {

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 // Check staleness
 let current_time = env.block.time.seconds() as i64;
+if publish_time > current_time {
+    return Err(ContractError::InvalidPriceData {
+        reason: format!(
+            "Publish time {} is in the future (current {})",
+            publish_time, current_time
+        ),
+    });
+}
 if current_time - publish_time > MAX_STALENESS {
     return Err(ContractError::StalePriceData {
         current_time,
         publish_time,
     });
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/price-oracle/src/contract.rs` around lines 133 - 140, Add an
explicit check that rejects a future publish_time before the staleness
calculation: in the same validation block where current_time, publish_time and
MAX_STALENESS are used, if publish_time > current_time return an error (e.g. a
new ContractError variant like FuturePublishTime or reuse an existing error
type) instead of letting a future timestamp pass the staleness check; update any
callers/tests to expect this rejection.
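The two checks compose as below; this Go sketch uses a made-up `maxStaleness` of 60 seconds and plain errors in place of the contract's `ContractError` variants:

```go
package main

import "fmt"

// maxStaleness is a stand-in for the contract's MAX_STALENESS constant.
const maxStaleness int64 = 60 // seconds

// validatePublishTime rejects future timestamps first, then enforces
// the staleness window, mirroring the order suggested in the review.
func validatePublishTime(current, publish int64) error {
	if publish > current {
		return fmt.Errorf("publish time %d is in the future (current %d)", publish, current)
	}
	if current-publish > maxStaleness {
		return fmt.Errorf("stale price data: current %d, publish %d", current, publish)
	}
	return nil
}

func main() {
	fmt.Println(validatePublishTime(1000, 990) == nil)  // fresh: accepted
	fmt.Println(validatePublishTime(1000, 1500) != nil) // future: rejected
	fmt.Println(validatePublishTime(1000, 100) != nil)  // stale: rejected
}
```

Ordering matters: without the first branch, a far-future `publish_time` passes the staleness check and then blocks every later, honest update until chain time catches up.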
contracts/pyth/src/contract.rs.bak
Outdated
```rust
#[test]
fn test_instantiate_with_provided_id() {
    let mut deps = mock_dependencies();
    let msg = InstantiateMsg {
        admin: "admin".to_string(),
        update_fee: Uint256::from(1000u128),
        price_feed_id: "0xabc123def456".to_string(),
    };
    let info = message_info(&deps.api.addr_make("creator"), &[]);
    let env = mock_env();

    let res = instantiate(deps.as_mut(), env.clone(), info, msg).unwrap();
    assert_eq!(4, res.attributes.len());

    let config: ConfigResponse =
        from_json(&query(deps.as_ref(), env, QueryMsg::GetConfig {}).unwrap()).unwrap();
    assert_eq!("admin", config.admin);
    assert_eq!("0xabc123def456", config.price_feed_id);
}

#[test]
fn test_update_price_feed() {
    let mut deps = mock_dependencies();

    let config = Config {
        admin: deps.api.addr_make("admin"),
        update_fee: Uint256::from(1000u128),
        price_feed_id: "0xtest123".to_string(),
    };
    CONFIG.save(&mut deps.storage, &config).unwrap();

    let price_feed = PriceFeed::new();
    PRICE_FEED.save(&mut deps.storage, &price_feed).unwrap();

    let env = mock_env();

    let update_msg = ExecuteMsg::UpdatePriceFeed {
        price: Uint128::new(123000000),
        conf: Uint128::new(1000000),
        expo: -8,
        publish_time: env.block.time.seconds() as i64,
    };
    let info = message_info(&deps.api.addr_make("updater"), &[coin(1000, "uakt")]);
    let res = execute(deps.as_mut(), env.clone(), info, update_msg).unwrap();
    assert_eq!(5, res.attributes.len());

    let price: PriceResponse =
        from_json(&query(deps.as_ref(), env, QueryMsg::GetPrice {}).unwrap()).unwrap();
    assert_eq!(Uint128::new(123000000), price.price);
}

#[test]
fn test_update_fee() {
    let mut deps = mock_dependencies();

    let config = Config {
        admin: deps.api.addr_make("admin"),
        update_fee: Uint256::from(1000u128),
        price_feed_id: "0xtest123".to_string(),
    };
    CONFIG.save(&mut deps.storage, &config).unwrap();

    let msg = ExecuteMsg::UpdateFee {
        new_fee: Uint256::from(2000u128),
    };
    let info = message_info(&deps.api.addr_make("admin"), &[]);
    let res = execute(deps.as_mut(), mock_env(), info, msg).unwrap();
    assert_eq!(2, res.attributes.len());

    let config: ConfigResponse =
        from_json(&query(deps.as_ref(), mock_env(), QueryMsg::GetConfig {}).unwrap())
            .unwrap();
    assert_eq!(Uint256::from(2000u128), config.update_fee);
}

#[test]
fn test_query_price_feed_id() {
    let mut deps = mock_dependencies();

    let config = Config {
        admin: deps.api.addr_make("admin"),
        update_fee: Uint256::from(1000u128),
        price_feed_id: "0xabc123def456".to_string(),
    };
    CONFIG.save(&mut deps.storage, &config).unwrap();

    let response: PriceFeedIdResponse = from_json(
        &query(
            deps.as_ref(),
            mock_env(),
            QueryMsg::GetPriceFeedId {},
        )
        .unwrap(),
    )
    .unwrap();

    assert_eq!("0xabc123def456", response.price_feed_id);
}
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check cosmwasm_std version and mock dependency helpers
echo "=== cosmwasm_std version in Cargo.toml ==="
rg -n "cosmwasm-std|cosmwasm_std" -g 'Cargo.toml'
echo -e "\n=== Instances of mock_dependencies usage ==="
rg -n "mock_dependencies" --type=rust -C 2
echo -e "\n=== Check for AkashQuery definition/usage ==="
rg -n "AkashQuery" --type=rust
echo -e "\n=== Check for custom mock helpers ==="
rg -n "mock_dependencies_with_custom|create_mock_environment" --type=rust
```

Repository: akash-network/node
Length of output: 3781
🏁 Script executed:
```shell
#!/bin/bash
# Look for custom mock_dependencies implementation or generic helpers
echo "=== Search for mock_dependencies implementation in querier.rs or tests ==="
rg -A 10 "fn mock_dependencies" --type=rust
echo -e "\n=== Check querier.rs for custom mock helpers ==="
head -50 contracts/price-oracle/src/querier.rs
echo -e "\n=== Check if there's a test utilities module ==="
fd "test|mock" --type=f contracts/price-oracle/src/
echo -e "\n=== Full test function signatures ==="
rg -B 2 "fn test_instantiate_with_provided_id" -A 15 --type=rust
```

Repository: akash-network/node
Length of output: 2546
🏁 Script executed:
#!/bin/bash
# Check for Cargo.lock to see actual resolved versions
echo "=== Check Cargo.lock for cosmwasm_std ==="
rg "cosmwasm-std" Cargo.lock | head -20
echo -e "\n=== Check if tests are configured in Cargo.toml ==="
cat contracts/price-oracle/Cargo.toml | grep -A 10 "\[\[test"
echo -e "\n=== Check for any CI/build scripts ==="
find . -maxdepth 2 -type f \( -name "*.yml" -o -name "*.yaml" -o -name "Makefile" \) | head -10
echo -e "\n=== Look for build output or test configuration ==="
ls -la contracts/price-oracle/ | grep -E "target|\.cargo|build"

Repository: akash-network/node
Length of output: 513
🏁 Script executed:
#!/bin/bash
# Check CI workflows for test execution
echo "=== GitHub workflows for Rust testing ==="
find .github/workflows -type f \( -name "*.yml" -o -name "*.yaml" \) | head -5
echo -e "\n=== Check if any workflow runs contract tests ==="
rg "price-oracle|contract.*test|cargo test" .github/workflows/
echo -e "\n=== Check Makefile for test targets ==="
grep -n "test\|contract" Makefile | head -20
echo -e "\n=== Check mock_dependencies generic signature in cosmwasm_std 3.0.2 docs ==="
# Look at the actual function call to understand type inference
rg -B 5 -A 5 "let mut deps = mock_dependencies" contracts/price-oracle/src/contract.rs

Repository: akash-network/node
Length of output: 1936
🏁 Script executed:
#!/bin/bash
# Check if tests are marked as ignored or disabled
echo "=== Check test attributes ==="
rg "#\[.*test.*\]|#\[ignore\]" -B 1 contracts/price-oracle/src/contract.rs | grep -E "test|ignore"
echo -e "\n=== Check if price-oracle is built/tested at all ==="
grep -r "price-oracle" Makefile .github/workflows/ 2>/dev/null || echo "No matches found"
echo -e "\n=== Try to understand mock_dependencies type signature ==="
# Check imports to see full context
head -30 contracts/price-oracle/src/contract.rs

Repository: akash-network/node
Length of output: 1353
🏁 Script executed:
#!/bin/bash
# Check the actual function signatures to see if they're generic
echo "=== Check instantiate function signature ==="
rg -A 5 "pub fn instantiate" contracts/price-oracle/src/contract.rs
echo -e "\n=== Check execute function signature ==="
rg -A 5 "pub fn execute" contracts/price-oracle/src/contract.rs
echo -e "\n=== Check if there's any generic parameter handling ==="
rg "impl.*<.*>.*\|where " contracts/price-oracle/src/contract.rs | head -20
echo -e "\n=== Verify no type parameter on mock_dependencies call ==="
rg "mock_dependencies\(" contracts/price-oracle/src/contract.rs

Repository: akash-network/node
Length of output: 1276
🏁 Script executed:
#!/bin/bash
# Check cosmwasm_std 3.0.2 documentation for mock_dependencies signature
# and verify if there's a way the type inference could work
echo "=== Check the complete test module structure ==="
sed -n '279,290p' contracts/price-oracle/src/contract.rs
echo -e "\n=== Verify no hidden trait bounds or type specifications ==="
rg -B 20 "fn test_instantiate_with_provided_id" contracts/price-oracle/src/contract.rs | grep -E "impl|type|where|generic|CustomQuery"
echo -e "\n=== Check if cosmwasm_std re-exports or wraps mock_dependencies ==="
rg "mock_dependencies|MockQuerier" contracts/price-oracle/src/ -n
echo -e "\n=== Most importantly: do these tests actually compile? Check for build config ==="
cat contracts/price-oracle/Cargo.toml

Repository: akash-network/node
Length of output: 1996
🏁 Script executed:
#!/bin/bash
# Final verification: try to understand if these tests are meant to work
# Check if price-oracle is even part of the workspace
echo "=== Check workspace configuration ==="
cat Cargo.toml | grep -A 20 "\[workspace\]"
echo -e "\n=== Check if price-oracle has its own lib/test setup ==="
ls -la contracts/price-oracle/src/
echo -e "\n=== Search for any build scripts or test configuration specific to price-oracle ==="
find contracts/price-oracle -name "build.rs" -o -name "lib.rs" -o -name "tests"

Repository: akash-network/node
Length of output: 859
🏁 Script executed:
#!/bin/bash
# Find the root Cargo.toml
echo "=== Find Cargo.toml files ==="
find . -maxdepth 3 -name "Cargo.toml" -type f
echo -e "\n=== Check the actual root Cargo.toml ==="
cat ./Cargo.toml 2>/dev/null || echo "Not at root"
echo -e "\n=== Check if price-oracle is in a subdir Cargo.toml ==="
cat contracts/Cargo.toml 2>/dev/null | head -30 || echo "No contracts/Cargo.toml"
echo -e "\n=== Check lib.rs to see what's exported ==="
cat contracts/price-oracle/src/lib.rs

Repository: akash-network/node
Length of output: 399
🏁 Script executed:
#!/bin/bash
# Final confirmation: Check cosmwasm_std 3.0.2 documentation behavior
# by looking at how mock_dependencies would be inferred
echo "=== Verify the type mismatch is real by checking function parameter types ==="
rg "instantiate\(deps" contracts/price-oracle/src/contract.rs -A 1
echo -e "\n=== Check if tests even have a chance of running ==="
rg "#\[cfg\(test\)\]" contracts/price-oracle/src/contract.rs
echo -e "\n=== Final check: cosmwasm_std 3.0.2 mock_dependencies returns what? ==="
# Search for any conditional compilation or feature flags that might enable AkashQuery support
rg "cfg\(|feature" contracts/price-oracle/src/contract.rs

Repository: akash-network/node
Length of output: 620
🏁 Script executed:
#!/bin/bash
# Final verification: Check if mock_dependencies can be used with AkashQuery
# by examining if there's a generic version available in cosmwasm_std 3.0.2
echo "=== Check cosmwasm_std documentation for generics ==="
rg "mock_dependencies" --type=rust -B 2 -A 8 | head -50
echo -e "\n=== Check if cw-multi-test provides the proper mocking for custom queries ==="
rg "cw.multi.test|cw_multi_test" contracts/price-oracle/Cargo.toml
echo -e "\n=== Verify that deps.as_mut() would have the wrong type ==="
# This confirms the type mismatch is real by checking the actual call
sed -n '286,295p' contracts/price-oracle/src/contract.rs

Repository: akash-network/node
Length of output: 4040
🏁 Script executed:
#!/bin/bash
# Check if there's a generic version of mock_dependencies available in 3.0.2
echo "=== Check if mock_dependencies has a generic turbofish form ==="
rg "mock_dependencies::" --type=rust
echo -e "\n=== Verify no type annotation exists anywhere ==="
rg "deps.*:.*OwnedDeps|DepsMut.*<" contracts/price-oracle/src/contract.rs | grep test
echo -e "\n=== Check what cw-multi-test provides instead ==="
grep -A 5 "cw-multi-test" contracts/price-oracle/Cargo.toml

Repository: akash-network/node
Length of output: 299
These tests will not compile. In cosmwasm_std 3.0.2, `mock_dependencies()` without type parameters defaults to `OwnedDeps<_, _, MockQuerier<Empty>>`, but all contract functions require `DepsMut<AkashQuery>`. To fix, use `mock_dependencies::<AkashQuery>()` with turbofish syntax, or refactor the contract functions to accept a generic `DepsMut<C: CustomQuery>` for test compatibility.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/price-oracle/src/contract.rs` around lines 284 - 381, Tests fail to
compile because mock_dependencies() is producing OwnedDeps with the default
MockQuerier<Empty> while the contract functions expect DepsMut<AkashQuery>;
update the tests to call mock_dependencies::<AkashQuery>() (turbofish) wherever
mock_dependencies() is used (e.g., in test_instantiate_with_provided_id,
test_update_price_feed, test_update_fee, test_query_price_feed_id) so the
returned deps match the contract's DepsMut<AkashQuery>; alternatively, if you
prefer a library change, make instantiate, execute, and query accept generic
DepsMut<C: CustomQuery> so tests can use the default mock_dependencies, but the
minimal fix is to replace mock_dependencies() with
mock_dependencies::<AkashQuery>() in these tests.
```diff
 .PHONY: test-integration
 test-integration:
-	$(GO_TEST) -v -tags="e2e.integration" $(TEST_MODULES)
+	$(GO_TEST) -v -tags="e2e.integration" -ldflags '$(ldflags)' $(TEST_MODULES)
```
test-integration should depend on wasmvm-libs like other test targets.
Without this, integration tests can run before wasmvm libs are available.
🛠️ Suggested fix

```diff
-.PHONY: test-integration
-test-integration:
+.PHONY: test-integration
+test-integration: wasmvm-libs
 	$(GO_TEST) -v -tags="e2e.integration" -ldflags '$(ldflags)' $(TEST_MODULES)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```makefile
.PHONY: test-integration
test-integration: wasmvm-libs
	$(GO_TEST) -v -tags="e2e.integration" -ldflags '$(ldflags)' $(TEST_MODULES)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@make/test-integration.mk` around lines 21 - 24, The test-integration Make
target lacks the dependency on wasmvm-libs so integration tests may run before
the wasmvm artifacts are built; update the Makefile by adding wasmvm-libs as a
prerequisite for the test-integration target (i.e., change the target header for
test-integration to list wasmvm-libs before the recipe) so that the wasmvm-libs
target is executed/built prior to running the $(GO_TEST) command.
x/oracle/alias.go
Outdated
```go
package wasm

import (
	types "pkg.akt.dev/go/node/oracle/v1"
)

const (
	// StoreKey represents storekey of wasm module
	StoreKey = types.StoreKey
	// ModuleName represents current module name
	ModuleName = types.ModuleName
)
```
Package name mismatch: declared as `wasm` but the file lives in the `x/oracle` directory.

The file is located at `x/oracle/alias.go` but declares `package wasm` instead of `package oracle`. This will fail to compile: the other files in `x/oracle` declare `package oracle`, and all Go files in a single directory must belong to the same package.

Additionally, the comments on lines 8 and 10 reference the "wasm module" even though the types are imported from `pkg.akt.dev/go/node/oracle/v1`.
Proposed fix

```diff
-package wasm
+package oracle

 import (
 	types "pkg.akt.dev/go/node/oracle/v1"
 )

 const (
-	// StoreKey represents storekey of wasm module
+	// StoreKey represents storekey of oracle module
 	StoreKey = types.StoreKey
-	// ModuleName represents current module name
+	// ModuleName represents oracle module name
 	ModuleName = types.ModuleName
 )
```

📝 Committable suggestion
```go
package oracle

import (
	types "pkg.akt.dev/go/node/oracle/v1"
)

const (
	// StoreKey represents storekey of oracle module
	StoreKey = types.StoreKey
	// ModuleName represents oracle module name
	ModuleName = types.ModuleName
)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/oracle/alias.go` around lines 1 - 12, The file declares the wrong package
name and misleading comments: change the package declaration from "wasm" to
"oracle" and update the two comments referencing "wasm module" to refer to the
oracle module; leave the constants StoreKey and ModuleName (and the import types
from pkg.akt.dev/go/node/oracle/v1) intact so StoreKey and ModuleName continue
to alias types.StoreKey and types.ModuleName.
x/oracle/genesis.go
Outdated
```diff
@@ -0,0 +1,39 @@
+package wasm
```
🧩 Analysis chain
🏁 Script executed:
rg -n "^package " x/oracle -g '*.go'

Repository: akash-network/node
Length of output: 456
Fix package name mismatch (compile-time blocker).
x/oracle/genesis.go declares package wasm, which won't compile under the x/oracle directory. This should be package oracle. This issue also affects x/oracle/module.go and x/oracle/alias.go.
✅ Suggested fix
```diff
-package wasm
+package oracle
```

📝 Committable suggestion
```go
package oracle
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/oracle/genesis.go` at line 1, The package declaration in x/oracle Go files
is incorrect: change the top-line package from "wasm" to "oracle" in
x/oracle/genesis.go, x/oracle/module.go, and x/oracle/alias.go so the package
name matches the directory (replace any "package wasm" with "package oracle");
after updating, ensure imports and any package-qualified references still
compile (run go build/go vet) and adjust any internal references to the package
name if present.
x/oracle/keeper/grpc_query.go
Outdated
```go
func (k Querier) PriceFeedConfig(ctx context.Context, request *types.QueryPriceFeedConfigRequest) (*types.QueryPriceFeedConfigResponse, error) {
	//TODO implement me
	panic("implement me")
}
```
Replace panic with a proper gRPC “unimplemented” error.
A panic in a query handler can crash the gRPC server. Return codes.Unimplemented instead.
🛠️ Suggested fix

```diff
 func (k Querier) PriceFeedConfig(ctx context.Context, request *types.QueryPriceFeedConfigRequest) (*types.QueryPriceFeedConfigResponse, error) {
-	//TODO implement me
-	panic("implement me")
+	return nil, status.Errorf(codes.Unimplemented, "PriceFeedConfig is not implemented")
 }
```

📝 Committable suggestion
```go
func (k Querier) PriceFeedConfig(ctx context.Context, request *types.QueryPriceFeedConfigRequest) (*types.QueryPriceFeedConfigResponse, error) {
	return nil, status.Errorf(codes.Unimplemented, "PriceFeedConfig is not implemented")
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/oracle/keeper/grpc_query.go` around lines 19 - 22, The PriceFeedConfig
query handler currently panics; replace the panic in Querier.PriceFeedConfig
with returning a gRPC unimplemented error (use status.Errorf with
codes.Unimplemented and a concise message like "PriceFeedConfig not
implemented") so the server doesn't crash; also add the necessary imports for
"google.golang.org/grpc/codes" and "google.golang.org/grpc/status" (or use
status.Error) and ensure the function returns
(*types.QueryPriceFeedConfigResponse)(nil) along with the error.
```go
// FilterMessage applies Phase 1 filtering rules
func (k *keeper) FilterMessage(sctx sdk.Context, contractAddr sdk.AccAddress, msg wasmvmtypes.CosmosMsg,
) error {
	// ALLOW Bank messages (with restrictions)
	if msg.Bank != nil {
		return k.filterBankMessage(sctx, msg.Bank)
	}

	// BLOCK Staking messages
	if msg.Staking != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Staking operations not allowed",
		)
	}

	// BLOCK Distribution messages
	if msg.Distribution != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Distribution operations not allowed",
		)
	}

	// BLOCK Governance messages
	if msg.Gov != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Governance operations not allowed",
		)
	}

	// BLOCK IBC messages
	if msg.IBC != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"IBC messages not allowed",
		)
	}

	if msg.IBC2 != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"IBC2 messages not allowed",
		)
	}

	// BLOCK Custom messages (no Akash bindings)
	if msg.Custom != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Custom messages not allowed",
		)
	}

	// BLOCK Any messages (no Akash bindings)
	if msg.Any != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Any messages not allowed",
		)
	}

	// ALLOW Wasm messages (contract-to-contract calls)
	if msg.Wasm != nil {
		// Wasm execute/instantiate allowed
		return nil
	}

	// BLOCK unknown/unhandled message types
	return errorsmod.Wrap(
		sdkerrors.ErrUnauthorized,
		"Unknown message type not allowed",
	)
}

// filterBankMessage applies restrictions to bank operations
func (k *keeper) filterBankMessage(sctx sdk.Context, msg *wasmvmtypes.BankMsg) error {
	// Allow send with restrictions
	if msg.Send != nil {
		params := k.GetParams(sctx)

		// Block transfers to critical addresses
		for _, addr := range params.BlockedAddresses {
			if addr == msg.Send.ToAddress {
				return errorsmod.Wrapf(
					sdkerrors.ErrUnauthorized,
					"Transfers to %s blocked (critical address)",
					msg.Send.ToAddress,
				)
			}
		}

		// Transfers to regular addresses allowed
		return nil
	}

	// Deny burns
	if msg.Burn != nil {
		return errorsmod.Wrapf(
			sdkerrors.ErrUnauthorized,
			"Burn is not allowed",
		)
	}

	return nil
}
```
Block unknown BankMsg variants instead of silently allowing them.
If a BankMsg has no Send/Burn (or a new variant is introduced), this currently returns nil, which effectively bypasses Phase‑1 restrictions for that variant. Safer default is to reject unknown bank operations.
🛡️ Suggested fix

```diff
 	// Deny burns
 	if msg.Burn != nil {
 		return errorsmod.Wrapf(
 			sdkerrors.ErrUnauthorized,
 			"Burn is not allowed",
 		)
 	}

-	return nil
+	return errorsmod.Wrap(
+		sdkerrors.ErrUnauthorized,
+		"Unsupported bank message type",
+	)
 }
```

📝 Committable suggestion
```go
// FilterMessage applies Phase 1 filtering rules
func (k *keeper) FilterMessage(sctx sdk.Context, contractAddr sdk.AccAddress, msg wasmvmtypes.CosmosMsg,
) error {
	// ALLOW Bank messages (with restrictions)
	if msg.Bank != nil {
		return k.filterBankMessage(sctx, msg.Bank)
	}

	// BLOCK Staking messages
	if msg.Staking != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Staking operations not allowed",
		)
	}

	// BLOCK Distribution messages
	if msg.Distribution != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Distribution operations not allowed",
		)
	}

	// BLOCK Governance messages
	if msg.Gov != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Governance operations not allowed",
		)
	}

	// BLOCK IBC messages
	if msg.IBC != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"IBC messages not allowed",
		)
	}

	if msg.IBC2 != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"IBC2 messages not allowed",
		)
	}

	// BLOCK Custom messages (no Akash bindings)
	if msg.Custom != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Custom messages not allowed",
		)
	}

	// BLOCK Any messages (no Akash bindings)
	if msg.Any != nil {
		return errorsmod.Wrap(
			sdkerrors.ErrUnauthorized,
			"Any messages not allowed",
		)
	}

	// ALLOW Wasm messages (contract-to-contract calls)
	if msg.Wasm != nil {
		// Wasm execute/instantiate allowed
		return nil
	}

	// BLOCK unknown/unhandled message types
	return errorsmod.Wrap(
		sdkerrors.ErrUnauthorized,
		"Unknown message type not allowed",
	)
}

// filterBankMessage applies restrictions to bank operations
func (k *keeper) filterBankMessage(sctx sdk.Context, msg *wasmvmtypes.BankMsg) error {
	// Allow send with restrictions
	if msg.Send != nil {
		params := k.GetParams(sctx)

		// Block transfers to critical addresses
		for _, addr := range params.BlockedAddresses {
			if addr == msg.Send.ToAddress {
				return errorsmod.Wrapf(
					sdkerrors.ErrUnauthorized,
					"Transfers to %s blocked (critical address)",
					msg.Send.ToAddress,
				)
			}
		}

		// Transfers to regular addresses allowed
		return nil
	}

	// Deny burns
	if msg.Burn != nil {
		return errorsmod.Wrapf(
			sdkerrors.ErrUnauthorized,
			"Burn is not allowed",
		)
	}

	return errorsmod.Wrap(
		sdkerrors.ErrUnauthorized,
		"Unsupported bank message type",
	)
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/wasm/keeper/msg_filter.go` around lines 63 - 169, The filterBankMessage
function currently returns nil for unknown BankMsg variants (when msg.Send and
msg.Burn are both nil), which silently allows future or unexpected bank
operations; update filterBankMessage to reject any BankMsg that is not
explicitly handled by returning an unauthorized error (use errorsmod.Wrapf with
sdkerrors.ErrUnauthorized and a clear message like "Unknown bank message variant
not allowed"), keeping the existing handling for Send (including using
k.GetParams and params.BlockedAddresses and the blocked-address check) and Burn
intact, so only Send/Burn are permitted and all other variants cause an error.
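The default-deny pattern this comment asks for can be sketched in isolation: enumerate the variants you explicitly permit and fall through to an error for everything else. A minimal stand-alone Go sketch, using illustrative types rather than the real `wasmvmtypes.BankMsg`:

```go
package main

import (
	"errors"
	"fmt"
)

// bankMsg mimics the shape of a one-of message: at most one
// variant pointer is set. (Illustrative only, not the wasmd type.)
type bankMsg struct {
	Send *struct{ ToAddress string }
	Burn *struct{}
	Mint *struct{} // a hypothetical future variant
}

var errUnauthorized = errors.New("unauthorized")

// filterBank allows Send (with a blocklist), rejects Burn, and,
// crucially, rejects any variant it does not recognize instead of
// returning nil by default.
func filterBank(msg bankMsg, blocked map[string]bool) error {
	switch {
	case msg.Send != nil:
		if blocked[msg.Send.ToAddress] {
			return fmt.Errorf("%w: transfers to %s blocked", errUnauthorized, msg.Send.ToAddress)
		}
		return nil
	case msg.Burn != nil:
		return fmt.Errorf("%w: burn not allowed", errUnauthorized)
	default:
		// Unknown or newly added variants are blocked until reviewed.
		return fmt.Errorf("%w: unsupported bank message", errUnauthorized)
	}
}

func main() {
	blocked := map[string]bool{"akash1critical": true}

	ok := bankMsg{Send: &struct{ ToAddress string }{"akash1user"}}
	fmt.Println(filterBank(ok, blocked)) // nil: plain send is allowed

	mint := bankMsg{Mint: &struct{}{}}
	fmt.Println(filterBank(mint, blocked) != nil) // true: default-deny
}
```

With this shape, adding a new variant to the message type cannot silently widen the allowlist; the `default` arm must be consciously updated.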
Actionable comments posted: 8
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
x/cert/module.go (1)
85-91: ⚠️ Potential issue | 🟡 Minor — Stale module name in panic message.
The comment on line 85 was updated to reference the `cert` module, but the `panic` message on line 89 still says `"provider"`.

🔧 Proposed fix
```diff
-	panic(fmt.Sprintf("couldn't register provider grpc routes: %s", err.Error()))
+	panic(fmt.Sprintf("couldn't register cert grpc routes: %s", err.Error()))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/cert/module.go` around lines 85 - 91, The panic message in AppModuleBasic.RegisterGRPCGatewayRoutes still refers to "provider" even though this is the cert module; update the panic text in the error branch (where types.RegisterQueryHandlerClient is called with context.Background(), mux, types.NewQueryClient(clientCtx)) to mention "cert" instead of "provider" so the message reads something like "couldn't register cert grpc routes: <error>". Ensure you modify only the fmt.Sprintf string passed to panic.
♻️ Duplicate comments (4)
contracts/pyth/src/contract.rs.bak (3)
294-397: ⚠️ Potential issue | 🟠 Major — Custom-query tests likely need the
`AkashQuery` mock helper.

These tests call entry points requiring `DepsMut<AkashQuery>`. Depending on cosmwasm-std 3.0.2, `mock_dependencies()` often returns `OwnedDeps<..., Empty>` and won't type-check. Please use the custom-query mock helper for `AkashQuery` (e.g., `mock_dependencies_custom::<AkashQuery>()` or the generic form for your version).

🏁 Script:

```shell
#!/bin/bash
# Inspect cosmwasm-std version and existing mock helper usage
rg -n "cosmwasm-std" -g "Cargo.toml"
rg -n "mock_dependencies(_custom)?\(" --type=rust -C 2
```

🤖 Prompt for AI Agents
DepsMut<AkashQuery>. Depending on cosmwasm-std 3.0.2,mock_dependencies()often returnsOwnedDeps<..., Empty>and won’t type-check. Please use the custom-query mock helper forAkashQuery(e.g.,mock_dependencies_custom::<AkashQuery>()or the generic form for your version).cosmwasm-std 3.0.2 mock_dependencies custom query helper signature (CustomQuery / AkashQuery)#!/bin/bash # Inspect cosmwasm-std version and existing mock helper usage rg -n "cosmwasm-std" -g "Cargo.toml" rg -n "mock_dependencies(_custom)?\(" --type=rust -C 2🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs.bak` around lines 294 - 397, Tests in mod tests use mock_dependencies() but the contract entry points expect DepsMut<AkashQuery>, so replace mock_dependencies() with the custom-query mock helper (e.g., mock_dependencies_custom::<AkashQuery>() or the correct generic for your cosmwasm version) in each test (test_instantiate_with_provided_id, test_update_price_feed, test_update_fee, test_query_price_feed_id) and update any message_info/env construction as needed to use the returned deps; ensure you import AkashQuery and the mock_dependencies_custom helper so types match the contract functions (instantiate, execute, query) that reference AkashQuery.
149-156: ⚠️ Potential issue | 🟠 Major — Reject future
`publish_time` to avoid freezing updates. A future timestamp passes the staleness check and blocks subsequent updates until chain time catches up.
🕒 Suggested validation
```diff
     // Check staleness
     let current_time = env.block.time.seconds() as i64;
+    if publish_time > current_time {
+        return Err(ContractError::InvalidPriceData {
+            reason: format!(
+                "Publish time {} is in the future (current {})",
+                publish_time, current_time
+            ),
+        });
+    }
     if current_time - publish_time > MAX_STALENESS {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs.bak` around lines 149 - 156, The code currently only checks for stale data but allows a publish_time in the future, which can freeze updates; add a validation in the same function before the staleness check to reject publish_time > current_time and return a clear error (e.g., ContractError::FuturePublishTime { publish_time, current_time }) or add that error variant if missing; keep the existing MAX_STALENESS and ContractError::StalePriceData logic intact and use env.block.time.seconds() as the source of current_time to compare against publish_time.
122-137: ⚠️ Potential issue | 🟠 Major — Price updates are unauthenticated.
Any sender paying the fee can update the price feed. If updates must be trusted, restrict to admin/allowlist (or validate signed oracle payloads) before accepting the update.
🔒 Suggested guard (admin-only example)
```diff
 let config = CONFIG.load(deps.storage)?;
+
+if info.sender != config.admin {
+    return Err(ContractError::Unauthorized {});
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs.bak` around lines 122 - 137, The current update path only checks payment (CONFIG, config.update_fee, info.funds) and allows any payer to update prices; restrict this by enforcing an authorization check before accepting updates: load the admin/allowlist from CONFIG (e.g., config.admin or config.allowlist) and verify info.sender is authorized, or alternatively call a signature verification helper (e.g., verify_oracle_signature) to validate a signed oracle payload; if the check fails return a new unauthorized error (e.g., ContractError::Unauthorized) instead of proceeding to fee validation/price update so only trusted parties can update the feed.app/types/app.go (1)
500-519: ⚠️ Potential issue | 🟠 Major — Verify
`wasmkeeper.NewKeeper` argument order for your wasmd version.

In wasmd v0.61.x, parameter 7 is `ICS4Wrapper` and parameter 8 is `ChannelKeeper`. Passing `ChannelKeeper` for both can be wrong if the types don't satisfy the interface expected at position 7. Please confirm the exact signature for your wasmd version and adjust the order/types accordingly.

🏁 Script:

```shell
#!/bin/bash
# Check wasmd version in go.mod and locate the NewKeeper call
rg -n "github.com/CosmWasm/wasmd" go.mod
rg -n "NewKeeper\\(" app/types/app.go
```

🤖 Prompt for AI Agents
ICS4Wrapperand parameter 8 isChannelKeeper. PassingChannelKeeperfor both can be wrong if the types don’t satisfy the interface expected at position 7. Please confirm the exact signature for your wasmd version and adjust the order/types accordingly.#!/bin/bash # Check wasmd version in go.mod and locate the NewKeeper call rg -n "github.com/CosmWasm/wasmd" go.mod rg -n "NewKeeper\\(" app/types/app.gowasmd v0.61.x wasmkeeper NewKeeper signature ICS4Wrapper ChannelKeeper parameter order🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/types/app.go` around lines 500 - 519, The wasmkeeper.NewKeeper call is passing ChannelKeeper twice; verify and match the wasmd v0.61.x signature which expects an ICS4Wrapper at parameter 7 and a ChannelKeeper at parameter 8. Replace the duplicate app.Keepers.Cosmos.IBC.ChannelKeeper (the 7th arg) with a proper ICS4Wrapper (for example ibc.NewICS4Wrapper(app.Keepers.Cosmos.IBC.ChannelKeeper) or the field that exposes ICS4Wrapper on your IBCKeeper) and keep app.Keepers.Cosmos.IBC.ChannelKeeper as the 8th arg; also add the necessary ibc import if you use ibc.NewICS4Wrapper and confirm types compile against wasmkeeper.NewKeeper.
🟠 Major comments (13)
tests/e2e/deployment_cli_test.go-5-542 (1)
5-542:⚠️ Potential issue | 🟠 MajorRe-enable the deployment CLI E2E test instead of commenting it out.
Commenting out the entire test suite disables critical E2E coverage, which is risky for a breaking-change PR. Please restore the tests and update them to v2 utilities, or replace with a build-tagged skip that documents why the tests cannot run yet.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/deployment_cli_test.go` around lines 5 - 542, The whole deployment E2E test suite (deploymentIntegrationTestSuite with methods SetupSuite, TestDeployment, TestGroup, TestFundedDeployment and helper getAccountBalance) was commented out; restore the file by uncommenting the test code and either update imports/usages to the v2 utilities (replace dv1beta4/types references and DefaultParams/MinDepositFor calls with their v2 equivalents and adjust any API changes in clitestutil/cli helpers) or, if the tests cannot run now, re-enable them behind a clear build tag (e.g. add a go:build e2e line and a short runtime skip) so they remain in source control; ensure all unique symbols referenced in the diff (deploymentIntegrationTestSuite, SetupSuite, TestDeployment, TestGroup, TestFundedDeployment, getAccountBalance) compile against the updated packages and that tx/query helpers (TxCreateDeploymentExec, QueryDeploymentsExec, TxDepositDeploymentExec, etc.) are imported from the v2 test utilities.x/deployment/simulation/genesis.go-35-38 (1)
35-38: ⚠️ Potential issue | 🟠 Major — Use
`sdk.NewCoins()` to ensure MinDeposits are sorted by denom.
`sdk.Coins` requires denoms sorted in ascending order (alphabetically); the literal construction creates them as `["uakt", "uact"]`, but sorted order is `["uact", "uakt"]`. Unsorted coins fail `ValidateBasic()` and `IsValid()` checks. Use `sdk.NewCoins(minDepositAKT, minDepositACT)` instead; it automatically sorts and validates.

✅ Suggested fix
```diff
-		MinDeposits: sdk.Coins{
-			minDepositAKT,
-			minDepositACT,
-		},
+		MinDeposits: sdk.NewCoins(
+			minDepositAKT,
+			minDepositACT,
+		),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/deployment/simulation/genesis.go` around lines 35 - 38, MinDeposits is constructed with a literal sdk.Coins slice which can be unsorted (["uakt","uact"]) and thus fail ValidateBasic/IsValid; replace the literal with sdk.NewCoins(minDepositAKT, minDepositACT) so the coins are automatically sorted and validated before assignment to MinDeposits (update the MinDeposits assignment in genesis where MinDeposits: sdk.Coins{...} is set).x/deployment/simulation/proposals.go-47-52 (1)
47-52: ⚠️ Potential issue | 🟠 Major

Normalize `sdk.Coins` after appending to ensure sorted order.

Appending `"uakt"` then `"uact"` creates an unsorted slice (`"uact"` < `"uakt"` lexicographically). The `SetParams` handler calls `params.Validate()`, which requires coins to be sorted by denom in ascending order. This will fail validation when the simulated `MsgUpdateParams` is processed.

Suggested fix

```diff
-	// uakt must always be present
-	coins = append(coins, sdk.NewInt64Coin("uakt", int64(simtypes.RandIntBetween(r, 500000, 50000000))))
-	coins = append(coins, sdk.NewInt64Coin("uact", int64(simtypes.RandIntBetween(r, 500000, 50000000))))
+	// uakt + uact must always be present
+	coins = append(coins,
+		sdk.NewInt64Coin("uakt", int64(simtypes.RandIntBetween(r, 500000, 50000000))),
+		sdk.NewInt64Coin("uact", int64(simtypes.RandIntBetween(r, 500000, 50000000))),
+	)
+	coins = sdk.NewCoins(coins...)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/deployment/simulation/proposals.go` around lines 47 - 52, The appended MinDeposits slice can be unsorted (you append "uakt" then "uact"), which will fail params.Validate(); fix by normalizing/sorting the coins before assigning to params.MinDeposits, e.g. build a sdk.Coins (or use sdk.NewCoins) from the coins slice and call its Sort() (or rely on NewCoins' sorting) so that params.MinDeposits is in ascending denom order; update the code around the coins variable and the assignment to params.MinDeposits in the block that creates params (types.DefaultParams()).

go.mod (1)
156-156: ⚠️ Potential issue | 🟠 Major

`github.com/ethereum/go-ethereum v1.16.8` has three unpatched HIGH-severity CVEs.

The problems are resolved in v1.16.9 across all three advisories:

- GHSA-2gjw-fg97-vg3r / GHSA-689v-6xwf-5jf3 — a vulnerable node can be forced to shutdown/crash using a specially crafted message
- GHSA-m6j8-rg6r-7mv8 — through a flaw in the ECIES cryptography implementation, an attacker may be able to extract bits of the p2p node key

These vulnerabilities are in go-ethereum's p2p / RLPx networking layer, which Cosmos nodes typically do not use directly. However, upgrading to `v1.16.9` eliminates them cleanly. Verify the transitive dependency chain (e.g., via `go mod why github.com/ethereum/go-ethereum`) to confirm whether the vulnerable code paths are reachable, and bump to `v1.16.9` in the `replace` block or by transitively bumping the pulling dependency.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@go.mod` at line 156, The dependency github.com/ethereum/go-ethereum v1.16.8 has three HIGH CVEs; update the module to v1.16.9 by changing the version in go.mod (either directly bump the required version or add/modify a replace directive for github.com/ethereum/go-ethereum to v1.16.9) and then run go mod tidy; also run go mod why github.com/ethereum/go-ethereum to confirm why it is pulled and verify the vulnerable code paths are not reachable after the bump.

contracts/wormhole/src/state.rs (1)
148-176: ⚠️ Potential issue | 🟠 Major

Add input length validation to prevent panics in deserialization methods.

The `ByteUtils` trait methods (`get_u8`, `get_u16`, `get_u64`, `get_bytes32`) perform direct slice indexing without bounds checking and will panic if the input data is too short.

- `GovernancePacket::deserialize` requires at least 35 bytes (reads through `get_u16(33)` plus the payload slice starting at offset 35)
- `ContractUpgrade::deserialize` requires at least 32 bytes (reads `get_u64(24)`)

Both methods must validate input length before calling these methods:

Proposed fix for GovernancePacket

```diff
 pub fn deserialize(data: &[u8]) -> StdResult<Self> {
+    if data.len() < 35 {
+        return ContractError::InvalidVAA.std_err();
+    }
     let data_ref: &[u8] = data;
```

Proposed fix for ContractUpgrade

```diff
 pub fn deserialize(data: &[u8]) -> StdResult<Self> {
+    if data.len() < 32 {
+        return ContractError::InvalidVAA.std_err();
+    }
     let data_ref: &[u8] = data;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/state.rs` around lines 148 - 176, GovernancePacket::deserialize and ContractUpgrade::deserialize currently call ByteUtils methods (get_bytes32, get_u8, get_u16, get_u64) that index slices without bounds checks and can panic; add explicit input-length checks at the top of each function (ensure data.len() >= 35 for GovernancePacket::deserialize and data.len() >= 32 for ContractUpgrade::deserialize) and return an appropriate StdResult::Err (constructed via StdError::generic_err or similar) when the input is too short, before calling get_bytes32/get_u8/get_u16/get_u64 or slicing payload.

_run/node/.envrc (1)
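The pattern the `state.rs` finding above asks for (validate length before fixed-offset reads) is language-agnostic. A hedged Go sketch of the same guard, using hypothetical `getU16`/`getU64` helpers rather than the contract's Rust `ByteUtils` trait:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

var errTooShort = errors.New("input too short")

// getU16 reads a big-endian u16 at offset, refusing inputs shorter than
// offset+2 instead of panicking on an out-of-range index.
func getU16(data []byte, offset int) (uint16, error) {
	if offset < 0 || len(data) < offset+2 {
		return 0, errTooShort
	}
	return binary.BigEndian.Uint16(data[offset:]), nil
}

// getU64 is the same guard for a big-endian u64 (needs offset+8 bytes).
func getU64(data []byte, offset int) (uint64, error) {
	if offset < 0 || len(data) < offset+8 {
		return 0, errTooShort
	}
	return binary.BigEndian.Uint64(data[offset:]), nil
}

func main() {
	// 30 bytes is too short for a u64 at offset 24 (needs 32), mirroring
	// the ContractUpgrade::deserialize minimum from the finding.
	if _, err := getU64(make([]byte, 30), 24); err != nil {
		fmt.Println("rejected:", err)
	}

	// 35 bytes is exactly enough for a u16 at offset 33, mirroring the
	// GovernancePacket::deserialize minimum.
	v, err := getU16(make([]byte, 35), 33)
	fmt.Println(v, err)
}
```

The point is the shape of the check: one up-front comparison per fixed-width read converts an abort into a typed error the caller can handle.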
1-1: ⚠️ Potential issue | 🟠 Major

Source the envrc so variables propagate.

Line 1 executes the file as a command, which won't export variables into the current direnv shell. Use `source_env` (or `source`) instead.

✅ Suggested fix

```diff
-../.envrc_run
+source_env ../.envrc_run
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/node/.envrc` at line 1, The current _run/node/.envrc simply executes ../.envrc_run which runs it as a command and does not export variables into the direnv environment; change the line so the file is sourced instead (use direnv's source_env or a shell source) so the environment variables defined in ../.envrc_run are exported into the current direnv shell.

app/app.go (1)
146-157: ⚠️ Potential issue | 🟠 Major

`ContractDebugMode` is not explicitly forced to `false` — production risk if node config enables it.

`ReadNodeConfig` reads the wasm config from the node's `app.toml`, which may have `contract_debug_mode = true`. Lines 152–154 override `MemoryCacheSize` and `SmartQueryGasLimit`, but `ContractDebugMode` is only mentioned in a comment (line 157). If a node operator enables debug mode in their config, it will silently remain enabled in production.

Uncomment and explicitly set it to `false`:

Proposed fix

```diff
 // Debug mode - MUST be false in production
-// Uncomment this for debugging contracts. In the future this could be made into a param passed by the tests
-//wasmConfig.ContractDebugMode = false
+wasmConfig.ContractDebugMode = false
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/app.go` around lines 146 - 157, The wasm config loaded via wasm.ReadNodeConfig(appOpts) can retain ContractDebugMode from node config; explicitly force it off by setting wasmConfig.ContractDebugMode = false right after the ReadNodeConfig call (near the existing MemoryCacheSize and SmartQueryGasLimit overrides) so the variable wasmConfig.ContractDebugMode cannot remain true in production.

_run/init.sh (1)
352-360: ⚠️ Potential issue | 🟠 Major

Node is terminated right after successful init.

`kill -SIGINT $NODE_PID` stops the node even though the script logs "Node is running." This makes the setup non-persistent.

💡 Suggested fix (keep node running)

```diff
-kill -SIGINT $NODE_PID
-# Keep the script running (wait for node process)
-wait $NODE_PID
+# Keep the script running (wait for node process)
+wait $NODE_PID
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/init.sh` around lines 352 - 360, The script currently sends kill -SIGINT $NODE_PID right after deploy_contracts and logs, which terminates the node; remove that kill -SIGINT $NODE_PID line so the node process (NODE_PID) remains running and let wait $NODE_PID keep the script alive; optionally add a trap (e.g., trap 'kill -SIGINT $NODE_PID' INT TERM EXIT) if you want a controlled shutdown on script exit, referencing deploy_contracts, NODE_PID, wait, and the log calls to locate the code to change.

_run/init.sh (1)
5-24: ⚠️ Potential issue | 🟠 Major

Nounset makes the current env checks fail before printing errors.

With `set -u`, `[[ -z "$VAR" ]]` on an unset variable exits immediately. Also, `AKASH_CHAIN_ID` and `AKASH_RUN_DIR` are used later without validation.

💡 Suggested fix (nounset-safe checks + missing vars)

```diff
-if [[ -z "$AKASH_HOME" ]]; then
-  echo "AKASH_HOME is not set"
-  exit 1
-fi
-
-if [[ -z "$KEYS" ]]; then
-  echo "KEYS is not set"
-  exit 1
-fi
-
-if [[ -z "$MNEMONIC" ]]; then
-  echo "MNEMONIC is not set"
-  exit 1
-fi
-
-if [[ -z "$CONTRACTS_DIR" ]]; then
-  echo "CONTRACTS_DIR is not set"
-  exit 1
-fi
+: "${AKASH_HOME:?AKASH_HOME is not set}"
+: "${KEYS:?KEYS is not set}"
+: "${MNEMONIC:?MNEMONIC is not set}"
+: "${CONTRACTS_DIR:?CONTRACTS_DIR is not set}"
+: "${AKASH_CHAIN_ID:?AKASH_CHAIN_ID is not set}"
+: "${AKASH_RUN_DIR:?AKASH_RUN_DIR is not set}"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/init.sh` around lines 5 - 24, The script's current env checks use [[ -z "$VAR" ]] which fails under set -u; change each check in _run/init.sh for AKASH_HOME, KEYS, MNEMONIC, CONTRACTS_DIR to nounset-safe forms (e.g. test against ${VAR-} or use parameter expansion defaults) so an unset variable doesn't cause immediate exit, and add the same nounset-safe checks for the two missing variables AKASH_CHAIN_ID and AKASH_RUN_DIR so all required env vars are validated before use.

_run/init.sh (1)
185-205: ⚠️ Potential issue | 🟠 Major

Add `--yes` flag to all `akash tx` commands for non-interactive execution.

`akash tx` commands hang on the broadcast confirmation prompt without `--yes`, which blocks execution in CI and unattended runs. This is a Cosmos SDK standard flag that must be included for automated workflows.

The following instances need the `--yes` flag added:

- Lines 185–205: `akash tx wasm store` and `akash tx wasm instantiate` (Wormhole)
- Lines 212–240: `akash tx wasm store` and `akash tx wasm instantiate` (Pyth)
- Lines 303–312: `akash tx gov submit-proposal` and `akash tx gov vote`

Also quote variable references (e.g., `"$admin_key"`) to avoid word-splitting issues.

Suggested fix

```diff
-akash tx wasm store "$WORMHOLE_WASM" --from $admin_key -o json
+akash tx wasm store "$WORMHOLE_WASM" --from "$admin_key" -o json --yes

 akash tx wasm instantiate "$wormhole_code_id" "$wormhole_init_msg" \
   --label "wormhole-local" \
   --admin "$admin_addr" \
-  --from $admin_key \
+  --from "$admin_key" \
+  --yes \

-akash tx wasm store "$PYTH_WASM" --from $admin_key
+akash tx wasm store "$PYTH_WASM" --from "$admin_key" --yes

 akash tx wasm instantiate "$pyth_code_id" "$pyth_init_msg" \
   --label "pyth" \
   --admin "$admin_addr" \
-  --from $admin_key
+  --from "$admin_key" \
+  --yes

-akash tx gov submit-proposal /tmp/oracle-params.json --from main \
+akash tx gov submit-proposal /tmp/oracle-params.json --from main --yes \

-akash tx gov vote 1 yes --from validator
+akash tx gov vote 1 yes --from validator --yes
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/init.sh` around lines 185 - 205, The akash tx invocations (e.g., the calls to "akash tx wasm store" and "akash tx wasm instantiate" that use variables like admin_key and admin_addr) must run non-interactively and must safely pass shell variables; update each akash tx command in this script to include the --yes flag and ensure all variable references are quoted (e.g., "$admin_key", "$admin_addr") to avoid word-splitting; apply the same change pattern to the other akash tx lines noted in the comment (wasm store/instantiate for Pyth and gov submit-proposal/vote) so all automated CI/unattended runs proceed without prompts and with safe variable expansion.

contracts/pyth/src/contract.rs.bak (1)
56-92: ⚠️ Potential issue | 🟠 Major

`Config` initialization is missing required fields.

`Config` in `contracts/pyth/src/state.rs` includes `wormhole_contract`, `default_data_id`, and `data_sources`. This initializer only sets three fields, so it won't compile or leaves state incomplete if this file is meant to be used. Please align `InstantiateMsg`/`Config` or remove the `.bak` file to avoid drift.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs.bak` around lines 56 - 92, The instantiate function currently builds a Config with only admin, update_fee, and price_feed_id but the Config struct (in state.rs) also requires wormhole_contract, default_data_id, and data_sources; update the instantiate logic inside the instantiate function to populate all required Config fields (fetch/validate msg.wormhole_contract via deps.api.addr_validate, set default_data_id from msg or chain lookup, initialize data_sources from msg or empty vec) or adjust InstantiateMsg/Config definitions to match; ensure you reference and save the fully populated Config (CONFIG.save) so compilation succeeds and state is complete.

contracts/wormhole/src/contract.rs (1)
87-111: ⚠️ Potential issue | 🟠 Major

Avoid `unwrap()` on untrusted VAA data.

A non-UTF8 module field will panic. Return a controlled error instead.

🔧 Proposed fix

```diff
-    let module = String::from_utf8(gov_packet.module).unwrap();
+    let module = String::from_utf8(gov_packet.module)
+        .map_err(|_| StdError::msg("governance module is not valid UTF-8"))?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/contract.rs` around lines 87 - 111, The code in handle_governance_payload uses String::from_utf8(...).unwrap() on untrusted VAA bytes which can panic on non-UTF8 input; replace the unwrap with proper error handling by mapping the from_utf8 error into a StdError (e.g., via map_err or match) and return that StdResult error instead of panicking, then continue to strip null chars as before; update the code around the module parsing (the String::from_utf8(gov_packet.module) call and subsequent module filtering) so non-UTF8 modules yield a controlled StdError (with a clear message like "invalid module string in governance VAA").

contracts/wormhole/src/contract.rs (1)
61-85: ⚠️ Potential issue | 🟠 Major

Archive VAAs only after successful execution.

`VAA_ARCHIVE.save` happens before `handle_governance_payload`. If governance handling fails (or is fixed later), the VAA can't be retried. Consider archiving only after a successful payload execution.

🔧 Proposed fix

```diff
-    VAA_ARCHIVE.save(deps.storage, vaa.hash.as_slice(), &true)?;
-
-    if state.gov_chain == vaa.emitter_chain && state.gov_address == vaa.emitter_address {
-        return handle_governance_payload(deps, env, &vaa.payload);
-    }
+    if state.gov_chain == vaa.emitter_chain && state.gov_address == vaa.emitter_address {
+        let res = handle_governance_payload(deps, env, &vaa.payload)?;
+        VAA_ARCHIVE.save(deps.storage, vaa.hash.as_slice(), &true)?;
+        return Ok(res);
+    }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/contract.rs` around lines 61 - 85, handle_submit_vaa currently archives the VAA with VAA_ARCHIVE.save before executing the governance payload, which prevents retries if handle_governance_payload fails; move the VAA_ARCHIVE.save call so it only runs after a successful execution by calling handle_governance_payload(deps, env, &vaa.payload) first, check its StdResult<Response> for Ok and then persist the archive (using the same vaa.hash.as_slice()) before returning the Response; keep parse_and_verify_vaa as-is and ensure non-governance paths still return the ContractError without saving.
🟡 Minor comments (25)
x/epochs/alias.go (1)
8-9: ⚠️ Potential issue | 🟡 Minor

Stale copy-paste comment — references "wasm module" instead of "epochs".

The doc comment on `StoreKey` was likely copied from the wasm alias file and never updated.

✏️ Proposed fix

```diff
-	// StoreKey represents storekey of wasm module
+	// StoreKey represents storekey of epochs module
 	StoreKey = types.StoreKey
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/epochs/alias.go` around lines 8 - 9, The comment on the exported identifier StoreKey is a stale copy-paste referencing "wasm module"; update the doc comment to accurately describe StoreKey in the epochs module (e.g., "StoreKey represents the store key used by the epochs module") so the comment matches the symbol StoreKey and the module context; locate the declaration of StoreKey in alias.go and replace the incorrect "wasm module" wording with a concise, correct description referring to epochs.

x/epochs/simulation/genesis.go (1)
30-31: ⚠️ Potential issue | 🟡 Minor

Update stale comment to match module context.

Comment says "distribution" but this function generates epochs genesis state.

✏️ Suggested fix

```diff
-// RandomizedGenState generates a random GenesisState for distribution
+// RandomizedGenState generates a random GenesisState for epochs
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/epochs/simulation/genesis.go` around lines 30 - 31, Update the stale comment above RandomizedGenState to reflect the epochs module context: change "RandomizedGenState generates a random GenesisState for distribution" to something like "RandomizedGenState generates a random GenesisState for epochs" (or similar wording referencing epochs), ensuring the comment accurately describes RandomizedGenState, GenesisState and the module.SimulationState parameter.

tests/upgrade/workers_test.go (1)
87-122: ⚠️ Potential issue | 🟡 Minor

Avoid asserting non-nil response on expected error paths.

`BroadcastMsgs` can legitimately return a nil response when it returns an error, which would make this test flaky. Gate the type assertion on non-nil responses (and drop `require.NotNil`) for both error-path broadcasts.

🔧 Suggested fix

```diff
 resp, err := mcl.Tx().BroadcastMsgs(ctx, []sdk.Msg{msg})
 require.Error(t, err)
-require.NotNil(t, resp)
-require.IsType(t, &sdk.TxResponse{}, resp)
+if resp != nil {
+	require.IsType(t, &sdk.TxResponse{}, resp)
+}
 require.ErrorIs(t, err, sdkerrors.ErrUnauthorized)
@@
 resp, err = mcl.Tx().BroadcastMsgs(ctx, []sdk.Msg{govMsg})
 require.Error(t, err)
-require.NotNil(t, resp)
-require.IsType(t, &sdk.TxResponse{}, resp)
+if resp != nil {
+	require.IsType(t, &sdk.TxResponse{}, resp)
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/upgrade/workers_test.go` around lines 87 - 122, The test currently asserts require.NotNil(t, resp) after calls to mcl.Tx().BroadcastMsgs which can return a nil response on error; remove the require.NotNil checks and change the require.IsType assertions to only run when resp != nil (e.g. guard the type check with an if resp != nil) for both the wasm store broadcast and the govMsg broadcast so the test no longer fails when BroadcastMsgs returns (nil, err).

testutil/state/suite.go (1)
225-231: ⚠️ Potential issue | 🟡 Minor

Initialize `ms` to avoid `Store()` returning nil.

`TestSuite.Store()` returns `ts.ms`, but `ms` is never set in the constructor. That leaves a nil multi-store and can break tests using `Store()`.

🛠️ Suggested fix

```diff
 return &TestSuite{
 	t:           t,
+	ms:          app.CommitMultiStore(),
 	app:         app,
 	ctx:         ctx,
 	keepers:     keepers,
 	priceFeeder: priceFeeder,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testutil/state/suite.go` around lines 225 - 231, The TestSuite constructor returns a TestSuite whose ms field is never set, so Store() can return nil; set the ms field when constructing the TestSuite (e.g., ms: app.CommitMultiStore() or the app method that returns the multi-store / state store such as app.GetMultiStore()), so that TestSuite.Store() (ts.ms) is initialized and non-nil; update the composite literal in the constructor that returns &TestSuite{...} to include ms: <app's multi-store accessor>.

Cargo.toml (1)
1-23: ⚠️ Potential issue | 🟡 Minor

Workspace release profile looks correct for CosmWasm production contracts.

The `panic = 'abort'` and `overflow-checks = true` combination is appropriate — `abort` avoids unwinding overhead in Wasm, and overflow checks are cheap on Wasm targets.

One note: the current stable Rust release is 1.93.1, while `rust-version = "1.93.0"` pins the MSRV to a version released on January 22, 2026. This is a very high (recent) MSRV — ensure all CI runners and contributor toolchains are on ≥ 1.93.0, as any environment running an older toolchain will fail to build.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` around lines 1 - 23, The workspace.package rust-version is pinned to "1.93.0" which is behind the current stable 1.93.1; update the rust-version value in Cargo.toml's [workspace.package] (the rust-version = "1.93.0" entry) to "1.93.1" (or to the exact MSRV your CI uses) and ensure CI toolchains and contributor docs are aligned to >=1.93.0 so builds don't fail for contributors using the current stable toolchain.

contracts/wormhole/Cargo.toml (1)
4-4: ⚠️ Potential issue | 🟡 Minor

Malformed `authors` entry — missing closing `>` in email format.

`"Artur Troian <troian.ap@gmail.com"` is missing the closing angle bracket.

🔧 Proposed fix

```diff
-authors = ["Artur Troian <troian.ap@gmail.com"]
+authors = ["Artur Troian <troian.ap@gmail.com>"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/Cargo.toml` at line 4, The authors entry in Cargo.toml is malformed: update the authors array element "Artur Troian <troian.ap@gmail.com" to include the missing closing angle bracket so it follows the proper "Name <email>" format; edit the authors line in contracts/wormhole/Cargo.toml (the authors entry) to add the trailing '>' character to the email address.

contracts/pyth/Cargo.toml (1)
4-4: ⚠️ Potential issue | 🟡 Minor

Same malformed `authors` entry as `contracts/wormhole/Cargo.toml` — missing closing `>` in email format.

🔧 Proposed fix

```diff
-authors = ["Artur Troian <troian.ap@gmail.com"]
+authors = ["Artur Troian <troian.ap@gmail.com>"]
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/Cargo.toml` at line 4, The authors entry in the Cargo.toml for the pyth contract is malformed (the email is missing a closing '>'); update the authors field value for the authors key (the "authors" entry) to include the proper email angle-bracket closing so the entry is a valid TOML string (e.g., fix the authors = ["Artur Troian <troian.ap@gmail.com>"] format).

contracts/pyth/Cargo.toml (1)
24-24: ⚠️ Potential issue | 🟡 Minor

`sha3` is missing `default-features = false` — inconsistent with `contracts/wormhole/Cargo.toml` and potentially problematic for the `wasm32-unknown-unknown` target.

In the wormhole crate, `sha3` is declared with `default-features = false`. The pyth crate omits it, which may pull in `std`-dependent features and either inflate the wasm binary or cause a compile error when targeting `wasm32-unknown-unknown`.

🔧 Proposed fix

```diff
-sha3 = "0.10"
+sha3 = { version = "0.10", default-features = false }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/Cargo.toml` at line 24, The sha3 dependency in this crate is missing the explicit opt-out of default features; update the sha3 dependency declaration (the sha3 entry in Cargo.toml) to include default-features = false so it matches the contracts/wormhole crate and avoids pulling in std-only features for the wasm32-unknown-unknown target; if the crate needs specific features, add them explicitly in the same dependency block.

x/deployment/keeper/keeper_test.go (1)
96-97: ⚠️ Potential issue | 🟡 Minor

Restore the no-event assertion on failure.

Leaving the check commented out weakens negative-case coverage for failed creates.

Proposed fix

```diff
-	//assert.Empty(t, ctx.EventManager().Events())
+	assert.Empty(t, ctx.EventManager().Events())
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/deployment/keeper/keeper_test.go` around lines 96 - 97, Uncomment and restore the negative-case assertion so the test verifies no events are emitted on failed creates: re-enable the assert.Empty(t, ctx.EventManager().Events()) call (using the existing ctx variable and EventManager() call) in keeper_test.go, and if needed ensure the testify assert package is imported and t/context are in scope so the assertion runs during the test. This ensures the test fails if any events were emitted for the unsuccessful create.

_docs/pyth-integration.md (1)
97-141: ⚠️ Potential issue | 🟡 Minor

Add language identifiers to fenced blocks (MD040).

markdownlint flags these fences as missing a language. Please add a language tag (e.g., `text`) to the diagram, internal flow, cost estimate, artifact list, and error snippet blocks.

✏️ Example fix (apply similarly to the other flagged fences)

````diff
-```
+```text
 ┌──────────────────────────────────────────────────────────────┐
````

Also applies to: 238-245, 438-443, 554-557, 896-898
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 97 - 141, The markdown has several fenced code blocks (the large Pyth Network diagram and other blocks at the indicated ranges) missing language identifiers triggering MD040; update each triple-backtick fence around the diagram and the other flagged snippets (e.g., the block starting with the Pyth Network ASCII diagram and the blocks at lines 238-245, 438-443, 554-557, 896-898) to include a language tag such as text (change ``` to ```text) so markdownlint no longer flags them; ensure you apply the same change to every similar fenced block in this file.

x/epochs/keeper/hooks.go (1)
9-26: ⚠️ Potential issue | 🟡 Minor

Fix epoch hook doc comments (typo/grammar).

The comments reference the wrong keeper and have grammar issues; tweak to avoid confusion.

✏️ Suggested comment cleanup

```diff
-// Hooks gets the hooks for governance Keeper
+// Hooks gets the hooks for the epochs keeper.
@@
-// BeforeEpochStart new epoch is next block of epoch end block
+// BeforeEpochStart is called on the block immediately after an epoch ends.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/epochs/keeper/hooks.go` around lines 9 - 26, Update the Go doc comments in the keeper hooks file to correctly reference the epochs keeper and fix grammar: change the top comment on Hooks() to "Hooks gets the hooks for epochs Keeper" (or "epochs keeper") and reword the AfterEpochEnd and BeforeEpochStart comments to be clear and grammatical (e.g., "AfterEpochEnd is called at the end of an epoch; end of epoch is the timestamp of the first block produced after the epoch duration." and "BeforeEpochStart is called at the start of a new epoch; the new epoch begins at the block following the epoch end."). Ensure comments are placed immediately above the corresponding functions Hooks(), AfterEpochEnd, and BeforeEpochStart and reference the same function names exactly.

tests/e2e/oracle_grpc_test.go (1)
159-160: ⚠️ Potential issue | 🟡 Minor

Hard `Require().False(configResp.Enabled)` is a fragile default-value assumption.

The comment says the config "may not be enabled by default," yet the assertion unconditionally requires `false`. If any test genesis configuration enables the price feed, this assertion fails spuriously. Use `s.Require().NoError(err)` alone (already present) and remove the assumption, or mark it as an explicit expectation with a clear rationale.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/oracle_grpc_test.go` around lines 159 - 160, The test currently asserts a hard default with s.Require().False(configResp.Enabled) which is fragile; remove that assertion and rely on the existing s.Require().NoError(err) (or, if you need to assert an expected state, replace it with an explicit expectation/comment explaining why the price feed must be disabled in this test). Locate the check around configResp (the configResp variable and the call using s.Require().False) and either delete the Require().False(configResp.Enabled) line or convert it into a documented expectation so the test no longer fails when genesis enables the price feed.

x/epochs/keeper/grpc_query.go (1)
53-56: ⚠️ Potential issue | 🟡 Minor

Use `status.Error(codes.NotFound, ...)` instead of `errors.New` for proper gRPC semantics.

A plain `errors.New` is wrapped as `codes.Unknown` by the gRPC layer, making it impossible for clients to distinguish "not found" from an unexpected internal error. `codes` and `status` are already imported.

🛠️ Proposed fix

```diff
-	return nil, errors.New("not available identifier")
+	return nil, status.Errorf(codes.NotFound, "epoch not found for identifier %q", req.Identifier)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/epochs/keeper/grpc_query.go` around lines 53 - 56, Replace the plain error returned when q.GetEpoch(sctx, req.Identifier) fails with a gRPC NotFound status: detect the error from GetEpoch and return status.Error(codes.NotFound, ...) with a clear message referencing req.Identifier (instead of errors.New("not available identifier")), so callers receive a codes.NotFound gRPC code; keep the same control flow around q.GetEpoch and sctx.

tests/e2e/oracle_grpc_test.go (1)
96-97: ⚠️ Potential issue | 🟡 Minor
`NotNil` assertion on a proto3 repeated field fails for an empty price list.
nilin Go (the field is omitted in JSON entirely). An oracle with no price data returns a response wherePricesisnil, causingRequire().NotNil(pricesResp.Prices)to fail — directly contradicting the "Prices may be empty" comment. The same issue applies to line 136 in the REST path.Either remove the
NotNilassertion entirely (theNoErrorcheck is sufficient), or uses.Require().GreaterOrEqual(len(pricesResp.Prices), 0)if you want to document the type.🐛 Proposed fix (applies to both line 97 and line 136)
```diff
-	// Prices may be empty if no price data has been fed
-	s.Require().NotNil(pricesResp.Prices)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/oracle_grpc_test.go` around lines 96 - 97, The test currently asserts s.Require().NotNil(pricesResp.Prices) but proto3 repeated fields can be nil when empty; remove the NotNil assertion (or replace it with a length-based check such as s.Require().GreaterOrEqual(len(pricesResp.Prices), 0)) to allow nil/empty price lists; update the gRPC check that references pricesResp.Prices (and apply the same change to the analogous REST assertion) so the test only relies on the existing NoError validation.

contracts/wormhole/src/byte_utils.rs (1)
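The nil-versus-empty distinction behind the proto3 repeated-field finding above can be demonstrated with nothing but the stdlib; `prices` here is a stand-in for the decoded `Prices` field, not the generated type:

```go
package main

import "fmt"

func main() {
	// A proto3 repeated field with no elements decodes to a nil slice.
	var prices []string

	fmt.Println(prices == nil) // true: this is what trips a NotNil assertion
	fmt.Println(len(prices))   // 0: yet len and range on a nil slice are fine

	for range prices {
		// never runs; ranging over a nil slice is safe
	}

	// Length-based checks treat nil and empty identically.
	empty := []string{}
	fmt.Println(len(prices) == len(empty)) // true
}
```

This is why the `NoError` check alone (or a `len`-based check) is the robust assertion for a possibly-empty list.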
56-61: ⚠️ Potential issue | 🟡 Minor
`extend_address_to_32` panics on usize underflow when address exceeds 32 bytes.
let start = 32 - addr_bytes.len()is an uncheckedusizesubtraction. Ifaddr_bytes.len() > 32(e.g., malformed input), this overflows—panicking in debug builds and producing a massivestartvalue (then OOB incopy_from_slice) in release builds, both resulting in a contract abort with an unhelpful message.🛡️ Proposed fix
```diff
 pub fn extend_address_to_32(addr: &CanonicalAddr) -> Vec<u8> {
+    let addr_bytes = addr.as_slice();
+    assert!(addr_bytes.len() <= 32, "address too long: {} bytes", addr_bytes.len());
     let mut result = vec![0u8; 32];
-    let addr_bytes = addr.as_slice();
     let start = 32 - addr_bytes.len();
     result[start..].copy_from_slice(addr_bytes);
     result
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/byte_utils.rs` around lines 56 - 61, extend_address_to_32 currently does unchecked subtraction (32 - addr_bytes.len()) which underflows when addr_bytes.len() > 32; fix by computing the start index with a guarded branch: if addr_bytes.len() <= 32, pad on the left as before using start = 32 - len, else take the last 32 bytes of addr_bytes (e.g., slice addr_bytes[addr_bytes.len()-32..]) and copy that into result so you never perform an underflow or out-of-bounds copy; update the function extend_address_to_32 (and any use of CanonicalAddr.as_slice()) to implement this guarded logic.

_run/node/Makefile (1)
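The guard the `byte_utils.rs` finding above calls for translates directly to other languages. A hypothetical Go version of the left-padding helper (without the guard, the slice write in Go would also panic, just with a different failure mode than Rust's usize underflow):

```go
package main

import (
	"errors"
	"fmt"
)

// extendAddressTo32 left-pads addr with zero bytes to a fixed 32-byte width,
// returning an error instead of panicking when addr is already longer than
// 32 bytes. Illustrative translation, not the contract's Rust helper.
func extendAddressTo32(addr []byte) ([]byte, error) {
	if len(addr) > 32 {
		return nil, errors.New("address too long")
	}
	out := make([]byte, 32)
	copy(out[32-len(addr):], addr) // guarded: 32-len(addr) is never negative
	return out, nil
}

func main() {
	got, _ := extendAddressTo32([]byte{0xAA, 0xBB})
	fmt.Println(len(got), got[30], got[31]) // 32 170 187

	// Malformed over-long input is refused rather than aborting.
	if _, err := extendAddressTo32(make([]byte, 33)); err != nil {
		fmt.Println("rejected:", err)
	}
}
```

Whether the right policy is to error out (as here) or to truncate to the last 32 bytes (as the AI prompt suggests) depends on the protocol; either way the length check must come first.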
1-5: ⚠️ Potential issue | 🟡 Minor

checkmake warns about missing standard phony targets.

If checkmake runs in CI, it will still warn about missing `all`, `clean`, and `test`. Consider adding stubs here or adjusting the checkmake config to account for the included common makefiles.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/node/Makefile` around lines 1-5: the Makefile currently only defines the clean-node target and includes common.mk/common-commands.mk, which triggers checkmake warnings about missing standard phony targets; add stub phony targets for all, clean, and test (or update the checkmake config) so CI stops warning — e.g., declare .PHONY for all, clean, test and add empty recipe stubs for those targets alongside the existing clean-node to satisfy checkmake while preserving the included common files.

_run/.env (1)
1-8: ⚠️ Potential issue | 🟡 Minor

Address dotenv-linter ordering warnings.

Reordering these keys keeps lint clean.

🔧 Suggested reorder

```diff
-AKASH_KEYRING_BACKEND=test
-AKASH_GAS_ADJUSTMENT=2
 AKASH_CHAIN_ID=local
-AKASH_YES=true
-AKASH_GAS_PRICES=0.025uakt
 AKASH_GAS=auto
+AKASH_GAS_ADJUSTMENT=2
+AKASH_GAS_PRICES=0.025uakt
+AKASH_KEYRING_BACKEND=test
 AKASH_NODE=http://localhost:26657
+AKASH_YES=true
 CONTRACTS_DIR=$AKASH_DEVCACHE/cosmwasm
```
Verify each finding against the current code and only fix it if needed. In `@_run/.env` around lines 1-8: reorder the environment variables into the expected alphabetical order to satisfy dotenv-linter (sort by variable name and preserve any relevant grouping), e.g. AKASH_CHAIN_ID, AKASH_GAS, AKASH_GAS_ADJUSTMENT, AKASH_GAS_PRICES, AKASH_KEYRING_BACKEND, AKASH_NODE, AKASH_YES, CONTRACTS_DIR (or whatever order your linter requires); update the block that defines AKASH_KEYRING_BACKEND, AKASH_GAS_ADJUSTMENT, AKASH_CHAIN_ID, AKASH_YES, AKASH_GAS_PRICES, AKASH_GAS, AKASH_NODE, CONTRACTS_DIR to the correct sorted order so the dotenv-linter warnings go away.

.env (1)
1-18: ⚠️ Potential issue | 🟡 Minor

Address dotenv-linter ordering warnings.

The linter flags key order (e.g., CGO_ENABLED before GO111MODULE, and AKASH_DEVCACHE_INCLUDE before AKASH_DEVCACHE_LIB). Reordering avoids noisy CI warnings.

🔧 Suggested reorder

```diff
-GO111MODULE=on
-CGO_ENABLED=1
+CGO_ENABLED=1
+GO111MODULE=on
@@
-AKASH_DEVCACHE_BIN=${AKASH_DEVCACHE}/bin
-AKASH_DEVCACHE_LIB=${AKASH_DEVCACHE}/lib
-AKASH_DEVCACHE_INCLUDE=${AKASH_DEVCACHE}/include
-AKASH_DEVCACHE_VERSIONS=${AKASH_DEVCACHE}/versions
-AKASH_DEVCACHE_NODE_MODULES=${AKASH_DEVCACHE}
+AKASH_DEVCACHE_BIN=${AKASH_DEVCACHE}/bin
+AKASH_DEVCACHE_INCLUDE=${AKASH_DEVCACHE}/include
+AKASH_DEVCACHE_LIB=${AKASH_DEVCACHE}/lib
+AKASH_DEVCACHE_NODE_MODULES=${AKASH_DEVCACHE}
+AKASH_DEVCACHE_VERSIONS=${AKASH_DEVCACHE}/versions
```
Verify each finding against the current code and only fix it if needed. In @.env around lines 1-18: reorder the entries to satisfy dotenv-linter: place CGO_ENABLED before GO111MODULE (so CGO_ENABLED=1 then GO111MODULE=on) and ensure AKASH_DEVCACHE_INCLUDE comes before AKASH_DEVCACHE_LIB (i.e., AKASH_DEVCACHE, AKASH_DEVCACHE_BIN, AKASH_DEVCACHE_INCLUDE, AKASH_DEVCACHE_LIB, AKASH_DEVCACHE_NODE_BIN, AKASH_DEVCACHE_NODE_MODULES, AKASH_DEVCACHE_VERSIONS), keeping all other AKASH_RUN and CARGO_TARGET_DIR lines in place; update the file so the keys are in the expected lexical/grouped order to remove the linter warnings.

x/epochs/keeper/abci.go (1)
75-79: ⚠️ Potential issue | 🟡 Minor

Store write failure silently continues — epoch state may become inconsistent.

If `EpochInfo.Set` fails (line 75), the error is logged but the callback returns `false, nil`, meaning the walk continues to process other epochs. However, the `EventEpochStart` event on lines 68-74 has already been emitted for this epoch. Consumers of that event will believe the epoch started, but the state was never persisted.

Consider returning the error to halt iteration and propagate it, matching the behavior on line 48 for event emission errors.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/epochs/keeper/abci.go` around lines 75-79: the callback emits EventEpochStart then attempts to persist epochInfo but swallows write errors by logging and returning (false, nil), causing the event to be emitted while state wasn't saved; change the error handling after k.EpochInfo.Set to return (false, err) (or simply return the error up) so iteration halts and the write error is propagated, mirroring the error handling used for EventEpochStart emission failures and ensuring consistency between emitted events and persisted state; locate the error handling around EpochInfo.Set in the same function where EventEpochStart is emitted to apply this change.

app/app.go (1)
319-345: ⚠️ Potential issue | 🟡 Minor

Commented-out ordering constraints for `oracle` and `bme` modules.

Lines 332–335 have commented-out `ord.Before(...)` calls for `oracle` and `bme`. If these modules are present in `modules` (which they are per the AI summary), they'll be placed in an arbitrary position by `TotalOrdering()`. If oracle truly must precede bme, and bme must precede escrow, leaving these commented out risks non-deterministic ordering for those modules.

If these modules aren't wired yet, consider adding a `// TODO` with a tracking issue. If they are wired, uncomment the constraints.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/app.go` around lines 319-345: the commented-out ord.Before constraints for oracle and bme in orderBeginBlockers lead to non-deterministic BeginBlock ordering if oracle, bme, or escrow are present; restore the intended ordering by uncommenting the ord.Before(oracle.ModuleName, bme.ModuleName) and ord.Before(bme.ModuleName, escrow.ModuleName) calls in orderBeginBlockers (or replace with explicit ord.Sequence calls) so the PartialOrdering enforces oracle → bme → escrow; if those modules are not yet wired, add a TODO noting why the constraints remain commented and reference oracle.ModuleName, bme.ModuleName, escrow.ModuleName and ord.Before so reviewers can trace the change.

_run/node/docker-compose.yaml (1)
70-71: ⚠️ Potential issue | 🟡 Minor

Comment says "1 minute" but value is 10 seconds.

`UPDATE_INTERVAL_MS=10000` is 10 seconds, not 1 minute (which would be 60000). Update the comment or the value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/node/docker-compose.yaml` around lines 70-71: the comment says "1 minute for testing" but UPDATE_INTERVAL_MS is set to 10000 (10 seconds); change the value of the environment variable UPDATE_INTERVAL_MS in docker-compose.yaml to 60000 to match the 1-minute comment (or alternatively change the comment to reflect 10 seconds); locate the UPDATE_INTERVAL_MS entry and make the value/comment consistent.

_run/node/docker-compose.yaml (1)
26-26: ⚠️ Potential issue | 🟡 Minor

Hardcoded `arm64` image tag limits portability.

`ghcr.io/akash-network/node:latest-arm64` won't work on amd64 machines. Consider using a multi-arch tag (e.g., `latest`) or parameterizing the architecture.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/node/docker-compose.yaml` at line 26: the docker-compose service currently pins the image to a hardcoded arm64 tag ("ghcr.io/akash-network/node:latest-arm64") which breaks on amd64 hosts; update the service to use a multi-arch tag (e.g., "ghcr.io/akash-network/node:latest") or parameterize the tag via an environment variable or compose variable (e.g., NODE_IMAGE or ARCH) so the image can be selected per-platform; alternatively add the Compose "platform" field to explicitly request arm64 when needed. Target the image declaration line (the "image:" value) to make this change.

_run/node/docker-compose.yaml (1)
12-17: ⚠️ Potential issue | 🟡 Minor

Usage comments reference wrong file path.

The comments say `docker-compose -f _build/docker-compose.yaml ...` but this file is at `_run/node/docker-compose.yaml`.

Proposed fix (example correction)

```diff
-# docker-compose -f _build/docker-compose.yaml up -d                  # Start all services
-# docker-compose -f _build/docker-compose.yaml logs -f                # View all logs
-# docker-compose -f _build/docker-compose.yaml logs -f validator      # View node logs
-# docker-compose -f _build/docker-compose.yaml logs -f hermes-client  # View hermes logs
-# docker-compose -f _build/docker-compose.yaml down                   # Stop services
-# docker-compose -f _build/docker-compose.yaml down -v                # Stop and clean volumes
+# docker-compose -f _run/node/docker-compose.yaml up -d                  # Start all services
+# docker-compose -f _run/node/docker-compose.yaml logs -f                # View all logs
+# docker-compose -f _run/node/docker-compose.yaml logs -f validator      # View node logs
+# docker-compose -f _run/node/docker-compose.yaml logs -f hermes-client  # View hermes logs
+# docker-compose -f _run/node/docker-compose.yaml down                   # Stop services
+# docker-compose -f _run/node/docker-compose.yaml down -v                # Stop and clean volumes
```
Verify each finding against the current code and only fix it if needed. In `@_run/node/docker-compose.yaml` around lines 12-17: the commented docker-compose usage lines reference the wrong compose file string "_build/docker-compose.yaml"; update those comment lines in the docker-compose comment block to reference the correct compose file name used in this file (replace occurrences of "_build/docker-compose.yaml" in the commented commands with the actual compose filename used here, or simply remove the -f flag if the comments should assume running from this file's directory) so the examples like "docker-compose -f _build/docker-compose.yaml up -d" become correct; search for the exact string "_build/docker-compose.yaml" in the comment block and replace it accordingly.

_run/common.mk (1)
18-47: ⚠️ Potential issue | 🟡 Minor

Hard‑coded mnemonic should be dev‑only or sourced from env.

Keeping a seed phrase in VCS makes accidental reuse outside local dev more likely. Consider loading it from `_run/.env` or requiring it via environment for local runs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/common.mk` around lines 18-47: the MNEMONIC variable is hard‑coded in the makefile, which leaks a seed phrase; change the MNEMONIC export to read from the environment or a local dev-only file and only fall back to a dev default if explicitly allowed: replace the literal MNEMONIC assignment with logic that loads from an environment variable (e.g. MNEMONIC from the process env or exported in _run/.env) or sources _run/.env, ensure the exported MNEMONIC uses that value (refer to the MNEMONIC symbol and the export MNEMONIC line), and add a comment or guard so a default mnemonic is only used for non-production local runs.

tests/e2e/pyth_contract_test.go (1)
291-295: ⚠️ Potential issue | 🟡 Minor

Inconsistent `InstantiatePermission` between test and helper.

`TestStoreContractViaGovernance` (line 294) uses `&wasmtypes.AllowNobody`, while `SubmitStoreCodeProposal` (line 799) uses `AccessTypeEverybody`. This means the test and the reusable helper have contradictory permission semantics. If both are intended to be used in production-like flows, one of them is wrong.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/pyth_contract_test.go` around lines 291 - 295, The InstantiatePermission used in TestStoreContractViaGovernance (currently &wasmtypes.AllowNobody) is inconsistent with the permission used in SubmitStoreCodeProposal (AccessTypeEverybody); update TestStoreContractViaGovernance to use the same permission object/value as SubmitStoreCodeProposal (replace &wasmtypes.AllowNobody with the AccessTypeEverybody-based permission used in SubmitStoreCodeProposal) so both the test and the helper use identical InstantiatePermission semantics.
app/app_configure.go
```go
	oracle.ModuleName,
	epochs.ModuleName,
	awasm.ModuleName,
	wasmtypes.ModuleName,
}
```
🧩 Analysis chain
🏁 Script executed:

```shell
#!/bin/bash
# Inspect ModuleName constants for wasm-related modules in this repo.
rg -n --type=go 'ModuleName\s*=' -g '*wasm*/*.go'
```

Repository: akash-network/node
Length of output: 1735

🏁 Script executed:

```shell
#!/bin/bash
# Find wasm-related files to understand the module structure
find . -type f -name '*wasm*' | grep -E '\.(go|proto)$' | head -20
```

Repository: akash-network/node
Length of output: 1735

🏁 Script executed:

```shell
#!/bin/bash
# Search for awasm module definition and imports
rg -n 'awasm' --type=go -A 3 -B 3 | head -50
```

Repository: akash-network/node
Length of output: 1735
🌐 Web query:
awasm ModuleName Akash custom wasm module
💡 Result:
On Akash (a Cosmos SDK chain), ModuleName is just the Cosmos-SDK module identifier string each module exposes (used for routing, store keys, begin/end blocker ordering, etc.). Akash’s app wires multiple modules together via the module manager (including Akash-specific modules like deployment, market, provider, escrow, audit, cert, take, etc.). [1]
If you mean a custom Akash (Cosmos-SDK) module
You typically:
- Create `x/<yourmodule>/types/keys.go` (or similar) and define `const ModuleName = "<yourmodule>"`, `StoreKey = ModuleName`, etc.
- Implement `AppModuleBasic` / `AppModule`.
- Register it in the Akash app's module setup (module manager + execution ordering). [1]
If you mean a CosmWasm (x/wasm) module on Akash
Akash’s roadmap proposal for enabling CosmWasm explicitly calls for integrating the standard CosmWasm x/wasm module into the Akash node. [2]
In CosmWasm/wasmd, the wasm module’s name constant is:
`ModuleName = "wasm"` [3]
If you tell me where you’re seeing awasm (code snippet / repo / error message), I can map it to the exact ModuleName you need.
🌐 Web query:
wasmtypes ModuleName CosmWasm upstream module
💡 Result:
In CosmWasm’s upstream Cosmos SDK module (x/wasm, from the github.com/CosmWasm/wasmd repo), the wasmtypes package (github.com/CosmWasm/wasmd/x/wasm/types) defines:
`ModuleName = "wasm"` (and `StoreKey`, `QuerierRoute`, `RouterKey` all derive from it). [1]
- This corresponds to the Cosmos SDK
`x/wasm` module, which CosmWasm describes as “the root of CosmWasm” in the Cosmos stack. [2]
Sources: [1] pkg.go.dev github.com/CosmWasm/wasmd/x/wasm/types (constants section); [2] CosmWasm docs (architecture: actor, describing x/wasm).
🌐 Web query:
Cosmos SDK wasm modules awasm wasmtypes difference
💡 Result:
In Cosmos SDK/wasmd (CosmWasm) codebases, these names usually refer to Go packages, not different “wasm modules” on-chain:
-
- `wasm`: the module implementation package (i.e., `github.com/CosmWasm/wasmd/x/wasm`). This is where the Cosmos SDK module wiring lives (e.g., the AppModule, begin/end blockers, services, etc.). You import it when you register/configure the wasm module in your app. In `wasmd`'s app, it's imported as `wasm`. [1]
- `wasmtypes`: the types subpackage (`github.com/CosmWasm/wasmd/x/wasm/types`). This holds the "shared definitions" used by the module and by other modules/apps: module name constants, Params/Genesis types, Msg/Query protobuf-generated types, errors, and various limits (e.g. `MaxWasmSize`, etc.). In `wasmd`, those compile-time limits are explicitly referenced as `wasmtypes.*`. [1][2]
- `awasm`: this is typically just a local import alias some chains choose in `app.go` (e.g., `awasm "github.com/CosmWasm/wasmd/x/wasm"`), usually to avoid a naming collision with some other `wasm` identifier in the same file. It's not a different package/module than `x/wasm`; it's just renamed at import-time by the application developer. (In upstream `wasmd` it's imported as `wasm`, not `awasm`.) [1]
Rule of thumb: import x/wasm (wasm/awasm) when you need the module; import x/wasm/types (wasmtypes) when you only need the definitions/messages/params/constants. [1]
Sources: [1] [2]
Remove one of awasm.ModuleName or wasmtypes.ModuleName from the module init order.
Both awasm and wasmtypes refer to the same underlying CosmWasm module (x/wasm). awasm is a local import alias, and wasmtypes is its types subpackage—they both resolve to ModuleName = "wasm". Including both in the module list will cause the module manager to panic on duplicate module name registration. Keep only one reference to the wasm module.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/app_configure.go` around lines 87 - 91, The module init order currently
includes both awasm.ModuleName and wasmtypes.ModuleName which are the same
CosmWasm module ("wasm") and will cause duplicate registration; remove one of
these entries (either awasm.ModuleName or wasmtypes.ModuleName) from the module
list in app_configure.go so only a single reference to the wasm module remains
(look for the slice containing oracle.ModuleName, epochs.ModuleName,
awasm.ModuleName, wasmtypes.ModuleName and delete the redundant wasm entry).
awasm is a different module, named awasm
@troian Thanks for the clarification — acknowledged. I withdraw the earlier duplication warning. Given that awasm is an Akash-specific module with its own ModuleName, having both awasm.ModuleName and wasmtypes.ModuleName in orderInitGenesis is correct and intentional.
Optional nit to prevent future confusion: consider adding a brief comment above the list noting that awasm is Akash’s wrapper/companion module and wasmtypes.ModuleName refers to the upstream CosmWasm x/wasm module.
✏️ Learnings added
Learnt from: troian
Repo: akash-network/node PR: 2041
File: app/app_configure.go:0-0
Timestamp: 2026-03-11T21:09:37.337Z
Learning: In akash-network/node v2, there is a custom module at x/wasm (imported as awasm) whose ModuleName is distinct from the upstream CosmWasm x/wasm module (wasmtypes.ModuleName == "wasm"); both appear in app/app_configure.go orderInitGenesis intentionally.
Note: Learnings are effective only in the context of similar code segments. To apply general coding guidelines, please configure review instructions. You can manage existing learnings in the app.
🧠 Learnings used
Learnt from: troian
Repo: akash-network/node PR: 2015
File: x/wasm/keeper/msg_filter.go:0-0
Timestamp: 2025-11-14T01:24:04.227Z
Learning: In wasmvm v3 (v3.0.0+), the CosmosMsg type does NOT contain a Stargate field. The Stargate capability was removed/deprecated in the v3 release. The actual fields in wasmvm v3.0.2 CosmosMsg are: Bank, Custom, Distribution, Gov, IBC, Staking, Any, Wasm, and IBC2.
contracts/pyth/src/contract.rs.bak
```rust
/// Query the price feed ID from the chain's oracle module params using custom query
fn fetch_price_feed_id_from_chain(
    querier: &QuerierWrapper<AkashQuery>,
) -> Result<String, ContractError> {
    let response = querier
        .query_oracle_params()
        .map_err(|e| ContractError::InvalidPriceData {
            reason: format!("Failed to query oracle params from chain: {}", e),
        })?;

    // Validate the price feed ID is not empty
    if response.params.akt_price_feed_id.is_empty() {
        return Err(ContractError::InvalidPriceData {
            reason: "Price feed ID not configured in chain params".to_string(),
        });
    }

    Ok(response.params.akt_price_feed_id)
}
```
OracleParams doesn’t expose akt_price_feed_id directly.
OracleParams in contracts/pyth/src/querier.rs nests this under pyth_params, so this will not compile as written. Use the helper accessor and error when missing.
🔧 Proposed fix
```diff
-    // Validate the price feed ID is not empty
-    if response.params.akt_price_feed_id.is_empty() {
-        return Err(ContractError::InvalidPriceData {
-            reason: "Price feed ID not configured in chain params".to_string(),
-        });
-    }
-
-    Ok(response.params.akt_price_feed_id)
+    let price_feed_id = response
+        .params
+        .get_akt_price_feed_id()
+        .ok_or_else(|| ContractError::InvalidPriceData {
+            reason: "Price feed ID not configured in chain params".to_string(),
+        })?;
+
+    Ok(price_feed_id.to_string())
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/pyth/src/contract.rs.bak` around lines 36 - 54, The
fetch_price_feed_id_from_chain function is accessing
response.params.akt_price_feed_id which doesn't exist because akt_price_feed_id
is nested under pyth_params; update fetch_price_feed_id_from_chain to use the
provided helper accessor (e.g.,
response.params.pyth_params.get_akt_price_feed_id() or the project’s helper
function) to retrieve the price feed ID and return
Err(ContractError::InvalidPriceData { reason: "...missing..." }) if that
accessor yields None/empty; reference the function name
fetch_price_feed_id_from_chain and the QuerierWrapper<AkashQuery>/OracleParams
structure when making the change.
```rust
pub fn parse_pyth_payload(payload: &[u8]) -> Result<PythPrice, StdError> {
    // Minimum payload size check
    if payload.len() < 4 {
        return Err(StdError::msg("Payload too short"));
    }

    // Check magic bytes "P2WH" for Pyth-to-Wormhole format
    if &payload[0..4] == PYTH_MAGIC {
        return parse_p2wh_format(payload);
    }

    // Try parsing as accumulator/merkle format (newer Hermes API)
    // The accumulator format starts with different magic bytes
    if payload.len() >= 4 && &payload[0..4] == b"AUWV" {
        return parse_accumulator_format(payload);
    }

    // Fallback: try to parse as raw price update
    parse_raw_price_update(payload)
}

/// Parse P2WH (Pyth-to-Wormhole) format
/// This is the batch price attestation format
fn parse_p2wh_format(payload: &[u8]) -> Result<PythPrice, StdError> {
    // P2WH format:
    // 0-4: magic "P2WH"
    // 4-6: major version (u16)
    // 6-8: minor version (u16)
    // 8-10: header size (u16)
    // 10-11: payload type (u8)
    // ... attestation data follows

    if payload.len() < 11 {
        return Err(StdError::msg("P2WH payload too short"));
    }

    let _major_version = u16::from_be_bytes([payload[4], payload[5]]);
    let _minor_version = u16::from_be_bytes([payload[6], payload[7]]);
    let header_size = u16::from_be_bytes([payload[8], payload[9]]) as usize;

    // Skip header to get to attestation data
    let attestation_start = 4 + header_size;
    if attestation_start >= payload.len() {
        return Err(StdError::msg("Invalid header size"));
    }

    let attestation_data = &payload[attestation_start..];

    // Parse batch attestation header
    // 0-2: number of attestations (u16)
    // 2-4: attestation size (u16)
    if attestation_data.len() < 4 {
        return Err(StdError::msg("Attestation data too short"));
    }

    let num_attestations = u16::from_be_bytes([attestation_data[0], attestation_data[1]]);
    let attestation_size = u16::from_be_bytes([attestation_data[2], attestation_data[3]]) as usize;

    if num_attestations == 0 {
        return Err(StdError::msg("No attestations in payload"));
    }

    // Parse first attestation (we only need one price)
    let first_attestation_start = 4;
    if first_attestation_start + attestation_size > attestation_data.len() {
        return Err(StdError::msg("Attestation data truncated"));
    }

    let attestation = &attestation_data[first_attestation_start..first_attestation_start + attestation_size];
    parse_single_attestation(attestation)
}
```
🧩 Analysis chain
🌐 Web query:
Pyth P2WH format magic bytes header size specification crosschain
💡 Result:
In Pyth’s cross-chain Accumulator Update (the format used by Hermes/target-chain contracts for Wormhole+Merkle updates), the wire layout is defined in PythAccumulator.sol:
1) Accumulator Update (outer) header (“P2WH” / cross-chain update blob)
Magic (u32): 0x504e4155 (ASCII "PNAU" = Pyth Network Accumulator Update)[1]
Then:
- `majorVersion` (u8) = 1 [1]
- `minorVersion` (u8) (forward-compatible, must be ≥ minimum) [1]
- `trailingHeaderSize` (u8), then skip `trailingHeaderSize` bytes (reserved for future headers) [1]
- `updateType` (u8) (currently `WormholeMerkle`) [1]
Minimum outer header size: 4 + 1 + 1 + 1 + 0 + 1 = 8 bytes (when trailingHeaderSize = 0).[1]
2) Wormhole+Merkle section immediately after the outer header
For WormholeMerkle, the next bytes begin with:
- `whProofSize` (u16)
- `whProofSize` bytes of Wormhole VAA
- (then the Merkle proof/update data that follows) [1]
3) Wormhole VAA payload header (“wormhole verification” header)
Inside the decoded VAA payload, Pyth expects:
Magic (u32): 0x41555756 (ASCII "AUWV" = Accumulator Update Wormhole Verification)[2]
Then:
- `updateType` (u8) (must match `WormholeMerkle`) [2]
- `slot` (u64) (currently unused by the contract) [2]
- `ringSize` (u32) (currently unused) [2]
- `digest` / Merkle root (20 bytes) [2]
Fixed AUWV payload-header size: 4 + 1 + 8 + 4 + 20 = 37 bytes.[2]
Sources:
[1] extractUpdateTypeFromAccumulatorHeader(...) and whProofSize parsing in PythAccumulator.sol (verified source on SonicScan) (sonicscan.org)
[2] AUWV payload parsing in extractWormholeMerkleHeader... (verified source on SonicScan) (sonicscan.org)
Citations:
- 1: https://sonicscan.org/address/0xa2aa501b19aff244d90cc15a4cf739d2725b5729
- 2: https://sonicscan.org/address/0xa2aa501b19aff244d90cc15a4cf739d2725b5729
Fix P2WH magic bytes and header-size parsing—use "PNAU" magic and u8 field layout, not "P2WH" with u16 header_size.
The Pyth P2WH (Accumulator Update) format uses magic 0x504e4155 ("PNAU", not "P2WH") followed by majorVersion (u8), minorVersion (u8), trailingHeaderSize (u8), and updateType (u8). The current code checks for the wrong magic bytes and incorrectly treats bytes 8–9 as a u16 header_size field; this offset does not exist in the spec. This will cause all valid Pyth crosschain updates to fail parsing.
The "AUWV" magic (inside the Wormhole VAA payload) is correct and should be preserved.
Also applies to: 172-181
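To make the corrected layout concrete, here is a minimal Go sketch of the outer accumulator-header parse, assuming only the byte layout quoted above (`parseAccumulatorHeader` and `accHeader` are illustrative names, not part of the contract):

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

type accHeader struct {
	Major, Minor, UpdateType byte
	BodyOffset               int // index of the first byte after the header
}

// parseAccumulatorHeader reads the outer header per the quoted spec:
// magic (u32 "PNAU"), majorVersion (u8), minorVersion (u8),
// trailingHeaderSize (u8), <trailing bytes>, updateType (u8).
func parseAccumulatorHeader(payload []byte) (accHeader, error) {
	if len(payload) < 8 { // 4+1+1+1+1 with zero trailing bytes
		return accHeader{}, errors.New("payload shorter than minimum 8-byte header")
	}
	if binary.BigEndian.Uint32(payload[0:4]) != 0x504e4155 { // ASCII "PNAU"
		return accHeader{}, errors.New("bad magic: expected PNAU")
	}
	trailing := int(payload[6])
	typeIdx := 7 + trailing // updateType sits after the reserved trailing bytes
	if typeIdx >= len(payload) {
		return accHeader{}, errors.New("trailing header size exceeds payload")
	}
	return accHeader{
		Major:      payload[4],
		Minor:      payload[5],
		UpdateType: payload[typeIdx],
		BodyOffset: typeIdx + 1,
	}, nil
}

func main() {
	// Minimal header: "PNAU", v1.0, zero trailing bytes, updateType 0.
	h, err := parseAccumulatorHeader([]byte{'P', 'N', 'A', 'U', 1, 0, 0, 0})
	fmt.Println(h.Major, h.BodyOffset, err) // 1 8 <nil>

	_, err = parseAccumulatorHeader([]byte{'P', '2', 'W', 'H', 1, 0, 0, 0})
	fmt.Println(err) // bad magic: expected PNAU
}
```

Note how every field here is a single byte; there is no u16 `header_size` at offset 8, which is why the current parser rejects or misreads valid updates.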
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/pyth/src/pyth.rs` around lines 31 - 101, The P2WH parser is using
the wrong magic and header layout: update parse_pyth_payload to check for the
PNAU magic (bytes 0..4 == 0x504e4155, "PNAU") instead of "P2WH", and modify
parse_p2wh_format to read the header as byte fields (majorVersion = payload[4]
as u8, minorVersion = payload[5] as u8, trailingHeaderSize = payload[6] as u8 as
usize, updateType = payload[7] as u8) rather than interpreting bytes 8–9 as a
u16 header_size; compute attestation_start = 4 + trailingHeaderSize and adjust
the minimum length checks accordingly so parse_single_attestation still receives
the correct attestation slice; keep AUWV handling and parse_accumulator_format
unchanged.
x/epochs/keeper/abci.go
```go
err := ctx.EventManager().EmitTypedEvent(&types.EventEpochEnd{
	EpochNumber: epochInfo.CurrentEpoch,
})
if err != nil {
	return false, err
}
if err != nil {
	return false, nil
}
```
Dead code: duplicate err check on lines 50-52 is unreachable.
The error from EmitTypedEvent (line 44) is already handled on lines 47-49 — if err != nil, the function returns immediately. Lines 50-52 check the same err variable, which is guaranteed to be nil at that point. This looks like a copy-paste artifact.
More importantly, line 51 returns false, nil (swallowing the error) unlike line 48 which returns false, err (propagating it). If the intent was to check a different error, there's a missing assignment.
Proposed fix — remove the dead check
```diff
 err := ctx.EventManager().EmitTypedEvent(&types.EventEpochEnd{
 	EpochNumber: epochInfo.CurrentEpoch,
 })
 if err != nil {
 	return false, err
 }
-if err != nil {
-	return false, nil
-}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
err := ctx.EventManager().EmitTypedEvent(&types.EventEpochEnd{
	EpochNumber: epochInfo.CurrentEpoch,
})
if err != nil {
	return false, err
}
```
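The propagate-vs-swallow distinction can be demonstrated with a small stand-in for the walk-callback pattern (`walk`, `persist`, `runPropagate`, and `runSwallow` are hypothetical names used only for illustration, not the module's API):

```go
package main

import (
	"errors"
	"fmt"
)

// walk visits items until the callback signals stop or returns an error,
// mirroring the (stop bool, err error) callback shape used in the keeper.
func walk(items []string, fn func(item string) (bool, error)) error {
	for _, it := range items {
		stop, err := fn(it)
		if err != nil {
			return err // halts iteration and surfaces the failure
		}
		if stop {
			return nil
		}
	}
	return nil
}

func persist(item string) error {
	if item == "bad" {
		return errors.New("store write failed")
	}
	return nil
}

// runPropagate returns (false, err) on failure: the walk stops at "bad".
func runPropagate() error {
	return walk([]string{"ok", "bad", "never-reached"}, func(item string) (bool, error) {
		if err := persist(item); err != nil {
			return false, err
		}
		return false, nil
	})
}

// runSwallow returns (false, nil) on failure: the error is dropped and
// the walk keeps visiting items, which is the bug described in the comment.
func runSwallow() []string {
	var seen []string
	_ = walk([]string{"ok", "bad", "also-visited"}, func(item string) (bool, error) {
		seen = append(seen, item)
		if err := persist(item); err != nil {
			return false, nil
		}
		return false, nil
	})
	return seen
}

func main() {
	fmt.Println(runPropagate()) // store write failed
	fmt.Println(runSwallow())   // [ok bad also-visited]
}
```

Returning `false, nil` after a failed write makes the caller believe every item was processed cleanly, which is exactly the inconsistency between emitted events and persisted state flagged in the epochs comments.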
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@x/epochs/keeper/abci.go` around lines 44 - 52, Remove the duplicate
unreachable error check after calling
ctx.EventManager().EmitTypedEvent(&types.EventEpochEnd{...}) — the first if err
!= nil already returns the error, so the subsequent if err != nil { return
false, nil } is dead and incorrectly swallows errors; delete that second check
(or if a different call was intended, replace it with the correct assignment
before checking). Ensure references to the err variable only occur after the
proper assignment from EmitTypedEvent and that errors are propagated (return
false, err) rather than swallowed.
Marked as stale; will be closed in five days.
0327bee to 8908c0d (Compare)
Actionable comments posted: 13
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
testutil/state/suite.go (1)
260-262: ⚠️ Potential issue | 🟠 Major

`ms` field is never initialized; `Store()` always returns nil.

The `TestSuite.ms` field is declared on line 44 but is never assigned in `SetupTestSuiteWithKeepers` (lines 237-243). Calling `Store()` will return a nil `CommitMultiStore`, which will cause nil pointer panics if any test relies on this accessor.

Either initialize `ms` from the app's multi-store in the constructor, or remove the unused field and method if they're no longer needed.

🔧 Proposed fix (if the store is needed)

```diff
 return &TestSuite{
 	t:           t,
 	app:         app,
 	ctx:         ctx,
+	ms:          app.CommitMultiStore(),
 	keepers:     keepers,
 	priceFeeder: priceFeeder,
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@testutil/state/suite.go` around lines 260-262: the TestSuite.ms field is never set so Store() returns nil; update SetupTestSuiteWithKeepers (or the TestSuite constructor) to initialize ts.ms from the application multi-store (e.g., assign ts.ms = app.GetCommitMultiStore() or equivalent) so Store() returns a valid store, referencing the TestSuite.ms field and Store() method, or if the store accessor is not needed remove the TestSuite.ms field and the Store() method to avoid returning nil.

tests/e2e/market_cli_test.go (1)
65-130: ⚠️ Potential issue | 🟡 Minor

Validate the new funding transfers before continuing.

Like in `tests/e2e/deployment_cli_test.go`, these `ExecSend` calls should validate the tx response, not just `err`. Otherwise a failed transfer can leave the deployer/provider unfunded and only surface later as a noisy market-flow failure.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/e2e/market_cli_test.go` around lines 65-130: each ExecSend funding call must have its transaction response validated immediately (not just check err) to ensure transfers succeeded; for each clitestutil.ExecSend (the calls that fund s.addrDeployer and s.addrProvider) call clitestutil.ValidateTxSuccessful(ctx, s.T(), cctx, res.Bytes()) right after WaitForNextBlock() (and keep s.Require().NoError(err) as well) so failures are caught early; look for clitestutil.ExecSend, clitestutil.ValidateTxSuccessful, and s.Network().WaitForNextBlock in the test and add the missing ValidateTxSuccessful calls where needed.

app/upgrades.go (1)
37-37: ⚠️ Potential issue | 🟡 Minor

Typo in error message: "unitialize" should be "initialize".

✏️ Proposed fix

```diff
-	return fmt.Errorf("unable to unitialize upgrade `%s`: %w", name, err)
+	return fmt.Errorf("unable to initialize upgrade `%s`: %w", name, err)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/upgrades.go` at line 37, Fix the typo in the error message string that currently says "unable to unitialize upgrade `%s`" by updating the message to "unable to initialize upgrade `%s`" where the fmt.Errorf call appears (look for the fmt.Errorf call containing the substring "unable to unitialize upgrade" in upgrades.go, using the variables name and err).
♻️ Duplicate comments (2)
cmd/akash/cmd/genesis.go (1)
234-236: ⚠️ Potential issue | 🔴 Critical

`sdkutil.DenomUExponent` still looks like the wrong exported constant.

If `pkg.akt.dev/go/sdkutil` still exports `DenomUaktExponent`, this is a compile break.

🔧 Minimal fix

```diff
-	Exponent: sdkutil.DenomUExponent,
+	Exponent: sdkutil.DenomUaktExponent,
```

Does `pkg.akt.dev/go/sdkutil` export `DenomUExponent`, or is the exported constant still `DenomUaktExponent`?

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@cmd/akash/cmd/genesis.go` around lines 234 - 236: the code uses `sdkutil.DenomUExponent`, but the exported constant in `pkg.akt.dev/go/sdkutil` may be `DenomUaktExponent`; replace `sdkutil.DenomUExponent` with the correct exported constant (e.g., `sdkutil.DenomUaktExponent`) wherever used (notably in the genesis setup block that sets Denom/Exponent), or import/alias the correct name from `sdkutil` if it was renamed; ensure all usages (e.g., `Denom: sdkutil.DenomAkt, Exponent: ...`) compile against the `sdkutil` package by matching the exact exported identifier.

_run/node/prop.json (1)
24-25: ⚠️ Potential issue | 🔴 Critical

Invalid JSON — missing comma between the `"title"` and `"summary"` fields.

This will cause a parse error when the file is used by governance tooling.

🐛 Proposed fix

```diff
-  "title": "Add Oracle Price Feeder Source"
+  "title": "Add Oracle Price Feeder Source",
   "summary": "Authorize price feeder address for AKT/USD oracle",
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/node/prop.json` around lines 24 - 25, The JSON object is invalid because the "title" and "summary" fields are missing a separating comma; update the JSON in _run/node/prop.json so that the "title": "Add Oracle Price Feeder Source" and "summary": "Authorize price feeder address for AKT/USD oracle" entries are separated by a comma, ensuring the object is syntactically valid and will parse correctly by governance tooling (locate the "title" and "summary" fields in the file and insert the comma between them).
🟠 Major comments (30)
tests/testplan-bme-testnet.md-284-302 (1)
284-302: ⚠️ Potential issue | 🟠 Major

Clarify the halt-path expectation.

This test currently expects both an automatic fallback to direct AKT handling and an `ErrCircuitBreakerActive` error for the same deposit path. Those are different user-visible outcomes, so the pass criteria are ambiguous as written. Please split this into separate cases or state which interface should surface which behavior.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/testplan-bme-testnet.md` around lines 284 - 302: the test case TC-BME-CB03 is ambiguous because it expects both automatic fallback to direct AKT settlement and an `ErrCircuitBreakerActive` error for the same deposit flow; split the behavior into two distinct assertions or sub-cases: one sub-case (e.g., "API-level behavior") should assert that the deposit API returns `ErrCircuitBreakerActive` when attempting AKT→ACT minting, referencing `ErrCircuitBreakerActive` and the deposit endpoint, and the other sub-case (e.g., "settlement behavior") should assert that new deposits are ultimately settled as AKT with Status `Halt` and no ACT minted (direct AKT settlement), referencing Status `Halt` and the AKT settlement path; update the test expectations so each user-visible interface has a single clear expected outcome.

cmd/akash/cmd/genesis.go-119-135 (1)
119-135: 🛠️ Refactor suggestion | 🟠 Major

Merge bank metadata/send overrides instead of replacing the whole slices.

This reads the existing bank genesis, then clobbers both `DenomMetadata` and `SendEnabled`. Any preconfigured metadata or per-denom send restrictions already present in the input genesis will be lost after `prepare-genesis` runs.

💡 Safer shape
```diff
-	bankGenState.DenomMetadata = genesisParams.NativeCoinMetadatas
-	bankGenState.SendEnabled = []banktypes.SendEnabled{
-		{
-			sdkutil.DenomUakt,
-			true,
-		},
-		{
-			sdkutil.DenomUact,
-			false,
-		},
-	}
+	bankGenState.DenomMetadata = mergeDenomMetadata(
+		bankGenState.DenomMetadata,
+		genesisParams.NativeCoinMetadatas,
+	)
+	bankGenState.SendEnabled = mergeSendEnabled(
+		bankGenState.SendEnabled,
+		[]banktypes.SendEnabled{
+			{Denom: sdkutil.DenomUakt, Enabled: true},
+			{Denom: sdkutil.DenomUact, Enabled: false},
+		},
+	)
```

Supporting helpers can preserve existing entries and only override the denoms this command owns.
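A minimal sketch of the merge-by-denom idea, where `mergeDenomMetadata` is a hypothetical helper name and a simplified struct stands in for the real `banktypes.Metadata`:

```go
package main

import "fmt"

// Metadata is a simplified stand-in for banktypes.Metadata;
// only the merge key (Base denom) matters for this sketch.
type Metadata struct {
	Base        string
	Description string
}

// mergeDenomMetadata keeps existing entries and only overrides or appends
// the denoms present in overrides (hypothetical helper name).
func mergeDenomMetadata(existing, overrides []Metadata) []Metadata {
	out := append([]Metadata{}, existing...)
	byBase := map[string]int{}
	for i, m := range out {
		byBase[m.Base] = i
	}
	for _, o := range overrides {
		if i, ok := byBase[o.Base]; ok {
			out[i] = o // override in place, preserving order
		} else {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	existing := []Metadata{{Base: "uakt", Description: "old"}, {Base: "ibc/ABC", Description: "keep"}}
	overrides := []Metadata{{Base: "uakt", Description: "new"}, {Base: "uact", Description: "add"}}
	fmt.Println(mergeDenomMetadata(existing, overrides))
	// [{uakt new} {ibc/ABC keep} {uact add}]
}
```

A `mergeSendEnabled` helper would follow the same shape, keyed on `Denom`.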
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/akash/cmd/genesis.go` around lines 119 - 135: the code currently overwrites `bankGenState.DenomMetadata` and `bankGenState.SendEnabled` with new slices, losing existing entries; update the genesis preparation (where `bankGenState` is set) to merge `genesisParams.NativeCoinMetadatas` into `bankGenState.DenomMetadata` (preserving existing metadata and replacing/adding only matching denom entries) and to merge the two send entries for `sdkutil.DenomUakt` and `sdkutil.DenomUact` into `bankGenState.SendEnabled` (preserving any other preconfigured per-denom `SendEnabled` entries and updating only those denoms), ensuring the merge happens in the same scope that constructs `bankGenState` before `MarshalJSON` so existing entries are not clobbered.

cmd/akash/cmd/genesis.go-242-261 (1)
242-261: ⚠️ Potential issue | 🟠 Major

Point ACT's user-facing metadata at `act`, not `uact`.

`act` is defined here as the exponent-6 unit, but `Display`, `Name`, and `Symbol` all point at `uact`. That will make clients render and label the token in micro-units instead of the intended display denomination.

🪙 Suggested fix

```diff
-	Base:    sdkutil.DenomUact,
-	Display: sdkutil.DenomUact,
-	Name:    sdkutil.DenomUact,
-	Symbol:  sdkutil.DenomUact,
+	Base:    sdkutil.DenomUact,
+	Display: sdkutil.DenomAct,
+	Name:    sdkutil.DenomAct,
+	Symbol:  sdkutil.DenomAct,
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@cmd/akash/cmd/genesis.go` around lines 242 - 261: the token metadata uses the micro-unit `sdkutil.DenomUact` for `Display`, `Name`, and `Symbol` but should use the user-facing unit `sdkutil.DenomAct`; update the genesis token metadata (the struct with Description "Akash Compute Token") so that `Display`, `Name`, and `Symbol` reference `sdkutil.DenomAct` while keeping `Base` as `sdkutil.DenomUact` and the `DenomUnits` unchanged.

upgrades/CHANGELOG.md-9-15 (1)
9-15: ⚠️ Potential issue | 🟠 Major

Add a v2.0.0 upgrade entry to document this major version release.

This PR introduces a major breaking change that migrates to v2.0.0, but the CHANGELOG lacks the corresponding upgrade entry. Operators need documented guidance on what changes during this upgrade.

The v2.0.0 code in `upgrades/software/v2.0.0/` shows this release:

- Adds stores for the `epochs`, `oracle`, `wasm`, and `bme` modules
- Deletes the `take` module store
- Migrates `deployment` from version 5 → 6 and `cert` from 3 → 4
- Instantiates Wormhole and Pyth oracle smart contracts
- Migrates deployments and escrow payments from axlUSDC to uakt denomination

📋 Add after line 41 (template section)

```diff
+##### v2.0.0
+
+###### Description
+
+Major version upgrade introducing CosmWasm integration with Wormhole and Pyth oracle support. Migrates deployments from axlUSDC to native uakt denomination.
+
+- Stores
+  - added stores
+    - `epochs`
+    - `oracle`
+    - `wasm`
+    - `bme`
+  - deleted stores
+    - `take`
+
+- Migrations
+  - cert `3 -> 4`
+  - deployment `5 -> 6`
+
+###### Breaking Changes
+
+- Wasm module introduced for smart contract support
+- Oracle module integrated with Pyth price feeds via Wormhole
+- axlUSDC deprecated in favor of native uakt token
+- Module code paths restructured for v2 package layout
+
 ##### v1.2.0
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@upgrades/CHANGELOG.md` around lines 9 - 15: add a new v2.0.0 upgrade entry documenting the major release: list added stores for epochs, oracle, wasm, and bme; note deletion of the take module store; record migration version bumps (deployment 5→6 and cert 3→4); describe instantiation of the Wormhole and Pyth oracle smart contracts; and state that deployments and escrow payments migrate from axlUSDC to uakt. Place this v2.0.0 entry where other release entries are listed and mirror the style/format used by existing entries.

upgrades/software/v2.0.0/deployment.go-188-195 (1)
188-195: ⚠️ Potential issue | 🟠 Major

Unconditional append to `Transferred` is not persisted when `changed` is false.

Line 188 appends a zero `DecCoin` to `acc.State.Transferred` for every account, but `SaveAccountRaw` is only called when `changed` is true. If no axlUSDC funds or deposits were found, the appended entry is discarded.

If this transfer-tracking entry should be added to all processed accounts (regardless of denom migration), move the append inside the `if changed` block or set `changed = true` after the append.

🐛 Proposed fix (if all accounts should get the entry)

```diff
-	acc.State.Transferred = append(acc.State.Transferred, sdk.NewDecCoin(sdkutil.DenomUact, sdkmath.ZeroInt()))
+	acc.State.Transferred = append(acc.State.Transferred, sdk.NewDecCoin(sdkutil.DenomUact, sdkmath.ZeroInt()))
+	changed = true

 	if changed {
```

🐛 Or, if only migrated accounts need the entry

```diff
-	acc.State.Transferred = append(acc.State.Transferred, sdk.NewDecCoin(sdkutil.DenomUact, sdkmath.ZeroInt()))
-	if changed {
+	acc.State.Transferred = append(acc.State.Transferred, sdk.NewDecCoin(sdkutil.DenomUact, sdkmath.ZeroInt()))

 	if gerr = ekeeper.SaveAccountRaw(sctx, acc); gerr != nil {
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/deployment.go` around lines 188 - 195: the append to `acc.State.Transferred` using `sdk.NewDecCoin(sdkutil.DenomUact, sdkmath.ZeroInt())` is currently unconditional, but `ekeeper.SaveAccountRaw(sctx, acc)` is only called when the local flag `changed` is true, so the appended entry is lost when `changed` is false; to fix, either (A) move the append into the existing `if changed { ... }` block so the zero `DecCoin` is only added for migrated accounts, or (B) if every processed account should receive the entry, set `changed = true` immediately after the append so the later `ekeeper.SaveAccountRaw(sctx, acc)` persists the modification (references: `acc.State.Transferred`, `sdk.NewDecCoin`, `sdkutil.DenomUact`, `changed`, `ekeeper.SaveAccountRaw`).

contracts/wormhole-old/src/contract.rs-71-76 (1)
71-76: ⚠️ Potential issue | 🟠 Major

Keep replay protection out of the read-only verifier.

`query_parse_and_verify_vaa` reuses the execute helper, so archived VAAs now fail the `VerifyVAA` query with `VaaAlreadyExecuted`. That contradicts the query's "verify without executing it" contract and prevents re-inspection of already processed VAAs.

♻️ Proposed fix

```diff
 let vaa = parse_and_verify_vaa(
     deps.storage,
     &querier,
     data,
     env.block.time.seconds(),
+    true,
 )?;
```

```diff
 fn parse_and_verify_vaa(
     storage: &dyn Storage,
     querier: &QuerierWrapper<AkashQuery>,
     data: &[u8],
     _block_time: u64,
+    check_archive: bool,
 ) -> StdResult<ParsedVAA> {
     let vaa = ParsedVAA::deserialize(data)?;
     if vaa.version != 1 {
         return ContractError::InvalidVersion.std_err();
     }
-    if VAA_ARCHIVE.may_load(storage, vaa.hash.as_slice())?.unwrap_or(false) {
+    if check_archive && VAA_ARCHIVE.may_load(storage, vaa.hash.as_slice())?.unwrap_or(false) {
         return ContractError::VaaAlreadyExecuted.std_err();
     }
```

```diff
 pub fn query_parse_and_verify_vaa(deps: Deps, data: &[u8], block_time: u64) -> StdResult<ParsedVAA> {
     // Always use oracle-based guardian set
     let querier: QuerierWrapper<AkashQuery> = QuerierWrapper::new(deps.querier.deref());
-    parse_and_verify_vaa(deps.storage, &querier, data, block_time)
+    parse_and_verify_vaa(deps.storage, &querier, data, block_time, false)
 }
```

Also applies to: 115-129, 292-295

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole-old/src/contract.rs` around lines 71 - 76: the query handler `query_parse_and_verify_vaa` uses `parse_and_verify_vaa`, which enforces replay protection (returning `VaaAlreadyExecuted`) and thus breaks read-only verification; change the verifier so replay checks are separated from signature/format verification: either add a new function `parse_and_verify_vaa_no_replay` (or a boolean flag on `parse_and_verify_vaa`) that performs parsing and cryptographic/format checks but does not check execution state, and call that from `query_parse_and_verify_vaa`, while keeping the original `parse_and_verify_vaa` (or the replay-true path) for execute paths that must enforce `VaaAlreadyExecuted`; update all callers accordingly (e.g., `query_parse_and_verify_vaa`, the Execute path) and ensure the error types remain consistent for non-replay failures.

contracts/wormhole-old/src/contract.rs-91-96 (1)
91-96: ⚠️ Potential issue | 🟠 Major

Don't strip interior NULs from the governance module ID.

`filter(|c| c != '\0')` normalizes malformed module bytes like `b"C\0ore"` into `"Core"`. The check should only tolerate zero padding at the tail, or compare the raw 32-byte field directly.

🔒 Proposed fix

```diff
-    let module = String::from_utf8(gov_packet.module)
-        .map_err(|_| StdError::msg("invalid governance module encoding"))?;
-    let module: String = module.chars().filter(|c| c != &'\0').collect();
-
-    if module != "Core" {
+    let module = gov_packet.module.as_slice();
+    if !module.starts_with(b"Core") || module[4..].iter().any(|b| *b != 0) {
         return Err(StdError::msg("this is not a valid module"));
     }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole-old/src/contract.rs` around lines 91 - 96: the code currently decodes `gov_packet.module` into `module` and then removes all NULs with `module.chars().filter(...)`, which incorrectly accepts interior NULs (e.g., `b"C\0ore"`); instead, keep the original bytes or only strip trailing NULs: decode `gov_packet.module` with `String::from_utf8` (or validate bytes) and replace the filter step with trimming trailing NULs (e.g., `trim_end_matches('\0')`) before comparing to "Core", or compare the raw 32-byte `gov_packet.module` bytes directly against the expected byte pattern; update the `module` local and the `gov_packet.module` usage accordingly.

contracts/wormhole-old/src/contract.rs-35-42 (1)
35-42: ⚠️ Potential issue | 🟠 Major

Reject non-32-byte governance addresses at instantiate time.

`handle_submit_vaa` later compares `state.gov_address` against `ParsedVAA.emitter_address`, which is always 32 bytes. Persisting a 20-byte address here makes every governance VAA miss that equality and bricks upgrades / fee changes.

🛡️ Proposed fix

```diff
 pub fn instantiate(
     deps: DepsMut,
     _env: Env,
     _info: MessageInfo,
     msg: InstantiateMsg,
 ) -> StdResult<Response> {
+    if msg.gov_address.len() != 32 {
+        return Err(StdError::msg("gov_address must be 32 bytes"));
+    }
+
     let state = ConfigInfo {
         gov_chain: msg.gov_chain,
         gov_address: msg.gov_address.to_vec(),
         fee: Coin::new(Uint256::from(FEE_AMOUNT), &msg.fee_denom),
         chain_id: msg.chain_id,
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole-old/src/contract.rs` around lines 35 - 42: during instantiation validate that the configured governance address is exactly 32 bytes and return an error if not; specifically, in the code that builds `ConfigInfo` (using `ConfigInfo`, `gov_address`, and where you call `CONFIG.save`), check `msg.gov_address.len() == 32` (or equivalent) before creating/saving state and reject other lengths so that `state.gov_address` matches `ParsedVAA.emitter_address` (32 bytes) used later by `handle_submit_vaa`.

contracts.go-9-10 (1)
9-10: ⚠️ Potential issue | 🟠 Major

Add `build-contracts` as a prerequisite to the default build targets.

The `//go:embed .cache/cosmwasm/artifacts/*.wasm` directive requires the CosmWasm artifacts to exist at compile time. Currently, the `build-contracts` target can generate them, but it is only integrated into release and test-bin targets—not the default `build` or `bins` targets.

Running `make build` or `make bins` will fail with "pattern .cache/cosmwasm/artifacts/*.wasm: no matching files found" unless `make build-contracts` is run first.

Update the Makefile so that the `build` and `bins` targets depend on `build-contracts`, similar to how `test-bins`, `docker-image`, and `release` already do. For example:

```make
build: wasmvm-libs build-contracts
bins: wasmvm-libs build-contracts
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@contracts.go` around lines 9 - 10: the Makefile's default build targets fail because the Go embed directive for `Contracts` requires `.cache/cosmwasm/artifacts/*.wasm` to exist; update the Makefile so the `build` and `bins` targets depend on the existing `build-contracts` target (same pattern used by `test-bins`, `docker-image`, `release`), e.g., `build: wasmvm-libs build-contracts` and `bins: wasmvm-libs build-contracts`, so the CosmWasm artifacts are generated before compilation.

make/init.mk-27-29 (1)
27-29: ⚠️ Potential issue | 🟠 Major

`SPACE` is now empty, breaking delimiter operations.

`SPACE := $(NULL)` expands to an empty string instead of a literal space. This breaks callers like `$(subst $(SPACE),$(COMMA),...)` in `make/releasing.mk` and `Makefile`, which depend on `SPACE` being a space character for delimiter substitution.

Correct fix

```diff
 NULL :=
-SPACE := $(NULL)
-WHITESPACE := $(NULL) $(NULL)
+SPACE := $(NULL) $(NULL)
+WHITESPACE := $(SPACE)$(SPACE)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@make/init.mk` around lines 27 - 29: `SPACE` is currently set to `$(NULL)`, which expands to empty; change `SPACE` to be a literal space character (i.e., assign a space after the `:=`) so delimiter substitutions work, and update `WHITESPACE` to use that `SPACE` (e.g., `WHITESPACE := $(SPACE)$(SPACE)`) while keeping `NULL` as the empty value; make sure callers like `$(subst $(SPACE),$(COMMA),...)` now receive an actual space via the `SPACE` variable.

upgrades/software/v2.0.0/upgrade.go-97-100 (1)
97-100: ⚠️ Potential issue | 🟠 Major

Fix ACT metadata to use the user-facing denom for Display, Name, and Symbol.

The `Display` field should be `sdkutil.DenomAct` (the macro unit at Exponent 6), not `sdkutil.DenomUact` (the micro unit). This matches the pattern used for AKT metadata in genesis and ensures wallets/CLI render balances in human-readable form (`act`) rather than micro-units (`uact`). The `Name` and `Symbol` should also be set to readable strings.

Proposed fix

```diff
 	Base:    sdkutil.DenomUact,
-	Display: sdkutil.DenomUact,
-	Name:    sdkutil.DenomUact,
-	Symbol:  sdkutil.DenomUact,
+	Display: sdkutil.DenomAct,
+	Name:    "Akash Compute Token",
+	Symbol:  "ACT",
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/upgrade.go` around lines 97 - 100: the metadata currently sets `Display`, `Name`, and `Symbol` to the micro denom `sdkutil.DenomUact`; change `Display` to `sdkutil.DenomAct` (the Exponent 6 human-facing unit) and set `Name` and `Symbol` to readable strings representing the ACT unit (e.g., `sdkutil.DenomAct` or "ACT" as appropriate) instead of `sdkutil.DenomUact` so wallets/CLI show balances in ACT; update the struct where `Base`/`Display`/`Name`/`Symbol` are assigned (the metadata initialiser around `Base: sdkutil.DenomUact`) accordingly.

go.mod-156-156 (1)
156-156: ⚠️ Potential issue | 🟠 Major

Upgrade `go-ethereum` from v1.16.8 to v1.16.9 or later.

go-ethereum v1.16.8 has two unpatched CVEs:
- CVE-2026-26314: specially crafted message can force node shutdown
- CVE-2026-26315: ECIES/RLPx handshake validation flaw; attacker can extract node key bits
Both are fixed in v1.16.9. Since this is an indirect dependency, upgrade the parent dependency or add a replace directive. After upgrading, rotate the node key as recommended by OSV.
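One way to force the patched release while the parent dependency still pins v1.16.8 is a `replace` directive in `go.mod` (the version below mirrors the advisory above; verify the latest patched tag before applying):

```
// go.mod — pin the patched go-ethereum release until the parent
// dependency updates its own requirement.
replace github.com/ethereum/go-ethereum => github.com/ethereum/go-ethereum v1.16.9
```

After adding the directive, run `go mod tidy` and confirm the resolved version with `go list -m github.com/ethereum/go-ethereum`.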
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@go.mod` at line 156: the `go-ethereum` indirect dependency is pinned at v1.16.8 and must be updated to v1.16.9 or later to pick up CVE fixes; update the `github.com/ethereum/go-ethereum` version in `go.mod` (or add a replace directive for `github.com/ethereum/go-ethereum` to v1.16.9+) so the build uses the patched release, then run dependency tidy/resolve to ensure the new version is chosen by the dependency graph; after upgrading, follow the OSV recommendation to rotate the node key used by your service.

make/cosmwasm.mk-3-9 (1)
3-9: ⚠️ Potential issue | 🟠 Major

Run the optimizer container with the current user's UID/GID to avoid root-owned output files.

Docker creates bind-mounted paths (`target` and `artifacts`) as root when the container runs as root (the default). This makes follow-up builds and cleanup unusable without `sudo`.

Additionally, create the output subdirectories explicitly to ensure they exist before the container starts.

Note: If you have a pre-existing `registry_cache` named volume created by root, switching to `--user` may cause permission errors. In that case, remove the volume with `docker volume rm registry_cache` before running with the new `--user` flag.

Proposed fix

```diff
 build-contracts:
-	mkdir -p $(AKASH_DEVCACHE)/cosmwasm
+	mkdir -p $(AKASH_DEVCACHE)/cosmwasm/target $(AKASH_DEVCACHE)/cosmwasm/artifacts
 	docker run --rm \
+		--user "$$(id -u):$$(id -g)" \
 		-v "$(ROOT_DIR)":/code \
 		-v "$(AKASH_DEVCACHE)/cosmwasm/target":/target \
 		-v "$(AKASH_DEVCACHE)/cosmwasm/artifacts":/code/artifacts \
 		--mount type=volume,source=registry_cache,target=/usr/local/cargo/registry \
 		$(COSMWASM_OPTIMIZER_IMAGE)
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@make/cosmwasm.mk` around lines 3 - 9: create the output subdirs and run the optimizer container as the current user; ensure you `mkdir -p $(AKASH_DEVCACHE)/cosmwasm/target` and `$(AKASH_DEVCACHE)/cosmwasm/artifacts` before invoking docker, and add a `--user $(shell id -u):$(shell id -g)` flag to the `docker run` that uses `$(ROOT_DIR)`, `$(AKASH_DEVCACHE)/cosmwasm/target`, `$(AKASH_DEVCACHE)/cosmwasm/artifacts` and `$(COSMWASM_OPTIMIZER_IMAGE)` so bind-mounted outputs are owned by the invoking user.

upgrades/software/v2.0.0/wasm.go-133-143 (1)
133-143: ⚠️ Potential issue | 🟠 Major

`ChainID: 26` is incorrect—Akash should use its own Wormhole chain ID, not Pythnet's emitter chain.

Line 136 sets the Wormhole contract's `ChainID` to 26, which is the remote Pythnet emitter chain. This constant is used to identify the source of Pyth price feed data (line 183), not the local chain. Configuring Wormhole to identify itself as Pythnet breaks bridge semantics and will cause misrouting. Wormhole needs a distinct chain ID for Akash.

Additionally, the comment on line 98 states "index 4" while the code on line 142 sets `GuardianSetIndex: 5`—verify which is correct before deployment.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/wasm.go` around lines 133 - 143: the `ChainID` field in `wormholeInstantiateMsg` is wrong—replace the hardcoded Pythnet value (26) with the correct Akash Wormhole chain ID (use the project constant or define `akashWormholeChainID` and assign that to `ChainID`) so the local chain identifies itself correctly; also reconcile the guardian index/comment mismatch by verifying the intended guardian set index and making `GuardianSetIndex` and any inline comment consistent (e.g., change the comment "index 4" or the value 5 so both reflect the verified guardian set index), and double-check the `GuardianSetExpirity` spelling if needed.

contracts/wormhole/src/byte_utils.rs-73-77 (1)
73-77: ⚠️ Potential issue | 🟠 Major

Guard against oversized canonical addresses.

If `addr.len() > 32`, line 75 creates an invalid start index and panics at line 76. Add an explicit bound check.

🐛 Proposed fix

```diff
 pub fn extend_address_to_32_array(addr: &CanonicalAddr) -> [u8; 32] {
+    assert!(addr.len() <= 32, "canonical address length exceeds 32 bytes");
     let mut result = [0u8; 32];
     let start = 32 - addr.len();
     result[start..].copy_from_slice(addr);
     result
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/byte_utils.rs` around lines 73 - 77: the function `extend_address_to_32_array` can panic when `addr.len() > 32` because `start = 32 - addr.len()` underflows; add an explicit bound check at the top of `extend_address_to_32_array` (e.g., `assert!(addr.len() <= 32, "canonical address too long: {} bytes", addr.len())`, or return a `Result` error if you change the signature) so you never compute a negative start, and keep using `start` and `result.copy_from_slice(addr)` after the check.

_run/init.sh-28-28 (1)
28-28: ⚠️ Potential issue | 🟠 Major

Do not hardcode mnemonics in-repo.

Line 28 embeds a full wallet mnemonic. Even for local flows, this is a secrets-management risk; source it from the environment or secure local config instead.
🔐 Proposed fix

```diff
-HERMES_MNEMONIC="wire museum tragic inmate final lady illegal father whisper margin sea cool soul half moon nut tissue strategy ladder come glory opera device elbow"
+if [[ -z "${HERMES_MNEMONIC:-}" ]]; then
+    echo "HERMES_MNEMONIC is not set"
+    exit 1
+fi
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@_run/init.sh` at line 28: the script currently hardcodes a wallet mnemonic in the `HERMES_MNEMONIC` variable; remove the literal and load the mnemonic from a secure source instead (e.g., read it from an environment variable or secure local config), exiting with a clear error if it is missing, so the mnemonic is not stored in-repo; update any code that references `HERMES_MNEMONIC` accordingly and document the required env var for local dev.

contracts/wormhole/src/byte_utils.rs-19-64 (1)
19-64: ⚠️ Potential issue | 🟠 Major

Avoid panic-based parsing for untrusted byte input.

Lines 24/29/34/39/54/58/63 can panic on short slices. For VAA parsing paths, this should fail gracefully (e.g., `Result<_, ContractError>`) instead of trapping.

♻️ Suggested direction

```diff
-pub trait ByteUtils {
-    fn get_u16(&self, index: usize) -> u16;
+pub trait ByteUtils {
+    fn get_u16(&self, index: usize) -> Result<u16, ContractError>;
     ...
 }
```

```rust
fn read<const N: usize>(data: &[u8], index: usize) -> Result<[u8; N], ContractError> {
    let end = index.checked_add(N).ok_or(ContractError::InvalidVAA)?;
    let slice = data.get(index..end).ok_or(ContractError::InvalidVAA)?;
    let mut out = [0u8; N];
    out.copy_from_slice(slice);
    Ok(out)
}
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/byte_utils.rs` around lines 19 - 64: the current byte accessors (`get_u16`, `get_u32`, `get_u64`, `get_u128_be`, `get_u256`, `get_address`, `get_bytes32`, `get_bytes`, `get_const_bytes`, and `get_u8`) index and copy from slices directly and will panic on short input; change these helpers to return a `Result` (e.g., `Result<..., ContractError>`) and perform bounds-checked access using `checked_add` and `data.get(index..end)` before `copy_from_slice`; implement a shared generic helper (like `read<const N: usize>`) that validates `index + N`, returns `Err(ContractError::InvalidVAA)` on failure, and use it inside `get_u16`/`get_u32`/`get_u64`/`get_u128_be`/`get_u256`/`get_const_bytes`, and adjust `get_bytes`/`get_bytes32`/`get_address` to use safe slicing with `get(..).ok_or(ContractError::InvalidVAA)` so parsing fails gracefully instead of panicking.

_run/common.mk-51-53 (1)
51-53: ⚠️ Potential issue | 🟠 Major

Likely typo in the run-dir target variable (`AP_RUN_DIR`).

Line 51 uses `$(AP_RUN_DIR)`, but this file consistently references `AKASH_RUN_DIR`. The variable `AP_RUN_DIR` is not defined in `_run/common.mk` or its includes; `AKASH_RUN_DIR` is validated in `common-base.mk` (lines 7-8). This will cause directory creation to target an undefined variable.

🐛 Proposed fix

```diff
-$(AP_RUN_DIR):
+$(AKASH_RUN_DIR):
 	mkdir -p $@
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@_run/common.mk` around lines 51 - 53: the make target currently uses the undefined variable `$(AP_RUN_DIR)` in the rule for the run directory; change the target and any references from `AP_RUN_DIR` to the correct `AKASH_RUN_DIR` (i.e., update the target `$(AP_RUN_DIR):` to `$(AKASH_RUN_DIR):` and ensure the mkdir command uses `$@` or `$(AKASH_RUN_DIR)`) so the directory is created using the validated `AKASH_RUN_DIR` variable.

_run/common-commands.mk-6-10 (1)
6-10: ⚠️ Potential issue | 🟠 Major

Update `bid-create` to the ACT-denominated flow.

The current defaults still use `uakt`, and this target has no deposit argument. The updated e2e coverage now creates bids with both price and deposit in `uact`, so this helper no longer matches the transaction requirements.

Also applies to: 136-144

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@_run/common-commands.mk` around lines 6 - 10: the `PRICE` default currently uses "10uakt" and the `bid-create` target lacks a deposit argument; update the ACT-denominated flow by changing `PRICE` to an ACT unit (e.g., "10uact"), introduce a `DEPOSIT` variable with an appropriate default (e.g., `DEPOSIT ?= 1uact`), and modify the `bid-create` target invocation to pass both `$(PRICE)` and `$(DEPOSIT)` so bids are created with ACT-denominated price and deposit; update any other matching defaults in the same file (lines referenced around the `PRICE` block) to use "uact" consistently.

_run/common-commands.mk-87-91 (1)
87-91: ⚠️ Potential issue | 🟠 Major

`deployment-create` is missing the new deposit flag.

The updated e2e flows create deployments with an explicit deposit, but this helper still emits the old command shape. `_run` will not reproduce the supported path until the deposit is wired into create as well.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@_run/common-commands.mk` around lines 87 - 91: the `deployment-create` Makefile target currently omits the new deposit flag; update the target named `deployment-create` to pass the CLI deposit argument (e.g., `--deposit "$(DEPOSIT)"`) to the `$(AKASH) tx deployment create` command so it matches the updated e2e flow; reference the target `deployment-create` and the variables `AKASH`, `SDL_PATH`, `DSEQ`, `KEY_NAME`, and add the `DEPOSIT` variable usage to the command invocation.

_run/common-commands.mk-105-110 (1)
105-110: ⚠️ Potential issue | 🟠 Major

Use `KEY_ADDRESS` for `deployment-close`.

Hard-coding `MAIN_ADDR` here makes this target act on the wrong owner as soon as `KEY_NAME` is overridden. The rest of this file already derives owner values from `KEY_NAME`/`KEY_ADDRESS`.

Suggested fix

```diff
 	$(AKASH) tx deployment close \
-		--owner "$(MAIN_ADDR)" \
+		--owner "$(KEY_ADDRESS)" \
 		--dseq "$(DSEQ)" \
 		--from "$(KEY_NAME)"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@_run/common-commands.mk` around lines 105 - 110: the `deployment-close` target currently hardcodes the owner using `MAIN_ADDR`, which breaks when `KEY_NAME` is overridden; update the `deployment-close` recipe in the Makefile to use `KEY_ADDRESS` as the owner (replace `--owner "$(MAIN_ADDR)"` with `--owner "$(KEY_ADDRESS)"`) so the owner value is derived from the key config consistent with the rest of the file, and ensure `KEY_ADDRESS` is the same variable used elsewhere for owner resolution when `KEY_NAME` is changed.

_run/common-commands.mk-78-84 (1)
`78-84`: ⚠️ Potential issue | 🟠 Major

Fix the `--provider` flag typo in `get-manifest`. `get-manifest` currently passes `--provide`, so it never sends the provider address to the command.

Suggested fix

```diff
 $(PROVIDER_SERVICES) get-manifest \
 	--dseq "$(DSEQ)" \
 	--from "$(KEY_NAME)" \
-	--provide "$(PROVIDER_ADDRESS)" \
+	--provider "$(PROVIDER_ADDRESS)" \
 	--auth-type "$(GW_AUTH_TYPE)"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/common-commands.mk` around lines 78 - 84: in the get-manifest Make target, the provider flag is misspelled so the provider address never gets passed; update the command invoking $(PROVIDER_SERVICES) get-manifest to use --provider "$(PROVIDER_ADDRESS)" instead of --provide, preserving the other flags (--dseq "$(DSEQ)", --from "$(KEY_NAME)", --auth-type "$(GW_AUTH_TYPE)").

`Makefile` (1)
`68-82`: ⚠️ Potential issue | 🟠 Major

`GORELEASER_LDFLAGS` is frozen before `ldflags` is finalized. `:=` snapshots `ldflags` before `-linkmode=external`, `-extldflags`, stripping flags, and `$(LDFLAGS)` are appended, so release artifacts will use incomplete linker configuration compared to `BUILD_FLAGS`.

Suggested fix

```diff
-GORELEASER_LDFLAGS := $(ldflags)
-
 ldflags += -linkmode=external
 ifeq (static-link,$(findstring static-link,$(BUILD_OPTIONS)))
 	ldflags += -extldflags "-L$(AKASH_DEVCACHE_LIB) -lm -Wl,-z,muldefs"
@@ -86,6 +86,7 @@ endif
 ldflags += $(LDFLAGS)
 ldflags := $(strip $(ldflags))
+GORELEASER_LDFLAGS := $(ldflags)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Makefile` around lines 68 - 82: GORELEASER_LDFLAGS is set with := before ldflags is fully built, so the frozen snapshot misses later additions; assign GORELEASER_LDFLAGS after all ldflags mutations (or use recursive assignment, GORELEASER_LDFLAGS = $(ldflags)) so it reflects appended values such as -linkmode=external, -extldflags, and the strip flags. Ensure the assignment happens after the ifeq blocks that modify ldflags.

`_docs/pyth-integration.md` (1)
`333-339`: ⚠️ Potential issue | 🟠 Major

Don't publish an inline mnemonic example. Putting `MNEMONIC` directly on the `docker run` command line leaks the seed into shell history and container metadata (`docker inspect`). The safer pattern is already shown in lines 341-346 with `--env-file`; this example should use that too.

🔐 Suggested doc fix

```diff
 docker run -d \
   --name hermes-client \
   -e RPC_ENDPOINT=https://rpc.akashnet.net:443 \
   -e CONTRACT_ADDRESS=akash1... \
-  -e "MNEMONIC=your twelve word mnemonic here" \
+  --env-file .env \
   --restart unless-stopped \
   ghcr.io/akash-network/hermes:latest node dist/cli.js daemon
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 333 - 339: replace the inline MNEMONIC environment variable in the docker run example with the safer --env-file pattern. Stop passing MNEMONIC directly on the command line and load sensitive environment variables from an env file (as shown earlier in the doc) so the mnemonic is not exposed in shell history or docker metadata.

`contracts/wormhole/testing/utils.rs` (1)
`102-105`: ⚠️ Potential issue | 🟠 Major

The default guardian set is already expired. `1571797500` is October 23, 2019 UTC. Any caller that passes real block timestamps into `VerifyVAA` will hit `GuardianSetExpired` immediately; the current tests stay green only because they pass block height instead of seconds.

🛠️ Minimal fix

```diff
 initial_guardian_set: GuardianSetInfo {
     addresses: guardians.to_vec(),
-    expiration_time: 1571797500,
+    expiration_time: 0,
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/testing/utils.rs` around lines 102 - 105: the GuardianSetInfo in initial_guardian_set uses a hardcoded expiration_time of 1571797500 (Oct 23, 2019), which makes VerifyVAA immediately return GuardianSetExpired for callers using real unix timestamps. Update expiration_time so tests use a non-expired guardian set, and adjust the test helpers that construct initial_guardian_set to accept a configurable expiration_time if needed so tests can set valid expirations.

`tests/e2e/pyth_contract_test.go` (1)
`199-209`: ⚠️ Potential issue | 🟠 Major

Remove `get_oracle_params` from the contract query mirror. `contracts/pyth/src/msg.rs` only exposes `get_config`, `get_price`, `get_price_feed`, and `get_price_feed_id`. Keeping `get_oracle_params` here means these helpers serialize a query the contract cannot decode; if you need oracle params, `TestQueryOracleModuleParams` already queries the module directly.

🛠️ Suggested update

```diff
 type QueryMsg struct {
-	GetConfig       *QueryGetConfig       `json:"get_config,omitempty"`
-	GetPrice        *QueryGetPrice        `json:"get_price,omitempty"`
-	GetPriceFeed    *QueryGetPriceFeed    `json:"get_price_feed,omitempty"`
-	GetOracleParams *QueryGetOracleParams `json:"get_oracle_params,omitempty"`
+	GetConfig      *QueryGetConfig      `json:"get_config,omitempty"`
+	GetPrice       *QueryGetPrice       `json:"get_price,omitempty"`
+	GetPriceFeed   *QueryGetPriceFeed   `json:"get_price_feed,omitempty"`
+	GetPriceFeedID *QueryGetPriceFeedID `json:"get_price_feed_id,omitempty"`
 }

 type QueryGetPrice struct{}
 type QueryGetPriceFeed struct{}
-type QueryGetOracleParams struct{}
+type QueryGetPriceFeedID struct{}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/pyth_contract_test.go` around lines 199 - 209: the QueryMsg struct and its helper types include a non-existent contract query `get_oracle_params`; remove the GetOracleParams field and delete the orphan type QueryGetOracleParams so the test mirror matches the contract's exposed queries (`get_config`, `get_price`, `get_price_feed`, `get_price_feed_id`) and does not serialize a query the contract cannot decode.

`contracts/wormhole/testing/integration.rs` (1)
`81-86`: ⚠️ Potential issue | 🟠 Major

Pass `block_time` in seconds, not block height. `VerifyVAA` compares this field against guardian-set expiration timestamps. Using `app.block_info().height` exercises a different path than production and masks expiry bugs; the same mismatch is repeated in the other `VerifyVAA` calls below.

🛠️ Minimal fix

```diff
-    block_time: app.block_info().height,
+    block_time: app.block_info().time.seconds(),
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/testing/integration.rs` around lines 81 - 86: the test passes block height into VerifyVAA's block_time, which should be a unix timestamp in seconds. Replace app.block_info().height with app.block_info().time.seconds() when building QueryMsg::VerifyVAA, and update the other VerifyVAA calls in this file the same way so expiry logic matches production.

`contracts/wormhole/testing/utils.rs` (1)
`22-38`: ⚠️ Potential issue | 🟠 Major

Use the supplied keeper when producing version-2 fixtures. `WormholeKeeper::new()` always signs with the default guardian set, so this helper can return a VAA whose signatures do not match `wh.guardian_set_index()`. Any test that passes custom guardians gets a malformed fixture.

🛠️ Minimal fix

```diff
 pub fn sign_vaa_body_version_2<P: Serialize>(
     wh: WormholeKeeper,
     body: Body<P>,
 ) -> (Vaa<P>, Binary) {
     let data = serde_wormhole::to_vec(&body).unwrap();
-    let signatures = WormholeKeeper::new().sign(&data);
+    let signatures = wh.clone().sign(&data);

     let header = Header {
         version: 2,
         guardian_set_index: wh.guardian_set_index(),
         signatures,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/testing/utils.rs` around lines 22 - 38: sign_vaa_body_version_2 creates signatures with WormholeKeeper::new(), which uses the default guardian set and can mismatch the supplied wh.guardian_set_index(); sign with the provided keeper instead (wh.sign(&data)) so the signatures are produced by the same WormholeKeeper instance passed into the helper.

`contracts/wormhole/src/contract.rs` (1)
`126-127`: ⚠️ Potential issue | 🟠 Major

Don't unwrap raw module bytes here. `gov_packet.module` comes from the VAA payload. A non-UTF-8 32-byte module field will panic the contract before you can return `InvalidVAA`, and removing all NULs also accepts non-canonical strings with embedded zeros. Decode fallibly and trim only trailing padding.

🛠️ Minimal fix

```diff
-let module = String::from_utf8(gov_packet.module).unwrap();
-let module: String = module.chars().filter(|c| c != &'\0').collect();
+let module = String::from_utf8(gov_packet.module)
+    .map_err(|_| ContractError::InvalidVAA.std_err())?;
+let module = module.trim_end_matches('\0');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/contract.rs` around lines 126 - 127: replace the blind unwrap of gov_packet.module bytes with a fallible UTF-8 decode and trim only trailing NUL padding. Attempt the conversion with String::from_utf8 and return ContractError::InvalidVAA on error, then call trim_end_matches('\0') to strip trailing padding instead of filtering out all NULs.

`contracts/wormhole/src/state.rs` (1)
`261-265`: ⚠️ Potential issue | 🟠 Major

Bounds-check the governance-packet header before slicing the payload. `data[35..]` panics on a short payload. This path runs on VAA input, so malformed governance packets should return `InvalidVAA`, not trap the contract.

🛠️ Minimal fix

```diff
 impl GovernancePacket {
     pub fn deserialize(data: &[u8]) -> StdResult<Self> {
+        if data.len() < 35 {
+            return ContractError::InvalidVAA.std_err();
+        }
         let module = data.get_bytes32(0).to_vec();
         let action = data.get_u8(32);
         let chain = data.get_u16(33);
         let payload = data[35..].to_vec();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/state.rs` around lines 261 - 265: validate that the buffer is at least 35 bytes before slicing the payload to avoid panics. Check data.len() >= 35 before calling data.get_bytes32/get_u8/get_u16 and before using data[35..]; if the check fails, return InvalidVAA via the function's StdResult so malformed governance packets produce an error instead of trapping.
```bash
if [[ -z "$AKASH_HOME" ]]; then
  echo "AKASH_HOME is not set"
  exit 1
fi

if [[ -z "$KEYS" ]]; then
  echo "KEYS is not set"
  exit 1
fi

if [[ -z "$MNEMONIC" ]]; then
  echo "MNEMONIC is not set"
  exit 1
fi

if [[ -z "$CONTRACTS_DIR" ]]; then
  echo "CONTRACTS_DIR is not set"
  exit 1
fi
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
set -euo pipefail
# Detect direct unset-sensitive checks under `set -u`.
rg -n '\[\[ -z "\$[A-Z_][A-Z0-9_]*" \]\]' _run/init.sh
```

Repository: akash-network/node
Length of output: 195

🏁 Script executed:

```bash
#!/bin/bash
# Check the shebang and initial lines including set statement
head -n 30 _run/init.sh
```

Repository: akash-network/node
Length of output: 708

🏁 Script executed:

```bash
#!/bin/bash
# Check lines around 363 for SIGINT behavior
sed -n '355,368p' _run/init.sh
```

Repository: akash-network/node
Length of output: 405

🏁 Script executed:

```bash
#!/bin/bash
# Verify the Bash behavior: test set -u expansion difference
cat > /tmp/test_set_u.sh << 'EOF'
#!/bin/bash
set -euo pipefail
# Test 1: Direct expansion fails
echo "Test 1: Direct expansion with set -u"
if [[ -z "$UNDEFINED_VAR" ]]; then
  echo "Variable is unset (should not reach here)"
fi
EOF
bash /tmp/test_set_u.sh 2>&1 || echo "Exit code: $?"
```

Repository: akash-network/node
Length of output: 171

🏁 Script executed:

```bash
#!/bin/bash
# Test 2: Using ${VAR:-} syntax
cat > /tmp/test_var_expansion.sh << 'EOF'
#!/bin/bash
set -euo pipefail
# Test 2: Using ${VAR:-} to avoid set -u failure
echo "Test 2: Using \${VAR:-} with set -u"
if [[ -z "${UNDEFINED_VAR:-}" ]]; then
  echo "Variable is unset (custom message shown)"
fi
EOF
bash /tmp/test_var_expansion.sh 2>&1 && echo "Success"
```

Repository: akash-network/node
Length of output: 145
Fix env-var guards for set -u behavior.
With set -euo pipefail active, lines 5, 10, 15, and 20 will fail with "unbound variable" error before your custom error message is printed if any variable is unset. Use ${VAR:-} syntax instead of $VAR to safely expand variables under set -u.
Proposed fix

```diff
-if [[ -z "$AKASH_HOME" ]]; then
+if [[ -z "${AKASH_HOME:-}" ]]; then

-if [[ -z "$KEYS" ]]; then
+if [[ -z "${KEYS:-}" ]]; then

-if [[ -z "$MNEMONIC" ]]; then
+if [[ -z "${MNEMONIC:-}" ]]; then

-if [[ -z "$CONTRACTS_DIR" ]]; then
+if [[ -z "${CONTRACTS_DIR:-}" ]]; then
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@_run/init.sh` around lines 5 - 23, The guard checks for AKASH_HOME, KEYS,
MNEMONIC, and CONTRACTS_DIR will crash under set -u because direct expansions
like "$AKASH_HOME" produce an unbound-variable error; update each if-test in
init.sh (the four if [[ -z "$VAR" ]]; then ... fi blocks) to use the safe
expansion form ${VAR:-} (e.g., if [[ -z "${AKASH_HOME:-}" ]]; then ...) so your
custom error messages run even when variables are unset.
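The behaviour behind this finding is easy to reproduce in isolation. A standalone sketch (not taken from `_run/init.sh`; `DEMO_VAR` is a made-up name):

```shell
#!/bin/bash
# Under `set -u`, expanding an unset variable directly aborts the shell,
# so a guard like [[ -z "$VAR" ]] never reaches its custom error message.
# The ${VAR:-} form substitutes an empty string instead and stays safe.
set -u
unset DEMO_VAR

# Safe form: expands to "" when DEMO_VAR is unset.
if [[ -z "${DEMO_VAR:-}" ]]; then
  echo "DEMO_VAR is not set"
fi

# Unsafe form, run in a subshell so this script survives the abort.
(: "$DEMO_VAR") 2>/dev/null || echo "direct expansion aborted under set -u"
```

Both branches print: the guard with `${DEMO_VAR:-}` reaches its message, while the direct expansion makes the subshell exit non-zero.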
```bash
kill -SIGINT $NODE_PID
# Keep the script running (wait for node process)
wait $NODE_PID
```
Remove forced SIGINT that shuts the node down immediately.
Line 363 kills the background node right before wait, so the script does not actually keep the node running.
🐛 Proposed fix

```diff
-kill -SIGINT $NODE_PID
-# Keep the script running (wait for node process)
+# Keep the script running (wait for node process)
 wait $NODE_PID
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@_run/init.sh` around lines 363 - 365, The script currently sends kill -SIGINT
$NODE_PID just before wait $NODE_PID which immediately terminates the background
Node process instead of letting the script wait; remove the forced kill and
simply wait "$NODE_PID" (or, better, install a trap that forwards signals to
$NODE_PID and only kills it on script exit) so that the background process
(referenced by NODE_PID) is allowed to run and the script properly waits for it.
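The intended shape of the fix can be sketched as a standalone script (illustrative only, not the actual `init.sh`; `sleep` stands in for the node process):

```shell
#!/bin/bash
set -euo pipefail

# Start a stand-in for the node process in the background.
sleep 1 &
NODE_PID=$!

# Forward termination signals to the child instead of killing it up front.
trap 'kill -SIGINT "$NODE_PID" 2>/dev/null || true' INT TERM

# Keep the script running until the child exits on its own.
wait "$NODE_PID"
echo "child exited cleanly"
```

With the unconditional `kill` removed, `wait` blocks for the child's full lifetime and the trap only signals it when the wrapper itself is told to stop.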
```bash
AKASH_CHAIN_ID="${AKASH_CHAIN_ID:=testnet-8}"
AKASH_NODE="${AKASH_NODE:=https://testnetrpc.akashnet.net:443}"
AKASH_KEYRING_BACKEND="${AKASH_KEYRING_BACKEND:=test}"
AKASH_FROM="${AKASH_FROM:=price-feeder}"
UPDATE_INTERVAL=10 # seconds between updates

# Pyth configuration
AKT_PYTH_FEED_ID="4ea5bb4d2f5900cc2e97ba534240950740b4d3b89fe712a94a7304fd2fd92702"
PYTH_API="https://hermes.pyth.network/api/latest_price_feeds"

# Logging
LOG_FILE="$AKASH_RUN_DIR/price-feeder.log"
MAX_LOG_SIZE=10485760 # 10MB
```
🧩 Analysis chain
🏁 Script executed:

```bash
# Find and read the price-feeder script
fd "price-feeder.sh" --type f
```

Repository: akash-network/node
Length of output: 87

🏁 Script executed:

```bash
# Read the price-feeder.sh script to verify the claims
cat -n "_run/node/price-feeder.sh"
```

Repository: akash-network/node
Length of output: 8402

🏁 Script executed:

```bash
# Verify akash CLI is actually invoked and search for how it uses env vars
grep -n "^[[:space:]]*akash" "_run/node/price-feeder.sh"
```

Repository: akash-network/node
Length of output: 44

🏁 Script executed:

```bash
# Search for akash invocations more broadly
rg "akash\s" "_run/node/price-feeder.sh" -A 2
```

Repository: akash-network/node
Length of output: 725

🏁 Script executed:

```bash
# Verify if AKASH_RUN_DIR is expected to be set externally or if there's any initialization
head -30 "_run/node/price-feeder.sh" | tail -20
```

Repository: akash-network/node
Length of output: 758
Export the CLI environment variables and guard AKASH_RUN_DIR before first use.
With set -u, line 24 immediately exits if AKASH_RUN_DIR is unset. Additionally, lines 13–16 create shell-local variables only; akash subprocesses (lines 65, 71, 79–80, 141) won't inherit the default values unless they are exported. This causes the script to fail during initialization or silently ignore the configured defaults.
Suggested fix

```diff
-AKASH_CHAIN_ID="${AKASH_CHAIN_ID:=testnet-8}"
-AKASH_NODE="${AKASH_NODE:=https://testnetrpc.akashnet.net:443}"
-AKASH_KEYRING_BACKEND="${AKASH_KEYRING_BACKEND:=test}"
-AKASH_FROM="${AKASH_FROM:=price-feeder}"
+export AKASH_CHAIN_ID="${AKASH_CHAIN_ID:-testnet-8}"
+export AKASH_NODE="${AKASH_NODE:-https://testnetrpc.akashnet.net:443}"
+export AKASH_KEYRING_BACKEND="${AKASH_KEYRING_BACKEND:-test}"
+export AKASH_FROM="${AKASH_FROM:-price-feeder}"
 UPDATE_INTERVAL=10 # seconds between updates
@@
-LOG_FILE="$AKASH_RUN_DIR/price-feeder.log"
+: "${AKASH_RUN_DIR:?AKASH_RUN_DIR must be set}"
+mkdir -p "$AKASH_RUN_DIR"
+LOG_FILE="$AKASH_RUN_DIR/price-feeder.log"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@_run/node/price-feeder.sh` around lines 13 - 25, The script currently uses
AKASH_RUN_DIR before ensuring it's set and defines CLI defaults as shell-local
variables so child processes (akash) don't inherit them; guard AKASH_RUN_DIR by
validating or initializing it before first use (check/set AKASH_RUN_DIR) and
export the CLI environment variables AKASH_CHAIN_ID, AKASH_NODE,
AKASH_KEYRING_BACKEND, AKASH_FROM (and any other defaults like UPDATE_INTERVAL,
AKT_PYTH_FEED_ID, PYTH_API) so subprocesses inherit them; ensure this export
happens right after their assignment and before any use of AKASH_RUN_DIR or
invocations of akash in functions referenced around lines that call akash (e.g.,
the commands at lines invoking akash).
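Why `export` matters here: child processes only inherit exported variables. A minimal sketch (standalone; the variable names are illustrative):

```shell
#!/bin/bash
set -euo pipefail

# Shell-local assignment: invisible to child processes such as the akash CLI.
AKASH_LOCAL_ONLY="testnet-8"

# Exported assignment: inherited by every subprocess.
export AKASH_CHAIN_ID="testnet-8"

bash -c 'echo "local=${AKASH_LOCAL_ONLY:-<unset>} exported=${AKASH_CHAIN_ID:-<unset>}"'
```

The child shell sees only the exported variable, which is why the script's defaults silently vanish for `akash` subprocesses unless they are exported.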
```go
wasmConfig, err := wasm.ReadNodeConfig(appOpts)
if err != nil {
	panic(fmt.Sprintf("error while reading wasm config: %s", err))
}

// Memory limits - prevent DoS
wasmConfig.MemoryCacheSize = 100 // 100 MB max
// Query gas limit - prevent expensive queries
wasmConfig.SmartQueryGasLimit = 3_000_000
// Debug mode - MUST be false in production
// Uncomment this for debugging contracts. In the future this could be made into a param passed by the tests
wasmConfig.ContractDebugMode = true
```
Critical: ContractDebugMode is enabled in production code.
Line 160 sets wasmConfig.ContractDebugMode = true despite the comment explicitly stating "MUST be false in production." This will be deployed to mainnet and could expose sensitive contract debugging information, potentially aiding attackers in exploiting contract vulnerabilities.
🐛 Proposed fix

```diff
-// Debug mode - MUST be false in production
-// Uncomment this for debugging contracts. In the future this could be made into a param passed by the tests
-wasmConfig.ContractDebugMode = true
+// Debug mode - MUST be false in production
+// Enable via environment variable or test flag only
+wasmConfig.ContractDebugMode = false
```

Consider making this configurable via an environment variable or build tag for test environments instead of hardcoding `true`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/app.go` around lines 149 - 160, The code sets
wasmConfig.ContractDebugMode = true which enables contract debug mode in
production; change this to false by default and make it configurable: read an
environment variable (e.g., WASM_CONTRACT_DEBUG) or a command-line option from
appOpts and set wasmConfig.ContractDebugMode accordingly, defaulting to false;
update the initialization around wasm.ReadNodeConfig and wasmConfig to parse the
env/flag and assign that boolean so tests can opt-in while mainnet remains safe.
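The suggested opt-in pattern, sketched in shell form (the real change would read the flag from `appOpts` in Go; `WASM_CONTRACT_DEBUG` is a hypothetical variable name):

```shell
#!/bin/sh
# Default to disabled; only an explicit opt-in enables debug mode.
WASM_CONTRACT_DEBUG="${WASM_CONTRACT_DEBUG:-false}"

if [ "$WASM_CONTRACT_DEBUG" = "true" ]; then
  echo "contract debug mode: enabled"
else
  echo "contract debug mode: disabled"
fi
```

The key property is the default: with the variable unset, the toggle resolves to `false`, so mainnet builds stay safe while tests can opt in explicitly.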
```rust
if price_message_data.is_some() {
    // For PNAU format, verify emitter is Pythnet (chain 26)
    if verified_vaa.emitter_chain != 26 {
        return Err(ContractError::InvalidDataSource {
            emitter_chain: verified_vaa.emitter_chain,
            emitter_address: hex::encode(&verified_vaa.emitter_address),
        });
    }
```
Chain 26 alone is not enough for accumulator-source validation.
In the PNAU branch, any Wormhole-signed VAA from another emitter on Pythnet can supply its own Merkle root and make the proof check pass. You still need to pin the expected accumulator emitter address, not just emitter_chain == 26.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/pyth/src/contract.rs` around lines 187 - 194, In the PNAU branch
(inside the price_message_data.is_some() block) add a strict check that
verified_vaa.emitter_address matches the pinned Pyth accumulator emitter address
in addition to verified_vaa.emitter_chain == 26; if it does not match, return
ContractError::InvalidDataSource with the emitter_chain and
hex::encode(&verified_vaa.emitter_address). Use a constant (e.g.,
PNAU_ACCUMULATOR_EMITTER or similar) for the expected address and compare the
raw bytes (or their hex) against verified_vaa.emitter_address to ensure only the
known accumulator can supply the Merkle root.
```rust
pub fn quorum(&self) -> usize {
    // allow quorum of 0 for testing purposes...
    if self.addresses.is_empty() {
        return 0;
    }
    ((self.addresses.len() * 10 / 3) * 2) / 10 + 1
}
```
Do not let an empty guardian set produce quorum 0.
With this branch, parse_and_verify_vaa accepts a VAA with len_signers == 0: the quorum check passes and the signature loop never runs. Empty guardian sets should be rejected, not treated as valid test-only state.
🔒 Suggested fix

```diff
 impl GuardianSetInfo {
     pub fn quorum(&self) -> usize {
-        // allow quorum of 0 for testing purposes...
-        if self.addresses.is_empty() {
-            return 0;
-        }
-        ((self.addresses.len() * 10 / 3) * 2) / 10 + 1
+        ((self.addresses.len() * 2) / 3) + 1
     }
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/wormhole/src/state.rs` around lines 209 - 215, The quorum() method
currently returns 0 for an empty guardian set which lets parse_and_verify_vaa
accept VAAs with len_signers == 0; remove the test-only special-case so empty
addresses do not produce quorum 0. Update quorum() (the function name) to always
compute the quorum from self.addresses.len() using the existing formula (or
ensure a minimum of 1) instead of returning 0 when self.addresses.is_empty(), so
an empty guardian set will fail the quorum check in parse_and_verify_vaa and the
signature loop will not be skipped.
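For non-empty sets the suggested formula matches the original one for the sizes checked below; the behavioural change is only that an empty set no longer yields quorum 0. A quick integer-arithmetic check (standalone sketch):

```shell
#!/bin/sh
# Compare the contract's quorum formula with the plain 2/3 + 1 form
# across several guardian-set sizes, using integer arithmetic throughout.
for n in 1 2 3 4 5 7 13 19; do
  old=$(( ((n * 10 / 3) * 2) / 10 + 1 ))
  new=$(( (n * 2) / 3 + 1 ))
  echo "n=$n old=$old new=$new"
done
```

For example, a 19-guardian set needs 13 signatures under either formula; only the `len == 0` short-circuit disappears.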
```go
package node

// #cgo LDFLAGS: -Wl,-rpath,${SRCDIR}/.cache/lib -L${SRCDIR}/.cache/lib
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check if there are other files in the package with import "C" that might cause this file's directives to be picked up
fd -e go -x grep -l 'import "C"' {} \; 2>/dev/null | head -20
# Check if wasmvm library exists in expected location
echo "Checking .cache/lib directory:"
ls -la .cache/lib 2>/dev/null || echo ".cache/lib not found (may be created during build)"
# Check go.mod for wasmvm dependency
grep -i wasmvm go.mod 2>/dev/null || echo "wasmvm not found in go.mod"

Repository: akash-network/node
Length of output: 181
🏁 Script executed:
# Find all Go files in the node package directory
fd -e go node/
# Check the full content of wasmvm.go to understand its context
cat -n node/wasmvm.go
# Look for other files in the same directory
ls -la node/ 2>/dev/null | grep -E '\.(go|c|h)$'

Repository: akash-network/node
Length of output: 108
🏁 Script executed:
# Search for wasmvm.go in the repository
fd wasmvm.go
# Also search for any go files with wasmvm in the name
fd -e go | grep -i wasmvm

Repository: akash-network/node
Length of output: 81
🏁 Script executed:
# Read the wasmvm.go file
cat -n wasmvm.go
# Check the size first
wc -l wasmvm.go
# List Go files in the root directory
ls -la *.go | head -20

Repository: akash-network/node
Length of output: 417
🏁 Script executed:
# Read the other Go files to see if they have import "C"
cat -n cgo_trace.go
echo "---"
cat -n contracts.go
echo "---"
cat -n tools.go

Repository: akash-network/node
Length of output: 653
🌐 Web query:
Go cgo LDFLAGS directives import "C" requirement processing
💡 Result:
- `#cgo ...` directives (including `#cgo LDFLAGS:`) are only recognized by the Go toolchain as part of cgo processing, which is triggered when a Go file uses the special `import "C"` form. When the Go tool sees one or more files in a package import "C", it enables cgo for that package and compiles any eligible non-Go sources in that directory as part of the package build. [1]
- The directives must appear in the cgo preamble: the comment immediately preceding `import "C"` (with no blank line between the comment and the import). [2]
- Flag aggregation rules:
  - Within a package, all `#cgo` CFLAGS/CPPFLAGS/etc. directives found in that package are concatenated for compiling that package's C/C++/Fortran sources. [1]
  - All `#cgo LDFLAGS` directives across any package in the final program are concatenated and applied at link time. [1]
Minimal pattern

```go
package foo

// #cgo LDFLAGS: -L${SRCDIR}/libs -lfoo
// #include "foo.h"
import "C"
```

`${SRCDIR}` is expanded to the directory containing the source file when directives are parsed. [1]
Sources
[1] Go cmd/cgo documentation (pkg.go.dev)
[2] Go Wiki: cgo (tip.golang.org)
🏁 Script executed:
# Search comprehensively for import "C" across all Go files
rg 'import\s+"C"' -A 2 -B 2
# Also check if there are other directories that might be part of the node package
find . -type d -name node
# List structure of root to understand package organization
ls -la | grep -E '^d' | head -20

Repository: akash-network/node
Length of output: 1237
Add missing import "C" to enable cgo directive processing.
The #cgo LDFLAGS directive on line 3 will be ignored by the Go toolchain without an import "C" statement in the same package. According to Go's cgo specification, #cgo directives are only recognized when cgo processing is enabled, which requires the import "C" form.
Proposed fix

```diff
 package node

 // #cgo LDFLAGS: -Wl,-rpath,${SRCDIR}/.cache/lib -L${SRCDIR}/.cache/lib
+import "C"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@wasmvm.go` around lines 1 - 3, The file declares a cgo LDFLAGS directive but
is missing the required import "C" to enable cgo processing; add the import "C"
statement in package node (e.g., alongside the existing package declaration) so
the `#cgo LDFLAGS: -Wl,-rpath,${SRCDIR}/.cache/lib -L${SRCDIR}/.cache/lib`
directive is recognized by the Go toolchain and processed correctly.
dbb001f to 3d48390 (Compare)
Actionable comments posted: 7
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
.github/workflows/tests.yaml (1)
`84-92`: ⚠️ Potential issue | 🟠 Major

Missing `build-contracts` step in the `lint-go` job. The static analysis failure (`pattern .cache/cosmwasm/artifacts/*.wasm: no matching files found`) originates from this job. Since `contracts.go` uses `//go:embed`, which requires the artifacts to exist at compile time, and `golangci-lint` performs type checking, this job needs `build-contracts` to run before linting. Other jobs (tests, coverage, sims, build-bins, build-macos) correctly include `build-contracts`.

🐛 Proposed fix

```diff
 lint-go:
   runs-on: ubuntu-latest
   steps:
     - name: Checkout code
       uses: actions/checkout@v4
     - name: Setup environment
       uses: ./.github/actions/setup-ubuntu
     - run: make deps-tidy
+    - run: make build-contracts
     - run: make lint-go
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/tests.yaml around lines 84 - 92, The lint-go CI job is missing the build-contracts step causing golangci-lint typechecking to fail due to //go:embed artifacts not being present; update the lint-go job to run the existing build-contracts step before running make deps-tidy and make lint-go (i.e., add a step that invokes the same build-contracts action/command used by tests/coverage/sims/build-bins/build-macos) so that contracts are built and embedded artifacts exist when golangci-lint runs.
♻️ Duplicate comments (10)
contracts/wormhole-old/src/querier.rs (1)
8-22: ⚠️ Potential issue | 🔴 Critical

Expose guardian-set identity in this query API.

This still only exposes an anonymous current set. VAA verification needs the `guardian_set_index` from the query path and response so it can verify against the set the VAA claims; otherwise rotations break historical verification and mismatched indices are never rejected.

Suggested API shape

 pub enum AkashQuery {
     /// Query the Wormhole guardian set from x/oracle params
-    GuardianSet {},
+    GuardianSet { index: Option<u32> },
 }
@@
 pub struct GuardianSetResponse {
+    /// Guardian set index returned by the oracle query
+    pub guardian_set_index: u32,
     /// List of guardian addresses (20 bytes each, base64 encoded)
     pub addresses: Vec<GuardianAddressResponse>,
     /// When this guardian set expires (0 = never)
     pub expiration_time: u64,
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole-old/src/querier.rs` around lines 8 - 22, The GuardianSet query currently returns only anonymous addresses; change the API to accept and return the guardian set index so callers can request and verify a specific set: update the AkashQuery::GuardianSet variant to carry a guardian_set_index parameter (e.g., GuardianSet { guardian_set_index: u32 }) and add a guardian_set_index field to GuardianSetResponse (e.g., pub guardian_set_index: u32 or u64) so responses include the index alongside addresses and expiration_time; adjust any matching serialization/derivation attributes and update comments to reflect the new field.

tests/e2e/pyth_contract_test.go (1)
103-121: ⚠️ Potential issue | 🔴 Critical

Use `submit_vaa`/`verify_vaa` JSON keys.

This is the same deserialization bug noted earlier: Line 104 and Line 120 still use `submit_v_a_a`/`verify_v_a_a`, which won't match the Wormhole contract's snake_case enum variants at runtime.

🔧 Proposed fix

 type WormholeExecuteMsg struct {
-	SubmitVAA   *SubmitVAAMsg   `json:"submit_v_a_a,omitempty"`
+	SubmitVAA   *SubmitVAAMsg   `json:"submit_vaa,omitempty"`
 	PostMessage *PostMessageMsg `json:"post_message,omitempty"`
 }
@@
 type WormholeQueryMsg struct {
 	GuardianSetInfo *struct{}           `json:"guardian_set_info,omitempty"`
-	VerifyVAA       *VerifyVAAQuery     `json:"verify_v_a_a,omitempty"`
+	VerifyVAA       *VerifyVAAQuery     `json:"verify_vaa,omitempty"`
 	GetState        *struct{}           `json:"get_state,omitempty"`
 	QueryAddressHex *QueryAddressHexMsg `json:"query_address_hex,omitempty"`
 }
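The tag fix can be verified mechanically; the structs below are hypothetical mirrors of the test types, and only the `json` tag shape matters:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Hypothetical mirror of the test helper's execute-msg wrapper; the field
// types are placeholders, the snake_case tag is the point.
type wormholeExecuteMsg struct {
	SubmitVAA *submitVAAMsg `json:"submit_vaa,omitempty"`
}

type submitVAAMsg struct {
	VAA string `json:"vaa"`
}

// encodeSubmitVAA shows the wire key the contract's snake_case enum expects.
func encodeSubmitVAA(vaa string) string {
	out, _ := json.Marshal(wormholeExecuteMsg{SubmitVAA: &submitVAAMsg{VAA: vaa}})
	return string(out)
}

func main() {
	fmt.Println(encodeSubmitVAA("AQAA")) // {"submit_vaa":{"vaa":"AQAA"}}
}
```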
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/pyth_contract_test.go` around lines 103 - 121, The JSON tags for the wormhole message enums are incorrect and should use snake_case "submit_vaa" and "verify_vaa": update the struct tags on WormholeExecuteMsg.SubmitVAA (currently `submit_v_a_a`) to `submit_vaa` and on WormholeQueryMsg.VerifyVAA (currently `verify_v_a_a`) to `verify_vaa` so the deserializer matches the Wormhole contract's variants; ensure the related types SubmitVAAMsg and VerifyVAAQuery remain the same and only the `json` tag strings are corrected.

contracts/pyth/src/contract.rs (1)
187-194: ⚠️ Potential issue | 🔴 Critical

Pin the accumulator emitter address, not just chain 26.
This still trusts any Pythnet emitter. In the accumulator path, another Wormhole-signed emitter on chain 26 can provide its own Merkle root and make the proof check pass against that root. The validation needs an exact emitter-address check for the known accumulator program.
What is the canonical Wormhole emitter address for Pyth Hermes v2 / PNAU accumulator updates on Pythnet, and should contracts pin both emitter_chain and emitter_address when verifying those VAAs?🤖 Prompt for AI Agents
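Whatever the canonical value turns out to be, the check itself is a two-field comparison; a Go sketch with a placeholder emitter (not a verified mainnet constant):

```go
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
)

// Chain 26 is Pythnet per the review; the emitter below is a placeholder,
// not a verified mainnet constant.
const pythnetChainID = uint16(26)

var pinnedEmitter, _ = hex.DecodeString(
	"e101faedac5851e32b9b23b5f9411a8c2bac4aae3ed4dd7b811dd1a72ea4aa71")

// trustedSource requires both the chain and the exact emitter to match.
func trustedSource(chain uint16, emitter []byte) bool {
	return chain == pythnetChainID && bytes.Equal(emitter, pinnedEmitter)
}

func main() {
	other, _ := hex.DecodeString(
		"00000000000000000000000000000000000000000000000000000000deadbeef")
	fmt.Println(trustedSource(26, pinnedEmitter)) // true
	fmt.Println(trustedSource(26, other))         // false: right chain, wrong emitter
}
```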
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs` around lines 187 - 194, The check for PNAU accumulator VAAs only verifies verified_vaa.emitter_chain == 26 and must also pin the exact emitter address; update the branch where price_message_data.is_some() is handled to compare verified_vaa.emitter_address against the canonical Pyth Hermes v2 / PNAU accumulator emitter bytes (add a constant like ACCUMULATOR_EMITTER or PNAU_ACCUMULATOR_EMITTER) and return ContractError::InvalidDataSource (keeping the existing emitter_chain and hex::encode of verified_vaa.emitter_address) if it does not match; ensure the constant is the correct Wormhole emitter address for Pythnet accumulator updates and use it in the equality check so both emitter_chain and emitter_address are enforced.

cmd/akash/cmd/genesis.go (1)
235-235: ⚠️ Potential issue | 🔴 Critical

`sdkutil.DenomUExponent` does not exist — use `sdkutil.DenomUaktExponent`.

This was previously flagged: the `pkg.akt.dev/go/sdkutil` package exports `DenomUaktExponent = 6`, not `DenomUExponent`. This will cause a compilation error.

🐛 Proposed fix

 {
 	Denom:    sdkutil.DenomAkt,
-	Exponent: sdkutil.DenomUExponent,
+	Exponent: sdkutil.DenomUaktExponent,
 	Aliases:  nil,
 },

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/akash/cmd/genesis.go` at line 235, Replace the non-existent constant sdkutil.DenomUExponent with the correct exported constant sdkutil.DenomUaktExponent where the genesis denomination exponent is set (the code that assigns Exponent: sdkutil.DenomUExponent should be changed to Exponent: sdkutil.DenomUaktExponent); update any related imports or usages that reference the incorrect symbol to ensure compilation succeeds.

_run/node/prop.json (1)
24-25: ⚠️ Potential issue | 🔴 Critical

JSON syntax error: missing comma between `"title"` and `"summary"` fields.

This was previously flagged and remains unfixed. The missing comma will cause JSON parse failures.

🐛 Proposed fix

-  "title": "Add Oracle Price Feeder Source"
+  "title": "Add Oracle Price Feeder Source",
   "summary": "Authorize price feeder address for AKT/USD oracle",
Verify each finding against the current code and only fix it if needed. In `@_run/node/prop.json` around lines 24 - 25, The JSON in _run/node/prop.json has a syntax error: the "title" and "summary" properties are adjacent without a comma. Fix the object literal by adding a comma after the "title" field (i.e., insert a comma between the "title": "Add Oracle Price Feeder Source" and "summary": "Authorize price feeder address for AKT/USD oracle" entries) so the JSON parses correctly.

app/app.go (1)
158-160: ⚠️ Potential issue | 🔴 Critical

`ContractDebugMode` is enabled in production code.

Line 160 sets `wasmConfig.ContractDebugMode = true` despite the comment explicitly stating "MUST be false in production." This could expose sensitive contract debugging information in production, potentially aiding attackers.

🐛 Proposed fix

 // Debug mode - MUST be false in production
-// Uncomment this for debugging contracts. In the future this could be made into a param passed by the tests
-wasmConfig.ContractDebugMode = true
+// Enable via environment variable for test environments only
+wasmConfig.ContractDebugMode = os.Getenv("WASM_DEBUG_MODE") == "true"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/app.go` around lines 158 - 160, The code enables contract debug mode in production by setting wasmConfig.ContractDebugMode = true; change this to false (or make it configurable via an env/config flag) so ContractDebugMode is disabled by default in production, e.g., ensure wasmConfig.ContractDebugMode is set to false or driven by a secure runtime config check before being enabled.

make/test-integration.mk (1)
21-23: ⚠️ Potential issue | 🟠 Major

`test-integration` should depend on `wasmvm-libs` like other test targets.

All other test targets (`test`, `test-nocache`, `test-full`, `test-coverage`, `test-vet`) depend on `wasmvm-libs`, but `test-integration` does not. This inconsistency could cause integration tests to fail when wasmvm libraries aren't available.

🛠️ Suggested fix

 .PHONY: test-integration
-test-integration:
+test-integration: wasmvm-libs
 	$(GO_TEST) -v -tags="e2e.integration" -ldflags '$(ldflags)' ./tests/e2e/...

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@make/test-integration.mk` around lines 21 - 23, The test-integration make target currently lacks the wasmvm-libs prerequisite; update the target declaration so it depends on wasmvm-libs (e.g., change "test-integration:" to "test-integration: wasmvm-libs") so the wasmvm libraries are built before running the integration command in the test-integration target.

app/sim_test.go (1)
30-31: ⚠️ Potential issue | 🔴 Critical

Drop the second `types/simulation` import.

Go does not allow the same package path to be imported twice under different aliases. This file will not build until `simtypes` is removed and `setupSimulationApp` returns `sdksim.Config` instead.

#!/bin/bash
# Expect exactly one match after the fix.
rg -n '"github.com/cosmos/cosmos-sdk/types/simulation"' app/sim_test.go

Also applies to: 526-526
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/sim_test.go` around lines 30 - 31, Remove the duplicate import alias by deleting simtypes ("github.com/cosmos/cosmos-sdk/types/simulation") and update the setupSimulationApp function signature and any related return/usage to use sdksim.Config (the remaining alias) instead of simtypes.Config so only one import of the simulation package remains; adjust any references inside setupSimulationApp and its callers to sdksim.Config to ensure the file compiles.

upgrades/software/v2.0.0/deployment.go (2)
204-223: ⚠️ Potential issue | 🔴 Critical

Propagate bank keeper failures from the burn/mint sequence.

Each failure path currently returns `nil`, so a failed send/burn/mint looks like a successful upgrade and can leave denom accounting half-applied.

🐛 Proposed fix

 if mintCoin.IsGT(sdk.NewCoin(sdkutil.DenomUact, sdkmath.ZeroInt())) {
 	err = up.Keepers.Cosmos.Bank.SendCoinsFromModuleToModule(ctx, emodule.ModuleName, bmemodule.ModuleName, sdk.Coins{burnCoin})
 	if err != nil {
-		return nil
+		return fmt.Errorf("send burn coins to bme module: %w", err)
 	}
 	err = up.Keepers.Cosmos.Bank.BurnCoins(ctx, bmemodule.ModuleName, sdk.Coins{burnCoin})
 	if err != nil {
-		return nil
+		return fmt.Errorf("burn escrow migration coins: %w", err)
 	}
 	err = up.Keepers.Cosmos.Bank.MintCoins(ctx, bmemodule.ModuleName, sdk.Coins{mintCoin})
 	if err != nil {
-		return nil
+		return fmt.Errorf("mint migrated uact coins: %w", err)
 	}
 	err = up.Keepers.Cosmos.Bank.SendCoinsFromModuleToModule(ctx, bmemodule.ModuleName, emodule.ModuleName, sdk.Coins{mintCoin})
 	if err != nil {
-		return nil
+		return fmt.Errorf("send minted uact coins back to escrow: %w", err)
 	}
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/deployment.go` around lines 204 - 223, The current upgrade deployment block swallows errors by returning nil after Bank operations (up.Keepers.Cosmos.Bank.SendCoinsFromModuleToModule, BurnCoins, MintCoins), which hides failures; change each error path to return the actual error (or a wrapped error with context) instead of nil so failures propagate to the caller—include contextual info such as burnCoin/mintCoin and module names (emodule.ModuleName, bmemodule.ModuleName) in the returned error to aid debugging.
126-128: ⚠️ Potential issue | 🔴 Critical

Use `||` in both orphan escrow filters.

With `&&`, these passes still accept records when either the scope or the state matches. The migration only wants deployment-scoped payments/accounts that are open or overdrawn.

🐛 Proposed fix

-if p.ID.AID.Scope != eid.ScopeDeployment && (p.State.State != etypes.StateOpen && p.State.State != etypes.StateOverdrawn) {
+if p.ID.AID.Scope != eid.ScopeDeployment || (p.State.State != etypes.StateOpen && p.State.State != etypes.StateOverdrawn) {
 	return false
 }
...
-if acc.ID.Scope != eid.ScopeDeployment && (acc.State.State != etypes.StateOpen && acc.State.State != etypes.StateOverdrawn) {
+if acc.ID.Scope != eid.ScopeDeployment || (acc.State.State != etypes.StateOpen && acc.State.State != etypes.StateOverdrawn) {
 	return false
 }

Also applies to: 153-156
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/deployment.go` around lines 126 - 128, The orphan-escrow filter currently uses && so it lets through records when either scope or state matches; update the closures used by ekeeper.WithPayments (closure taking p etypes.Payment) and the analogous ekeeper.WithAccounts/account closure to return false when the scope is not deployment OR the state is not open/overdrawn: replace the existing combined condition that uses && between the scope check (p.ID.AID.Scope != eid.ScopeDeployment / a.Account.ID.AID.Scope != eid.ScopeDeployment) and the state-check expression with || so the clause becomes: if <scope> != eid.ScopeDeployment || ( <state> != etypes.StateOpen && <state> != etypes.StateOverdrawn ) { return false }.
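The difference between `&&` and `||` in that guard is easy to demonstrate in isolation; a minimal sketch with illustrative field names:

```go
package main

import "fmt"

type record struct {
	scope string
	state string
}

// keep mirrors the intended filter: only deployment-scoped records that are
// open or overdrawn pass. With && in place of ||, a record matching either
// condition alone would slip through.
func keep(r record) bool {
	if r.scope != "deployment" || (r.state != "open" && r.state != "overdrawn") {
		return false
	}
	return true
}

func main() {
	fmt.Println(keep(record{"deployment", "open"}))   // true
	fmt.Println(keep(record{"deployment", "closed"})) // false: wrong state
	fmt.Println(keep(record{"provider", "open"}))     // false: wrong scope
}
```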
🟠 Major comments (20)
tests/upgrade/workers_test.go-180-263 (1)
180-263: ⚠️ Potential issue | 🟠 Major

Paginate these migration queries.

Every list call caps results at 1000 and ignores `NextKey`. Once the fixture grows past one page, this helper will silently skip deployments, orders, bids, leases, or payments and can report a false-positive migration.

Suggested pattern

 for key := []byte(nil); ; {
 	resp, err := mcl.Query().Market().Orders(ctx, &mvbeta.QueryOrdersRequest{
 		Filters: mvbeta.OrderFilters{State: mvbeta.OrderActive.String()},
 		Pagination: &sdkquery.PageRequest{
 			Limit: 1000,
 			Key:   key,
 		},
 	})
 	require.NoError(t, err)
 	// validate resp.Orders ...
 	if resp.Pagination == nil || len(resp.Pagination.NextKey) == 0 {
 		break
 	}
 	key = resp.Pagination.NextKey
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/upgrade/workers_test.go` around lines 180 - 263, The list queries in the test (mcl.Query().Deployment().Deployments, mcl.Query().Market().Orders, mcl.Query().Market().Bids, mcl.Query().Market().Leases, and mcl.Query().Escrow().Payments) only request Limit:1000 and ignore Pagination.NextKey; change each to iterate with a pagination loop: supply a PageRequest with Limit:1000 and Key set to the last response.Pagination.NextKey, validate items in each page (same assertions/assertDenomMigrated logic), and break when resp.Pagination is nil or NextKey is empty so no results are silently skipped.

tests/upgrade/workers_test.go-122-130 (1)
122-130: ⚠️ Potential issue | 🟠 Major

Assert the gov-path failure reason.

This currently passes on any proposal submission failure. With a fixed `1_000_000_000uakt` deposit, an underfunded proposer or a gov-param change can satisfy the test without proving the sender restriction. Assert the expected unauthorized error here as well.

Suggested assertion

 resp, err = mcl.Tx().BroadcastMsgs(ctx, []sdk.Msg{govMsg})
 require.Error(t, err)
 require.NotNil(t, resp)
 require.IsType(t, &sdk.TxResponse{}, resp)
+require.ErrorIs(t, err, sdkerrors.ErrUnauthorized)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/upgrade/workers_test.go` around lines 122 - 130, The test currently only requires an error on broadcasting the gov proposal but not the specific reason; update the assertion after calling mcl.Tx().BroadcastMsgs(ctx, []sdk.Msg{govMsg}) to verify the failure is an unauthorized sender error (i.e., that the gov path rejected the proposal due to sender not being the gov module account). Locate the block using govv1.NewMsgSubmitProposal and mcl.Tx().BroadcastMsgs and replace or augment the generic require.Error(t, err) with a specific check such as using sdkerrors.ErrUnauthorized.Is(err) or asserting strings.Contains(err.Error(), "unauthorized") (and keep require.NotNil(t, resp) / require.IsType(t, &sdk.TxResponse{}, resp) as needed).

tests/e2e/pyth_contract_test.go-890-915 (1)
890-915: ⚠️ Potential issue | 🟠 Major

Bind the code ID to this proposal instead of returning the latest code.

`GetCodeIDFromWasmEvent` ignores `proposalID` after the status check and then returns the last entry from `QueryCodes`. If another test or proposal stores code on the same network, Line 1115 can pick the wrong code ID and make the deployment flow flaky.

contracts/wormhole-old/src/byte_utils.rs-56-60 (1)
56-60: ⚠️ Potential issue | 🟠 Major

Add bounds checking for address length before padding.

Line 59 panics when `addr.as_slice().len() > 32` due to unchecked subtraction. Although `CanonicalAddr` typically represents 20-byte addresses, the type does not enforce this constraint, and the function is public. This should validate the input length and return an error or accept only sized address types.

Suggested fix

-pub fn extend_address_to_32(addr: &CanonicalAddr) -> Vec<u8> {
+pub fn extend_address_to_32(addr: &CanonicalAddr) -> StdResult<Vec<u8>> {
     let mut result = vec![0u8; 32];
     let addr_bytes = addr.as_slice();
-    let start = 32 - addr_bytes.len();
+    let start = 32
+        .checked_sub(addr_bytes.len())
+        .ok_or_else(|| StdError::generic_err("address longer than 32 bytes"))?;
     result[start..].copy_from_slice(addr_bytes);
-    result
+    Ok(result)
 }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole-old/src/byte_utils.rs` around lines 56 - 60, extend_address_to_32 currently subtracts 32 - addr.as_slice().len() without validation and will panic for inputs longer than 32; update the function to validate addr.as_slice().len() before padding and return a Result instead of panicking. Specifically, change extend_address_to_32 to return Result<Vec<u8>, E> (use an existing error type or add a small AddressTooLong error), check if addr_bytes.len() > 32 and return Err(...) in that case, otherwise perform the existing 32-byte padding and return Ok(result); reference the function extend_address_to_32 and the CanonicalAddr::as_slice() usage when making the change.contracts/wormhole-old/src/byte_utils.rs-3-11 (1)
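For comparison, the same bounds check is straightforward in Go; `extendTo32` is an illustrative analogue, not code from this repo:

```go
package main

import (
	"errors"
	"fmt"
)

// extendTo32 left-pads addr to 32 bytes and errors on oversized input rather
// than panicking, mirroring the checked_sub fix suggested for the Rust code.
func extendTo32(addr []byte) ([]byte, error) {
	if len(addr) > 32 {
		return nil, errors.New("address longer than 32 bytes")
	}
	out := make([]byte, 32)
	copy(out[32-len(addr):], addr)
	return out, nil
}

func main() {
	got, err := extendTo32(make([]byte, 20)) // typical 20-byte address
	fmt.Println(len(got), err)               // 32 <nil>

	_, err = extendTo32(make([]byte, 33)) // oversized input is rejected
	fmt.Println(err != nil)               // true
}
```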
contracts/wormhole-old/src/byte_utils.rs-3-11 (1)

3-11: ⚠️ Potential issue | 🟠 Major

Make the byte readers fallible and fix unchecked address length.

The public `ByteUtils` trait is infallible, but the implementation panics on short input at lines 16, 21, 27, 33, 39, 48, and 52. For parsing contract payloads, malformed bytes turn into an abort instead of a propagated error. Return `StdResult<_>` or `Option<_>` and use `checked_add` + `get(..)` in the impl.

Additionally, `extend_address_to_32` at line 59 performs unchecked subtraction (`32 - addr_bytes.len()`) that will panic if `CanonicalAddr` exceeds 32 bytes. Add a length check before the subtraction.

Suggested direction for trait methods

-use cosmwasm_std::CanonicalAddr;
+use cosmwasm_std::{CanonicalAddr, StdError, StdResult};

 pub trait ByteUtils {
-    fn get_u16(&self, index: usize) -> u16;
+    fn get_u16(&self, index: usize) -> StdResult<u16>;
     // same idea for the other accessors
 }

 impl ByteUtils for &[u8] {
-    fn get_u16(&self, index: usize) -> u16 {
+    fn get_u16(&self, index: usize) -> StdResult<u16> {
+        let end = index
+            .checked_add(2)
+            .ok_or_else(|| StdError::generic_err("byte index overflow"))?;
         let mut bytes = [0u8; 2];
-        bytes.copy_from_slice(&self[index..index + 2]);
-        u16::from_be_bytes(bytes)
+        bytes.copy_from_slice(
+            self.get(index..end)
+                .ok_or_else(|| StdError::generic_err("byte slice too short"))?,
+        );
+        Ok(u16::from_be_bytes(bytes))
     }
 }

And for `extend_address_to_32`:

 pub fn extend_address_to_32(addr: &CanonicalAddr) -> Vec<u8> {
     let mut result = vec![0u8; 32];
     let addr_bytes = addr.as_slice();
+    if addr_bytes.len() > 32 {
+        return result; // or handle error appropriately
+    }
     let start = 32 - addr_bytes.len();
     result[start..].copy_from_slice(addr_bytes);
     result
 }
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole-old/src/byte_utils.rs` around lines 3 - 11, Make ByteUtils fallible and guard all unchecked slice access: change the trait ByteUtils methods (get_u8, get_u16, get_u32, get_u64, get_u128, get_u256, get_bytes32, get_address) to return a fallible type (StdResult<...> or Option<...>), then update the implementation to stop panicking by using checked_add and slice.get(..)/get(range) to validate each range before converting bytes; convert existing unwraps/panics at the previous get_* offsets into early returns of Err/None. Also fix extend_address_to_32 to check addr_bytes.len() <= 32 before doing 32 - addr_bytes.len() (return an error if >32) and only then pad/copy into a 32-byte buffer, ensuring get_address uses that safe helper and returns the appropriate fallible type.

Cargo.toml-8-12 (1)
8-12: ⚠️ Potential issue | 🟠 Major

Lower the MSRV to match actual dependency requirements.

`rust-version = "1.93.0"` is unnecessarily restrictive. The primary dependency, `cosmwasm-std` 3.0.2, declares MSRV 1.82.0, and other dependencies are even less restrictive (e.g., `thiserror` 1.68, `schemars` 1.74, `k256` 1.65, `serde` 1.31). Consider lowering the workspace MSRV to 1.82.0 to avoid excluding contributors and CI environments that cannot run Rust 1.93.0 without legitimate reasons.
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` around lines 8 - 12, The workspace Cargo.toml currently pins the workspace MSRV via workspace.package.rust-version = "1.93.0", which is more restrictive than required; change this field to "1.82.0" (matching cosmwasm-std 3.0.2) so contributors and CI using older but supported toolchains aren't excluded—update the rust-version entry under the [workspace.package] section accordingly and run cargo check to verify no higher-Rust-only features are in use.

contracts/pyth/src/contract.rs-43-51 (1)
43-51: ⚠️ Potential issue | 🟠 Major

Validate `price_feed_id` and `data_sources` before saving config.

`update_config` can set `price_feed_id` to `""`, and both config paths accept arbitrary `emitter_address` strings. Because `DataSource::matches` treats decode failures as `false`, one bad admin update can silently make every raw-VAA update fail.

🛡️ Possible fix

+fn validate_data_source(ds: DataSourceMsg) -> Result<DataSource, ContractError> {
+    let emitter_address = ds.emitter_address.strip_prefix("0x").unwrap_or(&ds.emitter_address);
+    let decoded = hex::decode(emitter_address).map_err(|_| ContractError::InvalidPriceData {
+        reason: "data source emitter_address must be valid hex".to_string(),
+    })?;
+    if decoded.len() != 32 {
+        return Err(ContractError::InvalidPriceData {
+            reason: "data source emitter_address must be 32 bytes".to_string(),
+        });
+    }
+    Ok(DataSource {
+        emitter_chain: ds.emitter_chain,
+        emitter_address: hex::encode(decoded),
+    })
+}
+
 let data_sources: Vec<DataSource> = msg
     .data_sources
     .into_iter()
-    .map(|ds| DataSource {
-        emitter_chain: ds.emitter_chain,
-        emitter_address: ds.emitter_address,
-    })
-    .collect();
+    .map(validate_data_source)
+    .collect::<Result<_, _>>()?;
@@
 if let Some(feed_id) = price_feed_id {
+    if feed_id.is_empty() {
+        return Err(ContractError::InvalidPriceData {
+            reason: "price_feed_id is required".to_string(),
+        });
+    }
     config.price_feed_id = feed_id;
 }
@@
 if let Some(sources) = data_sources {
     config.data_sources = sources
         .into_iter()
-        .map(|ds| DataSource {
-            emitter_chain: ds.emitter_chain,
-            emitter_address: ds.emitter_address,
-        })
-        .collect();
+        .map(validate_data_source)
+        .collect::<Result<_, _>>()?;
 }

Also applies to: 361-372
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs` around lines 43 - 51, The config mapping currently accepts empty price_feed_id and arbitrary emitter_address strings which can break DataSource::matches (since it treats decode failures as false); in the update_config flow validate inputs before constructing/saving: ensure the price_feed_id (the field being set via update_config) is non-empty and enforce that each incoming emitter_address decodes/parses to the expected format (reject or return Err on failure) before creating the DataSource instances (the code that builds the data_sources Vec and the DataSource struct); apply the same validation in the other config path around the code block referenced near lines 361-372 so invalid emitter addresses or empty price_feed_id never get persisted.

contracts/pyth/src/pyth.rs-123-125 (1)
123-125: ⚠️ Potential issue | 🟠 Major

Normalize `price_feed_id` the same way in every parser.

`parse_single_attestation` returns bare hex, while the other two parsers prepend `0x`. `execute_update_price_feed` does an exact string compare against `config.price_feed_id`, so the legacy attestation path will reject the same feed ID that the PNAU/raw paths accept.

💡 Possible fix

+fn encode_feed_id(bytes: &[u8]) -> String {
+    format!("0x{}", hex::encode(bytes))
+}
+
 fn parse_single_attestation(attestation: &[u8]) -> Result<PythPrice, StdError> {
@@
-    let id = hex::encode(&attestation[32..64]);
+    let id = encode_feed_id(&attestation[32..64]);
@@
 pub fn parse_price_feed_message(data: &[u8]) -> Result<PythPrice, StdError> {
@@
-    let id = format!("0x{}", hex::encode(&data[1..33]));
+    let id = encode_feed_id(&data[1..33]);
@@
 fn parse_raw_price_update(payload: &[u8]) -> Result<PythPrice, StdError> {
@@
-    let id = format!("0x{}", hex::encode(&payload[0..32]));
+    let id = encode_feed_id(&payload[0..32]);

Also applies to: 215-216, 284-285
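A single normalization helper is the whole fix; a Go sketch of the same idea, with illustrative names:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeFeedID gives every parser the same canonical form: lowercase hex
// with a 0x prefix, so exact string comparison is safe across code paths.
func normalizeFeedID(id string) string {
	id = strings.TrimPrefix(strings.ToLower(id), "0x")
	return "0x" + id
}

func main() {
	fmt.Println(normalizeFeedID("ABCD"))   // 0xabcd
	fmt.Println(normalizeFeedID("0xabcd")) // 0xabcd: already canonical
}
```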
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/pyth.rs` around lines 123 - 125, parse_single_attestation returns the price_feed_id as bare hex while the other parsers include a "0x" prefix, causing execute_update_price_feed's exact compare against config.price_feed_id to fail; update the ID extraction in parse_single_attestation (the line using hex::encode(&attestation[32..64])) and the other parser functions that extract the same slice so they all produce the same normalized format (e.g., prepend "0x" to the encoded hex), ensuring execute_update_price_feed comparisons against config.price_feed_id succeed.

contracts/pyth/src/contract.rs-146-167 (1)
146-167: ⚠️ Potential issue | 🟠 Major

Don't assume the configured feed is always the first accumulator update.

The accumulator parser preserves all updates, but this branch always verifies and uses `price_updates[0]`. A valid Hermes batch will be rejected whenever the requested `config.price_feed_id` is present later in the list.

🎯 Possible fix

-// Get the first price update and verify its Merkle proof
-let price_update = &accumulator.price_updates[0];
-
-// Verify Merkle proof
-if !verify_merkle_proof(
-    &price_update.message_data,
-    &price_update.merkle_proof,
-    &accumulator.merkle_root,
-) {
-    return Err(ContractError::InvalidPriceData {
-        reason: "Merkle proof verification failed".to_string(),
-    });
-}
-
-(accumulator.vaa, Some(price_update.message_data.clone()))
+let price_update = accumulator
+    .price_updates
+    .iter()
+    .find(|update| {
+        parse_price_feed_message(&update.message_data)
+            .map(|price| price.id == config.price_feed_id)
+            .unwrap_or(false)
+    })
+    .ok_or_else(|| ContractError::InvalidPriceData {
+        reason: "configured price feed not found in accumulator".to_string(),
+    })?;
+
+if !verify_merkle_proof(
+    &price_update.message_data,
+    &price_update.merkle_proof,
+    &accumulator.merkle_root,
+) {
+    return Err(ContractError::InvalidPriceData {
+        reason: "Merkle proof verification failed".to_string(),
+    });
+}
+
+(accumulator.vaa, Some(price_update.message_data.clone()))
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs` around lines 146 - 167, The code incorrectly assumes the desired feed is at accumulator.price_updates[0]; instead, iterate accumulator.price_updates to locate the update whose message_data matches the configured feed id (config.price_feed_id), then run verify_merkle_proof against that matched price_update (using price_update.message_data, price_update.merkle_proof, accumulator.merkle_root); if none match or the proof fails, return the same ContractError, otherwise return (accumulator.vaa, Some(matched_price_update.message_data.clone())). Ensure you replace direct indexing of price_updates[0] with this search and verification logic.

contracts/wormhole/src/state.rs-241-250 (1)
241-250: ⚠️ Potential issue | 🟠 Major

Do not treat storage decode failures as cache misses.

`sequence_read()` and `vaa_archive_check()` collapse any `StdError` into `0`/`false`. After a migration or corrupted entry, that resets sequences and disables replay protection instead of failing closed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/state.rs` around lines 241 - 250, sequence_read and vaa_archive_check currently swallow storage decode errors by treating StdError as a cache miss; change both functions to return StdResult (e.g., pub fn sequence_read(...) -> StdResult<u64> and pub fn vaa_archive_check(...) -> StdResult<bool>) and propagate decoding errors instead of defaulting: use the ? operator on SEQUENCES.may_load(storage, emitter)? .unwrap_or(0) for sequence_read and VAA_ARCHIVE.may_load(storage, hash)? .unwrap_or(false) for vaa_archive_check, and update any callers to handle the StdResult accordingly so decode failures fail closed.

contracts/wormhole/testing/utils.rs-26-32 (1)
26-32: ⚠️ Potential issue | 🟠 Major

Sign with the caller's keeper here.

This helper uses `wh.guardian_set_index()` for the header but signs with a fresh `WormholeKeeper::new()`. That produces internally inconsistent VAAs whenever the caller passes custom guardians or a non-default index.

🛠️ Suggested fix

-let signatures = WormholeKeeper::new().sign(&data);
+let signatures = wh.clone().sign(&data);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/testing/utils.rs` around lines 26 - 32, The helper constructs a Header using wh.guardian_set_index() but signs the payload with a fresh WormholeKeeper (WormholeKeeper::new().sign(&data)), causing inconsistent VAAs; replace the fresh keeper call with the caller's keeper (use wh.keeper().sign(&data) or the equivalent method on the wh test helper) so the signatures come from the same guardian set/index used in Header creation.

contracts/wormhole/src/state.rs-334-338 (1)
334-338: ⚠️ Potential issue | 🟠 Major

Reject non-zero high bits in governance amounts.

`get_u256()` returns `(high, low)`, but both deserializers discard `high` and build the `Coin` from `low` only. Amounts above `u128::MAX` are silently truncated instead of raising `AmountTooHigh`. The error enum itself documents this constraint with "Only 128-bit amounts are supported".

🛠️ Suggested fix

 impl SetFee {
     pub fn deserialize(data: &[u8], fee_denom: String) -> StdResult<Self> {
-        let (_, amount) = data.get_u256(0);
+        let (high, amount) = data.get_u256(0);
+        if high != 0 {
+            return ContractError::AmountTooHigh.std_err();
+        }
         let fee = Coin {
             denom: fee_denom,
             amount: Uint256::from(amount),
@@
 impl TransferFee {
     pub fn deserialize(data: &[u8], fee_denom: String) -> StdResult<Self> {
         let recipient = data.get_address(0);
-        let (_, amount) = data.get_u256(32);
+        let (high, amount) = data.get_u256(32);
+        if high != 0 {
+            return ContractError::AmountTooHigh.std_err();
+        }
         let amount = Coin {
             denom: fee_denom,
             amount: Uint256::from(amount),

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/src/state.rs` around lines 334 - 338, The code is reading a 256-bit value with data.get_u256(0) but ignores the returned high bits and silently truncates amounts > u128::MAX; update the deserialization that builds the Coin (the snippet using let (_, amount) = data.get_u256(0); let fee = Coin { denom: fee_denom, amount: Uint256::from(amount), };) to inspect the returned high value and return the existing AmountTooHigh error (or equivalent) when high != 0 instead of proceeding, so only 128-bit or smaller amounts are accepted.go.mod-156-156 (1)
156-156: ⚠️ Potential issue | 🟠 Major

Security: upgrade `go-ethereum` to patch p2p vulnerabilities.

The indirect dependency `github.com/ethereum/go-ethereum v1.16.8` contains vulnerabilities affecting p2p messaging and the RLPx handshake:
- CVE-2026-26314: DoS via crafted p2p message
- CVE-2026-26315: ECIES public key validation issue in RLPx handshake
- CVE-2026-26313: High memory usage via crafted p2p message
Patched versions are available:
- v1.16.9 addresses CVE-2026-26314 and CVE-2026-26315
- v1.17.0+ addresses all three vulnerabilities
Update the indirect dependency by upgrading the package that pulls in go-ethereum, or if direct control is available, update directly to v1.16.9 or later.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@go.mod` at line 156: the go.mod currently pins the indirect module github.com/ethereum/go-ethereum at v1.16.8, which is vulnerable; update that dependency to a patched release (at minimum v1.16.9, or preferably v1.17.0 or later) by adjusting the go.mod entry or updating the upstream module that pulls it in (e.g., re-run dependency resolution to bump github.com/ethereum/go-ethereum from v1.16.8 to v1.16.9+ or v1.17.0+), then tidy/verify modules so go.sum reflects the new secure version.

testutil/oracle/price_feeder.go-68-72 (1)
68-72: ⚠️ Potential issue | 🟠 Major

Don't synthesize a `1.0` price for unknown denoms.

If a test forgets to call `SetPrice` or mistypes a denom, this helper still writes oracle state with a fabricated price. That can hide setup bugs and make oracle/BME tests pass against data that was never configured. Return an error here instead of defaulting silently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testutil/oracle/price_feeder.go` around lines 68-72: the FeedPrice method on PriceFeeder currently synthesizes a $1.00 price when a denom is missing; change FeedPrice (method FeedPrice, map pf.prices) to return a clear error instead of using sdkmath.LegacyOneDec() when the denom is not found (e.g., return fmt.Errorf("price not set for denom %s") or a package-level sentinel error), and update callers/tests to handle or set prices via SetPrice so missing-denom mistakes surface instead of being silently written to oracle state.

tests/e2e/oracle_grpc_test.go-156-189 (1)
156-189: ⚠️ Potential issue | 🟠 Major

Make the price-feed-config test assert behavior, not a default.

Line 160 hard-codes `Enabled == false`, but the REST path never checks the decoded value at all. That makes this test both brittle to legitimate genesis changes and too weak to catch a wrong or default REST response. Compare the CLI and REST payloads instead of pinning the suite to `false`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/oracle_grpc_test.go` around lines 156-189: the test currently hard-codes configResp.Enabled == false; instead remove that brittle assertion and assert the REST response matches the CLI/gRPC response captured earlier: after unmarshalling the initial gRPC QueryPriceFeedConfigResponse into configResp and after fetching/unmarshalling the REST payload into config (via sdktestutil.GetRequest and val.ClientCtx.Codec.UnmarshalJSON), compare the two objects (e.g., s.Require().Equal(configResp, config) or an equivalent deep-equality check) inside the loop when tc.expErr is false; keep the error-path assertions intact and reference the existing symbols configResp, config, sdktestutil.GetRequest and val.ClientCtx.Codec.UnmarshalJSON.

_run/common-commands.mk-6-10 (1)
6-10: ⚠️ Potential issue | 🟠 Major

Bring the market helper defaults in line with the new UACT bid flow.

The updated e2e flows in this PR create bids with `uact` values and an explicit deposit. These helpers still default to `10uakt`, reuse that value for deployment deposits, and `bid-create` never passes a deposit, so the canned commands drift from the behavior now under test.

🐛 Proposed fix

```diff
-PRICE ?= 10uakt
+BID_PRICE ?= 10uact
+DEPOSIT ?= 500000uact
 ...
 deployment-deposit:
-	$(AKASH) tx escrow deposit deployment "$(PRICE)" \
+	$(AKASH) tx escrow deposit deployment "$(DEPOSIT)" \
 		--dseq "$(DSEQ)" \
 		--from "$(KEY_NAME)"
 ...
 bid-create:
 	$(AKASH) tx market bid create \
 		--owner "$(KEY_ADDRESS)" \
 		--dseq "$(DSEQ)" \
 		--gseq "$(GSEQ)" \
 		--oseq "$(OSEQ)" \
 		--from "$(PROVIDER_KEY_NAME)" \
-		--price "$(PRICE)"
+		--price "$(BID_PRICE)" \
+		--deposit "$(DEPOSIT)"
```

Also applies to: 93-97, 136-145
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/common-commands.mk` around lines 6-10: the market helper defaults still use the old unit and implicit deposit behavior; update the PRICE variable from "10uakt" to "10uact", ensure any deployment-deposit defaults (the variables/uses around the deployment helpers later in the file) reuse that PRICE value or a dedicated DEPOSIT variable set to the same "10uact" amount, and modify the bid-create invocation to pass an explicit --deposit using that DEPOSIT/PRICE variable so the canned commands match the new uact bid flow; look for and update references to DSEQ, GSEQ, OSEQ, PRICE, CERT_HOSTNAME and the bid-create/deploy helper command blocks mentioned in the comment.

upgrades/software/v2.0.0/deployment.go-75-79 (1)
75-79: ⚠️ Potential issue | 🟠 Major

Abort the deployment scan when `GetGroups` fails.

This branch stores the failure in `gerr` but returns `false`, so later iterations can overwrite the original error and keep doing migration work after a keeper read error.

🐛 Proposed fix

```diff
 		var groups dvbeta.Groups
 		groups, gerr = dkeeper.GetGroups(sctx, d.ID)
 		if gerr != nil {
-			return false
+			up.log.Error("failed to load deployment groups", "deployment", d.ID, "error", gerr)
+			return true
 		}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/deployment.go` around lines 75-79: the code ignores and swallows the error from dkeeper.GetGroups by returning only false; change the branch that handles gerr (the error from GetGroups) to abort the deployment scan and propagate the error (e.g., return false, gerr or wrap and return that error) so the original GetGroups failure is not lost or overwritten by later iterations; update the return in the function containing this call (referencing GetGroups and gerr) to return both the boolean and the error per the function's signature.

_run/common-base.mk-24-27 (1)
24-27: ⚠️ Potential issue | 🟠 Major

`SKIP_BUILD=true` does not skip the build because the conditional never matches.

In GNU make, whitespace after the comma in `ifneq ($(SKIP_BUILD), true)` becomes part of the second argument. The conditional compares `true` against ` true` (with a leading space), which are never equal. As a result, the guard always evaluates to true and lines 25–26 execute regardless of the `SKIP_BUILD` setting.

🐛 Proposed fix

```diff
-ifneq ($(SKIP_BUILD), true)
+ifneq ($(strip $(SKIP_BUILD)),true)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/common-base.mk` around lines 24-27: the conditional ifneq ($(SKIP_BUILD), true) never matches because the literal space becomes part of the second argument; update the guard to compare correctly (e.g., ifneq ($(SKIP_BUILD),true) or better ifneq ($(strip $(SKIP_BUILD)),true)) so SKIP_BUILD properly skips the make -C $(AKASH_ROOT) akash and make -C $(AKASH_ROOT) docker-image lines; keep the rest of the block intact.

testutil/state/suite.go-42-49 (1)
42-49: ⚠️ Potential issue | 🟠 Major

Initialize `ms` in the constructed `TestSuite`.

`Store()` still returns `ts.ms`, but the suite constructor never assigns it. Any test that calls `Store()` gets a nil `CommitMultiStore`.

Also applies to: 237-243, 260-262
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testutil/state/suite.go` around lines 42-49: the TestSuite field ts.ms is never set in the suite constructor so Store() returns nil; update the constructor that builds TestSuite (where ts.app is created) to assign ts.ms from the app's commit multi-store (e.g. ts.ms = ts.app.CommitMultiStore() or ts.app.GetCommitMultiStore(), whichever method name exists in AkashApp) so that ts.Store() returns a valid store; apply the same fix to the other constructors/initializers that create TestSuite instances referenced around the other occurrences (ensure ts.ms is assigned immediately after ts.app is initialized).

make/setup-cache.mk-62-80 (1)
62-80: ⚠️ Potential issue | 🟠 Major

Cached wasmvm artifacts become stale when `WASMVM_VERSION` changes.

The download targets write to constant paths under `$(AKASH_DEVCACHE_LIB)` keyed only by filename (e.g., `libwasmvm.x86_64.so`). When `WASMVM_VERSION` changes in `go.mod`, the target path stays the same, and `make` treats the existing cache as up-to-date even though the URL has changed. The old version's artifacts will be reused until `cache-clean` is run manually.

The project already uses version-stamped marker files for other tools (e.g., `GIT_CHGLOG_VERSION_FILE`, `MOCKERY_VERSION_FILE`). The wasmvm cache should follow the same pattern.

♻️ One way to fix it

```diff
+WASMVM_VERSION_FILE := $(AKASH_DEVCACHE_VERSIONS)/wasmvm-$(WASMVM_VERSION)
+
+$(WASMVM_VERSION_FILE): $(AKASH_DEVCACHE)
+	rm -rf $(AKASH_DEVCACHE_LIB)
+	mkdir -p $(AKASH_DEVCACHE_LIB) "$(dir $@)"
+	touch $@
+
-$(AKASH_DEVCACHE_LIB)/%:
+$(AKASH_DEVCACHE_LIB)/%: $(WASMVM_VERSION_FILE)
 	wget -q --show-progress https://github.com/CosmWasm/wasmvm/releases/download/$(WASMVM_VERSION)/$* -O $@
 	@rm -f $(AKASH_DEVCACHE_LIB)/.wasmvm_verified

-$(AKASH_DEVCACHE_LIB)/wasmvm_checksums.txt:
+$(AKASH_DEVCACHE_LIB)/wasmvm_checksums.txt: $(WASMVM_VERSION_FILE)
 	wget -q --show-progress https://github.com/CosmWasm/wasmvm/releases/download/$(WASMVM_VERSION)/checksums.txt -O $@
 	@rm -f $(AKASH_DEVCACHE_LIB)/.wasmvm_verified
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@make/setup-cache.mk` around lines 62-80: the cached wasmvm artifacts are not versioned, so changing WASMVM_VERSION doesn't invalidate the cache; modify the Makefile targets that reference $(AKASH_DEVCACHE_LIB)/% and the .wasmvm_verified marker so they are tied to WASMVM_VERSION (for example by creating a WASMVM_VERSION_FILE or making the cache path include $(WASMVM_VERSION) and making $(AKASH_DEVCACHE_LIB)/% and $(AKASH_DEVCACHE_LIB)/.wasmvm_verified depend on that versioned marker); update the download rules that write files (the target pattern $(AKASH_DEVCACHE_LIB)/% and the wasmvm_checksums.txt rule) to output into the versioned location or touch a version-stamped file when downloads complete, and adjust wasmvm-libs and wasmvm-libs-verify to depend on that version-stamped marker so Make will re-download when WASMVM_VERSION changes.
🟡 Minor comments (6)
_docs/pyth-integration.md-23-23 (1)
23-23: ⚠️ Potential issue | 🟡 Minor

Fix grammar: missing punctuation.

The sentence is missing a comma or dash between "Pyth Network" and "a decentralized oracle network".

📝 Proposed fix

```diff
-Akash Network needs reliable price data (AKT/USD) for [BME](https://github.com/akash-network/AEP/tree/main/spec/aep-76). This integration brings prices from Pyth Network a decentralized oracle network — onto Akash in a cryptographically verifiable way.
+Akash Network needs reliable price data (AKT/USD) for [BME](https://github.com/akash-network/AEP/tree/main/spec/aep-76). This integration brings prices from Pyth Network, a decentralized oracle network, onto Akash in a cryptographically verifiable way.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` at line 23, Edit the sentence containing "Pyth Network a decentralized oracle network" so it reads with proper punctuation (e.g., "Pyth Network, a decentralized oracle network — onto Akash..." or "Pyth Network — a decentralized oracle network — onto Akash..."), preserving the rest of the sentence about AKT/USD and BME; update the phrase in the same paragraph to insert either a comma or dash between "Pyth Network" and "a decentralized oracle network" for correct grammar._docs/pyth-integration.md-819-820 (1)
819-820: ⚠️ Potential issue | 🟡 Minor

Clarify guardian set query expectations.

This query for `guardian_set_info` may be misleading. The documentation clearly states (line 628) that "Guardian addresses are managed via x/oracle module params, not stored in the Wormhole contract." Users might expect this query to return guardian addresses when it likely returns minimal or no guardian data. Consider adding a note explaining what this query actually returns, or updating the example to query x/oracle params instead.

📋 Suggested clarification

```diff
 # Wormhole - Get guardian set info
 akash query wasm contract-state smart <wormhole-contract> \
   '{"guardian_set_info":{}}'
+# Note: Guardian addresses are stored in x/oracle params, not in the contract.
+# To query guardian addresses, use: akash query oracle params
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 819 - 820, The example calling the Wormhole contract state with the key "guardian_set_info" is misleading because guardian addresses are managed in the x/oracle module params, not in the Wormhole contract; update the docs so the command akash query wasm contract-state smart <wormhole-contract> '{"guardian_set_info":{}}' is accompanied by a note that it returns minimal/contract-local guardian metadata (or no addresses) and add an alternative example that queries the x/oracle params (describe querying x/oracle params) so users know where to fetch actual guardian addresses._docs/pyth-integration.md-532-535 (1)
532-535: ⚠️ Potential issue | 🟡 Minor

Fix inconsistent filename reference.

Lines 532 and 535 reference `docker-compose.yaml`, but line 491 in the Quick Start section uses `docker-compose.local.yml`. This inconsistency will confuse users and cause command failures.

🔧 Proposed fix

```diff
 # Stop services
-docker-compose -f docker-compose.yaml down
+docker-compose -f docker-compose.local.yml down

 # Stop and remove all data (full reset)
-docker-compose -f docker-compose.yaml down -v
+docker-compose -f docker-compose.local.yml down -v
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 532 - 535, Update the inconsistent compose filename references so commands match the Quick Start: replace the two occurrences of "docker-compose.yaml" in the stop/remove examples with "docker-compose.local.yml" (so they align with the existing Quick Start usage of "docker-compose.local.yml").tests/e2e/pyth_contract_test.go-656-660 (1)
656-660: ⚠️ Potential issue | 🟡 Minor

Don't let `TestAllContractsExist` pass when artifacts are missing.

Right now lines 658-660 only log and continue, so this test succeeds even when a required contract artifact is absent. That makes the check ineffective.

🧪 Proposed fix

```diff
 	for _, c := range contracts {
 		wasmPath := findWasmPath(c.dir, c.wasmFile)
-		if wasmPath == "" {
-			s.T().Logf("WARN: %s contract not found at expected paths", c.name)
-			continue
-		}
+		s.Require().NotEmpty(wasmPath, "%s contract not found at expected paths", c.name)

 		info, err := os.Stat(wasmPath)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/pyth_contract_test.go` around lines 656 - 660, The loop in TestAllContractsExist currently only logs and continues when findWasmPath returns an empty string, letting the test pass despite missing artifacts; update the block that checks wasmPath (inside the for _, c := range contracts loop) to fail the test immediately when wasmPath == "" by calling s.T().Fatalf or s.T().FailNow with a clear message including c.name and expected c.wasmFile, so missing contract artifacts cause the test to fail rather than only being logged.contracts/wormhole/testing/integration.rs-85-86 (1)
85-86: ⚠️ Potential issue | 🟡 Minor

Use `time.seconds()` for `VerifyVAA.block_time`.

`block_time` is compared against guardian-set expiration timestamps, but these queries pass block height. The parsing assertions still run, but the time-based validity checks are not being exercised with the right units.

Also applies to: 149-150, 207-208, 269-270, 296-297, 315-316, 334-335, 355-356, 385-386
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/testing/integration.rs` around lines 85 - 86, The VerifyVAA struct is being populated with block_time using app.block_info().height which supplies block height instead of seconds; replace uses of app.block_info().height with app.block_info().time.seconds() wherever VerifyVAA.block_time is set (e.g., the instances that currently set block_time: app.block_info().height). Update all occurrences listed in the review (the other spots that set VerifyVAA.block_time) so the field receives seconds since epoch via app.block_info().time.seconds() to ensure time-based guardian-set expiration checks use the correct units..claude/skills/vanity/SKILL.md-7-7 (1)
7-7: ⚠️ Potential issue | 🟡 Minor

Fix typo in repository URL.

The URL contains an extra `com/` segment.

🛠️ Suggested fix

```diff
-Register the current node major version as a vanity URL in the `github.com/com/akash-network/vanity` repo.
+Register the current node major version as a vanity URL in the `github.com/akash-network/vanity` repo.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.claude/skills/vanity/SKILL.md at line 7, The repo URL text contains a stray "com/" segment; update the sentence that reads "Register the current node major version as a vanity URL in the `github.com/com/akash-network/vanity` repo." to use the correct repository path `github.com/akash-network/vanity` (remove the extra "com/") so the vanity repo URL is accurate in SKILL.md.
```make
.PHONY: get-manifest
get-manifest:
	$(PROVIDER_SERVICES) get-manifest \
		--dseq "$(DSEQ)" \
		--from "$(KEY_NAME)" \
		--provide "$(PROVIDER_ADDRESS)" \
		--auth-type "$(GW_AUTH_TYPE)"
```
Fix the get-manifest provider flag.

`--provide` looks like a typo here, so this target will not pass the provider address under the expected flag name.

🐛 Proposed fix

```diff
 	$(PROVIDER_SERVICES) get-manifest \
 		--dseq "$(DSEQ)" \
 		--from "$(KEY_NAME)" \
-		--provide "$(PROVIDER_ADDRESS)" \
+		--provider "$(PROVIDER_ADDRESS)" \
 		--auth-type "$(GW_AUTH_TYPE)"
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```make
.PHONY: get-manifest
get-manifest:
	$(PROVIDER_SERVICES) get-manifest \
		--dseq "$(DSEQ)" \
		--from "$(KEY_NAME)" \
		--provider "$(PROVIDER_ADDRESS)" \
		--auth-type "$(GW_AUTH_TYPE)"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@_run/common-commands.mk` around lines 78 - 84, The get-manifest Makefile
target is using a typo'd flag "--provide" so the provider address isn't passed
correctly; update the get-manifest target (and the relevant invocation using
PROVIDER_SERVICES and PROVIDER_ADDRESS) to use the correct flag name (replace
"--provide \"$(PROVIDER_ADDRESS)\"" with the expected provider flag, e.g.,
"--provider \"$(PROVIDER_ADDRESS)\"") so the provider address is sent under the
correct option.
```sh
    local url="${PYTH_API}?ids[]=${AKT_PYTH_FEED_ID}"
    local response

    response=$(curl -s --max-time 10 "$url" 2>/dev/null)
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
url='https://example.invalid/api/latest_price_feeds?ids[]=deadbeef'
set +e
out=$(curl -sS --max-time 1 "$url" -o /dev/null 2>&1)
rc=$?
set -e
printf 'without --globoff: exit=%s\n%s\n' "$rc" "$out"
set +e
out=$(curl --globoff -sS --max-time 1 "$url" -o /dev/null 2>&1)
rc=$?
set -e
printf '\nwith --globoff: exit=%s\n%s\n' "$rc" "$out"

Repository: akash-network/node
Length of output: 211
🏁 Script executed:
#!/bin/bash
# First, examine the actual file and context
cat -n _run/node/price-feeder.sh | sed -n '85,100p'

Repository: akash-network/node
Length of output: 516
🏁 Script executed:
#!/bin/bash
# Test curl globbing behavior with a simpler approach
# Using -v to see what URL curl is trying to access
set +e
# Test 1: URL with brackets - what does curl try to do?
echo "Test 1: curl verbose output with brackets (no globoff)"
curl -v --max-time 1 'http://localhost:9999/?ids[]=test' 2>&1 | grep -E "(Host:|GET|Trying)" || echo "Connection attempt..."
echo ""
echo "Test 2: curl verbose output with brackets (with globoff)"
curl --globoff -v --max-time 1 'http://localhost:9999/?ids[]=test' 2>&1 | grep -E "(Host:|GET|Trying)" || echo "Connection attempt..."
set -e

Repository: akash-network/node
Length of output: 440
🌐 Web query:
curl URL globbing query parameters brackets behavior
💡 Result:
curl has URL globbing enabled by default: it treats [] and {} in the URL as special syntax to generate multiple URLs (ranges/lists). So if your query string contains literal brackets (common in APIs, e.g. filter[status]=open or ids[]=1), curl may try to “expand” them and fail with errors like curl: (3) [globbing] bad range specification. [1][2]
How to send literal brackets:

- Disable curl's globbing (recommended):
  `curl -g 'https://example.com/search?filter[status]=open&ids[]=1'`
  (`-g`/`--globoff` switches off the URL globbing parser.) [2]

- Or escape the brackets (still allows globbing elsewhere):
  `curl 'https://example.com/search?filter\[status\]=open&ids\[\]=1'`
  (Escaping prevents curl from treating them as glob patterns.) [3]

- Or percent-encode the brackets: `[` → `%5B`, `]` → `%5D`
  `curl 'https://example.com/search?filter%5Bstatus%5D=open&ids%5B%5D=1'`
Note: quoting the URL ('...' or "...") prevents your shell from globbing, but it does not disable curl’s own URL globbing—use -g (or escapes/encoding) for that. [1][2]
Sources:
[1] everything-curl: URL globbing docs
[2] curl man page (-g, --globoff)
[3] Stack Overflow: passing URLs with brackets to curl
🏁 Script executed:
#!/bin/bash
# Check if there are any test results, logs, or error patterns related to this issue
rg -i "glob|curl.*exit.*3|ids\[\]" --type sh
rg -i "pyth.*price.*error|fetch_pyth" --type sh
git log --oneline --all -S "fetch_pyth_price" 2>/dev/null | head -20

Repository: akash-network/node
Length of output: 148
🏁 Script executed:
#!/bin/bash
# Check PYTH_API construction and actual usage
grep -n "PYTH_API\|AKT_PYTH_FEED_ID" _run/node/price-feeder.sh | head -20

Repository: akash-network/node
Length of output: 267
Disable curl URL globbing for the `ids[]=` query parameter.

curl treats square brackets as glob syntax by default. The `ids[]=` parameter in the URL will cause curl to attempt URL expansion, which can fail locally with exit code 3 before the request ever reaches the network, preventing price fetches entirely.
Fix

```diff
-    response=$(curl -s --max-time 10 "$url" 2>/dev/null)
+    response=$(curl --globoff -s --max-time 10 "$url" 2>/dev/null)
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```sh
    local url="${PYTH_API}?ids[]=${AKT_PYTH_FEED_ID}"
    local response

    response=$(curl --globoff -s --max-time 10 "$url" 2>/dev/null)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@_run/node/price-feeder.sh` around lines 90 - 93, The curl call that fetches
the Pyth price uses a URL containing the literal query key ids[] which triggers
curl's globbing and can cause exit code 3; update the curl invocation in the
price fetch logic (the local url and response usage around the curl call) to
disable globbing (e.g., add curl's --globoff flag) so the square brackets are
treated literally and the request always reaches the network.
```rust
        let version = data.get_u8(0);

        // Load 4 bytes starting from index 1
        let guardian_set_index: u32 = data.get_u32(Self::GUARDIAN_SET_INDEX_POS);
        let len_signers = data.get_u8(Self::LEN_SIGNER_POS) as usize;
```
🧩 Analysis chain
🏁 Script executed:
wc -l contracts/wormhole/src/state.rs

Repository: akash-network/node
Length of output: 97
🏁 Script executed:
# Get context around the flagged lines
sed -n '125,140p' contracts/wormhole/src/state.rs
sed -n '258,270p' contracts/wormhole/src/state.rs
sed -n '285,295p' contracts/wormhole/src/state.rs
sed -n '295,305p' contracts/wormhole/src/state.rs
sed -n '330,340p' contracts/wormhole/src/state.rs
sed -n '347,358p' contracts/wormhole/src/state.rs

Repository: akash-network/node
Length of output: 2304
🏁 Script executed:
# Find ByteUtils definition
fd -t f 'byte_utils\|ByteUtils' contracts/wormhole/src/
rg -t rs 'trait ByteUtils|impl.*ByteUtils' --max-count 20

Repository: akash-network/node
Length of output: 88
🏁 Script executed:
rg 'trait ByteUtils|impl.*ByteUtils' --type rust

Repository: akash-network/node
Length of output: 315
🏁 Script executed:
# Find where ByteUtils is defined
rg 'get_u8|get_u32|get_bytes32' --type rust -l | head -5

Repository: akash-network/node
Length of output: 240
🏁 Script executed:
# Look at the imports in state.rs
head -30 contracts/wormhole/src/state.rs

Repository: akash-network/node
Length of output: 1349
🏁 Script executed:
# Check if ByteUtils panics on out of bounds
rg -A 10 'fn get_u8|fn get_u32|fn get_bytes32|fn get_u256|fn get_address' --type rust | head -100

Repository: akash-network/node
Length of output: 7352
🏁 Script executed:
# Get the complete ByteUtils implementation
cat -n contracts/wormhole/src/byte_utils.rs

Repository: akash-network/node
Length of output: 5523
🏁 Script executed:
# Verify exact lines in state.rs for the flagged deserializers
sed -n '125,145p' contracts/wormhole/src/state.rs | cat -n
sed -n '258,270p' contracts/wormhole/src/state.rs | cat -n
sed -n '285,305p' contracts/wormhole/src/state.rs | cat -n
sed -n '330,340p' contracts/wormhole/src/state.rs | cat -n
sed -n '347,360p' contracts/wormhole/src/state.rs | cat -n

Repository: akash-network/node
Length of output: 2976
Validate payload length before fixed-offset reads.

These deserializers call `ByteUtils` methods without upfront `data.len()` guards. `ByteUtils` operations use slice indexing that panics on out-of-bounds access instead of returning `InvalidVAA`. A short VAA or governance payload will cause a panic.
Affected locations:
- ParsedVAA (lines 130–132): reads version, guardian_set_index, len_signers before length check on line 134
- GovernancePacket (lines 262–264): reads module, action, chain without any length check
- ContractUpgrade (line 289): reads new_contract at offset 24
- GuardianSetUpgrade (lines 298–299): reads guardian_set_index and n_guardians
- SetFee (line 334): reads amount at offset 0
- TransferFee (lines 351–353): reads recipient and amount
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/wormhole/src/state.rs` around lines 130 - 134, Ensure fixed-offset
reads validate the payload length before calling ByteUtils methods: add upfront
data.len() checks in ParsedVAA (before reading version, guardian_set_index,
len_signers), GovernancePacket (before reading module, action, chain),
ContractUpgrade (before reading new_contract at offset 24), GuardianSetUpgrade
(before reading guardian_set_index and n_guardians), SetFee (before reading
amount at offset 0), and TransferFee (before reading recipient and amount). For
each function/method, compute the minimum required length for the specific fixed
offsets, return the existing InvalidVAA error variant when the buffer is too
short, and only then call data.get_u8/get_u32/get_u64 or similar ByteUtils
methods to avoid panics.
```rust
use crate::{
    contract::instantiate,
    msg::QueryMsg,
    state::{ConfigInfo, GuardianAddress, ParsedVAA, CONFIG_KEY},
};
```
`CONFIG_KEY` no longer matches the current state API.

`contracts/wormhole/src/state.rs` defines `CONFIG: Item<ConfigInfo>`, not `CONFIG_KEY`, so this helper will not compile on the current branch. Load the item directly instead of reconstructing the raw storage key.
🛠️ Suggested fix

```diff
 use crate::{
     contract::instantiate,
     msg::QueryMsg,
-    state::{ConfigInfo, GuardianAddress, ParsedVAA, CONFIG_KEY},
+    state::{ConfigInfo, GuardianAddress, ParsedVAA, CONFIG},
 };
@@
 fn get_config_info<S: Storage>(storage: &S) -> ConfigInfo {
-    let key = to_length_prefixed(CONFIG_KEY);
-    let data = storage.get(&key).expect("data should exist");
-    from_slice(&data).expect("invalid data")
+    CONFIG.load(storage).expect("data should exist")
 }
```

Also applies to: 29-32
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts/wormhole/testing/integration.rs` around lines 6 - 10, The test is
referencing CONFIG_KEY which no longer exists; instead import and use the CONFIG
Item defined in contracts/wormhole/src/state.rs (CONFIG: Item<ConfigInfo>) and
load it directly (e.g., call CONFIG.load on the contract storage) wherever
CONFIG_KEY was used (including the occurrences around lines 29-32 and the import
list), replacing raw key reconstruction with CONFIG.load(&deps.storage) or
equivalent access via the CONFIG Item.
```go
	resp, err := msgServer.StoreCode(ctx, &wasmtypes.MsgStoreCode{
		Sender:                sender,
		WASMByteCode:          wasmBz,
		InstantiatePermission: &wasmtypes.AllowNobody,
	})
```
🧩 Analysis chain
🌐 Web query:
In github.com/CosmWasm/wasmd v0.61.7, does MsgInstantiateContract succeed for a code ID stored with MsgStoreCode.InstantiatePermission = AllowNobody?
💡 Result:
No. In wasmd v0.61.7, MsgInstantiateContract calls keeper.instantiate(...), which rejects the call unless authPolicy.CanInstantiateContract(codeInfo.InstantiateConfig, creator) is true; otherwise it returns an ErrUnauthorized (“can not instantiate”). [1]
With the default policy, CanInstantiateContract is implemented as return config.Allowed(actor). [2] For a stored code whose InstantiatePermission is AllowNobody (i.e., AccessTypeNobody, “forbidden”), the access config is not allowed for regular senders, so MsgInstantiateContract does not succeed. [3]
Sources: [1] https://github.com/CosmWasm/wasmd/blob/v0.61.7/x/wasm/keeper/keeper.go [2] https://github.com/CosmWasm/wasmd/blob/v0.61.7/x/wasm/keeper/authz_policy.go [3] https://github.com/CosmWasm/wasmd/blob/v0.61.7/x/wasm/types/types.pb.go
🏁 Script executed:
# Check the file to understand the context around lines 221-225
cd /tmp && git ls-files "upgrades/software/v2.0.0/wasm.go" && wc -l upgrades/software/v2.0.0/wasm.go 2>/dev/null || echo "File not found, searching..."

Repository: akash-network/node
Length of output: 176
🏁 Script executed:
# Search for the wasm.go file
find . -name "wasm.go" -path "*/upgrades/software/v2.0.0/*" 2>/dev/nullRepository: akash-network/node
Length of output: 96
🏁 Script executed:
# List the directory to understand the structure
ls -la upgrades/software/v2.0.0/ 2>/dev/null || find . -path "*/upgrades/software/v2.0.0*" -type d 2>/dev/nullRepository: akash-network/node
Length of output: 410
🏁 Script executed:
# Read the wasm.go file to see the context
wc -l upgrades/software/v2.0.0/wasm.go && echo "---" && cat -n upgrades/software/v2.0.0/wasm.goRepository: akash-network/node
Length of output: 9001
🏁 Script executed:
# Check the go.mod file for wasmd version
cat go.mod | grep -i wasmdRepository: akash-network/node
Length of output: 96
AllowNobody blocks instantiation immediately after storage.
The code stores contracts with InstantiatePermission = AllowNobody (line 224), then calls msgServer.InstantiateContract on both the Wormhole contract (line 151) and Pyth contract (line 194). In wasmd v0.61.7 (used by this repository), MsgInstantiateContract fails with ErrUnauthorized when the code ID has an AllowNobody access config, regardless of the sender. The upgrade will halt on the first contract deployment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@upgrades/software/v2.0.0/wasm.go` around lines 221 - 225, The stored code is
using InstantiatePermission = &wasmtypes.AllowNobody which prevents subsequent
msgServer.InstantiateContract calls from succeeding; update the MsgStoreCode
invocation to use an instantiation permission that allows immediate
instantiation (e.g., replace wasmtypes.AllowNobody with wasmtypes.AllowEverybody
or nil) so that the later msgServer.InstantiateContract calls for the Wormhole
and Pyth contracts succeed; update the InstantiatePermission field in the
MsgStoreCode call and keep the related msgServer.InstantiateContract usages
unchanged.
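The web query result above describes why this fails: wasmd's default policy checks `config.Allowed(actor)` before instantiation. A minimal, self-contained Go sketch of that policy follows; the type and function names here are simplified stand-ins for illustration, not the real wasmd API.

```go
package main

import "fmt"

// AccessType mirrors the wasmd instantiate-permission variants relevant
// here; this is a simplified stand-in, not the actual wasmd type.
type AccessType int

const (
	AccessTypeNobody AccessType = iota
	AccessTypeEverybody
	AccessTypeAnyOfAddresses
)

// AccessConfig pairs an access type with an optional address allow-list.
type AccessConfig struct {
	Permission AccessType
	Addresses  []string
}

// Allowed models the default policy's config.Allowed(actor): Nobody always
// denies, Everybody always permits, AnyOfAddresses checks the allow-list.
func (c AccessConfig) Allowed(actor string) bool {
	switch c.Permission {
	case AccessTypeEverybody:
		return true
	case AccessTypeAnyOfAddresses:
		for _, addr := range c.Addresses {
			if addr == actor {
				return true
			}
		}
	}
	return false
}

// instantiate models the authorization gate inside keeper.instantiate.
func instantiate(cfg AccessConfig, sender string) error {
	if !cfg.Allowed(sender) {
		return fmt.Errorf("can not instantiate: unauthorized")
	}
	return nil
}

func main() {
	gov := "gov-module-address"
	// AllowNobody: instantiation fails for every sender, including gov.
	fmt.Println(instantiate(AccessConfig{Permission: AccessTypeNobody}, gov))
	// AnyOfAddresses(gov): the upgrade handler's sender is permitted.
	fmt.Println(instantiate(AccessConfig{Permission: AccessTypeAnyOfAddresses, Addresses: []string{gov}}, gov))
}
```

Under this model, storing code with `AccessTypeNobody` makes every subsequent instantiate call fail, which is exactly the halt described in the finding.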
Actionable comments posted: 3
Note
Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
tests/e2e/deployment_cli_test.go (1)
391-394: ⚠️ Potential issue | 🟡 Minor

Hardcoded `DSeq` in shared blockchain state may cause test flakiness.

The deployment ID uses a hardcoded `DSeq: uint64(105)`. Since this test shares blockchain state with `TestDeployment` and `TestGroup` (which both create deployments using the same deployer address), and `TestGroup` leaves its deployment open rather than closing it, there is a risk of collision if auto-assigned DSeq values happen to match 105. Consider querying for the next available DSeq or deriving a unique value from test context to improve test robustness.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/deployment_cli_test.go` around lines 391 - 394, The test hardcodes dv1.DeploymentID{Owner: s.addrDeployer.String(), DSeq: uint64(105)} which can collide with other tests; change the code to compute a unique DSeq instead of using 105 by querying the chain or deriving from test context (e.g., call a helper that fetches the next available DSeq for s.addrDeployer or read the current highest DSeq and add 1, or generate a value from the current timestamp/unique test run ID) and assign that computed value to deploymentID.DSeq so the deployment in this test cannot collide with TestDeployment/TestGroup.

testutil/state/suite.go (1)
259-262: ⚠️ Potential issue | 🟡 Minor

`Store()` returns uninitialized `ts.ms`.

The `ms` field in `TestSuite` is never assigned during `SetupTestSuiteWithKeepers`, so `Store()` always returns `nil`. Either remove this method if unused, or initialize `ms` from `app.CommitMultiStore()`.

🐛 Proposed fix (if Store() is needed)

```diff
 return &TestSuite{
 	t:           t,
 	app:         app,
 	ctx:         ctx,
+	ms:          app.CommitMultiStore(),
 	keepers:     keepers,
 	priceFeeder: priceFeeder,
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testutil/state/suite.go` around lines 259 - 262, The Store() accessor returns an uninitialized TestSuite.ms because SetupTestSuiteWithKeepers never assigns ts.ms; either remove Store() if unused, or initialize the field in SetupTestSuiteWithKeepers by setting ts.ms = app.CommitMultiStore() (or the appropriate CommitMultiStore value from your app instance) so TestSuite.Store() returns a valid store.

app/upgrades.go (1)
36-38: ⚠️ Potential issue | 🟡 Minor

Fix typo in error message.

"unitialize" should be "initialize".
📝 Proposed fix
```diff
 upgrade, err := fn(app.Logger(), app.App)
 if err != nil {
-	return fmt.Errorf("unable to unitialize upgrade `%s`: %w", name, err)
+	return fmt.Errorf("unable to initialize upgrade `%s`: %w", name, err)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/upgrades.go` around lines 36 - 38, Fix the typo in the error message string used in the fmt.Errorf call (in app/upgrades.go) that currently reads "unable to unitialize upgrade `%s`: %w"; change "unitialize" to "initialize" so the message becomes "unable to initialize upgrade `%s`: %w" where the fmt.Errorf call uses the variables name and err.
♻️ Duplicate comments (13)
contracts/wormhole-old/src/querier.rs (1)
8-23: ⚠️ Potential issue | 🔴 Critical

Make guardian-set queries index-aware.

This still exposes only an anonymous current set. After a rotation, VAA verification cannot bind to `vaa.guardian_set_index`, so valid VAAs from the previous set become unverifiable and mismatched indices are never rejected. Please return the set index here, and ideally allow lookup by index/history, instead of forcing downstream code to hardcode `0`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole-old/src/querier.rs` around lines 8 - 23, The GuardianSet query currently exposes only the anonymous current set; update AkashQuery and GuardianSetResponse so callers can request and receive a specific guardian set index: change the AkashQuery::GuardianSet variant to accept an optional index parameter (e.g., GuardianSet { index: Option<u32> } or a concrete u32) to allow lookup by index/history, and add a guardian_set_index field to GuardianSetResponse (e.g., guardian_set_index: u32 or u64) so the response returns the set index; update any deserialization/CustomQuery handling for AkashQuery::GuardianSet and any code that constructs/parses GuardianSetResponse accordingly.

contracts/pyth/src/contract.rs (1)
184-194: ⚠️ Potential issue | 🔴 Critical

Pin the accumulator emitter address too, not just chain `26`.

Any Wormhole-signed VAA from another emitter on Pythnet can currently supply its own Merkle root and make the proof check pass. The PNAU branch needs to validate the known accumulator emitter address in addition to `emitter_chain == 26`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs` around lines 184 - 194, The PNAU branch currently only checks verified_vaa.emitter_chain == 26 but must also validate the emitter address; update the logic where price_message_data.is_some() and verified_vaa.emitter_chain != 26 is checked to also compare verified_vaa.emitter_address against the known Pythnet accumulator emitter (add a constant like PYTHNET_ACCUMULATOR_EMITTER as the expected byte array or hex and compare byte-wise), and when it does not match return the same ContractError::InvalidDataSource (encoding the provided emitter_address with hex::encode as done now); ensure the new constant is used in the same scope as the check (or imported) so the PNAU path enforces both chain==26 and emitter_address==PYTHNET_ACCUMULATOR_EMITTER.

cmd/akash/cmd/genesis.go (1)
235-236: ⚠️ Potential issue | 🔴 Critical

Use the exported AKT exponent constant.

`pkg.akt.dev/go/sdkutil` documents `DenomUaktExponent`, not `DenomUExponent`, so this rename looks like a compile break unless the dependency was changed in lockstep. Please switch this back or verify that the external package now exports the new symbol. (pkg.go.dev)
Verify each finding against the current code and only fix it if needed. In `@cmd/akash/cmd/genesis.go` around lines 235 - 236, The code uses sdkutil.DenomUExponent which likely doesn't exist upstream; replace sdkutil.DenomUExponent with the exported constant sdkutil.DenomUaktExponent where the Exponent field is being set (e.g. in the genesis construction logic that assigns Exponent: sdkutil.DenomUExponent), or if the external dependency was intentionally changed, update imports/dependency and verify the new symbol exists; ensure all references to DenomUExponent are renamed to DenomUaktExponent (or vice versa only after confirming the external package change).

app/app.go (1)
154-160: ⚠️ Potential issue | 🔴 Critical

`ContractDebugMode` is still enabled by default.

This remains hardcoded to `true`, so every node boots with production-unsafe contract debugging enabled.

🔒 Suggested fix

```diff
 // Debug mode - MUST be false in production
-// Uncomment this for debugging contracts. In the future this could be made into a param passed by the tests
-wasmConfig.ContractDebugMode = true
+// Opt in from tests or a dedicated dev flag only.
+wasmConfig.ContractDebugMode = false
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@app/app.go` around lines 154 - 160, wasmConfig.ContractDebugMode is hardcoded true which enables production-unsafe debugging; change this default to false and make it configurable (e.g., via a CLI flag or environment variable) so nodes boot with ContractDebugMode=false unless explicitly enabled. Update the initialization code that sets wasmConfig.ContractDebugMode to read a config/flag (or default to false) and ensure any tests that relied on true are updated to set the flag/env when needed; reference the wasmConfig.ContractDebugMode assignment in app.go and the app startup/config parsing logic to implement this change.

contracts/wormhole/testing/integration.rs (1)
6-10: ⚠️ Potential issue | 🔴 Critical

`CONFIG_KEY` is still the wrong state access path here.

This helper is still reconstructing the removed raw-key layout. On the current state API that will not compile, and it stays brittle even if a raw key reappears.

🛠 Suggested fix

```diff
 use crate::{
     contract::instantiate,
     msg::QueryMsg,
-    state::{ConfigInfo, GuardianAddress, ParsedVAA, CONFIG_KEY},
+    state::{ConfigInfo, GuardianAddress, ParsedVAA, CONFIG},
 };
@@
 fn get_config_info<S: Storage>(storage: &S) -> ConfigInfo {
-    let key = to_length_prefixed(CONFIG_KEY);
-    let data = storage.get(&key).expect("data should exist");
-    from_slice(&data).expect("invalid data")
+    CONFIG.load(storage).expect("data should exist")
 }
```

That also lets you drop the now-unused `from_slice` and `to_length_prefixed` imports.

Also applies to: 29-32
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/testing/integration.rs` around lines 6 - 10, The import of CONFIG_KEY is incorrect — stop reconstructing the raw-key layout and switch to the contract's state accessor instead: remove CONFIG_KEY from the state import and any now-unused from_slice/to_length_prefixed imports, then replace any use of CONFIG_KEY-based lookups with the contract state API (e.g., use CONFIG.load(storage) / CONFIG.save(storage, &ConfigInfo) or the module-level CONFIG accessor alongside the ConfigInfo type and GuardianAddress/ParsedVAA as needed). Ensure all code paths that previously referenced CONFIG_KEY now call the proper CONFIG methods so the code compiles and no raw-key logic remains.

_run/init.sh (2)
363-365: ⚠️ Potential issue | 🔴 Critical

Remove forced SIGINT that shuts the node down immediately.

Line 363 kills the background node right before `wait`, so the script does not actually keep the node running. The node is terminated immediately after contract deployment completes.

🐛 Proposed fix

```diff
 log "=== Initialization Complete ==="
 log "Node is running. Hermes can now connect."
-kill -SIGINT $NODE_PID

 # Keep the script running (wait for node process)
 wait $NODE_PID
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/init.sh` around lines 363 - 365, The script forcibly sends SIGINT to the background node via the kill -SIGINT $NODE_PID call which immediately terminates it instead of letting it run; remove the kill -SIGINT $NODE_PID invocation and rely on wait $NODE_PID (and the existing NODE_PID variable) so the script truly waits for the node process to exit naturally after deployment.
5-23: ⚠️ Potential issue | 🔴 Critical

Fix env-var guards for `set -u` behavior.

With `set -euo pipefail` active, lines 5, 10, 15, and 20 will fail with an "unbound variable" error before your custom error message is printed if any variable is unset. Use `${VAR:-}` syntax to safely expand variables under `set -u`.

🐛 Proposed fix

```diff
-if [[ -z "$AKASH_HOME" ]]; then
+if [[ -z "${AKASH_HOME:-}" ]]; then
   echo "AKASH_HOME is not set"
   exit 1
 fi

-if [[ -z "$KEYS" ]]; then
+if [[ -z "${KEYS:-}" ]]; then
   echo "KEYS is not set"
   exit 1
 fi

-if [[ -z "$MNEMONIC" ]]; then
+if [[ -z "${MNEMONIC:-}" ]]; then
   echo "MNEMONIC is not set"
   exit 1
 fi

-if [[ -z "$CONTRACTS_DIR" ]]; then
+if [[ -z "${CONTRACTS_DIR:-}" ]]; then
   echo "CONTRACTS_DIR is not set"
   exit 1
 fi
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/init.sh` around lines 5 - 23, The current if-checks for AKASH_HOME, KEYS, MNEMONIC, and CONTRACTS_DIR will trigger an "unbound variable" error under set -u; update each conditional that checks these environment variables (the tests referencing AKASH_HOME, KEYS, MNEMONIC, CONTRACTS_DIR) to use safe parameter expansion/default-empty expansion so the checks work even when variables are unset and still print your custom error message and exit.

tests/e2e/market_grpc_test.go (1)
17-19: ⚠️ Potential issue | 🔴 Critical

Remove the duplicate `market/v1beta5` import.

Go will not compile with the same import path listed twice (lines 18-19). Both `v1beta5` and `mvbeta` point to `"pkg.akt.dev/go/node/market/v1beta5"`. Keep one alias and update all references consistently.

🐛 Proposed fix - keep mvbeta alias

```diff
 v1 "pkg.akt.dev/go/node/market/v1"
-"pkg.akt.dev/go/node/market/v1beta5"
 mvbeta "pkg.akt.dev/go/node/market/v1beta5"
```

Then update all `v1beta5.` references to `mvbeta.` throughout the file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/market_grpc_test.go` around lines 17 - 19, The file imports the same package twice as v1beta5 and mvbeta which causes a compile error; remove the duplicate import so only one alias remains (prefer keeping mvbeta), then rename all references from v1beta5. to mvbeta. throughout the file (e.g., update any usages of v1beta5.SomeType or v1beta5.SomeFunc to mvbeta.SomeType/mvbeta.SomeFunc) to keep the import and usages consistent.

upgrades/software/v2.0.0/wasm.go (1)
221-225: ⚠️ Potential issue | 🔴 Critical

`AllowNobody` blocks instantiation immediately after storage.

The code stores contracts with `InstantiatePermission = &wasmtypes.AllowNobody` (line 224), then calls `msgServer.InstantiateContract` on both Wormhole (line 151) and Pyth (line 194). In wasmd, `MsgInstantiateContract` fails with `ErrUnauthorized` when the code has an `AllowNobody` access config, regardless of the sender. This will halt the upgrade on the first contract deployment.

Consider using a permission that allows the governance address to instantiate:

🐛 Proposed fix

```diff
 resp, err := msgServer.StoreCode(ctx, &wasmtypes.MsgStoreCode{
 	Sender:       sender,
 	WASMByteCode: wasmBz,
-	InstantiatePermission: &wasmtypes.AllowNobody,
+	InstantiatePermission: wasmtypes.AccessConfigAnyOfAddresses(sender),
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/wasm.go` around lines 221 - 225, The stored contracts use InstantiatePermission = &wasmtypes.AllowNobody in the MsgStoreCode call (StoreCode/MsgStoreCode), which prevents subsequent calls to InstantiateContract/MsgInstantiateContract and causes ErrUnauthorized; update the InstantiatePermission to a config that permits the governance address to instantiate (e.g. set an access config allowing sdk.AccAddress(governanceAddr) or the appropriate wasmtypes.AllowOnlyAddress/AccessConfig granting the upgrade module/governance address instantiation rights) so that msgServer.InstantiateContract calls for Wormhole and Pyth succeed; ensure the change is applied where InstantiatePermission is set and that the governance address used matches the one used for instantiation.

upgrades/software/v2.0.0/deployment.go (3)
204-224: ⚠️ Potential issue | 🔴 Critical

Critical: Error handlers return `nil` instead of the actual error.

All four error checks in the mint/burn flow silently swallow errors by returning `nil`. This could leave the chain in an inconsistent state where coins are moved but not burned, burned but not minted, or minted but stuck in the wrong module. The upgrade would appear successful while token accounting is broken.

🐛 Proposed fix

```diff
 if mintCoin.IsGT(sdk.NewCoin(sdkutil.DenomUact, sdkmath.ZeroInt())) {
 	err = up.Keepers.Cosmos.Bank.SendCoinsFromModuleToModule(ctx, emodule.ModuleName, bmemodule.ModuleName, sdk.Coins{burnCoin})
 	if err != nil {
-		return nil
+		return fmt.Errorf("send coins to burn module: %w", err)
 	}

 	err = up.Keepers.Cosmos.Bank.BurnCoins(ctx, bmemodule.ModuleName, sdk.Coins{burnCoin})
 	if err != nil {
-		return nil
+		return fmt.Errorf("burn axlUSDC coins: %w", err)
 	}

 	err = up.Keepers.Cosmos.Bank.MintCoins(ctx, bmemodule.ModuleName, sdk.Coins{mintCoin})
 	if err != nil {
-		return nil
+		return fmt.Errorf("mint UACT coins: %w", err)
 	}

 	err = up.Keepers.Cosmos.Bank.SendCoinsFromModuleToModule(ctx, bmemodule.ModuleName, emodule.ModuleName, sdk.Coins{mintCoin})
 	if err != nil {
-		return nil
+		return fmt.Errorf("send minted coins to escrow: %w", err)
 	}
 }
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/deployment.go` around lines 204 - 224, The error handlers in the mint/burn flow inside deployment.go are returning nil on failure (in the block using mintCoin, burnCoin and calls to up.Keepers.Cosmos.Bank.SendCoinsFromModuleToModule, BurnCoins, MintCoins), silently swallowing real errors; change each error branch to return the actual error (or wrap and return err) instead of nil so callers receive the failure (e.g., replace each "return nil" with "return err" or a wrapped error that includes context mentioning the failing call and module names emodule.ModuleName / bmemodule.ModuleName).
153-156: ⚠️ Potential issue | 🔴 Critical

Same boolean logic issue as the payments filter.

Line 154 has the same `&&` vs `||` problem. Should use `||` to properly filter for deployment-scoped accounts in open/overdrawn state.

🐛 Proposed fix

```diff
 ekeeper.WithAccounts(sctx, func(acc etypes.Account) bool {
-	if acc.ID.Scope != eid.ScopeDeployment && (acc.State.State != etypes.StateOpen && acc.State.State != etypes.StateOverdrawn) {
+	if acc.ID.Scope != eid.ScopeDeployment || (acc.State.State != etypes.StateOpen && acc.State.State != etypes.StateOverdrawn) {
 		return false
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/deployment.go` around lines 153 - 156, The predicate inside ekeeper.WithAccounts incorrectly uses && between the scope check and state check, causing wrong filtering; change the condition in the inline func so it returns false when the account is not deployment-scoped OR the account state is neither StateOpen nor StateOverdrawn (i.e., replace the outer && with || while keeping the inner state checks on acc.ID.Scope, eid.ScopeDeployment, and acc.State.State against etypes.StateOpen/etypes.StateOverdrawn).
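The `&&` vs `||` difference flagged here (and in the payments filter below) can be demonstrated with a small table-driven Go sketch; the types are simplified stand-ins for the escrow account records, not the real `etypes` definitions.

```go
package main

import "fmt"

// Simplified stand-ins for the escrow types; illustrative only.
type state int

const (
	stateOpen state = iota
	stateOverdrawn
	stateClosed
)

type account struct {
	scopeDeployment bool
	state           state
}

// buggyFilter reproduces the reviewed condition: with &&, the skip branch
// fires only when BOTH checks fail, so an account is processed if it is
// deployment-scoped OR open/overdrawn.
func buggyFilter(a account) bool {
	return !(!a.scopeDeployment && (a.state != stateOpen && a.state != stateOverdrawn))
}

// fixedFilter uses ||: only deployment-scoped accounts in open or
// overdrawn state are processed; everything else is skipped.
func fixedFilter(a account) bool {
	return !(!a.scopeDeployment || (a.state != stateOpen && a.state != stateOverdrawn))
}

func main() {
	closedDeployment := account{scopeDeployment: true, state: stateClosed}
	openNonDeployment := account{scopeDeployment: false, state: stateOpen}

	// The buggy predicate lets both slip through; the fix skips them.
	fmt.Println(buggyFilter(closedDeployment), fixedFilter(closedDeployment))   // true false
	fmt.Println(buggyFilter(openNonDeployment), fixedFilter(openNonDeployment)) // true false
}
```

This is De Morgan's law in practice: "skip unless (deployment AND open/overdrawn)" negates to "skip if NOT deployment OR NOT open/overdrawn", which requires `||` between the two checks.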
126-146: ⚠️ Potential issue | 🔴 Critical

Incorrect boolean logic in payment filter condition.

The condition on line 127 uses `&&` but should use `||`. With the current logic:
- Skips if: (NOT deployment) AND (NOT open AND NOT overdrawn)
- Processes: IS deployment OR IS open/overdrawn (unintended)
To skip payments that are either not deployment-scoped OR not in open/overdrawn state:
🐛 Proposed fix
```diff
 ekeeper.WithPayments(sctx, func(p etypes.Payment) bool {
-	if p.ID.AID.Scope != eid.ScopeDeployment && (p.State.State != etypes.StateOpen && p.State.State != etypes.StateOverdrawn) {
+	if p.ID.AID.Scope != eid.ScopeDeployment || (p.State.State != etypes.StateOpen && p.State.State != etypes.StateOverdrawn) {
 		return false
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/deployment.go` around lines 126 - 146, The payment filter currently uses && between the scope check and the state check inside the ekeeper.WithPayments callback, which yields incorrect logic; change the condition in the anonymous function to use || so it reads: if p.ID.AID.Scope != eid.ScopeDeployment || (p.State.State != etypes.StateOpen && p.State.State != etypes.StateOverdrawn) { return false } — locate this condition in the lambda passed to ekeeper.WithPayments and update the boolean operator accordingly (symbols: ekeeper.WithPayments, p.ID.AID.Scope, eid.ScopeDeployment, p.State.State, etypes.StateOpen, etypes.StateOverdrawn).

contracts/wormhole/src/contract.rs (1)
126-130: ⚠️ Potential issue | 🔴 Critical

Still avoid UTF-8 decoding for the governance module check.

Line 126 still unwraps untrusted VAA bytes, and the current NUL filtering also accepts malformed values like `Co\0re`. Compare the raw 32-byte module bytes instead so invalid payloads return a controlled error instead of panicking.

🛠️ Proposed fix

```diff
-    let module = String::from_utf8(gov_packet.module).unwrap();
-    let module: String = module.chars().filter(|c| c != &'\0').collect();
-
-    if module != "Core" {
+    if gov_packet.module != crate::byte_utils::extend_string_to_32("Core") {
         return Err(StdError::msg("this is not a valid module"));
     }
```
🟠 Major comments (19)
contracts/wormhole/testing/utils.rs-22-32 (1)
22-32: ⚠️ Potential issue | 🟠 Major

Sign with the configured keeper, not a fresh one.

Line 27 ignores `wh` and signs with `WormholeKeeper::new()`. That makes the signatures come from the default guardian set while Line 31 advertises `wh.guardian_set_index()`, so `new_with_guardians` produces VAAs that do not match the instantiated guardians.

Proposed fix

```diff
 pub fn sign_vaa_body_version_2<P: Serialize>(
     wh: WormholeKeeper,
     body: Body<P>,
 ) -> (Vaa<P>, Binary) {
     let data = serde_wormhole::to_vec(&body).unwrap();
-    let signatures = WormholeKeeper::new().sign(&data);
+    let signatures = wh.clone().sign(&data);

     let header = Header {
         version: 2,
         guardian_set_index: wh.guardian_set_index(),
         signatures,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole/testing/utils.rs` around lines 22 - 32, The signatures are created from a fresh keeper (WormholeKeeper::new()) instead of using the provided keeper, causing guardian set mismatches; update sign_vaa_body_version_2 to call sign on the passed-in wh (use wh.sign(&data)) so the signatures match wh.guardian_set_index() and the configured guardians.

contracts/wormhole-old/src/byte_utils.rs-3-12 (1)
3-12: ⚠️ Potential issue | 🟠 Major

Make these byte accessors fallible.

Every accessor here slices directly and can panic on short or malformed packets. These helpers are used for VAA/governance parsing, so bad input should return `StdError`/`ContractError` instead of trapping the contract.

Also applies to: 14-54
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/wormhole-old/src/byte_utils.rs` around lines 3 - 12, The ByteUtils trait methods like get_u8, get_u16, etc., currently return values directly and will panic if the input is too short or malformed. Update these methods to return a Result type with a proper error like StdError or ContractError to make them fallible and avoid panics. Apply this change to all accessor methods in the ByteUtils trait including the ones between lines 14-54 to ensure safe error handling on invalid byte slices during VAA/governance parsing.

tests/upgrade/workers_test.go-166-169 (1)
166-169: ⚠️ Potential issue | 🟠 Major

This can miss partially migrated objects.

The helper only checks that each field is in `{uact, uakt}`. A deployment or payment with mixed allowed denoms still passes, and `denomSeen` counts only the last non-zero resource price. If post-upgrade objects are supposed to be internally consistent, capture one denom per entity and compare all related fields against it.

Also applies to: 188-215, 225-231, 240-276
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/upgrade/workers_test.go` around lines 166 - 169, The current assertDenomMigrated helper only checks each field is in allowedDenoms and can miss partially migrated entities; update the tests (including assertDenomMigrated and the other blocks mentioned) to determine and lock a single observed denom per entity (e.g., derive denom from the first non-zero price or denomSeen logic) then assert that that denom is allowed (allowedDenoms[denom]) and that every related price/denom field for that entity equals this single denom; replace per-field independent checks with this single-denom consistency check across deployment/payment objects so partially migrated mixed-denom objects fail the test.

contracts/pyth/src/contract.rs-43-51 (1)
43-51: ⚠️ Potential issue | 🟠 Major

Validate `data_sources` before persisting them.

`DataSource::matches()` decodes `emitter_address` lazily and just returns `false` on bad hex. Right now a typo or non-32-byte address is accepted into state and only shows up later as a permanent `InvalidDataSource` at update time.

Also applies to: 365-373
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs` around lines 43 - 51, The DataSource conversion currently accepts potentially-invalid emitter_address values into state; before persisting, validate each DataSource's emitter_address (e.g., by attempting the same decode/length check used in DataSource::matches() or calling a dedicated validator) and reject or return an error for bad hex or non-32-byte addresses so invalid entries never get stored; update the conversion logic that constructs DataSource (the block that maps msg.data_sources into Vec<DataSource>) and the similar conversion at the other occurrence (the block around the same conversion later) to perform this validation and handle failures (return an Err or skip with proper error) instead of blindly collecting them.

contracts/pyth/src/contract.rs-146-167 (1)
146-167: ⚠️ Potential issue | 🟠 Major

Don't assume the first accumulator leaf is the configured feed.

Hermes accumulator bundles can contain multiple price updates. Taking `price_updates[0]` means a valid bundle is rejected whenever AKT/USD is not first, even if a later leaf matches `config.price_feed_id`.

Suggested direction

```diff
-    // Get the first price update and verify its Merkle proof
-    let price_update = &accumulator.price_updates[0];
+    // Find the leaf for the configured feed ID
+    let price_update = accumulator
+        .price_updates
+        .iter()
+        .find(|update| {
+            parse_price_feed_message(&update.message_data)
+                .map(|msg| msg.id == config.price_feed_id)
+                .unwrap_or(false)
+        })
+        .ok_or_else(|| ContractError::InvalidPriceData {
+            reason: "configured price feed not found in accumulator".to_string(),
+        })?;

     // Verify Merkle proof
     if !verify_merkle_proof(
         &price_update.message_data,
         &price_update.merkle_proof,
         &accumulator.merkle_root,
     ) {
```

Also applies to: 224-229
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs` around lines 146 - 167, The code assumes the first accumulator leaf is the desired feed by using accumulator.price_updates[0]; instead iterate over accumulator.price_updates to find the entry whose message_data (or its feed id field) matches the configured feed id (config.price_feed_id), verify that entry’s Merkle proof via verify_merkle_proof(message_data, merkle_proof, accumulator.merkle_root), and return accumulator.vaa with Some(matching.message_data.clone()) only for the matched and verified leaf; if none match or verification fails for the matched leaf, return the existing ContractError::InvalidPriceData with an appropriate reason. Ensure you reference accumulator.price_updates, price_update/message_data, price_update.merkle_proof, accumulator.merkle_root, and config.price_feed_id when implementing the change.

contracts/pyth/src/contract.rs-361-363 (1)
361-363: ⚠️ Potential issue | 🟠 Major

Keep the non-empty `price_feed_id` invariant in `UpdateConfig`.

`instantiate()` rejects an empty ID, but `execute_update_config()` can still write `""` into state. After that, every update fails the feed-ID check until an admin fixes it.

Minimal fix

```diff
 if let Some(feed_id) = price_feed_id {
+    if feed_id.is_empty() {
+        return Err(ContractError::InvalidPriceData {
+            reason: "price_feed_id is required".to_string(),
+        });
+    }
     config.price_feed_id = feed_id;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/contract.rs` around lines 361 - 363, Update execute_update_config (the UpdateConfig handling) to preserve the invariant that price_feed_id is non-empty: when matching the incoming price_feed_id in execute_update_config, if Some(feed_id) is provided then validate that !feed_id.is_empty() and return an error (or reject the update) instead of writing an empty string into config.price_feed_id; ensure the same validation logic used in instantiate() is applied here so config.price_feed_id is never set to "" (refer to the update handler where config.price_feed_id is assigned).

tests/e2e/pyth_contract_test.go-891-915 (1)
891-915: ⚠️ Potential issue | 🟠 Major

Resolve the stored code ID deterministically.

`proposalID` is only used as a passed-status check; the helper then returns the last code on chain. If any other code upload exists, this can instantiate the wrong contract. Please tie the lookup to the proposal execution result or the uploaded checksum instead of `CodeInfos[len(codesResp.CodeInfos)-1]`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/pyth_contract_test.go` around lines 891 - 915, GetCodeIDFromWasmEvents currently ignores the proposal execution result and returns the last wasm code on chain, which can pick up unrelated uploads; change it so it deterministically resolves the code ID tied to the proposal execution by extracting the uploaded code identifier/checksum from the proposal's execution result or events and using that to look up the matching code. Specifically, inside GetCodeIDFromWasmEvents use the proposal (from QueryProposalRequest using proposalID) execution message or emitted events to obtain the uploaded code checksum or explicit code_id, then search Codes/CodeInfos for the CodeID that matches that checksum/identifier (instead of returning CodeInfos[len(CodeInfos)-1].CodeID). Ensure you reference the proposal variable and the Codes/CodeInfos response to implement this deterministic lookup.

tests/e2e/pyth_contract_test.go-656-661 (1)
656-661: ⚠️ Potential issue | 🟠 Major

Fail this test when an artifact is missing.

`continue` lets `TestAllContractsExist` pass even though its stated purpose is to verify the WASM artifacts exist. That hides missing build outputs in CI.

✅ Simple fix

```diff
 for _, c := range contracts {
 	wasmPath := findWasmPath(c.dir, c.wasmFile)
-	if wasmPath == "" {
-		s.T().Logf("WARN: %s contract not found at expected paths", c.name)
-		continue
-	}
+	s.Require().NotEmpty(wasmPath, "%s contract not found at expected paths", c.name)

 	info, err := os.Stat(wasmPath)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/pyth_contract_test.go` around lines 656 - 661, The loop over contracts silently skips missing WASM artifacts by using "continue", so change the behavior to fail the test when findWasmPath(c.dir, c.wasmFile) returns an empty string: replace the "continue" with a failing call such as s.T().Fatalf("contract %s not found at expected paths", c.name) (or require.FailNow/Failf equivalent) so TestAllContractsExist (or the test hosting this loop) fails immediately when a contract artifact is missing; keep findWasmPath, contracts and c.name to locate the spot.
upgrades/CHANGELOG.md-7-15 (1)
7-15: 🛠️ Refactor suggestion | 🟠 Major
Add a dedicated v2.0.0 upgrade entry.
This is a breaking upgrade, but this file still only updates the version table and the existing `v1.2.0` note. Please add a new section that documents the new/renamed/deleted stores and the module migrations for this rollout so operators have an actionable upgrade path.
Also applies to: 40-46
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/CHANGELOG.md` around lines 7 - 15, Add a new "v2.0.0" upgrade section to upgrades/CHANGELOG.md (in addition to the existing v1.2.0 note and the version table) that enumerates all new/renamed/deleted stores and lists per-module migration steps required for the rollout; specifically, create a clear header "v2.0.0", under it document the store changes (new stores, renamed store names, and stores removed) and then for each module shown in the version table (audit, cert, deployment, escrow, market, provider, take) add concise migration instructions (commands, config changes, data transformation notes) and any dependency ordering or rollback steps so operators have an actionable upgrade path.
cmd/akash/cmd/genesis.go-121-135 (1)
121-135: ⚠️ Potential issue | 🟠 Major
Don't replace the entire send-enabled table.
`PrepareGenesis` reads the existing bank genesis and then overwrites `SendEnabled` with a brand-new slice. Any preexisting per-denom overrides disappear here. Please merge the `uakt`/`uact` overrides into `bankGenState.SendEnabled` instead of replacing the full list.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cmd/akash/cmd/genesis.go` around lines 121 - 135, The code currently replaces bankGenState.SendEnabled with a new slice, wiping any preexisting per-denom overrides; instead, update/merge the two entries for sdkutil.DenomUakt and sdkutil.DenomUact into the existing bankGenState.SendEnabled. Locate the assignment to bankGenState.SendEnabled, build a map keyed by Denom from the existing bankGenState.SendEnabled, set/overwrite the map entries for sdkutil.DenomUakt (true) and sdkutil.DenomUact (false), then reconstruct bankGenState.SendEnabled from the map so other denoms are preserved before calling cdc.MarshalJSON and storing appState[banktypes.ModuleName].cmd/akash/cmd/genesis.go-242-261 (1)
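A minimal sketch of the merge the reviewer asks for, using a stand-in struct in place of `banktypes.SendEnabled`:

```go
package main

import "fmt"

// sendEnabled mirrors the shape of a per-denom send-enabled entry.
type sendEnabled struct {
	Denom   string
	Enabled bool
}

// mergeSendEnabled applies overrides onto existing entries without
// dropping per-denom settings that are already present.
func mergeSendEnabled(existing, overrides []sendEnabled) []sendEnabled {
	out := make([]sendEnabled, len(existing))
	copy(out, existing)
	byDenom := make(map[string]int, len(out))
	for i, e := range out {
		byDenom[e.Denom] = i
	}
	for _, o := range overrides {
		if i, ok := byDenom[o.Denom]; ok {
			out[i] = o // overwrite the existing entry in place
		} else {
			out = append(out, o)
		}
	}
	return out
}

func main() {
	existing := []sendEnabled{{"ibc/ABC", false}} // preexisting override
	merged := mergeSendEnabled(existing, []sendEnabled{{"uakt", true}, {"uact", false}})
	fmt.Println(len(merged)) // 3: the ibc override survives the merge
}
```

The map-then-append pattern keeps prior overrides while still guaranteeing the `uakt`/`uact` entries end up with the desired values.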
242-261: ⚠️ Potential issue | 🟠 Major
Fix the ACT metadata ordering and display fields.
The DenomUnits are ordered as `act` (6), `mact` (3), `uact` (0) (descending exponent), but Cosmos SDK metadata validation requires ascending exponent order. Also, `Display` should point to `act` (the higher unit), not `uact` (the base unit), matching the pattern used for AKT metadata in the same file. Additionally, `Name` and `Symbol` should be descriptive strings rather than pointing to the base denom constant.
Suggested fix
 {
 	Description: "Akash Compute Token",
 	DenomUnits: []*banktypes.DenomUnit{
 		{
-			Denom:    sdkutil.DenomAct,
-			Exponent: 6,
-		},
-		{
-			Denom:    sdkutil.DenomMact,
-			Exponent: 3,
-		},
-		{
 			Denom:    sdkutil.DenomUact,
 			Exponent: 0,
 		},
+		{
+			Denom:    sdkutil.DenomMact,
+			Exponent: 3,
+		},
+		{
+			Denom:    sdkutil.DenomAct,
+			Exponent: 6,
+		},
 	},
 	Base:    sdkutil.DenomUact,
-	Display: sdkutil.DenomUact,
-	Name:    sdkutil.DenomUact,
-	Symbol:  sdkutil.DenomUact,
+	Display: sdkutil.DenomAct,
+	Name:    "Akash Compute Token",
+	Symbol:  "ACT",
 },
Verify each finding against the current code and only fix it if needed. In `@cmd/akash/cmd/genesis.go` around lines 242 - 261, Reorder the DenomUnits for the AKT metadata in the genesis definition so exponents are ascending (use sdkutil.DenomUact exponent 0, then sdkutil.DenomMact exponent 3, then sdkutil.DenomAct exponent 6); set Display to the human-facing higher unit sdkutil.DenomAct (not the base), and replace Name and Symbol constants with descriptive strings (e.g., Name = "Akash Compute Token" and Symbol = "ACT") instead of using the base denom constant to match the AKT pattern._docs/pyth-integration.md-172-176 (1)
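The ordering rule can be checked mechanically; this is a hedged sketch with a stand-in `denomUnit` type rather than the SDK's `banktypes.DenomUnit`:

```go
package main

import "fmt"

// denomUnit is a stand-in for a bank metadata denom unit.
type denomUnit struct {
	Denom    string
	Exponent uint32
}

// unitsAscending reports whether denom units are listed in strictly
// ascending exponent order, which SDK metadata validation requires.
func unitsAscending(units []denomUnit) bool {
	for i := 1; i < len(units); i++ {
		if units[i].Exponent <= units[i-1].Exponent {
			return false
		}
	}
	return true
}

func main() {
	bad := []denomUnit{{"act", 6}, {"mact", 3}, {"uact", 0}}  // as committed
	good := []denomUnit{{"uact", 0}, {"mact", 3}, {"act", 6}} // as required
	fmt.Println(unitsAscending(bad), unitsAscending(good)) // false true
}
```

A check like this in a unit test would have caught the descending ordering before genesis validation rejected it.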
172-176: ⚠️ Potential issue | 🟠 Major
Reconcile the guardian-set ownership/update flow.
This file says guardian addresses live in `x/oracle` and are updated through Akash governance, but later it documents querying `guardian_set_info` from the Wormhole contract. `_docs/governance-updates.md` in this same PR also describes guardian-set updates via `submit_vaa` on the Wormhole contract. One of these flows is wrong, and operators will follow the wrong recovery/update path.
Also applies to: 743-805, 818-820
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 172 - 176, The docs contradict each other about where guardian sets are stored and how they are updated; pick the correct canonical flow (either guardians are stored in x/oracle and updated via Akash governance OR guardians are stored in the Wormhole contract and updated via submit_vaa) and update all references to match that single flow: change the bullet list that mentions querying guardian_set_info and the lines referencing x/oracle, guardian-set ownership, and submit_vaa (including the ranges noted) so they consistently state the chosen source of truth and update mechanism, and ensure any examples or recovery instructions point to the correct symbol (x/oracle or Wormhole contract + submit_vaa) accordingly.
_docs/pyth-integration.md-437-451 (1)
437-451: ⚠️ Potential issue | 🟠 Major
The AKT cost estimates are off by 10^3.
`1,000,000 uakt` is 1 AKT, not 0.001 AKT, so `1,003,750 uakt` is about 1.00375 AKT per update. The monthly totals in the table are understated by roughly 1000x.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 437 - 451, The AKT conversion is wrong: 1,000,000 uakt equals 1 AKT, so update the "Per Update" block where it shows "Total: ~0.001 AKT" (and the "1,000,000 uakt" line) to show ~1.00375 AKT for "Total" (1,003,750 uakt = 1.00375 AKT), and recompute the Monthly Cost table entries by multiplying 1.00375 AKT by the Updates/Month (use 8,640 → ~8,672.4 AKT, 4,320 → ~4,336.2 AKT, 2,880 → ~2,890.8 AKT); replace the old "~0.001 AKT" and the "~9 AKT / ~4.5 AKT / ~3 AKT" values accordingly and ensure unit labels explicitly say "AKT".make/init.mk-77-79 (1)
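The corrected arithmetic can be checked with a few lines of Go; the per-update cost and the 5-minute cadence (8,640 updates/month) are the figures from the finding above, not live data:

```go
package main

import "fmt"

const uaktPerAKT = 1_000_000 // 1 AKT = 1,000,000 uakt

// aktFromUakt converts micro-AKT (uakt) to whole AKT.
func aktFromUakt(uakt uint64) float64 {
	return float64(uakt) / uaktPerAKT
}

func main() {
	perUpdate := aktFromUakt(1_003_750)
	fmt.Printf("%.5f AKT per update\n", perUpdate) // 1.00375 AKT per update
	fmt.Printf("%.1f AKT per month at 5-minute cadence\n", perUpdate*8640)
}
```

The monthly figure works out to roughly 8,672 AKT, three orders of magnitude above the table's "~9 AKT".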
77-79: ⚠️ Potential issue | 🟠 Major
`uname -m` won't hit this branch on most Linux arm64 hosts.
Linux usually reports `aarch64`, not `arm64`, so ARM runners will still pull `cosmwasm/optimizer` instead of the `-arm64` image.
Suggested fix
-ifeq (arm64,$(UNAME_ARCH))
+ifneq (,$(filter arm64 aarch64,$(UNAME_ARCH)))
 COSMWASM_OPTIMIZER_IMAGE := $(COSMWASM_OPTIMIZER_IMAGE)-arm64
 endif
Verify each finding against the current code and only fix it if needed. In `@make/init.mk` around lines 77 - 79, The conditional that appends "-arm64" to COSMWASM_OPTIMIZER_IMAGE only matches "arm64" but Linux reports "aarch64", so update the branch around the UNAME_ARCH check in make/init.mk to detect both "aarch64" and "arm64" (or normalize UNAME_ARCH to a canonical value) before setting COSMWASM_OPTIMIZER_IMAGE := $(COSMWASM_OPTIMIZER_IMAGE)-arm64; ensure you reference the existing UNAME_ARCH variable and the COSMWASM_OPTIMIZER_IMAGE assignment (replace the current ifeq (arm64,$(UNAME_ARCH)) block with a check that matches aarch64 or arm64).make/setup-cache.mk-62-80 (1)
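The Makefile diff above is the actual fix; this Go sketch just encodes why both `uname -m` spellings have to be treated as the same architecture when picking an image tag:

```go
package main

import "fmt"

// normalizeArch maps the various uname -m spellings of an architecture
// onto a single canonical value for image-tag selection.
func normalizeArch(unameArch string) string {
	switch unameArch {
	case "arm64", "aarch64": // macOS says arm64, Linux says aarch64
		return "arm64"
	case "x86_64", "amd64":
		return "amd64"
	default:
		return unameArch
	}
}

func main() {
	image := "cosmwasm/optimizer"
	if normalizeArch("aarch64") == "arm64" {
		image += "-arm64"
	}
	fmt.Println(image) // cosmwasm/optimizer-arm64
}
```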
62-80: ⚠️ Potential issue | 🟠 Major
`wasmvm-libs-verify` fails on a clean checkout.
Neither download rule nor `wasmvm-libs-verify` depends on `$(AKASH_DEVCACHE)`, so `make wasmvm-libs-verify` can try to write into `$(AKASH_DEVCACHE_LIB)` before that directory exists.
Suggested fix
-$(AKASH_DEVCACHE_LIB)/%:
+$(AKASH_DEVCACHE_LIB)/%: | $(AKASH_DEVCACHE)
 	wget -q --show-progress https://github.com/CosmWasm/wasmvm/releases/download/$(WASMVM_VERSION)/$* -O $@
 	@rm -f $(AKASH_DEVCACHE_LIB)/.wasmvm_verified
-$(AKASH_DEVCACHE_LIB)/wasmvm_checksums.txt:
+$(AKASH_DEVCACHE_LIB)/wasmvm_checksums.txt: | $(AKASH_DEVCACHE)
 	wget -q --show-progress https://github.com/CosmWasm/wasmvm/releases/download/$(WASMVM_VERSION)/checksums.txt -O $@
 	@rm -f $(AKASH_DEVCACHE_LIB)/.wasmvm_verified
 .PHONY: wasmvm-libs-verify
-wasmvm-libs-verify:
+wasmvm-libs-verify: $(AKASH_DEVCACHE)
 	@$(MAKE) -s $(AKASH_DEVCACHE_LIB)/.wasmvm_verified
Verify each finding against the current code and only fix it if needed. In `@make/setup-cache.mk` around lines 62 - 80, The verify target can run before the cache dir exists; update the wasmvm-libs-verify target so it ensures $(AKASH_DEVCACHE) exists before verifying (e.g., make wasmvm-libs-verify depend on $(AKASH_DEVCACHE) or create the directory at the start of the recipe). Modify the wasmvm-libs-verify rule (symbol: wasmvm-libs-verify) to include $(AKASH_DEVCACHE) as a prerequisite or run mkdir -p "$(AKASH_DEVCACHE_LIB)" before invoking $(MAKE) -s $(AKASH_DEVCACHE_LIB)/.wasmvm_verified so the downloads/sha256 check never writes into a missing directory.testutil/oracle/price_feeder.go-24-35 (1)
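The check the `.wasmvm_verified` stamp stands for can be sketched as a sha256 comparison against a `checksums.txt`-style listing; the "hex-digest, two spaces, filename" layout is assumed from the wasmvm release format:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// verifyChecksum checks a downloaded blob against a checksums listing
// of "<hex-sha256>  <filename>" lines.
func verifyChecksum(checksums, name string, blob []byte) bool {
	h := sha256.Sum256(blob)
	want := hex.EncodeToString(h[:])
	for _, line := range strings.Split(checksums, "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[1] == name {
			return fields[0] == want
		}
	}
	return false // name absent from the listing: fail, never silently pass
}

func main() {
	blob := []byte("wasmvm")
	h := sha256.Sum256(blob)
	listing := hex.EncodeToString(h[:]) + "  libwasmvm.x86_64.so"
	fmt.Println(verifyChecksum(listing, "libwasmvm.x86_64.so", blob)) // true
}
```

Treating a missing listing entry as a failure mirrors the point of the review: verification must never be skippable by accident.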
24-35: ⚠️ Potential issue | 🟠 Major
`FeedPrices()` skips the ACT feed.
`NewPriceFeeder()` seeds only AKT, and `FeedPrices()` iterates that map. Any BME test that does `SetupPriceFeeder(...).FeedPrices()` will still be missing the ACT price that the new swap/settlement paths depend on. `FeedPrice()`'s $1.00 fallback does not help here because it is never called for ACT unless the test does so explicitly.
Also applies to: 124-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testutil/oracle/price_feeder.go` around lines 24 - 35, NewPriceFeeder currently seeds only the AKT price so FeedPrices() never sends an ACT price; update NewPriceFeeder to also seed the ACT denom in the PriceFeeder.prices map (use the ACT denom constant, e.g. sdkutil.DenomAct or the project's ACT constant) with the expected $1.00 value (sdkmath.LegacyMustNewDecFromStr("1.0")), and ensure the same initialization is added to the other constructor/initialization site mentioned (around the 124-129 region); this ensures FeedPrices() iterates over and feeds ACT without relying on FeedPrice()'s fallback.app/types/app.go-509-529 (1)
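A stripped-down sketch of why both denoms must be seeded up front; the type, field names, and price values here are illustrative, not the real `testutil/oracle` helper:

```go
package main

import "fmt"

// priceFeeder mimics the test helper: FeedPrices only feeds denoms that
// were seeded at construction time, so both AKT and ACT must be present
// before any test calls it.
type priceFeeder struct {
	prices map[string]string // denom -> decimal price as a string
}

func newPriceFeeder() *priceFeeder {
	return &priceFeeder{prices: map[string]string{
		"uakt": "3.25", // illustrative values, not real feed data
		"uact": "1.00", // the entry the review says is missing today
	}}
}

// feedPrices feeds every seeded denom and reports which ones were fed.
func (pf *priceFeeder) feedPrices() []string {
	fed := make([]string, 0, len(pf.prices))
	for denom := range pf.prices {
		fed = append(fed, denom)
	}
	return fed
}

func main() {
	fmt.Println(len(newPriceFeeder().feedPrices())) // 2
}
```

With only `uakt` seeded, the loop would run once and the ACT-dependent swap/settlement paths would see no price at all.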
509-529: ⚠️ Potential issue | 🟠 Major
Add the wasm module account before enabling the keeper.
This PR wires the upstream `Cosmos.Wasm` keeper, but `ModuleAccountPerms()` in `app/mac.go` still has no entry for `wasmtypes.ModuleName`. That leaves the standard wasmd module account undefined, so module-account-backed wasm flows will fail once contracts start exercising them.
🧩 Suggested follow-up in `app/mac.go`
+import wasmtypes "github.com/CosmWasm/wasmd/x/wasm/types"
@@
 		govtypes.ModuleName:         {authtypes.Burner},
 		ibctransfertypes.ModuleName: {authtypes.Minter, authtypes.Burner},
+		wasmtypes.ModuleName:        {authtypes.Burner},
 	}
 }
Verify each finding against the current code and only fix it if needed. In `@app/types/app.go` around lines 509 - 529, The wasm keeper is being initialized (wasmkeeper.NewKeeper / wasmKeeper) before the wasm module account is registered, but ModuleAccountPerms() currently lacks an entry for wasmtypes.ModuleName; add the wasm module account and its permissions in ModuleAccountPerms() (and ensure the ModuleAccounts map/registration includes authtypes.NewModuleAddress(wasmtypes.ModuleName).String()) so the wasm module account exists before assigning app.Keepers.Cosmos.Wasm; update app/mac.go to include the wasm module name and proper perms consistent with other modules.upgrades/software/v2.0.0/upgrade.go-97-100 (1)
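A defensive check along the lines the prompt suggests, using a plain map in place of the SDK's module-permissions table:

```go
package main

import "fmt"

// requireModuleAccount refuses to wire a keeper for a module whose
// account permissions were never registered.
func requireModuleAccount(perms map[string][]string, module string) error {
	if _, ok := perms[module]; !ok {
		return fmt.Errorf("module account %q missing from permissions map", module)
	}
	return nil
}

func main() {
	perms := map[string][]string{
		"gov":      {"burner"},
		"transfer": {"minter", "burner"},
	}
	// Non-nil error: "wasm" was never registered, exactly the gap the
	// review flags for ModuleAccountPerms().
	fmt.Println(requireModuleAccount(perms, "wasm"))
}
```

Wiring such a guard into app construction turns a latent runtime failure into an immediate startup error.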
97-100: ⚠️ Potential issue | 🟠 Major
Use the human-facing denom for `Display`, `Name`, and `Symbol` instead of the base unit.
`Metadata.base` must be the smallest unit (e.g., `uact`), but `display` should point to the human-facing denom that clients render, typically defined in `denom_units` with a higher exponent. Similarly, `name` should be the token's full name and `symbol` should be the exchange ticker, not the micro-unit. Setting all four fields to `sdkutil.DenomUact` will cause wallets and explorers to show the micro-unit instead of the intended denom.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.0.0/upgrade.go` around lines 97 - 100, The Metadata block currently sets Base, Display, Name and Symbol all to sdkutil.DenomUact; change Display, Name and Symbol to the human-facing denom/constants (not the micro unit) while keeping Base as sdkutil.DenomUact so the smallest unit remains correct; update the struct where Base/Display/Name/Symbol are assigned in upgrade.go (replace sdkutil.DenomUact for Display/Name/Symbol with the appropriate human-facing denom constants used elsewhere in the codebase).
.goreleaser-test-bins.yaml-43-44 (1)
43-44: ⚠️ Potential issue | 🟠 Major
Use a valid deployment target for darwin/arm64.
`darwin/arm64` binaries require macOS 11.0 (Big Sur) as the minimum deployment target. Setting `-mmacosx-version-min=10.12` is invalid for Apple Silicon and will be rejected by the toolchain. Update to `11.0`.
Proposed fix
-      - CGO_CFLAGS=-mmacosx-version-min=10.12
-      - CGO_LDFLAGS=-L./.cache/lib -mmacosx-version-min=10.12
+      - CGO_CFLAGS=-mmacosx-version-min=11.0
+      - CGO_LDFLAGS=-L./.cache/lib -mmacosx-version-min=11.0
Verify each finding against the current code and only fix it if needed. In @.goreleaser-test-bins.yaml around lines 43 - 44, Update the macOS deployment target used in the CGO flags from 10.12 to 11.0 so darwin/arm64 (Apple Silicon) builds are valid: change the CGO_CFLAGS and CGO_LDFLAGS values that currently contain "-mmacosx-version-min=10.12" to "-mmacosx-version-min=11.0" (the variables to edit are CGO_CFLAGS and CGO_LDFLAGS in the YAML).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0037deb5-6aca-409b-b3a3-eeee51676b49
⛔ Files ignored due to path filters (3)
- `Cargo.lock` is excluded by `!**/*.lock`
- `go.sum` is excluded by `!**/*.sum`
- `tests/upgrade/testdata/hackatom.wasm` is excluded by `!**/*.wasm`
📒 Files selected for processing (297)
(collapsed file-list widget; 297 files spanning app/, cmd/, contracts/, tests/, testutil/, upgrades/, util/, x/, and the build/CI tooling)
💤 Files with no reviewable changes (6)
- upgrades/software/v1.2.0/upgrade.go
- upgrades/software/v1.2.0/init.go
- upgrades/software/v1.2.0/market.go
- upgrades/software/v1.2.0/deployment.go
- .github/workflows/dispatch.yaml
- tests/upgrade/config-v0.24.0.tmpl.json
✅ Files skipped from review due to trivial changes (3)
- testutil/sims/simulation_helpers.go
- upgrades/upgrades_test.go
- app/testnet.go
🚧 Files skipped from review as they are similar to previous changes (57)
- tests/e2e/certs_grpc_test.go
- tests/upgrade/upgrade_test.go
- wasmvm.go
- _run/node/docker-compose.yaml
- tests/e2e/certs_cli_test.go
- app/sim/sim_utils.go
- util/format/encoding_helper_test.go
- .claude/skills/vanity/SKILL.md
- _run/node/prop.json
- contracts/pyth/src/lib.rs
- cmd/akash/cmd/testnetify/config.go
- contracts/wormhole/testing/tests.rs
- .mockery.yaml
- _run/.envrc
- app/mac.go
- make/cosmwasm.mk
- _run/node/.gitignore
- .claude/skills/smartcontracts/SKILL.md
- _build/akash.Dockerfile
- contracts/pyth/Cargo.toml
- upgrades/software/v2.0.0/init.go
- contracts/pyth/src/integration_tests.rs
- .gitignore
- util/query/pagination.go
- cmd/akash/cmd/testnetify/utils.go
- _run/.envrc_run
- _run/node/price-feeder.sh
- contracts/pyth/src/wormhole.rs
- Cargo.toml
- contracts/wormhole-old/Cargo.toml
- tests/e2e/provider_grpc_test.go
- meta.json
- x/audit/alias.go
- upgrades/upgrades.go
- _run/node/.envrc
- util/partialord/partialord.go
- testutil/network/util.go
- x/audit/handler/handler.go
- tests/upgrade/testnet.json
- testutil/cosmos/keepers.go
- testutil/network_suite.go
- tests/e2e/oracle_grpc_test.go
- contracts/wormhole/src/testing.rs
- x/audit/genesis.go
- util/partialord/partialord_test.go
- .envrc
- tests/e2e/bme_cli_test.go
- .goreleaser-docker.yaml
- _run/node/deployment.yaml
- util/format/encoding_helper.go
- contracts/wormhole/testing/mod.rs
- make/test-integration.mk
- x/audit/handler/msg_server.go
- testutil/network/rpc.go
- tests/e2e/bme_grpc_test.go
- contracts/wormhole-old/src/state.rs
- testutil/cosmos/mocks/BankKeeper_mock.go
 func NewDefaultGenesisState(cdc codec.Codec) GenesisState {
-	genesis := ModuleBasics().DefaultGenesis(cdc)
-	return genesisFilterTokens(genesis)
+	return ModuleBasics().DefaultGenesis(cdc)
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
printf 'Default-genesis and denom-related references:\n'
rg -n --type=go 'NewDefaultGenesisState|DefaultGenesis|genesisFilter|uakt|stake|DefaultBondDenom|mint_denom|bond_denom|min_deposit' app cmd x
Repository: akash-network/node
Length of output: 11270
🏁 Script executed:
sed -n '25,120p' app/genesis.go
Repository: akash-network/node
Length of output: 3772
🏁 Script executed:
# Check where NewDefaultGenesisState is called and what happens with it
rg -n 'NewDefaultGenesisState' app cmd --type=go -B 2 -A 5
Repository: akash-network/node
Length of output: 2043
🏁 Script executed:
# Find ModuleBasics definition
rg -n 'func ModuleBasics' app --type=go -A 30
Repository: akash-network/node
Length of output: 172
🏁 Script executed:
# Find mbasics definition
rg -n 'mbasics' app/config.go --type=go -B 5 -A 50 | head -100
Repository: akash-network/node
Length of output: 1536
🏁 Script executed:
# Find akashModuleBasics definition
rg -n 'func akashModuleBasics' app --type=go -A 30
Repository: akash-network/node
Length of output: 1829
🏁 Script executed:
# Check for any denom filtering, normalization, or rewriting functions
rg -n 'denom|normalize|rewrite|filter' app/genesis.go cmd/akash/cmd/genesis.go --type=go
Repository: akash-network/node
Length of output: 115
🏁 Script executed:
# Get full context around line 42 in cmd/akash/cmd/genesis.go
sed -n '1,150p' cmd/akash/cmd/genesis.go
Repository: akash-network/node
Length of output: 4658
🏁 Script executed:
# Check InitChain and how genesis is actually used during chain startup
rg -n 'InitChain|initChain' app/app.go --type=go -B 3 -A 15 | head -80
Repository: akash-network/node
Length of output: 1491
🏁 Script executed:
# Check git history for recent changes to app/genesis.go
git log --oneline -20 app/genesis.go
Repository: akash-network/node
Length of output: 85
🏁 Script executed:
# Check what changed in that commit
git show 3d48390 -- app/genesis.go | head -100
Repository: akash-network/node
Length of output: 3963
🏁 Script executed:
# Check git history before that commit
git log --oneline -30 | head -20
Repository: akash-network/node
Length of output: 85
🏁 Script executed:
# Check what the file looked like before (show parent commit)
git show 3d48390^:app/genesis.go 2>/dev/null || echo "No parent commit"
Repository: akash-network/node
Length of output: 78
🏁 Script executed:
# Check the testnetify file to understand the genesis flow
sed -n '100,140p' cmd/akash/cmd/testnetify/testnetify.go
Repository: akash-network/node
Length of output: 1374
🏁 Script executed:
# Check if genesis.go existed before the cosmwasm commit
git log --all --full-history -- app/genesis.go | head -20
Repository: akash-network/node
Length of output: 752
🏁 Script executed:
# Check what genesis.go looked like in the previous commit
git show b10cf66cb1c5b2fe61980719d4add48a0cadb73a:app/genesis.go 2>/dev/null | head -100
Repository: akash-network/node
Length of output: 3373
Remove the direct SDK default denom exposure introduced in genesis.
The previous version normalized all "stake" references to "uakt" via genesisFilterTokens(). This was removed in commit 3d48390, so NewDefaultGenesisState() now returns raw SDK defaults without conversion. Cross-file, cmd/akash/cmd/testnetify/testnetify.go Line 120 seeds balances in "uakt", leaving genesis with a mismatch: staking/mint/gov params will have "stake" while the system expects "uakt".
Restore the denom normalization step:
Suggested fix
func NewDefaultGenesisState(cdc codec.Codec) GenesisState {
- return ModuleBasics().DefaultGenesis(cdc)
+ genesis := ModuleBasics().DefaultGenesis(cdc)
+ return normalizeGenesisDenoms(genesis, "uakt")
}
+func normalizeGenesisDenoms(genesis GenesisState, targetDenom string) GenesisState {
+ // Replace all "stake" references with target denom (e.g., "uakt")
+ normalized := make(GenesisState)
+ for k, v := range genesis {
+ stringData := string(v) // json.RawMessage is already a []byte, no Marshal call needed
+ stringData = strings.ReplaceAll(stringData, `"stake"`, fmt.Sprintf(`"%s"`, targetDenom))
+ normalized[k] = json.RawMessage(stringData)
+ }
+ return normalized
+}🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@app/genesis.go` around lines 32 - 33, NewDefaultGenesisState currently
returns raw SDK defaults from ModuleBasics().DefaultGenesis(cdc) which
reintroduces the "stake" denom; to fix this, call the existing
genesisFilterTokens(...) normalization on the default genesis map before
returning it so all "stake" denoms become "uakt". Locate NewDefaultGenesisState
and replace the direct return of ModuleBasics().DefaultGenesis(cdc) with code
that obtains the default genesis (via ModuleBasics().DefaultGenesis(cdc)),
passes that result into genesisFilterTokens, and returns the normalized genesis
state.
contracts.go
Outdated
const ContractsDir = ".cache/cosmwasm/artifacts"

//go:embed .cache/cosmwasm/artifacts/*.wasm
var Contracts embed.FS
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
printf 'Embed target in contracts.go:\n'
sed -n '1,20p' contracts.go
printf '\nMatching wasm artifacts under .cache/cosmwasm/artifacts:\n'
fd -a -t f '\.wasm$' .cache/cosmwasm/artifacts 2>/dev/null || true
Repository: akash-network/node
Length of output: 301
Remove or gate the //go:embed directive—it will fail compilation.
Line 9's //go:embed .cache/cosmwasm/artifacts/*.wasm pattern matches no files in the current tree. The Go compiler will fail typecheck and build until at least one .wasm file exists in that directory. Either commit the required artifacts, generate them before compilation, or use a build tag to gate this code to avoid breaking fresh checkouts and CI pipelines.
🧰 Tools
🪛 golangci-lint (2.5.0)
[error] 9-9: pattern .cache/cosmwasm/artifacts/*.wasm: no matching files found
(typecheck)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@contracts.go` around lines 7 - 10, The //go:embed directive on the Contracts
variable references .cache/cosmwasm/artifacts/*.wasm which may not exist and
will break compilation; either remove the embed line and load wasm artifacts at
runtime, commit/generate the .wasm files into ContractsDir before builds, or
gate this file with a build tag so the embed is only processed when artifacts
exist (for example add a //go:build wasm_artifacts constraint and document using
that tag in build/CI). Update references to ContractsDir and Contracts
accordingly so callers use the gated/optional embed or the runtime loader.
Force-pushed from e85f008 to 1c68c3e, then from d6527d9 to b04d3be.
Signed-off-by: Artur Troian <troian@users.noreply.github.com>
Description
Closes: #XXXX
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow-up issues.
I have...
added `!` to the type prefix if API or client breaking change
added a changelog entry to CHANGELOG.md