Stuart's instruction (2026-02-22): Never say "I think I fixed it." You MUST verify before telling Stuart it works.
"I wrote it, I tested it, I proved it, I'm sure it works, this looks great — use it."
- Build — run `npm run build` and confirm it succeeds with zero errors
- Screenshot the live result — use Playwright to take real browser screenshots:

```bash
node --input-type=module <<'EOF'
import { chromium } from '/Users/stuartkerr/.npm-global/lib/node_modules/playwright/index.mjs';
const b = await chromium.launch();
const p = await b.newPage();
await p.goto('http://localhost:3000');
await p.screenshot({ path: '/tmp/snap-verify.png' });
await b.close();
EOF
```
- Read the screenshots — use the Read tool to view the PNG. Check visually: does it look right? Is anything broken, missing, or wrong?
- Test the specific change — if you added a UI element, click it. If you fixed a loading bug, wait and confirm loading is gone. If you fixed a layout, confirm sidebar + header are intact.
- Only THEN declare it done — with confidence, not hedging.
- "I think I fixed it" ❌
- "This should work now" ❌
- "Let me know if it looks right" ❌
- Shipping code that breaks navigation ❌
- Shipping code that creates duplicate elements ❌
- Shipping code with a still-spinning loader ❌
- "I built it, ran Playwright, reviewed the screenshot, the loader is gone, the orb renders, the sidebar is intact — it works." ✓
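The "zero errors" gate in the build step can be enforced mechanically rather than eyeballed. A minimal sketch — the `hasBuildErrors` helper and the sample log lines are illustrative assumptions, not part of the real toolchain:

```javascript
// Return true if a captured build log contains error lines.
// Assumes errors appear on lines containing the word "error"
// (true for typical npm/tsc/vite output, but an assumption here).
function hasBuildErrors(log) {
  return log.split("\n").some((line) => /\berror\b/i.test(line));
}

const cleanLog = "vite v5 building...\n42 modules transformed\nbuild complete";
const dirtyLog = "src/App.tsx(10,5): error TS2304: Cannot find name 'useStat'.";

console.log(hasBuildErrors(cleanLog)); // false → safe to move on to screenshots
console.log(hasBuildErrors(dirtyLog)); // true → do NOT declare it done
```

A check like this only covers step 1; the screenshot review and click-through steps still require actually looking at the result.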
Stuart's instruction (2026-02-21): Every command given in this project MUST be immediately handed to Ruflo. Claude Code is the executor only — Ruflo is the brain.
Stuart gives command
↓
Ruflo receives it (FIRST)
↓
Ruflo queries RuvVector KB (RVF-first: knowledge.rvf + MCP embedded WASM KB)
↓
Ruflo selects agents from RuvNet architecture
↓
Agents execute using agentic-flow + RuvNet tools
↓
Claude Code executes file/bash operations only
# Step 1: Route to Ruflo immediately
npx ruflo@latest hooks route --task "[Stuart's command]"
# Step 2: Initialize swarm for any non-trivial task
npx ruflo@latest swarm init --topology hierarchical --max-agents 8 --strategy specialized
# Step 3: Query RuvVector KB before any action
mcp__ask-ruvnet__kb_search({ query: "[relevant terms]", limit: 5 })

| Command Type | Ruflo Routes To |
|---|---|
| Any question | researcher + KB search first |
| Architecture/design | system-architect (Opus model) |
| Code changes | coder + reviewer |
| Deployment/infra | devops-architect |
| Cleanup/audit | Explore agent + coder |
| Docs update | technical-writer |
| Bug fix | root-cause-analyst + coder |
| Testing | quality-engineer |
VIOLATION = failing to route through Ruflo. Claude Code NEVER answers directly without CF orchestration.
This project has a 502-entry expert-curated knowledge base (v7.0.0, 10 tools, 164 teaching entries) with teaching content that is NOT in any LLM's training data. It covers RuVector (104 Rust crates), Ruflo (219 MCP tools), agents, swarms, AIMDS, embeddings, HNSW, ONNX, MCP, RVF, SONA, MinCut, GNN, attention mechanisms, cognitive containers, WASM deployment, and advanced agentic patterns. Much of this technology was created after your knowledge cutoff.
STEP 1: Check the KB. For ANY question about agents, swarms, vectors, embeddings, HNSW, ONNX, MCP, RVF, AIMDS, security, Ruflo, architecture, knowledge bases, RuVector, coding concepts, debugging, or how things work:
-- Via MCP (preferred — 10 tools available):
mcp__ask-ruvnet__kb_search({ query: "<user's question>", limit: 5 })
-- For teaching/explanations:
mcp__ask-ruvnet__kb_teach({ query: "<concept>", limit: 3 })
-- For WASM/browser apps:
mcp__ask-ruvnet__kb_wasm({ query: "<wasm topic>", limit: 5 })
-- Via local inspection (if MCP unavailable — no PostgreSQL needed):
node scripts/kb-inspect.mjs --search "<relevant terms>"

STEP 2: Route through Ruflo. Do NOT answer complex questions as raw Claude Code. Use Ruflo agents:
- Architecture questions -> `Task(subagent_type="system-architect")`
- Code implementation -> `Task(subagent_type="coder")`
- Research/exploration -> `Task(subagent_type="researcher")`
- Security questions -> `Task(subagent_type="security-engineer")`
- Teaching/explaining -> Check KB first, THEN answer using KB content
STEP 3: Teach from the KB, not from training data. If the KB has a teaching entry (category='teaching'), use ITS analogies and plain English. Stuart is learning -- every response is a teaching moment. The KB entries were written specifically for his learning level.
- Stuart is a vibe coder growing into advanced agentic development
- The KB contains deep teaching knowledge about technologies that are TOO NEW for any LLM's training data
- Ruflo understands RuVector natively -- Claude Code does not
- Every time Claude Code answers without checking the KB, it risks giving stale or wrong information
- The KB has beginner-friendly explanations that Claude Code would not generate on its own
- RVF cognitive container format (24 segments, self-booting, 5.5KB WASM)
- AIMDS: 3-layer security pipeline, 25-level meta-learning, Lyapunov chaos detection
- SONA: Self-optimizing neural architecture (<1ms real-time learning)
- MinCut: Dynamic graph partitioning for self-healing AI (Dec 2025 breakthrough)
- RuVector-Postgres: 290+ SQL functions (pgvector replacement)
- RuVector-WASM: Complete browser vector DB (<400KB, zero backend)
- Micro-HNSW: 7.2KB neuromorphic WASM with spiking neural networks
- 80+ Rust crates and their interconnections
- Teaching entries with plain-English analogies for every major concept
- Progressive learning paths from vibe coding to advanced agent building
- Decision frameworks (when to use WASM vs Postgres, hierarchical vs mesh, etc.)
- Debugging guides written for non-coders
- Stuart says "just do it" or "skip the KB"
- The task is purely mechanical (git commit, file rename, formatting)
- You are reading/displaying a file without interpreting it
This repo is the ONLY knowledge base for the entire RuVector/Ruflo ecosystem. The old Ruvnet-Koweldgebase-and-Application-Builder repo was deleted on 2026-03-20. All KB content, MCP server code, and auto-curation pipelines live here.
- Location: `bin/mcp-server.js` (v7.0.0, embedded-only)
- Data: Reads from `kb-data/` directory (kb-entries.json.gz, kb-embeddings.bin, kb-metadata.json)
- Registration: Global via `~/.mcp.json` as `ask-ruvnet` + `~/.claude.json` as `ask-ruvnet`
- 502 gold entries, 16 categories, avg quality 97/100, binary-quantized 384-dim vectors
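"Binary-quantized 384-dim vectors" means each float dimension is reduced to its sign bit, so one embedding packs into 48 bytes and similarity search becomes cheap Hamming distance. A conceptual sketch of the idea — not the actual kb-embeddings.bin layout, which this document does not specify:

```javascript
// Quantize a float vector to sign bits packed into a Uint8Array.
function binaryQuantize(vec) {
  const out = new Uint8Array(Math.ceil(vec.length / 8));
  vec.forEach((v, i) => {
    if (v > 0) out[i >> 3] |= 1 << (i & 7); // set bit i when dimension i is positive
  });
  return out;
}

// Hamming distance between two packed codes = count of differing bits.
function hamming(a, b) {
  let d = 0;
  for (let i = 0; i < a.length; i++) {
    let x = a[i] ^ b[i];
    while (x) { d += x & 1; x >>= 1; }
  }
  return d;
}

const q1 = binaryQuantize([0.3, -0.1, 0.7, -0.9]);
const q2 = binaryQuantize([0.2, -0.2, -0.4, -0.8]);
console.log(hamming(q1, q2)); // vectors differ only in dimension 2 → 1
```

At 384 dims this packs each vector into exactly 48 bytes, which is where the large memory savings in the quantized browser build come from.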
As of v4.12.0, the build pipeline source of truth is kb-master.json (checked into repo root). Builds no longer require PostgreSQL. PostgreSQL (kb_complete) is still used by the nightly auto-curation scripts but is NOT needed for local builds or deploys.
4:00 AM — kb-evergreen.mjs (LaunchAgent: com.ruvnet.kb-evergreen)
→ Scans ALL ruvnet repos via gh repo list (~200+ repos)
→ Ingests .md, .txt, .rst, AND .toml files (Cargo.toml for crate discovery)
→ Stores raw chunks in architecture_docs with ONNX embeddings
→ NEW repos are automatically detected and ingested
5:00 AM — kb-auto-curate.mjs --rebuild (LaunchAgent: ai.openclaw.kb-curate)
→ Detects gaps: repos in architecture_docs with no gold entry in kb_complete
→ Detects stale: repos updated more recently than their gold entry
→ Synthesizes MULTIPLE teaching entries per repo (up to 5 for monorepos)
→ Each entry answers: What is it? Why care? How to use? What's unique?
→ Embeds with ONNX, upserts to kb_complete
→ Triggers full rebuild: lean RVF + quantized + MCP export
6:00 AM — kb-export-pipeline.mjs (LaunchAgent: ai.openclaw.kb-export)
→ Safety-net rebuild of RVF + browser assets + MCP kb-data
- `node scripts/ingest-catalog.mjs` — writes entries to `kb-master.json` (not PG)
- `kb-evergreen.mjs` pulls ALL repos under the `ruvnet` GitHub user nightly
- New repos (never seen before) are flagged and ingested immediately
- `Cargo.toml` files are now ingested — discovers crates without READMEs
- `kb-auto-curate.mjs` detects the gap (repo in architecture_docs, no gold entry)
- Claude Sonnet synthesizes teaching-quality entries explaining what/why/how
- Multiple entries generated for monorepos with many crates (up to 5)
- `node scripts/build-lean-rvf.mjs` — reads `kb-master.json` (no PG needed) → .ruvector/ + knowledge.rvf + sidecar
- `node scripts/build-quantized-rvf.mjs` — .ruvector/ → SQ8 browser assets
- `node scripts/export-mcp-kb.mjs --output kb-data/` — .ruvector/ → MCP format
- `cd src/ui && npm run build` — Frontend with quantized assets
- `node scripts/kb-inspect.mjs` — inspect KB entries, categories, quality scores (replaces ad-hoc SQL queries)
- NEVER rebuild knowledge.rvf from architecture_docs (255K+ entries, 270MB+)
- NEVER point MCP at old `Ruvnet-Koweldgebase-and-Application-Builder` (DELETED)
- NEVER commit knowledge.rvf at >1MB — wrong build script ran
- NEVER skip ONNX for embeddings — use Xenova/all-MiniLM-L6-v2, 384 dims
- NEVER point build-lean-rvf.mjs at PostgreSQL — it reads `kb-master.json` as of v4.12.0
When starting work on complex tasks, Claude Code MUST automatically:
- Initialize the swarm using CLI tools via Bash
- Spawn concurrent agents using Claude Code's Task tool
- Coordinate via hooks and memory
When user says "spawn swarm" or requests complex work, Claude Code MUST in ONE message:
- Call CLI tools via Bash to initialize coordination
- IMMEDIATELY call Task tool to spawn REAL working agents
- Both CLI and Task calls must be in the SAME response
CLI coordinates, Task tool agents do the actual work!
Use this to prevent agent drift:
# Small teams (6-8 agents) - use hierarchical for tight control
npx ruflo@latest swarm init --topology hierarchical --max-agents 8 --strategy specialized
# Large teams (10-15 agents) - use hierarchical-mesh for V3 queen + peer communication
npx ruflo@latest swarm init --topology hierarchical-mesh --max-agents 15 --strategy specialized

Valid Topologies:
- `hierarchical` - Queen controls workers directly (anti-drift for small teams)
- `hierarchical-mesh` - V3 queen + peer communication (recommended for 10+ agents)
- `mesh` - Fully connected peer network
- `ring` - Circular communication pattern
- `star` - Central coordinator with spokes
- `hybrid` - Dynamic topology switching
Anti-Drift Guidelines:
- hierarchical: Coordinator catches divergence
- max-agents 6-8: Smaller team = less drift
- specialized: Clear roles, no overlap
- consensus: raft (leader maintains state)
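One intuition behind the hierarchical-first guideline: communication paths grow linearly with agent count under a queen, but quadratically in a full mesh, so there are far fewer places for divergence to hide. A back-of-envelope sketch:

```javascript
// Number of communication links for each topology with n agents.
function links(topology, n) {
  switch (topology) {
    case "hierarchical": return n - 1;             // queen ↔ each worker
    case "mesh":         return (n * (n - 1)) / 2; // every pair connected
    default: throw new Error(`unknown topology: ${topology}`);
  }
}

console.log(links("hierarchical", 8)); // 7 links the coordinator can audit
console.log(links("mesh", 8));         // 28 peer links where drift can hide
```

At 15 agents the gap widens to 14 vs 105 links, which is why the hybrid hierarchical-mesh topology is reserved for larger teams.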
When the user requests a complex task, spawn agents in background and WAIT for completion:
// STEP 1: Initialize swarm coordination (anti-drift config)
Bash("npx ruflo@latest swarm init --topology hierarchical --max-agents 8 --strategy specialized")
// STEP 2: Spawn ALL agents IN BACKGROUND in a SINGLE message
// Use run_in_background: true so agents work concurrently
Task({
prompt: "Research requirements, analyze codebase patterns, store findings in memory",
subagent_type: "researcher",
description: "Research phase",
run_in_background: true // ← CRITICAL: Run in background
})
Task({
prompt: "Design architecture based on research. Document decisions.",
subagent_type: "system-architect",
description: "Architecture phase",
run_in_background: true
})
Task({
prompt: "Implement the solution following the design. Write clean code.",
subagent_type: "coder",
description: "Implementation phase",
run_in_background: true
})
Task({
prompt: "Write comprehensive tests for the implementation.",
subagent_type: "tester",
description: "Testing phase",
run_in_background: true
})
Task({
prompt: "Review code quality, security, and best practices.",
subagent_type: "reviewer",
description: "Review phase",
run_in_background: true
})
// STEP 3: WAIT - Tell user agents are working, then STOP
// Say: "I've spawned 5 agents to work on this in parallel. They'll report back when done."
// DO NOT check status repeatedly. Just wait for user or agent responses.

After spawning background agents:
- TELL USER - "I've spawned X agents working in parallel on: [list tasks]"
- STOP - Do not continue with more tool calls
- WAIT - Let the background agents complete their work
- RESPOND - When agents return results, review and synthesize
Example response after spawning:
I've launched 5 concurrent agents to work on this:
- 🔍 Researcher: Analyzing requirements and codebase
- 🏗️ Architect: Designing the implementation approach
- 💻 Coder: Implementing the solution
- 🧪 Tester: Writing tests
- 👀 Reviewer: Code review and security check
They're working in parallel. I'll synthesize their results when they complete.
DO NOT:
- Continuously check swarm status
- Poll TaskOutput repeatedly
- Add more tool calls after spawning
- Ask "should I check on the agents?"

DO:
- Spawn all agents in ONE message
- Tell user what's happening
- Wait for agent results to arrive
- Synthesize results when they return
# 1. Search memory for relevant patterns from past successes
Bash("npx ruflo@latest memory search --query '[task keywords]' --namespace patterns")
# 2. Check if similar task was done before
Bash("npx ruflo@latest memory search --query '[task type]' --namespace tasks")
# 3. Load learned optimizations
Bash("npx ruflo@latest hooks route --task '[task description]'")

# 1. Store successful pattern for future reference
Bash("npx ruflo@latest memory store --namespace patterns --key '[pattern-name]' --value '[what worked]'")
# 2. Train neural patterns on the successful approach
Bash("npx ruflo@latest hooks post-edit --file '[main-file]' --train-neural true")
# 3. Record task completion with metrics
Bash("npx ruflo@latest hooks post-task --task-id '[id]' --success true --store-results true")
# 4. Trigger optimization worker if performance-related
Bash("npx ruflo@latest hooks worker dispatch --trigger optimize")

| Trigger | Worker | When to Use |
|---|---|---|
| After major refactor | optimize | Performance optimization |
| After adding features | testgaps | Find missing test coverage |
| After security changes | audit | Security analysis |
| After API changes | document | Update documentation |
| Every 5+ file changes | map | Update codebase map |
| Complex debugging | deepdive | Deep code analysis |
ALWAYS check memory before:
- Starting a new feature (search for similar implementations)
- Debugging an issue (search for past solutions)
- Refactoring code (search for learned patterns)
- Performance work (search for optimization strategies)
ALWAYS store in memory after:
- Solving a tricky bug (store the solution pattern)
- Completing a feature (store the approach)
- Finding a performance fix (store the optimization)
- Discovering a security issue (store the vulnerability pattern)
| Code | Task | Agents |
|---|---|---|
| 1 | Bug Fix | coordinator, researcher, coder, tester |
| 3 | Feature | coordinator, architect, coder, tester, reviewer |
| 5 | Refactor | coordinator, architect, coder, reviewer |
| 7 | Performance | coordinator, perf-engineer, coder |
| 9 | Security | coordinator, security-architect, auditor |
| 11 | Docs | researcher, api-docs |
Codes 1-9: hierarchical/specialized (anti-drift). Code 11: mesh/balanced
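The code table reduces to a lookup plus one topology rule (codes 1-9 → hierarchical/specialized, code 11 → mesh/balanced). A sketch of that dispatch logic — the shape of the returned config object is an assumption for illustration, not a real Ruflo API:

```javascript
const TASK_CODES = {
  1:  { task: "Bug Fix",     agents: ["coordinator", "researcher", "coder", "tester"] },
  3:  { task: "Feature",     agents: ["coordinator", "architect", "coder", "tester", "reviewer"] },
  5:  { task: "Refactor",    agents: ["coordinator", "architect", "coder", "reviewer"] },
  7:  { task: "Performance", agents: ["coordinator", "perf-engineer", "coder"] },
  9:  { task: "Security",    agents: ["coordinator", "security-architect", "auditor"] },
  11: { task: "Docs",        agents: ["researcher", "api-docs"] },
};

// Codes 1-9 use hierarchical/specialized (anti-drift); code 11 uses mesh/balanced.
function swarmConfig(code) {
  const entry = TASK_CODES[code];
  if (!entry) throw new Error(`unknown task code: ${code}`);
  const antiDrift = code <= 9;
  return {
    ...entry,
    topology: antiDrift ? "hierarchical" : "mesh",
    strategy: antiDrift ? "specialized" : "balanced",
  };
}

console.log(swarmConfig(1).topology);  // "hierarchical"
console.log(swarmConfig(11).strategy); // "balanced"
```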
AUTO-INVOKE SWARM when task involves:
- Multiple files (3+)
- New feature implementation
- Refactoring across modules
- API changes with tests
- Security-related changes
- Performance optimization
- Database schema changes
SKIP SWARM for:
- Single file edits
- Simple bug fixes (1-2 lines)
- Documentation updates
- Configuration changes
- Quick questions/exploration
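The AUTO-INVOKE and SKIP rules reduce to a single predicate: skip categories win, then file count, then trigger categories. A sketch — the field names on `task` and the category strings are illustrative assumptions:

```javascript
const SWARM_TRIGGERS = [
  "feature", "cross-module-refactor", "api-change-with-tests",
  "security", "performance", "db-schema",
];
const SWARM_SKIPS = ["simple-bugfix", "docs", "config", "question"];

// Decide whether a task warrants swarm orchestration.
function shouldSpawnSwarm(task) {
  if (SWARM_SKIPS.includes(task.type)) return false; // skip rules win outright
  if (task.fileCount >= 3) return true;              // 3+ files → swarm
  return SWARM_TRIGGERS.includes(task.type);
}

console.log(shouldSpawnSwarm({ type: "feature", fileCount: 1 }));       // true
console.log(shouldSpawnSwarm({ type: "simple-bugfix", fileCount: 1 })); // false
```

Note the ordering choice: a docs task touching many files still skips the swarm here, matching the SKIP list above.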
ABSOLUTE RULES:
- ALL operations MUST be concurrent/parallel in a single message
- NEVER save working files, text/mds and tests to the root folder
- ALWAYS organize files in appropriate subdirectories
- USE CLAUDE CODE'S TASK TOOL for spawning agents concurrently, not just MCP
MANDATORY PATTERNS:
- TodoWrite: ALWAYS batch ALL todos in ONE call (5-10+ todos minimum)
- Task tool (Claude Code): ALWAYS spawn ALL agents in ONE message with full instructions
- File operations: ALWAYS batch ALL reads/writes/edits in ONE message
- Bash commands: ALWAYS batch ALL terminal operations in ONE message
- Memory operations: ALWAYS batch ALL memory store/retrieve in ONE message
NEVER save to root folder. Use these directories:
- `/src` - Source code files
- `/tests` - Test files
- `/docs` - Documentation and markdown files
- `/config` - Configuration files
- `/scripts` - Utility scripts
- `/examples` - Example code
- Topology: hierarchical (prevents drift)
- Max Agents: 8 (smaller = less drift)
- Strategy: specialized (clear roles)
- Consensus: raft
- Memory: hybrid
- HNSW: Enabled
- Neural: Enabled
| Command | Subcommands | Description |
|---|---|---|
| `init` | 4 | Project initialization with wizard, presets, skills, hooks |
| `agent` | 8 | Agent lifecycle (spawn, list, status, stop, metrics, pool, health, logs) |
| `swarm` | 6 | Multi-agent swarm coordination and orchestration |
| `memory` | 11 | AgentDB memory with vector search (150x-12,500x faster) |
| `mcp` | 9 | MCP server management and tool execution |
| `task` | 6 | Task creation, assignment, and lifecycle |
| `session` | 7 | Session state management and persistence |
| `config` | 7 | Configuration management and provider setup |
| `status` | 3 | System status monitoring with watch mode |
| `workflow` | 6 | Workflow execution and template management |
| `hooks` | 17 | Self-learning hooks + 12 background workers |
| `hive-mind` | 6 | Queen-led Byzantine fault-tolerant consensus |
| Command | Subcommands | Description |
|---|---|---|
| `daemon` | 5 | Background worker daemon (start, stop, status, trigger, enable) |
| `neural` | 5 | Neural pattern training (train, status, patterns, predict, optimize) |
| `security` | 6 | Security scanning (scan, audit, cve, threats, validate, report) |
| `performance` | 5 | Performance profiling (benchmark, profile, metrics, optimize, report) |
| `providers` | 5 | AI providers (list, add, remove, test, configure) |
| `plugins` | 5 | Plugin management (list, install, uninstall, enable, disable) |
| `deployment` | 5 | Deployment management (deploy, rollback, status, environments, release) |
| `embeddings` | 4 | Vector embeddings (embed, batch, search, init) - 75x faster with agentic-flow |
| `claims` | 4 | Claims-based authorization (check, grant, revoke, list) |
| `migrate` | 5 | V2 to V3 migration with rollback support |
| `doctor` | 1 | System diagnostics with health checks |
| `completions` | 4 | Shell completions (bash, zsh, fish, powershell) |
# Initialize project
npx ruflo@latest init --wizard
# Start daemon with background workers
npx ruflo@latest daemon start
# Spawn an agent
npx ruflo@latest agent spawn -t coder --name my-coder
# Initialize swarm
npx ruflo@latest swarm init --v3-mode
# Search memory (HNSW-indexed)
npx ruflo@latest memory search --query "authentication patterns"
# System diagnostics
npx ruflo@latest doctor --fix
# Security scan
npx ruflo@latest security scan --depth full
# Performance benchmark
npx ruflo@latest performance benchmark --suite all

coder, reviewer, tester, planner, researcher
security-architect, security-auditor, memory-specialist, performance-engineer
CVE remediation, input validation, path security:
- `InputValidator` - Zod validation
- `PathValidator` - Traversal prevention
- `SafeExecutor` - Injection protection
hierarchical-coordinator, mesh-coordinator, adaptive-coordinator, collective-intelligence-coordinator, swarm-memory-manager
byzantine-coordinator, raft-manager, gossip-coordinator, consensus-builder, crdt-synchronizer, quorum-manager, security-manager
perf-analyzer, performance-benchmarker, task-orchestrator, memory-coordinator, smart-agent
github-modes, pr-manager, code-review-swarm, issue-tracker, release-manager, workflow-automation, project-board-sync, repo-architect, multi-repo-swarm
sparc-coord, sparc-coder, specification, pseudocode, architecture, refinement
backend-dev, mobile-dev, ml-developer, cicd-engineer, api-docs, system-architect, code-analyzer, base-template-generator
tdd-london-swarm, production-validator
| Hook | Description | Key Options |
|---|---|---|
| `pre-edit` | Get context before editing files | --file, --operation |
| `post-edit` | Record editing outcome for learning | --file, --success, --train-neural |
| `pre-command` | Assess risk before commands | --command, --validate-safety |
| `post-command` | Record command execution outcome | --command, --track-metrics |
| `pre-task` | Record task start, get agent suggestions | --description, --coordinate-swarm |
| `post-task` | Record task completion for learning | --task-id, --success, --store-results |
| `session-start` | Start/restore session (v2 compat) | --session-id, --auto-configure |
| `session-end` | End session and persist state | --generate-summary, --export-metrics |
| `session-restore` | Restore a previous session | --session-id, --latest |
| `route` | Route task to optimal agent | --task, --context, --top-k |
| `route-task` | (v2 compat) Alias for route | --task, --auto-swarm |
| `explain` | Explain routing decision | --topic, --detailed |
| `pretrain` | Bootstrap intelligence from repo | --model-type, --epochs |
| `build-agents` | Generate optimized agent configs | --agent-types, --focus |
| `metrics` | View learning metrics dashboard | --v3-dashboard, --format |
| `transfer` | Transfer patterns via IPFS registry | store, from-project |
| `list` | List all registered hooks | --format |
| `intelligence` | RuVector intelligence system | `trajectory-*`, `pattern-*`, stats |
| `worker` | Background worker management | list, dispatch, status, detect |
| `progress` | Check V3 implementation progress | --detailed, --format |
| `statusline` | Generate dynamic statusline | --json, --compact, --no-color |
| `coverage-route` | Route based on test coverage gaps | --task, --path |
| `coverage-suggest` | Suggest coverage improvements | --path |
| `coverage-gaps` | List coverage gaps with priorities | --format, --limit |
| `pre-bash` | (v2 compat) Alias for pre-command | Same as pre-command |
| `post-bash` | (v2 compat) Alias for post-command | Same as post-command |
| Worker | Priority | Description |
|---|---|---|
| `ultralearn` | normal | Deep knowledge acquisition |
| `optimize` | high | Performance optimization |
| `consolidate` | low | Memory consolidation |
| `predict` | normal | Predictive preloading |
| `audit` | critical | Security analysis |
| `map` | normal | Codebase mapping |
| `preload` | low | Resource preloading |
| `deepdive` | normal | Deep code analysis |
| `document` | normal | Auto-documentation |
| `refactor` | normal | Refactoring suggestions |
| `benchmark` | normal | Performance benchmarking |
| `testgaps` | normal | Test coverage analysis |
# Core hooks
npx ruflo@latest hooks pre-task --description "[task]"
npx ruflo@latest hooks post-task --task-id "[id]" --success true
npx ruflo@latest hooks post-edit --file "[file]" --train-neural true
# Session management
npx ruflo@latest hooks session-start --session-id "[id]"
npx ruflo@latest hooks session-end --export-metrics true
npx ruflo@latest hooks session-restore --session-id "[id]"
# Intelligence routing
npx ruflo@latest hooks route --task "[task]"
npx ruflo@latest hooks explain --topic "[topic]"
# Neural learning
npx ruflo@latest hooks pretrain --model-type moe --epochs 10
npx ruflo@latest hooks build-agents --agent-types coder,tester
# Background workers
npx ruflo@latest hooks worker list
npx ruflo@latest hooks worker dispatch --trigger audit
npx ruflo@latest hooks worker status
# Coverage-aware routing
npx ruflo@latest hooks coverage-gaps --format table
npx ruflo@latest hooks coverage-route --task "[task]"
# Statusline (for Claude Code integration)
npx ruflo@latest hooks statusline
npx ruflo@latest hooks statusline --json

# Check migration status
npx ruflo@latest migrate status
# Run migration with backup
npx ruflo@latest migrate run --backup
# Rollback if needed
npx ruflo@latest migrate rollback
# Validate migration
npx ruflo@latest migrate validate

V3 includes the RuVector Intelligence System:
- SONA: Self-Optimizing Neural Architecture (<0.05ms adaptation)
- MoE: Mixture of Experts for specialized routing
- HNSW: 150x-12,500x faster pattern search
- EWC++: Elastic Weight Consolidation (prevents forgetting)
- Flash Attention: 2.49x-7.47x speedup
The 4-step intelligence pipeline:
- RETRIEVE - Fetch relevant patterns via HNSW
- JUDGE - Evaluate with verdicts (success/failure)
- DISTILL - Extract key learnings via LoRA
- CONSOLIDATE - Prevent catastrophic forgetting via EWC++
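The four steps compose into one learning loop. A purely conceptual sketch of the data flow — none of these function names are the real RuVector API, and the toy array stands in for the HNSW index:

```javascript
// Toy pattern store standing in for the HNSW-indexed memory.
const store = [
  { text: "retry with backoff fixed flaky upload", verdict: "success" },
  { text: "raw retry loop caused duplicate uploads", verdict: "failure" },
];

const retrieve = (query) =>
  store.filter((p) => p.text.includes(query));        // 1. RETRIEVE relevant patterns
const judge = (patterns) =>
  patterns.filter((p) => p.verdict === "success");    // 2. JUDGE with verdicts
const distill = (patterns) =>
  patterns.map((p) => p.text);                        // 3. DISTILL (LoRA in the real system)
const consolidate = (lessons, memory) =>
  [...new Set([...memory, ...lessons])];              // 4. CONSOLIDATE (EWC++ in the real system)

const memory = consolidate(distill(judge(retrieve("upload"))), []);
console.log(memory); // only the successful pattern survives into memory
```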
Features:
- sql.js: Cross-platform SQLite persistent cache (WASM, no native compilation)
- Document chunking: Configurable overlap and size
- Normalization: L2, L1, min-max, z-score
- Hyperbolic embeddings: Poincaré ball model for hierarchical data
- 75x faster: With agentic-flow ONNX integration
- Neural substrate: Integration with RuVector
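Of the normalizations listed, L2 is the one that makes cosine similarity collapse into a plain dot product, which is why it is the default for embedding search. A minimal sketch, independent of the embeddings command itself:

```javascript
// Scale a vector to unit Euclidean length (L2 normalization).
function l2Normalize(vec) {
  const norm = Math.sqrt(vec.reduce((s, v) => s + v * v, 0));
  if (norm === 0) return vec.slice(); // zero vector stays zero
  return vec.map((v) => v / norm);
}

const unit = l2Normalize([3, 4]); // norm of [3, 4] is 5
console.log(unit); // [0.6, 0.8] — length is now exactly 1
```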
- `hierarchical` - Queen controls workers directly
- `mesh` - Fully connected peer network
- `hierarchical-mesh` - Hybrid (recommended)
- `adaptive` - Dynamic based on load
- `byzantine` - BFT (tolerates f < n/3 faulty)
- `raft` - Leader-based (tolerates f < n/2)
- `gossip` - Epidemic for eventual consistency
- `crdt` - Conflict-free replicated data types
- `quorum` - Configurable quorum-based
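The fault bounds above are easy to sanity-check: Byzantine consensus requires n > 3f, Raft only n > 2f (it assumes crash faults, not malicious ones). A quick calculator sketch:

```javascript
// Maximum faulty nodes each protocol tolerates in a cluster of n.
function maxFaults(protocol, n) {
  switch (protocol) {
    case "byzantine": return Math.floor((n - 1) / 3); // f < n/3
    case "raft":      return Math.floor((n - 1) / 2); // f < n/2, crash faults only
    default: throw new Error(`no fault bound for: ${protocol}`);
  }
}

console.log(maxFaults("byzantine", 10)); // 3 — need n > 3f, and 10 > 9
console.log(maxFaults("raft", 10));      // 4 — a majority of 6 nodes stays live
```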
| Metric | Target |
|---|---|
| Flash Attention | 2.49x-7.47x speedup |
| HNSW Search | 150x-12,500x faster |
| Memory Reduction | 50-75% with quantization |
| MCP Response | <100ms |
| CLI Startup | <500ms |
| SONA Adaptation | <0.05ms |
# After any significant operation, track metrics
Bash("npx ruflo@latest hooks post-command --command '[operation]' --track-metrics true")
# Periodically run benchmarks (every major feature)
Bash("npx ruflo@latest performance benchmark --suite all")
# Analyze bottlenecks when performance degrades
Bash("npx ruflo@latest performance profile --target '[component]'")

# At session start - restore previous context
Bash("npx ruflo@latest session restore --latest")
# At session end - persist learned patterns
Bash("npx ruflo@latest hooks session-end --generate-summary true --persist-state true --export-metrics true")

# Train on successful code patterns
Bash("npx ruflo@latest neural train --pattern-type coordination --epochs 10")
# Predict optimal approach for new tasks
Bash("npx ruflo@latest neural predict --input '[task description]'")
# View learned patterns
Bash("npx ruflo@latest neural patterns --list")

# Configuration
RUFLO_CONFIG=./ruflo.config.json
RUFLO_LOG_LEVEL=info
# Provider API Keys
ANTHROPIC_API_KEY=sk-ant-...
OPENAI_API_KEY=sk-...
GOOGLE_API_KEY=...
# MCP Server
RUFLO_MCP_PORT=3000
RUFLO_MCP_HOST=localhost
RUFLO_MCP_TRANSPORT=stdio
# Memory
RUFLO_MEMORY_BACKEND=hybrid
RUFLO_MEMORY_PATH=./data/memory

Run `npx ruflo@latest doctor` to check:
- Node.js version (20+)
- npm version (9+)
- Git installation
- Config file validity
- Daemon status
- Memory database
- API keys
- MCP servers
- Disk space
- TypeScript installation
# Add MCP servers (auto-detects MCP mode when stdin is piped)
claude mcp add ruflo -- npx -y ruflo@latest
claude mcp add ruv-swarm -- npx -y ruv-swarm mcp start # Optional
claude mcp add flow-nexus -- npx -y flow-nexus@latest mcp start # Optional
# Start daemon
npx ruflo@latest daemon start
# Run doctor
npx ruflo@latest doctor --fix

- Task tool: Spawn and run agents concurrently
- File operations (Read, Write, Edit, MultiEdit, Glob, Grep)
- Code generation and programming
- Bash commands and system operations
- TodoWrite and task management
- Git operations
- Swarm init: `npx ruflo@latest swarm init --topology <type>`
- Swarm status: `npx ruflo@latest swarm status`
- Agent spawn: `npx ruflo@latest agent spawn -t <type> --name <name>`
- Memory store: `npx ruflo@latest memory store --key "mykey" --value "myvalue" --namespace patterns`
- Memory search: `npx ruflo@latest memory search --query "search terms"`
- Memory list: `npx ruflo@latest memory list --namespace patterns`
- Memory retrieve: `npx ruflo@latest memory retrieve --key "mykey" --namespace patterns`
- Hooks: `npx ruflo@latest hooks <hook-name> [options]`
# REQUIRED: --key and --value
# OPTIONAL: --namespace (default: "default"), --ttl, --tags
npx ruflo@latest memory store --key "pattern-auth" --value "JWT with refresh tokens" --namespace patterns
npx ruflo@latest memory store --key "bug-fix-123" --value "Fixed null check" --namespace solutions --tags "bugfix,auth"

# REQUIRED: --query (full flag, not -q)
# OPTIONAL: --namespace, --limit, --threshold
npx ruflo@latest memory search --query "authentication patterns"
npx ruflo@latest memory search --query "error handling" --namespace patterns --limit 5

# OPTIONAL: --namespace, --limit
npx ruflo@latest memory list
npx ruflo@latest memory list --namespace patterns --limit 10

# REQUIRED: --key
# OPTIONAL: --namespace (default: "default")
npx ruflo@latest memory retrieve --key "pattern-auth"
npx ruflo@latest memory retrieve --key "pattern-auth" --namespace patterns

npx ruflo@latest memory init --force --verbose

KEY: CLI coordinates the strategy via Bash, Claude Code's Task tool executes with real agents.
- Documentation: https://github.com/ruvnet/ruflo
- Issues: https://github.com/ruvnet/ruflo/issues
Remember: Ruflo coordinates, Claude Code Task tool creates!
Do what has been asked; nothing more, nothing less. NEVER create files unless they're absolutely necessary for achieving your goal. ALWAYS prefer editing an existing file to creating a new one. NEVER proactively create documentation files (*.md) or README files. Only create documentation files if explicitly requested by the User. Never save working files, text/mds and tests to the root folder.
- SPAWN IN BACKGROUND: Use `run_in_background: true` for all agent Task calls
- SPAWN ALL AT ONCE: Put ALL agent Task calls in ONE message for parallel execution
- TELL USER: After spawning, list what each agent is doing (use emojis for clarity)
- STOP AND WAIT: After spawning, STOP - do NOT add more tool calls or check status
- NO POLLING: Never poll TaskOutput or check swarm status - trust agents to return
- SYNTHESIZE: When agent results arrive, review ALL results before proceeding
- NO CONFIRMATION: Don't ask "should I check?" - just wait for results
Example spawn message:
"I've launched 4 agents in background:
- 🔍 Researcher: [task]
- 💻 Coder: [task]
- 🧪 Tester: [task]
- 👀 Reviewer: [task]
Working in parallel - I'll synthesize when they complete."
This project is indexed by GitNexus as Ask-Ruvnet (6745 symbols, 10159 relationships, 300 execution flows). Use the GitNexus MCP tools to understand code, assess impact, and navigate safely.
If any GitNexus tool warns the index is stale, run `npx gitnexus analyze` in terminal first.
- MUST run impact analysis before editing any symbol. Before modifying a function, class, or method, run `gitnexus_impact({target: "symbolName", direction: "upstream"})` and report the blast radius (direct callers, affected processes, risk level) to the user.
- MUST run `gitnexus_detect_changes()` before committing to verify your changes only affect expected symbols and execution flows.
- MUST warn the user if impact analysis returns HIGH or CRITICAL risk before proceeding with edits.
- When exploring unfamiliar code, use `gitnexus_query({query: "concept"})` to find execution flows instead of grepping. It returns process-grouped results ranked by relevance.
- When you need full context on a specific symbol — callers, callees, which execution flows it participates in — use `gitnexus_context({name: "symbolName"})`.
- `gitnexus_query({query: "<error or symptom>"})` — find execution flows related to the issue
- `gitnexus_context({name: "<suspect function>"})` — see all callers, callees, and process participation
- READ `gitnexus://repo/Ask-Ruvnet/process/{processName}` — trace the full execution flow step by step
- For regressions: `gitnexus_detect_changes({scope: "compare", base_ref: "main"})` — see what your branch changed
- Renaming: MUST use `gitnexus_rename({symbol_name: "old", new_name: "new", dry_run: true})` first. Review the preview — graph edits are safe, text_search edits need manual review. Then run with `dry_run: false`.
- Extracting/Splitting: MUST run `gitnexus_context({name: "target"})` to see all incoming/outgoing refs, then `gitnexus_impact({target: "target", direction: "upstream"})` to find all external callers before moving code.
- After any refactor: run `gitnexus_detect_changes({scope: "all"})` to verify only expected files changed.
- NEVER edit a function, class, or method without first running `gitnexus_impact` on it.
- NEVER ignore HIGH or CRITICAL risk warnings from impact analysis.
- NEVER rename symbols with find-and-replace — use `gitnexus_rename`, which understands the call graph.
- NEVER commit changes without running `gitnexus_detect_changes()` to check affected scope.
| Tool | When to use | Command |
|---|---|---|
| `query` | Find code by concept | `gitnexus_query({query: "auth validation"})` |
| `context` | 360-degree view of one symbol | `gitnexus_context({name: "validateUser"})` |
| `impact` | Blast radius before editing | `gitnexus_impact({target: "X", direction: "upstream"})` |
| `detect_changes` | Pre-commit scope check | `gitnexus_detect_changes({scope: "staged"})` |
| `rename` | Safe multi-file rename | `gitnexus_rename({symbol_name: "old", new_name: "new", dry_run: true})` |
| `cypher` | Custom graph queries | `gitnexus_cypher({query: "MATCH ..."})` |
| Depth | Meaning | Action |
|---|---|---|
| d=1 | WILL BREAK — direct callers/importers | MUST update these |
| d=2 | LIKELY AFFECTED — indirect deps | Should test |
| d=3 | MAY NEED TESTING — transitive | Test if critical path |
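These depth levels correspond to breadth-first hops over the reverse call graph. A sketch on a toy graph — the graph here is invented for illustration; real data comes from `gitnexus_impact`:

```javascript
// Reverse call graph: symbol → its direct callers (invented example).
const callers = {
  validateUser: ["loginHandler", "signupHandler"],
  loginHandler: ["authRouter"],
  signupHandler: ["authRouter"],
  authRouter: [],
};

// Group upstream dependents of `target` by BFS depth:
// d=1 will break, d=2 likely affected, d=3 may need testing.
function blastRadius(target, maxDepth = 3) {
  const byDepth = {};
  const seen = new Set([target]);
  let frontier = [target];
  for (let depth = 1; depth <= maxDepth && frontier.length > 0; depth++) {
    const next = [];
    for (const sym of frontier) {
      for (const caller of callers[sym] ?? []) {
        if (!seen.has(caller)) {
          seen.add(caller);
          next.push(caller);
        }
      }
    }
    if (next.length > 0) byDepth[depth] = next;
    frontier = next;
  }
  return byDepth;
}

console.log(blastRadius("validateUser"));
// d=1: loginHandler, signupHandler (WILL BREAK — direct callers)
// d=2: authRouter (LIKELY AFFECTED — indirect)
```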
| Resource | Use for |
|---|---|
| `gitnexus://repo/Ask-Ruvnet/context` | Codebase overview, check index freshness |
| `gitnexus://repo/Ask-Ruvnet/clusters` | All functional areas |
| `gitnexus://repo/Ask-Ruvnet/processes` | All execution flows |
| `gitnexus://repo/Ask-Ruvnet/process/{name}` | Step-by-step execution trace |
Before completing any code modification task, verify:
- `gitnexus_impact` was run for all modified symbols
- No HIGH/CRITICAL risk warnings were ignored
- `gitnexus_detect_changes()` confirms changes match expected scope
- All d=1 (WILL BREAK) dependents were updated
After committing code changes, the GitNexus index becomes stale. Re-run analyze to update it:
`npx gitnexus analyze`

If the index previously included embeddings, preserve them by adding `--embeddings`:

`npx gitnexus analyze --embeddings`

To check whether embeddings exist, inspect `.gitnexus/meta.json` — the `stats.embeddings` field shows the count (0 means no embeddings). Running analyze without `--embeddings` will delete any previously generated embeddings.
Claude Code users: A PostToolUse hook handles this automatically after `git commit` and `git merge`.
| Task | Read this skill file |
|---|---|
| Understand architecture / "How does X work?" | .claude/skills/gitnexus/gitnexus-exploring/SKILL.md |
| Blast radius / "What breaks if I change X?" | .claude/skills/gitnexus/gitnexus-impact-analysis/SKILL.md |
| Trace bugs / "Why is X failing?" | .claude/skills/gitnexus/gitnexus-debugging/SKILL.md |
| Rename / extract / split / refactor | .claude/skills/gitnexus/gitnexus-refactoring/SKILL.md |
| Tools, resources, schema reference | .claude/skills/gitnexus/gitnexus-guide/SKILL.md |
| Index, status, clean, wiki CLI commands | .claude/skills/gitnexus/gitnexus-cli/SKILL.md |