Practical recipes for extending teams, stacks, build images, registries, pipelines, and intercept behavior. Examples use paths from the main tekton-dag platform repository.
New team or new stack from scratch? See TEAM-ONBOARDING-STACKS-AND-BAGGAGE.md for which baggage / header-forwarding library to use per language and an end-to-end stack creation checklist (YAML, Helm, orchestrator).
**Goal:** Isolate config, labels, and orchestrator defaults for another group.
- Repository layout — add `teams/<team>/team.yaml` and `teams/<team>/values.yaml`.

  `teams/squad-b/team.yaml` (orchestrator and docs; loaded from `/teams` when packaged):

  ```yaml
  name: squad-b
  namespace: tekton-pipelines
  cluster: prod-us-east
  imageRegistry: registry.internal/squad-b
  cacheRepo: registry.internal/squad-b/kaniko-cache
  interceptBackend: telepresence
  maxConcurrentRuns: 5
  maxParallelBuilds: 8
  stacks:
    - stacks/squad-b-core.yaml
  ```

  `teams/squad-b/values.yaml` (Helm overrides for that release):

  ```yaml
  teamName: "squad-b"
  namespace: "tekton-pipelines"
  imageRegistry: "registry.internal/squad-b"
  cacheRepo: "registry.internal/squad-b/kaniko-cache"
  interceptBackend: "telepresence"
  maxParallelBuilds: 8
  stackFile: "stacks/squad-b-core.yaml"
  gitUrl: "https://github.com/myorg/platform.git"
  gitRevision: "main"
  orchestrationService:
    enabled: true
    image: "registry.internal/squad-b/tekton-dag-orchestrator:latest"
  ```

- Helm chart — run `helm/tekton-dag/package.sh`, then copy the team YAML into the chart’s raw path so `configmap-teams.yaml` renders:

  ```sh
  mkdir -p helm/tekton-dag/raw/teams/squad-b
  cp teams/squad-b/team.yaml helm/tekton-dag/raw/teams/squad-b/
  ```

- Stacks — add `stacks/squad-b-core.yaml` and ensure `package.sh` copies it into `raw/stacks/`.

- Deploy — run `helm upgrade --install tekton-dag-squad-b ./helm/tekton-dag -n tekton-pipelines -f teams/squad-b/values.yaml`.
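The steps above can be sketched as one script. Hedged: only the file operations actually run here — `package.sh` and `helm` are echoed to keep the sketch side-effect-free, and a temp directory stands in for a real checkout:

```shell
set -e
TEAM=squad-b
root=$(mktemp -d)   # stand-in for the repo checkout in this sketch
mkdir -p "$root/teams/$TEAM" "$root/helm/tekton-dag/raw/teams/$TEAM"
printf 'name: %s\nnamespace: tekton-pipelines\n' "$TEAM" > "$root/teams/$TEAM/team.yaml"
# The copy step from the doc: team YAML goes into the chart's raw path
cp "$root/teams/$TEAM/team.yaml" "$root/helm/tekton-dag/raw/teams/$TEAM/"
# In a real checkout you would now run these two commands:
echo "./helm/tekton-dag/package.sh"
echo "helm upgrade --install tekton-dag-$TEAM ./helm/tekton-dag -n tekton-pipelines -f teams/$TEAM/values.yaml"
```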
**Goal:** Register another service in the DAG (build, image, tests, downstream edges).
Baggage: add the matching library from `libs/` and set `propagation-role` — see TEAM-ONBOARDING-STACKS-AND-BAGGAGE.md.
Edit or create a stack file under `stacks/`. Each entry under `apps` needs a unique `name`, `repo`, `role`, build tool settings, and optional `downstream` / `tests`:
```yaml
# stacks/my-stack.yaml (excerpt)
name: my-stack
defaults:
  namespace: staging
  image-registry: "${IMAGE_REGISTRY}"
apps:
  - name: new-api
    repo: myorg/new-api
    role: persistence
    propagation-role: terminal
    context-dir: "."
    dockerfile: Dockerfile
    build:
      tool: maven
      runtime: spring-boot
      java-version: "21"
      build-command: "mvn -B clean package -DskipTests"
    downstream: []
    tests:
      postman: tests/postman/api.json
```

`tool` must match a compile path your pipelines support (npm, maven, gradle, pip, composer / PHP flows). After changing stacks, run `helm/tekton-dag/package.sh` before upgrading the chart so the stacks ConfigMap updates.
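As an illustration only — this is not part of the repo tooling — a small shell lint can catch a `tool` value the pipelines would reject; the supported list comes from the paragraph above, and the stack excerpt is a stand-in written to a temp path:

```shell
# Write a tiny stack excerpt to lint (stand-in for stacks/my-stack.yaml)
cat > /tmp/my-stack.yaml <<'EOF'
apps:
  - name: new-api
    build:
      tool: maven
  - name: new-fe
    build:
      tool: npm
EOF

supported="npm maven gradle pip composer"
for t in $(grep -E '^[[:space:]]*tool:' /tmp/my-stack.yaml | awk '{print $2}'); do
  case " $supported " in
    *" $t "*) echo "ok: $t" ;;
    *) echo "unsupported tool: $t" >&2; exit 1 ;;
  esac
done
```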
**Goal:** Support another language runtime tag (e.g. Java 25) end-to-end.
- Dockerfile — add or extend build args in `build-images/Dockerfile.<tool>` (example pattern from Maven):

  ```dockerfile
  ARG JAVA_VERSION=21
  FROM maven:3.9-eclipse-temurin-${JAVA_VERSION}
  # ...
  ```

- Build matrix — extend `VARIANT_MAP` in `build-images/build-and-push.sh` and run:

  ```sh
  cd build-images
  REGISTRY=my.registry.io ./build-and-push.sh --matrix --tool maven
  ```

- Helm values — add a matching entry under `compileImageVariants` in `helm/tekton-dag/values.yaml`:

  ```yaml
  compileImageVariants:
    maven-java25: "my.registry.io/tekton-dag-build-maven:java25"
  ```

- PipelineRuns — pass the corresponding `compile-image-maven` (or other tool) param so the compile Task uses that image (`scripts/generate-run.sh` with `--build-images` builds defaults from `IMAGE_REGISTRY`).
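The variant key and tag in the example above appear to follow a `<tool>-java<version>` / `tekton-dag-build-<tool>:java<version>` pattern. A hedged sketch of composing the reference — verify the exact convention against `build-and-push.sh` in your checkout:

```shell
REGISTRY=my.registry.io
TOOL=maven
JAVA_VERSION=25
# Compose the image reference the compileImageVariants entry points at
IMAGE="${REGISTRY}/tekton-dag-build-${TOOL}:java${JAVA_VERSION}"
echo "$IMAGE"
```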
**Goal:** Point Kaniko, app images, and compile images at a new registry.
- Helm — set `imageRegistry`, `cacheRepo`, `compileImages`, `compileImageVariants`, and `orchestrationService.image` in `values.yaml` or a team override file.

- Scripts / local env — `scripts/common.sh` defaults `IMAGE_REGISTRY` to `localhost:5001`; override it with the environment or a repo `.env`:

  ```sh
  export IMAGE_REGISTRY=my.registry.io:443
  ./scripts/publish-build-images.sh   # uses load_env + REGISTRY
  ```

  `publish-build-images.sh` calls `build-images/build-and-push.sh`, which honors `REGISTRY` and positional args. For Kind, `resolve_compile_registry` maps `localhost:5001` → `localhost:5000` for in-cluster references.
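A minimal sketch of that mapping — illustrative only; the real `resolve_compile_registry` lives in `scripts/common.sh` and may handle more cases:

```shell
# Kind publishes to localhost:5001 on the host, while pods reach the same
# registry as localhost:5000; any other registry passes through unchanged.
resolve_compile_registry() {
  case "$1" in
    localhost:5001) echo "localhost:5000" ;;
    *)              echo "$1" ;;
  esac
}

resolve_compile_registry localhost:5001      # in-cluster reference
resolve_compile_registry my.registry.io:443  # unchanged
```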
**Goal:** Different defaults per team without editing shared pipeline YAML.
| Mechanism | What it controls |
|---|---|
| Helm values per release | Orchestrator env: `INTERCEPT_BACKEND`, `STACK_FILE`, `GIT_*`, `IMAGE_REGISTRY`, `CACHE_REPO`, `MAX_PARALLEL_BUILDS`. |
| `POST /api/run` | JSON body may include `intercept_backend`, `stack_file`, `git_revision` for one-off overrides. |
| PipelineRun params | `generate-run.sh` flags (`--intercept-backend`, `--registry`, …) map directly to pipeline parameters. |
| `team.yaml` | Documents team limits, stacks, and registry; keep in sync with Helm for clarity. `/api/teams` exposes this file. |
Example manual API call:

`POST /api/run`

```json
{
  "mode": "pr",
  "changed_app": "demo-fe",
  "pr_number": 99,
  "intercept_backend": "mirrord",
  "stack_file": "stacks/stack-two-vendor.yaml"
}
```

**Goal:** Run extra logic after clone, after image build, around tests, or in `finally`.
- Define a Task in `tasks/` (see `tasks/examples/example-image-scan.yaml` for `post-build`, `example-slack-notify.yaml` for `post-test`):

  ```yaml
  apiVersion: tekton.dev/v1
  kind: Task
  metadata:
    name: my-pre-build
    labels:
      tekton-dag/hook-type: "pre-build"
  spec:
    steps:
      - name: run
        image: alpine:3.20
        script: |
          #!/bin/sh
          set -e
          echo "Custom pre-build"
  ```

- Apply the Task to the pipeline namespace (`kubectl apply -f tasks/my-pre-build.yaml -n tekton-pipelines`, or include it in chart packaging).

- Pass the Task name as a pipeline parameter (`pre-build-task`, `post-build-task`, `pre-test-task`, or `post-test-task`). An empty string skips the hook. Pipelines use a `WhenExpression` on the param, so missing Tasks are not required until you set the name.
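For illustration, the hook name might be wired into a run like this. Hedged: the param names come from the list above, but the PipelineRun skeleton — including the `stack-pr-test` pipelineRef — is an assumption to make the fragment concrete:

```yaml
apiVersion: tekton.dev/v1
kind: PipelineRun
metadata:
  generateName: stack-pr-test-
spec:
  pipelineRef:
    name: stack-pr-test        # assumed pipeline name for this sketch
  params:
    - name: pre-build-task
      value: "my-pre-build"    # the hook Task defined above
    - name: post-test-task
      value: ""                # empty string skips this hook
```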
**Goal:** Pin Node, Java, Python, or PHP for a specific service.
Set fields under `build` in the stack YAML; they are preserved in the resolved stack JSON for tooling and documentation:
```yaml
build:
  tool: npm
  runtime: vue
  node-version: "20"
  build-command: "npm ci && npm run build"
```

```yaml
build:
  tool: maven
  java-version: "17"
  build-command: "mvn -B clean package -DskipTests"
```

```yaml
build:
  tool: pip
  python-version: "3.11"
  build-command: "pip install -r requirements.txt && pytest -q"
```

```yaml
build:
  tool: composer
  php-version: "8.2"
  build-command: "composer install --no-dev --optimize-autoloader"
```

Ensure a compile image exists that matches that toolchain (build and push with `build-and-push.sh --matrix`, then set `compileImageVariants` and pass the right `compile-image-*` param on the PipelineRun). For a PR run that builds only one app, a single `compile-image-maven` value matching that app’s Java version is enough.
**Goal:** Use mirrord instead of Telepresence (or switch per environment).
- Cluster — install the components your chosen backend needs (e.g. the Telepresence Traffic Manager or the mirrord operator, per vendor docs).

- Helm — set `interceptBackend: "mirrord"` in values; the orchestrator sets `INTERCEPT_BACKEND` for webhook-created runs.

- Pipeline param — `stack-pr-test` already exposes `intercept-backend` (`telepresence` | `mirrord`). `generate-run.sh` supports:

  ```sh
  ./scripts/generate-run.sh --mode pr --repo demo-fe --pr 1 --intercept-backend mirrord --apply
  ```

- Images — ensure `compileImages.mirrord` points to a pushed `tekton-dag-build-mirrord` image when mirrord tasks run.

- Manual API — `POST /api/run` accepts `"intercept_backend": "mirrord"` for PR mode.
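As a hedged example of that manual override — the fields come from this doc, while the orchestrator host in the commented `curl` is a placeholder for your Service or port-forward address:

```shell
# One-off mirrord override payload for POST /api/run
payload='{"mode":"pr","changed_app":"demo-fe","pr_number":1,"intercept_backend":"mirrord"}'
echo "$payload"

# Then, against a reachable orchestrator:
# curl -s -X POST http://localhost:8080/api/run \
#   -H 'Content-Type: application/json' -d "$payload"
```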
For details on mirrord tasks and scenarios, see `m7-mirrord-intercept-task.md` in this docs folder.