Release Date: October 2025
- Added `force-new-cluster.yml` for automated etcd cluster recovery via CIB attributes
- Ansible conversion of Carlo Lobrano's shell script using the `cluster_vms` inventory group
Release Date: October 2025
- Automatic discovery of cluster VMs and inventory updates after deployment
- Cluster VMs added to the `[cluster_vms]` group with SSH ProxyJump configuration through the hypervisor
- Direct Ansible access to cluster nodes from the local machine without requiring direct network access
- Works with both dev-scripts and kcli deployment methods
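The ProxyJump wiring can be pictured with a minimal inventory sketch; the host names, addresses, and user below are hypothetical stand-ins, since the real entries are generated automatically:

```ini
# Hypothetical inventory excerpt -- the generated entries will differ
[cluster_vms]
master-0 ansible_host=192.168.111.20 ansible_user=core
master-1 ansible_host=192.168.111.21 ansible_user=core

[cluster_vms:vars]
; Hop through the hypervisor; assumes a host entry named 'hypervisor' exists
ansible_ssh_common_args=-o ProxyJump=hypervisor
```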
Usage example:
```bash
ansible cluster_vms -m ping -i inventory.ini
ansible cluster_vms -m shell -a "uptime" -i inventory.ini
ansible-playbook my-cluster-playbook.yml -i inventory.ini -l cluster_vms
```
- New `make patch-nodes` command for building and patching resource-agents on cluster nodes
- Automated workflow: builds the RPM on the hypervisor, copies it to localhost, and patches all cluster nodes
- Eliminates manual RPM building and distribution steps
- Includes automated node reboots with etcd health verification
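The post-reboot health verification in a workflow like this boils down to a retry loop. The sketch below is generic shell, not the playbook's actual implementation; the real probed command would be an etcd health check, which is only indicated in a comment:

```bash
#!/bin/sh
# Generic retry helper: run a command until it succeeds or attempts run out.
wait_for() {
  attempts=$1; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then return 0; fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# In the real workflow the probed command would be an etcd health check
# on the rebooted node (e.g. an `etcdctl endpoint health` invocation).
wait_for 5 true && echo "healthy"   # prints: healthy
```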
Usage:
```bash
# From deploy/ directory
make patch-nodes
```
- New `make get-tnf-logs` command for collecting etcd-related logs from cluster VMs
Usage:
```bash
# From deploy/ directory
make get-tnf-logs
```
- All playbooks now consistently use the `metal_machine` host group
- Updated `kcli-install.yml`, `init-host.yml`, and `kcli-redfish.yml` to use `metal_machine`
- Prevents accidental execution on cluster VMs when using an inventory with cluster VM entries
- Ensures consistent behavior across all deployment playbooks
- Fixed the `make ssh` command to use the configured SSH key from `instance.env`
- SSH script now correctly derives the private key path from the configured public key
- Resolves authentication issues when using non-default SSH keys
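The derivation itself is a one-line shell parameter expansion; the variable names below are illustrative, not the script's actual ones:

```bash
#!/bin/sh
# Illustrative variable names; the real script reads the path from instance.env.
SSH_PUBLIC_KEY="$HOME/.ssh/my_custom_key.pub"

# Strip the trailing .pub suffix to get the matching private key path.
SSH_PRIVATE_KEY="${SSH_PUBLIC_KEY%.pub}"

echo "$SSH_PRIVATE_KEY"   # -> <home>/.ssh/my_custom_key
```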
- Comprehensive main README rewrite with clear quick-start options for AWS and external servers
Release Date: September 2025
- Added `init-host.yml` playbook for external RHEL host preparation
- Support for non-AWS environments including Beaker labs and bare metal systems
- Automated host configuration equivalent to AWS hypervisor initialization
- RHSM registration, package installation, and dev-scripts preparation
- Flexible RHSM credential configuration via environment variables or local files
- Automatic working directory configuration for storage optimization (`/home/dev-scripts`)
- Essential package installation including golang, make, git, and development tools
See `deploy/openshift-clusters/README-external-host.md` for detailed usage instructions.
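The env-or-file precedence for credentials can be sketched as below; the function name, variable name, and fallback file path are all assumptions for illustration, not the playbook's real interface:

```bash
#!/bin/sh
# Sketch of a credential lookup order (hypothetical names):
#   1. environment variable, 2. local file, 3. empty (caller decides).
rhsm_username() {
  if [ -n "${RHSM_USERNAME:-}" ]; then
    echo "$RHSM_USERNAME"
  elif [ -f "${CRED_FILE:-./rhsm_username}" ]; then
    cat "${CRED_FILE:-./rhsm_username}"
  fi
}

RHSM_USERNAME=demo-user
echo "registering as: $(rhsm_username)"   # prints: registering as: demo-user
```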
- Added `helpers/fencing_validator.sh` for two-node cluster fencing validation
- Non-disruptive validation of STONITH configuration, node health, and etcd quorum
- Support for multiple transport methods (auto-detection, SSH, oc debug)
- IPv4/IPv6 support with automatic node discovery
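Transport auto-detection typically reduces to "first available client wins". This is a generic sketch, not the validator's actual code; in the real script the candidates would be ssh and `oc debug`:

```bash
#!/bin/sh
# Pick the first transport whose client binary exists on PATH.
detect_transport() {
  for cand in "$@"; do
    if command -v "$cand" >/dev/null 2>&1; then
      echo "$cand"
      return 0
    fi
  done
  return 1   # nothing usable found
}

detect_transport no-such-client-xyz sh   # prints: sh
```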
Release Date: August 2025
- Added `kcli-install` role providing an alternative deployment method using the kcli virtualization management tool
- Support for both fencing and arbiter topologies with kcli-based deployment
- Automated libvirt virtualization stack and kcli installation from COPR repository
- Integration with kcli's BMC/Redfish simulation for fencing configuration
- Pacemaker fencing configuration available via the `kcli-redfish.yml` playbook for TNF clusters
- Refactored proxy functionality into a dedicated `proxy-setup` role
- Separate task files for credentials, environment, infrastructure, and container management
- Automated cluster authentication file management for direct hypervisor access
- Authentication files automatically copied to the standard `~/auth/` directory on the hypervisor
- Default kubeconfig symlink (`~/.kube/config`) created for seamless `oc` command usage
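The auth-file convention amounts to a copy plus a symlink. The sketch below reproduces it in a throwaway directory, so the paths are stand-ins for `~/auth/` and `~/.kube/config` on the hypervisor:

```bash
#!/bin/sh
# Recreate the layout in a temp dir (stand-in for $HOME on the hypervisor).
home="$(mktemp -d)"
mkdir -p "$home/auth" "$home/.kube"

# The role copies the cluster's kubeconfig into ~/auth/ ...
echo "fake-kubeconfig" > "$home/auth/kubeconfig"

# ... and links ~/.kube/config to it so `oc` needs no --kubeconfig flag.
ln -sf "$home/auth/kubeconfig" "$home/.kube/config"

cat "$home/.kube/config"   # prints: fake-kubeconfig
```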
```bash
# Non-interactive fencing deployment
ansible-playbook kcli-install.yml -i inventory.ini
```
If you are installing a TNF cluster, Pacemaker configuration is done automatically. To do it manually:
```bash
# Configure pacemaker fencing for TNF clusters
ansible-playbook kcli-redfish.yml -i inventory.ini
```
- Manual Execution Required: KCLI deployment is not integrated with the make command system
- No Cluster Management: KCLI clusters are not supported by existing start/stop/cleanup tools
Release Date: August 2025
- Added Agent-based installation method alongside existing IPI method for arbiter topology
- New `make arbiter-agent` command for non-interactive Agent-based deployments
- Interactive method selection (ipi or agent) when deploying the arbiter topology
- Enhanced arbiter configuration with separate IPI and Agent-specific variables
- Updated default OpenShift release image to 4.20.0-ec.4-x86_64
- Added deployment script with comprehensive error handling
- Updated READMEs with Agent-based installation options
- Enhanced configuration examples with method-specific sections
Release Date: July 2025
- Added EC2 instance start/stop capabilities with OpenShift cluster detection
- Instance operations detect running OpenShift clusters and provide management options
- Interactive prompts guide users through cluster shutdown/startup procedures
- New `make redeploy-cluster` command with deployment strategy detection
- Automatic detection of cluster topology changes (Arbiter ↔ Fencing) with cleanup strategies
- Deployment paths optimized based on cluster state and configuration changes
- Simplified cluster cleanup with `make clean` and `make full-clean` commands
- Standardized cleanup interface replacing direct ansible playbook commands
- Documentation updated to recommend make targets over manual ansible commands
- Breaking Change: All make commands now run from the `deploy/` directory instead of `deploy/aws-hypervisor/`
- `make create` - Create a new EC2 instance (renamed from `deploy`)
- `make start` - Start a stopped EC2 instance with cluster detection
- `make stop` - Interactive stop with cluster management options
- `make redeploy-cluster` - Cluster redeployment with mode selection
- `make shutdown-cluster` - Graceful OpenShift cluster VM shutdown
- `make startup-cluster` - Restore OpenShift cluster VMs and proxy services
- `make clean` - Standard cluster cleanup preserving cached data for faster redeployment
- `make full-clean` - Complete cluster cleanup including all cached data for a thorough reset
- Stop script detects running clusters and offers shutdown, cleanup, or redeploy options, with a separate force-stop command
- Cluster state tracking maintains configuration state for recovery
- Automatic proxy container lifecycle management during cluster operations
- Cluster VM state tracking for startup/shutdown cycles
- Updated .gitignore for inventory backups and config files
- Reorganized Makefile with comprehensive command structure and help system
- Deployment selection between fast redeploy, clean deployment, and complete rebuild
- Enhanced Ansible playbook integration for redeploy workflows
- Split README documentation: makefile commands documented in `deploy/README.md`
- AWS hypervisor scripts documented separately in `deploy/aws-hypervisor/README.md`
For existing users: change your working directory from `deploy/aws-hypervisor/` to `deploy/` when running make commands:
```bash
# Old workflow
cd deploy/aws-hypervisor
make deploy

# New workflow
cd deploy
make deploy
```
Release Date: July 31, 2025
- Added helper scripts for patching resource-agents on OpenShift cluster nodes
- Support for both shell script and Ansible playbook-based approaches
- Automatic cluster node discovery and RPM deployment across all nodes
- `helpers/apply-rpm-patch.sh` - Shell script for installing RPM packages on all cluster nodes using rpm-ostree
- `helpers/apply-rpm-patch.yml` - Ansible playbook alternative for RPM installation
- Added `helpers/README.md` with usage instructions for both approaches
- Sample inventory file `helpers/inventory_ocp_hosts.sample` for the Ansible workflow
- Prerequisites validation (`oc` and `jq` tools)
- Cluster authentication verification
- Node reboot instructions after RPM installation
- Simplified workflow for resource agent updates on OpenShift clusters
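The overall flow of these helpers can be sketched as discovery followed by a per-node loop. The node names and RPM filename below are hard-coded stand-ins for what the cluster query would return, and no command here actually touches a cluster:

```bash
#!/bin/sh
# Stand-in for discovery; the real scripts query the cluster, e.g.:
#   oc get nodes -o jsonpath='{.items[*].metadata.name}'
NODES="master-0 master-1"
RPM="resource-agents-4.x.rpm"   # hypothetical package name

for node in $NODES; do
  # The real helper copies the RPM to the node and layers it with
  # rpm-ostree, then the node is rebooted to apply the new deployment.
  echo "patch $node with $RPM"
done
```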
Release Date: July 10, 2025
- Redfish fencing configuration for Two-Node with Fencing (TNF) clusters on OpenShift 4.19+
- Integrated and standalone Redfish configuration workflows
- Bare metal host management with Redfish-compatible BMC support
- Configuration examples with separation between arbiter and fencing examples
- Redfish role documentation for configuration and troubleshooting
- Redfish configuration can run as part of main deployment or standalone
- Added `redfish.yml` playbook for standalone Redfish configuration
- New Redfish role with BMH (BareMetalHost) processing
- Integration with TNF installation workflow for stonith setup
- Enhanced role documentation
Release Date: June 19, 2025
- Added support for Two-Node with Fencing cluster topology
- Single toolbox supports both TNA (Two-Node Arbiter) and TNF deployments
- Interactive mode selection between arbiter and fencing topologies
- Moved from `tna-ipi-baremetalds-virt` to a unified `openshift-clusters` directory
- Separate config files for arbiter (`config_arbiter.sh`) and fencing (`config_fencing.sh`) deployments
- Reorganized roles for clarity and maintainability
- TNF documentation explaining Two-Node with Fencing concepts, Pacemaker integration, and use cases
- Updated Two-Node with Arbiter documentation
- Added visual topology diagrams for both TNA and TNF configurations
- Consolidated deployment guide supporting both topologies
- Deployment script prompts for cluster type (arbiter/fencing)
- Enhanced config file validation and examples
- Role restructuring with `install-dev` replacing `arbiter-dev`
Release Date: June 5, 2025
- Toolchain for creating development hypervisors in AWS EC2
- CloudFormation integration for infrastructure provisioning with security groups and networking
- TNA deployment tools integration with AWS-provisioned hypervisors
- `create.sh` - Deploy a new EC2 instance using CloudFormation
- `init.sh` - Initialize and configure the deployed instance
- `destroy.sh` - Clean teardown of AWS resources
- `ssh.sh` - Direct SSH access to instances
- `configure.sh` - Post-deployment hypervisor setup
- `instance.env.template` for AWS configuration
- Dynamic inventory management for Ansible
- SSH key management and security group setup
- `make deploy` (run from the `deploy/` directory) creates, initializes, and configures the development environment
- AWS hypervisor integration with cluster deployment tools
- Instance start/stop for development cost control
- Instructions for AWS account setup, CLI configuration, and deployment
- Instructions for using AWS hypervisor with cluster deployment tools
- Common issues and solutions for AWS-based development
Release Date: May 16, 2025
- Deployment automation for OpenShift clusters with arbiter topology
- Integration with openshift-metal3/dev-scripts
- Optimized for bare metal development and testing environments
- Modular roles for configuration, deployment, and cleanup
- Automated SSH setup, git configuration, and development tools
- Proxy container setup for external cluster access
- `ansible-playbook setup.yml` with configuration validation
- Start, stop, and cleanup operations for development clusters
- CLI setup with aliases and environment configuration
- Instructions for Two-Node Arbiter cluster deployment
- System requirements and setup instructions
- Sample configs for OpenShift release images and development settings
- Basic connectivity and deployment issue resolution
- Separation between configuration, deployment, and management roles
- Template-based config files with environment-specific customization
- Optimized for rapid development, testing, and iteration workflows
The Two-Node Toolbox provides tooling for deploying and managing two-node OpenShift clusters in development environments. It supports both Two-Node Arbiter (TNA) and Two-Node with Fencing (TNF) topologies for High Availability solutions in edge computing and development scenarios.
- Two-Node with Arbiter (TNA): 2 full control planes + 1 arbiter node
- Two-Node with Fencing (TNF): 2 control planes + software-based fencing via Pacemaker/Corosync
- Edge computing deployments requiring HA
- Development and testing environments
- CI/CD integration for cluster lifecycle testing
- Prototyping and validation workflows
Note: Two-node configurations are Technology Preview features and not covered under Red Hat production SLAs. Primarily targets OpenShift 4.19+ deployments on bare metal infrastructure.