diff --git a/absolute-beginners/devops-beginner/ansible/_category_.json b/absolute-beginners/devops-beginner/ansible/_category_.json
new file mode 100644
index 0000000..1d358a0
--- /dev/null
+++ b/absolute-beginners/devops-beginner/ansible/_category_.json
@@ -0,0 +1,13 @@
+{
+ "label": "Ansible",
+ "position": 9,
+ "link": {
+ "type": "generated-index",
+ "title": "Ansible Configuration Management",
+ "description": "Learn to manage hundreds of servers simultaneously. Master Agentless automation, Playbooks, and YAML-based configuration for CodeHarborHub infrastructure. Ideal for DevOps beginners eager to streamline server management and deployment processes. Start your journey with Ansible and transform your infrastructure management skills!"
+ },
+ "customProps": {
+ "icon": "🤖",
+ "status": "Beginner-Friendly"
+ }
+}
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/ansible/ansible-architecture.mdx b/absolute-beginners/devops-beginner/ansible/ansible-architecture.mdx
new file mode 100644
index 0000000..ac0584d
--- /dev/null
+++ b/absolute-beginners/devops-beginner/ansible/ansible-architecture.mdx
@@ -0,0 +1,107 @@
+---
+title: "Ansible Architecture"
+sidebar_label: "2. How it Works"
+sidebar_position: 2
+description: "Understand the internal components of Ansible, including the Control Node, Managed Nodes, and the Push Model. Learn how Ansible uses SSH to communicate and execute tasks across your infrastructure."
+tags: ["ansible", "architecture", "control node", "managed nodes", "ssh", "push model"]
+keywords: ["ansible architecture", "control node", "managed nodes", "ssh communication", "push model"]
+---
+
+To master automation at **CodeHarborHub**, you must understand how Ansible communicates across a network. Unlike other tools that require a "Resident Agent" on every server, Ansible is **Agentless**. It sits on one machine and "talks" to others using standard protocols.
+
+:::info
+This lesson is crucial for understanding how Ansible operates under the hood. It will help you troubleshoot issues and optimize your automation workflows.
+:::
+
+## The Core Components
+
+Ansible’s architecture consists of four primary building blocks that work together to execute your "Industrial Level" automation.
+
+### 1. The Control Node
+This is the machine where Ansible is installed. It is the "Brain" of your operations.
+* **Requirements:** Any Unix-like machine (Linux, macOS). *Note: Windows cannot be a Control Node, but it can be a Managed Node.*
+* **Action:** This is where you write your Playbooks and run the `ansible-playbook` command.
+
+### 2. Managed Nodes (Hosts)
+These are the remote systems (Servers, Network Devices, or Containers) that you are managing with Ansible.
+* **Requirements:** They only need **Python** installed and an **SSH** connection.
+* **Action:** They receive instructions from the Control Node and execute them locally.
+
+### 3. Inventory
+A list of Managed Nodes. It tells Ansible "Who" to talk to.
+* It can be a simple static file (`hosts.ini`) or a dynamic script that pulls data from AWS or Azure.
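+
+As a quick sketch, a minimal static inventory groups hosts under bracketed labels (the IP addresses below are illustrative):
+
+```ini title="hosts.ini"
+[webservers]
+192.168.1.10
+192.168.1.11
+
+[dbservers]
+192.168.1.20
+```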
+
+### 4. Modules
+The "Tools" in the toolbox. Modules are small programs that Ansible pushes to the Managed Nodes to perform specific tasks (like installing a package or restarting a service).
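+
+For example, a single task invoking the `service` module asks Ansible to ensure Nginx is running — the module is pushed to the node, executed, then removed (a sketch; Playbooks are covered in detail in a later lesson):
+
+```yaml
+- name: Ensure Nginx is running
+  service:
+    name: nginx
+    state: started
+```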
+
+## The "Push" Model
+
+Most automation tools use a "Pull" model (where servers ask for updates). Ansible uses a **Push Model**.
+
+```mermaid
+sequenceDiagram
+ participant CN as Control Node (Your Laptop)
+ participant I as Inventory File
+ participant MN as Managed Node (AWS EC2)
+
+ CN->>I: Read list of IPs
+ CN->>MN: Establish SSH Connection
+ CN->>MN: Push "Module" (e.g., install_nginx.py)
+ MN->>MN: Execute Module locally
+ MN->>MN: Remove Module (Cleanup)
+ MN-->>CN: Return Result (Changed / OK / Failed)
+```
+
+## Architecture Features
+
+| Feature | Description | Why it matters? |
+| :--- | :--- | :--- |
+| **Agentless** | No software to update or manage on target servers. | Reduces security vulnerabilities and "resource bloat." |
+| **SSH Transport** | Uses standard OpenSSH for secure communication. | No need to open extra firewall ports. |
+| **Facts Engine** | Automatically discovers system info (OS, IP, CPU). | Allows you to write logic like "If OS is Ubuntu, use `apt`." |
+
+## How Modules Work (The Execution)
+
+When you run a task, Ansible doesn't just send a command string. It follows a professional execution lifecycle:
+
+1. **Connect:** Ansible opens an SSH connection to the Managed Node using your SSH keys.
+2. **Copy:** It copies the required **Python Module** to a temporary folder on the remote machine.
+3. **Execute:** It runs the Python script on the remote machine. This script checks the current state and makes changes if necessary.
+4. **Clean Up:** Once the task is done, Ansible deletes the temporary Python script, leaving the server clean.
+
+## Visualizing the Workflow
+
+```mermaid
+graph LR
+ subgraph "Control Machine"
+ P[Playbook.yml] --> E[Ansible Engine]
+ E --> Inv[Inventory]
+ E --> API[Modules API]
+ end
+
+ E -->|SSH| Web1[Web Server 1]
+ E -->|SSH| Web2[Web Server 2]
+ E -->|SSH| DB[Database Server]
+```
+
+:::info
+Because Ansible is agentless, you can start managing a server the **second** it finishes booting up. There is no "registration" or "handshake" process required.
+:::
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/ansible/ansible-variables-vault.mdx b/absolute-beginners/devops-beginner/ansible/ansible-variables-vault.mdx
new file mode 100644
index 0000000..ecc5439
--- /dev/null
+++ b/absolute-beginners/devops-beginner/ansible/ansible-variables-vault.mdx
@@ -0,0 +1,131 @@
+---
+title: "Variables and Ansible Vault"
+sidebar_label: "5. Variables & Security"
+sidebar_position: 5
+description: "Learn to make your Ansible playbooks dynamic with variables and secure with encrypted Vaults. Perfect for handling different environments and sensitive data!"
+tags: ["Ansible", "Variables", "Vault", "Secrets Management", "Best Practices"]
+keywords: ["Ansible Variables", "Ansible Vault", "Secrets Management", "Dynamic Playbooks", "Industrial Automation"]
+---
+
+Static playbooks are useful, but "Industrial Level" automation requires flexibility and security. In this guide, we will learn how to use **Variables** to handle different environments (Dev/Prod) and **Ansible Vault** to protect sensitive data.
+
+:::tip Why Variables and Vault?
+* **Variables** allow you to write reusable playbooks that can adapt to different scenarios without changing the code.
+* **Ansible Vault** ensures that sensitive information like passwords and API keys is encrypted and safe from prying eyes, even in version control.
+:::
+
+## 1. Using Variables
+
+Variables in Ansible allow you to write one playbook and use it for multiple purposes. Instead of hardcoding a version number or a username, you use a placeholder.
+
+### Where to Define Variables?
+Ansible has a specific "Precedence" (priority) for variables, but these are the most common places:
+
+1. **Playbook Level:** Directly inside the `.yml` file.
+2. **Inventory Level:** Inside your `hosts.ini`.
+3. **File Level:** In a dedicated `group_vars` or `host_vars` folder.
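+
+For instance, an inventory-level variable sits on the same line as the host entry (the value here is illustrative):
+
+```ini title="hosts.ini"
+[webservers]
+web1.codeharborhub.com node_port=3000
+```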
+
+```yaml title="Example: Playbook with Variables"
+---
+- name: Deploy CodeHarborHub App
+ hosts: webservers
+ vars:
+ app_version: "v2.0.4"
+ node_port: 3000
+
+ tasks:
+ - name: Start the application
+ command: "node app.js --port {{ node_port }}"
+```
+
+:::tip Syntax Note
+Always wrap variables in double curly braces `{{ var_name }}`. If a value *begins* with a variable, you must wrap the entire value in quotes: `"{{ var_name }}"`.
+:::
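+
+A short sketch of both quoting cases (the variable name is illustrative):
+
+```yaml
+tasks:
+  - name: Variable in the middle of a string (quotes optional)
+    command: "node app.js --port {{ node_port }}"
+
+  - name: Value begins with a variable (quotes required)
+    debug:
+      msg: "{{ node_port }}"
+```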
+
+## 2. Ansible Vault (Securing Secrets)
+
+At **CodeHarborHub**, we **never** push plain-text passwords, SSH keys, or SSL certificates to GitHub. **Ansible Vault** is a built-in feature that encrypts these files so they can be safely stored in version control.
+
+### Common Vault Operations
+
+| Action | Command |
+| :--- | :--- |
+| **Create** | `ansible-vault create secrets.yml` |
+| **Edit** | `ansible-vault edit secrets.yml` |
+| **Encrypt Existing** | `ansible-vault encrypt my_passwords.txt` |
+| **Decrypt** | `ansible-vault decrypt secrets.yml` |
+
+### How to use Vault in a Playbook
+
+1. Create an encrypted file `vars/secrets.yml`:
+ ```yaml title="Example: Encrypted Vault File"
+ db_password: "SuperSecretPassword123"
+ ```
+2. Reference it in your playbook:
+ ```yaml title="Example: Using Vault in Playbook"
+ - name: Setup Database
+ hosts: dbservers
+ vars_files:
+ - vars/secrets.yml
+ ```
+3. Run the playbook by providing the password:
+ ```bash title="Running Playbook with Vault"
+ ansible-playbook site.yml --ask-vault-pass
+ ```
+
+In this example, Ansible will prompt you for the vault password before it can read the encrypted variables. This way, you can safely store sensitive information in your repository without risking exposure.
+
+## 3. Facts: The Special Variables
+
+Ansible automatically discovers information about the Managed Node before running any tasks. These are called **Facts**.
+
+```mermaid
+graph LR
+ A[Control Node] -->|Setup Module| B[Managed Node]
+ B -->|Returns JSON| C[Facts: OS, IP, RAM, CPU]
+ C --> D[Use in Playbook: ansible_os_family]
+```
+
+**Example: Conditional Logic using Facts**
+
+```yaml title="Example: Using Facts in Playbook"
+- name: Install Web Server
+ apt:
+ name: apache2
+ state: present
+ when: ansible_os_family == "Debian"
+```
+
+## Comparison: Variables vs. Vault
+
+| Feature | Variables | Ansible Vault |
+| :--- | :--- | :--- |
+| **Visibility** | Plain text / Human readable. | Encrypted / Block of gibberish. |
+| **Purpose** | Configuration (Ports, Paths, Names). | Secrets (Passwords, Keys, Tokens). |
+| **Storage** | Committed directly to Git. | Committed to Git (but encrypted). |
+
+## Industrial Best Practice: `group_vars`
+
+Instead of cluttering your playbook, create a directory structure like this:
+
+```text title="Best Practice: group_vars Directory Structure"
+.
+├── inventory.ini
+├── playbook.yml
+└── group_vars/
+ ├── all.yml # Variables for all servers
+ ├── webservers.yml # Specific for web group
+ └── dbservers.yml # Specific for DB group
+```
+
+Ansible will **automatically** load these variables based on the group names in your inventory! This keeps your playbooks clean and organized, making it easier to manage large infrastructures.
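+
+For example, `group_vars/webservers.yml` might contain (values illustrative, reusing the variables from earlier in this lesson):
+
+```yaml title="group_vars/webservers.yml"
+node_port: 3000
+app_version: "v2.0.4"
+```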
+
+## Final Graduation Challenge
+
+1. Create a variable file named `user_config.yml`.
+2. Add a variable `username: chh_admin`.
+3. Create a playbook that creates a user on your local machine using `{{ username }}`.
+4. Now, encrypt `user_config.yml` using `ansible-vault`.
+5. Run the playbook and see how Ansible asks for the password before it can read the file!
+
+Congratulations! You've just learned how to make your Ansible playbooks dynamic with variables and secure with Vault. This is a crucial step towards becoming an "Industrial Level" DevOps Engineer at CodeHarborHub!
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/ansible/intro-to-ansible.mdx b/absolute-beginners/devops-beginner/ansible/intro-to-ansible.mdx
new file mode 100644
index 0000000..c874327
--- /dev/null
+++ b/absolute-beginners/devops-beginner/ansible/intro-to-ansible.mdx
@@ -0,0 +1,84 @@
+---
+title: "Introduction to Ansible"
+sidebar_label: "1. What is Ansible?"
+sidebar_position: 1
+description: "Learn the fundamentals of Ansible, the industry-standard agentless automation tool for configuration management. Understand its role in the DevOps lifecycle, its key features, and how it simplifies managing multiple servers. Perfect for Full-Stack Developers and DevOps Engineers at CodeHarborHub looking to streamline their infrastructure management."
+tags: ["ansible", "configuration management", "devops", "automation", "full-stack development"]
+keywords: ["Ansible", "Configuration Management", "DevOps", "Automation", "Agentless", "Idempotent", "YAML", "Playbooks", "SSH"]
+---
+
+As a **Full-Stack Developer** or **DevOps Engineer** at **CodeHarborHub**, you will eventually manage more than just one server. Imagine having to install Node.js, configure Nginx, and create users on **50 different AWS EC2 instances** manually.
+
+This is where **Ansible** comes in. Ansible is an open-source IT automation engine that automates provisioning, configuration management, and application deployment.
+
+## The "Furniture" Analogy
+
+To understand where Ansible fits in the DevOps lifecycle, compare it to building a house:
+
+* **Terraform:** Builds the "House" (The VPC, the Subnets, the empty EC2 instances).
+* **Ansible:** Installs the "Furniture" and "Utilities" (Installing Node.js, setting up the Database, adding SSH keys for the team).
+
+## Why Ansible? (The 3 Agentless Pillars)
+
+Ansible stands out from other tools like Chef or Puppet because of its simplicity and "Industrial Level" efficiency.
+
+| Feature | Explanation | Benefit |
+| :--- | :--- | :--- |
+| **Agentless** | No software to install on the target servers. | Less overhead and higher security. |
+| **Idempotent** | Only makes changes if the current state doesn't match the desired state. | Safe to run the same script 100 times. |
+| **YAML Based** | Uses "Playbooks" written in simple, human-readable YAML. | Easy for the whole team to read and edit. |
+
+## Understanding Idempotency
+
+This is the most critical concept in Ansible. If you tell Ansible to "Ensure Nginx is installed," it first checks the server.
+
+```mermaid
+graph TD
+ A[Run Playbook] --> B{Check Server State}
+ B -- "Nginx exists" --> C[Result: OK / No Change]
+ B -- "Nginx missing" --> D[Action: Install Nginx]
+ D --> E[Result: Changed]
+ C --> F[Next Task]
+ E --> F
+```
+
+In a manual script, running an "install" command twice might cause an error. In Ansible, it simply says **"OK"** and moves on.
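+
+As a small preview of later lessons, an idempotent "Ensure Nginx is installed" instruction is written declaratively rather than as a raw install command — Ansible decides whether any action is needed:
+
+```yaml
+- name: Ensure Nginx is installed
+  apt:
+    name: nginx
+    state: present
+```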
+
+## How it Connects: The SSH Secret
+
+Ansible doesn't use a special "calling" system. It uses **SSH (Secure Shell)**, the same tool you use to log into your servers manually.
+
+**Traditional Agent-Based Tools (Chef, Puppet):**
+
+* Require "Client" software installed on every server.
+* Require specific ports to be opened.
+* High maintenance as you scale.
+
+**Ansible (Agentless):**
+
+* Uses the existing SSH connection.
+* Works as soon as the server is launched.
+* **Push Model:** You push configurations from your laptop to the servers.
+
+
+## Essential Vocabulary
+
+Before we move to the next chapter, familiarize yourself with these terms:
+
+1. **Control Node:** The machine where Ansible is installed (usually your laptop or a CI/CD runner).
+2. **Managed Nodes:** The remote servers you are managing.
+3. **Inventory:** A simple list of IP addresses for your Managed Nodes.
+4. **Playbook:** The YAML file containing your list of automation tasks.
+
+:::info
+Ansible is perfect for **Full-Stack Developers**. You can write a single "Playbook" that sets up your entire MERN stack environment (Linux + Node.js + MongoDB + Nginx) in under 2 minutes.
+:::
+
+## Learning Challenge
+
+Think about a task you do repeatedly on your local Linux machine (like updating packages or cleaning logs). In the next few lessons, we will learn how to turn that manual process into a reusable **Ansible Task**.
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/ansible/inventory-management.mdx b/absolute-beginners/devops-beginner/ansible/inventory-management.mdx
new file mode 100644
index 0000000..8df6476
--- /dev/null
+++ b/absolute-beginners/devops-beginner/ansible/inventory-management.mdx
@@ -0,0 +1,148 @@
+---
+title: "Inventory Management with Ansible"
+sidebar_label: "3. Inventories"
+sidebar_position: 3
+description: "Learn how to organize your servers into logical groups using Ansible static and dynamic inventories. Master the art of inventory management for efficient automation."
+tags: ["ansible", "inventory", "devops", "automation", "codeharborhub"]
+keywords: ["ansible inventory", "static inventory", "dynamic inventory", "grouping servers", "ansible variables", "testing inventory"]
+---
+
+An **Inventory** is a simple file that tells Ansible which servers to manage. In a professional **CodeHarborHub** workflow, we don't just list IP addresses; we organize them into logical groups like `webservers`, `dbservers`, and `staging` vs `production`.
+
+:::info Why Inventory Management Matters
+Proper inventory management is crucial for scaling your automation. It allows you to target specific tasks to specific servers, manage different environments, and maintain a clear structure as your infrastructure grows.
+:::
+
+## Inventory Formats
+
+Ansible supports two primary formats for static inventories: **INI** (the classic way) and **YAML** (the modern, structured way).
+
+### INI Format
+
+Easy to read and great for beginners, but can get messy with large inventories.
+
+```ini title="hosts.ini"
+[webservers]
+web1.codeharborhub.com
+13.233.10.45 ansible_user=ubuntu
+
+[dbservers]
+db-primary.internal
+
+[india:children]
+webservers
+dbservers
+```
+
+### YAML Format
+
+Best for complex configurations and nested groups, but has a steeper learning curve.
+
+```yaml title="hosts.yaml"
+all:
+ children:
+ webservers:
+ hosts:
+ web1.codeharborhub.com:
+ 13.233.10.45:
+ ansible_user: ubuntu
+ dbservers:
+ hosts:
+ db-primary.internal:
+```
+
+## Grouping & Hierarchy
+
+Grouping allows you to target specific tasks to specific servers. For example, you only want to install **Node.js** on your web servers, not your database servers.
+
+### Parent and Child Groups
+
+You can nest groups to create a hierarchy. This is "Industrial Level" practice for managing multiple environments.
+
+```mermaid
+graph TD
+ A[All Servers] --> B[Production]
+ A --> C[Staging]
+ B --> D[Web Servers]
+ B --> E[DB Servers]
+ C --> F[Testing Node]
+```
+
+```ini title="hosts.ini"
+[prod_web]
+web-01.prod.com
+web-02.prod.com
+
+[prod_db]
+db-01.prod.com
+
+[production:children]
+prod_web
+prod_db
+```
+
+In this example, `production` is a parent group that includes both `prod_web` and `prod_db`. You can run tasks on all production servers or target just the web or database servers.
+
+## Inventory Variables
+
+Sometimes, servers in the same group need different settings (e.g., a different SSH port or a specific API key). You can define these directly in the inventory.
+
+| Variable | Purpose |
+| :--- | :--- |
+| `ansible_host` | The actual IP/FQDN if the alias is different. |
+| `ansible_port` | The SSH port (default is 22). |
+| `ansible_user` | The username to log in with (e.g., `ubuntu` or `ec2-user`). |
+| `ansible_ssh_private_key_file` | Path to your `.pem` or `.pub` key. |
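+
+These variables sit on the same line as the host in an INI inventory (all values below are illustrative):
+
+```ini
+[webservers]
+web-01 ansible_host=13.233.10.45 ansible_port=2222 ansible_user=ubuntu ansible_ssh_private_key_file=~/.ssh/chh-key.pem
+```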
+
+
+## Static vs. Dynamic Inventory
+
+At **CodeHarborHub**, we use different inventory types based on the project size.
+
+| Type | Best For... | Example |
+| :--- | :--- | :--- |
+| **Static** | Small projects, fixed IPs, or local labs. | `hosts.ini` file. |
+| **Dynamic** | Cloud environments (AWS, Azure, GCP). | A Python script that asks AWS: "Give me all running EC2 instances." |
+
+:::info The AWS Plugin
+In a real job, you won't manually type IP addresses. You will use the `aws_ec2` plugin, which automatically updates your inventory every time a new server is launched in your VPC.
+:::
+
+## Testing Your Inventory
+
+Once you've created your `hosts.ini` file, use the **Ad-Hoc** command to test connectivity to all servers in a group:
+
+```bash
+# Syntax: ansible <group> -i <inventory_file> -m <module_name>
+ansible webservers -i hosts.ini -m ping
+```
+
+**Expected Output:**
+
+```json
+13.233.10.45 | SUCCESS => {
+ "changed": false,
+ "ping": "pong"
+}
+```
+
+If you see `SUCCESS`, it means Ansible can communicate with your server. If you get `UNREACHABLE`, check your SSH settings, firewall rules, and inventory configuration.
+
+:::info Best Practices & Tips
+* Always use descriptive group names (e.g., `prod_web` instead of just `web`).
+* Keep your inventory organized and version-controlled (e.g., in Git).
+* Use variables to avoid hardcoding sensitive information in your playbooks.
+* Regularly test your inventory to ensure all servers are reachable before running playbooks.
+:::
+
+## Learning Challenge
+
+1. Create a file named `my_hosts.ini`.
+2. Add your local machine as a host (use `127.0.0.1 ansible_connection=local`).
+3. Create a group named `[local]`.
+4. Run `ansible local -i my_hosts.ini -m setup` to see all the "Facts" Ansible can discover about your computer!
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/ansible/playbooks-and-tasks.mdx b/absolute-beginners/devops-beginner/ansible/playbooks-and-tasks.mdx
new file mode 100644
index 0000000..c58c39d
--- /dev/null
+++ b/absolute-beginners/devops-beginner/ansible/playbooks-and-tasks.mdx
@@ -0,0 +1,160 @@
+---
+title: "Playbooks and Tasks"
+sidebar_label: "4. Playbooks & Tasks"
+sidebar_position: 4
+description: "Master the heart of Ansible automation—writing idempotent YAML playbooks to configure your servers. Learn how to structure plays, use essential modules, and run your first playbook with confidence."
+tags: ["ansible", "playbooks", "tasks", "devops", "automation", "codeharborhub"]
+keywords: ["ansible playbooks", "ansible tasks", "idempotent automation", "ansible modules", "running playbooks", "ansible best practices"]
+---
+
+If an **Inventory** is the "Address Book" of your servers, a **Playbook** is the "Instruction Manual." Playbooks are where you define the **Desired State** of your infrastructure using a simple, human-readable language called **YAML**.
+
+At **CodeHarborHub**, we use Playbooks to ensure that every server in our cluster is configured exactly the same way, every time.
+
+:::info Why Playbooks Matter
+Playbooks allow you to automate complex tasks across multiple servers with a single command. They are idempotent, meaning you can run them multiple times without causing unintended changes. This makes them essential for maintaining consistency and reliability in your infrastructure.
+:::
+
+## The Anatomy of a Playbook
+
+A Playbook consists of one or more **Plays**. A Play maps a group of hosts to a list of **Tasks**.
+
+### A Standard Web Server Playbook
+
+Create a file named `setup-webserver.yml`:
+
+```yaml title="setup-webserver.yml"
+---
+- name: Configure CodeHarborHub Frontend
+ hosts: webservers
+ become: yes # Run as sudo/root
+
+ tasks:
+ - name: Ensure Nginx is installed
+ apt:
+ name: nginx
+ state: present
+
+ - name: Start Nginx service
+ service:
+ name: nginx
+ state: started
+ enabled: yes
+```
+
+In this example, we have a single Play that targets the `webservers` group. It has two tasks: one to install Nginx and another to start the Nginx service. Notice how we use the `apt` module to manage packages and the `service` module to manage services. Each task has a descriptive name, making it easy to understand what the playbook does at a glance.
+
+:::info
+Use `become: yes` for tasks that require elevated privileges (like installing software or modifying system configuration). This lets your playbook escalate to root even when you connect as a regular, non-root user.
+:::
+
+## Understanding Tasks and Modules
+
+A **Task** is the smallest unit of action in Ansible. Every task calls an **Ansible Module**.
+
+| Component | Purpose | Example |
+| :--- | :--- | :--- |
+| **Name** | Describes what the task does (shown in logs). | `name: Install Node.js` |
+| **Module** | The specialized tool used for the task. | `apt`, `yum`, `copy`, `git`. |
+| **Arguments** | The specific settings for that module. | `name: nodejs`, `state: latest`. |
+
+## The Idempotent Execution Flow
+
+When you run a Playbook, Ansible executes tasks sequentially. For each task, it reports the status back to you.
+
+```mermaid
+graph TD
+ A[Start Playbook] --> B[Task 1: Install Git]
+ B --> C{Is Git Installed?}
+ C -- No --> D[Action: Install]
+ C -- Yes --> E[Status: OK / No Change]
+ D --> F[Status: Changed]
+ F --> G[Task 2: Clone Repo]
+ E --> G
+```
+
+## Essential Modules for Full-Stack Devs
+
+To build an "Industrial Level" MERN stack environment, you will frequently use these modules:
+
+### Installing Packages
+
+**Modules:** `apt` (Ubuntu/Debian) or `yum` (CentOS/RHEL).
+
+```yaml title="install-nodejs.yml"
+- name: Install Node.js
+ apt:
+ name: nodejs
+ state: present
+ update_cache: yes
+```
+
+### Copying Files
+
+**Modules:** `copy` (Transfer local files) or `template` (Dynamic files).
+
+```yaml title="upload-nginx-config.yml"
+- name: Upload Nginx Config
+ copy:
+ src: ./nginx.conf
+ dest: /etc/nginx/sites-available/default
+```
+
+### Deploying Code
+
+**Module:** `git`.
+
+```yaml title="deploy-code.yml"
+- name: Pull latest code from CodeHarborHub
+ git:
+ repo: 'https://github.com/codeharborhub/tutorial.git'
+ dest: /var/www/html
+ version: main
+```
+
+## Running Your First Playbook
+
+Once your YAML file is ready, use the `ansible-playbook` command:
+
+```bash
+# Syntax: ansible-playbook -i <inventory_file> <playbook.yml>
+ansible-playbook -i hosts.ini setup-webserver.yml
+```
+
+:::tip The "Dry Run"
+
+Before making real changes to your production servers, always run a simulation:
+
+```bash
+ansible-playbook -i hosts.ini setup-webserver.yml --check
+```
+
+*This will show you exactly what **would** change without actually touching the servers.*
+
+Ansible will report each task as `OK` (already in desired state), `CHANGED` (action taken), or `FAILED` (something went wrong). This feedback loop is crucial for debugging and ensuring your playbooks work as intended.
+
+:::
+
+## Best Practices for Tasks
+
+1. **Meaningful Names:** Always start your task names with a verb (e.g., "Install...", "Configure...", "Verify...").
+2. **Use Handlers:** If you change a configuration file, use a `handler` to restart the service only when a change is detected.
+3. **State Matters:** Be explicit about the desired state: use `state: present` (ensure it exists) or `state: latest` (upgrade if possible), and reserve `state: absent` for deliberately uninstalling software.
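+
+A sketch of the handler pattern from rule 2 (file paths are illustrative): `notify` queues the handler, which runs once at the end of the play only if the copy task reports `changed`.
+
+```yaml
+tasks:
+  - name: Upload Nginx config
+    copy:
+      src: ./nginx.conf
+      dest: /etc/nginx/nginx.conf
+    notify: Restart Nginx
+
+handlers:
+  - name: Restart Nginx
+    service:
+      name: nginx
+      state: restarted
+```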
+
+## Learning Challenge
+
+1. Create a playbook that installs `htop` and `curl` on your local machine.
+2. Add a task to create a directory named `/tmp/codeharborhub-test`.
+3. Run the playbook twice. Observe how the second run shows "OK" (no changes) because the tasks are idempotent!
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/github-actions/_category_.json b/absolute-beginners/devops-beginner/github-actions/_category_.json
new file mode 100644
index 0000000..4888125
--- /dev/null
+++ b/absolute-beginners/devops-beginner/github-actions/_category_.json
@@ -0,0 +1,13 @@
+{
+ "label": "GitHub Actions",
+ "position": 10,
+ "link": {
+ "type": "generated-index",
+ "title": "GitHub Actions Automation",
+ "description": "Learn to automate your build, test, and deployment pipeline. Master CI/CD to ensure CodeHarborHub projects are always production-ready. Discover how to create workflows, manage secrets, and integrate with other tools to streamline your development process."
+ },
+ "customProps": {
+ "icon": "🚀",
+ "status": "Essential"
+ }
+}
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/github-actions/ci-cd-for-mern.mdx b/absolute-beginners/devops-beginner/github-actions/ci-cd-for-mern.mdx
new file mode 100644
index 0000000..fa108a3
--- /dev/null
+++ b/absolute-beginners/devops-beginner/github-actions/ci-cd-for-mern.mdx
@@ -0,0 +1,124 @@
+---
+title: "CI/CD for MERN Stack"
+sidebar_label: "4. MERN Automation"
+sidebar_position: 4
+description: "Learn how to automate testing and building for MongoDB, Express, React, and Node.js applications. This guide walks you through creating a GitHub Actions workflow that handles both the frontend and backend of your MERN stack project, ensuring your code is always production-ready with every push to GitHub."
+tags: ["GitHub Actions", "CI/CD", "Automation", "DevOps", "CodeHarborHub", "MERN stack"]
+keywords: ["GitHub Actions", "CI/CD", "Continuous Integration", "Continuous Delivery", "Automation", "DevOps", "CodeHarborHub", "MERN stack", "MongoDB", "Express", "React", "Node.js"]
+---
+
+Building a **MERN (MongoDB, Express, React, Node.js)** application is one thing; ensuring it works perfectly every time you update it is another. In a professional environment like **CodeHarborHub**, we use GitHub Actions to automate the testing and building of both the **Frontend** and the **Backend**.
+
+:::info Why Automate MERN?
+MERN projects have multiple moving parts. You want to make sure that your React frontend doesn't break when you update your Node.js backend, and vice versa. Automation ensures that every change is tested and built correctly before it reaches production.
+:::
+
+## The MERN Pipeline Strategy
+
+In a MERN project, your repository usually has two main folders: `frontend/` and `backend/`. Our automation needs to handle both.
+
+**Typically, a MERN CI/CD pipeline will have three main stages:**
+
+1. **Dependency Install:** Download `node_modules` for both React and Node.js.
+2. **Lint & Test:** Check for syntax errors and run Unit Tests (using Jest or Mocha).
+3. **Build:** Create the production-ready "dist" or "build" folder for the frontend.
+
+
+## Creating the MERN Workflow
+
+Create a file named `.github/workflows/mern-ci.yml`. This workflow uses **Jobs** to keep the backend and frontend tasks organized.
+
+```yaml title="mern-ci.yml"
+name: MERN Stack CI
+
+on:
+ push:
+ branches: [ main ]
+ pull_request:
+ branches: [ main ]
+
+jobs:
+ # JOB 1: Backend Testing
+ backend-tests:
+ runs-on: ubuntu-latest
+ defaults:
+ run:
+ working-directory: ./backend # Tell GitHub to run commands inside 'backend' folder
+ steps:
+ - uses: actions/checkout@v4
+ - name: Setup Node.js
+ uses: actions/setup-node@v4
+ with:
+ node-version: '20'
+ cache: 'npm' # Speeds up future builds!
+
+ - run: npm install
+ - run: npm test
+
+ # JOB 2: Frontend Build
+ frontend-build:
+ runs-on: ubuntu-latest
+ defaults:
+ run:
+ working-directory: ./frontend
+ steps:
+ - uses: actions/checkout@v4
+ - name: Setup Node.js
+ uses: actions/setup-node@v4
+ with:
+ node-version: '20'
+ cache: 'npm'
+
+ - run: npm install
+ - run: npm run build
+```
+
+## Parallel vs. Sequential Execution
+
+By default, GitHub Actions runs `backend-tests` and `frontend-build` at the **same time** (Parallel). This is the "Industrial Standard" because it saves time.
+
+```mermaid
+graph LR
+ A[Push Code] --> B[Job: Backend Tests]
+ A --> C[Job: Frontend Build]
+ B --> D{All Pass?}
+ C --> D
+ D -->|Yes| E[Deploy to Production]
+ D -->|No| F[Stop & Notify Developer]
+```
+
+## Handling Environment Variables (.env)
+
+In your local MERN app, you use a `.env` file for your `MONGODB_URI`. **Never** commit that file to GitHub!
+
+Instead, for your CI/CD tests, you can provide "dummy" variables directly in the YAML:
+
+```yaml title="mern-ci.yml"
+- name: Run Backend Tests
+ run: npm test
+ env:
+ MONGODB_URI: mongodb://localhost:27017/test-db
+ JWT_SECRET: codeharborhub_secret_key
+```
+
+## Performance Tip: Caching
+
+MERN projects have massive `node_modules` folders. Without caching, your workflow might take 5 minutes. With caching, it can drop to 1 minute!
+
+Notice the `cache: 'npm'` line in our workflow above? That tells GitHub:
+*"If the `package-lock.json` hasn't changed, reuse the modules from the last time we ran this."*
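+
+If you need more control than `setup-node`'s built-in cache, the standalone `actions/cache` action expresses the same idea explicitly — a sketch following GitHub's documented npm-caching pattern:
+
+```yaml
+- name: Cache npm downloads
+  uses: actions/cache@v4
+  with:
+    path: ~/.npm
+    key: ${{ runner.os }}-node-${{ hashFiles('**/package-lock.json') }}
+```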
+
+## Professional "MERN" Rules
+
+| Rule | Why? |
+| :--- | :--- |
+| **Separate Folders** | Use `working-directory` so your frontend tests don't try to run in the backend folder. |
+| **Node Versioning** | Always match the Node version in your workflow to your local development version. |
+| **Status Badges** | Add a "Build Passing" badge to your `README.md` to show off your professional automation! |
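+
+A badge for the workflow above follows GitHub's standard badge URL pattern (replace `OWNER/REPO` with your own repository):
+
+```markdown
+![MERN Stack CI](https://github.com/OWNER/REPO/actions/workflows/mern-ci.yml/badge.svg)
+```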
+
+
+:::info Industrial Level Bonus: Deployment
+Once your tests and builds are successful, you can add a third job to deploy your MERN app automatically. For example, you can deploy the backend to **AWS EC2** and the frontend to **Vercel** or **Netlify** with just a few extra steps in your workflow.
+
+If you are using **Docker** for your MERN app, your GitHub Action can also build a **Docker Image** and push it to **Docker Hub** automatically after the tests pass. This is how real-world MERN applications are deployed at scale!
+:::
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/github-actions/creating-first-workflow.mdx b/absolute-beginners/devops-beginner/github-actions/creating-first-workflow.mdx
new file mode 100644
index 0000000..9f75914
--- /dev/null
+++ b/absolute-beginners/devops-beginner/github-actions/creating-first-workflow.mdx
@@ -0,0 +1,121 @@
+---
+title: "Creating Your First Workflow"
+sidebar_label: "3. First Workflow"
+sidebar_position: 3
+description: "Step-by-step guide to building your first automated pipeline with GitHub Actions. Learn how to set up a simple workflow that runs tests and checks your code environment every time you push to GitHub. Perfect for absolute beginners looking to get hands-on experience with CI/CD automation in their CodeHarborHub projects."
+tags: ["GitHub Actions", "CI/CD", "Automation", "DevOps", "CodeHarborHub"]
+keywords: ["GitHub Actions", "CI/CD", "Continuous Integration", "Automation", "DevOps", "CodeHarborHub", "MERN stack"]
+---
+
+Welcome to the hands-on part of the **CodeHarborHub** DevOps track! If you've ever felt the stress of "I hope I didn't break anything" before pushing code, this lesson is for you.
+
+We are going to build a **CI (Continuous Integration)** workflow that automatically greets you and checks your code environment every time you push to GitHub.
+
+:::info Why This Workflow?
+This is a simple, beginner-friendly workflow that demonstrates the core concepts of GitHub Actions. It will help you understand how to structure your YAML files, use pre-built actions, and run shell commands in an automated environment. Plus, it's a fun way to see automation in action!
+:::
+
+## The "Chef" Analogy
+
+Think of a GitHub Action Workflow like a **Cooking Recipe**:
+
+| Technical Term | Recipe Equivalent | What it does |
+| :--- | :--- | :--- |
+| **Event** | Someone orders food | The trigger that starts the process. |
+| **Runner** | The Kitchen | The environment where the work happens. |
+| **Job** | The Chef | A specific worker assigned to a task. |
+| **Step** | A Recipe Instruction | A single action (e.g., "Boil water"). |
+| **Action** | A Pre-made Sauce | A reusable component that performs a common task (e.g., "Use pre-made tomato sauce"). |
+
+## Step 1: Preparing the Kitchen
+
+GitHub looks for workflows in a very specific folder. If you don't put them here, they won't run!
+
+1. Open your project in VS Code.
+2. Create a folder named `.github` (don't forget the dot!).
+3. Inside `.github`, create another folder named `workflows`.
+4. Inside `workflows`, create a file named `hello-world.yml`.
+
+## Step 2: Writing the YAML Code
+
+Copy and paste this code into your `hello-world.yml` file. Don't worry—we will break down exactly what each line does below.
+
+```yaml title="hello-world.yml"
+# The name of your automation
+name: CodeHarborHub First Automation
+
+# When should this run? (The Trigger)
+on: [push]
+
+# What should it actually do?
+jobs:
+ say-hello:
+ # Use a fresh Ubuntu Linux server provided by GitHub
+ runs-on: ubuntu-latest
+
+ steps:
+ # Step 1: Download the code from your repo onto the runner
+ - name: Checkout Repository
+ uses: actions/checkout@v4
+
+ # Step 2: Run a simple terminal command
+ - name: Greet the Developer
+ run: echo "Hello CodeHarborHub Learner! Your automation is working! 🚀"
+
+ # Step 3: Check the environment version
+ - name: Check Node Version
+ run: node -v
+```
+
+## Step 3: Understanding the "Why"
+
+Let's look at the "Industrial Level" logic behind these lines:
+
+### The `on: [push]` Trigger
+
+This tells GitHub: "The moment someone pushes code to *any* branch, start this engine." In professional settings, we often change this to `on: [pull_request]` so we only run tests when someone wants to merge code.
+
+### The `uses: actions/checkout@v4`
+
+This is a **Pre-built Action**. Imagine you are a chef, and instead of farming the wheat yourself, you just buy flour. This action "buys the flour" by automatically cloning your code into the virtual machine so the next steps can use it.
+
+### The `run:` command
+
+This is exactly like typing a command into your computer's Terminal or Command Prompt. Anything you can do in a terminal, you can do here! In a real-world scenario, this is where you would run your tests (`npm test`), build your app (`npm run build`), or deploy to a server.
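+
+For longer tasks, YAML's `|` block syntax lets a single step run several terminal commands in order:
+
+```yaml
+- name: Test and Build
+  run: |
+    npm install
+    npm test
+    npm run build
+```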
+
+## Visualizing the Execution
+
+Once you push this file to GitHub, here is what happens behind the scenes:
+
+```mermaid
+sequenceDiagram
+ participant U as You (Git Push)
+ participant G as GitHub Registry
+ participant R as Ubuntu Runner (The VM)
+
+ U->>G: Push .github/workflows/hello-world.yml
+ G->>R: Spin up fresh Ubuntu VM
+ R->>R: Step 1: Checkout Code
+ R->>R: Step 2: Echo "Hello..."
+ R->>R: Step 3: node -v
+ R-->>G: Report "Success" (Green Checkmark ✅)
+```
+
+## Step 4: Seeing it in Action
+
+1. **Commit and Push:** Run `git add .`, `git commit -m "Add first workflow"`, and `git push`.
+2. **Go to GitHub:** Open your repository in your browser.
+3. **Click the "Actions" Tab:** You will see a yellow circle (running) or a green checkmark (finished).
+4. **Click the Workflow:** Click on "CodeHarborHub First Automation" to see the logs. You can expand each step to see the output!
+
+## Common Mistakes for Beginners
+
+ * **Indentation Matters:** YAML is very picky. If your `steps:` is not indented correctly under `jobs:`, the workflow will fail. Always use **spaces**, never tabs.
+ * **Mistyping the Folder Name:** Ensure it is `.github/workflows`. If you name it `.github/workflow` (singular), it will not work.
+ * **Case Sensitivity:** `on: push` is different from `On: Push`. Always use lowercase for keywords.
+
+:::tip Tip for Absolute Beginners
+Don't worry if it doesn't work the first time! Check the logs in the "Actions" tab to see what went wrong. The error messages are usually very descriptive and will guide you to the fix. This is how professional developers debug their CI/CD pipelines!
+
+In the industrial world, we use these logs to debug **MERN** applications. If your frontend build fails, the logs here will tell you exactly which line of code caused the error! It's like having a detective's magnifying glass to find the culprit in your code.
+:::
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/github-actions/github-actions-concepts.mdx b/absolute-beginners/devops-beginner/github-actions/github-actions-concepts.mdx
new file mode 100644
index 0000000..547c0cf
--- /dev/null
+++ b/absolute-beginners/devops-beginner/github-actions/github-actions-concepts.mdx
@@ -0,0 +1,134 @@
+---
+title: "GitHub Actions Core Concepts"
+sidebar_label: "2. Core Concepts"
+sidebar_position: 2
+description: "Understand the fundamental building blocks of GitHub Actions automation, from Workflows to Runners. Learn how to structure your CI/CD pipelines effectively and follow best practices to ensure your CodeHarborHub projects are always production-ready."
+tags: ["GitHub Actions", "CI/CD", "Automation", "DevOps", "CodeHarborHub"]
+keywords: ["GitHub Actions", "CI/CD", "Continuous Integration", "Continuous Delivery", "Automation", "DevOps", "CodeHarborHub"]
+---
+
+To build professional automation at **CodeHarborHub**, you need to speak the language of GitHub Actions. It isn't just about "running scripts"; it's about orchestrating a series of events across virtual environments.
+
+## The Automation Anatomy
+
+A GitHub Action is structured like a Russian Nesting Doll (Matryoshka). Each layer lives inside another.
+
+```mermaid
+graph TD
+ A[Event: Push/PR] --> B[Workflow: main.yml]
+ subgraph "Inside the Workflow"
+ B --> C[Job 1: Run Tests]
+ B --> D[Job 2: Build & Deploy]
+ C --> E[Step 1: Checkout Code]
+ C --> F[Step 2: Setup Node.js]
+ C --> G[Step 3: npm test]
+ end
+```
+
+In this example, a single push or pull request event triggers the workflow, which contains two jobs. Each job is in turn made up of ordered steps, such as checking out the code, setting up Node.js, and running the tests.
+
+### 1. Workflow
+
+The highest level of organization. A workflow is an automated process that you add to your repository. It is defined by a **YAML** file in your `.github/workflows` directory.
+* *Example:* `production-deploy.yml` or `unit-tests.yml`.
+
+### 2. Events
+
+An event is a specific activity in a repository that triggers a workflow run.
+* **Webhook events:** `push`, `pull_request`, `create` (new branch).
+* **Scheduled events:** `cron` (e.g., run backups every night at 12 AM).
+* **Manual events:** `workflow_dispatch` (a button you click to run the script).
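+
+All three trigger styles can live in one workflow. A minimal sketch:
+
+```yaml
+on:
+  push:                  # Webhook event
+  schedule:
+    - cron: '0 0 * * *'  # Scheduled event: every night at 12 AM UTC
+  workflow_dispatch:     # Manual event: the "Run workflow" button
+```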
+
+### 3. Jobs
+
+A job is a set of **steps** that execute on the same **runner**.
+* By default, multiple jobs in a workflow run in **parallel** (at the same time).
+* You can make jobs dependent on each other (e.g., Don't "Deploy" until "Test" is finished).
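+
+That dependency is declared with `needs:`. A minimal sketch:
+
+```yaml
+jobs:
+  test:
+    runs-on: ubuntu-latest
+    steps:
+      - run: npm test
+  deploy:
+    needs: test   # "Deploy" starts only after "Test" succeeds
+    runs-on: ubuntu-latest
+    steps:
+      - run: echo "Deploying..."
+```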
+
+### 4. Steps
+
+A step is an individual task. It can be a shell command (`run`) or an action (`uses`). All steps in a job run sequentially on the same runner.
+
+### 5. Actions
+
+An action is a standalone application that performs a complex but frequently repeated task. You "use" them to reduce the amount of code you write.
+* *Example:* `actions/checkout@v4` (Clones your code into the runner).
+
+## How They Connect (The Logic Flow)
+
+```mermaid
+graph TD
+ A[Event: Push to Main] --> B{Workflow}
+
+ subgraph "Job: Build & Test (Runner: Ubuntu)"
+ B --> J1[Step 1: Checkout Code]
+ J1 --> J2[Step 2: Install Node.js]
+ J2 --> J3[Step 3: Run npm test]
+ end
+
+ subgraph "Job: Deploy (Runner: Ubuntu)"
+ B --> D1[Step 1: Login to AWS]
+ D1 --> D2[Step 2: Sync S3 Bucket]
+ end
+
+ J3 -.->|Needs Success| D1
+```
+
+## The Runner: Where the Magic Happens
+
+A **Runner** is a server that has the GitHub Actions runner application installed. It listens for available jobs, runs the steps, and reports the progress back to GitHub.
+
+### GitHub-Hosted Runners
+
+ * **Managed by:** GitHub.
+ * **OS:** Ubuntu Linux, Windows, or macOS.
+ * **Clean Slate:** Every time a job runs, you get a fresh, clean virtual machine.
+ * **Best For:** Most CodeHarborHub projects and standard MERN apps.
+
+### Self-Hosted Runners
+
+ * **Managed by:** You (on your own server or EC2).
+ * **Customization:** You can pre-install large dependencies to save time.
+ * **Security:** Good for accessing private data centers.
+ * **Best For:** Large-scale industrial apps with specific hardware needs.
+
+## Understanding the YAML Syntax
+
+A typical **CodeHarborHub** configuration looks like this. Notice the clear, indented structure:
+
+```yaml title="ci-pipeline.yml"
+name: CI-Pipeline # 1. The Workflow Name
+on: [push] # 2. The Trigger (Event)
+
+jobs: # 3. List of Jobs
+ test-app: # Job ID
+ runs-on: ubuntu-latest # 4. The Runner Environment
+
+ steps: # 5. List of Steps
+ - name: Get Code
+ uses: actions/checkout@v4
+
+ - name: Install dependencies
+ run: npm install # 6. Standard Shell Command
+```
+
+## Industrial Level Best Practices
+
+| Concept | Professional Tip |
+| :--- | :--- |
+| **Timeouts** | Always set a `timeout-minutes` for your jobs so a stuck test doesn't waste your minutes. |
+| **Caching** | Use the `actions/cache` action to remember your `node_modules`. This can make your builds several times faster! |
+| **Matrix Strategy** | Use a `matrix` to test your app on Node 18, 20, and 22 simultaneously to ensure compatibility. |
+| **Secrets Management** | Store sensitive data (API keys, passwords) in GitHub Secrets and reference them in your workflow. |
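+
+The timeout and matrix tips can be sketched together in one job (a hedged example, not a full pipeline):
+
+```yaml
+jobs:
+  test-app:
+    runs-on: ubuntu-latest
+    timeout-minutes: 10          # Stop a stuck job before it wastes minutes
+    strategy:
+      matrix:
+        node-version: [18, 20, 22]
+    steps:
+      - uses: actions/checkout@v4
+      - uses: actions/setup-node@v4
+        with:
+          node-version: ${{ matrix.node-version }}
+      - run: npm test
+```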
+
+:::info Did you know?
+You can also use **GitHub Environments** to set up different deployment targets (e.g., staging vs production) with specific secrets and approval rules. This adds an extra layer of control to your deployment process.
+
+At **CodeHarborHub**, we recommend starting with a simple "Test" workflow. Once you see that green checkmark appearing on your Pull Requests, you'll never want to go back to manual testing!
+:::
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/github-actions/intro-to-github-actions.mdx b/absolute-beginners/devops-beginner/github-actions/intro-to-github-actions.mdx
new file mode 100644
index 0000000..cae507e
--- /dev/null
+++ b/absolute-beginners/devops-beginner/github-actions/intro-to-github-actions.mdx
@@ -0,0 +1,121 @@
+---
+title: "Introduction to GitHub Actions"
+sidebar_label: "1. What is GitHub Actions?"
+sidebar_position: 1
+description: "Learn the fundamentals of CI/CD and how to automate your CodeHarborHub projects using GitHub Actions. Discover the core components, best practices, and how to set up your first workflow to ensure your MERN stack applications are always production-ready."
+tags: ["GitHub Actions", "CI/CD", "Automation", "DevOps", "CodeHarborHub"]
+keywords: ["GitHub Actions", "CI/CD", "Continuous Integration", "Continuous Delivery", "Automation", "DevOps", "CodeHarborHub", "MERN stack"]
+---
+
+At **CodeHarborHub**, we believe developers should spend their time writing code, not manually running tests or uploading files to servers. **GitHub Actions** is the engine that makes this possible.
+
+It is a **Continuous Integration and Continuous Delivery (CI/CD)** platform that allows you to automate your build, test, and deployment pipeline right from your GitHub repository.
+
+:::info Fun Fact
+GitHub Actions was launched in 2018 and has since become one of the most popular CI/CD tools in the developer community, with millions of workflows running every day.
+:::
+
+## The CI/CD Philosophy
+
+The core idea behind GitHub Actions is to automate the software development lifecycle. This means that every time you make a change to your code, you can automatically run tests, build your application, and even deploy it without lifting a finger.
+
+:::info Why CI/CD?
+CI/CD helps catch bugs early, ensures consistent builds, and allows you to deliver features to users faster. It's like having a robot assistant that takes care of the repetitive tasks, so you can focus on writing amazing code.
+:::
+
+To understand GitHub Actions, you must understand the two halves of the automation coin:
+
+### 1. Continuous Integration (CI)
+Every time you `git push` your MERN stack code, the CI pipeline automatically:
+* Installs dependencies (`npm install`).
+* Runs your test suite (`npm test`).
+* Checks for code linting errors.
+* **Goal:** Catch bugs before they reach the main branch.
+
+### 2. Continuous Delivery (CD)
+Once the tests pass, the CD pipeline automatically:
+* Builds the production version of your app (`npm run build`).
+* Deploys the code to **AWS EC2**, **Vercel**, or **S3**.
+* **Goal:** Deliver features to users as fast as possible.
+
+## The Core Components
+
+GitHub Actions uses a specific hierarchy to organize automation. Think of it as a "Recipe" for your code.
+
+```mermaid
+graph TD
+ A[Event: Push/PR] --> B[Workflow: main.yml]
+ subgraph "Inside the Workflow"
+ B --> C[Job 1: Run Tests]
+ B --> D[Job 2: Build & Deploy]
+ C --> E[Step 1: Checkout Code]
+ C --> F[Step 2: Setup Node.js]
+ C --> G[Step 3: npm test]
+ end
+```
+
+| Component | What it is | Analogy |
+| :--- | :--- | :--- |
+| **Workflow** | The entire automated process (`.yml` file). | The Cookbook. |
+| **Event** | The trigger (Push, Pull Request, Schedule). | The Hunger (Why you start cooking). |
+| **Job** | A set of steps running on the same server. | A Chef in the kitchen. |
+| **Step** | An individual task (command or action). | A single instruction (e.g., "Chop onions"). |
+| **Runner** | The virtual server (Ubuntu/Windows) running the code. | The Kitchen itself. |
+
+## Why Developers Love It
+
+
+### Built Right Into GitHub
+
+No need to set up external tools like Jenkins or CircleCI. Everything happens inside your "Actions" tab in the repository. It's like having a built-in kitchen in your house!
+
+### Reusable Marketplace Actions
+
+Don't reinvent the wheel! Use pre-built "Actions" created by the community for common tasks like setting up Docker, sending Slack notifications, or deploying to AWS. It's like having a pantry stocked with ready-to-use ingredients.
+
+### Matrix Testing Across Platforms
+
+Automatically test your **CodeHarborHub** app across multiple versions of Node.js (18, 20, 22) and multiple Operating Systems (Linux, macOS, Windows) simultaneously. It's like having multiple chefs working on the same recipe to ensure it tastes good for everyone.
+
+
+
+
+## Visualizing a Production Workflow
+
+This is how a typical **CodeHarborHub** industrial-level pipeline behaves when a developer submits a Pull Request:
+
+```mermaid
+sequenceDiagram
+ participant D as Developer
+ participant G as GitHub Repo
+ participant A as GitHub Actions (Runner)
+ participant S as Production Server
+
+ D->>G: Push Code to Branch
+ G->>A: Trigger 'Test' Workflow
+ A->>A: npm install
+ A->>A: npm test
+ alt Tests Passed
+ A-->>G: Green Checkmark ✅
+ G->>S: Auto-Deploy to Production
+ else Tests Failed
+ A-->>G: Red Cross ❌
+ G-->>D: "Fix your code!"
+ end
+```
+
+
+
+In this example, the developer pushes code to a branch, which triggers the GitHub Actions workflow. The runner installs dependencies and runs tests. If the tests pass, it automatically deploys to production. If they fail, it notifies the developer to fix the code.
+
+## Best Practices for Absolute Beginners
+
+1. **Fail Fast:** Put your tests at the beginning of the workflow. If they fail, don't waste time/money building the app.
+2. **Stay Secure:** Never put passwords in your YAML files. Use **GitHub Secrets**.
+3. **Use Versions:** When using community actions, always specify a version (e.g., `actions/checkout@v4`) to prevent your pipeline from breaking when the action updates.
+
+:::info Did you know?
+GitHub Actions is free for public repositories! For private repositories, GitHub provides a generous amount of free minutes every month, which is plenty for most startup projects and personal portfolios.
+:::
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/github-actions/secrets-and-environments.mdx b/absolute-beginners/devops-beginner/github-actions/secrets-and-environments.mdx
new file mode 100644
index 0000000..ccb6de0
--- /dev/null
+++ b/absolute-beginners/devops-beginner/github-actions/secrets-and-environments.mdx
@@ -0,0 +1,112 @@
+---
+title: "Secrets and Environments"
+sidebar_label: "5. Security (Secrets)"
+sidebar_position: 5
+description: "Learn how to manage sensitive data and deployment targets using GitHub Secrets and Environments. This guide covers best practices for storing API keys, database credentials, and how to set up protected environments for staging and production deployments in your CodeHarborHub projects."
+tags: ["GitHub Actions", "CI/CD", "Automation", "DevOps", "CodeHarborHub", "Security"]
+keywords: ["GitHub Actions", "CI/CD", "Continuous Integration", "Continuous Delivery", "Automation", "DevOps", "CodeHarborHub", "Secrets Management", "Environments", "API Keys", "Database Credentials"]
+---
+
+In a professional **MERN stack** or **Docusaurus** project, your code needs to talk to external services like **MongoDB Atlas**, **AWS**, or **Stripe**. These services require "Keys" or "Passwords."
+
+At **CodeHarborHub**, we have one golden rule: **NEVER commit secrets to GitHub.** If a password is in your code, it is no longer a secret.
+
+:::info Why Not Commit Secrets?
+Even if you delete the secret from your code later, it still exists in the Git History. Anyone can go back and find it. This is a huge security risk. Instead, we use GitHub's built-in **Secrets** feature to store sensitive information safely. This way, your workflows can access the secrets without exposing them in the code or logs.
+
+**For example:** If you have a MongoDB connection string, you would store it as a secret called `MONGODB_URI` and then reference it in your workflow without ever showing the actual value.
+:::
+
+## What are GitHub Secrets?
+
+**GitHub Secrets** are encrypted environment variables that you create in your repository settings. They are only available to your GitHub Actions workflows and are never visible in the logs (GitHub will mask them with `***`).
+
+### How to Create a Secret:
+
+1. Navigate to your repository on GitHub.
+2. Go to **Settings** > **Secrets and variables** > **Actions**.
+3. Click **New repository secret**.
+4. Name: `MONGODB_URI` | Value: `mongodb+srv://username:password@cluster.mongodb.net/`
+
+## Using Secrets in your Workflow
+
+Once a secret is saved, you can "inject" it into your code using the `${{ secrets.NAME }}` syntax.
+
+```yaml title="deploy.yml"
+name: Production Deployment
+on: [push]
+
+jobs:
+ deploy:
+ runs-on: ubuntu-latest
+ steps:
+ - uses: actions/checkout@v4
+
+ - name: Deploy to AWS
+ run: ./deploy-script.sh
+ env:
+ # Injecting the secrets into the environment
+ AWS_ACCESS_KEY: ${{ secrets.AWS_ACCESS_KEY_ID }}
+ AWS_SECRET_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
+ DB_CONNECTION: ${{ secrets.MONGODB_URI }}
+```
+
+## Environments & Protection Rules
+
+An **Environment** is a logical target for your deployment (e.g., `Production`, `Staging`, `Development`). This is an "Industrial Level" feature that adds a layer of safety.
+
+### Why use Environments?
+
+ * **Manual Approvals:** You can set a rule that says: "The code cannot deploy to Production until the Team Lead clicks 'Approve'."
+ * **Environment Secrets:** You can have a `DATABASE_URL` secret for *Staging* and a different `DATABASE_URL` for *Production*. This way, your staging environment can use a test database, while production uses the real one.
+
+```yaml title="deploy.yml"
+jobs:
+ deploy-prod:
+ runs-on: ubuntu-latest
+ environment: Production # Connects this job to the Production rules
+ steps:
+ - run: echo "Deploying to the live CodeHarborHub site..."
+```
+
+## The "Zero-Trust" Security Flow
+
+```mermaid
+graph TD
+ A[Developer] -->|Adds Secret| B(GitHub Settings)
+ B -->|Encrypted Storage| C[GitHub Vault]
+ D[Workflow .yml] -->|Request Secret| C
+ C -->|Masked Value| E[Runner VM]
+ E -->|Execute| F[External API/Cloud]
+
+ subgraph "Safety Layer"
+ G[Logs: 'Printing Key: ***']
+ end
+```
+
+## Comparison: Secrets vs. Variables
+
+| Feature | GitHub Secrets | Configuration Variables |
+| :--- | :--- | :--- |
+| **Visibility** | Hidden (`***`) in logs. | Visible in logs. |
+| **Best For** | Passwords, API Keys, SSH Keys. | App Ports, Themes, Feature Flags. |
+| **After Saving** | Cannot be viewed again, only overwritten. | Can be viewed and edited anytime. |
+
+## Professional Security Rules
+
+1. **Least Privilege:** Only give your GitHub Action the permissions it *absolutely* needs.
+2. **Rotation:** Change your secrets every 90 days.
+3. **No Echo:** Never try to `echo ${{ secrets.MY_KEY }}` into a file that is later uploaded as a public artifact.
+4. **Review:** Always check which "Third-Party Actions" you are using. Do you trust them with your secrets?
+5. **Use Environments:** For production deployments, always use an Environment with manual approval to add an extra layer of security.
+
+:::danger Security Warning
+If you accidentally commit a secret to your code, **it is compromised.** Changing the file and pushing again doesn't help because it stays in the Git History. You must **Rotate** (change) the key immediately on the provider's website (e.g., AWS or MongoDB).
+:::
+
+## Final Graduation Challenge
+
+1. Go to your GitHub repo and create a secret called `CODENAME` with the value `Goldfish`.
+2. Create a workflow that has one step: `run: echo "The secret is ${{ secrets.CODENAME }}"`.
+3. Run the workflow and check the logs. Notice how GitHub replaces `Goldfish` with `***`.
+4. **Congratulations!** You are now a DevOps-ready developer!
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/terraform/_category_.json b/absolute-beginners/devops-beginner/terraform/_category_.json
new file mode 100644
index 0000000..04405e8
--- /dev/null
+++ b/absolute-beginners/devops-beginner/terraform/_category_.json
@@ -0,0 +1,13 @@
+{
+ "label": "Terraform",
+ "position": 8,
+ "link": {
+ "type": "generated-index",
+ "title": "Terraform Infrastructure as Code (IaC)",
+ "description": "Learn to automate your cloud infrastructure using HashiCorp Terraform. Move from manual console clicking to professional, version-controlled code. This section covers the basics of Terraform, including writing configuration files, managing state, and deploying infrastructure across multiple cloud providers. By the end, you'll be able to define and provision your entire infrastructure as code, making it easier to maintain, scale, and collaborate on your projects."
+ },
+ "customProps": {
+ "icon": "🏗️",
+ "status": "Advanced Beginner"
+ }
+}
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/terraform/intro-to-iac.mdx b/absolute-beginners/devops-beginner/terraform/intro-to-iac.mdx
new file mode 100644
index 0000000..c74a2dc
--- /dev/null
+++ b/absolute-beginners/devops-beginner/terraform/intro-to-iac.mdx
@@ -0,0 +1,118 @@
+---
+title: "Introduction to Infrastructure as Code (IaC)"
+sidebar_label: "1. What is IaC?"
+sidebar_position: 1
+description: "Master the fundamentals of Infrastructure as Code and why Terraform is the industry standard for DevOps. Learn how to manage your cloud infrastructure with code, ensuring consistency, scalability, and version control. This guide will set the stage for your IaC journey and prepare you for hands-on labs in Terraform."
+tags: [terraform, iac, infrastructure-as-code, devops, cloud]
+keywords: [terraform, iac, infrastructure-as-code, devops, cloud]
+---
+
+In the early days of the web, if a developer at **CodeHarborHub** needed a new server, they had to manually buy hardware, rack it, and cable it. In the Cloud era, we used the **AWS Console** to click buttons.
+
+**Infrastructure as Code (IaC)** is the next evolution. It allows you to manage and provision your entire technology stack through **machine-readable definition files**, rather than physical hardware configuration or interactive configuration tools.
+
+:::info The "Industrial Level" Standard
+At **CodeHarborHub**, we don't just want you to know how to click buttons. We want you to understand how to **automate** and **scale** your infrastructure using code. This is the "Industrial Level" standard for modern DevOps.
+:::
+
+## The Problem: "ClickOps" vs. "Code"
+
+Before IaC, we practiced "ClickOps" (manually clicking in the AWS UI). While easy for beginners, it fails at scale.
+
+### Why ClickOps Fails in Production:
+* **Human Error:** It’s easy to forget to check a box or misspell a database name.
+* **Lack of History:** Who changed the Security Group settings at 2 AM? You can't "Undo" a click.
+* **Configuration Drift:** Over time, your "Staging" environment becomes different from "Production" because of manual tweaks.
+* **Slow Scaling:** Try manually creating 50 VPCs across 10 regions. It's impossible.
+
+## The 4 Pillars of IaC
+
+To build "Industrial Level" infrastructure, we follow these four principles:
+
+| Pillar | Description |
+| :--- | :--- |
+| **Declarative** | You define the *Desired State* (e.g., "I want 3 servers"). Terraform handles the *How*. |
+| **Idempotency** | Running the same code 100 times results in the exact same infrastructure every time. |
+| **Version Controlled** | Your infrastructure lives in GitHub. You can see history, branches, and PRs. |
+| **Reproducibility** | You can destroy your entire environment and rebuild it from scratch in minutes. |
+
+## How IaC Fits into the Lifecycle
+
+IaC sits right in the middle of your **CI/CD pipeline**. When you push code to GitHub, your infrastructure updates automatically alongside your application code.
+
+```mermaid
+graph LR
+ A[Developer] -->|Git Push| B(GitHub Repo)
+ B -->|Trigger| C{GitHub Actions}
+ C -->|Terraform Plan| D[Dry Run/Preview]
+ D -->|Terraform Apply| E[AWS Infrastructure]
+ E -->|Feedback| A
+```
+
+In this lifecycle:
+1. You push code to GitHub, which includes both application and infrastructure code.
+2. GitHub Actions runs a Terraform Plan to show you what changes will be made.
+3. If you approve, Terraform Apply executes the changes in AWS.
+4. Your infrastructure updates, and you get feedback on the deployment status.
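+
+On your own machine, the same lifecycle maps to three CLI commands (a sketch; assumes Terraform is installed and your cloud credentials are configured):
+
+```bash
+terraform init    # Download providers and set up the working directory
+terraform plan    # Dry run: preview exactly what will change
+terraform apply   # Execute the plan against your cloud account
+```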
+
+## Declarative vs. Imperative
+
+Understanding this distinction is the "Aha!" moment for DevOps beginners.
+
+
+### Declarative (The Terraform Way)
+
+**"I need a house with 3 bedrooms."**
+You describe the destination. You don't care how the contractor builds it, as long as the end result matches your blueprint.
+
+```hcl
+resource "aws_instance" "web" {
+ count = 3
+ ami = "ami-xyz"
+ instance_type = "t2.micro"
+}
+```
+
+### Imperative (The Script Way)
+
+**"Buy bricks, lay foundation, build wall 1, build wall 2..."**
+You define every single step. If one step fails, the whole process breaks, and you might end up with half a house.
+
+```bash
+# AWS CLI (Imperative)
+aws ec2 run-instances --image-id ami-xyz --count 3 --instance-type t2.micro ...
+```
+
+## Why We Chose Terraform for CodeHarborHub
+
+While AWS has **CloudFormation**, we prefer **Terraform** for our learners because it is **Cloud Agnostic**.
+
+```mermaid
+mindmap
+ root((Terraform))
+ Providers
+ AWS
+ Azure
+ GCP
+ Kubernetes
+ Benefits
+ State Management
+ Huge Community
+ Open Source
+ Format
+ HCL - HashiCorp Configuration Language
+```
+
+Terraform allows you to manage infrastructure across multiple cloud providers with a single tool and language. This flexibility is crucial for modern DevOps engineers who need to work in multi-cloud environments.
+
+:::tip Key Takeaway
+Infrastructure as Code isn't just a tool; it's a **mindset**. It treats your servers, networks, and databases with the same respect and rigor as your React or Node.js code.
+:::
+
+## Learning Challenge
+
+Take a look at your AWS account. If you had to delete everything and rebuild it right now, how long would it take you manually? If the answer is "more than 5 minutes," you need **IaC**.
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/terraform/providers-and-resources.mdx b/absolute-beginners/devops-beginner/terraform/providers-and-resources.mdx
new file mode 100644
index 0000000..8b05595
--- /dev/null
+++ b/absolute-beginners/devops-beginner/terraform/providers-and-resources.mdx
@@ -0,0 +1,116 @@
+---
+title: "Providers and Resources"
+sidebar_label: "3. Providers & Resources"
+sidebar_position: 3
+description: "Learn the fundamental building blocks of Terraform—how to connect to a cloud and define your infrastructure. This chapter covers Providers, Resources, and how Terraform manages dependencies between them."
+tags: ["terraform", "providers", "resources", "aws", "infrastructure as code"]
+keywords: ["terraform providers", "terraform resources", "aws provider", "terraform dependency"]
+---
+
+To build any "Industrial Level" infrastructure, you need two things: **A place to build it** (Provider) and **The things you want to build** (Resources).
+
+In Terraform, we define these using simple blocks of code. Think of the **Provider** as the "Cloud Platform" and the **Resource** as the "Cloud Service."
+
+## 1. The Provider: The "Who"
+
+A **Provider** is a plugin that Terraform uses to translate your code into API calls for a specific platform. Without a provider, Terraform is just a text processor with no way to talk to the outside world.
+
+### How to Configure a Provider
+In your `main.tf` file, you must declare which provider you are using. For our **CodeHarborHub** projects, we primarily use AWS.
+
+```hcl title="main.tf"
+# 1. Define the Provider
+provider "aws" {
+ region = "ap-south-1" # Mumbai Region
+}
+```
+
+:::info
+You can use **multiple providers** in the same project! For example, you could have an **AWS Provider** to launch a server and a **GitHub Provider** to create a repository for that server's code.
+:::
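+
+As a sketch of that idea (every name and value below is illustrative, not a working config), a project mixing the two providers might look like this:
+
+```hcl title="main.tf"
+provider "aws" {
+  region = "ap-south-1"
+}
+
+provider "github" {
+  # Auth token is usually supplied via the GITHUB_TOKEN environment variable
+}
+
+# Server launched via the AWS provider
+resource "aws_instance" "app" {
+  ami           = "ami-xyz" # placeholder AMI ID
+  instance_type = "t2.micro"
+}
+
+# Repository created via the GitHub provider
+resource "github_repository" "app_code" {
+  name       = "app-server-code"
+  visibility = "private"
+}
+```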
+
+## 2. The Resource: The "What"
+
+A **Resource** is the most important element in the Terraform language. It describes one or more infrastructure objects, such as virtual networks, compute instances, or higher-level components such as DNS records.
+
+### Resource Syntax
+
+The syntax follows a strict pattern: `resource "TYPE" "LOCAL_NAME" { ... }`
+
+```hcl title="main.tf"
+resource "aws_s3_bucket" "my_learning_assets" {
+ bucket = "codeharborhub-assets-2026"
+
+ tags = {
+ Environment = "Dev"
+ Owner = "CodeHarborHub"
+ }
+}
+```
+
+ * **Type (`aws_s3_bucket`):** The specific AWS service. This is defined by the provider.
+ * **Local Name (`my_learning_assets`):** A name used *only* inside your Terraform code to refer to this resource.
+ * **Arguments:** The configuration settings for that resource (e.g., `bucket` name).
+
+## Dependency: The "Order of Operations"
+
+Terraform is intelligent enough to know which resource to build first. This is called **Resource Dependency**.
+
+### Implicit Dependency (The Automatic Way)
+
+If Resource B uses an attribute from Resource A, Terraform automatically builds A first.
+
+```hcl title="main.tf"
+resource "aws_instance" "web" {
+ # Terraform sees this reference and builds the subnet FIRST
+ subnet_id = aws_subnet.main.id
+}
+
+resource "aws_subnet" "main" {
+ vpc_id = "vpc-12345"
+}
+```
+
+### Explicit Dependency (The Manual Way)
+
+Sometimes resources depend on each other but don't share data. You can force an order using `depends_on`.
+
+```hcl title="main.tf"
+resource "aws_instance" "web" {
+ ami = "ami-xyz"
+ instance_type = "t2.micro"
+
+ # Wait for the S3 bucket to exist before starting the server
+ depends_on = [aws_s3_bucket.example]
+}
+```
+
+## Mapping Theory to Code
+
+| Terraform Code | AWS Console Equivalent |
+| :--- | :--- |
+| `resource "aws_vpc"` | Creating a Virtual Private Cloud. |
+| `resource "aws_instance"` | Launching an EC2 Instance. |
+| `resource "aws_security_group"` | Configuring Firewall Rules. |
+| `resource "aws_db_instance"` | Provisioning an RDS Database. |
+
+## The "Lifecycle" of a Resource
+
+When you change your code and run `terraform apply`, Terraform does one of three things based on the provider's logic:
+
+1. **Update in Place:** Changes a setting (like a Tag) without destroying the resource.
+2. **Destroy and Re-create:** If you change a setting that cannot be edited (like the AZ of a server), Terraform deletes the old one and builds a new one.
+3. **No-op:** If the code matches the cloud, it does nothing.
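+
+A hedged sketch of how this plays out (the AMI and zone values are placeholders): editing a tag updates the instance in place, while editing `availability_zone` forces a destroy-and-re-create.
+
+```hcl title="main.tf"
+resource "aws_instance" "web" {
+  ami               = "ami-xyz"      # placeholder
+  instance_type     = "t2.micro"
+  availability_zone = "ap-south-1a"  # immutable: changing this forces
+                                     # destroy-and-re-create
+
+  tags = {
+    Environment = "Dev"              # mutable: changing this updates in place
+  }
+}
+```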
+
+## Learning Challenge
+
+1. Create a folder named `terraform-lab`.
+2. Create a file named `main.tf`.
+3. Add a provider block for AWS.
+4. Add a resource block for an **S3 Bucket** with a unique name.
+5. Run `terraform init` and `terraform plan` to see the blueprint!
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/terraform/state-management.mdx b/absolute-beginners/devops-beginner/terraform/state-management.mdx
new file mode 100644
index 0000000..e4a907a
--- /dev/null
+++ b/absolute-beginners/devops-beginner/terraform/state-management.mdx
@@ -0,0 +1,101 @@
+---
+title: "Terraform State Management"
+sidebar_label: "5. State Management"
+sidebar_position: 5
+description: "Master the terraform.tfstate file, remote backends, and drift detection for professional DevOps workflows. This chapter covers how Terraform tracks your infrastructure, the importance of state files, and best practices for managing them securely."
+tags: ["terraform", "state management", "terraform state file", "remote backend", "drift detection"]
+keywords: ["terraform state management", "terraform state file", "remote backend", "drift detection", "terraform best practices"]
+---
+
+When you run `terraform apply`, how does Terraform know which resources already exist in AWS and which ones need to be created? It doesn't rediscover your entire account from scratch every time. Instead, it consults the **State File** to see which real objects your code already maps to.
+
+At **CodeHarborHub**, we treat the State File as the **"Single Source of Truth."** If you lose this file, you lose control over your infrastructure.
+
+## What is the State File?
+
+The `terraform.tfstate` file is a JSON document that maps your HCL code to real-world resources. For example, if you have a resource block like this:
+
+```hcl title="main.tf"
+resource "aws_s3_bucket" "my_learning_assets" {
+ bucket = "codeharborhub-assets-2026"
+}
+```
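+
+After `terraform apply`, the matching entry in `terraform.tfstate` looks roughly like this (heavily trimmed and illustrative; the real file stores many more attributes):
+
+```json title="terraform.tfstate (trimmed)"
+{
+  "resources": [
+    {
+      "type": "aws_s3_bucket",
+      "name": "my_learning_assets",
+      "instances": [
+        {
+          "attributes": {
+            "bucket": "codeharborhub-assets-2026",
+            "arn": "arn:aws:s3:::codeharborhub-assets-2026"
+          }
+        }
+      ]
+    }
+  ]
+}
+```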
+
+### Why do we need it?
+1. **Mapping:** Your code says `resource "aws_instance" "web"`. The state file remembers that this specific resource corresponds to ID `i-0123456789abcdef0` in AWS.
+2. **Metadata:** It stores complex dependencies that aren't always visible in your code.
+3. **Performance:** For large infrastructures, querying the AWS API for thousands of resources is slow. The state file acts as a local cache.
+
+## Drift Detection: Code vs. Reality
+
+**Drift** occurs when someone manually changes a resource in the AWS Console without updating the Terraform code.
+
+```mermaid
+graph TD
+    A["Terraform Code: 2 Servers"] -->|terraform plan| B{State File}
+    C["AWS Console: 3 Servers"] -->|Drift!| B
+    B --> D["Terraform: 'I will delete the extra server to match the code'"]
+```
+
+When you run `terraform plan`, Terraform compares the state file with the actual AWS resources. If it detects a difference (like an extra server), it will show you a plan to fix that drift.
+
+:::info
+Always run `terraform plan` before `apply`. It will alert you to any "Drift" so you can decide whether to update your code or let Terraform revert the manual changes.
+:::
+
+## Local vs. Remote State
+
+In a professional environment, keeping the state file on your laptop is dangerous.
+
+### Local State
+
+* **Stored in:** `terraform.tfstate` on your disk.
+* **Risk:** If your laptop breaks or you delete the folder, your infrastructure is "orphaned."
+* **Collaboration:** Impossible. Two developers can't work on the same project safely.
+
+### Remote State
+
+* **Stored in:** AWS S3, HashiCorp Cloud, or Azure Blob.
+* **Benefit:** Centralized, backed up, and supports **State Locking**.
+* **Collaboration:** Multiple team members can run Terraform safely.
+
+## Implementing a Remote Backend (S3)
+
+To move to an "Industrial Level" setup at **CodeHarborHub**, add a `backend` block to your `main.tf`:
+
+```hcl title="main.tf"
+terraform {
+ backend "s3" {
+ bucket = "my-codeharborhub-terraform-state"
+ key = "dev/frontend-app.tfstate"
+ region = "ap-south-1"
+ dynamodb_table = "terraform-lock-table" # For State Locking
+ encrypt = true
+ }
+}
+```
+
+### Why use DynamoDB with S3?
+
+**State Locking:** If Developer A is running `terraform apply`, DynamoDB "locks" the state file. If Developer B tries to run it at the same time, Terraform will block them until A is finished. This prevents **State Corruption**.
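+
+If you manage the lock table with Terraform itself, a minimal sketch looks like this (the table name must match the `dynamodb_table` value in your backend block, and the S3 backend requires the partition key to be named exactly `LockID`):
+
+```hcl title="lock-table.tf"
+resource "aws_dynamodb_table" "terraform_lock" {
+  name         = "terraform-lock-table"
+  billing_mode = "PAY_PER_REQUEST" # no capacity planning needed
+  hash_key     = "LockID"          # required key name for the S3 backend
+
+  attribute {
+    name = "LockID"
+    type = "S" # string
+  }
+}
+```
+
+In practice, this table (and the state bucket itself) is usually created once in a separate "bootstrap" project, before any backend points at it.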
+
+## The "Golden Rules" of State Security
+
+1. **NEVER Commit to Git:** The state file contains plain-text secrets (like database passwords). **Add `*.tfstate` to your `.gitignore` immediately.**
+2. **Enable Versioning:** If you use S3, enable **Bucket Versioning**. If a state file gets corrupted, you can roll back to a previous version.
+3. **Encrypt at Rest:** Always set `encrypt = true` in your backend configuration.
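+
+For Rule 1, a typical `.gitignore` might look like this (a common sketch; adjust to your repo):
+
+```text title=".gitignore"
+# Local state and its backups — these contain plain-text secrets
+*.tfstate
+*.tfstate.*
+
+# Local provider cache and module downloads
+.terraform/
+
+# Crash logs
+crash.log
+```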
+
+## Final Graduation Challenge
+
+1. Create a local Terraform project.
+2. Run `terraform init` and `apply` to create a simple resource (like an S3 bucket).
+3. Open the `terraform.tfstate` file in VS Code. **Read it.** Notice how it stores the ARN and IDs.
+4. Manually change a tag on that bucket in the **AWS Console**.
+5. Run `terraform plan` again. Observe how Terraform detects the **Drift**.
+6. Finally, run `terraform destroy` to clean up your cloud.
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/terraform/terraform-architecture.mdx b/absolute-beginners/devops-beginner/terraform/terraform-architecture.mdx
new file mode 100644
index 0000000..4a0f223
--- /dev/null
+++ b/absolute-beginners/devops-beginner/terraform/terraform-architecture.mdx
@@ -0,0 +1,111 @@
+---
+title: "Terraform Architecture"
+sidebar_label: "2. How it Works"
+sidebar_position: 2
+description: "Understand the internal engine of Terraform, including the Core, Providers, and the State Engine. Learn how Terraform manages your infrastructure as code and why it's the industry standard for DevOps automation."
+tags: [terraform, architecture, core, providers, state-engine, devops]
+keywords: [terraform, architecture, core, providers, state-engine, devops]
+---
+
+To become a professional DevOps engineer at **CodeHarborHub**, you must look "under the hood." Terraform isn't just a single program; it is a **Plugin-Based System** designed to handle any API on the planet.
+
+:::info The "Industrial Level" Standard
+At **CodeHarborHub**, we don't just want you to know how to write Terraform code. We want you to understand how Terraform works internally, so you can troubleshoot, optimize, and even extend it with custom providers. This deep understanding is what sets "Industrial Level" DevOps engineers apart.
+:::
+
+## The Two Main Components
+
+Terraform's architecture is split into two distinct parts: **Terraform Core** and **Terraform Plugins (Providers)**.
+
+### 1. Terraform Core
+The Core is a statically linked binary written in **Go**. It is the "Brain" of the operation.
+* **Responsibilities:**
+ * Reading and interpolating your configuration files (`.tf`).
+ * Resource dependency analysis (building the **Graph**).
+ * Managing the **State File** (the source of truth).
+ * Communicating with Plugins via RPC (Remote Procedure Call).
+
+### 2. Terraform Plugins (Providers)
+Plugins are the "Hands" that do the work. Terraform doesn't know how to talk to AWS, Azure, or DigitalOcean natively. It uses **Providers**.
+* **Responsibilities:**
+ * Translating Terraform's generic commands into specific API calls (e.g., "Create Instance" → `RunInstances` API in AWS).
+ * Mapping the cloud's response back into Terraform's format.
+
+## The "Wall Socket" Analogy
+
+Think of Terraform like a **Universal Power Adapter**:
+
+| Component | Analogy | Function |
+| :--- | :--- | :--- |
+| **Terraform Core** | The Adapter Box | Logic that manages the voltage and flow. |
+| **Provider** | The Plug Head | The specific shape needed for India, UK, or USA sockets. |
+| **Cloud (AWS/GCP)** | The Wall Socket | The actual source of power (Resources). |
+
+## The Terraform Execution Lifecycle
+
+When you run a command, Terraform goes through a precise mathematical process to ensure your infrastructure is safe.
+
+```mermaid
+graph TD
+ A[HCL Files] --> B{Terraform Core}
+ S[(State File)] <--> B
+ B --> C[Graph Engine]
+ C --> D[Provider Discovery]
+
+ subgraph "External Cloud"
+ D --> E[AWS Plugin]
+ D --> F[GitHub Plugin]
+ E --> G((EC2/S3))
+ F --> H((Repo/Teams))
+ end
+```
+
+## The 3 Key Architectural Concepts
+
+### A. The Resource Graph
+
+Terraform builds a **Directed Acyclic Graph (DAG)** of all resources. It determines which resources depend on others.
+
+ * *Example:* If an EC2 instance needs a Security Group, Terraform knows to build the Security Group first. It can also build independent resources in **parallel** to save time.
+
+### B. The State Engine
+
+Terraform keeps a JSON database (`terraform.tfstate`) that acts as a "Mirror" of your cloud.
+
+ * **Code:** What you *want*.
+ * **State:** What Terraform *remembers*.
+ * **Cloud:** What is *actually there*.
+
+### C. The Provider Registry
+
+When you run `terraform init`, the Core looks at the [Terraform Registry](https://registry.terraform.io/) to download the necessary plugins for your specific providers. This modular design allows Terraform to support hundreds of providers, from major clouds to niche services.
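+
+You control which plugins `init` downloads with a `required_providers` block (the version constraint below is illustrative):
+
+```hcl title="main.tf"
+terraform {
+  required_providers {
+    aws = {
+      source  = "hashicorp/aws" # registry address of the plugin
+      version = "~> 5.0"        # any 5.x release
+    }
+  }
+}
+```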
+
+## Life of a Command: `terraform plan`
+
+**Step 1: Refresh.** Terraform asks the **Provider** to check the current state of resources in the real world (e.g., "Is the EC2 still running?").
+
+**Step 2: Diff.** It compares the **Current State** with your **Desired Code**.
+
+ * Green (`+`): To be added.
+ * Yellow (`~`): To be modified.
+ * Red (`-`): To be destroyed.
+
+**Step 3: Plan Output.** It presents you with an execution plan. **No changes are made to your cloud during this phase.**
+
+:::info The "Aha!" Moment
+Understanding the separation of concerns between the Core and Providers is the "Aha!" moment for many DevOps beginners. It explains why Terraform can manage anything with an API and how it maintains a consistent workflow regardless of the underlying cloud.
+
+Because Terraform is plugin-based, you can write your own **Custom Provider** to manage anything with an API—even your office's smart lightbulbs or a Spotify playlist! This extensibility is what makes Terraform the "Industrial Level" standard for infrastructure automation in the DevOps world.
+:::
\ No newline at end of file
diff --git a/absolute-beginners/devops-beginner/terraform/variables-and-outputs.mdx b/absolute-beginners/devops-beginner/terraform/variables-and-outputs.mdx
new file mode 100644
index 0000000..e45a618
--- /dev/null
+++ b/absolute-beginners/devops-beginner/terraform/variables-and-outputs.mdx
@@ -0,0 +1,127 @@
+---
+title: "Variables and Outputs"
+sidebar_label: "4. Variables & Outputs"
+sidebar_position: 4
+description: "Make your Terraform code dynamic, reusable, and organized using Input Variables and Output Values. This chapter covers how to define variables, assign values, and extract information from your infrastructure after it's built."
+tags: ["terraform", "variables", "outputs", "input variables", "output values", "infrastructure as code"]
+keywords: ["terraform variables", "terraform outputs", "input variables", "output values", "terraform best practices"]
+---
+
+In the previous chapter, we hardcoded values like `ami-0c55b159cbfafe1f0`. In a professional DevOps workflow, this is a bad practice. What if you want to deploy the same app to a different region or use a larger server for production?
+
+**Variables** allow you to parameterize your code, while **Outputs** allow you to extract information from your infrastructure after it is built.
+
+## 1. Input Variables (The Inputs)
+
+Think of variables as the "Arguments" of a function. Instead of hardcoding values, you define a variable and pass the value when you run Terraform.
+
+### Definition Syntax
+We typically store variables in a separate file named `variables.tf`.
+
+```hcl title="variables.tf"
+variable "instance_type" {
+ description = "The size of the EC2 instance"
+ type = string
+ default = "t2.micro" # Optional: provides a fallback
+}
+
+variable "server_port" {
+ description = "The port the server will use for HTTP requests"
+ type = number
+ default = 80
+}
+```
+
+### Using Variables in Code
+
+To use a variable, use the `var.` syntax in your `main.tf`:
+
+```hcl title="main.tf"
+resource "aws_instance" "app_server" {
+ ami = "ami-0c55b159cbfafe1f0"
+ instance_type = var.instance_type # Using the variable here
+
+ tags = {
+ Name = "CodeHarborHub-Server"
+ }
+}
+```
+
+## 2. Output Values (The Results)
+
+Outputs are like the "Return" value of a function. After Terraform finishes building your cloud, you might need to know the **Public IP** of your server or the **URL** of your database.
+
+### Definition Syntax
+
+We typically store these in `outputs.tf`.
+
+```hcl title="outputs.tf"
+output "server_public_ip" {
+ description = "The public IP address of the web server"
+ value = aws_instance.app_server.public_ip
+}
+```
+
+When you run `terraform apply`, Terraform will print these values to your terminal at the very end.
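+
+For values you don't want printed on screen (like a generated password), Terraform supports a `sensitive` flag. A hedged sketch (the `aws_db_instance.main` resource here is hypothetical):
+
+```hcl title="outputs.tf"
+output "db_password" {
+  description = "Generated database password"
+  value       = aws_db_instance.main.password # hypothetical resource
+  sensitive   = true # apply prints <sensitive> instead of the value
+}
+```
+
+You can still read the value explicitly with `terraform output db_password` when you need it.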
+
+## Ways to Assign Variable Values
+
+Terraform provides multiple ways to set your variables. At **CodeHarborHub**, we use these in order of priority:
+
+| Method | Best For... | Example |
+| :--- | :--- | :--- |
+| **Variable Files (`.tfvars`)** | Environment-specific settings. | `terraform apply -var-file="prod.tfvars"` |
+| **Command Line Flags** | Quick testing/one-off changes. | `terraform apply -var="instance_type=t2.large"` |
+| **Environment Variables** | CI/CD pipelines (GitHub Actions). | `export TF_VAR_instance_type=t2.small` |
+| **Default Values** | Sensible defaults for beginners. | Defined inside `variables.tf`. |
+
+## Variable Types & Validation
+
+Terraform is a "Strongly Typed" language. This prevents errors before they ever reach the cloud.
+
+```mermaid
+mindmap
+ root((Terraform Types))
+ Primitive
+ string
+ number
+ bool
+ Complex
+ list
+ map
+ object
+ set
+```
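+
+A few sketches of the complex types in action (names and defaults are illustrative):
+
+```hcl title="variables.tf"
+variable "allowed_ports" {
+  description = "Ports opened in the security group"
+  type        = list(number)
+  default     = [22, 80, 443]
+}
+
+variable "common_tags" {
+  description = "Tags applied to every resource"
+  type        = map(string)
+  default = {
+    Owner       = "CodeHarborHub"
+    Environment = "Dev"
+  }
+}
+
+variable "server_config" {
+  description = "Grouped settings for one server"
+  type = object({
+    instance_type = string
+    monitoring    = bool
+  })
+  default = {
+    instance_type = "t2.micro"
+    monitoring    = false
+  }
+}
+```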
+
+### Pro Tip: Validation
+
+You can even add rules to ensure developers don't use expensive servers by accident!
+
+```hcl title="variables.tf"
+variable "instance_type" {
+ type = string
+ validation {
+ condition = contains(["t2.micro", "t3.micro"], var.instance_type)
+ error_message = "At CodeHarborHub, we only allow free-tier instances (t2/t3.micro)."
+ }
+}
+```
+
+## Practical Workflow: The `.tfvars` file
+
+To keep your code clean, create a file named `terraform.tfvars`:
+
+```hcl title="terraform.tfvars"
+# Values assigned here override the defaults in variables.tf
+instance_type = "t2.micro"
+server_port = 8080
+```
+
+When you run `terraform apply`, Terraform automatically loads `terraform.tfvars` and applies those values.
+
+## Learning Challenge
+
+1. Take your `main.tf` from the last lesson.
+2. Create a `variables.tf` and move the hardcoded bucket name into a variable (e.g., `bucket_name`).
+3. Create an `outputs.tf` to display the **ARN** (Amazon Resource Name) of the bucket.
+4. Run `terraform apply` and watch the output appear in your terminal!
\ No newline at end of file
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 3186105..ce2ed66 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -402,7 +402,7 @@ const config = {
"rust",
"java",
"yaml",
- // "dockerfile",
+ "hcl",
],
},
docs: {
diff --git a/static/img/tutorials/ansible-playbook-structure.png b/static/img/tutorials/ansible-playbook-structure.png
new file mode 100644
index 0000000..9046b9e
Binary files /dev/null and b/static/img/tutorials/ansible-playbook-structure.png differ