In GitLab CI/CD, the .gitlab-ci.yml file is the blueprint for your automated workflows. While its declarative nature is powerful, the ability to control when specific jobs run is crucial for creating efficient, intelligent, and tailored pipelines. This is precisely where GitLab CI/CD's rules keyword shines.
rules offer a highly flexible and intuitive way to manage conditional job execution, allowing your pipelines to adapt dynamically based on factors like branch names, commit messages, file changes, and even the source of the pipeline trigger. This level of granular control is essential for building sophisticated CI/CD processes.
Why rules? The Evolution of Conditional Logic
Before rules, GitLab CI/CD primarily relied on the only and except keywords. While these served a basic purpose for including or excluding jobs based on branches or tags, they had notable limitations:
- Limited Scope: They offered narrow conditions, primarily focused on Git references (branches, tags, merge requests).
- Order Sensitivity: The order of only/except entries could sometimes lead to confusing or unintended behavior.
- Redundancy: Similar logic often had to be repeated across multiple jobs.
The rules keyword was introduced to address these challenges, providing a more explicit, ordered, and significantly more powerful mechanism for defining conditional logic within your CI/CD configuration.
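For instance, a job restricted to the main branch with the legacy only keyword can be expressed with rules as in this rough sketch (the job names and script are illustrative, and the two forms are only approximately equivalent):

# Legacy approach: run only on the main branch (illustrative job name)
build_legacy:
  stage: build
  script:
    - echo "Building..."
  only:
    - main

# The same intent expressed with rules, which is easier to extend later
build_with_rules:
  stage: build
  script:
    - echo "Building..."
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: on_success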
Understanding the rules Keyword Structure
The rules keyword is defined at the job level within your .gitlab-ci.yml file. It accepts a list of rule expressions, and GitLab evaluates these rules sequentially, from top to bottom. The first rule that evaluates to true dictates the job's behavior, and no further rules for that job are processed.
Each rule entry typically consists of the following components:
- if: The core conditional expression.
- when: Defines the job's behavior if the if condition is met.
- allow_failure: (Optional) Determines if the pipeline should continue upon job failure.
- variables: (Optional) Allows setting job-specific variables dynamically.
If none of the defined rules for a job evaluate to true, the job is effectively skipped by default (an implicit when: never applies). You can, however, add a final fallback rule to specify a different default behavior.
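Putting these components together, a single job might look like the following sketch (the job name, script, and variable values are illustrative assumptions, not part of any standard):

package_app: # illustrative job name
  stage: build
  script:
    - echo "Packaging with profile $BUILD_PROFILE..."
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"' # Rule 1: evaluated first
      when: on_success
      allow_failure: false
      variables:
        BUILD_PROFILE: "release"
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' # Rule 2: only checked if Rule 1 did not match
      when: manual
      allow_failure: true
      variables:
        BUILD_PROFILE: "debug"
    # No fallback rule: in every other case the job is skipped (implicit when: never)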
Let’s explore each component with detailed examples.
if: The Conditional Expression
The if keyword holds a conditional expression that GitLab evaluates to true or false. If true, the job adopts the when behavior (and other settings) specified within that particular rule. These expressions often leverage predefined GitLab CI/CD variables and common comparison operators.
Example:
lint_codebase:
  stage: test
  script:
    - echo "Running linter..."
    - npm run lint
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"' # Rule 1: Condition for 'main' branch
      when: on_success
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"' # Rule 2: Condition for MR pipelines
      when: on_success
    - if: '$CI_COMMIT_MESSAGE =~ /skip-lint/' # Rule 3: Condition for commit message
      when: never
    - when: manual # Rule 4: Fallback for all other cases
Explanation:
- Rule 1 (if: '$CI_COMMIT_BRANCH == "main"'): This condition checks whether the pipeline is running on the main branch. If a commit is pushed to main, this rule's if evaluates to true.
- Rule 2 (if: '$CI_PIPELINE_SOURCE == "merge_request_event"'): This condition checks whether the pipeline was triggered by a merge request event. If an MR is created or updated, this if evaluates to true.
- Rule 3 (if: '$CI_COMMIT_MESSAGE =~ /skip-lint/'): This uses a regular expression (=~) to check whether the commit message contains the phrase "skip-lint". This demonstrates how you can create conditions based on commit content.
- The first rule that evaluates to true determines the job's action. If none of the if conditions are met, the job falls through to the last rule and becomes a manual job.
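if expressions are not limited to a single check: you can combine conditions with the && and || operators. A minimal sketch, assuming a documentation-lint job that should run only for merge requests targeting main (the job name and script are illustrative):

lint_docs: # illustrative job name
  stage: test
  script:
    - echo "Linting documentation..."
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event" && $CI_MERGE_REQUEST_TARGET_BRANCH_NAME == "main"'
      when: on_success
    - when: never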
when: The Job’s Behavior
The when keyword dictates what happens to the job when its rule matches, either because the rule's if condition evaluates to true or because the rule has no if condition at all (as in a fallback rule).
on_success (default)
Meaning: The job will run automatically. This happens as soon as all jobs in its previous stages (or its specific needs dependencies) have completed successfully.
Example:
build_application:
  stage: build
  script:
    - echo "Compiling application..."
    - mvn clean install
  rules:
    - if: '$CI_COMMIT_BRANCH' # Any branch commit
      when: on_success # Job runs automatically
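The same automatic behavior applies when a job declares needs dependencies: it starts as soon as the jobs it needs have succeeded, without waiting for the rest of the stage. A small sketch building on the job above (the test job name and command are illustrative):

unit_tests: # illustrative job name
  stage: test
  needs: ["build_application"] # Starts as soon as build_application succeeds
  script:
    - echo "Running unit tests..."
    - mvn test
  rules:
    - if: '$CI_COMMIT_BRANCH'
      when: on_success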
on_failure
Meaning: The job will only run if a job in a previous stage (or a job it explicitly needs) has failed. This is invaluable for error handling.
Example:
send_failure_alert:
  stage: notifications
  script:
    - 'echo "Pipeline failed! Sending alert to #dev-ops channel."'
    - 'curl -X POST -H "Content-type: application/json" --data "{\"text\":\"Pipeline failed: $CI_PIPELINE_URL\"}" "$SLACK_WEBHOOK_URL"'
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"' # Only for main branch pipelines
      when: on_failure # This job runs ONLY if a preceding job failed
Explanation: If any job in an earlier stage of a main branch pipeline fails, this send_failure_alert job (in the notifications stage) will automatically trigger to send an alert.
manual
Meaning: The job will appear in the pipeline UI, but it won’t start automatically. Instead, it will display a “play” button, requiring a user to manually click it to initiate execution.
Example:
deploy_to_production:
  stage: deploy
  script:
    - echo "Initiating production deployment for version $CI_COMMIT_TAG..."
    - deploy-prod-script "$CI_COMMIT_TAG"
  rules:
    - if: '$CI_COMMIT_TAG =~ /^v\d+\.\d+\.\d+$/' # Triggered by semantic version tags
      when: manual # Requires manual approval to deploy
    - when: never
Explanation: When a new semantic version tag (e.g., v1.0.0) is pushed, the deploy_to_production job will be added to the pipeline, but it will be paused, awaiting a manual trigger from a user in the GitLab UI before deploying to production.
delayed
Meaning: The job will automatically run after a specified delay once its stage is reached and previous dependencies are met. This is useful for phased rollouts or giving services time to initialize.
Syntax: when: delayed with start_in: <duration> (e.g., 5 minutes, 1 hour, 1 day).
Example:
post_deploy_health_check:
  stage: post_deploy
  script:
    - echo "Waiting 10 minutes before running health checks..."
    - run_health_checks.sh
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"'
      when: delayed
      start_in: 10 minutes # Job starts 10 minutes after the previous stage completes
    - when: never
Explanation: After a successful deployment to main, this post_deploy_health_check job will appear in the pipeline, but its execution will be automatically delayed by 10 minutes.
never
Meaning: The job will never be added to the pipeline if this specific rule is matched. This is frequently used as a final, explicit fallback rule to prevent a job from running if no other specific condition is met.
Example:
run_heavy_integration_tests:
  stage: integration_tests
  script:
    - echo "Running heavy integration tests..."
    - run_all_integration_tests.sh
  rules:
    - if: '$CI_COMMIT_BRANCH == "main"' # Rule 1: Always run on main
      when: on_success
    - if: '$CI_COMMIT_MESSAGE =~ /\[full-ci\]/' # Rule 2: Run if commit message has [full-ci]
      when: on_success
    - when: never # Rule 3: If neither of the above, DO NOT run this job
Explanation: If a pipeline is triggered on a branch other than main and the commit message does not contain [full-ci], the run_heavy_integration_tests job will be entirely excluded from the pipeline.
allow_failure (Optional): Pipeline Continuation
Purpose: If allow_failure: true is set for a job, the overall pipeline will continue to subsequent stages even if this specific job fails. The failing job will be marked with a yellow warning icon in the GitLab UI, indicating a failure but not a pipeline halt.
Syntax: allow_failure: true or allow_failure: false (the default behavior if omitted).
Example:
static_code_analysis:
  stage: security_scan
  script:
    - echo "Running static analysis tool..."
    - run_sonarqube_scan.sh
  rules:
    - if: '$CI_COMMIT_BRANCH'
      when: on_success
      allow_failure: true # Pipeline proceeds even if SonarQube scan finds issues
Explanation: Even if the run_sonarqube_scan.sh script returns a non-zero exit code (indicating a failure, perhaps due to quality gate violations), the static_code_analysis job will be marked as "failed with warning," but the pipeline will continue to the next stage (e.g., deploy). This is useful for non-critical checks that provide feedback but shouldn't block the entire CI/CD flow.
variables (Optional): Rule-Specific Variables
Purpose: You can define a variables block directly within a rules entry. These variables are only set and available to the job if that specific rule's if condition is met and that rule is the one that causes the job to be included in the pipeline. This enables highly dynamic job configuration based on the matching condition.
Syntax: Nested under the specific rules entry.
Example:
deploy_to_environment:
  stage: deploy
  script:
    - 'echo "Deploying to environment: $TARGET_ENV with version $APP_VERSION"'
    - deploy_tool --env "$TARGET_ENV" --version "$APP_VERSION"
  rules:
    - if: '$CI_COMMIT_TAG' # Tag pipelines deploy to production
      variables:
        TARGET_ENV: "production" # Variable set for tag pipelines
        APP_VERSION: "$CI_COMMIT_TAG" # Use the Git tag for the production version
      when: manual
    - if: '$CI_COMMIT_BRANCH == "develop"'
      variables:
        TARGET_ENV: "staging" # Variable set if on develop branch
        APP_VERSION: "$CI_COMMIT_SHORT_SHA" # Use short SHA for staging version
      when: on_success
    - when: never
Explanation:
- If a pipeline is triggered by a Git tag, the deploy_to_environment job becomes manual. For that specific job, $TARGET_ENV is set to "production", and $APP_VERSION takes the value of the tag that triggered the pipeline (e.g., v1.0.0). Note that $CI_COMMIT_TAG is only populated in tag pipelines, which is why the production rule matches on the tag rather than on a branch.
- If a pipeline runs on the develop branch, the deploy_to_environment job runs on_success. For that specific job, $TARGET_ENV is "staging", and $APP_VERSION is set to the short commit SHA.
- This allows the same deploy_tool command to behave distinctly based on what triggered the pipeline, dynamically injecting the correct environment and versioning scheme without needing separate jobs or complex inline if/else logic in the script.
Best Practices for Using rules
- Order Matters: Always arrange your rules from the most specific condition to the most general. The first matching rule wins.
- Be Explicit with Fallbacks: Include a final when: never or a default when: rule to clearly define what happens if none of the preceding conditions are met. This avoids ambiguity.
- Leverage Predefined Variables: Familiarize yourself with GitLab's extensive list of predefined CI/CD variables (e.g., $CI_COMMIT_REF_NAME, $CI_PIPELINE_SOURCE, $CI_COMMIT_MESSAGE, $CI_MERGE_REQUEST_IID) to craft precise conditions.
- Optimize with changes: For large repositories or monorepos, use changes within your rules to trigger jobs only when specific files or directories have been modified, significantly speeding up pipelines (see the sketch after this list).
- Prioritize Readability: While rules can be complex, aim for clear and concise if conditions. If they become too unwieldy, consider breaking them down or using helper variables.
- Test Thoroughly: Given their power, rules require careful testing. Push commits that should and should not trigger certain jobs to verify your logic. Use the GitLab CI/CD Lint tool (CI/CD -> Editor -> Lint) to check for syntax errors.
- Consider workflow: rules: For controlling whether an entire pipeline runs at all (rather than just individual jobs), explore the top-level workflow: rules keyword. This acts as a pipeline-level gate.
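To make the changes and workflow: rules practices above concrete, here is a minimal sketch that combines a top-level workflow: rules gate with a job-level rules: changes condition. The job name, stage, and paths (frontend/, package.json) are illustrative assumptions for a hypothetical monorepo:

workflow:
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main"'
    # Any other trigger source creates no pipeline at all

build_frontend: # illustrative job name
  stage: build
  script:
    - echo "Building frontend..."
    - npm run build
  rules:
    - changes: # Run only when frontend files were modified
        - frontend/**/*
        - package.json
      when: on_success
    - when: never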
Conclusion
The rules keyword is a cornerstone for building advanced and efficient GitLab CI/CD pipelines. It provides fine-grained control over when and how your jobs execute, allowing you to tailor your pipelines precisely to your development needs. By mastering rules – understanding their components, precedence, and best practices – you can create highly optimized, intelligent, and adaptable CI/CD workflows that react dynamically to your code changes, merge requests, and deployment strategies, ultimately leading to faster and more reliable software delivery.
Author
Experienced Cloud & DevOps Engineer with hands-on experience in AWS, GCP, Terraform, Ansible, ELK, Docker, Git, GitLab, Python, PowerShell, and Shell, and theoretical knowledge of Azure, Kubernetes, and Jenkins.
In my free time, I write blogs on ckdbtech.com