Shipping GitGreen: Measuring CI/CD Carbon in GitLab
I wanted GitLab teams to see the climate cost of every deployment as clearly as they see unit-test failures. GitGreen grew out of that idea: a CLI that instruments your runners, watches the grid carbon intensity, and drops a comment on every merge request with the estimated grams of CO2e that run just emitted.
Why Bother Measuring Pipelines?
Pipelines are invisible power hogs. Auto-scaling runners mean you can easily spin up dozens of virtual machines per merge request without thinking about it. By blending real CPU/RAM metrics, Electricity Maps data, and research-backed power models, GitGreen answers three questions for every job:
- How much energy did this run consume based on the exact vCPU minutes and RAM GB-hours?
- What was the regional carbon intensity when those resources were consumed?
- Did we blow through the carbon budget we committed to as a team?
Every report includes CPU, RAM, and Scope 3 (embodied) emissions plus weekly aggregates so you can spot regression trends when a new test suite ships.
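To make the power model concrete, here is a rough sketch of the arithmetic in TypeScript. The coefficients and the 20% embodied share are illustrative placeholders of my own, not GitGreen's actual constants; only the overall shape (operational energy times grid intensity, plus a Scope 3 share) mirrors the approach described above.

```typescript
// Illustrative sketch of the emissions math. The coefficient values and
// the 20% embodied share are assumptions, not GitGreen's real constants.
const WATTS_PER_VCPU = 3.5;   // assumed average power draw per vCPU
const WATTS_PER_GB_RAM = 0.4; // assumed power draw per GB of RAM

interface JobUsage {
  vcpuMinutes: number; // vCPU-minutes consumed by the job
  ramGbHours: number;  // RAM GB-hours consumed by the job
}

function estimateEmissions(
  usage: JobUsage,
  gridIntensity: number, // gCO2e per kWh, e.g. from Electricity Maps
) {
  // Operational energy in kWh, split by resource
  const cpuKwh = (usage.vcpuMinutes / 60) * (WATTS_PER_VCPU / 1000);
  const ramKwh = usage.ramGbHours * (WATTS_PER_GB_RAM / 1000);

  const cpu = cpuKwh * gridIntensity;
  const ram = ramKwh * gridIntensity;

  // Embodied (Scope 3) emissions amortized over the job; the share is a
  // placeholder, not a measured value.
  const scope3 = (cpu + ram) * 0.2;

  return { cpu, ram, scope3, total: cpu + ram + scope3 };
}
```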
Instrumenting GitLab CI/CD Runners
The CLI ships as a `gitgreen init` wizard that configures credentials, selects your cloud provider (AWS or GCP), and patches `.gitlab-ci.yml` with a safe append-only job:
```bash
npm install -g gitgreen
cd your-repo
gitgreen init
```
Behind the scenes the wizard:
- Authenticates with CloudWatch or Cloud Monitoring and stores encrypted tokens inside GitLab CI/CD variables (see the sketch after this list).
- Adds a single `gitgreen` stage after your tests:

  ```yaml
  gitgreen:
    stage: report
    image: node:20
    needs: []
    variables:
      GITGREEN_PROVIDER: aws
    script:
      - npx gitgreen run --job $CI_JOB_ID
  ```
- Creates a backup of `.gitlab-ci.yml` so you can roll back instantly if something ever fails.
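The variable step maps onto GitLab's documented project variables API. Below is a minimal sketch of my own, assuming a hypothetical `GITGREEN_CLOUD_TOKEN` variable name and a personal access token with `api` scope; it is not GitGreen's actual code.

```typescript
// Hypothetical sketch: store a cloud credential as a masked GitLab CI/CD
// variable via POST /projects/:id/variables. Names here are illustrative.
async function storeCiVariable(
  gitlabUrl: string,  // e.g. "https://gitlab.com"
  projectId: number,
  apiToken: string,   // personal access token with `api` scope
  value: string,      // must satisfy GitLab's masking rules (length, charset)
): Promise<void> {
  const res = await fetch(
    `${gitlabUrl}/api/v4/projects/${projectId}/variables`,
    {
      method: "POST",
      headers: {
        "PRIVATE-TOKEN": apiToken,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        key: "GITGREEN_CLOUD_TOKEN", // illustrative variable name
        value,
        masked: true, // keep the value out of job logs
      }),
    },
  );
  if (!res.ok) throw new Error(`Storing variable failed: ${res.status}`);
}
```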
The job scrapes runner metrics, calculates energy usage, then posts a merge request comment with a breakdown like:
```
Total emissions: 2.847 gCO2e
  CPU:     1.923 gCO2e
  RAM:     0.412 gCO2e
  Scope 3: 0.512 gCO2e
```
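Posting that comment comes down to one call to GitLab's merge request notes API. A minimal sketch, assuming the report text is already formatted; the function and parameter names are mine, not GitGreen's:

```typescript
// Sketch: publish the emissions report as an MR comment via
// POST /projects/:id/merge_requests/:iid/notes.
async function postReportComment(
  gitlabUrl: string,
  projectId: number,
  mrIid: number,    // CI_MERGE_REQUEST_IID inside a pipeline
  apiToken: string,
  report: string,   // the formatted emissions breakdown
): Promise<void> {
  const res = await fetch(
    `${gitlabUrl}/api/v4/projects/${projectId}/merge_requests/${mrIid}/notes`,
    {
      method: "POST",
      headers: {
        "PRIVATE-TOKEN": apiToken,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ body: report }),
    },
  );
  if (!res.ok) throw new Error(`Posting comment failed: ${res.status}`);
}
```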
Making the Numbers Actionable
Measuring carbon is interesting; enforcing it is useful. GitGreen ships with:
- Carbon budgets tuned per branch or project. Breaches can fail the job or just warn reviewers (a minimal enforcement sketch follows below).
- Database connectors for PostgreSQL/MySQL so dashboards can track rolling weekly emissions alongside cost.
- Slack notifications that summarize which jobs are trending up so platform teams can investigate noisy workloads.
All of that ships in a single CLI because the operational overhead had to be close to zero for real teams to adopt it.
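To show what enforcement can look like in practice, here is a sketch of the fail-or-warn logic. The `CarbonBudget` shape and mode names are my assumptions, not GitGreen's configuration schema; the key point is that a non-zero exit code is what fails the CI job.

```typescript
// Assumed budget shape; GitGreen's real config schema may differ.
interface CarbonBudget {
  limitGrams: number;    // budget in gCO2e per pipeline or branch
  mode: "fail" | "warn"; // fail the job outright, or only warn reviewers
}

function enforceBudget(totalGrams: number, budget: CarbonBudget): void {
  if (totalGrams <= budget.limitGrams) return;
  const msg =
    `Carbon budget exceeded: ${totalGrams.toFixed(3)} gCO2e ` +
    `> ${budget.limitGrams} gCO2e`;
  if (budget.mode === "fail") {
    console.error(msg);
    process.exit(1); // non-zero exit marks the CI job as failed
  }
  console.warn(msg);
}
```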
What's Next
I’m experimenting with GPU profiling for AI workloads, plus webhook integrations so you can trigger automated offset purchases. If you want to test-drive the CLI, the code lives in the GitLab repo and the package is published on npm. Let me know how your pipelines behave once you shine a light on their carbon trail.