CODI Reporting & Dashboards

CODI produces structured artefacts for every run, including Markdown and HTML reports, metrics JSON, and dashboard datasets. This guide covers how reports are generated, where to find them, and how to publish dashboards.

1. Run Artefact Layout

runs/<timestamp>-<stack>-<label>/
├── inputs/
│   └── Dockerfile
├── candidates/
│   ├── candidate_1.Dockerfile
│   └── candidate_2.Dockerfile
├── metadata/
│   ├── run.json
│   ├── metrics.json
│   ├── llm_metrics.json (Complete)
│   ├── environment.json
│   └── rag.json
└── reports/
    ├── report.md
    └── report.html

codi run populates the entire tree. codi report can be re-run at any time to regenerate Markdown/HTML from metadata.
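As a sketch of how downstream tooling might consume this layout, the helper below collects the JSON files under metadata/ into a single dictionary. The helper itself is illustrative and not part of CODI:

```python
import json
from pathlib import Path


def load_run_metadata(run_dir: str) -> dict:
    """Collect the JSON metadata files written under <run_dir>/metadata/.

    Returns a dict keyed by file stem, e.g. {"run": {...}, "metrics": {...}}.
    """
    metadata = {}
    for json_file in Path(run_dir, "metadata").glob("*.json"):
        with open(json_file) as fh:
            metadata[json_file.stem] = json.load(fh)
    return metadata
```

Because codi report regenerates reports purely from this metadata, any custom tooling built on the same files stays re-runnable as well.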

2. Report Generation (core/report.py)

The reporter ingests the stored metadata and produces two deliverables: report.md and report.html under reports/. Both share the following sections:

  1. Executive Summary – Stack, rule IDs, top-level improvements, command to reproduce run.
  2. Metrics Table – Estimated size reduction, layer changes, timing, air-gap status.
  3. Candidate Details – Per-candidate rationale, CMD rewrite summary, policy notes.
  4. CMD Rewrite Summary – Explanation of shell-form conversions and runtime promotions.
  5. Security Notes – Highlighted allowlist hits or policy warnings.
  6. LLM Assist (Complete) – Adapter version, confidence ranking, rationale excerpts.
  7. Environment Snapshot – Values from environment.json (rules path, toggles, output root).
  8. Diffs – Rendered diffs between original Dockerfile and each candidate.
  9. RAG Insights (optional) – Similar runs retrieved from runs/_rag.
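A minimal sketch of how a section renderer might assemble the Executive Summary from run.json. The function and field names (stack, rule_ids, command) are assumptions for illustration, not CODI's actual schema:

```python
def render_executive_summary(run: dict) -> str:
    """Render an Executive Summary section as Markdown (illustrative only)."""
    lines = [
        "## Executive Summary",
        f"- Stack: {run['stack']}",
        f"- Rule IDs: {', '.join(run['rule_ids'])}",
        f"- Reproduce: `{run['command']}`",
    ]
    return "\n".join(lines)
```

The real reporter in core/report.py repeats this pattern per section, which is why reports can be regenerated at any time from metadata alone.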

3. Markdown Report Usage

4. HTML Report Usage

5. Dashboard Workflow

5.1 CLI Aggregation

codi dashboard --runs docs/examples/dashboard \
  --export-json docs/dashboard/data/sample_runs.json \
  --relative-to docs/dashboard
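Presumably --relative-to rewrites the report paths stored in the exported dataset so they resolve from the dashboard directory rather than from wherever the runs live. A rough Python equivalent of that rewrite (the helper name is ours, not CODI's):

```python
import os


def make_relative(report_path: str, base_dir: str) -> str:
    """Rewrite a report path relative to a base directory, roughly
    mirroring what --relative-to does for exported dashboard datasets."""
    return os.path.relpath(report_path, start=base_dir)
```

This is why exported datasets stay portable: the viewer can resolve reports/report.html from its own location instead of depending on absolute paths.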

5.2 Static Viewer

Files under docs/dashboard/ comprise a self-contained dashboard:

- index.html
- dashboard.js
- styles.css
- data/sample_runs.json

Serve locally via python -m http.server --directory docs/dashboard 8001, or host on any static-site service. The ?data= query parameter can point the viewer at an alternative JSON dataset.

5.3 Dataset Schema

Each run entry contains:

- id: run directory name.
- stack: detected stack.
- rule_id: winning rule.
- metrics: size/layer deltas, analysis/render timings.
- environment: air-gap + LLM flags.
- reports: relative or absolute paths to Markdown/HTML files.

Refer to REFERENCE.md for the exact schema.
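Until you have REFERENCE.md to hand, a lightweight sanity check against the keys listed above can catch malformed entries. This validator is illustrative, not part of CODI:

```python
# Top-level keys every dashboard run entry should carry (per the list above).
REQUIRED_KEYS = {"id", "stack", "rule_id", "metrics", "environment", "reports"}


def validate_run_entry(entry: dict) -> list[str]:
    """Return the sorted names of required dataset keys missing from an entry."""
    return sorted(REQUIRED_KEYS - entry.keys())
```

Running it over data/sample_runs.json before publishing is a cheap way to catch entries that would render as blank rows in the viewer.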

6. Sharing Reports

7. Automation Tips

8. Troubleshooting

| Issue | Resolution |
| --- | --- |
| Report generation fails | Verify codi report paths and ensure the run directory contains metadata/ and candidates/. |
| HTML references broken | Use --relative-to when exporting dashboard data; ensure the reports/ directory exists in the final location. |
| Metrics missing | Confirm the build runner produced metrics.json; rerun codi run if the file is absent. |
| LLM section empty | Set LLM_ENABLED=true and ensure the adapter is mounted when generating the run. |