Janis Iranee

Machine learning that survives production

From feasibility and experiments to pipelines, monitoring, and safe releases.

  • Validate value before building
  • Reproducible training/scoring pipelines
  • Monitoring + drift detection to keep performance stable

Discuss your ML use case

What this is

Two tracks:

  • ML Discovery: problem framing, baselines, evaluation design
  • ML Engineering: pipelines, deployment, monitoring, governance

Who it’s for

  • Product teams with a new ML idea but unclear ROI
  • Teams running existing ML systems with unstable performance
  • Teams needing reproducible pipelines (compliance, audits)
  • Engineering teams integrating ML into production services

Typical engagements

ML Feasibility & Experiment Sprint (2–4 weeks)

  • Define target and evaluation methodology
  • Baseline models + error analysis
  • Recommendation: build/no-build + roadmap

Production ML Pipeline (4–12 weeks)

  • Training/evaluation/scoring pipeline design
  • Automated evaluation and monitoring
  • Versioning + rollback strategy
  • Integration with services/data platform

Operate & Improve (monthly)

  • Regression testing of model quality
  • Drift investigation and remediation
  • Periodic re-training decisions
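As a concrete illustration of the "regression testing of model quality" bullet, the sketch below gates promotion of a candidate model on a held-out evaluation set. All names, the metric choice (ROC AUC), and the tolerance value are illustrative assumptions, not a prescribed setup.

```python
# Sketch of a model-quality regression gate (hypothetical names/thresholds):
# a candidate model is promoted only if it does not regress against the
# currently deployed baseline on a fixed held-out evaluation set.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for real training/evaluation data.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_eval, y_train, y_eval = train_test_split(X, y, random_state=0)

baseline = LogisticRegression(max_iter=1000).fit(X_train, y_train)
candidate = LogisticRegression(C=0.5, max_iter=1000).fit(X_train, y_train)

def auc(model) -> float:
    """Evaluate a model on the shared held-out set."""
    return roc_auc_score(y_eval, model.predict_proba(X_eval)[:, 1])

TOLERANCE = 0.01  # allowed metric drop before the release is blocked
promote = auc(candidate) >= auc(baseline) - TOLERANCE
print(f"baseline={auc(baseline):.3f} candidate={auc(candidate):.3f} promote={promote}")
```

In practice the same gate runs in CI against the production evaluation set, so a regression blocks the release instead of surfacing in monitoring.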

Deliverables

  • Experiment report (Discovery) with recommendation
  • Production pipeline (Engineering) + monitoring + runbook
  • Documentation: inputs/outputs, evaluation, known limitations
  • Handover: “operate the model” workshop

For technical readers

  • Experiment design (train/test splits, leakage checks)
  • Robustness testing beyond accuracy
  • Feature drift monitoring between training and scoring
  • Pipeline components (preprocess/train/evaluate/score)
  • Model versioning, rollback, reproducibility
  • Monitoring dashboards and alerting
  • Privacy/governance controls when handling sensitive data
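To make the "feature drift monitoring between training and scoring" item concrete, here is a minimal sketch using the Population Stability Index (PSI): bin edges are fixed from the training distribution, and the scoring distribution is compared against them. The 0.1/0.25 thresholds are common rules of thumb, not universal constants, and the data here is synthetic.

```python
# Minimal PSI-based drift check between a training feature distribution
# and the distribution seen at scoring time.
import numpy as np

def psi(train: np.ndarray, scoring: np.ndarray, bins: int = 10) -> float:
    # Quantile bin edges are derived from training data only.
    edges = np.quantile(train, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scoring values
    expected = np.histogram(train, edges)[0] / len(train)
    actual = np.histogram(scoring, edges)[0] / len(scoring)
    expected, actual = expected + 1e-6, actual + 1e-6  # avoid log(0)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)    # feature at training time
stable = rng.normal(0.0, 1.0, 10_000)   # scoring data, same distribution
shifted = rng.normal(1.0, 1.0, 10_000)  # scoring data with a mean shift

print(f"stable:  PSI={psi(train, stable):.3f}")   # well below 0.1: no drift
print(f"shifted: PSI={psi(train, shifted):.3f}")  # well above 0.25: investigate
```

A check like this runs per feature on each scoring batch, with alerts wired to the monitoring dashboard when PSI crosses the agreed threshold.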

Why I’m good at this

  • Built and maintained ML pipelines (training/evaluation/scoring) using modern tooling
  • Designed experiments that answer business questions, not just accuracy metrics
  • Implemented drift monitoring and continuous evaluation
  • Experience integrating ML into production workflows and communicating results

How I work

  • Start with discovery to avoid building the wrong thing
  • Define acceptance criteria early
  • Treat ML as a product: monitoring, ownership, iteration cadence

Get in touch

Have a project in mind? I typically respond within 1–2 business days.