Proof of Work

Results That Speak for Themselves.

We don't deal in vague outcomes. Every engagement has a before, an after, and a number that makes the difference impossible to ignore.

AI Automation

From Seven-Minute IT Tickets to Thirty-Second Slack Commands

Series B SaaS · ~120 Employees · Remote-First Operations Team

The Problem

The ops team was handling 10–20 requests per week for Jira project provisioning, permission assignments, and Google Workspace group management. Each task took 5–10 minutes of careful manual work across three admin consoles. Mistakes meant re-work — and occasionally the wrong person getting access to the wrong project. There was no audit trail.

What We Built

A natural-language Slack app backed by an LLM orchestration layer with scoped, read-limited integrations into Jira's REST API and Google Admin SDK. Employees describe what they need in plain English; the app interprets intent, maps it to permissioned operations, executes, and posts a confirmation with a full action log — all within the Slack thread. Ambiguous or out-of-scope requests are flagged for human review rather than guessed at.

Technical Detail
  • GPT-4o function-calling layer for intent parsing and structured operation mapping
  • Slack Bolt (Python) · Jira REST API · Google Admin SDK with principle-of-least-privilege scopes
  • All actions written to an append-only audit log in Jira itself — survives any re-org
  • Deployed serverless on AWS Lambda; cold-start p95 under 800ms
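
To make the guardrail concrete, here is a minimal sketch of the operation-mapping step: a parsed intent (the tool name and arguments returned by the LLM's function call) is checked against an allow-list of scoped operations, and anything out of scope or under-specified is routed to human review rather than executed. Operation names and argument fields are illustrative, not the production schema.

```python
from dataclasses import dataclass

# Allow-list of permissioned operations: name -> required arguments.
ALLOWED_OPERATIONS = {
    "create_jira_project": {"project_key", "project_name", "lead"},
    "grant_project_access": {"project_key", "user_email", "role"},
    "add_to_google_group": {"group_email", "user_email"},
}

@dataclass
class Decision:
    action: str      # "execute" or "review"
    operation: str
    reason: str = ""

def map_intent(operation: str, args: dict) -> Decision:
    """Map a parsed intent onto a permissioned operation, or flag it."""
    if operation not in ALLOWED_OPERATIONS:
        # Never guess at an unknown operation; escalate to a human.
        return Decision("review", operation, "operation not in allow-list")
    missing = ALLOWED_OPERATIONS[operation] - args.keys()
    if missing:
        # Ambiguous or incomplete request: flag instead of executing.
        return Decision("review", operation,
                        f"missing arguments: {sorted(missing)}")
    return Decision("execute", operation)
```

In practice the "execute" branch dispatches to the scoped Jira or Google Admin client and appends to the audit log; the point of the sketch is that execution is only ever reachable through the allow-list.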

The Results
  • <30s: Average task completion, down from 5–10 minutes
  • 0: Provisioning errors in six months post-launch
  • 87%: Reduction in IT ops tickets reaching human queues

Data Engineering · Go-to-Market

312 Paid Subscribers in the First Week — Before a Dollar Went to Ads

B2B Analytics Startup · Seed Stage · Pre-Launch

The Problem

The product was ready. The audience wasn't. A pre-launch SaaS with no existing email list, no warm network, and a narrow ICP needed to build distribution fast — without burning runway on paid acquisition before product-market fit was confirmed.

What We Built

A multi-source data pipeline that identified and qualified over 100,000 ICP-matched email contacts from public signals: job board postings, LinkedIn enrichment, industry database cross-referencing, and technographic intent data. Contacts flowed into a fully automated customer.io lifecycle — warm introduction, educational nurture, trial prompt, activation nudges, paid conversion, and renewal. Every sequence was A/B tested on subject line, send time, and content angle. No batch-and-blast; every email arrived with relevant context.

Technical Detail
  • Python scraping + enrichment pipeline (Playwright, Hunter.io, Clearbit) → deduplicated PostgreSQL store
  • dbt for contact scoring model; ICP fit score weighted by role, company size, tech stack signals
  • customer.io broadcasts + behavioral triggers for full lifecycle; 11 distinct sequences
  • Statistical A/B framework: minimum detectable effect set before each test, Bonferroni-corrected
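
As an illustration of the correction step, here is a minimal, stdlib-only sketch of how a Bonferroni-adjusted significance check works when several variants are compared against control: each comparison is judged at alpha divided by the number of comparisons. The two-proportion z-test shown is a standard formulation; thresholds and inputs are illustrative.

```python
import math

def two_proportion_p(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a difference in conversion rates
    (pooled two-proportion z-test)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (conv_a / n_a - conv_b / n_b) / se
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability

def significant(p_value: float, k_comparisons: int, alpha: float = 0.05) -> bool:
    """Bonferroni correction: with k comparisons, each result must
    clear alpha / k to count as a win."""
    return p_value < alpha / k_comparisons
```

Setting the minimum detectable effect before each test fixes the sample size up front, so a sequence is never called early just because one variant got lucky.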

The Results
  • 312: Paid subscribers acquired in the first seven days post-launch
  • 34%: Open rate on warm sequence (vs. 18% industry average)
  • 100k+: Qualified ICP contacts identified before a single cold email was sent

Applied AI · Knowledge Systems

Turning a 15-Year Document Archive Into a Two-Minute Conversation

Mid-Market Professional Services Firm · 200 Consultants · ~40,000 Documents

The Problem

Junior consultants were spending 25–40 minutes per engagement searching for relevant past deliverables, templates, and client precedents buried across a decade and a half of SharePoint chaos — miscategorized folders, inconsistent naming, no taxonomy. Senior staff had tribal knowledge; everyone else had nothing. Billable hours were evaporating in search.

What We Built

A private, air-gapped retrieval-augmented generation (RAG) system over the firm's full document corpus. Documents are chunked, embedded, and stored in a vector index. Consultants ask in natural language — "find our M&A integration playbooks for mid-market manufacturing" — and receive a synthesized answer with source citations and ranked document links. Nothing leaves the firm's Azure tenant.

Technical Detail
  • SharePoint Graph API ingestion → recursive chunking → OpenAI text-embedding-3-large
  • Azure AI Search vector index with hybrid keyword + semantic retrieval and re-ranking
  • FastAPI backend · React front-end embedded in SharePoint via SPFx webpart
  • Citation pipeline surfaces exact document, section, and page — hallucination-minimized by design
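
To show the shape of the ingestion step, here is a deliberately simplified chunking sketch. The production pipeline chunks recursively on document structure (headings, paragraphs) before embedding; this version uses fixed-size character windows with overlap, which is the same underlying idea: any passage that straddles a boundary still appears intact in at least one chunk. Sizes and the function name are illustrative.

```python
def chunk(text: str, size: int = 800, overlap: int = 200) -> list[str]:
    """Split text into fixed-size chunks with overlap so context
    spanning a chunk boundary survives in at least one chunk."""
    if size <= overlap:
        raise ValueError("chunk size must exceed overlap")
    step = size - overlap
    # Stop once the remaining tail is already covered by the overlap.
    return [text[i:i + size]
            for i in range(0, max(len(text) - overlap, 1), step)]
```

Each chunk is then embedded and stored alongside its source document, section, and page metadata, which is what lets the citation pipeline point back to the exact passage.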

The Results
  • <3min: Average research time, down from 32 minutes
  • $2.1M: Estimated annual value of recovered billable hours
  • 94%: Consultant satisfaction in 90-day internal survey

Machine Learning · Customer Success

Predicting Churn 11 Weeks Early — and Stopping It Before It Happens

Growth-Stage SaaS · $8M ARR · 1,400 Active Customers

The Problem

The customer success team was entirely reactive: they learned about churn when cancellation requests arrived. By then, the relationship was already over. With a lean CS team stretched across 1,400 accounts, there was no systematic way to identify which customers were quietly disengaging before it was too late — and no playbook for what to do when they found one.

What We Built

An ML-based customer health scoring model trained on 90-day behavioral signals: login frequency decay, feature adoption breadth, support ticket velocity, and billing friction events. Scores update nightly. When an account crosses a risk threshold, an automated customer.io playbook fires: a personalized check-in email, a targeted feature nudge based on their specific adoption gap, and — if no engagement within 72 hours — a CSM task to book a success call. The CS team now focuses entirely on accounts the model flags; everyone else is handled automatically.

Technical Detail
  • XGBoost churn classifier trained on 18 months of historical data; SHAP values surfaced per-account in Looker
  • Feature engineering pipeline in dbt; Airflow DAG for nightly score refresh → PostgreSQL
  • customer.io event-triggered sequences with account-level personalization via Liquid templates
  • Shadow-mode A/B validation: model vs. CSM intuition over 60 days before full rollout
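
The intervention logic that sits on top of the nightly scores can be sketched as a small state function: an at-risk account first gets the automated check-in, and only if it shows no engagement within 72 hours does a CSM task get created. The threshold value and field names here are illustrative, not the tuned production settings.

```python
from datetime import datetime, timedelta
from typing import Optional

RISK_THRESHOLD = 0.7  # illustrative; tuned against precision/recall in practice

def next_action(score: float, checkin_sent_at: Optional[datetime],
                engaged: bool, now: datetime) -> str:
    """Decide the next playbook step for one account, given its
    nightly risk score and the state of any prior outreach."""
    if score < RISK_THRESHOLD:
        return "none"              # healthy: stays in the automated lifecycle
    if checkin_sent_at is None:
        return "send_checkin"      # first touch: personalized check-in email
    if engaged:
        return "none"              # customer re-engaged; stand down
    if now - checkin_sent_at >= timedelta(hours=72):
        return "create_csm_task"   # escalate to a human success call
    return "wait"                  # still inside the 72-hour window
```

Keeping this logic as a pure function of score and outreach state is also what made shadow-mode validation straightforward: the same decisions could be replayed against historical accounts without sending anything.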

The Results
  • 11wk: Average advance warning before churn, enabling intervention
  • 38%: Reduction in monthly churn rate within 90 days of go-live
  • $640K: ARR retained in the first six months of operation

Your Problem Has a Number Too.

Every engagement starts with a genuine conversation — no pitch, no pressure. Tell us what's slowing you down.

Start the Conversation