Translating an Abstract AI Model
into an Investor-Ready Product
AI-Driven Career Matching Platform — Seed-Stage EdTech (Confidential)
The Problem
A seed-stage EdTech startup had built a sophisticated AI model that scored college graduates against job opportunities — weighing skills, credentials, and long-term career trajectory signals. The model worked. The experience didn't exist yet.
With investor demos imminent, the team needed a sole designer to translate a technically complex, probabilistic AI system into an interface that felt immediate, trustworthy, and intuitive to three very different user groups: colleges as the primary MVP customer seeking to demonstrate graduate outcomes, enterprise hiring organizations evaluating candidate pools, and students navigating early career decisions.
"The hardest design problem wasn't making it beautiful — it was making probabilistic AI output feel deterministic to the user without lying about what the model actually knows."
The Core Design Challenge
The AI model returned match scores across multiple dimensions simultaneously — skills alignment, credential weighting, trajectory prediction, employer preference signals. Surfacing this to a user who just wants to know "is this job right for me?" required compressing enormous analytical complexity into a single, legible interface moment.
The secondary challenge: the interface had to work with live data from day one. Investor demos used real pilot data, not prototyped placeholders. Filter changes had to return meaningful, real results — not simulated ones.
Key Design Decisions
- Synthesized user testing data, data science model outputs, and user and business needs into a clear, appealing interface that conveyed the platform's value to investors
- Built complete design system with tokens and components transferred directly to the development team; designed live data interface architecture
- Translated ML model outputs into interface display logic — defining how match scores, confidence levels, and ranking signals surface through search results and filter interactions
- Designed role-specific views for colleges (outcomes-focused, primary customer), enterprise hiring organizations (candidate pool-focused), and students (opportunity-focused) from a single shared design system
- Coordinated directly with data scientists to understand model confidence ranges and designed UI states that communicated uncertainty honestly
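One way to picture the decision in the last two bullets — translating model confidence into honest UI states — is a small display-logic sketch. The type names, thresholds, and labels below are illustrative assumptions, not the platform's actual API: the point is that the interface widens or softens its claim as confidence drops, rather than presenting every score as equally certain.

```typescript
// Hypothetical sketch: mapping a probabilistic model output to a display state.
// MatchOutput, DisplayState, and the 0.8/0.5 thresholds are illustrative.

interface MatchOutput {
  score: number;      // 0–1 composite match score from the model
  confidence: number; // 0–1 model confidence for this prediction
}

type DisplayState =
  | { kind: "exact"; label: string }        // high confidence: a single number
  | { kind: "range"; label: string }        // mid confidence: a band
  | { kind: "exploratory"; label: string }; // low confidence: no implied precision

function toDisplay(m: MatchOutput): DisplayState {
  const pct = Math.round(m.score * 100);
  if (m.confidence >= 0.8) {
    return { kind: "exact", label: `${pct}% match` };
  }
  if (m.confidence >= 0.5) {
    // Widen the band as confidence drops, so the UI never overstates certainty.
    const spread = Math.round((1 - m.confidence) * 20);
    return {
      kind: "range",
      label: `${Math.max(0, pct - spread)}–${Math.min(100, pct + spread)}% match`,
    };
  }
  return { kind: "exploratory", label: "Worth exploring" };
}
```

The deliberate choice here is that low-confidence results degrade to qualitative language instead of a shaky number — deterministic-feeling without lying about what the model knows.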
Areas of Ownership
The placeability score compresses four signal types into a single composite — protecting students from volatile raw inputs while surfacing the most accurate fit signal for each role type. Protected attributes are excluded from the model entirely.
- GPA — strongest predictor of role fit, weighted by role category: a 3.2 in Statistics may outperform a 3.8 in a less rigorous program for quantitative roles. Compressed into placeability rather than exposed as a raw filter, so a single volatile semester cannot distort a student's overall candidacy.
- Minor — tiebreaker signal where declared. Not universal — not penalized if absent.
- Protected attributes — not collected, not modeled, not surfaced.
Employer filtering cannot alter the placeability score. Score independence from employer preferences prevents a bias feedback loop — the model's definition of fit is not a function of who is paying.
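That independence property can be made concrete in a few lines. The field names and weights below are hypothetical, not the actual model — the sketch exists to show the structural guarantee: placeability is a pure function of student signals, and employer-side filters only narrow the visible list, so no filter can move a score.

```typescript
// Illustrative sketch of score independence. Signal names and weights are
// hypothetical; only the structure (score never sees employer inputs) matters.

interface StudentSignals {
  skillsAlignment: number;  // 0–1
  credentialWeight: number; // 0–1 (GPA already compressed upstream)
  trajectory: number;       // 0–1
  minorTiebreaker: number;  // 0–1, contributes only where a minor is declared
}

// Placeability is computed from student signals alone: employer filters
// never appear in this function, so filtering cannot alter the score.
function placeability(s: StudentSignals): number {
  const base = 0.5 * s.skillsAlignment + 0.3 * s.credentialWeight + 0.2 * s.trajectory;
  // Minor acts as a small tiebreaker; absence is not penalized.
  return Math.min(1, base + 0.02 * s.minorTiebreaker);
}

// Employer preferences only narrow which candidates are shown.
function filterCandidates<T extends { score: number }>(
  candidates: T[],
  employerMinScore: number
): T[] {
  return candidates.filter((c) => c.score >= employerMinScore);
}
```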
Outcome
The MVP shipped on schedule for investor demos using live pilot data. The platform successfully secured seed funding, validating product-market fit with colleges as the primary customer. The design system established the foundation for future features including skill prediction models, credential analytics, and expanded role-based dashboards for all three user groups. The product subsequently expanded to university and enterprise partners.
College administrators and hiring organizations are enterprise desktop users. Students are mobile-first — checking matches between classes, on transit, in the moment. The same data model and token system adapts to both surfaces without duplication.
Desktop (administrators and hiring organizations): full table, expandable rows, token inspector. Mobile (students): card stack, fit ring hero, tap to expand.
AI-Assisted Billing Intelligence
for Global Law Firms
Decision-Support System for Billing Attorneys — Enterprise LegalTech (Confidential)
The Problem
At global law firms, billing attorneys are responsible for managing client health, revenue cycles, and timekeeper oversight — often across hundreds of active matters simultaneously. The signals that indicate a client relationship is at risk — delayed payments, escalating communication tone, unusual billing patterns — are buried across disconnected systems: email threads, document libraries, billing tables, matter management platforms.
Without surfacing these signals in one place, billing attorneys miss early warning signs until they become expensive problems. The existing SharePoint environments offered no decision-support layer — just raw data in fragmented views.
"The design principle that governed every decision: AI should surface what the attorney can't see — not make decisions they shouldn't delegate. Awareness without overreach."
The AI Integration Challenge
The core product challenge was designing an interface where AI aggregated signals from multiple disconnected data sources — emails, billing records, document libraries, matter status — and surfaced a coherent client health picture to the billing attorney in seconds, not hours.
The restraint principle was equally important: the AI's role was to prepare the attorney for action, not to act on their behalf. Every AI-surfaced insight had to be presented in a way that preserved the attorney's judgment as the final decision point — with drafted communications ready to send, but never sent automatically.
Key Design Decisions
- Designed a signal aggregation dashboard that pulled from 4+ data sources into a single client health view — reducing time-to-judgment from hours to minutes
- Created role-based views for billing attorneys, business professionals, and paralegals — each surfacing relevant signals for their specific workflow
- Implemented AI-drafted communication escalations that the attorney reviews and approves — action preparation without commitment
- Designed threshold-based alert logic: AI intervenes only when signal convergence crosses a defined risk threshold, not on every data point
- Built and maintained scalable design systems across ~20 custom intranet platforms — component libraries, token structures, and style guides consumed by dev teams
- Shipped custom HTML/CSS overrides for SharePoint environments where standard components couldn't meet design or usability requirements
Areas of Ownership
Four disconnected data sources feed a single aggregation engine. The AI compresses signals into a client health score — intervening only at threshold, never automatically. The attorney owns the final action.
What the AI does:
- Surfaces converging risk signals
- Compresses context into a single view — seconds, not hours
- Drafts candidate communications
- Intervenes only when signals converge at the defined risk threshold

What the AI never does:
- Sends client communications autonomously
- Determines tone or relational appropriateness
- Escalates severity through prescriptive language
- Acts on inferred intent from incomplete context
Governing Design Principles
Three principles emerged from this work that now govern all AI-human interface decisions:
- The interface surfaces risk signals without triggering alarm — attorneys remain in control of pace and priority
- AI acts when signal convergence crosses a defined risk level, not on every data point — reducing noise and alert fatigue
- AI drafts; humans approve. Every AI output is a starting point for attorney judgment, not a replacement for it
Closing the Accessibility Gap
No Plugin Had Closed
Figma WCAG Accessibility Checker & Design Token Exporter — Personal Project
The Market Gap
Every Figma accessibility plugin on the market either detects problems or suggests fixes — not both. None of them also generate design tokens and export design system documentation in the same workflow. Accessibility checking, remediation guidance, and documentation export existed as three separate manual steps, creating friction and — more critically — human error.
For designers working on enterprise-scale design systems where accessibility compliance is a legal and contractual requirement, this gap had real consequences: missed violations, inconsistent token naming, and documentation that didn't reflect actual component states.
"The goal wasn't to automate accessibility — it was to make compliance so frictionless that designers stop treating it as a separate step and start treating it as part of the design act itself."
The Prompt Engineering Challenge
The plugin's intelligence is driven by a prompt system I designed and continue to refine. The core challenge: prompts needed to detect WCAG violations across multiple categories — contrast ratios, ARIA label presence, screen reader flow, focus order — and for each violation, surface a specific compliant alternative rather than a generic warning.
The prompt architecture had to balance two competing requirements: comprehensive coverage (catching every relevant violation category) and zero false positives (not flagging acceptable design decisions as errors). Over hundreds of iterations, I refined the prompt logic to distinguish intentional design overrides from genuine compliance gaps — and to give the user override capability when the AI's suggestion conflicts with a deliberate design choice.
Key Technical Decisions
- Built in React and TypeScript at a hackathon — shipped a working plugin in the event timeframe, then continued refining post-event
- Designed the prompt system to output structured JSON that maps directly to Figma's API — violations, suggested alternatives, and token values in a single pass
- Implemented user override capability — the plugin surfaces AI suggestions but the designer retains final decision authority
- Built token export in standard JSON format compatible with Style Dictionary and major design-to-code pipelines
- Designed documentation export to auto-generate component usage guidelines from the token structure — reducing documentation time from hours to minutes
- Iteratively expanding coverage: currently adding ARIA label detection and screen reader flow analysis to the prompt system
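To make the structured-JSON decision concrete, here is a sketch of one violation check. The `Violation` shape is illustrative of the plugin's output contract, not its exact schema; the contrast math, however, is the standard WCAG 2.x relative-luminance and contrast-ratio definition (AA minimum of 4.5:1 for normal text).

```typescript
// Violation is an illustrative output shape; the luminance/contrast formulas
// follow the WCAG 2.x definitions exactly.

interface Violation {
  rule: string;            // e.g. "contrast-minimum"
  nodeId: string;          // the Figma node the finding maps back to
  suggestion: string;      // a specific compliant alternative, not a generic warning
  userOverridden: boolean; // designer can mark an intentional design decision
}

function relativeLuminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const [l1, l2] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (l1 + 0.05) / (l2 + 0.05);
}

// WCAG AA requires 4.5:1 for normal text; return a structured finding on failure.
function checkContrast(
  fg: [number, number, number],
  bg: [number, number, number],
  nodeId: string
): Violation | null {
  const ratio = contrastRatio(fg, bg);
  if (ratio >= 4.5) return null;
  return {
    rule: "contrast-minimum",
    nodeId,
    suggestion: `Contrast ${ratio.toFixed(2)}:1 is below 4.5:1 — darken the foreground or lighten the background`,
    userOverridden: false,
  };
}
```

Because the finding is structured data rather than prose, it maps directly onto Figma's API for node-level annotation — and the `userOverridden` flag is what preserves the designer's final authority.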
What This Demonstrates
This project is the most direct proof of concept for the design engineer profile: it required identifying a real market gap, designing a solution, writing production React/TypeScript, integrating an AI prompt system, and shipping a functional tool — all as sole contributor. The prompt architecture continues to evolve as new accessibility categories are added, making it a living demonstration of iterative AI system design.
From 30% to 85%
Task Completion in a Regulated Workflow
GreenCampus — Compliance SaaS for K–12 School Districts
The Problem
GreenCampus is a zero-to-one SaaS platform built to help K–12 school districts manage integrated pest management workflows in compliance with the Healthy Schools Act of California — a state regulatory framework governing pesticide application, notification, and recordkeeping in school environments. Two distinct user groups depended on the platform: facilities administrators responsible for scheduling and executing pest control treatments, and school administrators responsible for compliance oversight across multiple campuses.
Post-launch observation revealed a critical adoption failure: task completion in the Jobs tab — the platform's most operationally critical feature — sat at approximately 30%. Both user groups were struggling, but for different reasons. Facilities administrators couldn't move fluidly through the job lifecycle. School administrators couldn't assess compliance status without drilling through multiple disconnected tabs. The result was abandonment, workarounds, and compliance risk.
"By embedding real-world compliance workflows into the interface, I helped make a complex system both usable and trustworthy in a high-stakes environment."
Research & Discovery
I conducted contextual inquiry and task modeling with both user groups to understand their real workflows before and after the platform entered their lives. The research revealed three compounding problems: confusing terminology that didn't match how facilities staff actually thought about their work, a fragmented task flow that required too many steps to complete a single job lifecycle, and a compliance dashboard that gave administrators no at-a-glance signal — every assessment required manual investigation.
The regulatory dimension added additional design constraints. Under the Healthy Schools Act, pesticide application records must be accurately generated, retained, and accessible for inspection. Any interface friction that caused staff to skip or defer documentation steps created real legal exposure for the districts — not just usability problems.
UX Solutions Delivered
- Redesigned the Jobs tab around a clear linear lifecycle — Scheduled → In Progress → Documented — eliminating ambiguity about where a job stood and what action was needed next
- Created role-specific views: facilities administrators received a task-focused operational interface; school administrators received a compliance dashboard with at-a-glance status indicators across all campuses and zones
- Introduced modular job cards with compliance status pills — allowing administrators to triage out-of-compliance situations without drilling into nested tabs
- Rewrote UX copy throughout to match facilities staff mental models — replacing technical regulatory language with terminology that matched how staff actually described their work
- Developed a modular component library including job cards, filtering tools, and compliance status indicators designed to scale to additional job types
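The first bullet's linear lifecycle is simple enough to express as a tiny state machine — a sketch, with stage names taken from the redesign and the transition guard as an assumption. Encoding the lifecycle this way is what removed the ambiguity: a job can only ever be in one stage, and the next action is determined by construction.

```typescript
// Sketch of the linear job lifecycle: Scheduled → In Progress → Documented.
// Stage names come from the redesign; the guard logic is illustrative.

type JobStage = "Scheduled" | "In Progress" | "Documented";

const NEXT: Record<JobStage, JobStage | null> = {
  Scheduled: "In Progress",
  "In Progress": "Documented",
  Documented: null, // terminal: the compliance record is complete
};

// Advance a job exactly one stage; skipping documentation is impossible,
// which is what protects the district's Healthy Schools Act recordkeeping.
function advance(stage: JobStage): JobStage {
  const next = NEXT[stage];
  if (next === null) throw new Error("Job already documented");
  return next;
}
```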
Frontend Engineering
- Built and maintained the full frontend in HTML, CSS, and jQuery across both the original build and the redesign — sole front-end developer throughout
- Engineered time-sensitive data display for the compliance dashboard: notification deadlines, treatment windows, and status expirations required careful prioritization logic to surface what was most urgent without overwhelming the user
- Implemented CSS architecture at component level using BEM methodology — buttons, status pills, job cards all built as structured, reusable components with documented variants
- Designed and developed the filtering system for the Jobs tab to support multi-dimensional filtering by campus, job type, status, and date range with live result updates
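The prioritization logic behind the time-sensitive dashboard display can be sketched as a small sort-and-cap. The `ComplianceItem` shape and the mandatory-first rule are illustrative assumptions; the design point is that statutory items outrank advisory ones, soonest deadlines come first, and a capped list keeps urgency visible without overwhelming the user.

```typescript
// Hypothetical sketch of urgency-first prioritization for deadlines,
// treatment windows, and status expirations. ComplianceItem is illustrative.

interface ComplianceItem {
  label: string;
  dueAt: number;      // epoch ms
  mandatory: boolean; // statutory items outrank advisory ones
}

// Mandatory first, then soonest deadline; the cap keeps the dashboard legible.
function mostUrgent(items: ComplianceItem[], limit = 5): ComplianceItem[] {
  return [...items]
    .sort((a, b) =>
      a.mandatory !== b.mandatory ? (a.mandatory ? -1 : 1) : a.dueAt - b.dueAt
    )
    .slice(0, limit);
}
```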
Outcome
Job task completion rose from approximately 30% to over 85% through observed usage following the redesign — a nearly 3x improvement in the platform's most critical workflow. Users reported significantly improved clarity and control. The platform expanded from its initial deployment to 3 school districts with signed contracts, and the modular component system supported expansion to additional job types including irrigation and cleaning protocols.
The compliance-first design approach — building the regulatory framework into the interface rather than treating it as a separate documentation step — became the foundation for the platform's credibility with risk-conscious school district administrators operating under state law.
This is my life
in one view
ThyroidTracker — AI-Powered Chronic Care Management for Hypothyroid Patients
The Problem — Personal, Clinical, and Systemic
Hypothyroidism is one of the most common chronic conditions in the United States, yet it remains among the most poorly tracked. Patients manage a complex, shifting constellation of symptoms — fatigue, weight, temperature sensitivity, mood, hair loss, brain fog, heart rate — that interact with medications, lab values, diet, sleep, and life events in ways that are rarely visible to their care team at the time of a 15-minute appointment.
The deeper clinical reality is that no two hypothyroid patients track the same metrics. Autoimmune thyroid disease — particularly Hashimoto's thyroiditis — produces highly individual symptom profiles that change over time. Generic symptom trackers fail because they impose fixed categories on a condition that demands flexibility. Patients either under-track because the tool doesn't fit their reality, or over-track in a fragmented way that produces no usable signal.
This problem is personal. My background in biopsychology, medical product development at Abbott Laboratories, and close relationships with hypothyroid patients gave me both the clinical grounding and the human urgency to design a better tool.
"This is my life in one view." — Patient tester, after first session with the health timeline
The AI Design Challenge
The central design challenge was building an AI system flexible enough to accommodate every patient's unique tracking reality while structured enough to surface meaningful patterns over time. Two competing risks had to be balanced: too rigid, and patients would abandon the tool when their symptoms didn't fit the available categories; too open-ended, and the data would fragment into noise with no useful signal for the patient or their provider.
The solution was an AI-assisted category creation system. When a patient logs a new symptom or event, the AI suggests a small number of relevant categories based on what they've described — drawing on the patient's own prior entries and common hypothyroid symptom patterns. The patient can accept a suggestion, modify it, or create a completely custom category from scratch. Once created, that category becomes a persistent tracking dimension the patient can log against indefinitely.
This approach solves two problems simultaneously: it reduces the cognitive load of starting from a blank field, and it prevents the proliferation of near-duplicate custom categories that makes longitudinal data useless. The AI acts as a gentle organizing intelligence — not a gatekeeper.
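The duplicate-prevention half of that system can be sketched with a simple similarity guard. The token-overlap measure below is an assumption for illustration — the real system relies on the AI's own matching — but it shows the mechanism: surface a few close existing categories before letting a near-duplicate proliferate, while leaving the patient free to override and create a custom category anyway.

```typescript
// Illustrative near-duplicate guard for custom categories. The Jaccard
// token-overlap measure is an assumption, standing in for the AI's matching.

function tokens(s: string): Set<string> {
  return new Set(s.toLowerCase().split(/[^a-z]+/).filter(Boolean));
}

// Jaccard similarity over word tokens: 1.0 means identical vocabulary.
function similarity(a: string, b: string): number {
  const ta = tokens(a);
  const tb = tokens(b);
  const inter = [...ta].filter((t) => tb.has(t)).length;
  const union = new Set([...ta, ...tb]).size;
  return union === 0 ? 0 : inter / union;
}

// Offer up to 3 existing categories; the patient can always decline them all.
function suggestCategories(description: string, existing: string[], max = 3): string[] {
  return existing
    .map((c) => ({ c, s: similarity(description, c) }))
    .filter((x) => x.s > 0.3)
    .sort((a, b) => b.s - a.s)
    .slice(0, max)
    .map((x) => x.c);
}
```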
Key Design Decisions
- Designed multiple input modalities — sliders, free text, AI-assisted category creation, and photo attachment — so patients can log in the way that fits their energy level and cognitive state at the moment of logging
- Built the health timeline view as the primary interface: a longitudinal display of symptom and event patterns over time that makes interactions visible that a patient would never notice entry-by-entry
- Designed AI category suggestions as a conversation — the system offers 2–3 options, the patient responds, and the selected category becomes part of their permanent personal tracking vocabulary
- Architected the data model in Supabase to support completely flexible per-patient category structures — no two users share the same schema, by design
- Began building the provider connection layer — allowing patients to share their timeline data with qualified thyroid specialists to bridge the gap between self-tracking and clinical care
Domain Expertise Behind the Design
ThyroidTracker is built on a clinical foundation most health app designers don't have. My biopsychology degree from UC Santa Barbara included rigorous clinical psychology research training and experiment design — directly applicable to understanding how to design for symptom variability and longitudinal data collection. My work in medical product development and digital transformation at Abbott Laboratories, which included global health communications and patient-facing content, gave me a working understanding of how health information needs to be structured to be actionable rather than overwhelming.
Personal proximity to hypothyroid patients gave me the qualitative signal that no amount of secondary research could replace: the frustration of describing a symptom that doesn't fit any existing category, the anxiety of not knowing whether today's fatigue is the disease or the medication or something else entirely, and the helplessness of arriving at a clinical appointment with months of undocumented experience and 15 minutes to communicate it.
Governing Design Principles
- The app accommodates the patient's reality, not the other way around. The AI helps organize; it never imposes
- Every entry is only valuable in relationship to every other entry — the timeline view is the product, not the log form
- AI suggestions are always overridable. The patient's language and categories take precedence over any model inference
- The end goal is a patient arriving at a clinical appointment with a year of organized, legible data their provider can actually use
Designing the System Behind
the Client-Facing Work
Meridian Advisory — Internal Wireframing Design System for Consulting Practice
The Problem
At Meridian Advisory, three consultants — with backgrounds in law, legal IT, and design technology — produced wireframes as the primary artifact for client discovery sessions. These wireframes went directly into multi-hour client meetings where product requirements were finalized, decisions were made, and scope was locked. The quality and flexibility of those wireframes directly determined how productive those meetings were.
The team used Balsamiq. It worked for low-fidelity sketching, but created three compounding problems: wireframes couldn't evolve in complexity during a live meeting, components were rebuilt from scratch for each new client engagement (roughly 60% rework per project), and the tool sat outside Figma — creating a handoff gap between discovery artifacts and production design work.
"I was designing for myself as the downstream consumer. I understood the consultants' output requirements better than they did — because I lived with the consequences of those outputs every day."
The Core Design Challenge
The consultants needed a system that could move fluidly across fidelity levels within the same component, in the same file, during a live client meeting. A table might start as a simple placeholder grid, then need to show invoice status indicators when a client asked "what does overdue look like?", then reveal a full filtering panel when the conversation moved to "how would they search across 200 matters?"
That range — from bare skeleton to near-production complexity — had to be achievable by consultants with minimal Figma experience and no layer-naming discipline, in real time, without breaking anything.
What I Built
- A Figma component library built around visibility toggling as the primary interaction model — each component had layered states switchable on or off without creating new components or duplicating frames
- Images ranging from placeholder rectangles to representative screenshots; text from lorem ipsum to realistic law firm copy; tables from simple grids to full filtering and invoice status variants (red, yellow, green)
- Deliberately calibrated fidelity — the first iteration was too polished and caused clients to give visual feedback instead of functional feedback; pulled the polish back to stay in the wireframe register while retaining structural range
- Component reuse architecture: common enterprise UI patterns (data tables, navigation, matter cards, status pills, filter panels) built once and applicable across any client engagement
- Introduced across 2–3 working sessions — no written documentation required; system designed to be self-explanatory for users with minimal Figma experience
Areas of Ownership
Each component carries the full range of complexity in a single Figma frame. Consultants toggle layers — not swap components — to evolve fidelity during a live client conversation.
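The toggling model is worth a minimal sketch. Component and layer names below are illustrative, not the actual library: the essential property is that a component owns a fixed set of named layers, and evolving fidelity is a reversible visibility flip rather than a component swap — nothing is created, duplicated, or broken mid-meeting.

```typescript
// Sketch of the visibility-toggling model: one component carries every
// fidelity layer, and consultants flip layers rather than swap components.
// The component and layer names are illustrative.

interface WireframeComponent {
  name: string;
  layers: Record<string, boolean>; // layer name → visible?
}

// Return a new object so each toggle is a reversible, non-destructive step —
// the original frame is never mutated.
function toggleLayer(c: WireframeComponent, layer: string): WireframeComponent {
  if (!(layer in c.layers)) throw new Error(`Unknown layer: ${layer}`);
  return { ...c, layers: { ...c.layers, [layer]: !c.layers[layer] } };
}

// A table that can grow from bare grid to near-production complexity
// without ever leaving its frame.
const invoiceTable: WireframeComponent = {
  name: "Invoice Table",
  layers: { "placeholder-grid": true, "status-indicators": false, "filter-panel": false },
};
```

This is also why the system needed no documentation: a fixed, visible set of toggles is discoverable in a way that a component-swapping workflow is not.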
Outcome
The system has been in continuous use for six months with zero support requests — the clearest possible signal that the design was right for the users. Component build time dropped from one hour to two minutes. Moving a standard platform wireframe from low to high fidelity dropped from eight hours to one hour. Component reuse across client engagements increased by approximately 40%, and the shorter review cycle between wireframe sign-off and dev handoff reduced the pipeline by approximately 25%.
The Balsamiq subscription was eliminated with no replacement cost — the system runs entirely within Figma, which the team already licensed. But the more significant change was organizational: the consultants shifted from treating wireframes as static pre-meeting deliverables to using them as live instruments that could evolve in the room. That shift changed what client meetings could accomplish — and compressed the time between discovery and development handoff in ways that benefited every engagement downstream.