
How to Write a Case Study When You Have No Metrics

No analytics access. NDA restrictions. Student project. The numbers don't exist. Here's how to write a compelling case study anyway — because the lack of metrics isn't the real problem.

Nikki Kipple
5 strategies · Mar 2026

TL;DR

  • Key insight: Hiring managers want evidence of impact — metrics are just one form of evidence
  • Alternatives: Qualitative outcomes, proxy metrics, usability test results, team adoption
  • Never do: Leave outcomes blank or make up numbers
  • Best move: Be honest about what you know and specific about what you observed

The “No Metrics” Problem

Every piece of portfolio advice says the same thing: show your impact with metrics. Increased conversion by 34%. Reduced support tickets by 200/month. Improved NPS from 42 to 67.

Great advice. Except for the part where most designers don't have those numbers.

Maybe you were a junior and didn't have analytics access. Maybe the project was under NDA and you can't share specifics. Maybe it was a student project and there are no real users. Maybe the company just didn't measure things. Maybe the feature shipped after you left.

Whatever the reason, you're staring at the “Results” section of your case study with nothing to put there. And the advice you find online doesn't help — it just keeps telling you to “quantify your impact.”

Here's the thing nobody tells you: metrics are not the only form of evidence. They're the most convenient form, but hiring managers are really looking for something deeper — evidence that your work mattered and that you understand why it mattered.

Why Outcomes Matter (And What Counts)

When a hiring manager reads your case study, they're asking one question: “Does this person's work actually make a difference?”

Metrics answer that question efficiently. But they're not the only answer. Here's what else counts as evidence of impact:

Evidence of impact, ranked by strength:

  1. Quantitative metrics — conversion rates, time-on-task, error rates, revenue impact. The gold standard.
  2. Proxy metrics from testing — usability test success rates, task completion times, error counts from your own research.
  3. Qualitative outcomes — user quotes, team feedback, stakeholder reactions, adoption patterns.
  4. Process outcomes — design was selected over alternatives, shipped without revisions, influenced team decisions.
  5. Learning outcomes — what you discovered, what changed about your approach, what you'd do differently.

Most case studies without metrics fail because they skip all of these. The designer shows their process — wireframes, flows, mockups — and then... nothing. The story just stops. Instead, pick the strongest evidence type available to you and commit to it.

Qualitative Outcomes That Work

Qualitative outcomes are underrated. A specific user quote or a concrete team reaction can be more memorable than a percentage. Here's what works:

User quotes from testing

If you did any user testing — even informal — pull direct quotes. “Oh, this is so much easier than before” from a test participant is genuine evidence that your design improved the experience. Pair quotes with simple counts from the same sessions whenever you can:

“7 of 8 participants completed the checkout flow without assistance, compared to 3 of 8 on the previous version.”

Team and stakeholder reactions

How did the team respond? Was your design selected? Did engineering implement it without pushback? Did a stakeholder change their mind based on your research?

“The design was selected by the product team over two alternative approaches and shipped in the next sprint with no revision requests from engineering.”

Adoption and influence

Did your work get reused? Did other teams adopt your pattern? Did it become a component in the design system?

“The onboarding flow I designed became the template for 3 other product teams launching new features that quarter.”

Before/after comparisons

Even without metrics, showing what existed before and what you created after tells a visual story. Screenshots of the old flow vs. the new flow speak for themselves.

“The previous settings page had 47 options on a single screen. The redesign organized them into 5 contextual categories, reducing the visible options at any time to 8-12.”

Proxy Metrics: The Next Best Thing

You might not have access to production analytics, but you probably have some data you're not thinking of. Proxy metrics come from your own research and testing — they're real numbers, just from a smaller sample.

Sources of proxy metrics:

  • Usability testing: Task success rate, time-on-task, error rate, SUS scores. Even 5 participants give you real numbers.
  • A/B testing in prototypes: If you tested two versions with users, report which performed better and by how much.
  • Heuristic evaluation scores: If you ran a heuristic evaluation before and after, the improvement is measurable.
  • Survey data: Post-test satisfaction scores, preference rankings, ease-of-use ratings.
  • Complexity reduction: Number of steps in a flow, number of form fields, number of screens to complete a task. These are countable.

The key is labeling these honestly. “In usability testing with 8 participants...” is all you need to say. It's clear this isn't production data, and it's still compelling evidence that your design works.
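
If you want to sanity-check the arithmetic before putting proxy metrics in a case study, it's easy to do yourself. Below is a minimal sketch in Python (the participant data and numbers are made up for illustration) that computes a task success rate and a System Usability Scale (SUS) score from raw test responses. A spreadsheet works just as well; the point is that these numbers exist long before anything ships.

# Minimal sketch: computing proxy metrics from a small usability test.
# All data below is made up for illustration.

# Task success: did each participant complete the task without assistance?
task_results = [True, True, False, True, True, True, False, True]  # 8 participants
success_rate = sum(task_results) / len(task_results)
print(f"Task success rate: {success_rate:.0%}")  # 75%

# SUS: each participant rates 10 statements on a 1-5 scale.
# Standard scoring: odd items contribute (score - 1), even items (5 - score);
# sum the contributions and multiply by 2.5 for a 0-100 score.
def sus_score(responses):
    total = 0
    for item, score in enumerate(responses, start=1):
        total += (score - 1) if item % 2 == 1 else (5 - score)
    return total * 2.5

participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [5, 1, 4, 2, 5, 1, 4, 2, 5, 2],
    [3, 3, 4, 2, 4, 2, 3, 3, 4, 2],
]
scores = [sus_score(p) for p in participants]
print(f"Average SUS: {sum(scores) / len(scores):.1f} across {len(scores)} participants")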

How to Frame “I Don't Know the Numbers”

Sometimes you genuinely have nothing — no testing data, no analytics, no feedback. The project shipped after you left, or it was cancelled, or the company didn't measure anything. In these cases, honesty is your best strategy.

What to say

“I left the company before this feature launched, so I don't have access to post-launch metrics. Based on usability testing during development, the new flow reduced the average task completion time from 4 minutes to under 2 minutes across 6 test participants.”

What NOT to say

“The redesign improved the user experience.” (Too vague — how? For whom? By how much?)

The magic phrase

“If I had access to production data, I would measure [X] to validate this design.” This shows you think about measurement even when you don't have it — and that's a signal of design maturity.

Student & Conceptual Projects

If you're early in your career, most or all of your case studies might be from school or self-directed projects. That's completely fine — but the framing matters.

Rules for student project case studies:

  • Be transparent. Label it as a student or conceptual project. Don't pretend it's a real product launch. Hiring managers always figure it out, and the dishonesty is worse than the truth.
  • Focus on your research and thinking. Without real production constraints, your process is even more important. Show how you identified the problem, validated assumptions, and iterated on solutions.
  • Use usability testing data. You can always test your designs with 5 people. Recruit classmates, friends, or use remote testing tools. This gives you real data to report.
  • Show what you'd measure. “If this product launched, I would track signup completion rate, time-to-first-value, and 7-day retention.” This demonstrates product thinking.
  • Include constraints. “This was a 3-week project with a team of 2” sets appropriate expectations and shows you can deliver under constraints.

Need help structuring your student case study? Our Case Study Builder walks you through the structure step-by-step, or read the full case study structure guide for the complete framework.

NDA & Confidential Work

NDAs are real, and violating them can have serious consequences. But an NDA doesn't mean you can't include the project at all — it means you need to be creative about what you share.

Anonymize the client

“I redesigned the onboarding flow for a Fortune 500 fintech company” — specific enough to be interesting, vague enough to be safe.

Show process, not final screens

Wireframes, flow diagrams, and research artifacts are usually safe to share even under strict NDAs. The visual design of the shipped product is what's typically protected.

Use relative metrics

“Improved conversion by 34%” doesn't reveal the actual numbers. Most NDAs protect absolute figures rather than relative improvements, but check your specific agreement before sharing either.

Ask your former employer

Many companies are fine with you showing sanitized work samples. A quick email asking “Can I include X in my portfolio?” often gets a yes.

Before & After Examples

Here's what the outcomes section looks like when it's bad vs. when it's good — even without hard metrics:

❌ Weak (no evidence)

“The client was happy with the final design. The new interface improved the user experience and made the product more intuitive.”

Problem: No specifics. No evidence. Could describe literally any design project ever.

✅ Strong (qualitative evidence)

“In usability testing with 6 participants, all 6 completed the checkout flow without assistance (vs. 2 of 6 on the previous version). The engineering team implemented the design in one sprint with no revision requests. The product manager noted it was the smoothest design-to-dev handoff in the team's history.”

Why it works: Specific numbers from testing, team adoption signal, concrete praise with attribution.

❌ Weak (vague process)

“I conducted user research, created wireframes, and designed high-fidelity mockups. The project was well-received.”

Problem: Describes activities, not outcomes. “Well-received” by whom? How do you know?

✅ Strong (honest framing)

“I left the company before launch, so I don't have production metrics. However, the design reduced the settings page from 47 options on one screen to 5 contextual categories (8-12 visible options at a time). In prototype testing, average task completion time dropped from 3.5 minutes to 1.2 minutes. If I were measuring post-launch, I'd track support ticket volume for settings-related issues and task completion rates in analytics.”

Why it works: Honest about limitations, specific about what you can measure, shows measurement thinking.

Building Measurement Into Future Work

The best long-term solution to the “no metrics” problem is to start measuring from the beginning. Even if your company doesn't have a data culture, you can create your own:

  • Run usability tests on every project. Even 5 participants give you data. Record task success rates, time-on-task, and satisfaction scores.
  • Define success metrics before you start designing. Ask your PM: “How will we know if this is successful?” If they don't know, propose metrics yourself.
  • Screenshot your analytics before and after. If you have access to any dashboard — even basic page views — capture the before state so you can compare later.
  • Keep a work journal. Document decisions, feedback, and outcomes as they happen. You'll forget the details 6 months later when you're writing the case study.
  • Ask for data retroactively. If you left a company, email your former PM and ask: “Hey, do you happen to know how [feature] performed?” You'd be surprised how often this works.

The designers who consistently have strong case studies aren't necessarily at more data-driven companies — they just build measurement into their process from day one.

Ready to structure your case study? Our Case Study Builder guides you through each section including outcomes — or get a portfolio critique to see how your current case studies stack up.


Written by Nikki Kipple
Product Designer & Design Instructor

Designer, educator, founder of The Crit. I've spent years teaching interaction design and reviewing hundreds of student portfolios. Good feedback shouldn't require being enrolled in my class — so I built a tool that gives it to everyone.
