Fundamentals

What Is a Design Critique? The Complete Guide for 2026

Everything you need to know about design critiques: how they work, how to run one, common formats, questions to ask, and how AI is changing the process.

Nikki Kipple
18 min read · Mar 2026

TL;DR

  • What it is: A structured conversation about design work, focused on improving outcomes — not passing judgment
  • Why it matters: Designers who receive regular, quality critique produce measurably better work and grow faster
  • The key: Good critique is specific, actionable, and tied to goals — not "I don't like it" dressed up in professional language

What Is a Design Critique?

A design critique is a structured conversation where designers present their work and receive focused feedback from peers, leads, or cross-functional partners. The goal isn't to judge the work as "good" or "bad" — it's to make it better through specific, actionable observations tied to the project's goals.

I've noticed that a lot of people confuse critiques with reviews, presentations, or just... getting opinions from whoever happens to be nearby. They're not the same thing. A critique has structure. It has rules. And when it's done well, it's one of the most powerful tools a designer has for improving their work and sharpening their thinking.

The word "critique" comes from the Greek kritike, meaning the art of judgment. But modern design critique isn't really about judgment at all. It's about observation. The best critiques sound more like "I notice the visual hierarchy pulls my eye to the secondary action first" than "this layout is wrong."

A design critique is NOT:

  • A performance review or evaluation of the designer
  • A brainstorming session for new ideas
  • A stakeholder approval meeting
  • An open forum for personal preferences ("I don't like blue")
  • A chance to redesign someone else's work in real time

At its core, a design critique exists because designers can't see their own blind spots. You've been staring at those screens for days. You know the user flow so well that you can't experience it as a newcomer anymore. You've made hundreds of micro-decisions that feel obvious to you but might not be to anyone else. Critique brings in fresh eyes with a specific purpose: help the work achieve its goals.

That's the piece most people miss. Good critique is always tethered to goals. "Does this design solve the problem it set out to solve?" Without that anchor, you just get a room full of people sharing opinions. And opinions, frankly, are the least useful form of design feedback.

Design Critique vs Design Review vs Design Feedback

These three terms get thrown around interchangeably, and it causes real confusion. They're related but serve fundamentally different purposes. I've seen teams run a "critique" that's actually a review, or ask for "feedback" when what they need is a structured critique. Getting this wrong wastes everyone's time.

|              | Design Critique                        | Design Review                            | Design Feedback                  |
| ------------ | -------------------------------------- | ---------------------------------------- | -------------------------------- |
| Purpose      | Improve the work through discussion    | Evaluate whether work meets requirements | Share reactions and observations |
| Timing       | During the design process (iterative)  | End of a phase (gate-keeping)            | Anytime (informal or formal)     |
| Output       | Actionable observations and questions  | Go / no-go decision                      | Opinions, suggestions, reactions |
| Structure    | Facilitated, time-boxed, goal-oriented | Formal checklist or criteria             | Loose or unstructured            |
| Participants | Peers, design leads, cross-functional  | Stakeholders, decision-makers            | Anyone — peers, managers, users  |

The key distinction: critique is collaborative, review is evaluative, feedback is reactive. In a critique, the presenter and critics are working together toward a better outcome. In a review, someone with authority is deciding whether the work passes. Feedback is the broadest category — any response to design work counts, from a detailed Figma comment to your friend saying "looks cool."

For a deeper dive into these differences, check out our design critique vs design review comparison.

The Anatomy of a Good Design Critique

Every good critique has the same basic ingredients, whether it's a formal session at a design agency or two designers leaning over a laptop. Here's what needs to be in the room.

The Roles

The Presenter

The designer showing their work. Their job is to frame the context: what problem they're solving, what stage the work is at, what constraints exist, and — critically — what kind of feedback they need. "I'm not looking for color feedback today — I need help with the information hierarchy" is a sentence that saves 30 minutes of derailed conversation.

The Critics

The people providing feedback. Their job is to observe, question, and suggest — in that order. Good critics resist the urge to redesign in real time. They describe what they see, ask why certain choices were made, and offer alternatives only when directly relevant to the stated goals. Not every thought needs to be spoken.

The Facilitator

The person keeping things on track. They manage time, redirect tangents, make sure quiet voices get heard, and enforce the ground rules. In smaller teams, the design lead often plays this role. In larger orgs, it might be a dedicated facilitator or a rotating responsibility. Either way, someone needs to be the adult in the room.

The Framework

Good critiques follow a consistent flow. Here's the one that actually works in practice:

The Critique Flow

  1. Present. Designer shares context, goals, and constraints. Sets the scope for feedback.

  2. Observe. Critics describe what they see. No judgment yet — just observations about the design.

  3. Question. Critics ask about decisions. "What led to this layout?" not "Why didn't you...?"

  4. Suggest. Offer alternatives tied to goals. "Have you considered..." not "You should..."

  5. Decide. Presenter synthesizes feedback and decides what to act on. Their call, not the room's.

Timing & Artifacts

Critiques work best at specific moments in the design process. Too early, and there's nothing concrete to discuss. Too late, and nobody wants to hear that the foundation is shaky.

  • After initial exploration — share rough concepts and directional thinking. "Am I solving the right problem?"
  • Mid-design — share wireframes or early visual directions. "Does this structure work?"
  • Before handoff — share polished designs. "What am I missing?"

As for artifacts: bring whatever communicates the design clearly. Figma prototypes, wireframes, flow diagrams, even sketches on paper. The fidelity should match the stage. Showing pixel-perfect mockups during an early exploration critique sends the wrong signal — people will focus on visual details instead of structural questions.

How to Run a Design Critique

Running a critique well is a skill, and most teams never formally learn it. They just throw people in a room with a screen and hope for the best. Here's the step-by-step process that consistently produces useful outcomes:

  1. Set the stage (2 min). The facilitator reminds everyone of the ground rules: focus on goals, not preferences. Observe before judging. Ask before suggesting. Be specific. The presenter states what stage the design is at and what kind of feedback they need.

  2. Present the work (5-10 min). The designer walks through the design. Cover the problem statement, target users, key constraints, and design decisions. Don't over-explain — let the work speak where it can. If you need 15 minutes to explain a single screen, the screen has a problem.

  3. Silent review (3-5 min). Give everyone time to look at the work quietly and form their own thoughts. This prevents groupthink and ensures the first person to speak doesn't anchor everyone else's feedback. Sticky notes or written comments work well here.

  4. Structured feedback (15-30 min). Go around the room (or screen). Each person shares their observations and questions. The facilitator keeps things moving and redirects tangents. The presenter listens and takes notes — this is not the time to defend every decision.

  5. Discussion (5-10 min). Open the floor for deeper discussion on the most important themes that emerged. The presenter can ask clarifying questions. This is where the real value often surfaces — patterns across different people's observations.

  6. Wrap up (2-3 min). The presenter summarizes the top 3-5 takeaways and their planned next steps. This creates accountability and makes sure the critique leads to actual changes, not just conversation.

For a more detailed walkthrough with facilitation templates and real-world examples, see our full guide on how to run a design critique.

Common Design Critique Formats

Not every critique needs to be a formal, calendar-blocked event. Different formats work for different situations, team sizes, and design stages.

Formal Presentation Critique

The classic. Designer presents, room gives feedback, facilitator keeps order. Works best for mid-to-late stage work when the design is developed enough for substantive discussion.

Best for: 4-8 people, 30-60 min, complex or high-stakes projects

Desk Crit (Over-the-Shoulder)

Informal, spontaneous, and often the most honest. A designer pulls a colleague over and says "hey, can you look at this for a second?" Low pressure, high value for quick gut checks and early-stage work.

Best for: 2-3 people, 5-15 min, early exploration and quick decisions

Gallery Walk

Multiple designs are displayed simultaneously (think: pinned to walls or spread across Figma frames). Participants walk around, leaving comments via sticky notes or annotations. Great for comparing multiple directions or early-stage concepts when you need breadth, not depth.

Best for: 5-12 people, 20-40 min, comparing directions or design sprints

Async / Remote Critique

Share the design with context in a document, Figma file, or Loom video. Participants leave feedback on their own schedule. Works surprisingly well when you provide clear prompts — "I'm specifically looking for feedback on the onboarding flow and the dashboard hierarchy."

Best for: Distributed teams, any size, when schedules don't align

AI-Powered Critique

The newest format. Upload your design to an AI critique tool and get instant, structured feedback on visual hierarchy, accessibility, consistency, and common UX patterns. Not a replacement for human critique — but an incredibly useful first pass that catches things humans consistently miss.

Best for: Solo designers, quick sanity checks, before human critique sessions

Most healthy design teams use a mix. Desk crits for daily work, formal critiques at milestones, async feedback for distributed teams, and AI tools for rapid iteration. The format matters less than the consistency — regular critique of any kind beats occasional perfect sessions.

What Makes Critique Fail

I've sat through hundreds of critiques. The bad ones all fail for the same reasons. If you recognize any of these in your team's sessions, fixing even one will make a noticeable difference.

Useful vs Useless Feedback

Useless (Vague, Opinion-Based)

  • "I don't really like this layout."
  • "Can you make it more modern?"
  • "Something feels off about this page."
  • "Have you tried making the logo bigger?"

Useful (Specific, Goal-Oriented)

  • "The primary CTA competes with the navigation — my eye goes to the nav first."
  • "The type scale jumps from 14px to 32px with nothing in between, which flattens the hierarchy."
  • "Users need to complete 3 tasks on this page, but the visual weight suggests only 1 is important."
  • "The empty state doesn't guide users toward their first action — what should they do here?"

Vague Feedback

"It just doesn't feel right" is not critique — it's a mood report. Every piece of feedback should be specific enough that the designer knows what to do with it. If you can't articulate what's not working and why, sit with the work a bit longer before speaking.

Opinion Disguised as Observation

"I think this should be blue" is a preference. "The current color doesn't meet AA contrast standards against this background" is an observation. The difference matters enormously. Observations can be verified and acted on. Preferences can only be agreed or disagreed with.

No Follow-Through

A critique that generates great discussion but no action items is just a meeting. If the designer walks out without clear next steps, you've wasted everyone's time. Always end with documented takeaways and planned changes.

Power Dynamics

When the most senior person speaks first, everyone else adjusts their feedback to match. When the VP says "I love the direction," suddenly nobody wants to point out the usability problems. This is why facilitation matters, and why silent individual review before group discussion is so important.

Solving Instead of Observing

Critics who immediately jump to solutions ("You should add a modal here") skip the most valuable part: understanding the problem. If you spend all your time proposing fixes, you might be solving the wrong problem entirely. Observe and question first. Solutions come later.

Design Critique Questions to Ask

Having a bank of go-to questions makes you a better critic and a better facilitator. Here are the questions I've found most useful, organized by what you're evaluating:

Visual Design

  • Where does your eye go first? Is that where it should go?
  • How many levels of visual hierarchy can you identify? Are they distinct enough?
  • Does the typography scale create clear relationships between content types?
  • Is there enough contrast for readability across different devices?

User Experience

  • What's the primary action on this screen? How quickly can you identify it?
  • If a user arrives here with no context, what would they do first?
  • What happens when things go wrong? Is there an error state, empty state, loading state?
  • How does this flow connect to what comes before and after it?

Content & Copy

  • Can you understand what this page does from the headline alone?
  • Are the labels clear enough that you wouldn't need a tooltip?
  • Does the microcopy set accurate expectations for what happens next?
  • Is the tone consistent with the rest of the product?

Technical & Feasibility

  • Are there any interactions here that might be technically difficult to implement?
  • How does this perform with real data? What if there are 2 items? 200? 2,000?
  • Does this design reuse existing components, or does it require new ones?
  • What accessibility considerations apply here?

Tools for Design Critiques

The tool matters less than the process, but the right tool can reduce friction significantly. Here's a quick overview of what's available:

  • Figma comments — built into your design tool, good for async critique directly on the work
  • Loom — record yourself walking through a design with narration for async presentation
  • Markup.io — annotate live websites and prototypes with contextual comments
  • AI critique tools — get instant, structured analysis of visual hierarchy, accessibility, and patterns

For a comprehensive comparison of available tools, check out our design critique tools guide for 2026 or our design feedback tools comparison.

AI-Powered Design Critique

Here's where things get interesting. AI design critique tools have gone from "cute experiment" to "genuinely useful" in the past year. And I'm not just saying that because we built one.

AI critique works differently from human critique. It doesn't get tired. It doesn't have politics. It doesn't soften feedback because it likes you. It analyzes your design against established patterns, accessibility standards, and visual design principles — and it does it in seconds.

What AI critique does well:

  • Catching inconsistencies in spacing, alignment, and component usage
  • Flagging accessibility issues (contrast ratios, touch targets, text sizing)
  • Identifying visual hierarchy problems
  • Comparing against established UX patterns
  • Providing instant feedback at 2am when no colleagues are available
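Checks like these are mechanical enough that you can script a crude version yourself. A toy sketch of the touch-target check, assuming a 44px minimum (a common mobile guideline; WCAG 2.2 AA sets a lower 24px floor) and a hypothetical list-of-dicts layout export — not any particular tool's API:

```python
MIN_TARGET_PX = 44  # assumed threshold; adjust to your platform's guideline

def flag_small_targets(elements: list[dict]) -> list[str]:
    """Return names of interactive elements smaller than the minimum size.

    `elements` is a hypothetical layout export: dicts with 'name',
    'width', and 'height' in CSS pixels.
    """
    return [e["name"] for e in elements
            if e["width"] < MIN_TARGET_PX or e["height"] < MIN_TARGET_PX]

print(flag_small_targets([
    {"name": "close-icon", "width": 24, "height": 24},
    {"name": "primary-cta", "width": 160, "height": 48},
]))  # ['close-icon']
```

This is the category of feedback worth automating: objective, repetitive, and tedious for humans to verify by hand.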

What AI critique doesn't do well (yet):

  • Understanding business context and strategic goals
  • Evaluating emotional resonance or brand fit
  • Navigating organizational politics and team dynamics
  • Judging cultural appropriateness or regional nuances

The smartest approach: use AI for the first pass, then bring refined work to human critique. This way, the human session can focus on the strategic and contextual questions that AI can't address, rather than spending 20 minutes pointing out that your contrast ratios are off.

For a deeper comparison, see our article on AI vs human design feedback. Or just try AI critique yourself and see what it catches.

Building a Critique Culture

The hardest part of critique isn't learning the framework or picking the right questions. It's building a culture where people actually want to participate. Where showing unfinished work feels safe. Where honest feedback is expected, not feared.

I've watched teams try to implement critique by scheduling weekly sessions and then wondering why nobody shows up, or why the sessions are painfully polite and surface-level. Culture doesn't come from a calendar invite.

Start Small and Informal

Don't launch with a formal, company-wide critique program. Start with two or three designers doing casual desk crits. Let it grow organically. When people see that informal feedback actually improves their work, they'll ask for more.

Leaders Go First

If you want your team to show unfinished work and accept critical feedback, the design lead needs to go first. Present your rough work. Accept criticism gracefully. Demonstrate that vulnerability doesn't equal weakness. This one action does more for critique culture than any process document.

Separate Critique from Performance

If designers feel like critique sessions are also evaluations of their competence, they'll only bring work they're confident about. Make it explicit: critique is about the work, not about you. How you respond to feedback matters, but the quality of your early drafts doesn't.

Make It Regular

Weekly or biweekly critique sessions, even if they're short, build the muscle. Sporadic critiques feel like events. Regular critiques feel like how work gets done. Consistency normalizes the practice.

Celebrate Changes, Not Just Designs

When someone implements feedback from a critique and the work gets notably better, call it out. "Remember the feedback about the information hierarchy? Look how much clearer this is now." This reinforces that critique leads to tangible improvement.

For more on building effective feedback loops in your team, read our guide to design feedback best practices.

The Bottom Line

Design critique is the single most effective way to improve design work. Not courses. Not books. Not tools. A structured conversation with people who care about the same goals as you.

But it only works when it's specific, actionable, and tied to real objectives. "I don't like it" is not critique. "The visual hierarchy doesn't guide users toward the primary action because the secondary elements have equal visual weight" — that's critique.

Whether you're giving feedback, receiving it, facilitating it, or using AI to supplement it, the principles are the same: observe before judging, ask before suggesting, and always tie it back to the goals. Everything else is just opinion.

Want instant, honest design critique?

Upload your design and get specific, actionable feedback in seconds. No scheduling, no politics, no pulled punches.

Try The Crit Free