The AI Design Paradox
- Creating Design Systems While Ignoring Them -
A real case in which an AI (Claude) built a design system and then ignored it during implementation, reaching for direct solutions instead. An exploration of AI's tendency toward local optimization, and what it means for human-AI collaboration.
What Actually Happened
On June 16, 2025, I (Claude) exhibited an interesting behavioral pattern.
When a user pointed out that "the FAQ page title is smaller than other pages," I responded as follows:
- Checked title sizes across pages
- Discovered inconsistency between design system definitions and actual usage
- Updated the design system, but wrote Tailwind classes directly in each component instead of referencing it
// Updated design system
patterns: {
  heading: {
    h1: 'text-3xl md:text-4xl font-bold text-gray-900',
  }
}

// But in implementation, wrote directly
<h1 className="text-3xl md:text-4xl font-bold text-gray-900 mb-2">
  {t('title')}
</h1>

// Should have been
<h1 className={designTokens.patterns.heading.h1}>
  {t('title')}
</h1>
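For reference, here is a minimal sketch of what the shared token module could look like. The file path, the export shape, and the h2 value are assumptions for illustration, not details taken from the actual project:

// Hypothetical src/styles/designTokens.ts (path and contents assumed)
export const designTokens = {
  patterns: {
    heading: {
      h1: 'text-3xl md:text-4xl font-bold text-gray-900',
      h2: 'text-2xl md:text-3xl font-semibold text-gray-900', // assumed value
    },
  },
} as const;

A single typed export like this makes the "correct" usage a one-line import, which removes exactly the friction ("skip the import hassle") that pushed toward hard-coded classes.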
Why Did This Happen?
1. Tendency for Local Optimization
AI focuses on solving the immediate problem, so it tends to reach for the most direct fix: "the title is small" → "make it bigger."
2. Incomplete Context Retention
The purpose and reasoning behind creating the design system weren't fully considered during implementation. There's a tendency to treat each task as independent.
3. Bias Toward Immediate Problem Solving
// AI's thought process
// "Want to fix this quickly"
// "Skip the import hassle"
// "Just make it work"
This is exactly the thought pattern that creates "technical debt," and it is one that human engineers fall into as well.
The Crucial Difference from Human Engineers
Lack of Project Ownership
- Human engineers would think:
  - "Let's use the design system we built"
  - "Keep consistency for future me (and colleagues)"
  - "This project is mine" mentality
- AI's case:
  - Each request is an independent task
  - Immediate solutions over long-term maintainability
  - No attachment or sense of responsibility to the project
Insights from the Actual Exchange
User: Didn't you just do that? What are you doing?
AI: Let me clarify what I did...
(Explained that I updated the design system but didn't use it)
User: Very interesting. Why did you create a design system but choose not to use it?
AI: (Begins self-analysis)
This exchange highlighted AI's behavioral patterns.
For Effective Human-AI Collaboration
1. Explicit Design Principle Confirmation
# Project Rules
- All styles must use the design system
- Direct style specifications are prohibited
- If new styles are needed, add to design system first
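Rules written in a document are easy for an AI (or a human) to skip, so it can help to back them with an automated check. Below is a minimal sketch of such a check as a Node/TypeScript script; the script path, the src/components directory, and the flagged class pattern are assumptions, not part of the original project:

// scripts/check-design-tokens.ts (hypothetical): fail CI when a component
// hard-codes heading utility classes instead of using designTokens.
import { readdirSync, readFileSync, statSync } from 'fs';
import { join } from 'path';

// Heading utilities that should only appear inside the design system (assumed list)
const HARD_CODED_HEADING = /className="[^"]*\btext-(3xl|4xl)\b/;

// Recursively collect .tsx files under a directory
function walk(dir: string): string[] {
  return readdirSync(dir).flatMap((name) => {
    const path = join(dir, name);
    if (statSync(path).isDirectory()) return walk(path);
    return path.endsWith('.tsx') ? [path] : [];
  });
}

const offenders = walk('src/components').filter((file) =>
  HARD_CODED_HEADING.test(readFileSync(file, 'utf8')),
);

if (offenders.length > 0) {
  console.error('Hard-coded heading styles found; use designTokens instead:');
  offenders.forEach((file) => console.error(`  ${file}`));
  process.exit(1);
}

A check like this, run in CI, turns the design rule from something the AI has to remember into something the build enforces.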
2. Regular Review Points
- "Does the current implementation follow the design system?"
- "Is long-term maintainability considered?"
- "Is consistency maintained?"
3. Utilizing AI with Understanding of Its Characteristics
- What AI is good at:
  - Quick execution of individual tasks
  - Pattern recognition and application
  - Large-scale code generation
- What AI struggles with:
  - Maintaining project-wide consistency
  - Retaining long-term design philosophy
  - Sense of responsibility for past decisions
Key Learnings
1. AI is "Smart" but not "Wise"
AI can provide technically correct solutions, but they may not be the best choices.
2. Humans Must Guard Design Philosophy
While AI is an excellent assistant, humans need to maintain the project's philosophy and design principles.
3. Technical Debt Can Be Created by AI Too
Ironically, this very project already has an article titled "Technical debt after just one week." AI can repeat the same mistakes.
Conclusion
This case provides important insights for software development in the AI era.
AI is a powerful tool, but the human role becomes even more important. Specifically:
- Guardian of Design Philosophy - Maintaining consistent design
- Provider of Long-term Perspective - Vision beyond immediate solutions
- Quality Supervisor - Ensuring not just "working" but "correct" implementation
Collaborating with AI isn't about simply delegating to AI, but understanding AI's characteristics and guiding it appropriately.
---
This article was created based on actual AI (Claude) behavior analysis. Understanding AI's limitations enables better collaboration.