
Lessons from AIEO-Checker Development Failure: Content Optimization in the Era Where AI Joined Your Readers

Explore the challenges faced in developing an AIEO scoring tool, including different evaluation criteria for each page type and context-dependent evaluation items, revealing the limitations of uniform scoring.

Tags: AIEO, Learning from Failures, AI Development, Scoring, Evaluation Criteria


Introduction: Background of the AIEO-Checker Project


On June 19, 2025, we embarked on developing "AIEO-Checker," a tool designed to quantitatively measure the effectiveness of AIEO (AI Engine Optimization). The tool aimed to score how well web pages are understood by AI and provide improvement suggestions.

Using Claude Code for development, we went from prototype implementation to problem discovery to the conclusion of "failure" in a single day. That speed of trial and error gave us, within hours, insights that would traditionally have taken weeks to reach.

This article shares the valuable insights gained from this rapid failure.


AIEO-Checker Concept and Implementation Approach

Initial Concept


AIEO-Checker was planned to evaluate the following elements:

  1. Presence and Quality of Structured Data
     - JSON-LD implementation status
     - Appropriateness of Schema.org markup
     - Completeness of metadata
  2. Content Clarity
     - Logical heading structure
     - Paragraph conciseness
     - Appropriate keyword placement
  3. Credibility Indicators
     - Presence and quality of citations
     - Clear author information
     - Display of update timestamps
  4. Technical Optimization
     - Page load speed
     - Mobile responsiveness
     - Accessibility


Implementation Attempt

```javascript
// Initial scoring logic (simplified)
function calculateAIEOScore(pageData) {
  let score = 0;

  // Check structured data
  if (pageData.hasStructuredData) score += 20;

  // Check for citations
  if (pageData.hasCitations) score += 15;

  // Evaluate heading structure
  if (pageData.hasProperHeadings) score += 10;

  // ... other evaluation items

  return score;
}
```


Fundamental Challenges Encountered

1. Different Evaluation Criteria by Page Type


The first problem we encountered was that important evaluation items vary significantly by page type.


Example: Presence of Citations

  • News Articles: Citations are important for credibility (+15 points)
  • Product Pages: Citations unnecessary, specifications more important
  • About Us Pages: Achievements and data more important than citations
  • Blog Posts: Citation importance varies by content
```javascript
// Problem with uniform evaluation regardless of page type
if (hasCitations) {
  score += 15; // Same points for all pages... is this right?
}
```
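One direction the fix could take is to make the weight depend on the page type. The sketch below is hypothetical; the point values simply restate the article's examples (citations matter for news, less for blogs, not at all for product or company pages) and are not measured data.

```javascript
// Illustrative page-type-aware weighting; values are assumptions, not measurements.
const citationWeightByType = {
  newsArticle: 15, // credibility hinges on sources
  blogPost: 8,     // importance varies by content, so a middle value
  productPage: 0,  // specifications matter more than citations
  aboutPage: 0     // achievements and data matter more than citations
};

function citationScore(pageType, hasCitations) {
  if (!hasCitations) return 0;
  // Unknown page types contribute nothing rather than a default bonus
  return citationWeightByType[pageType] ?? 0;
}
```

Even this sketch only pushes the problem back one level: someone still has to decide the per-type weights, which is exactly the assumption that proved unreliable.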


2. Difficulty in Context-Dependent Evaluation

Example: Content Length


The optimal value for "content length" differs based on page purpose:

  • Detailed Technical Articles: 3000+ words preferred
  • FAQ Items: Concise 200-300 words optimal
  • Product Overview: Balanced around 1000 words
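Encoding even this one rule shows how quickly "optimal length" stops being a single number. The ranges below restate the article's examples and are rough guides, not rules; the function and its thresholds are illustrative.

```javascript
// Hypothetical target word-count ranges per page type (illustrative values).
const targetLength = {
  technicalArticle: { min: 3000, max: Infinity },
  faqItem: { min: 200, max: 300 },
  productOverview: { min: 800, max: 1200 }
};

function lengthFeedback(pageType, wordCount) {
  const range = targetLength[pageType];
  if (!range) return 'unknown page type';
  if (wordCount < range.min) return 'too short for this page type';
  if (wordCount > range.max) return 'too long for this page type';
  return 'within the expected range';
}
```

A 1000-word FAQ answer and a 1000-word product overview get opposite verdicts from the same input, which is exactly why a context-free length score misleads.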


3. Black Box Nature of AI Evaluation Criteria


The most challenging aspect was that we cannot fully know what criteria AI uses to evaluate pages.


Gap Between Our Assumptions and Reality

```javascript
// Weighting based on our assumptions
const weights = {
  structuredData: 0.3,    // 30% importance?
  contentClarity: 0.25,   // 25% importance?
  citations: 0.2,         // 20% importance?
  technicalSEO: 0.25      // 25% importance?
};

// However, actual AI evaluation criteria...
// - Varies by model
// - Changes dynamically with context
// - Cannot be perfectly replicated
```


4. Ambiguity of Overall Score Meaning


The limitations of expressing everything with a single score became clear:

  • Page A with score 85: Perfect structure but thin content
  • Page B with score 85: Average structure but rich content

Same score, but completely different improvement points.
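One way to see the problem concretely: if the tool reported the per-dimension breakdown alongside the total, the two pages would stop looking identical. The dimension names and point values below are hypothetical examples, not the tool's actual output.

```javascript
// Report the breakdown alongside the total instead of one opaque number.
function summarize(breakdown) {
  const total = Object.values(breakdown).reduce((sum, v) => sum + v, 0);
  return { total, breakdown };
}

// Page A: perfect structure but thin content
const pageA = summarize({ structure: 45, content: 15, credibility: 25 });
// Page B: average structure but rich content
const pageB = summarize({ structure: 25, content: 40, credibility: 20 });

// Both totals are 85, yet the needed fixes are opposite:
// Page A should enrich its content; Page B should improve its structure.
```

The total still hides the difference; only the breakdown tells each page what to do next.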


Important Insights from Failure

1. Limitations of Uniform Scoring


AIEO cannot be evaluated with a uniform checklist like traditional SEO. Optimization direction varies greatly depending on page purpose, target audience, and content type.


2. Importance of Understanding Context

```markdown
❌ Bad Approach:
"Adding citations to all pages will increase the score"

✅ Good Approach:
First ask "What is this page's purpose? What are readers looking for?",
then determine the necessary elements
```


3. Addressing AI Diversity


Different AI models (GPT-4, Claude, Gemini, etc.) have different characteristics:

  • GPT-4: Emphasizes structured data
  • Claude: Values context consistency
  • Gemini: Utilizes multimodal information

A single evaluation criterion cannot address this diversity.
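If one were to model this anyway, it would take a separate weight profile per model. The profiles below merely restate the article's assumed tendencies (GPT-4 favors structure, Claude favors context consistency, Gemini uses multimodal signals); the actual criteria are not observable, so these numbers are pure illustration.

```javascript
// Hypothetical per-model weight profiles; values restate the article's
// assumptions and cannot be verified against the real models.
const modelProfiles = {
  gpt4:   { structuredData: 0.4, contextConsistency: 0.3, multimodal: 0.3 },
  claude: { structuredData: 0.3, contextConsistency: 0.5, multimodal: 0.2 },
  gemini: { structuredData: 0.3, contextConsistency: 0.3, multimodal: 0.4 }
};

// Combine a page's signal strengths (0..1) under one model's profile.
function weightedScore(model, signals) {
  return Object.entries(modelProfiles[model])
    .reduce((sum, [key, w]) => sum + w * (signals[key] ?? 0), 0);
}
```

The same page then scores differently under each profile, which restates the article's point: there is no single number to optimize toward.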


Recommendations for Future AIEO Strategies

1. Page Type-Specific Optimization Strategies

News/Blog Articles

  • Emphasize credibility indicators (author, date, citations)
  • Implement structured data (Article schema)
  • Clear heading structure


Product/Service Pages

  • Structure specification information
  • Enrich FAQ sections
  • Utilize user reviews


Company Information Pages

  • Structure organization information (Organization schema)
  • Display achievement data clearly
  • Manage update history


2. Continuous Improvement Approach

```javascript
// Checklist approach instead of scoring
const aieoChecklist = {
  'productPage': [
    'Implement structured data (Product schema)',
    'Clear specification descriptions',
    'FAQ section setup',
    'Links to related products'
  ],
  'blogPost': [
    'Display author information',
    'Show publication/update timestamps',
    'Proper heading structure',
    'Links to related articles'
  ]
  // Different checklists for each page type
};
```
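A checklist like this can be turned into a progress report that lists concrete next actions instead of a number to chase. The sketch below is hypothetical: the `completed` array stands in for whatever audit determines which items a page already satisfies.

```javascript
// Reuses the blog-post checklist structure from above, inlined to be self-contained.
const blogPostChecklist = [
  'Display author information',
  'Show publication/update timestamps',
  'Proper heading structure',
  'Links to related articles'
];

// Compare a page's completed items against its checklist.
function checklistReport(checklist, completed) {
  const remaining = checklist.filter(item => !completed.includes(item));
  return {
    done: checklist.length - remaining.length,
    total: checklist.length,
    remaining // concrete next actions, not an opaque score
  };
}

const report = checklistReport(blogPostChecklist, [
  'Display author information',
  'Proper heading structure'
]);
```

Here `report.remaining` tells the author exactly what to work on next, which is the kind of actionable output the single score never provided.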


3. Importance of Qualitative Evaluation


Elements that cannot be quantified are also important:

  • Content originality
  • Value provision to readers
  • Information accuracy and currency


Conclusion: What We Learned from Failure


The development of AIEO-Checker ended in technical failure. However, the insights gained from this experience are extremely valuable for future AIEO strategies.


Key Points

  1. There's No Universal Solution for AIEO
     - Optimization must match the page's purpose
  2. Context is Everything
     - The same element's value changes with context
  3. Continuous Experimentation and Improvement
     - Strategies must evolve as AI evolves
  4. Don't Lose Sight of the Essence
     - Provide value to both readers and AI, rather than chasing scores


Final Thoughts


Failure is the first step to success. The insights gained from AIEO-Checker development became the foundation for more practical and effective AIEO strategies.

Rather than pursuing uniform scores, optimize according to each page's purpose. We believe this is the path to true AIEO success.

We hope your AIEO strategies can gain something from this failure story.