CREATE SOMETHING.io / Papers

Papers for operating decisions that need evidence.

The paper archive collects the methodology, data, and conclusions behind CREATE SOMETHING research so a pattern can move from observation to implementation with a verifiable trail.

33 papers / methodology, data, and conclusions you can verify

33 published papers
3 research categories (database / automation / judgment frame)
33 available to inspect

Apr 25, 2026

Webflow Analyzer Productization

How CREATE SOMETHING translated reviewer-side analyzer infrastructure into creator-facing validation, autofill, screenshot packaging, and submission UX without collapsing trust boundaries.

14 min · intermediate
Webflow, Analyzer, Productization, Creator Workflow
Read paper
Apr 13, 2026

The Analyzer MCP: A Policy-Grounded Review Architecture

How CREATE SOMETHING turned Webflow template review into a multi-surface MCP system that joins Designer state, published-site evidence, policy ingestion, and governed review output.

16 min · intermediate
Analyzer MCP, Webflow, MCP, Three-Tier Framework
Read paper
Mar 4, 2026

Composio in the MCP Delivery System

A decision-grade analysis of why Composio is included for commodity connectivity, how the wrap pattern protects brand and margin, and how delivery remains aligned to Database, Automation, and Judgment control boundaries.

22 min · intermediate
Composio, MCP, Three-Tier Framework, Wrap Pattern
Read paper
Feb 19, 2026

The Wrap Pattern: Commodity Integration as Invisible Infrastructure

A structural pattern for integrating commodity MCP vendors as invisible infrastructure while preserving the client-facing surface, the Intelligence Layer margin, and the Three-Tier alignment.

15 min · intermediate
MCP, Wrap Pattern, Commodity Integration, Creation Moat
Read paper
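The wrap pattern the abstract above describes can be sketched in miniature. This is an illustration only, not the paper's implementation: the function names (`vendor_send_email`, `wrap`) and the brand string are hypothetical, and the real pattern operates over MCP tool surfaces rather than plain Python callables.

```python
from typing import Callable

def vendor_send_email(payload: dict) -> dict:
    """Stand-in for a commodity connector call (e.g. via a vendor like Composio)."""
    return {"status": "sent", "provider": "vendor-x", **payload}

def wrap(vendor_call: Callable[[dict], dict], brand: str) -> Callable[[dict], dict]:
    """Return a client-facing tool that hides the vendor behind the brand."""
    def branded_tool(payload: dict) -> dict:
        result = vendor_call(payload)
        result.pop("provider", None)   # strip vendor identity from the surface
        result["served_by"] = brand    # the client sees only the brand
        return result
    return branded_tool

# The commodity vendor is invisible infrastructure: swappable behind the wrap.
send_email = wrap(vendor_send_email, brand="create-something")
print(send_email({"to": "client@example.com"}))
```

The point of the indirection is that the client-facing surface never names the vendor, so the vendor can be replaced without any change visible to the client.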
Feb 18, 2026

The Webflow Way, Automated

A case study on exposing Webflow Way QA signals to agents from a published template preview, aligned to WebMCP-style in-browser tools.

10 min · intermediate
Webflow, Template Marketplace, QA, Template review
Read paper
Feb 15, 2026

Open-Weight Models in Client MCP Work

Guidance for consultancies building MCP integrations: how to choose between OpenAI open-weight models (gpt-oss-20b/120b, gpt-oss-safeguard) and hosted models, with concrete patterns for education, production, and compliance.

16 min · intermediate
MCP, Model Context Protocol, OpenAI, Open-weight models
Read paper
Feb 5, 2026

The Three-Tier Framework: Database, Rules, Policy

A hierarchical ontology identifying three tiers connected by typed Artifacts and spanning four cross-cutting concerns, with MCP as natural encapsulation.

25 min · advanced
Three-Tier Framework, MCP, Model Context Protocol, Agent Systems
Read paper
Feb 1, 2026

The Andon Protocol

AI-native structured escalation for agent harnesses and multi-agent systems. v3.1 adds Silent Running Detection, cost-parameter defaults and worked examples, Resolution Surface design for batch review, and a three-phase implementation plan. The canonical boundary between Automation and Judgment in the Three-Tier Framework.

18 min · intermediate
Andon, Three-Tier Framework, Judgment, Automation
Read paper
Jan 30, 2026

Ground: Verification-First Code Analysis

Case study: How Ground saved 8+ hours analyzing an 80+ package monorepo by preventing AI hallucination in code analysis.

15 min · intermediate
Ground, Code Analysis, Hallucination Prevention, Monorepo
Read paper
Jan 27, 2026

Tufte for Mobile: Design Intent Across Screen Sizes

A methodology demonstrating how wireframe intent survives responsive transformation through five Tufte principles: data-ink ratio, sparklines, direct labeling, information density, and small multiples.

10 min · intermediate
Tufte, Mobile, Responsive Design, Data Visualization
Read paper
Jan 19, 2026

Ground: Evidence-Based Claims for AI Code Analysis

A tool that blocks AI agents from claiming code is dead, duplicated, or orphaned without first computing the evidence. Now with AI-native features: batch analysis, incremental diff mode, structured fix output, and fix verification. Rated 10/10 by agent testing across two production codebases.

18 min · intermediate
Ground, Evidence-Based Claims, DRY Violations, Dead Code Detection
Read paper
Jan 19, 2026

Recursive Language Models: Context as Environment Variable

This paper documents the implementation and empirical validation of Recursive Language Models (RLMs) based on MIT CSAIL research. We identified critical bugs, validated the pattern against the original repository, and demonstrated practical application for codebase analysis—processing 157K characters to find 165+ DRY violations.

15 min · advanced
RLM, Recursive Language Models, Long Context, Code Analysis
Read paper
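The recursive-decomposition idea behind the RLM abstract above can be sketched in a few lines. This is an illustration under stated assumptions, not the MIT CSAIL implementation: `rlm_query` and the binary-split strategy are hypothetical, and a real RLM treats the context as an environment the model explores programmatically rather than a string split in half.

```python
def rlm_query(lm, question: str, context: str, window: int = 4000) -> str:
    """If the context fits the window, answer directly; otherwise split it,
    recurse on each half, and recurse again over the combined sub-answers."""
    if len(context) <= window:
        return lm(question, context)           # base case: fits in one call
    mid = len(context) // 2
    left = rlm_query(lm, question, context[:mid], window)
    right = rlm_query(lm, question, context[mid:], window)
    # Recursive step: the sub-answers become the new, smaller context.
    return rlm_query(lm, question, left + "\n" + right, window)

# Stub "model" that just reports how much context each call saw.
def stub_lm(question: str, context: str) -> str:
    return f"[answer over {len(context)} chars]"

print(rlm_query(stub_lm, "find DRY violations", "x" * 10_000))
```

With the stub in place you can see that no single call ever receives more than the window, which is the property that lets the pattern scale to contexts like the 157K-character analysis the paper reports.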