If a customer asks ChatGPT for the best running shoes under $100, your store is either in the answer or it is not. There is no middle ground. SEO-optimized product pages, fast load times, structured data in your Liquid templates: none of it was designed for the way AI systems consume the web. They tokenize everything, they reason over structured signals, and they surface what they can confidently represent in a response. If your catalog is not legible to an LLM, it is invisible to the people asking it questions.
LLMRank is our answer to that problem. It is a Shopify app that audits AI readiness across every product, generates structured data, serves clean Markdown snapshots for AI crawlers, exposes llms.txt and /llm-index.json discovery endpoints, and lets merchants simulate the exact queries their customers are typing into AI search engines. The entire system runs through the merchant's own Shopify domain via App Proxy.
The problem is not missing content
Most stores have enough content. The problem is that the content is not structured in a way AI systems can reason over reliably.
A product page with 200 words of description, no FAQ schema, no entity relationships, and no machine-readable representation of what the product actually is will score poorly in any AI context window. Not because it is bad writing. Because it is written for humans and browsers, not for systems that need to extract structured meaning at inference time.
That gap is what LLMRank measures, closes, and monitors.
AI Readiness Audit
The audit scores every product in the catalog from 0 to 100 across five rule categories:
- Thin content: Is there enough substance for an LLM to form a confident answer from this product?
- Duplicates: Are descriptions being reused across variants or similar products in ways that dilute uniqueness?
- Structured data coverage: Is JSON-LD present and complete enough for the product to be identified as a schema.org Product entity?
- Entity completeness: Are brand, category, attributes, and identifiers present in machine-readable form?
- Internal linking: Are related products and collections linked in a way that lets AI systems understand catalog structure?
The scoring is fully deterministic. No LLM is called during the audit. The same product produces the same score on every run, which means results are reproducible, comparable across time, and exportable as JSON for external reporting or integration into CI pipelines.
Deterministic scoring is not a constraint. It is a feature. You cannot act on a result you cannot reproduce.
Structured Data Generation
Once you know which products score low and why, the next step is fixing them. LLMRank handles this through a BYOK structured data generation layer: the merchant supplies their own LLM API key, and the system uses it to generate JSON-LD Product and FAQ schema.
Every generated schema goes through a preview, apply, rollback workflow before it touches the live store. Nothing is written directly. The merchant sees exactly what will change, approves it, and the schema is stored as a versioned Shopify metafield. If a generated schema needs to be revised or removed, the previous version is one click away.
API keys are encrypted at rest with AES-256-CBC and never appear in job payloads, logs, or serialized state. A Zero Retention Mode goes further: raw LLM prompts and responses are never persisted to the database at all.
Markdown Export Layer
AI crawlers do not want your HTML. They want content they can parse cleanly and tokenize efficiently. LLMRank generates Markdown snapshots of every product and collection and serves them through public endpoints:
/{domain}/markdown/products/{handle}.md
/{domain}/markdown/collections/{handle}.md
These are not simplified text dumps. They use league/html-to-markdown to preserve rich formatting: headings, lists, tables, links, and emphasis survive the conversion. The endpoints also implement automatic content negotiation: any agent sending Accept: text/markdown receives a Markdown response, while browsers get HTML. It is the same pattern we described in our Laravel Markdown for Agents package, applied here to the Shopify catalog layer.
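The negotiation logic itself is simple enough to show in a few lines. This is a framework-agnostic Python sketch (the real endpoints run through Laravel and the Shopify App Proxy); it only illustrates the Accept-header branch.

```python
# Sketch of Accept-header content negotiation for a catalog endpoint.
# Framework-agnostic; the actual app serves this via Laravel middleware.
def negotiate(accept_header: str, html: str, markdown: str) -> tuple[str, str]:
    """Return (content_type, body) based on what the client asked for."""
    if "text/markdown" in accept_header:
        return ("text/markdown", markdown)
    return ("text/html", html)

body_html = "<h1>Trail Runner</h1><p>Lightweight shoe.</p>"
body_md = "# Trail Runner\n\nLightweight shoe."

print(negotiate("text/markdown", body_html, body_md)[0])              # text/markdown
print(negotiate("text/html,application/xhtml+xml", body_html, body_md)[0])  # text/html
```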
llms.txt and the AI Index
LLMRank serves two discovery endpoints that are becoming standard signals for AI systems:
/llms.txt is the AI equivalent of robots.txt. It tells AI crawlers what this store is, what it sells, and where to find machine-readable representations of the catalog. Perplexity, Claude, and several other systems already follow it.
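For orientation, an llms.txt file for a store might look something like the following. This is an illustrative example following the llms.txt convention (an H1, a blockquote summary, and sections of links), not the app's literal output:

```
# Example Store
> Online shop for trail running gear and accessories.

## Catalog
- [Product index](/llm-index.json): machine-readable catalog manifest
- [Products in Markdown](/markdown/products/): clean per-product snapshots
```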
/llm-index.json is a structured catalog manifest listing every product with its canonical URL, handle, title, category, and key metadata. This gives AI systems a complete map of the store without crawling every page individually.
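An excerpt of such a manifest could look like this. The field names are illustrative, built from the attributes listed above, not the app's exact output format:

```json
{
  "products": [
    {
      "url": "https://example-store.com/products/trail-runner",
      "handle": "trail-runner",
      "title": "Trail Runner",
      "category": "Running Shoes"
    }
  ]
}
```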
Both endpoints are served through the merchant's own Shopify domain via the App Proxy. The store's canonical authority is preserved. No subdomains. No third-party hosting.
AI Query Simulation
This is the feature that tends to change how merchants think about their catalog.
You type a natural-language query, something like "best running shoes under $100 for wide feet" or "gift for someone who loves cooking". The system passes that query to an LLM with your catalog data and returns:
- Which products the LLM would surface in its answer
- How the LLM would frame that answer
- Which relevant products are missing from the coverage and why
The gap analysis is the valuable part. When you can see that a product should appear in an answer but does not because it lacks FAQ schema or has thin content in a specific attribute, you know exactly where to invest. Vague recommendations like "add more content" get replaced by specific, actionable work items.
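The gap-analysis step reduces to a set comparison plus a lookup into the audit results. The sketch below uses hypothetical data shapes and rule names to show the idea: for each product that matches the query but was not surfaced, report the audit issues that likely explain its absence.

```python
# Hedged sketch of query-simulation gap analysis. Data shapes and
# issue names are illustrative, not LLMRank's internal representation.
def gap_analysis(matching_handles: set[str],
                 surfaced_handles: set[str],
                 audit: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each missing product to the audit issues that likely hide it."""
    gaps = {}
    for handle in matching_handles - surfaced_handles:
        gaps[handle] = audit.get(handle, ["unknown"])
    return gaps

audit_issues = {
    "trail-runner": [],                           # surfaced, no issues
    "road-flyer": ["thin content", "no FAQ schema"],
}
print(gap_analysis({"trail-runner", "road-flyer"}, {"trail-runner"}, audit_issues))
# {'road-flyer': ['thin content', 'no FAQ schema']}
```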
Why we built this
We have been running GEO projects for clients for a while now. Along the way, we kept doing the same things: auditing catalogs, fixing structured data, setting up AI crawler endpoints. At some point it made more sense to build a tool than to repeat the process manually for every store.
Most tools in this space focus on content recommendations and stop there. They do not score products, they do not touch structured data, and they do not serve machine-readable endpoints. We needed something that did all three: deterministic scoring you can reproduce, schema changes you can roll back, and discovery endpoints on the merchant's own domain. So we built it.
If you are running a Shopify store and you want your products to appear in AI-generated answers, this is where that work starts.
Install LLMRank
LLMRank is available on the Shopify App Store. Install it and run your first audit in minutes. If you need hands-on GEO consulting alongside the tool, reach out.