Someone standing on a cold street in Edinburgh at lunchtime opens Google Assistant and asks for a gluten-free lunch nearby. The assistant has about two seconds to come up with an answer. Which restaurant it picks depends almost entirely on what it can understand about the menus on that street.
If your menu is a PDF, or a nicely styled HTML page with "GF" badges that only a human can interpret, the assistant is guessing. If your menu is marked up with Schema.org, the assistant knows that your Thai butternut soup is gluten-free, vegetarian, costs £7.50 and is served at lunchtime. Guess which restaurant gets the recommendation.
What Schema.org actually does
Schema.org is a shared vocabulary for describing things on the web. Google, Microsoft, Yahoo and Yandex agreed on it years ago so a page about a pasta dish looks the same to every crawler. Instead of the line "Pasta carbonara £12" sitting in the page as prose, it travels to search engines as an object: a MenuItem, priced £12, made with eggs, pecorino and guanciale (so not one for the vegetarians), sitting inside the Lunch menu's Mains section.
Anything that reads the web can reason about that. Google, Siri, Alexa, ChatGPT, Perplexity: they all consume the same structured objects.
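Concretely, that object is a short block of JSON-LD sitting in the page alongside the menu people actually read. Here is a minimal sketch built from the dishes above; the restaurant name and the exact figures are placeholders, but the types and properties (Restaurant, Menu, MenuSection, MenuItem, suitableForDiet, Offer) come straight from the Schema.org vocabulary:

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "hasMenu": {
    "@type": "Menu",
    "name": "Lunch",
    "hasMenuSection": {
      "@type": "MenuSection",
      "name": "Mains",
      "hasMenuItem": [
        {
          "@type": "MenuItem",
          "name": "Thai butternut soup",
          "suitableForDiet": [
            "https://schema.org/GlutenFreeDiet",
            "https://schema.org/VegetarianDiet"
          ],
          "offers": { "@type": "Offer", "price": "7.50", "priceCurrency": "GBP" }
        },
        {
          "@type": "MenuItem",
          "name": "Pasta carbonara",
          "description": "Eggs, pecorino, guanciale",
          "offers": { "@type": "Offer", "price": "12.00", "priceCurrency": "GBP" }
        }
      ]
    }
  }
}
```

That block usually lives in a script tag of type application/ld+json in the page's HTML, so the menu your customers see doesn't change at all; only the machines get the extra detail.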
The restaurants showing up in Google's rich results, in Apple Maps cards, and in answers from voice assistants are almost always the ones whose menus are marked up. The ones that aren't get guessed at, or skipped.
Voice and AI assistants
Voice search behaves nothing like typed search. When someone speaks a query, the assistant doesn't show them ten blue links; it picks one answer. That answer is assembled from whatever structured information the assistant can find about places nearby.
"Which pubs near me do a Sunday roast?" "Anywhere open now that does a vegan pudding?" These only get accurate answers if the underlying menus carry the right tags. When they don't, the assistant falls back on whatever it can scrape (opening hours, a rating out of five) and drops the specifics. You're either in the answer or you're not.
Assistants like ChatGPT and Perplexity work the same way, earlier in the funnel. When someone asks Perplexity for a dinner recommendation, the model is pulling signals from structured data. A restaurant whose menu is only an image attached to a Facebook post is, as far as these tools are concerned, a restaurant that doesn't really exist.
Rich results in Google
On the search page itself, structured data is what turns a plain link into a rich result: a menu preview, a price range, a star rating pulled from your reviews, the "popular dishes" chips underneath. These aren't design choices Google makes on your behalf. They're what Google can render when your site gives it the raw material.
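For a sense of what that raw material looks like, the restaurant-level markup is just as plain as the menu markup. A sketch, again with illustrative names and numbers; whether Google chooses to render any given piece as a rich result is governed by its own rules, but these are the fields it reads:

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "priceRange": "££",
  "servesCuisine": "Thai",
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.6",
    "reviewCount": "214"
  },
  "hasMenu": "https://example.com/menu"
}
```

The hasMenu property can point at a full Menu object like the one earlier, or simply at the URL of the menu page.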
There's a secondary effect, too. Most independent restaurants still publish menus as PDFs, as text inside a JPEG, or only on social media. Against that backdrop, a site that gets the markup right looks sharper in the results without having to outspend a chain on ads.
Being visible to the next wave of search
Search in 2026 isn't what it was three years ago. AI overviews, voice answers and assistant-driven recommendations are now part of how most people look for a place to eat. The interface keeps moving; the underlying need for machine-readable menu data does not.
If your menu is already structured, you show up on each new surface more or less for free as it launches. If it isn't, every new interface is another place you're absent from.
None of this requires you to write JSON-LD by hand. Menu software that outputs Schema.org data (GMMO is one option) handles the markup as a by-product of you keeping your menu up to date. You describe the dish once, and the machine-readable version lands wherever a search surface asks for it.
