AI still does not understand what your eyes decide in half a second
We spent twenty years learning how to make websites feel good to people. Now the same websites must become understandable to something that has never felt anything.
Today I was explaining what I do to a friend who runs a small online store. We sat down in a cafe, skimmed the menu, ordered quickly, and I tried to use that moment to explain how AI agents will buy things in her store.
It is hard for a person to understand why agents cannot simply choose the way we do. My friend only glanced at the page of options, and her body decided before her mind caught up. Too busy, not in the mood, no desire to read. The verdict: coffee, that will do.
In a fraction of a second, a hundred micro-signals collapsed into one decision: context, hunger, time of day, weather, fatigue from the previous call, years of looking at menus, eating, knowing herself, and reading situations.
This is what we call experience, taste, intuition, a trained eye. By the time the brain explains the choice, the choice is already made.
The agent only has what you wrote on the page. And this is where it gets uncomfortable for anyone who runs an online store.
A product description that works beautifully for humans - "stylish leather sneakers, perfect for the city, in black and white" - tells an agent almost nothing. Humans infer quality from the photos; agents need the material, construction, and care spelled out. A phrase like "city style" does not tell an agent what job the product is actually good for. Returns often hide inside missing facts: sizing, width, weight, season, comfort, constraints. Agents need practical conditions stated plainly: what weather the shoe handles, how durable the sole is, whether it resists water, how long it can comfortably be worn in a day.
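To make the contrast concrete, here is the same sneaker twice: once as the human-facing blurb, once as the explicit record an agent can actually act on. A minimal sketch; every field name and value is illustrative, not taken from any schema or platform.

```python
# Illustrative only: field names are invented for this sketch,
# not taken from any standard or storefront platform.

# What the human reader sees.
human_blurb = "Stylish leather sneakers, perfect for the city, in black and white."

# What an agent can reason about: explicit facts instead of vibe.
product_record = {
    "name": "City sneaker",
    "material_upper": "full-grain leather",
    "material_sole": "rubber",
    "colors": ["black", "white"],
    "sizes_eu": [38, 39, 40, 41, 42, 43, 44],
    "fit": "true to size, medium width",
    "weight_grams": 380,
    "water_resistance": "light rain only, not waterproof",
    "season": ["spring", "summer", "autumn"],
    "care": "wipe with a damp cloth; condition the leather monthly",
    "intended_use": "daily urban wear, walking up to a few hours",
    "return_window_days": 30,
}

# Every key above is a question the blurb leaves unanswered.
print(sorted(product_record.keys()))
```

Nothing in the record is new information the store does not have; it is the same knowledge a good shop assistant carries in their head, finally written down.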
Even a very strong agent that sees every pixel of your photo and reads every word of the description still cannot decide for your customer.
The customer is not deciding based only on the shoe. They are deciding based on their foot, their wardrobe, their plans, their mood, their partner's opinion, and their bank balance at 3pm on a Tuesday. Most of that the agent will not see.
If an agent decides using only the facts I wrote down, which facts did I forget to write down?
Sit with your catalogue and ask, for every product: what did the human customer understand at a glance, and what did I never put into words?
That is what the agent is missing. And the store where the agent finds the answer is the store where the sale happens.
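That catalogue audit can even be mechanized. A minimal sketch, assuming a product is a plain dict; the REQUIRED_FACTS checklist is my assumption for illustration, not an established standard.

```python
# Sketch of a catalogue audit: flag products whose records omit
# the facts an agent needs in order to decide. REQUIRED_FACTS is
# an illustrative assumption, not an established checklist.

REQUIRED_FACTS = [
    "material", "sizing", "width", "weight",
    "season", "water_resistance", "care", "return_policy",
]

def missing_facts(product: dict) -> list[str]:
    """Return the agent-critical fields this product never puts into words."""
    return [fact for fact in REQUIRED_FACTS if not product.get(fact)]

sneaker = {
    "name": "City sneaker",
    "description": "Stylish leather sneakers, perfect for the city.",
    "material": "leather",
    "sizing": "EU 38-44, true to size",
}

# Each entry printed is a question an agent cannot answer,
# and possibly a sale it takes elsewhere.
print(missing_facts(sneaker))
# → ['width', 'weight', 'season', 'water_resistance', 'care', 'return_policy']
```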
The human reader works from visual taste, vibe, photography, brand feeling, social proof, and instant comparison. The agent reader works from structured facts, constraints, fit, trust signals, shipping, return logic, and a clear next action. Both need a source of truth that explains the products, the business identity, the contacts, the policies, and the trust signals.
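What such a source of truth might look like, sketched as a single machine-readable file an agent could fetch once. The structure and field names are hypothetical; existing conventions like schema.org markup cover similar ground.

```python
import json

# Hypothetical store-level "source of truth". The structure is
# illustrative, not an existing standard.
store_profile = {
    "business": {
        "name": "Example Shoe Store",
        "what_we_sell": "leather footwear for daily urban wear",
    },
    "contact": {
        "email": "support@example.com",
        "response_time": "within 1 business day",
    },
    "policies": {
        "shipping": {"regions": ["EU"], "days": "2-5", "free_over": 100},
        "returns": {"window_days": 30, "condition": "unworn, in original box"},
    },
    "trust": {"founded": 2015, "reviews_url": "https://example.com/reviews"},
}

# Serialized, this could live at one stable, documented URL,
# so an agent never has to guess the return policy from a footer.
print(json.dumps(store_profile, indent=2))
```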
I do not think most of us have fully felt how strange this shift is. We spent years learning how to make websites that feel good to people. Now we are being asked to make the same websites understandable to a second reader.
The first reader is human and embodied. The second reader is an agent that needs explicit facts, structured data, and stable entrypoints.
This is not a replacement for good design. It is a translation layer between two very different readers. We are only at the beginning of learning the second language.