Source reputation
AI visibility needs a source reputation graph
A canonical source of truth is the foundation. But it is not the whole problem.
AI systems also read the rest of the web: reviews, Reddit threads, comparison pages,
media mentions, directories, podcasts, documentation, and old complaints.
The hard question is not only what the business says about itself.
It is which sources AI is likely to trust, repeat, and use when it forms the business narrative.
Not every source should have the same weight
A company website, a verified business profile, a G2 review, a trade publication,
a Reddit comment, and a six-year-old forum complaint do not carry the same kind of truth.
They can all affect AI answers, but they should not be treated equally.
Agntbase is adding a Source Reputation Graph: a way to score sources by authority, freshness,
independence, specificity, consistency, sentiment, citation likelihood, and business risk.
What the Source Reputation Graph measures
Authority
Can this source be trusted?
Official pages, respected industry media, verified directories, expert sources, and customer review platforms carry different levels of authority.
Freshness
Is the information still current?
Old information can remain visible to AI even when the business has changed. Stale sources need to be flagged, updated, or balanced.
Independence
Who is making the claim?
Self-published claims matter, but third-party confirmation often determines how much confidence AI systems place in the claim.
Specificity
Is the signal concrete?
"Great company" is weak. "Fast setup for Shopify stores with clear return policy support" is much easier for AI to reuse.
Consistency
Does it match other sources?
AI systems tend to trust repeated patterns. Conflicting descriptions create ambiguity and weaker recommendations.
Sentiment
How does the source talk about the business?
Positive, neutral, and negative tone all shape the summary AI produces, and negative sentiment in a high-authority source carries extra weight.
Citation likelihood
How likely is AI to repeat this source?
Sources that rank well and answer common questions directly are more likely to be quoted back in AI answers.
Risk
Can this distort the business?
A single outdated negative mention can become dangerous if it ranks well, gets cited often, or fills a gap the official site does not answer.
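The dimensions above can be sketched as a simple weighted score. This is an illustrative sketch only, not Agntbase's actual model; the field names, weights, and the risk discount are all assumptions.

```python
from dataclasses import dataclass

# Hypothetical per-source signals, each normalized to 0..1.
# Field names and weights are illustrative, not Agntbase's model.
@dataclass
class SourceSignals:
    authority: float      # how trustworthy the publisher is
    freshness: float      # how current the information is
    independence: float   # third-party vs. self-published
    specificity: float    # concrete, reusable claims vs. vague praise
    consistency: float    # agreement with other sources
    risk: float           # potential to distort the AI narrative (higher = worse)

WEIGHTS = {
    "authority": 0.30,
    "freshness": 0.20,
    "independence": 0.15,
    "specificity": 0.15,
    "consistency": 0.20,
}

def reputation_score(s: SourceSignals) -> float:
    """Weighted sum of positive signals, discounted by risk."""
    base = sum(getattr(s, name) * w for name, w in WEIGHTS.items())
    return round(base * (1.0 - 0.5 * s.risk), 3)

# A fresh, independent review scores high; a stale forum complaint
# keeps some weight (it still affects AI answers) but far less.
fresh_review = SourceSignals(0.8, 0.9, 0.9, 0.7, 0.8, 0.1)
stale_forum = SourceSignals(0.3, 0.1, 0.9, 0.4, 0.2, 0.8)
print(reputation_score(fresh_review))
print(reputation_score(stale_forum))
```

Note the design choice: risk discounts the score rather than zeroing it, because even a low-reputation source still influences AI answers and may need an explicit response.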
Why this matters for the business truth layer
The machine-readable profile should not exist in isolation.
It should know which public sources support it, which sources contradict it,
and which sources are strong enough to shape how AI summarizes the company.
That changes the model from a static profile into a living system:
- Official facts define the business baseline.
- External sources confirm, challenge, or distort that baseline.
- Source reputation decides how much weight each source deserves.
- Freshness monitoring catches outdated claims before they become the AI narrative.
- Owner review turns internal nuance into verified machine-readable context.
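The living system above implies a linkage between each canonical fact and the external sources that touch it. A minimal sketch of that structure, assuming hypothetical names (this is not Agntbase's schema):

```python
from dataclasses import dataclass, field

@dataclass
class Source:
    url: str
    reputation: float   # 0..1, from the Source Reputation Graph
    stance: str         # "supports" or "contradicts" the fact
    last_seen: str      # ISO date of the most recent crawl

@dataclass
class CanonicalFact:
    claim: str
    sources: list[Source] = field(default_factory=list)

    def confidence(self) -> float:
        """Net reputation-weighted support for this fact."""
        support = sum(s.reputation for s in self.sources if s.stance == "supports")
        dispute = sum(s.reputation for s in self.sources if s.stance == "contradicts")
        total = support + dispute
        # An unchallenged official fact defaults to full confidence.
        return support / total if total else 1.0

fact = CanonicalFact("Free returns within 30 days")
fact.sources.append(Source("https://example.com/reviews", 0.8, "supports", "2025-01-10"))
fact.sources.append(Source("https://example.com/old-thread", 0.2, "contradicts", "2019-03-02"))
print(fact.confidence())
```

Because each contradiction carries its source's reputation, a six-year-old forum complaint weakens a fact far less than a recent high-authority review would.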
The part only the business can answer
Crawling can find public facts. It cannot always find the reasons a real customer should choose one business over another.
That is why Agntbase uses guided questions and owner approval.
The best AI-readable profile needs details that usually live inside the business:
who the offer is best for, who it is not for, which services matter most, what proof is strongest,
what limits exist, what changed recently, and when an agent should stop and ask for human confirmation.
More internal nuance means more precise recommendations. A business does not stand out only by publishing more text.
It stands out by giving AI systems clearer, verified reasons to choose it in the right context.
How this becomes a product feature
In the Agntbase Hub, the Source Reputation Graph becomes the external narrative layer beside the canonical profile.

- Canonical profile: what the business officially confirms.
- Source graph: which external sources support or distort that profile.
- Reputation score: how much each source should influence the AI-readable truth layer.
- Freshness alerts: what looks outdated, risky, or inconsistent.
- Action plan: which sources to update, reinforce, answer, or add.
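The freshness alerts above reduce to a simple check: flag any source whose last crawl is older than a staleness threshold. A minimal sketch, assuming an 18-month threshold (the cutoff and names are assumptions, not product defaults):

```python
from datetime import date

STALE_AFTER_DAYS = 548  # roughly 18 months; an illustrative cutoff

def stale_sources(sources, today=None):
    """Return URLs of sources last seen before the staleness threshold."""
    today = today or date.today()
    return [
        url for url, last_seen in sources
        if (today - last_seen).days > STALE_AFTER_DAYS
    ]

sources = [
    ("https://example.com/press-2019", date(2019, 6, 1)),
    ("https://example.com/review-2025", date(2025, 5, 1)),
]
print(stale_sources(sources, today=date(2025, 6, 1)))
```

Each flagged URL would then feed the action plan: update it, answer it, or outrank it with fresher confirmed sources.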
The goal is simple: help a business understand not only what it says about itself,
but what AI may believe about it after reading the wider web.