“I speak, therefore I’m right” – Part 3: the difference between Data and Interpretation
In Part 1 of these introductory posts, I talked about the trap of relying on intuition and ‘common sense’ instead of critical thinking, and in the second I mentioned how Sabine Hossenfelder — a YouTuber and theoretical physicist with a sharp eye for nonsense — had been discussing misinformation and warned that people who “want misinformation — consciously or subconsciously — to justify conclusions they hold dear” are a real problem. We explored why we often reject critical thinking altogether — thanks to Groupthink.
In Part 3, we need to look at something even more fundamental — the difference between data and interpretation — and how influencers can turn that gap into a cycle of misinformation.
What data actually is
Data is simply information about the world:
- a measurement
- a count
- a recorded observation
- a speed or distance
- a crash statistic
- a percentage
- a timestamp
Data is neutral. Data doesn’t care what we think. Data doesn’t have an agenda.
But the moment a human looks at data, something happens. We interpret it. And that’s where things go wrong.
Interpretation is where bias creeps in
Interpretation is the story we tell ourselves about the data. Two people can look at the same dataset and come to completely different conclusions because:
- they start with different assumptions
- they have different beliefs
- they want different outcomes
- they’re influenced by different groups
- they’re motivated to defend their worldview
This is why Hancock’s “I’ll take another decision if I think the science won’t work” is so revealing. He wasn’t rejecting the data — he was rejecting the interpretation he didn’t like. And riders do this too.
Motorcycling is full of examples where data and interpretation diverge
The example of the SMIDSY crash
The data from decades of research shows:
- drivers almost always look
- drivers often fail to see
- drivers often misjudge speed and distance
- because motorcycles fall below the threshold of visual salience
But the popular interpretation?
- “They didn’t look.”
- “They were distracted.”
- “They need their eyes tested.”
- “They are poorly trained.”
The data says one thing. And data isn’t wrong, it just is. But the narrative says something far more emotionally satisfying.
“Standing on the pegs lowers the centre of gravity”
Here’s another widely disseminated statement. Asked in a reader’s letter “how the centre of mass of the motorcycle moves when a rider stands on the pegs”, the entire editorial staff of a US rider magazine some years ago claimed to have carefully considered the question, then stated that the answer was obvious: “the centre of mass moves down because the rider’s weight is taken on the pegs”.
In fact, the data (which we can derive from some pretty straightforward school-level physics) is clear. The combined centre of mass of the bike + rider system moves up, not down. But unfortunately, the popular interpretation that “everyone knows it lowers the CoG” is repeated time and again. The data doesn’t change, but how riders interpret it does.
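The school-level physics is just a weighted average: the combined centre of mass sits between the bike’s CoM and the rider’s CoM, weighted by their masses. When the rider stands, their own CoM rises, so the combined CoM must rise too. Here is a minimal sketch of that calculation. The masses and heights are illustrative assumptions, not measured values:

```python
# Illustrative, assumed numbers (not measurements): a 200 kg bike with its
# CoM 0.45 m above the ground, and an 80 kg rider whose own CoM is roughly
# 0.85 m up when seated and 1.10 m up when standing on the pegs.

def combined_com_height(m_bike, h_bike, m_rider, h_rider):
    """Mass-weighted average height of the bike + rider centre of mass."""
    return (m_bike * h_bike + m_rider * h_rider) / (m_bike + m_rider)

seated = combined_com_height(200, 0.45, 80, 0.85)
standing = combined_com_height(200, 0.45, 80, 1.10)

print(f"Seated:   {seated:.2f} m")
print(f"Standing: {standing:.2f} m")
print(f"Standing raises the combined CoM by about {standing - seated:.2f} m")
```

With these assumed figures, standing raises the combined centre of mass by roughly 7 cm. The exact numbers depend on the bike and rider, but the direction of the change never does: moving any part of the system’s mass upwards moves the combined CoM upwards.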
Where influencers come in: the misinformation cycle
In Part 2, Sabine Hossenfelder made an important point: people often want misinformation because it confirms what they already believe. But it’s not a straightforward one‑way street. It’s really a cycle.
Step 1 — People want a simple explanation
Riding is complex. Human perception is complex. Crash causation is complex. Simple stories feel better, and simple stories spread online.
Step 2 — Influencers supply the simple story
Not necessarily maliciously — but because confident statements sound authoritative and nuance doesn’t trend. The simple story is built by:
- cherry‑picking a single study
- quoting selectively
- bending statistics
- ignoring contradictory evidence
- oversimplifying complex issues
- presenting opinion as fact
It’s fast, confident, and emotionally satisfying — which is exactly why it spreads so easily.
Step 3 — The simple story becomes a belief
Once repeated enough, it becomes “common knowledge”, “what everyone knows” and “how we’ve always done it”.
Step 4 — People seek confirmation
And then Confirmation Bias kicks in. We actively look for content that supports what we believe (that the “centre of gravity moves down when we stand on the pegs”), we avoid content that challenges us (my letter pointing out the basic misunderstanding of the physics was never printed by that US magazine), and we share content we trust with our peers.
Step 5 — Influencers see demand and produce more
Algorithms reward engagement, simple stories get clicks, and when channels are monetised…
Step 6 — Popular interpretation
And the cycle continues. This is how riding myths persist for decades. Word-of-mouth, bike magazines, motorcycle forums, influencers. They are all subject to the same biases.
How do we break the cycle?
Not by banning influencers. Not by shouting at people. Not by insisting that “science says so”, tempting though that may be.
To start the process, we need to recognise that there are two very different ways to interpret data. We need to recognise when we are looking at:
- popular interpretation — “what everyone knows”
- rigorous interpretation — “what the data tells us”
A rigorous scientific interpretation is an approach that:
- looks at all the available studies, not just one
- checks whether findings fit what we know about the science (physics, psychology, human factors and many more)
- tests alternative explanations
- acknowledges uncertainty
- updates conclusions when new evidence appears
It’s slow, careful, and sometimes uncomfortable — but it’s the closest we get to understanding reality.
How do we apply a scientific interpretation ourselves?
One of the big and most persistent myths about science is that it’s “difficult” — something reserved for experts in white coats (someone made that exact ‘white-coat’ statement on my Facebook page this very morning), who are working in labs, surrounded by equipment most of us can’t name.
But that’s not what science is. Science isn’t a subject. It’s not a qualification. It’s not a club you need permission to join. Science is simply a structured way of making sense of the world. At its heart, science is nothing more than:
- noticing a puzzle
- asking a question
- gathering information
- testing an idea
- seeing what the evidence supports
We do this far more often than we realise, every single day of our lives. Need a new pair of boots? Which are you going to buy? Which pair will suit best? What size do you need? Are you going to try them on?
Unless you simply buy the first pair that grabs your attention and force your feet into them, you’re gathering data and updating your hypothesis about which pair is best. And when you make your decision and choose one piece of kit over another, you’ve compared evidence and drawn a conclusion.
None of that is “difficult”. So why does science feel difficult?
The brain likes mental shortcuts
The human brain evolved to save energy. Thinking deeply is energy-expensive. Evaluating evidence is slow. Challenging assumptions is uncomfortable. It’s tiring.
So instead of analysing information carefully, the brain relies on mental shortcuts — quick, effortless rules of thumb that usually work well enough to get us through the day. Psychologists call these shortcuts heuristics. They’re brilliant for survival, but terrible for understanding complex problems and they can lead us into traps:
- we see what we expect to see
- we trust our intuition even when it’s wrong
- we prefer simple explanations over accurate ones
- we avoid information that contradicts our beliefs
- we mistake confidence for competence
This is why science feels hard. Not because the method is complicated, but because it asks us to slow down, question ourselves, and override our instincts. And that’s exactly the moment influencers can slip in.
So how do we break the cycle in practice?
The answer is to cultivate intellectual curiosity. Whenever someone presents a claim, ask some simple questions:
- What does the data actually say?
- Is this interpretation supported by evidence?
- Is the conclusion the only possible explanation?
- Does this feel true because it’s true, or because it fits what I already believe?
- Is someone simplifying a complex issue to sound authoritative?
And most importantly:
Does the interpretation match the mechanism? If it contradicts physics, psychology, or human factors, it’s probably wrong.
Why this matters for riders
Motorcycling is unforgiving. If we misinterpret any information that bears on our decision‑making — whether that’s the risk of a particular manoeuvre, how surfaces produce grip mid‑corner, or what type of clothing is appropriate for the kind of riding we’re doing — then it’s a classic case of garbage in, garbage out. The quality of our decisions depends entirely on the quality of the information we use to make them.
And this is where the brain’s love of mental shortcuts becomes a liability. Instead of analysing the situation carefully, we reach for the quickest, easiest explanation. We rely on intuition, habit, or whatever “everyone knows”. We trust confident voices over accurate ones. We prefer simple stories to complex mechanisms. It feels efficient — but it’s often wrong.
Breaking that cycle doesn’t require a degree in physics or psychology. It simply requires a willingness to pause and ask: “does this explanation actually fit what we know about how riding works?” Does it align with physics, human perception, and real‑world evidence? Or is it just a neat story that feels right because it saves us the effort of thinking? Influencers thrive in that space. They offer certainty, clarity, and confidence at exactly the moment our brains are looking for the path of least resistance.
Science isn’t difficult. Understanding the difference between “what the data says” and “what we think it means” is one of the most powerful safety tools we have. What’s difficult is resisting the shortcuts our brains prefer.
MotoScience exists to help riders make that shift — from comfortable stories to accurate understanding, from intuition to evidence, from “what everyone says” to what the data actually supports.