“I speak, therefore I’m right” – Part 3: the difference between data and interpretation

In Part 1 of these introductory posts, I talked about the trap of relying on intuition and ‘common sense’ instead of critical thinking. In Part 2, I mentioned how Sabine Hossenfelder — a YouTuber and theoretical physicist with a sharp eye for nonsense — had warned that people who “want misinformation — consciously or subconsciously — to justify conclusions they hold dear” are a real problem, and we explored why we often reject critical thinking altogether: Groupthink.

In Part 3, we need to look at something even more fundamental — the difference between data and interpretation — and how influencers can turn that gap into a cycle of misinformation.

What data actually is

Data is simply information about the world:

    • a measurement
    • a count
    • a recorded observation
    • a speed or distance
    • a crash statistic
    • a percentage
    • a timestamp

Data is neutral. Data doesn’t care what we think. Data doesn’t have an agenda.

But the moment a human looks at data, something happens. We interpret it. And that’s where things go wrong.

Interpretation is where bias creeps in

Interpretation is the story we tell ourselves about the data. Two people can look at the same dataset and come to completely different conclusions because:

    • they start with different assumptions
    • they have different beliefs
    • they want different outcomes
    • they’re influenced by different groups
    • they’re motivated to defend their worldview

This is why Hancock’s “I’ll take another decision if I think the science won’t work” is so revealing. He wasn’t rejecting the data — he was rejecting the interpretation he didn’t like. And riders do this too.

Motorcycling is full of examples where data and interpretation diverge

The example of the SMIDSY crash

The data from decades of research shows:

    • drivers almost always look
    • drivers often fail to see
    • drivers often misjudge speed and distance
    • motorcycles often fall below the threshold of visual salience
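To put a rough number on “visual salience”: the visual angle a vehicle subtends shrinks with distance, and a motorcycle’s narrow frontal profile makes it a much smaller target than a car at the same range. Here’s a minimal sketch; the frontal widths are illustrative assumptions, not figures from the research:

```python
import math

def visual_angle_deg(width_m: float, distance_m: float) -> float:
    """Visual angle (degrees) subtended by an object of a given frontal
    width at a given distance, using the exact 2*atan(w / 2d) formula."""
    return math.degrees(2 * math.atan(width_m / (2 * distance_m)))

# Illustrative frontal widths (assumed, not from the article):
MOTORCYCLE_W = 0.8   # m
CAR_W = 1.8          # m

for d in (50, 100, 150):
    print(f"{d:>4} m: bike {visual_angle_deg(MOTORCYCLE_W, d):.2f}°, "
          f"car {visual_angle_deg(CAR_W, d):.2f}°")
```

At 100 m the bike subtends well under half a degree of visual angle, roughly half what a car presents, which is part of why it is so easy to overlook against a cluttered background.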

But the popular interpretation?

    • “They didn’t look.”
    • “They were distracted.”
    • “They need their eyes tested.”
    • “They are poorly trained.”

The data says one thing. And data isn’t wrong; it just is. But the narrative says something far more emotionally satisfying.

“Standing on the pegs lowers the centre of gravity”

Here’s another widely disseminated statement. Asked in a reader’s letter “how the centre of mass of the motorcycle moves when a rider stands on the pegs”, the entire editorial staff of a US rider magazine some years ago claimed to have carefully considered the question, then stated that the answer was obvious: “the centre of mass moves down because the rider’s weight is taken on the pegs”.

In fact, the data (which we can derive from some pretty straightforward school-level physics) is clear: the combined centre of mass of the bike + rider system moves up, not down. But unfortunately, the popular interpretation that “everyone knows it lowers the CoG” is repeated time and again. The data doesn’t change, but how riders interpret it does.
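That school-level physics is just a mass-weighted average of the component centres of mass. A minimal sketch, using illustrative (assumed) masses and CoM heights, shows what happens when the rider’s own CoM rises:

```python
def combined_com_height(masses_kg, heights_m):
    """Height of the combined centre of mass: the mass-weighted mean of
    the component CoM heights, sum(m_i * h_i) / sum(m_i)."""
    return sum(m * h for m, h in zip(masses_kg, heights_m)) / sum(masses_kg)

# Illustrative figures (assumed): a 200 kg bike with its CoM at 0.5 m,
# and an 80 kg rider whose own CoM rises from ~0.9 m (seated)
# to ~1.1 m (standing on the pegs).
seated = combined_com_height([200, 80], [0.5, 0.9])
standing = combined_com_height([200, 80], [0.5, 1.1])
print(f"seated:   {seated:.3f} m")
print(f"standing: {standing:.3f} m")   # higher, not lower
```

The bike’s contribution doesn’t move, the rider’s contribution rises, so the weighted average can only go up. Where the rider’s weight is *supported* has no bearing on where the mass *is*.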

Where influencers come in: the misinformation cycle

In Part 2, Sabine Hossenfelder made an important point: people often want misinformation because it confirms what they already believe. But it’s not a straightforward one‑way street. It’s really a cycle.

Step 1 — People want a simple explanation

Riding is complex. Human perception is complex. Crash causation is complex. Simple stories feel better, and the simple stories that spread online are typically built by:

      • cherry‑picking a single study
      • quoting selectively
      • bending statistics
      • ignoring contradictory evidence
      • oversimplifying complex issues
      • presenting opinion as fact

It’s fast, confident, and emotionally satisfying — which is exactly why it spreads so easily.

Step 2 — Influencers supply the simple story

Not necessarily maliciously — but because confident statements sound authoritative and nuance doesn’t trend. 

Step 3 — The simple story becomes a belief

Once repeated enough, it becomes “common knowledge”, “what everyone knows” and “how we’ve always done it”. 

Step 4 — People seek confirmation

And then Confirmation Bias kicks in. We actively look for content that supports what we already believe (that “the centre of gravity moves down when we stand on the pegs”), we avoid content that challenges us (my letter pointing out the basic misunderstanding of the physics was never printed by that US magazine), and we share content we trust with our peers.

Step 5 — Influencers see demand and produce more

Algorithms reward engagement, simple stories get clicks, and when channels are monetised… 

Step 6 — The popular interpretation takes hold

And the cycle continues. This is how riding myths persist for decades. Word-of-mouth, bike magazines, motorcycle forums, influencers. They are all subject to the same biases.

How do we break the cycle?

Not by banning influencers. Not by shouting at people. Not by insisting that “science says so”, tempting though that may be.

To start the process, we need to recognise that there are two very different ways to interpret data. We need to recognise when we are looking at:

      • popular interpretation — “what everyone knows”
      • rigorous interpretation — “what the data tells us”

A rigorous scientific interpretation is an approach that:

      • looks at all the available studies, not just one
      • checks whether findings fit what we know about the science (physics, psychology, human factors and many more)
      • tests alternative explanations
      • acknowledges uncertainty
      • updates conclusions when new evidence appears

It’s slow, careful, and sometimes uncomfortable — but it’s the closest we get to understanding reality.

 

How do we apply a scientific interpretation ourselves?

One of the biggest and most persistent myths about science is that it’s “difficult” — something reserved for experts in white coats (someone made that exact ‘white-coat’ statement on my Facebook page this very morning), working in labs, surrounded by equipment most of us can’t name.

But that’s not what science is. Science isn’t a subject. It’s not a qualification. It’s not a club you need permission to join. Science is simply a structured way of making sense of the world. At its heart, science is nothing more than:

      • noticing a puzzle
      • asking a question
      • gathering information
      • testing an idea
      • seeing what the evidence supports

We do this far more often than we realise, every single day of our lives. Need a new pair of boots? Which are you going to buy? Which pair will suit best? What size do you need? Are you going to try them on?

Unless you simply buy the first pair that grabs your attention and force your feet into them, you’re gathering data and updating your hypothesis about which pair is best. When you make your decision and choose one piece of kit over another, you’ve compared evidence and drawn a conclusion.

None of that is “difficult”. So why does science feel difficult?

The brain likes mental shortcuts

The human brain evolved to save energy. Thinking deeply is energy-expensive. Evaluating evidence is slow. Challenging assumptions is uncomfortable. It’s tiring.

So instead of analysing information carefully, the brain relies on mental shortcuts — quick, effortless rules of thumb that usually work well enough to get us through the day. Psychologists call these shortcuts heuristics. They’re brilliant for survival, but terrible for understanding complex problems and they can lead us into traps:

      • we see what we expect to see
      • we trust our intuition even when it’s wrong
      • we prefer simple explanations over accurate ones
      • we avoid information that contradicts our beliefs
      • we mistake confidence for competence

This is why science feels hard. Not because the method is complicated, but because it asks us to slow down, question ourselves, and override our instincts. And that’s exactly the moment influencers can slip in.

So how do we break the cycle?

The answer is to break the cycle by cultivating intellectual curiosity. Whenever someone presents a claim, ask some simple questions:

      • What does the data actually say?
      • Is this interpretation supported by evidence?
      • Is the conclusion the only possible explanation?
      • Does this feel true because it’s true, or because it fits what I already believe?
      • Is someone simplifying a complex issue to sound authoritative?

And most importantly:

Does the interpretation match the mechanism? If it contradicts physics, psychology, or human factors, it’s probably wrong.

Why this matters for riders

Motorcycling is unforgiving. If we misinterpret any information that bears on our decision‑making — whether that’s the risk of a particular manoeuvre, how surfaces produce grip mid‑corner, or what type of clothing is appropriate for the kind of riding we’re doing — then it’s a classic case of garbage in, garbage out. The quality of our decisions depends entirely on the quality of the information we use to make them.

And this is where the brain’s love of mental shortcuts becomes a liability. Instead of analysing the situation carefully, we reach for the quickest, easiest explanation. We rely on intuition, habit, or whatever “everyone knows”. We trust confident voices over accurate ones. We prefer simple stories to complex mechanisms. It feels efficient — but it’s often wrong.

Breaking that cycle doesn’t require a degree in physics or psychology. It simply requires a willingness to pause and ask: “does this explanation actually fit what we know about how riding works?” Does it align with physics, human perception, and real‑world evidence? Or is it just a neat story that feels right because it saves us the effort of thinking? Influencers thrive in that space. They offer certainty, clarity, and confidence at exactly the moment our brains are looking for the path of least resistance.

Science isn’t difficult. Understanding the difference between “what the data says” and “what we think it means” is one of the most powerful safety tools we have. What’s difficult is resisting the shortcuts our brains prefer.

MotoScience exists to help riders make that shift — from comfortable stories to accurate understanding, from intuition to evidence, from “what everyone says” to what the data actually supports.

 

“I speak, therefore I’m right” — Part 2: the allure of ‘groupthink’

Last time out in Part 1, I talked about the trap of relying on intuition and “common sense” instead of critical thinking. By coincidence, the very next day I watched a video by Sabine Hossenfelder — a theoretical physicist with a sharp eye for nonsense — discussing misinformation on YouTube.

She made a point that surprised me. She wasn’t just criticising creators who peddle misleading content. She was more concerned about the people who want misinformation. As she put it:

“The problem isn’t the few people who produce this content, it’s the many who watch it… They want misinformation — consciously or subconsciously — to justify conclusions they hold dear.”

And she’s right. We click on content we agree with because it’s mentally easier. It feels good. It fits our worldview. And it saves us the effort of thinking critically.

Understanding Groupthink

This is where Groupthink creeps in — the tendency to adopt the beliefs of the group around us, even when those beliefs are wrong.

Groupthink occurs when individuals conform to the views of their peers.

Groupthink can come about because individually we’re lazy – it’s easier to listen to someone else telling us what’s right, rather than critically assessing the topic, because that means we have to seek out the information needed to formulate our own, better-informed point-of-view.

Some years ago I covered an article in a US riding magazine that claimed to have asked all their editors, riders with years of experience, whether “standing up on the pegs lowers the centre of gravity of the motorcycle”. They ALL claimed to have carefully considered the problem, and they ALL agreed that it does. They were ALL wrong, as any physics teacher will tell you. I even wrote a comment explaining why, using a simple diagram, and it was never published. ‘Stand up to lower the bike’s CoG’ is STILL a common Groupthink myth.

What’s worse is that we may indulge in Groupthink even when we hold dissenting opinions in order to be a better fit in a social group which objects to having its Groupthink challenged. We can see Groupthink operating when a group prioritises conformity over critical evaluation of ideas; “we’ve always done it this way” is an expression of Groupthink suppressing an objection. Group members who continually express dissenting opinions may find their voices suppressed. Either they leave the group so their voice no longer troubles the group, or they avoid speaking out and offering their own differing perspectives in order to maintain group cohesion.

We can see Groupthink operating every time a SMIDSY crash video is put up. Having done my own research, I’ve documented the visual perception issues behind the ‘Looked But Failed To See’ crash in the ‘Science Of Being Seen’ (SOBS) project, and shown that incidents where a driver genuinely ‘did not look’ at a junction and caused a collision are rare. Yet as soon as a video appears, there will be a long sequence of responses claiming the driver “didn’t look”, or must have been “on the phone”, or “was distracted”.

Without a crash study, those are speculative statements, powered by Groupthink: our peers telling us “what must have happened”. The facts, as gleaned from scientific investigations into how road users behave at junctions, show that it’s far more likely the driver:

    • ‘looked but COULD NOT see’ thanks to Vision Blockers.
    • ‘looked but FAILED to see’ thanks to visual perception issues.
    • ‘looked, saw but MISJUDGED speed and distance’ thanks to the cognitive difficulties of determining the ‘time to arrival’ of small objects like motorcycles.
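The ‘time to arrival’ difficulty in that last point can be sketched with the standard looming model: under a small-angle approximation, the rate of change of an object’s visual angle is w·v/d², so a narrow motorcycle must get much closer than a car before its approach ‘looms’ above a detection threshold. All the numbers here (widths, closing speed, the ~0.004 rad/s threshold) are illustrative assumptions, not SOBS data:

```python
import math

def looming_rate_rad_s(width_m, distance_m, speed_ms):
    """Small-angle looming rate: d/dt (w/d) = w*v / d**2 for an object of
    width w closing at speed v from distance d."""
    return width_m * speed_ms / distance_m ** 2

def detection_distance_m(width_m, speed_ms, thresh_rad_s=0.004):
    """Distance inside which looming exceeds the threshold:
    w*v/d^2 = thresh  =>  d = sqrt(w*v / thresh)."""
    return math.sqrt(width_m * speed_ms / thresh_rad_s)

V = 25.0  # m/s closing speed, ~90 km/h (assumed)
for name, w in (("motorcycle", 0.8), ("car", 1.8)):
    d = detection_distance_m(w, V)
    print(f"{name}: approach only 'looms' inside ~{d:.0f} m "
          f"(~{d / V:.1f} s before arrival)")
```

On these assumed figures the car’s approach becomes visually obvious some 35 m further out than the motorcycle’s, which is a second and a half of extra decision time for the driver. The bike isn’t invisible; its approach is just much harder to judge.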

As I’ve shown in SOBS, “didn’t look” or “distracted” are rare events, alongside causative factors such as “medical emergencies” which we’d probably never consider. Together they account for just one in ten collisions at junctions.

Even spending a few moments using some critical thinking would show us that if drivers really ‘didn’t look’ they’d rarely get past the first intersection where they encounter other vehicles! There’s no property that makes other vehicles somehow magically visible to drivers who aren’t looking.

But every time we see a newspaper report saying a crash was caused by a “driver not looking properly”, or police claim their statistics show most collisions result from “poor observation”, it reinforces our built-in tendency to look for information that aligns with our existing beliefs.

So what can we do?

The short answer is “ask questions”.

1. Cultivate intellectual curiosity.

2. Pause before accepting a claim.

Ask yourself:

    • Is this a proven fact?
    • Is it an opinion backed by reasoning?
    • Or is it just a hunch dressed up as expertise?

The moment someone says “we all know…”, treat it as a warning sign. And be especially wary of influencers — including those in motorcycling — who treat facts, data, and science as optional extras rather than foundations.

This doesn’t mean you should stop exploring new ideas. It means you should evaluate them, especially when you find yourself nodding along.

And yes — that includes this article.

Next time: we’ll look at how influencers misuse data, why misleading statistics are so persuasive, and how to spot when you’re being led astray.