Strange Math That Predicts Everything: The Story of Pavel Nekrasov, Markov Chains, and the Birth of Modern Probability

The Mystery of Predicting Everything

Imagine a world where math could predict everything — from the weather tomorrow, to the stock market, even to how crowds move or how diseases spread. Sounds like science fiction? Yet, deep within the world of mathematics, a special branch called probability theory aims to do exactly that: to make sense of uncertainty, randomness, and change.

But it wasn’t always this way. A little over a century ago, probability theory was still very much a puzzle. How could we use math to understand things that seemed random or unpredictable? Was it even possible to have a mathematical formula that predicts everything?

This article takes you on a fascinating journey, starting with a man named Pavel Nekrasov, whose ideas sparked a revolution in how we understand randomness and prediction. Along the way, we’ll meet other brilliant thinkers like Andrey Markov, whose disagreements with Nekrasov led to the creation of Markov chains, a powerful tool still used today in everything from Google’s search algorithms to weather forecasting.

Whether you’re a math lover or just curious about how math helps us predict the world, this story will open your eyes to the strange, beautiful world where math and randomness dance together — a world where strange math really does predict almost everything.


Chapter 1: The Early Days of Probability — A World of Uncertainty

Before we dive into Pavel Nekrasov’s story, it helps to understand what probability is and why it mattered.

Probability is a branch of math that deals with uncertainty — like rolling dice, flipping coins, or predicting rain. It tries to give a number to how likely something is to happen, from 0 (impossible) to 1 (certain).

Why did people start thinking about probability?

People have always wondered about chance: What are the odds? Why do some things happen and others don’t? Games of chance, gambling, insurance — all these things pushed mathematicians to find ways to calculate probabilities.

But in the 1800s, probability was still rough around the edges. Many mathematicians worked hard to make it more rigorous and reliable. Among these was Pavel Nekrasov, a Russian mathematician born in 1853, who wanted to build a strong foundation for probability.


Chapter 2: Pavel Nekrasov — The Man and His Ideas

Pavel Nekrasov was a serious and careful thinker. He wanted probability to be a strict science, based on clear rules.

At the time, mathematicians were fascinated by the Law of Large Numbers — a principle that says if you repeat an experiment many times, the average result tends to settle down to a predictable number. For example, if you flip a fair coin thousands of times, the proportion of heads should get closer and closer to 50%.
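The coin-flip version of the Law of Large Numbers is easy to check empirically. Here is a minimal simulation (the seed and function name are our own, for illustration) showing the fraction of heads settling toward 50% as the number of flips grows:

```python
import random

def heads_fraction(num_flips, seed=0):
    """Flip a fair coin num_flips times and return the fraction of heads."""
    rng = random.Random(seed)
    heads = sum(rng.random() < 0.5 for _ in range(num_flips))
    return heads / num_flips

# The fraction of heads drifts toward 0.5 as the number of flips grows.
for n in (10, 1_000, 100_000):
    print(n, round(heads_fraction(n), 3))
```

With only 10 flips the fraction can be far from 0.5; by 100,000 flips it is reliably within a fraction of a percent.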

Nekrasov tried to understand and prove this law more generally. But he made a particular choice: he insisted that the random things you look at should be independent and non-negative. “Independent” means the result of one doesn’t affect the others (like flipping different coins). “Non-negative” means the values are zero or more (like counting something, never negative).

He thought this restriction would make the math cleaner and more powerful. But there was a problem.


Chapter 3: The Nekrasov–Markov Dispute — A Mathematical Debate That Changed Everything

Enter Andrey Markov, another brilliant Russian mathematician, known for his openness to messier, more complicated problems.

Markov disagreed with Nekrasov’s narrow view. He argued that in real life, many things aren’t independent — they can influence each other. For example, if one day is rainy, the next day is more likely to be rainy too. These dependencies matter.

Markov showed that the Law of Large Numbers could still work even when things depend on each other in certain ways. This insight led him to develop what we now call Markov chains — models of processes where the future depends on the present, but not on the distant past.

This debate wasn’t just academic. It opened the door to modeling complex, realistic systems where things depend on each other, rather than being magically independent.


Chapter 4: What Are Markov Chains? Explaining the Idea Simply

Think about a board game where you move pieces by rolling dice. Now imagine the rules change so that your next move depends only on where you are now — not how you got there.

Markov chains are like that: a way to model processes where the future depends only on the present state, not the full history.

Everyday examples of Markov chains:

  • Weather prediction: If today is sunny, there’s a certain chance tomorrow is sunny or rainy, but it depends mostly on today, not the whole past week.
  • Text prediction: Your phone guesses your next word based on the last word you typed, not the entire sentence.
  • Google’s PageRank: It models users clicking through websites as a Markov chain to rank pages.
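The PageRank idea in the last bullet can be sketched in a few lines. This is a toy version with a hypothetical three-page web (the page names and link structure are invented for illustration); it repeatedly redistributes rank along links, which is exactly following the Markov chain of a random web surfer:

```python
# Hypothetical three-page web: each page lists the pages it links to.
LINKS = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
DAMPING = 0.85  # standard damping factor from the original PageRank scheme

def pagerank(links, damping=DAMPING, iterations=100):
    """Power iteration: repeatedly redistribute rank along outgoing links."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Every page keeps a small baseline of rank (the "random jump").
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            share = damping * rank[page] / len(outlinks)
            for target in outlinks:
                new_rank[target] += share
        rank = new_rank
    return rank

# Page C gathers the most rank, since both A and B link to it.
print(pagerank(LINKS))
```

Real PageRank runs on billions of pages and handles complications like dangling links, but the core loop is this same Markov-chain iteration.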

Markov chains are a powerful way to predict complex systems — because even though they only look one step ahead, their long-term behavior can be calculated and understood mathematically.
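The weather example above can be written out as a tiny Markov chain. The transition probabilities below are made up for illustration; the point is that the simulation only ever looks at today's state, yet the long-run fractions of sunny and rainy days converge to fixed values:

```python
import random

# Illustrative transition probabilities: P(tomorrow | today).
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def next_state(state, rng):
    """Sample tomorrow's weather given only today's weather."""
    r = rng.random()
    cumulative = 0.0
    for candidate, p in TRANSITIONS[state].items():
        cumulative += p
        if r < cumulative:
            return candidate
    return candidate  # guard against floating-point round-off

def simulate(days, start="sunny", seed=1):
    """Run the chain and return the fraction of days spent in each state."""
    rng = random.Random(seed)
    state, counts = start, {"sunny": 0, "rainy": 0}
    for _ in range(days):
        counts[state] += 1
        state = next_state(state, rng)
    return {s: c / days for s, c in counts.items()}

# For these numbers the long-run split is 2/3 sunny, 1/3 rainy,
# no matter which state the chain starts in.
print(simulate(100_000))
```

That long-run split is the chain's stationary distribution, and it can also be found exactly by solving a small system of linear equations rather than by simulation.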


Chapter 5: Why Pavel Nekrasov’s “Strange Math” Was Important

Even though Markov disagreed with Nekrasov’s strict rules, Nekrasov’s insistence on rigor pushed probability theory forward.

His “strange math” — focusing on independence and non-negativity — made mathematicians think deeply about which assumptions were actually necessary to predict things.

This led to the recognition that real-world randomness isn’t always neat or independent, inspiring the creation of new, more flexible models.

In that sense, Nekrasov’s ideas helped spark the development of modern probability, which today underpins much of science and technology — from risk analysis to machine learning.


Chapter 6: From Nekrasov to Today — How Strange Math Predicts Almost Everything

Today, the tools that arose from this early debate — especially Markov chains and their descendants — let us model almost any system involving uncertainty.

  • Finance: Predicting stock prices and market risks.
  • Biology: Modeling populations and gene expression.
  • Computer Science: Algorithms, artificial intelligence, language processing.
  • Physics: Understanding particles and quantum phenomena.

The strange, abstract math that started as a debate between two mathematicians has blossomed into a framework that helps predict phenomena ranging from the smallest particles to human behavior.


Chapter 7: Why Should We Care?

Understanding the origins of these ideas helps us appreciate the math behind everyday technologies — like your smartphone’s suggestions, medical diagnoses, or even how governments prepare for pandemics.

It also reminds us that math isn’t just about numbers; it’s about making sense of uncertainty, and using careful logic to predict the unpredictable.


Conclusion: The Legacy of Pavel Nekrasov and the Power of Prediction

Pavel Nekrasov’s work may seem like a footnote in history, but his ideas forced others to rethink probability, leading to powerful tools we rely on today.

The strange math he championed — and the debates it inspired — paved the way for a modern understanding of randomness and prediction, shaping science, technology, and even how we understand ourselves.

In the end, the story of Nekrasov and Markov is a story of curiosity, rigor, and the quest to predict everything in a world full of uncertainty.

