---
visibility: public-edit
---

# probability in daily life

"should I bring an umbrella?" that's a probability question. "is this email a scam?" probability. "should I take the highway or surface streets?" probability. you're a probabilistic reasoning engine whether you know it or not — the question is whether you're a good one.

## bayesian reasoning without knowing it

you get a text from an unknown number: "hey, it's me, I got a new phone." who is it? your brain immediately does bayesian inference:

- **prior**: who would text you? friends, family. probably not your dentist.
- **likelihood**: who recently complained about their phone? who writes "hey, it's me" vs "hi, this is [name]"?
- **posterior**: combining these, you make a guess.

this is bayes' theorem: P(hypothesis | evidence) ∝ P(evidence | hypothesis) × P(hypothesis). you update your beliefs based on new information, weighted by how likely that information would be under each hypothesis.

bayes' theorem isn't just a formula — it's a *thinking tool*. it tells you that evidence that's equally likely under all hypotheses is worthless. a medical test that comes back positive for everyone doesn't tell you anything. the value of evidence is its ability to *distinguish* between possibilities.

## base rate neglect

a disease affects 1 in 10,000 people. a test for it is 99% accurate. you test positive. what's the probability you have the disease?

most people say 99%. the real answer: about 1%.

why? because 99% accuracy means a 1% false positive rate. if you test 10,000 people, 1 has the disease (a true positive) and roughly 100 of the 9,999 healthy people get false positives. so you're 1 out of 101 positive results — roughly 1%.

base rate neglect — ignoring the prior probability — is one of the most common reasoning errors humans make. it affects medical diagnosis, criminal justice ("the DNA matches!"), and hiring ("they passed the coding test!"). the math is elementary. the mistake is universal.

## the monty hall problem

you're on a game show. three doors. one has a car, two have goats. you pick door 1. the host (who knows what's behind each door) opens door 3, revealing a goat. should you switch to door 2?

yes. switching gives you a 2/3 chance of winning. this problem is famous because it's genuinely counterintuitive.

the key insight: the host's action gives you information. when you first picked, you had a 1/3 chance of being right. that means there's a 2/3 chance the car is behind one of the other doors. the host eliminated one of those doors for you, concentrating the 2/3 probability onto the remaining door.

the deeper lesson: new information should change your beliefs. refusing to update ("I'll stick with my choice") is not rationality — it's stubbornness. this connects directly to bayesian reasoning: your posterior should change when you get new evidence.

## expected value

should you buy a $2 lottery ticket with a 1-in-10-million chance at $5 million?

expected value: (1/10,000,000) × $5,000,000 = $0.50. you're paying $2 for $0.50 of expected value. mathematically, it's a bad deal.

but expected value isn't everything. the *utility* of $5 million isn't 2,500,000× the utility of $2. money has diminishing marginal utility — the first million matters way more than the fifth. this is why insurance makes mathematical sense: you pay more than the expected loss because the *disutility* of a catastrophic loss is much worse than the disutility of small premiums.
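a minimal sketch of this comparison in python. the $50k starting wealth and the log-utility model are illustrative assumptions, not part of the argument above:

```python
# expected value vs. expected utility of a $2 lottery ticket.
# assumption: log-wealth utility as a simple stand-in for diminishing
# marginal utility, and a $50k starting wealth. both are illustrative.
import math

wealth = 50_000
ticket_price = 2
prize = 5_000_000
p_win = 1 / 10_000_000

# expected value of the ticket: $0.50, versus a $2 price
ev = p_win * prize
print(f"expected value of ticket: ${ev:.2f}")

# expected utility of buying vs. skipping, under log utility
u_skip = math.log(wealth)
u_buy = (p_win * math.log(wealth - ticket_price + prize)
         + (1 - p_win) * math.log(wealth - ticket_price))
print(f"expected utility, skip: {u_skip:.8f}")
print(f"expected utility, buy:  {u_buy:.8f}")  # slightly lower: a bad deal twice over
```

flip the sign of the stakes and the same arithmetic justifies insurance: a small certain premium can beat a small chance of a ruinous loss in utility terms, even when the premium exceeds the expected loss.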
this same logic explains why you should wear a seatbelt (low-probability event, catastrophic downside), why you should diversify investments (reducing variance even at the cost of expected return), and why casinos always win (they play the expected value game over thousands of bets; you only play once).

## risk assessment

humans are terrible at assessing risk intuitively:

- we overestimate dramatic risks (plane crashes, shark attacks, terrorism)
- we underestimate mundane risks (car accidents, heart disease, falls)
- we treat very small probabilities as zero (until it happens to us)
- we treat very small probabilities as large when the outcome is vivid (nuclear meltdown)

the fix isn't to "stop being biased" — it's to do the math. [[patterns-and-estimation|estimation]] skills help here: if you can get the order of magnitude right, you're already ahead of most people's intuitions.

## independence and conditional probability

two dice rolls are independent — the first doesn't affect the second. but many real-world events that *feel* independent aren't. your probability of getting a job depends on the economy, which depends on interest rates, which depend on inflation, which depends on supply chains, which depend on geopolitics.

the gambler's fallacy — "I've flipped 5 heads in a row, so tails is due" — is an error about independence. the coin doesn't remember its history. but in many real situations, history *does* matter: a basketball player who's made their last 5 shots might genuinely have a higher probability of making the next one (the hot hand is real, it turns out, though weaker than people think).

understanding when events are truly independent vs when they're correlated is one of the most practically important probabilistic skills. [[engineering-and-modeling|engineering and modeling]] relies heavily on getting these dependencies right — in monte carlo simulation, assuming independence when events are correlated gives wildly wrong results.

when probability moves from discrete events (coin flips, dice) to continuous distributions (heights, temperatures, stock prices), you need [[calculus-as-thinking|calculus]] — continuous probability is built on integration, and concepts like probability density functions only make sense through the lens of limits and infinitesimals. the [[counting-and-measurement|measurement]] of probability itself raises deep questions about what we're actually quantifying.
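to make the earlier monte carlo point concrete, here's a sketch comparing two worlds with identical marginal probabilities but different dependence structure. the "shared bad year" model is entirely made up for illustration:

```python
# monte carlo sketch: probability that *all* of 10 events happen,
# comparing truly independent events to perfectly correlated ones.
# the "shared bad year" model below is an illustrative assumption.
import random

TRIALS = 200_000
N_EVENTS = 10
P_EVENT = 0.10  # each event happens 10% of the time, marginally

def all_happen_independent() -> bool:
    return all(random.random() < P_EVENT for _ in range(N_EVENTS))

def all_happen_correlated() -> bool:
    # shared shock: 10% of the time every event happens, otherwise none do.
    # each event still happens 10% of the time marginally.
    return random.random() < P_EVENT

indep = sum(all_happen_independent() for _ in range(TRIALS)) / TRIALS
corr = sum(all_happen_correlated() for _ in range(TRIALS)) / TRIALS

print(f"P(all 10) if independent: {indep:.2e}")  # estimate ~0; true value is 0.1**10 = 1e-10
print(f"P(all 10) if correlated:  {corr:.2e}")   # ~1e-01: one bad year in ten
```

the marginal probabilities are identical in both worlds; only the dependence structure differs, and the answer to "do they all happen at once?" moves by nine orders of magnitude.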
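and since the monty hall answer above is the one people most often refuse to believe, it's worth noting you can check it empirically. a simulation sketch (the door numbering is an arbitrary choice):

```python
# simulate the monty hall game: pick a door, the host opens a goat door,
# then compare the win rates of staying vs switching.
import random

def play(switch: bool) -> bool:
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # host opens a door that is neither your pick nor the car
    host_opens = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # switch to the one remaining closed door
        pick = next(d for d in doors if d != pick and d != host_opens)
    return pick == car

TRIALS = 100_000
stay_rate = sum(play(switch=False) for _ in range(TRIALS)) / TRIALS
switch_rate = sum(play(switch=True) for _ in range(TRIALS)) / TRIALS
print(f"win rate if you stay:   {stay_rate:.3f}")  # ≈ 1/3
print(f"win rate if you switch: {switch_rate:.3f}")  # ≈ 2/3
```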