
Subtitle | Finding the Value of "Intangibles" in Business |
First Written | 2007 |
Genre | Business |
Origin | US |
Publisher | Wiley |
My Copy | hardback |
First Read | August 18, 2024 |
How to Measure Anything
I. The solution exists.
1. Challenging the idea of intangibles.
‘Intangible’ means literally that you can’t touch it, but in biz we mean ‘this is not measurable’, like flexibility or brand image or premium positioning. But Hubbard calls bullshit, and says if it’s a real thing and it matters to your business, then it CAN be measured and it can be done in an economically viable way.
2. “Intuitive Measurement Habits”.
Starts with three Estimating heroes and their stories:
- Eratosthenes, who measured the size of the earth more or less from first principles by measuring a shadow in Alexandria,
- Enrico Fermi who popularized ‘Fermi questions’ where you reduce uncertainty on an estimate (how many piano tuners in Chicago) by estimating relevant related ideas, and
- Emily Rosa, who won a science fair by devising a simple measurement experiment to see if ‘touch healers’ could even detect human ‘energy’ in the first place.
Big banger: if some touchy feely thing (‘empowerment’ or ‘creativity’ or ‘innovation’) is REAL, then it has an observable effect in the world. If it has an observable effect, then you can measure it. If it isn’t observable, are you sure it exists at all? E.g., in a parallel universe where GM is exactly the same but has more ‘innovation’, how would it be different?
3. Illusion of intangibles: they AREN’T.
If you think something can’t be measured, consider this mnemonic: howtomeasureanything.COM. The objection is always about the Concept of measurement itself, or the Object being measured, or the Methods of measurement. (Ed note: what a ballsy mnemonic?)
CONCEPT of measurement. A measurement is not, like, a TRUTH. It’s a set of observations that reduce uncertainty, where the result is expressed as a quantity. (It doesn’t need to be ABOUT a quantity, just expressed as one.) Quantities can be binary (yes/no, is/isn’t) or ordinal (more/less). Mohs hardness scale as example: doesn’t tell you how hard something is in absolute terms, just relative to the rocks above/below it.
Fujita scale for tornadoes is about what observed damage they do, not an absolute ‘how strong is this wind force’.
OBJECT of measurement. (A problem well stated is half solved.) Does it seem immeasurable because you actually just don’t know what it is? E.g., how do you measure mentorship? Well, what is mentorship?
- if it matters, then it makes a difference in the world and that’s observable
- if it’s observable, then it can be detected as an amount or range
- if it can be an amount, then it can be MEASURED.
Remember: a measurement reduces uncertainty. It doesn’t have to be a true absolute fact.
4 good assumptions about measuring something difficult:
1. Your problem is not as unique as you think
2. You have more data than you think
3. You need less data than you think
4. There is a useful measurement that is simpler than you think.
METHODS of measurement. It’s not just counting. You can do random samples. You can count things you can’t see. You can count the opposite of something. Get creative.
Objections:
- economic: sometimes the “information value” of a measurement is 0 or &lt;0 (it costs more than it’s worth). That’s true.
- rhetorical: ‘you can prove anything with statistics’. Bullshit, that’s not what proving is. But you can confuse or trick people with numbers. So what?
- ethical: measuring something might feel dehumanizing or icky. So? Is ignorance better than knowledge?
II. BEFORE you measure.
4. Clarifying the Measurement Problem.
Clarify what you’re measuring. Some questions to ask:
- what decision is this measurement going to support?
- what really is being measured?
- why does this thing matter to the decision?
- what do you already know about this?
- what is the value to measuring further?
E.g., when you say ‘we want to measure IT security’… what is that? What does your org look like with ‘more’ security?
5. Calibrated Estimates.
How much do you know NOW? An intro to the idea of a ‘confidence interval’.
Most things should be estimated with 90% confidence interval. He’s got exercises to train your sense of 90% confident. A series of fact questions, where you supply an upper and lower bound with 90% confidence.
E.g.: when did Newton publish his laws of gravitation (1400-1700?), or how wide is the wingspan of a 747 (200-400ft?)
THEN imagine a spinner wheel where 9/10 of the disc is ‘win $1000’ and 1/10 is ‘win zero’. If I gave you $1000 for getting the Newton question right, the two bets should FEEL like the same odds if you’re really 90% confident. If you prefer the Newton question, then you’re overconfident there and you should narrow your range (1500-1700?). If you prefer the wheel, you’re underconfident and you should expand your range (1400-1750?). You could be 100% confident on Newton (1000 BC to 2024 AD) but that’s not the point.
You might feel like you’re just guessing, but you can apply some absurdity tests. Is a 747 wider than 3 football fields? Obviously not, so it’s &lt;900ft. Is it wider than half a football field? Probably, so &gt;150ft.
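The whole training loop boils down to a scoring rule: if your 90% intervals are actually calibrated, about 9 in 10 of them should contain the true answer. A toy self-check in Python (the questions, intervals, and “true” values here are my own illustrations, not Hubbard’s test set):

```python
# Calibration check: what fraction of your stated 90% confidence
# intervals actually contain the true answer? Calibrated = ~90%
# over many questions. (All data below is illustrative.)

answers = {  # true values
    "year Newton published the Principia": 1687,
    "wingspan of a 747 in feet": 212,
    "year the Eiffel Tower opened": 1889,
}

my_intervals = {  # your stated 90% CI: (lower, upper)
    "year Newton published the Principia": (1500, 1700),
    "wingspan of a 747 in feet": (150, 300),
    "year the Eiffel Tower opened": (1850, 1920),
}

hits = sum(lo <= answers[q] <= hi for q, (lo, hi) in my_intervals.items())
hit_rate = hits / len(my_intervals)
print(f"{hits}/{len(my_intervals)} intervals contained the truth ({hit_rate:.0%})")
```

Three questions proves nothing, of course; the appendix-style training uses dozens so the hit rate is meaningful.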
Hubbard does this calibration training for orgs. There’s an appendix with example questions and results.
6. Measuring Risk. An intro to Monte Carlo.
Risk = uncertainty where the possible outcome involves a loss.
You can’t just have ‘high, med, low’ risk. The risk itself is relative to the scale of the loss. A coin flip on $1 bet is a low risk. A coin flip on my IRA is high risk.
He explains Monte Carlo simulations (multiplying out the risk factors over time and across multiple scenario ranges) and explains how to build this in excel. This book is from 2007.
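The book builds this in Excel; here’s the same idea sketched in Python. The scenario and ranges below are hypothetical (in the spirit of the book’s machine-lease example, but my numbers), and the 3.29 figure is the standard fact that a 90% CI on a normal distribution spans about 3.29 standard deviations:

```python
import random

random.seed(0)

def normal_from_ci(lo, hi):
    """Turn a calibrated 90% CI into a random normal draw.
    A 90% CI on a normal distribution spans ~3.29 std devs."""
    mean = (lo + hi) / 2
    sd = (hi - lo) / 3.29
    return random.normalvariate(mean, sd)

# Hypothetical: lease a machine for $400k/yr; savings are uncertain.
LEASE_COST = 400_000
TRIALS = 10_000

losses = 0
for _ in range(TRIALS):
    maintenance_per_unit = normal_from_ci(10, 20)   # $/unit saved
    labor_per_unit = normal_from_ci(-2, 8)          # could be negative!
    materials_per_unit = normal_from_ci(3, 9)
    units = normal_from_ci(15_000, 35_000)          # production volume
    savings = (maintenance_per_unit + labor_per_unit + materials_per_unit) * units
    if savings < LEASE_COST:
        losses += 1

print(f"Chance the lease loses money: {losses / TRIALS:.1%}")
```

The point is that you never picked a single “expected savings” number; you fed in calibrated ranges and got a probability of loss out the other end.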
7. Measuring value of information.
Three ways your measurement can have value:
- info reduces uncertainty about decisions with economic consequences
- info affects the behavior of others, which has economic consequences
- info can have market value itself.
EVI: ‘expected value of information’. He’s got a formula to calculate this. If you expect to make money based on an activity that has a range of consequences, then there’s a way to calculate the value of measuring information that reduces uncertainty about that range. This part felt a little abstract.
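The simplest case of the idea, as I understand it: for a binary decision, the expected value of perfect information is just the chance you’re wrong times the cost of being wrong. A sketch with made-up numbers:

```python
# Expected value of perfect information (EVPI), binary-decision case:
# EVPI = (chance of being wrong) * (cost of being wrong).
# Numbers are hypothetical.

p_campaign_fails = 0.4       # your calibrated probability of failure
cost_if_it_fails = 500_000   # money lost if you run it and it flops

# If you proceed without measuring, this is your expected opportunity
# loss -- so never spend more than this on the measurement itself.
evpi = p_campaign_fails * cost_if_it_fails
print(f"Worth up to ${evpi:,.0f} to eliminate the uncertainty")
```

Partial (realistic) measurements are worth some fraction of that ceiling, which is where the formula gets more abstract.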
Subject to various biases! Streetlight/availability fallacy, managers only measuring things that will give good news, or ‘my only tool is a hammer’ scenarios.
III. Measurement methods
8. How to measure.
This gets into some basics that seemed a little more obvious. Decompose your problem into smaller measurable chunks. Look for other research where this has already been done. Measure just enough to get what you need.
9. Sampling Reality.
An intro to how random sampling works. t-statistics. You approach a 90% range quickly across a low number of measurements. Standard deviations. Watch for biases in your sampling (too clustered, too available, etc.). Kinsey got his sex research data by asking for referrals, which maybe biased it towards people more sexually open than the average? Regression modeling. Lines of best fit.
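The “you approach a 90% range quickly” idea can be sketched with a t-based interval (stdlib only; the sample numbers are made up, and the t critical value is hardcoded from a standard table rather than computed):

```python
import statistics

# 90% confidence interval for a mean from a tiny random sample.
# Hypothetical data: minutes per day employees spend on some task.
sample = [38, 51, 44, 62, 47]

n = len(sample)
mean = statistics.mean(sample)
sd = statistics.stdev(sample)     # sample standard deviation
T_CRIT_90_DF4 = 2.132             # t table, df = n - 1 = 4, two-sided 90%
margin = T_CRIT_90_DF4 * sd / n ** 0.5

lower, upper = mean - margin, mean + margin
print(f"90% CI for the mean: {lower:.1f} to {upper:.1f} minutes")
```

Five data points already buys you a usable range, which is the book’s recurring point: you need less data than you think.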
10. Bayes: adding to what you know now.
Our buddy Bayes! Prior knowledge should count when you do stats. E.g., you sample sales in March-Sept. You can’t extrapolate directly for a year, because you know xmas is going to change that shape.
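For reference, Bayes’ rule itself is just P(A|B) = P(B|A) · P(A) / P(B). A toy worked example with made-up numbers:

```python
# Bayes' rule applied to a hypothetical defect test on a production line.
p_defect = 0.02            # prior: 2% of units are defective
p_flag_if_defect = 0.95    # test catches 95% of defects
p_flag_if_ok = 0.10        # but false-flags 10% of good units

# Total probability a unit gets flagged, defective or not.
p_flag = p_flag_if_defect * p_defect + p_flag_if_ok * (1 - p_defect)

# Posterior: how worried should you be about a flagged unit?
p_defect_given_flag = p_flag_if_defect * p_defect / p_flag
print(f"P(defective | flagged) = {p_defect_given_flag:.1%}")
```

The posterior comes out surprisingly low because the prior is low, which is exactly why prior knowledge has to count.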
The instinctive bayesian method: start with a calibrated estimate, gather additional info, update your estimate (WITHOUT doing any extra math).
Heterogeneous benchmarks. Estimating the weight of an average jellybean in grams is tough. But what if I told you an average business card weighs 1 gram? That helps. It doesn’t tell you anything about a jellybean but it tells you more about grams.
IT security examples. Everybody has a ‘wouldn’t it be horrible if…’ scenario but that doesn’t give you any sense of how to prioritize stuff. So you have to estimate just how horrible it would be.
IV. Beyond the basics, examples.
11. Preferences and attitudes.
You can measure preferences based on what people SAY and what they DO. Stated vs revealed prefs. Stated prefs are in surveys. Survey design is hard. Watch out for leading the witness and other biases. Look at tradeoffs to find a measurement (e.g. art may seem ‘priceless’, but actually you can buy a Picasso for $10M). Sometimes you can throw out everything people think is valuable to measure and just use a simpler metric (e.g. MONEYBALL’s on-base %).
12. The ultimate measurement instrument: Human judgement
Sometimes people’s judgement is just better than other instruments. BUT look out for common cognitive biases:
- anchoring
- halo/horns effect
- bandwagoning
Rasch models: a method for normalizing human judgement (basically comparing each judge’s results to the other judges’, I think?)
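For what it’s worth, the core of a Rasch model is just a logistic function of (latent ability minus difficulty), which is what lets you adjust a rating for how harsh a particular judge is. A toy sketch with made-up numbers:

```python
import math

def rasch_probability(ability, difficulty):
    """Rasch model: probability of a 'pass' rating, given the
    subject's latent ability and the judge/item difficulty."""
    return 1 / (1 + math.exp(-(ability - difficulty)))

# Hypothetical: the same candidate rated by a lenient vs harsh judge.
candidate_ability = 1.0
p_lenient = rasch_probability(candidate_ability, difficulty=-0.5)
p_harsh = rasch_probability(candidate_ability, difficulty=2.0)
print(f"lenient judge: {p_lenient:.2f}, harsh judge: {p_harsh:.2f}")
```

Fitting the ability and difficulty parameters from real judging data is the actual work; this just shows the shape of the model.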
Questionable methods: above all else, don’t use methods that ADD more error to your initial estimate. (Remember: a measurement should REDUCE uncertainty.)
13. New Measurement Instruments
In short: computers!
14. A universal measurement method: applied information economics
AIE should answer 4 things:
- how to model the current state of uncertainty
- how to compute what should be measured
- how to measure those things in an economically justifiable way
- how to decide
Noted on August 20, 2024
Honestly, I’ve been telling people this is a banger of a business book. It’s full of actually interesting ideas that I have not come across anywhere else. And it’s pretty dense. It’s NOT a blog post of content padded with a book’s worth of cruft. Here are my notes, chapter-by-chapter.