Tuesday, 12th June 2018 at 9:11 am
Will Dayble ponders the who and what of measurement, and whether people are best suited to measure themselves.
Measurement kills me. Rubrics are a mangled formalisation of intuition, where biases lurk and assumptions run riot. Frameworks shield us from the search for truth and standardisation reeks like a smoke stack.
Why so serious, Will?
It’s mid-semester break at university right now, and teachers are mashing our beloved students through another stale, semester-end, standardised measurement framework.
Don’t get me wrong, I love my work. It’s just the measurement bit that summons a cacophony of generalised anxieties about the well-intentioned traumas of measurement and management.
What follows is as much a cry for help as a series of unresolved questions. If you want to help me figure this out, email me directly so we can figure it out together: email@example.com.
- We want people to be more creative and entrepreneurial.
- We like to measure things.
This dichotomy makes working on impact and entrepreneurship difficult, and I believe my struggle is shared by anyone who’s invested in the growth of people and communities.
The immediate questions for me are all about people, timing, and agency.
Q: Who should measure?
Caroline Fiennes is arguably the world’s foremost evidence-based philanthropy expert. I think she’s straight up brilliant. She’s also Fitzroy Academy faculty, and nails it in the first sentence of her lesson: “To me one of the single most useless research questions is ‘What is your impact?’ Firstly, you can never get a reliable answer to that, and secondly, even if you know the answer, it doesn’t help you.”
Her provocation (if oversimplified) is that you should only measure things that will materially change how you operate in the very near future. The big, meaty, important questions should be asked – and answered – by people who are trained to do that kind of research properly.
In her fantastic talk on the foibles of research, Caroline shows that most research produced by charities is not only terrible quality (jump to 8:03 for the graph), but that favourable results are more likely to come from poor-quality research. Ouch.
Put simply: We publish bullshit when it makes us look good, and we measure only because we’ve been made to by some higher power.
Therefore, if capacity builders are measured on the outputs of the people they support, we will optimise towards selecting those likely to succeed without our help. This effect masks a subtle disembowelling of a program’s true potential.
Q: When should we measure?
Much modern innovation and impact theory trends towards impatience. Go go go. Learn fast.
Brandon from Shopify has a fascinating piece where he shows that good decision makers are quick decision makers.
Memes like Agile, Lean, and Blitzscaling are mostly built on the idea that moving fast is more important than almost anything else. Don’t measure end state; measure trajectory, escape velocity, and other rocket-based metaphors.
To clarify: I think Blitzscaling is fascinating, while having the potential to be the most horrid thing to come from the valley yet. It takes the most negative traits of the “move fast and break stuff” paradigm and says: “You know what bro, we can ratchet this up even harder. Booyah. Crushing it. Fist bump.”
Ethical train wrecks aside, speed of measurement is interesting. Measure less, more often, and act upon that measurement. Booyah.
A lot of Kevin from Mulago’s approach, aka The Lazy Funder’s Guide to High-Yield Philanthropy, is about using time effectively. He has strong views on proposals and paperwork:
“RFPs and application processes waste too much time for too many people, while generating huge piles of turgid stuff nobody really wants to read.”
Graeber has similar views on entire professions in his new book, Bullshit Jobs.
I’m sure you get similar feels any time a funder asks for an impact report, or a government client asks you to log into their EZ Tender-o-matic Portal 2000.
Funnily enough, for the search term “how to measure entrepreneurial ability”, four of the top 10 results on Google are direct links to PDFs. I’m not sure how to feel about that.
One alternative – ask them:
I’ve whined a lot and not offered any solutions, so here’s the beginning of an idea: Let people measure themselves.
Halfway through last semester’s classes, my students and I figured out what they’d learnt so far, and what they wanted to learn more of.
They distilled down what skills, behaviours and disciplines they believed were important, on their terms, for their definitions of success.
Diving deeper, four teams developed simple measurement methods:
- Weekly progress recorded as you wish (eg voice recording), collated into an Experience CV, including coffee dates and failures, after 12 weeks.
- Personalised assessment criteria, heavy on feedback and support. Progress tracked by both teacher and two peers for accountability.
- Weekly reflection on effort and small progress towards larger goals, with peer/mentor feedback. The uni would measure only whether it was done, not the content or “success”.
- Goal-based measurement: five goals at start of semester, with one goal dropped every fortnight, measured against self-set standards by self, peers, and teacher. End with one clear goal as ongoing focus.
The last one slays me. It’s so clean, and deceptively simple. It’s useful measurement, in the “true, useful and kind” sense.
Imagine if an organisation proposed that to their funder: “We have five ideas that might work. We’re going to systematically drop the least viable one every N weeks. Stick with us: if and when we learn what works, we’re going to focus on that until the job is done.”
Now imagine a funder savvy enough to back that team. Phew.
I also love how easily these students opted in for peer accountability, and that the only instance of the word “success” appeared in quotes.
“Success” in learning is beautiful, but only because it’s entirely in the eye of the beholder. It’s also used shamelessly to sell people things.
But seriously, this is all theory.
I want to figure this out. If you have ideas on how we can work on this productively, please reach out so we can start the work. 🙌 😁
P.S. An addendum:
Years ago when I started digging into this problem I asked Paul Steele from Benefit Capital, “What is an entrepreneur?” His deliciously simple answer: “Someone who lies awake at night wondering how they’ll make payroll.”
(I forgot he’d said this at all, and he had to remind me recently. Oops.)
Maybe we should be asking our students and peers, “Do you stay awake at night thinking about the impact you’re creating?” and optimise towards more sleepless nights. Maybe not.
About the author: Will Dayble is a teacher, and founder of the Fitzroy Academy, an online social impact school. The academy works with students and educators to teach people about entrepreneurship and social impact. Will is at once a loyal supporter and fierce critic of both the startup and impact ecosystems.
This is part of a regular series of articles for Pro Bono Australia exploring impact, education and startups. Please do reach out with advice, commentary, criticism or ideas: firstname.lastname@example.org.