For most of my working life, I’ve been part of a problem I only recently learned how to articulate.
Somewhere between impact frameworks, funding bids, and conversations about “what works”, I slipped into a culture that measures everything… except, perhaps, the things that matter most. These days I’m so deep in the culture wars over impact, measurement, and grant funding that I can’t use a motorway loo without interrogating the power dynamics behind the feedback form!

I’ve spent years working at the crossroads of nature conservation, community engagement and landscape recovery. Work that is, by its nature, deeply complex, slow and relational. Much of that work, however, has been delivered through short-term project funding that demands instant results, expressed in whatever format the funder of the moment requires.
I’ve written more reports than I care to remember, many of which I’m fairly sure nobody read. I’ve lost sleep over targets I didn’t hit, without realising at the time how arbitrary many of them were. I’ve also, on occasion, stretched the numbers a little to meet a target. Haven’t we all?
You know things have become really tenuous when a chat in the pub with a farmer, during which the conversation briefly drifts on to the ‘Big Butterfly Count in the village’, is quietly logged as the entire acreage of his farm “acting on advice received and delivering positive action for nature.”
For years, I conformed, and I even got pretty good at it. I learned the language: outputs, outcomes, indicators, attribution, performance. Our evaluation data was often celebrated. But was anything actually changing on the ground? I like to think it was… Still, I suspect we’ll be in a better position to know in about twenty years’ time – by which point the funding will have ended, the staff will have moved on, and the “learning” will be sitting quietly in a folder labelled Final Report (v7), waiting to be dusted off as evidence when or if needed.
What I didn’t do, until recently, was stop to ask whether the stories we were telling to justify the public money flowing into our projects really matched what was happening in the landscape, or in people’s lives.
Because out in the real world, the work isn’t tidy. It’s relational, contextual, slow. It’s often chaotic and unpredictable. There are unintended consequences, positive and negative.
I’m reminded, as I write this, of a positive one from an afternoon managing a landscape recovery project in South Wales. A group of volunteers became so absorbed in measuring Mesolithic human footprints, revealed by erosion on the soft sands of the Severn Foreshore, that none of us noticed the tide rapidly closing in. We retreated just in time. What stayed with me was the depth of discovery and delight that swept across the sands that day. The experience epitomised a felt connection to place, heritage and history that resists neat explanation, and certainly doesn’t lend itself to a pre- and post- survey. Nor do such tools capture the quieter changes: the way a one-off taster day becomes a lifelong hobby, or the way loneliness eases and friendships form. Some changes are sparks. Others are slow ripples that take years to travel.
And yet the systems that govern us often ask us to neatly record the “impact” of something like noticing the light on a leaf, as if the kindling of wonder that might last a lifetime could be logged by… next Tuesday. All while we try to let people fully immerse themselves in the experiences they came to us for, such as deeply restorative time in nature, only to sheepishly interrupt them with a clipboard and pen to ask whether their newfound kinship with Mother Earth felt more like a six or a ten.
Because evaluation is often not experienced as learning. It’s experienced as compliance. A hoop to jump through. A necessary evil to keep the system running.
This isn’t because anyone is trying to do harm. Most funders and charities are full of good people doing their best. And I’m not the first person to notice this – many deeply committed and infinitely wiser people have been pointing this out for years. However, curiosity about what’s really going on here has quietly consumed the last year of my life. I’ve read, listened, talked, argued gently and less gently with myself (and others), and tried to put words to what I’ve long sensed but struggled to articulate.
And by writing about my experiences past, present and future, I hope to help normalise what still feels like a relatively fringe critique of the models that govern many of the systems shaping charity and public service work. I want to stay with the discomfort of it, and make space for more open questioning about the mental models we’ve come to take for granted.
In other words, I’ll be digging into why impact so often feels like the wrong answer to the right question. To do this, I’m planning to experiment with alternative ways of thinking about learning in my personal and professional life over the coming months, drawing on Human Learning Systems, complexity science, place-based working, and participatory and creative approaches to research. My aim is to nudge more of the focus away from proving impact and further towards creating the conditions for good learning and systems change.
I’d be keen to hear from others exploring similar ground.