Measuring Up

While our field correspondent reports her adventures from Port-au-Prince, I thought I’d chew on a thought experiment.  I just got back from a conference in DC that locked the top public health professionals and academics (and those of us who seek to learn from them) in a confined space, then set us free to ask serious questions about how we measure the impact of our work.  This isn’t new territory for C2C’s organizational development discussions, but now that we’re entering the practical phase of our work, the rigor of those discussions and their practical application matter a lot more.

Conventional wisdom has it that what goes in must come out.  Think of everything that’s gone into the C2C project as inputs, defined broadly as capital and labor; what, ultimately, will be the outcomes, and how, quantitatively, will we measure them?  This line of questioning has recently received a lot of attention; recall the chagrin when we all heard there were 10,000 NGOs in Haiti, a country that kept getting poorer.  How does that happen?  To be fair, there are SO many variables, and success never looks the same twice. Still, there’s no doubt that there’s inefficiency in non-profit operations, and we need a way of measuring it so that we might do better.

How do we hold ourselves accountable?  The relative value of money is universally agreed upon (allow me that one generalization), so ideally we would monetize health outcomes. But that’s tough to do.  Follow a dollar invested in a family planning intervention, which connects a woman with the knowledge and resources to space her pregnancies or have only as many children as she and her partner can afford.  Maybe now that she’s a bit more in control of her health and productivity, she gets a job, which lets us assign a dollar amount to the consequent addition to her family’s wealth.  You could follow it even further: the healthy, not-overstretched mom is able to send her kids to school, and because of their educations they land solid jobs and move above the breadline.  But because you don’t have an accurate picture of the counterfactual – the “what would have happened” if the family planning NGO hadn’t intervened – you can’t measure a delta in wealth, or the dollar outcome of the NGO’s dollar input. These are, no doubt, totally over-simplified hypotheticals, but the logic holds.  The public health community hasn’t yet settled on a universally agreed set of definitions or metrics, and I think that’s something we’ll see a lot more discussion of in the next few years: holding ourselves accountable for efficiency.
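The counterfactual problem above can be made concrete with a toy calculation. Every number and name here is invented for illustration – the point is only that the “return” on the NGO’s dollar is defined relative to an outcome we never directly observe, so the answer swings with whatever estimate of the counterfactual we adopt.

```python
# Toy sketch of the counterfactual problem described above.
# The observed outcome (the family's wealth after the intervention) is
# measurable; the counterfactual (their wealth had the NGO not intervened)
# is not, and must be estimated. All figures are hypothetical.

def dollar_return(observed_outcome, counterfactual_outcome, dollars_invested):
    """Dollar return per dollar invested, relative to an estimated counterfactual."""
    delta = observed_outcome - counterfactual_outcome
    return delta / dollars_invested

# Observed: family wealth five years after a $100 family-planning intervention.
observed = 1500.0

# Two plausible counterfactual estimates (e.g. from different comparison
# groups) give very different pictures of the same intervention:
low_estimate = 1400.0   # comparison families did nearly as well
high_estimate = 900.0   # comparison families fell far behind

print(dollar_return(observed, low_estimate, 100.0))   # 1.0 per dollar
print(dollar_return(observed, high_estimate, 100.0))  # 6.0 per dollar
```

Same observed outcome, same dollar input, and the measured “efficiency” differs six-fold – which is exactly why the choice of counterfactual, and an agreed set of metrics, matters so much.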

Another thought on measurement, while we’re on it: there was also a bit of talk at the conference about “cash on delivery” aid, or cash for improved outcomes.  The idea is for financiers of development projects to espouse a results-based financing model, which would necessarily raise the bar on outcome measurement.  If financiers pay for proven, improved outcomes rather than for talk of inputs and activities, they incentivize more innovative and results-oriented interventions, and we more closely align what we get out with what we put in – in other words, funding inextricably tied to accountability.  That’s something C2C takes very seriously, and it’s why we’re working so closely with MSH to evaluate the change in capacity and quality of care at Grace Children’s Hospital.  It will be a hybrid quantitative/qualitative evaluation, which is harder to measure, but we’ll learn to understand our outcomes in terms of an agreed-upon value system – if that’s dollars, then we monetize quality.
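The “cash on delivery” mechanism reduces to a simple rule: the funder pays a fixed rate per verified unit of improvement over an agreed baseline, and nothing for inputs or activities. A minimal sketch, with invented names and figures:

```python
# Toy sketch of results-based ("cash on delivery") financing.
# The funder and implementer agree on a baseline, an outcome measure, and a
# rate per unit of verified improvement. Payment depends only on results.
# The outcome, baseline, and rate below are all hypothetical.

def cod_payment(baseline, verified_outcome, rate_per_unit):
    """Pay only for verified improvement over the agreed baseline."""
    improvement = max(0.0, verified_outcome - baseline)
    return improvement * rate_per_unit

# e.g. $20 per additional child fully immunized, over a baseline of 400:
print(cod_payment(400, 475, 20.0))  # 1500.0 -- 75 verified units
print(cod_payment(400, 390, 20.0))  # 0.0 -- no payment below baseline
```

Note that the whole scheme leans on the verification step: the payment formula is trivial, but it only disciplines anyone if the outcome measure itself is credible – which is why results-based financing raises the bar on measurement.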

This is something I hope we can come back to on the blog. It fascinates me, and I offer it up as food for thought.  When we design health interventions – just as when we fund them – we must have an eye toward results.

This entry was posted by Allison Howard-Berry.