I finished the book weeks ago. When I closed Dean Karlan and Jacob Appel’s “More Than Good Intentions: How a New Economics Is Helping to Solve ...,” I had many notes and reactions jotted down but they hadn’t yet gelled. Something just didn’t sit quite right with me and I was having trouble articulating it.

Of course that’s related to intuition and inspiration, which can’t be measured…but I digress.

After my Friday “tweet debate” with @poverty_action and after reading many other bloggers’ reactions to the book (see related post), I realized that Karlan and Appel have heard and refuted it all by now. As seasoned and skilled randomistas, they have ready responses in support of incorporating randomized control trials (RCTs) into more aid projects.

And this is not something I’m necessarily against. In a development discourse that is still ruled by economic academics, I respect Innovations for Poverty Action’s work, which attempts to bridge this knowledge base with aid practitioners’ experience. In fact, Karlan, Appel and I agree on many aspects of their main arguments. We all agree that aid can be more effective and that well-formed questions and well-executed, applied research can offer many relevant clues about this. We all want to see deeper thinking behind the doing.

Where I think we differ is on some fundamental beliefs about what prevents this and what ails the aid industry overall. Is it a lack of information about “what works”? Or is it a lack of respect for local initiatives and understanding about complex power dynamics that impede authentic relationships among development partners? And if it’s the latter, are RCTs just a band-aid on a deeper issue?

I am, like many others, worried about the implications of donors moving towards making RCTs yet another conditionality of aid, more food to satisfy their seemingly insatiable appetite for evidence. (See my related post providing "how matters" advice for donors on RCTs.) As someone working within the extensive web of local organizations and grassroots movements in the developing world, I find this especially troubling for nascent, non-“formal” and under-resourced organizations that are already marginalized from the aid system.

Whether RCTs gain momentum as just the latest fad in aid, or whether they become a part of accepted practice, I am also afraid that “unsubstantiated claims” such as the example below, which I recently read in a report, will now be considered invalid (and un-fundable), rather than be probed for clearer understanding and viewed as an opportunity for learning.

Example of a statement of long-term impact of a program: “Most notable is the increase in the enrollment rates of girls in school in areas where Org X was focused on girls’ education. In the District of X, enrollment of girls went from 43% to 46% of total enrollment from 2002-3 to 2008-9. The overall graduation rate from primary schools in District X grew from 45% in 2002-3 to 79% in 2008-9.”

Anyone reading this can ask the obvious questions about comparison and attribution. Of course many factors may lie behind these increases, but this report was on a grant of less than US$20,000, a relatively small amount when you consider the scale of most development projects. Let’s always consider the appropriate cost and complexity of measurement, given the size and scope of the program.

Proportional expectations for the applicability of RCTs, as well as the potential consequences of poorly-done RCTs for those being studied, are also important, especially when people are in the process of organizing at the local level. Rather than treating this as an afterthought, let’s talk simultaneously about how local partner organizations can become drivers of the use of RCTs, rather than just being consulted or included in them. As a commenter on Owen Barder’s blog shared, “Great tools, we economists undoubtedly do have. In studying development issues, they are often used unhelpfully due to hubris and a shocking level of comfort with ignorance about the phenomenon being studied.”

Despite behavioral economists’ so-called acceptance of the rationality of the poor’s decision-making, phrases in the book’s anecdotes like “the bizarre thing was that Oti didn’t seem to mind wasting his own time,” “he might have spent his last twenty rupees on [flower garlands],” “people showed they had both the will and desire to save,” “people were learning” (as if these were a surprise), and, most striking, “were [the research subjects] just thickheaded?” reveal a subtle, underlying, and perhaps unexamined judgment, if not contempt, to which I am admittedly very sensitive.

If an assumption is operating that poor people don’t know what’s good for them, then the flip side of this assumption is that someone else must. As Tom Murphy comments on Bottom Up Thinking, “In development, can’t we say that the batch of behavioral economists are exercising some amount of paternalism? They are using ‘nudges’ to encourage behaviors but that inherently comes from a place of knowing.” In my career in the aid sector, I’ve learned the hard way to become comfortable with stepping away from the role of “expert.” In fact, much of my work is focused on encouraging aid workers, donors, and international do-gooders to do the same.

Sixty years of development aid hasn’t reduced poverty using existing methods, yes. But sixty years of development aid has also squashed local initiatives by not giving due attention to how that aid (and the accompanying monitoring, surveys, etc.) makes people feel, and this, I believe, is perhaps one of our biggest challenges in making aid more effective. The prevalent, yet not often exposed, negative attitudes, behaviors, and perceptions towards local people and organizations in the aid world are under-reported, insufficiently documented, and poorly studied. Consider: how would programs change if the curiosity and energy we spend conducting RCTs were equally invested in well-facilitated listening exercises, in which we had to learn about people’s experiences of being on the receiving side of aid?

Let’s not forget our Sen. Freedom, power and poverty are inextricably intertwined, and the mechanics of economic transformation are just a part of development. In all of the discussions of RCTs and their usefulness, we cannot escape the reality of power relationships within the aid system and the lack of humility that continues to plague us.

Undoubtedly, soundly-interpreted data provides important new perspectives for us all. There remains, however, quite a lot we cannot know.

And I, for one, am okay with that.

***

This post originally appeared at: http://www.how-matters.org/2011/05/24/rcts-band-aid-on-deeper-issue/

See also related how-matters.org posts, RCTs: Much to be said and RCTs: "how matters" advice for donors.

***

Related Posts

Got ‘Em: An Evaluation Story

Trying to Quit

Did I fund Organization X?

More on Why ‘How Matters’

How to build strong relationships with grassroots organizations (Pa...


Comment by Jennifer Lentfer on May 31, 2011 at 1:39pm
Head, heart & hands - the balance we're all striving for. Gaston, you might be interested in the follow-up piece I wrote, "RCTs: some 'how matters' advice for donors."
Comment by Gaston on May 31, 2011 at 6:38am

haha, that last comment made me laugh, but certainly touches on some truth. I remember when I was facilitating a community visit in Guyana last year with my colleague Usa. Many people joined from NGOs, UN, governmental institutions etc. At the end of the visit, she said one thing: "we often forget that the most important thing is how the community feels about themselves after a visit. Are they feeling more confident and good about themselves?". It's not about how good WE feel after the visit. Actually, it is also about how we feel, but more through our personal transformation rooted in the learning from local response, not from the 'I made a difference' viewpoint.    


On the blog, I like the mix. I am both an economist and a development professional and sometimes it's challenging to mix the two. My experience is that as long I stay human and don't let my economic mind fully take over my heart (the risk of large scale RCTs), I am fine and they play nicely together. 

Comment by Jennifer Lentfer on May 31, 2011 at 5:16am

Thanks Laurence - RCTs are certainly getting more and more attention in the aid world. As the "Stuff Expat Aid Workers Like" satirical blog writes, "RCTs will enable expat aid workers to provide those savvy, evidence-based donors with the proof-positive needed in order to feel good about having 'made a difference'...And a happy donor is the very best kind in the whole world." You can read more perspectives on the trend at: http://www.how-matters.org/2011/05/24/rcts-and-aid-effectiveness-co...

Comment by Laurence Gilliot on May 31, 2011 at 4:58am

Hi Jennifer,


Thanks a lot for taking the time to explain the jargon ;-) I learned something today. 

I guess that we often underestimate the impact we have as 'researchers' on the object of research. We do not think of ourselves in the equation. For instance, if you don't give school uniforms to children but you do interview them about going or not going to school, you might already influence them :-)


I also think about my dear friend John-Pierre, who said "Life is not a programme, with inputs and outputs. We forgot the human touch." (Here is my favorite video of John-Pierre: http://aidscompetence.ning.com/video/keeping-in-touch-with-our)


Laurence

Comment by Jennifer Lentfer on May 30, 2011 at 2:25pm

Thanks for the question Laurence. Indeed that's an important definition to share.

A randomized controlled trial (RCT) is a type of scientific experiment, most commonly used in medical science, to test the safety or efficacy of drugs or other treatments. The key distinguishing feature of an RCT is that participants are randomly assigned to groups, which reduces bias because results can be compared against a control group. Conceptually, the process is like tossing a coin: some people get the drug or treatment, and some people don't. RCTs are considered the "gold standard" of research for rigorously uncovering and building evidence about what works and what does not.

In the aid world, for example (much simplified): researchers want to determine the biggest factor that impedes children attending school. To test whether it is the issue of uniforms, researchers would design an "experiment" in which half of the children currently not in school would receive uniforms and the other half would not, and then compare school attendance between the two groups.
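That simplified design can be sketched in a few lines of Python. To be clear, everything here is invented for illustration: the number of children, the baseline attendance rate, and the assumed effect of uniforms are assumptions, not data from any real study.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def run_uniform_rct(n_children=10000, base_rate=0.55, uniform_effect=0.10):
    """Toy RCT: randomly split children into a treatment group (uniforms)
    and a control group (no uniforms), then compare attendance rates.
    All parameters are made-up numbers for illustration only."""
    children = list(range(n_children))
    random.shuffle(children)                      # the "coin toss"
    half = n_children // 2
    treatment, control = children[:half], children[half:]

    def attends(in_treatment):
        # Attendance is simulated: uniforms add an assumed +10 points.
        p = base_rate + (uniform_effect if in_treatment else 0.0)
        return random.random() < p

    treat_rate = sum(attends(True) for _ in treatment) / len(treatment)
    control_rate = sum(attends(False) for _ in control) / len(control)
    return treat_rate, control_rate

treat_rate, control_rate = run_uniform_rct()
print(f"attendance with uniforms: {treat_rate:.1%}, without: {control_rate:.1%}")
```

Because assignment is random, any systematic difference between the two attendance rates can be attributed to the uniforms rather than to pre-existing differences between the groups — this is the comparison-and-attribution logic the post questions when it is applied retroactively to small grants.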

Comment by Laurence Gilliot on May 30, 2011 at 11:27am

Hi Jennifer,


Can you explain what Randomized Control Trials are? It seems like a very technical thing and I'm not familiar with the term. I guess that I'm not the only one... :-)


Thanks, Laurence

© 2024   Created by Rituu B. Nanda.