
Atul’s work sits at the intersection of data, systems, and lived realities. Having worked across climate, education, food systems, and child protection, he brings a steady reminder to impact work: evidence is never neutral, and outcomes don’t always tell the full story.
In this Team Speak, Atul reflects on how his understanding of impact has evolved, from tracking project results to asking harder questions about durability, context, and whose lives are truly changing.
Read on for a thoughtful take on why rigour needs context, and why learning only matters if it lasts beyond the project cycle.
Atul (A): Working across sectors like education, weather and climate resilience, food systems, and child protection has made me very conscious that evidence is never neutral or one-size-fits-all. At Athena, this diversity pushes me to design MEL systems and interventions that are theory-driven but flexible, combining quantitative rigour with participatory and qualitative approaches. I pay close attention to how systems, power dynamics, and institutional incentives differ across contexts, and I focus learning not just on "what worked" but on "why", for whom, and under what conditions. This helps ensure evidence is credible for donors, useful for implementers and policymakers, and respectful of the realities faced by beneficiaries and underserved communities.
A: Yes – this was very clear during my work evaluating child protection and anti-trafficking programmes in India. Quantitative indicators showed progress in prosecution and service delivery, but qualitative fieldwork across districts revealed that fear, social stigma, and gaps in institutional trust strongly shaped outcomes for survivors. Without rigorous qualitative methods and participatory engagement with communities, the evaluation would have overstated impact. That experience reinforced for me that methodological rigour must include contextual and ethical rigour, especially when working with vulnerable populations. Nor is this learning limited to child protection: it has surfaced again in my work on education in emergencies, food systems, and gender and disability inclusion. It has made me a strong believer in context-driven, mixed-methods evaluations.
A: Yes, significantly. Earlier in my career, I viewed impact largely in terms of measurable outcomes within a project lifecycle, often focused on what the project or implementing team had achieved. Over time, especially through systems and sustainability evaluations in climate, education, child protection, and food systems, I have come to see impact as the durability of change within institutions, markets, and behaviours beyond programme funding. Above all, I now see impact as sustained change in people's lives, rather than as achievements collected by a development actor. This shift was prompted by watching strong short-term results fade once projects ended, and by seeing how the achievements of development actors sometimes fail to translate into clear long-term benefits for people. It led me to focus more on systems-change pathways, programme sustainability, and learning mechanisms that capture whether interventions are truly changing how systems function over time, and whether development efforts are translating into real-world improvements in people's lives.