Over the past year, I led the design and implementation of a multi-country online survey. The experience was as revealing as it was instructive, not just in terms of the insights we generated, but also in what it taught us about data collection in a digitally fragmented, low-response, and fast-moving policy landscape.
As I reflect on the process, particularly after attending the recent Data User Conference 2025 (organized by the Ministry of Statistics and Programme Implementation, Government of India), one thing stands out: we are in urgent need of a shift in how we think about, produce, and use data.
Our online survey was structured around a carefully constructed multiple-choice questionnaire designed to support a broader research framework. The instrument combined ordinal, categorical, and ranking questions, refined through internal expert feedback.
Programming the survey posed its own set of questions: which platform would best accommodate our needs, and those of our respondents? After testing multiple platforms for usability, skip logic, and compatibility with question formats, we selected SurveyCTO. We chose it not only for its interface but also for our team's prior experience with the platform.
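For readers less familiar with the term, skip logic means a question is shown only when some condition over earlier answers holds; in SurveyCTO, forms are defined in XLSForm, where this is expressed through a relevant column. The snippet below is a minimal, hypothetical Python sketch of the idea, purely for illustration, and not SurveyCTO's actual API:

```python
from __future__ import annotations

from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

Answers = Dict[str, str]  # question id -> recorded answer


@dataclass
class Question:
    qid: str
    text: str
    # Predicate over earlier answers; the question is shown only when it holds.
    relevant: Optional[Callable[[Answers], bool]] = None


def administer(questions: List[Question], get_answer: Callable[[Question], str]) -> Answers:
    """Walk the instrument in order, skipping questions whose condition fails."""
    answers: Answers = {}
    for q in questions:
        if q.relevant is None or q.relevant(answers):
            answers[q.qid] = get_answer(q)
    return answers


# Example: ask about mobile display issues only if the survey was taken on mobile.
survey = [
    Question("device", "Did you take this survey on a mobile device? (yes/no)"),
    Question(
        "mobile_issues",
        "Did you experience any display issues on mobile? (yes/no)",
        relevant=lambda a: a.get("device") == "yes",
    ),
]
```

The same conditional structure, condition plus question, is what platforms like SurveyCTO let you declare in a spreadsheet rather than in code.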
Yet, no matter how sound the technical back-end, user experience and participation became the critical front-end challenges. We faced low response rates—despite targeted outreach, snowball sampling, and multiple reminders—and had to troubleshoot UX issues on mobile devices in real time. These practical frictions reflected a broader theme I later heard echoed at the Data User Conference: data systems today are only as strong as their interface with reality.
What resonated most during the conference was the stark disconnect between what data users need and what current data systems deliver. In one panel, private sector analysts described how, in the absence of quarterly consumption data in India, they were “shooting in the dark,” relying on proxy indicators such as vehicle sales and fast-moving consumer goods (FMCG) trends to understand economic behaviour. Calls were made for more real-time, mobile-compatible, and lower-cost data collection methods, particularly in the wake of India’s widespread digital adoption.
On the producer side, researchers shared the intense challenges of conducting face-to-face surveys: outdated sampling frames (still based on Census 2011), field investigator attrition, respondent fatigue, and data inconsistencies across large and diverse geographies. What stayed with me was their appeal for standardization and shared infrastructure, including a unified, regularly updated sampling frame, common tools, and clearer validation benchmarks.
In this context, the reforms to the Household Consumption and Expenditure Survey (HCES) 2022–24 feel nothing short of transformational. The use of separate survey instruments for food, consumables, and durables to reduce respondent fatigue, along with multiple visits—where each household was visited three times over three months instead of just once—demonstrates a thoughtful shift in survey design. Combined with the adoption of Computer-Assisted Personal Interviewing (CAPI), these changes reflect a deep understanding of the trade-offs between respondent burden, data quality, and operational feasibility.
Equally important were findings that should reshape how we think about online surveys. For instance, the sequence in which questionnaires were administered affected reported consumption, with respondents reporting higher expenditure when asked about consumables first. This speaks directly to question order effects, a nuance we rarely account for in rapid online survey deployments.
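Where the research design allows it, one practical response is counterbalancing: randomize the order of question modules across respondents so that order effects average out and can be estimated afterwards. Here is a minimal sketch, with hypothetical module names echoing the HCES food, consumables, and durables split; this illustrates the general technique, not HCES's or our own procedure:

```python
import random

# Hypothetical module names, echoing HCES's food / consumables / durables split.
MODULES = ["food", "consumables", "durables"]


def assign_module_order(respondent_id: int, seed: str = "survey-2025") -> list:
    """Deterministically shuffle module order for one respondent.

    Seeding on the respondent id keeps the assignment reproducible, so the
    realized order can be stored alongside the responses and used later as
    a covariate when estimating order effects.
    """
    rng = random.Random(f"{seed}:{respondent_id}")
    order = MODULES[:]
    rng.shuffle(order)
    return order


# Example: three respondents receive counterbalanced module orders.
for rid in range(3):
    print(rid, assign_module_order(rid))
```

Even where full randomization is impractical, rotating among a few fixed orders makes the order effect measurable rather than invisible.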
HCES also reported low non-response rates and a high degree of reliability, even when the individual respondent changed between visits. This challenges the long-held assumption that respondent continuity is essential to data consistency.
Our online survey ultimately served its purpose, not only in collecting data but also in validating and triangulating insights from secondary research and stakeholder interviews. But this experience, combined with what I heard at the Data User Conference, raises bigger questions about how we produce and use data.
In a world awash with information, good data is less about volume and more about intentionality. As a researcher navigating both digital surveys and legacy data systems, I’m convinced that the future lies not in choosing one over the other, but in building bridges between them.
Jyoti Nayak is an Associate Consultant with the Monitoring, Evaluation, Research, and Learning (MERL) team at Athena Infonomics. Her interests lie in socio-economic research within the development and impact space, with technical expertise spanning both qualitative and quantitative research methodologies, proposal development, and stakeholder engagement.
She has co-facilitated participatory workshops, conducted key informant interviews, and led in-depth consultations with government and non-government actors across multiple countries in South Asia to inform strategic decision-making. Jyoti brings a strong blend of data analysis and field-based insight to her work. She holds a Master’s degree in Economics from the University of Mumbai.