Author: Marie Mallet

This blog post is part of a series on SMS surveying. The first part, ‘SMS Surveying: Design’, can be accessed here. The second part, ‘SMS Surveying: Logistics’, can be accessed here. 

In Sub-Saharan Africa (SSA), the transition from in-person to mobile surveys has been made possible by a spectacular increase in mobile phone penetration over the past decade. This rise in mobile phone access has created a great opportunity for remote, high-frequency data collection methods such as SMS surveys. However, implementing SMS surveys in SSA still faces important research challenges, such as the under-representation of people with low levels of education, the elderly, women and people unfamiliar with technology (Lau et al., 2019), as well as low response rates.

In this final blog post of the series on SMS surveys, I want to share some general thoughts and lessons that I learnt from implementing multiple rounds of SMS surveys in Tanzania, which can help overcome some of the challenges presented above.

    • Piloting and testing the survey before full-scale implementation: Researchers should always do extensive internal testing and carry out a small-scale pilot before launching the SMS survey to the full sample, as a poor display of the text on respondents’ screens and programming errors can prove hugely damaging to the data and to response rates. Internal testing with colleagues, followed by a small pilot with 10 to 20 respondents, is a low-cost investment that allows the team to test the tool’s functionalities and the follow-up protocols.


    • Shared phones: SMS surveys are self-administered questionnaires, so researchers have no control over who actually answers the survey on the phone. This can be a challenge for surveys that target a population with specific characteristics (e.g., a specific sex or age range), particularly in places where sharing a phone among several people is common, as in East Africa. Adding control questions to check eligibility and pre-populating the name of the targeted respondent in the survey can mitigate the risk of the wrong person answering. However, in a remote, self-completion setting, such control questions cannot fully guarantee that the targeted respondent is the one replying.
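As a minimal sketch of how such control questions might be used after data collection, the snippet below screens out replies whose answers are inconsistent with the targeted respondent’s profile. All field names, the profile structure, and the age tolerance are hypothetical illustrations, not a prescribed implementation:

```python
# Hypothetical post-collection screening: flag replies whose control-question
# answers do not match the targeted respondent's known profile, which may
# indicate that someone else replied on a shared phone.

def matches_target(target: dict, answers: dict) -> bool:
    """Return True if the control answers are consistent with the target respondent."""
    checks = [
        # Name confirmation question (case-insensitive comparison)
        answers.get("confirm_name", "").strip().lower()
            == target["name"].strip().lower(),
        # Sex must match exactly
        answers.get("sex") == target["sex"],
        # Allow a small tolerance on self-reported age
        abs(int(answers.get("age", -99)) - target["age"]) <= 2,
    ]
    return all(checks)

target = {"name": "Asha", "sex": "F", "age": 34}
print(matches_target(target, {"confirm_name": "Asha", "sex": "F", "age": 33}))
print(matches_target(target, {"confirm_name": "Juma", "sex": "M", "age": 40}))
```

In practice such checks only flag suspicious replies for follow-up; as noted above, they cannot prove who typed the answers.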


    • Self-completion and SMS questionnaire design: Because the SMS survey is self-administered, there is no interviewer to check the responses submitted. Programming automated SMS replies that flag invalid or unlikely answers can limit the submission of erroneous responses, but other issues, such as a respondent misunderstanding a question or entering data incorrectly, cannot be entirely prevented. The number and type of questions included should take into account that no interviewer is available to clarify potential misunderstandings or to provide visual aids. For example, in questions using a recall period, referring to a clearly defined time period makes the question easier to answer. In the example below, the second version is preferred because it refers to a specific month, minimising the risk of misinterpreting the time period:

1. In the past four weeks, did you or any household member have to eat some foods that you really did not want to eat because of a lack of resources?

2. In November, did you or any household member have to eat some foods that you really did not want to eat because of a lack of resources?
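To illustrate the automated flagging of invalid or unlikely answers mentioned above, here is a small sketch of the kind of validation logic an SMS platform could apply to an incoming reply before recording it. The function name, the numeric-answer format, and the re-prompt wording are all assumptions for illustration:

```python
# Hypothetical validation of an incoming SMS reply to a numeric question
# (e.g. "How many days in the past month...?"). Returns whether the answer
# is valid and the automated message to send back to the respondent.

def validate_reply(raw: str, valid_range=(0, 31)):
    """Check a numeric SMS reply; return (is_valid, message_to_send)."""
    text = raw.strip()
    if not text.isdigit():
        # Non-numeric reply: re-prompt with the expected format
        return False, "Sorry, please reply with a number only (e.g. 5)."
    value = int(text)
    lo, hi = valid_range
    if not lo <= value <= hi:
        # Numeric but implausible: re-prompt with the allowed range
        return False, f"Please reply with a number between {lo} and {hi}."
    return True, "Thank you, your answer has been recorded."

print(validate_reply("abc"))
print(validate_reply("45"))
print(validate_reply("7"))
```

Automated rules like this catch format errors and implausible values, but, as noted above, they cannot catch a respondent who misunderstood the question itself.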

    • Local context and population reached by the SMS survey: As with any survey, the local context should be taken into account, but when designing an SMS survey, particular attention should also be paid to the following factors: mobile phone ownership in the country, the type of phone used, the number of networks, the targeted population’s familiarity with SMS, and literacy rates. The selection bias inherent in SMS surveys is an important factor to examine before deciding between an SMS survey and an in-person survey.

Concluding thoughts:

Using SMS surveys in research is promising for collecting high-frequency data, for complementing other survey methods in longitudinal studies, and for collecting additional data at lower cost. However, as discussed in this three-part blog post series (part one available here, and part two available here), before deciding to use this remote approach, researchers need to (i) consider the benefits and limitations of SMS surveys, (ii) develop strategies for contacting respondents and maximising response rates according to the local context, and (iii) plan the data analysis ahead to address the potential biases of an SMS survey (low response rates, selection bias, etc.).

The rise of smartphones with internet access in the region will offer even more remote data collection possibilities, such as web surveys, voice calls and surveys recorded on mobile devices. These additional methods will enable researchers to collect other types of data remotely, such as qualitative data, from a broader population in Sub-Saharan Africa in the coming years.


References:

  • Charles Q. Lau, Ansie Lombaard, Melissa Baker, Joe Eyerman, Lisa Thalji. How Representative Are SMS Surveys in Africa? Experimental Evidence From Four Countries. International Journal of Public Opinion Research, Volume 31, Issue 2, Summer 2019, Pages 309–330.

Additional resources to go further:

  • Dabalen, Andrew; Etang, Alvin; Hoogeveen, Johannes; Mushi, Elvis; Schipper, Youdi; von Engelhardt, Johannes. 2016. Mobile Phone Panel Surveys in Developing Countries: A Practical Guide for Microdata Collection. Directions in Development–Poverty. Washington, DC: World Bank. License: CC BY 3.0 IGO.
  • Nina DePena Hoe, Heidi E. Grunwald. The Role of Automated SMS Text Messaging in Survey Research. Survey Practice, Volume 8, Issue 6, 2015.