
Dr. Johanna Choumert Nkolo, Marie Mallett, and Linda Terenzi

How to ask sensitive questions is an important issue in the evolving field of survey-based research. EDI has recently conducted several surveys on sensitive topics, and this blog post shares a few insights for readers working on topics such as sexual behaviour, health habits, violence, illicit drug use, voting, attitudes in conflict-affected zones, and religion.

Challenges when asking sensitive questions in surveys

According to Tourangeau and Yan (2007, p. 860), three types of sensitive questions can be found in surveys:

  • Intrusive questions, i.e. questions that are “inappropriate in everyday conversation”
  • Questions posing a threat of disclosure, i.e. raising “concerns about the possible consequences of giving a truthful answer should the information become known to a third party”
  • Questions touching on social undesirability, i.e. where “the respondent admits he or she has violated a social norm”.

Collecting data on sensitive topics can be challenging in the field and raises concerns about data quality. The main problems encountered in the field are:

  • Respondents’ fear that the data will be shared with their family or with third parties: this fear can prevent them from disclosing information that might cause them trouble in the future. For questions relating to illegal actions in particular, respondents might not give honest answers for fear of the consequences.
  • A household member might prohibit another member (e.g. wife, daughter, son, etc.) from participating in the survey: for this reason, it is essential to ask for parental consent if the targeted respondent is a minor, and to inform other household members that an interview will take place.
  • Respondents’ fear of the consequences of revealing some information which may be socially undesirable. For example, admitting to drug use or homosexual feelings in a community where this is forbidden.
  • Social desirability bias: respondents might misreport if they perceive that, according to their culture, there are ‘right’ or ‘wrong’ answers. For example, respondents’ personal opinions on gender violence might not be accurately reported because they know their community holds a different social norm.
  • Respondents’ reactions to the content of some questions which are perceived as inappropriate or frustrating: respondents might start crying, laughing, refusing to answer, and/or overreacting to some questions.

On the other hand, problems may arise in data quality and data analysis. They relate to:

  • Non-response: the respondent might not be willing to disclose the information. It is always a good idea to include a “Refuse to answer” option for sensitive questions; however, interviewers need to be trained to encourage respondents to answer.
  • Accuracy of the data: given the difficulties mentioned above, respondents’ misreporting might affect the data quality.
  • Measurement errors: respondents’ misreporting leads to biased data.
  • Over-reporting of socially desirable behaviours.
  • Under-reporting of socially undesirable behaviours.

Note that, unfortunately, sensitivity is not an objective notion: many topics perceived as sensitive in some countries might not be perceived that way in others. Knowledge of the local context is therefore a must when dealing with surveys that include sensitive questions.

Insights from EDI experience

Over the past 14 years, EDI has developed useful tools and specific procedures for asking sensitive questions in surveys, in order to maximize the quality of the data collected in the field.

First, questionnaire design is crucial when asking sensitive questions. Careful wording of the questions is needed to avoid offending respondents. Similarly, accurate and appropriate wording of the answer list may encourage the respondent to disclose true information when the answers are read out by the interviewer.

As for every survey, interviewers should clearly read out the clause on confidentiality in the consent reading at the start of the interview and should restate this clause to respondents for all sensitive sections.

Regarding question order, it is not recommended to start or end the questionnaire with sensitive questions, as this can make the respondent suspicious. Finally, the country context should also be considered, as some questions cannot be asked in some communities. Employing local partners and local staff is therefore highly valuable during questionnaire design.

Second, the quality of data for sensitive questions cannot rely on good questionnaire design alone; the interviewer also plays a key role in the data collected. During interviewer training, the importance of building a relationship of trust with the respondent should be made clear: an encouraging manner from the interviewer can help reduce non-response and misreported answers. The interviewer must also ensure complete privacy during the interview to create a safe environment for the respondent. In EDI’s experience, most respondents answer sensitive questions sincerely when the questions are asked in a positive, respectful, and non-judgmental way.

Finally, sensitive surveys require some specific project arrangements, such as sex-matched interviews (male interviewers interviewing only male respondents, and female interviewers only female respondents). Depending on the degree of sensitivity, further planning may be necessary, such as separate sub-clusters by sex (one female sub-cluster and one male sub-cluster). It is also important during fieldwork that interviewers have access to special support from supervisors or external counsellors, as the well-being of the field teams has a big impact on the quality of the data collected.

Methods and tools to reduce data distortions when asking sensitive questions

In addition to a well-designed questionnaire, specific protocols, and adapted training, EDI has also developed innovative tools for sensitive questionnaires in surveybe, our CAPI software.

  • Audio or video recording: sensitive questions can be pre-recorded and played to the respondent through earphones, who then self-completes the answers.
  • Randomized response lists: the order of the answers may affect the probability that an answer is selected; with surveybe it is possible to randomise the response list in each interview.
  • Use of images in response lists: images (e.g. smiley faces) can be used in surveybe to answer sensitive questions or to respond to a provocative image.
  • Sequential screens (to be publicly released in 2017): this recently developed surveybe function will allow the respondent to self-complete questions on a sequence of linear screens. The respondent answers the questions screen by screen, either in ‘strict’ mode, where they cannot move to the next question before answering the current one and cannot go back to previous screens, or in the default flexible mode, which permits both. The mode is chosen by the questionnaire designer. In either case, the data is collected securely: once the stand-alone section has been answered, the interviewer cannot access the screens containing the sensitive questions.
  • Further development of self-completion: the surveybe team is also working on extending self-completion sections for use outside ‘the cycle’ of the questionnaire, so that the respondent can complete screens of the questionnaire on his/her phone after the interviewer’s visit, with the data transmitted directly to EDI headquarters via a ‘submit’ button.
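To illustrate the randomized response list idea above, the Python sketch below shuffles an answer list once per interview. This is our own illustration of the general technique, not surveybe's implementation: the function name and the choice of seeding by interview ID are hypothetical.

```python
import random

def randomise_options(options, interview_id):
    """Return the answer list in a random order for one interview.

    Seeding with the interview ID makes the order reproducible, so the
    position each option was displayed in can be recovered at analysis
    time when checking for order effects.
    """
    rng = random.Random(interview_id)
    shuffled = options[:]  # copy, so the master list keeps its order
    rng.shuffle(shuffled)
    return shuffled

options = ["Strongly agree", "Agree", "Disagree", "Strongly disagree"]

# Same interview -> same displayed order; different interviews may differ.
assert randomise_options(options, interview_id=101) == \
       randomise_options(options, interview_id=101)
assert sorted(randomise_options(options, interview_id=101)) == sorted(options)
```

Recording the displayed order alongside the selected answer is what makes it possible to test later whether answer position influenced responses.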

What does the academic literature say on sensitive surveys?

The academic literature on sensitive surveys is vast and provides pragmatic guidelines. Tourangeau and Yan (2007) offer an interesting review of the research, specifically of studies addressing sensitivity and social desirability bias, the consequences of asking sensitive questions, and the factors affecting reporting on sensitive topics. Their summary of the literature also covers useful tools for coping with the challenges described above, such as self-administration and the randomized response technique.

In their blog post, Shaver and Zhou (2015) share some excellent insights on how to conduct surveys in war zones and link to the website of Prof. Kosuke Imai, which contains several publications and software packages related to sensitive survey questions. Several methods are proposed in the literature, namely list experiments (also known as the item count technique), endorsement experiments, and randomized response techniques.

List Experiments are conducted as follows (Blair and Imai, 2012, p.48):

“The basic idea of list experiments is best illustrated through an example. In the 1991 National Race and Politics Survey, a group of political scientists conducted the first list experiment in the discipline (Sniderman, Tetlock, and Piazza 1992). In order to measure racial prejudice, the investigators randomly divided the sample of respondents into treatment and control groups and asked the following question for the control group:

Now I’m going to read you three things that sometimes make people angry or upset. After I read all three, just tell me HOW MANY of them upset you. (I don’t want to know which ones, just how many.)

 

(1) the federal government increasing the tax on gasoline

(2) professional athletes getting million-dollar-plus salaries

(3) large corporations polluting the environment

 

How many, if any, of these things upset you?

For the treatment group, they asked an identical question except that a sensitive item concerning racial prejudice was appended to the list,

Now I’m going to read you four things that sometimes make people angry or upset. After I read all four, just tell me HOW MANY of them upset you. (I don’t want to know which ones, just how many.)

 

(1) the federal government increasing the tax on gasoline

(2) professional athletes getting million-dollar-plus salaries

(3) large corporations polluting the environment

(4) a black family moving next door to you

 

How many, if any, of these things upset you?

The premise of list experiments is that if a sensitive question is asked in this indirect fashion, respondents may be more willing to offer a truthful response even when social norms encourage them to answer the question in a certain way. In the example at hand, list experiments may allow survey researchers to elicit truthful answers from respondents who do not wish to have a black family as a neighbor but are aware of the commonly held equality norm that blacks should not be discriminated against based on their ethnicity. The methodological challenge, on the other hand, is how to efficiently recover truthful responses to the sensitive item from aggregated answers in response to indirect questioning.”
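The standard estimator for a list experiment is simply the difference in mean item counts between the treatment group (which saw J + 1 items) and the control group (which saw J items) (Blair and Imai, 2012). A minimal sketch in Python, with made-up counts for illustration:

```python
def list_experiment_estimate(control_counts, treatment_counts):
    """Difference-in-means estimator for a list experiment.

    Estimated prevalence of the sensitive item = mean count in the
    treatment group (J + 1 items) minus mean count in the control
    group (J items).
    """
    mean_control = sum(control_counts) / len(control_counts)
    mean_treatment = sum(treatment_counts) / len(treatment_counts)
    return mean_treatment - mean_control

# Hypothetical answers: "how many items upset you?"
control = [1, 2, 0, 3, 1, 2, 1, 2]    # 3-item list
treatment = [2, 2, 1, 3, 2, 3, 1, 2]  # 4-item list (sensitive item added)

print(list_experiment_estimate(control, treatment))  # → 0.5
```

Here the estimate of 0.5 would be read as half the sample being upset by the sensitive item; Blair and Imai (2012) discuss the assumptions (no design effects, no liars) required for this reading and more efficient multivariate estimators.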

Endorsement experiments consist of the following actions (Bullock et al., 2011, p. 363): “In endorsement experiments, randomly selected respondents are asked to express their opinion about several policies endorsed by a socially sensitive actor of interest. These responses are then contrasted with those from a control group that receives no endorsement. If the endorsement by a political actor induces more support for policies, then this is taken as evidence for the existence of support for that actor. The main advantage of endorsement experiments is that indirect questioning may increase truthful responses, improve response rates, and enhance safety for enumerators and respondents in the case of extremely sensitive topics. The drawback, on the other hand, is that it can only provide an indirect measure of support.”
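The comparison behind an endorsement experiment can be sketched in the same way: mean policy support with the endorsement versus without it. The ratings below are invented for illustration only.

```python
def endorsement_effect(control_ratings, endorsed_ratings):
    """Mean support for a policy when endorsed by the actor of interest,
    minus mean support with no endorsement mentioned. A positive
    difference is taken as evidence of support for the endorser."""
    return (sum(endorsed_ratings) / len(endorsed_ratings)
            - sum(control_ratings) / len(control_ratings))

# Hypothetical 1-5 support ratings for the same policy.
control = [4, 3, 5, 4, 4, 3]   # policy presented with no endorsement
endorsed = [3, 2, 4, 3, 3, 2]  # policy presented as endorsed by the actor

effect = endorsement_effect(control, endorsed)
print(effect)  # negative here: the endorsement lowers support
```

In this toy example the endorsement reduces average support by one point, which would be read as evidence of opposition to the endorsing actor; Bullock et al. (2011) develop the statistical models needed to draw such conclusions properly.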

The randomized response technique asks respondents to use a randomization device (a coin or a die, for example). The outcome of the random draw determines, for instance, which question is asked or whether the respondent gives a predetermined answer, and the interviewer never observes the outcome of the randomization.
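One common variant is the forced-response design (see Blair, Imai and Zhou, 2015). Each respondent privately rolls a die: on a 1 they answer “yes” regardless of the truth, on a 6 they answer “no” regardless, and on 2-5 they answer truthfully. Because the probabilities of the die are known, the true prevalence can be backed out from the share of “yes” answers. The sketch and counts below are illustrative, not from a real survey:

```python
def forced_response_estimate(n_yes, n_total,
                             p_forced_yes=1 / 6, p_truthful=4 / 6):
    """Estimate the true prevalence of a sensitive trait under a
    forced-response design, where

        P(yes) = p_forced_yes + p_truthful * prevalence

    so that prevalence = (observed yes rate - p_forced_yes) / p_truthful.
    The defaults correspond to a fair die: roll 1 forces "yes" (1/6),
    roll 6 forces "no" (1/6), rolls 2-5 mean answer truthfully (4/6).
    """
    p_hat_yes = n_yes / n_total
    return (p_hat_yes - p_forced_yes) / p_truthful

# Suppose 300 of 900 respondents answered "yes".
print(forced_response_estimate(300, 900))  # → 0.25
```

The respondent's protection comes from the fact that any individual “yes” may have been forced by the die, so no single answer is incriminating, yet the aggregate prevalence (here 25%) is still identified.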

Overall, the literature offers promising ways to elicit truthful reports of sensitive behaviours, attitudes, and opinions. Combined with pertinent survey tools and software and appropriate field settings, there are numerous potential improvements to the way sensitive surveys are conducted.

 

Further reading

Blair G. and Imai K. (2012) Statistical analysis of list experiments. Political Analysis 20, 47–77.

Blair G., Imai K., Zhou Y.-Y. (2015) Design and Analysis of the Randomized Response Technique. Journal of the American Statistical Association 110, 1304–1319.

Bullock W., Imai K., Shapiro J.N. (2011) Statistical analysis of endorsement experiments: Measuring support for militant groups in Pakistan. Political Analysis 19, 363–384.

Friedman J. (2012) Being indirect sometimes gets closer to the truth: New work on indirect elicitation surveys. Development Impact World Bank blog.

Goldstein M. (2011) Measuring secrets. Development Impact World Bank blog.

Kramon E. and Weghorst K.R. (2012) Measuring Sensitive Attitudes in Developing Countries: Lessons from Implementing the List Experiment. Newsletter of the APSA Experimental Section 3, 14–24.

McKenzie D. (2013) Three new papers on measuring stuff that is difficult to measure. Development Impact World Bank blog.

Shaver A. and Zhou Y.-Y. (2015) How to make surveys in war zones better, and why this is important. The Washington Post, 7th January 2015.

Tourangeau R. and Yan T. (2007) Sensitive questions in surveys. Psychological Bulletin 133, 859–883.