Non-response bias occurs in statistical surveys when the answers of respondents differ from the potential answers of those who did not answer. It may arise from several factors, as outlined in Deming (1990).
If one polls a sample of 1000 managers in a field about their workload, the managers with a high workload may not answer the survey because they do not have enough time to do so, while those with a low workload may decline to respond for fear that their supervisors or colleagues will perceive them as unnecessary (either immediately, if the survey is non-anonymous, or in the future, should their anonymity be compromised). Non-response bias may therefore make the measured workload too low, too high, or, if the two biases happen to offset each other, "right for the wrong reasons." For a simple example of the first effect, consider a survey that asks, "Agree or disagree: I have enough time in my day to complete a survey."
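A minimal simulation sketch can make this mechanism concrete. The population size, workload distribution, and response model below are illustrative assumptions, not figures from any study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 1,000 managers: true weekly workload in hours.
workload = rng.normal(50, 10, size=1_000)

# Assumed response model: the busiest managers are least likely to reply,
# so the probability of responding falls as workload rises.
response_prob = np.clip(1.2 - 0.012 * workload, 0.05, 0.95)
responded = rng.random(1_000) < response_prob

print(f"True mean workload:     {workload.mean():.1f} h")
print(f"Mean among respondents: {workload[responded].mean():.1f} h")
# The respondent mean understates the true mean because high-workload
# managers self-select out of the sample.
```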
In the 1936 U.S. presidential election, The Literary Digest mailed out 10 million questionnaires, of which 2.3 million were returned. Based on these returns, it predicted that Republican Alf Landon would win with 370 of the 531 electoral votes; he actually received 8. Research published in 1976 and 1988 concluded that non-response bias was the primary source of this error, although the Digest's sampling frame (drawn largely from automobile registrations and telephone directories) also differed markedly from the electorate as a whole.
There are different ways to test for non-response bias. A common technique compares the first and fourth quartiles of responses, i.e. the earliest and latest respondents, for differences in demographics and key constructs, on the assumption that late respondents resemble those who never responded at all. In e-mail surveys, some values (e.g., age or branch of the firm) are already known for all potential participants and can be compared with the values that prevail in the subgroup of those who answered. If there is no significant difference, this is an indicator that non-response bias may be absent.
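The following sketch illustrates both checks on invented data: a comparison of a frame variable (age) between respondents and non-respondents, and a comparison of early versus late respondents on a key construct. All names and numbers are hypothetical, and Welch's t-test is just one reasonable choice of comparison:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical frame: age is known for all 1,000 invitees; about 40% respond.
age = rng.normal(42, 8, size=1_000)
responded = rng.random(1_000) < 0.4
n_resp = int(responded.sum())

# Frame comparison: does a known variable differ between respondents
# and non-respondents?
t, p = stats.ttest_ind(age[responded], age[~responded], equal_var=False)
print(f"age difference: t = {t:.2f}, p = {p:.3f}")

# Wave analysis: compare a key construct (e.g., a Likert-scale workload item)
# between the first and fourth quartiles of responses, ordered by arrival.
arrival = rng.random(n_resp)                    # hypothetical response times
construct = rng.normal(3.5, 1.0, size=n_resp)   # hypothetical construct scores
order = np.argsort(arrival)
q = n_resp // 4
early, late = construct[order[:q]], construct[order[-q:]]
t, p = stats.ttest_ind(early, late, equal_var=False)
print(f"early vs late: t = {t:.2f}, p = {p:.3f}")
# A non-significant difference is consistent with, but does not prove,
# the absence of non-response bias.
```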
In e-mail surveys, those who did not answer can also be systematically phoned and asked a small number of the survey questions. If their answers do not differ significantly from those of the original respondents, there might be no non-response bias. This technique is sometimes called non-response follow-up.
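As a sketch, a chi-square test of independence is one way to compare the phoned non-respondents with the original respondents on a single categorical question; the counts below are invented for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical answers to one yes/no question: 240 of 400 original
# respondents said "yes", versus 28 of 50 phoned non-respondents.
survey_yes, survey_n = 240, 400
followup_yes, followup_n = 28, 50

table = np.array([
    [survey_yes, survey_n - survey_yes],
    [followup_yes, followup_n - followup_yes],
])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A non-significant p-value suggests the phoned non-respondents answer
# like the original respondents, i.e. little evidence of non-response bias.
```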