July 2016 - Talking Data
At i-Link, we realise that delivering reliable data on time is of the utmost importance to our clients. For this reason, we don’t rely on automated systems to clean the data; instead, our Project Managers, all based in Sydney, conduct the checks manually. These checks, detailed below, are run during the pilot stage to gauge the expected rate of poor responses, and then again before a final data file is delivered.
- Speeders – This involves comparing each respondent’s completion time against the average completion time. Rather than applying a fixed formula, we do this manually so we can look further into the responses of potential speeders and determine whether they have made a legitimate attempt to answer the survey questions.
- Flat liners – This involves reviewing a list of statements rated on a scale; if 95% or more of the responses carry the same rating, the respondent is considered not to have made a legitimate attempt at the survey. The context of the question is also considered, since a flat-line response can sometimes be legitimate — for example, answering ‘don’t know’ across a series of statements the respondent may genuinely have no opinion on.
- Bad verbatim – This involves reading each response to the open-ended questions in the survey and determining whether it is a legitimate answer to the question. Again, the context in which the question sits is considered. For example, if a respondent claims to be an avid basketball fan but cannot name any basketball teams, their response would be treated as junk.
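To illustrate, the first two checks could be sketched as simple flagging rules. This is a hypothetical sketch, not i-Link's actual process: the function names, field choices, and thresholds (half the average completion time for speeders; the 95% same-rating cutoff mentioned above for flat liners) are assumptions for illustration, and flagged respondents would still be reviewed by hand rather than dropped automatically.

```python
def flag_speeder(duration_sec, avg_duration_sec, threshold=0.5):
    """Flag a respondent who finished in under half the average time.

    The 0.5 threshold is an assumed value; in the manual process
    described above, a flagged case prompts a closer look at the
    responses rather than automatic removal.
    """
    return duration_sec < threshold * avg_duration_sec


def flag_flat_liner(grid_ratings, cutoff=0.95, ignore=("don't know",)):
    """Flag a grid of scale ratings where 95% or more share one value.

    Ratings listed in `ignore` (e.g. a legitimate run of "don't know"
    answers) are excluded first, reflecting the context rule above.
    """
    rated = [r for r in grid_ratings if r not in ignore]
    if not rated:
        return False  # nothing left to judge once ignored answers are removed
    top_share = max(rated.count(v) for v in set(rated)) / len(rated)
    return top_share >= cutoff


# 19 of 20 ratings identical (95%) -> flagged for manual review
print(flag_flat_liner([4] * 19 + [2]))   # True
# Finished in 120s against a 600s average -> flagged as a speeder
print(flag_speeder(120, 600))            # True
```

A rule like this only shortlists cases; the judgment call — whether a fast finish or a uniform grid still reflects a genuine attempt — stays with the reviewer, as described above.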
You can ask Patrick or Scott at the address below about anything relating to data quality or i-Link's services in general!