ASC attended the 2016 Conference on Test Security (COTS), held October 18-20 in Cedar Rapids, IA, and graciously hosted by Pearson. The conference brings together thought leaders on all aspects of test security, including statistical detection of test fraud, management of test centers, candidate agreements, investigations, and legal implications. ASC was fortunate to win three presentation spots. Please check out the abstracts below. If you are interested in learning more, please get in touch!
SIFT: Software for Investigating Fraud in Testing
Nathan Thompson & Terry Ausman
SIFT is a software program specifically designed to bring data forensics to more practitioners. Widespread application of data forensics, like other advanced psychometric topics, is somewhat limited when an organization’s only options are to hire outside consultants or attempt to write code themselves. SIFT enables organizations with smaller budgets to apply some data forensics by automating the calculation of complex indices as well as simpler yet important statistics, in a user-friendly interface.
The most complex portion is a set of 10 collusion indices (more in development) from which the user can choose. SIFT also provides functionality for response time analysis, including the Response Time Effort index (Wise & Kong). More common analyses include classical item statistics, mean test times, score gains, and pass rates. All indices are also rolled up into two nested levels of groups (for example, school and district, or country and city) to facilitate identification of locations with issues.
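To give a flavor of the response time analysis, here is a rough sketch of the Wise & Kong Response Time Effort idea: the proportion of items an examinee answered with "solution behavior," meaning a response time at or above an item-specific rapid-guessing threshold. This is an illustrative simplification, not SIFT's actual implementation; the function name and the assumption that per-item thresholds are already known are ours.

```python
def response_time_effort(times, thresholds):
    """Sketch of the Response Time Effort (RTE) index (Wise & Kong).

    times: one examinee's response times in seconds, item by item.
    thresholds: per-item rapid-guessing thresholds in seconds (assumed
        to be determined in advance); a response at or above the
        threshold counts as solution behavior rather than rapid guessing.
    Returns the proportion of items answered with solution behavior (0 to 1);
    low values flag examinees who were likely not giving real effort.
    """
    if len(times) != len(thresholds):
        raise ValueError("times and thresholds must align item-by-item")
    solution = sum(1 for t, thr in zip(times, thresholds) if t >= thr)
    return solution / len(times)
```

For example, an examinee who spent 2, 15, 30, and 1 seconds on four items with a uniform 5-second threshold would get an RTE of 0.5, suggesting rapid guessing on half the test.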
All output is provided in spreadsheets for easy viewing, manipulation, and secondary analysis. This allows, for example, a small certification organization to obtain all of this output in only a few hours of work, and quickly investigate locations before a test is further compromised.
Statistical Detection: Where Do I Start?
Nathan Thompson & Terry Ausman
How can statistical detection of test fraud, and test security practice in general, be better directed? This presentation will begin by organizing the various types of analysis into a framework, aligning each with the hypothesis it is intended to test. We will then show how this framework can be used to direct detection efforts, and provide some real experience by applying it to real data sets from K-12 education and professional certification.
In the first section, we will start by identifying the common hypotheses to be tested, including: examinee copying, brain dump makers, brain dump takers, proctor/teacher involvement, low motivation, and compromised locations. Next, we match analyses to those hypotheses; for example, collusion indices are designed to detect copying but can also help find brain dump takers. We also provide deeper explanations of the specific analyses.
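As a toy illustration of the intuition behind collusion indices (not one of the specific indices discussed in the presentation), one building block is a count of items on which two examinees chose the same wrong option. Shared errors are far more suspicious than shared correct answers; real indices go further and model how many such matches would be expected by chance. The function name and data layout below are our own assumptions.

```python
def identical_incorrect(resp_a, resp_b, key):
    """Toy collusion statistic: the number of items on which two
    examinees selected the same *incorrect* option.

    resp_a, resp_b: the two examinees' responses (e.g., option letters).
    key: the answer key, aligned item-by-item with the responses.
    A count that is high relative to chance expectation can flag
    possible copying; operational indices model that chance formally.
    """
    return sum(
        1
        for a, b, correct in zip(resp_a, resp_b, key)
        if a == b and a != correct
    )
```

For instance, with key `["A", "B", "C", "D"]`, responses `["A", "C", "C", "A"]` and `["B", "C", "C", "A"]` share two identical incorrect answers (items 2 and 4); item 3 matches but is correct, so it does not count.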
In the second section, we apply this framework to the analysis of real data sets. This will show how the framework can be useful in directing data forensics work rather than aimlessly poking around. It will also demonstrate usage of the statistical analyses, facilitating learning of the approaches as well as driving discussions of practical issues faced by attendees. The final portion of the presentation will then be just such a discussion.
Statistical Methods of Detecting Test Fraud: Can We Get More Practitioners on Board?
Statistical methods of detecting test fraud have been around since the 1970s, but are still not in general use by most practitioners, instead being limited to a few specialists. Similarly, best practices in test security are still not commonly used except at large organizations with big stakes in play. First, we will discuss the hurdles that can prevent more professionals from learning about the topic, or prevent knowledgeable professionals from applying best practices. Next, we will discuss some potential solutions to each of those hurdles. The goal is to increase the validity of scores being reported throughout the industry by elevating the profession.