ASC Presents at the Conference on Test Security

ASC attended the 2016 Conference on Test Security (COTS), held October 18-20 in Cedar Rapids, IA, and graciously hosted by Pearson. The conference brings together thought leaders on all aspects of test security, including statistical detection of test fraud, management of test centers, candidate agreements, investigations, and legal implications. ASC was lucky enough to win three presentation spots. Please check out the abstracts below, and if you are interested in learning more, please get in touch!


SIFT: Software for Investigating Fraud in Testing
Nathan Thompson & Terry Ausman

SIFT is a software program specifically designed to bring data forensics to more practitioners. Widespread application of data forensics, like other advanced psychometric topics, is somewhat limited when an organization's only options are to hire outside consultants or attempt to write code themselves. SIFT enables organizations with smaller budgets to apply data forensics by automating the calculation of complex indices, as well as simpler yet important statistics, in a user-friendly interface.

The most complex portion is a set of 10 collusion indices (more in development) from which the user can choose. SIFT also provides functionality for response time analysis, including the Response Time Effort index (Wise & Kong, 2005). More common analyses include classical item statistics, mean test times, score gains, and pass rates. All indices are also rolled up into two nested levels of groups (for example, school and district, or city and country) to facilitate identification of locations with issues.
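As a taste of what response time analysis involves, here is a minimal sketch of computing the Response Time Effort index in Python. The data layout, the per-item thresholding rule, and the 10%-of-mean-time heuristic are assumptions chosen for illustration, not a description of SIFT's actual implementation.

# Minimal sketch of the Response Time Effort (RTE) index (Wise & Kong, 2005).
# Assumption (illustrative only): each item's rapid-guessing threshold is set
# to a fixed fraction of its mean response time; threshold choice is a
# judgment call in practice.
import numpy as np

def rte(response_times: np.ndarray, threshold_fraction: float = 0.10) -> np.ndarray:
    """Return one RTE value per examinee: the proportion of items on which
    the examinee spent at least the item's rapid-guessing threshold."""
    thresholds = threshold_fraction * response_times.mean(axis=0)  # per item
    solution_behavior = response_times >= thresholds               # broadcast over rows
    return solution_behavior.mean(axis=1)                          # per examinee

# Example: 3 examinees x 4 items, response times in seconds.
times = np.array([[42.0, 35.0, 50.0, 28.0],
                  [ 1.5,  1.0,  1.0,  1.5],   # rapid guessing throughout
                  [40.0,  1.0, 45.0, 30.0]])
print(rte(times))  # low values flag low-effort (rapid-guessing) examinees

Examinees with low RTE values can then be filtered out or flagged before other analyses, since rapid guessing distorts both item statistics and collusion indices.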

All output is provided in spreadsheets for easy viewing, manipulation, and secondary analysis. This allows, for example, a small certification organization to produce these analyses in only a few hours of work and quickly investigate suspect locations before a test is further compromised.


Statistical Detection: Where Do I Start?
Nathan Thompson & Terry Ausman

How can statistical detection of test fraud, and test security practices in general, be better directed? This presentation will begin by organizing the various types of analysis into a framework that aligns each with the hypothesis it is intended to test, argue that this framework should be used to direct security efforts, and then provide practical grounding by applying it to real data sets from K-12 education and professional certification.

In the first section, we will identify the common hypotheses to be tested: examinee copying, brain-dump makers, brain-dump takers, proctor/teacher involvement, low motivation, and compromised locations. Next, we match analyses to these hypotheses; for example, collusion indices are designed to detect copying but can also help find brain-dump takers. We also provide deeper explanations of the specific analyses.
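To make the copying hypothesis concrete, here is a simplified sketch of the raw statistic underlying many collusion indices: the count of identical incorrect responses shared by a pair of examinees. Real indices model how probable such matches are by chance; the function and data below are hypothetical and purely illustrative, not one of SIFT's indices.

# Illustrative only: count items where two examinees chose the same wrong
# option. High counts relative to chance suggest copying or a shared
# brain-dump source; real collusion indices attach a probability model.
from itertools import combinations

def identical_incorrect(resp_a, resp_b, key):
    """Number of items where both examinees picked the same wrong answer."""
    return sum(a == b != k for a, b, k in zip(resp_a, resp_b, key))

key = ["B", "C", "A", "D", "B"]
responses = {
    "ex1": ["B", "A", "A", "C", "B"],
    "ex2": ["B", "A", "A", "C", "D"],  # shares two wrong answers with ex1
    "ex3": ["A", "C", "B", "D", "B"],
}
for (i, ri), (j, rj) in combinations(responses.items(), 2):
    print(i, j, identical_incorrect(ri, rj, key))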

In the second section, we apply this framework to real data sets, showing how it can direct data forensics work rather than leaving analysts to poke around aimlessly. This will also demonstrate the statistical analyses themselves, facilitating learning of the approaches and driving discussion of practical issues faced by attendees. The final portion of the presentation will be just such a discussion.


Statistical Methods of Detecting Test Fraud: Can We Get More Practitioners on Board?
Nathan Thompson

Statistical methods of detecting test fraud have been around since the 1970s, but they are still not in general use by most practitioners, instead being limited to a few specialists. Similarly, best practices in test security are still not commonly applied except at large organizations with big stakes in play. First, we will discuss the hurdles that can prevent more professionals from learning about the topic, or keep knowledgeable professionals from applying best practices. Next, we will discuss potential solutions to each of those hurdles. The goal is to increase the validity of scores being reported throughout the industry by elevating the profession.


Nathan Thompson, PhD

Nathan Thompson earned his PhD in Psychometrics from the University of Minnesota, with a focus on computerized adaptive testing. His undergraduate degree was from Luther College, with a triple major of Mathematics, Psychology, and Latin. He is primarily interested in the use of AI and software automation to augment and replace the work done by psychometricians, which has given him extensive experience in software design and programming. Dr. Thompson has published over 100 journal articles and conference presentations, but his favorite remains https://scholarworks.umass.edu/pare/vol16/iss1/1/.