
ASC Presents at 2016 Conference on Test Security

ASC attended the 2016 Conference on Test Security (COTS), held October 18-20 in Cedar Rapids, IA, and graciously hosted by Pearson.  The conference brings together thought leaders on all aspects of test security, including statistical detection of test fraud, management of test centers, candidate agreements, investigations, and legal implications.  ASC was fortunate enough to win three presentation spots.  Please check out the abstracts below, and if you are interested in learning more, please get in touch!

 

SIFT: Software for Investigating Fraud in Testing
Nathan Thompson & Terry Ausman

SIFT is a software program specifically designed to bring data forensics to more practitioners. Widespread application of data forensics, like other advanced psychometric topics, is somewhat limited when an organization’s only options are to hire outside consultants or attempt to write code themselves. SIFT enables organizations with smaller budgets to apply some data forensics by automating the calculation of complex indices as well as simpler yet important statistics, in a user-friendly interface.

The most complex portion is a set of 10 collusion indices (with more in development) from which the user can choose. SIFT also provides functionality for response time analysis, including the Response Time Effort index (Wise & Kong). More common analyses include classical item statistics, mean test times, score gains, and pass rates. All indices are also rolled up into two nested levels of groups (for example, school and district, or city and country) to facilitate identification of locations with issues.
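
To make the response time analysis concrete, here is a minimal sketch of the Response Time Effort (RTE) index in the spirit of Wise & Kong: the proportion of items on which an examinee's response time meets an item-level "solution behavior" threshold. The threshold convention used here (10% of each item's mean response time) is a common choice in the literature, not necessarily the one SIFT implements.

```python
def rte(response_times, thresholds):
    """Proportion of items answered with solution behavior (RT >= threshold).

    Low RTE values suggest rapid guessing, i.e., low examinee motivation.
    """
    solution = [rt >= t for rt, t in zip(response_times, thresholds)]
    return sum(solution) / len(solution)

# Illustrative thresholds: 10% of each item's mean response time (seconds).
item_mean_times = [30.0, 45.0, 20.0, 60.0]
thresholds = [0.10 * m for m in item_mean_times]   # [3.0, 4.5, 2.0, 6.0]

# The second response (2.0 s, below its 4.5 s threshold) looks like a rapid guess.
examinee_times = [12.0, 2.0, 25.0, 50.0]
print(rte(examinee_times, thresholds))             # 0.75
```

In practice, examinees with RTE below some cutoff (e.g., 0.90) are flagged for possible low-effort responding, which can then be weighed separately from fraud-related flags.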

All output is provided in spreadsheets for easy viewing, manipulation, and secondary analysis. This allows, for example, a small certification organization to obtain all of this output in only a few hours of work, and quickly investigate locations before a test is further compromised.

 

Statistical Detection: Where Do I Start?
Nathan Thompson & Terry Ausman

How can statistical detection of test fraud, or test security practices in general, be better directed? This presentation will begin by organizing the various types of analysis into a framework, aligning each with the hypothesis it is intended to test. We will then show how this framework can be used to direct efforts, and finally provide some real-world grounding by applying it to real data sets from K-12 education and professional certification.

In the first section, we will identify the common hypotheses to be tested, including: examinee copying, brain dump makers, brain dump takers, proctor/teacher involvement, low motivation, and compromised locations. Next, we match analyses to these hypotheses; for example, collusion indices are designed to elucidate copying but can also help find brain dump takers. We also provide deeper explanations of the specific analyses.
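
As an illustration only, the kind of hypothesis-to-analysis mapping described above might be sketched as a simple lookup; the pairings below are examples assembled from this abstract, not an exhaustive or official framework.

```python
# Hypothetical mapping from test-fraud hypotheses to candidate analyses.
FRAMEWORK = {
    "examinee copying":      ["collusion indices"],
    "brain dump takers":     ["collusion indices", "score gains"],
    "brain dump makers":     ["response time analysis"],
    "proctor involvement":   ["group-level score gains", "group pass rates"],
    "low motivation":        ["Response Time Effort (RTE)"],
    "compromised locations": ["group roll-ups of pass rates and mean times"],
}

def analyses_for(hypothesis):
    """Return candidate analyses for a given hypothesis (empty if unknown)."""
    return FRAMEWORK.get(hypothesis, [])

print(analyses_for("low motivation"))  # ['Response Time Effort (RTE)']
```

The point of such a structure is exactly the one made above: each analysis is run because it tests a stated hypothesis, rather than poking around aimlessly.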

In the second section, we apply this framework to the analysis of real data sets. This will show how the framework can be useful in directing data forensics work rather than aimlessly poking around. It will also demonstrate usage of the statistical analyses, facilitating learning of the approaches as well as driving discussions of practical issues faced by attendees. The final portion of the presentation will then be just such a discussion.

 

Statistical Methods of Detecting Test Fraud: Can We Get More Practitioners on Board?
Nathan Thompson

Statistical methods of detecting test fraud have been around since the 1970s, but they are still not in general use by most practitioners, instead being limited to a few specialists.  Similarly, best practices in test security are still not commonly used except at large organizations with big stakes in play.  First, we will discuss the hurdles that prevent more professionals from learning about the topic, or prevent knowledgeable professionals from applying best practices.  Next, we will discuss potential solutions to each of those hurdles.  The goal is to increase the validity of scores being reported throughout the industry by elevating the profession.



Nathan Thompson, PhD

Chief Product Officer at ASC
I am a psychometrician, software developer, author, and researcher, currently serving as Chief Product Officer for Assessment Systems Corporation (ASC). My mission is to elevate the profession of psychometrics by using software to automate the menial stuff like job analysis and Angoff studies, so we can focus on more innovative work. My core goal is to improve assessment throughout the world. I was originally trained as a psychometrician, doing an undergrad at Luther College in Math/Psych/Latin and then a PhD in Psychometrics at the University of Minnesota. I then worked multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader. Research and innovation are incredibly important to me. In addition to my own research, I am cofounder and Membership Director at the International Association for Computerized Adaptive Testing. You can often find me at other important conferences like ATP, ICE, CLEAR, and NCME. I've published many papers and presentations, and my favorite remains http://pareonline.net/getvn.asp?v=16&n=1.

