It’s October 30, 2017, and collusion is all over the news today… but I want to talk about a different kind of collusion. That is, non-independent test taking. In the field of psychometric forensics, examinee collusion refers to cases where an examinee takes a test with some sort of external help in obtaining the correct […]
About Nathan Thompson, PhD
I am a psychometrician, software developer, author, and researcher, currently serving as Chief Product Officer for Assessment Systems Corporation (ASC). My mission is to elevate the profession of psychometrics by using software to automate the menial stuff like job analysis and Angoff studies, so we can focus on more innovative work. My core goal is to improve assessment throughout the world.
I was originally trained as a psychometrician, doing an undergrad at Luther College in Math/Psych/Latin and then a PhD in Psychometrics at the University of Minnesota. I then worked multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader.
Research and innovation are incredibly important to me. In addition to my own research, I am cofounder and Membership Director at the International Association for Computerized Adaptive Testing. You can often find me at other important conferences like ATP, ICE, CLEAR, and NCME. I've published many papers and presentations, and my favorite remains http://pareonline.net/getvn.asp?v=16&n=1.
Entries by Nathan Thompson, PhD
In the past decade, terms like machine learning, artificial intelligence, and data mining have become ever-greater buzzwords as computing power, APIs, and the massively increased availability of data enable new technologies like self-driving cars. However, we’ve been using methodologies like machine learning in psychometrics for decades. So much of the hype is just hype. So, what […]
An emerging sector in the field of psychometrics is the area devoted to analyzing test data to find cheaters and other illicit or invalid testing behavior. We lack a generally agreed-upon and marketable term for that sort of work, and I’d like to suggest that we use Psychometric Forensics. While research on this topic is more […]
Last week, I had the opportunity to attend the 2017 Conference on Test Security (COTS), hosted by the University of Wisconsin-Madison. If your organization has any concerns about test security (that is, you have any sort of real stakes tied to your test!), I recommend that you attend COTS. It has a great mix of […]
The British statistician George Box is credited with the quote, “All models are wrong but some are useful.” As psychometricians, it is important that we never forget this perspective. We cannot be so haughty as to think that our models actually represent the true underlying phenomena and any data that does not fit nicely is just […]
Computerized adaptive testing (CAT) is an incredibly important innovation in the world of assessment. It’s a psychometric paradigm that applies machine learning principles to personalize millions and millions of assessments, from K12 education to university admissions to professional certification to employment screening to medical surveys. Although invented in the 1970s, primarily as part of […]
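The adaptive loop at the heart of CAT can be sketched in a few lines: after each response, re-estimate the examinee's ability, then administer the unused item that is most informative at the current estimate. Below is a minimal simulation under the Rasch model; the function names, the grid-search estimator, and the illustrative item bank are my own assumptions for the sketch, not any particular vendor's implementation.

```python
import numpy as np

def prob(theta, b):
    # Rasch model: probability of a correct response to an item of difficulty b
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_information(theta, b):
    # Fisher information for the Rasch model is p * (1 - p)
    p = prob(theta, b)
    return p * (1 - p)

def estimate_theta(responses, difficulties):
    # Crude maximum-likelihood estimate by grid search over theta
    grid = np.linspace(-4, 4, 161)
    ll = np.zeros_like(grid)
    for r, b in zip(responses, difficulties):
        p = prob(grid, b)
        ll += r * np.log(p) + (1 - r) * np.log(1 - p)
    return grid[np.argmax(ll)]

def cat_simulation(true_theta, bank, n_items=10, seed=0):
    # Simulate one examinee taking an adaptive test from a fixed item bank
    rng = np.random.default_rng(seed)
    available = list(range(len(bank)))
    theta = 0.0  # start at the population mean
    responses, administered = [], []
    for _ in range(n_items):
        # select the unused item with maximum information at the current estimate
        best = max(available, key=lambda i: item_information(theta, bank[i]))
        available.remove(best)
        administered.append(best)
        # simulate the examinee's response from the true ability
        responses.append(int(rng.random() < prob(true_theta, bank[best])))
        # re-estimate ability (ML needs at least one correct and one incorrect)
        if 0 < sum(responses) < len(responses):
            theta = estimate_theta(responses,
                                   [bank[i] for i in administered])
    return theta, administered

# Illustrative bank of 50 items with difficulties spread from -3 to +3
bank = np.linspace(-3, 3, 50)
theta_hat, items = cat_simulation(true_theta=1.0, bank=bank)
```

In practice the estimator would be EAP or a proper Newton-Raphson MLE, the selection rule would include exposure control and content constraints, and a stopping rule (standard error threshold) would replace the fixed test length.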
Cognitive diagnostic models are an area of psychometric research that has seen substantial growth in the past decade, though the mathematics behind them dates back to MacReady and Dayton (1977). The reason that they have been receiving more attention is that in many assessment situations, a simple overall score does not serve our purposes and […]
I recently received an email from a researcher who wanted to implement item response theory but was not sure where to start. It occurred to me that there are plenty of resources out there which describe IRT but few, if any, that provide guidance for how someone new to the topic could apply IRT. That is, plenty […]
Guttman errors are a concept derived from the Guttman Scaling approach to evaluating assessments. There are a number of ways that they can be used. Meijer (1994) suggests an evaluation of Guttman errors as a way to flag aberrant response data, such as cheating or low motivation. He quantified this with two different indices, G […]
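To make the idea concrete: a Guttman error is typically counted as a pair of items, ordered from easiest to hardest, in which the harder item is answered correctly while the easier item is missed. A minimal sketch of that count (my own illustration of the basic G index; Meijer's normalized variants are not shown):

```python
def guttman_errors(responses, difficulties):
    """Count Guttman errors in a scored (0/1) response vector.

    A Guttman error is a pair of items, ordered from easiest to
    hardest, where the easier item is wrong (0) but the harder
    item is right (1).
    """
    # reorder responses from easiest to hardest item
    order = sorted(range(len(responses)), key=lambda i: difficulties[i])
    r = [responses[i] for i in order]
    errors = 0
    for i in range(len(r)):
        for j in range(i + 1, len(r)):
            # item j is harder than item i
            if r[i] == 0 and r[j] == 1:
                errors += 1
    return errors

# A perfect Guttman pattern (all easy items right, all hard items
# wrong) has zero errors; a reversed pattern maximizes the count.
guttman_errors([1, 1, 1, 0, 0], [-2, -1, 0, 1, 2])  # 0
guttman_errors([0, 0, 1, 1, 1], [-2, -1, 0, 1, 2])  # 6
```

A high count relative to the maximum possible for that score is what flags the response pattern as aberrant.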