Entries by Nathan Thompson, PhD

What validity threats are relevant to psychometric forensics?

Validity, in its modern conceptualization, refers to evidence that supports our intended interpretations of test scores (see Chapter 1 of the APA/AERA/NCME Standards for a full treatment).  Validity threats are issues that hinder the interpretation and use of scores.  The word “interpretation” is key because test scores can be interpreted in different ways, including ways […]

What is classical item difficulty (P value)?

One of the core concepts in psychometrics is item difficulty.  This refers to the probability that examinees will get the item correct for educational/cognitive assessments, or respond in the keyed direction for psychological/survey assessments (more on that later).  Difficulty is important for evaluating the characteristics of an item and whether it should continue to be part of […]
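For a dichotomously scored item, the classical P value is simply the proportion of examinees who answered it correctly. A minimal sketch, using a small hypothetical response matrix (the data and function name here are illustrative, not from the original post):

```python
# Rows are examinees, columns are items; 1 = correct, 0 = incorrect.
responses = [
    [1, 0, 1, 1],
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 0, 0, 1],
]

def p_values(scored):
    """Classical item difficulty: proportion correct for each item."""
    n_examinees = len(scored)
    n_items = len(scored[0])
    return [
        sum(row[item] for row in scored) / n_examinees
        for item in range(n_items)
    ]

print(p_values(responses))  # one P value per item: [0.75, 0.5, 0.5, 1.0]
```

Note that despite the name, a higher P value means an *easier* item, since more examinees got it right.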

Examinee Collusion: Primary vs Secondary

It’s October 30, 2017, and collusion is all over the news today… but I want to talk about a different kind of collusion.  That is, non-independent test taking.  In the field of psychometric forensics, examinee collusion refers to cases where an examinee takes a test with some sort of external help in obtaining the correct […]

Machine Learning in Psychometrics: Old News?

In the past decade, terms like machine learning, artificial intelligence, and data mining have become ever-greater buzzwords as computing power, APIs, and the massively increased availability of data enable new technologies like self-driving cars. However, we’ve been using methodologies like machine learning in psychometrics for decades, so much of the hype is just hype. So, what […]

Can we call it Psychometric Forensics?

An emerging sector in the field of psychometrics is the area devoted to analyzing test data to find cheaters and other illicit or invalid testing behavior. We lack a generally agreed-upon and marketable term for that sort of work, and I’d like to suggest that we use Psychometric Forensics. While research on this topic is more […]

2017 Conference on Test Security

Last week, I had the opportunity to attend the 2017 Conference on Test Security (COTS), hosted by the University of Wisconsin-Madison.  If your organization has any concerns about test security (that is, you have any sort of real stakes tied to your test!), I recommend that you attend COTS.  It has a great mix of […]

All Psychometric Models Are Wrong

The British statistician George Box is credited with the quote, “All models are wrong but some are useful.”  As psychometricians, it is important that we never forget this perspective.  We cannot be so haughty as to think that our models actually represent the true underlying phenomena and any data that does not fit nicely is just […]

Want to learn more about adaptive testing? Attend IACAT.

Computerized adaptive testing (CAT) is an incredibly important innovation in the world of assessment.  It’s a psychometric paradigm that applies machine learning principles to personalize millions and millions of assessments, from K12 education to university admissions to professional certification to employment screening to medical surveys.  While invented in the 1970s, primarily as part of […]

What are cognitive diagnostic models?

Cognitive diagnostic models are an area of psychometric research that has seen substantial growth in the past decade, though the mathematics behind them dates back to MacReady and Dayton (1977).  The reason that they have been receiving more attention is that in many assessment situations, a simple overall score does not serve our purposes and […]