ASC has been empowering organizations to develop better assessments since 1979. Curious how things were back then? Below is a copy of our newsletter from 1988, long before the days of sharing news via email and social media! Our platform at the time was named MICROCAT. It was later modernized into FastTest PC […]
About Nathan Thompson, PhD
I am a psychometrician, software developer, author, and researcher, currently serving as Chief Product Officer for Assessment Systems Corporation (ASC). My mission is to elevate the profession of psychometrics by using software to automate the menial stuff like job analysis and Angoff studies, so we can focus on more innovative work. My core goal is to improve assessment throughout the world.
I was originally trained as a psychometrician, doing an undergrad at Luther College in Math/Psych/Latin and then a PhD in Psychometrics at the University of Minnesota. I then worked multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader.
Research and innovation are incredibly important to me. In addition to my own research, I am cofounder and Membership Director at the International Association for Computerized Adaptive Testing. You can often find me at other important conferences like ATP, ICE, CLEAR, and NCME. I've published many papers and presentations, and my favorite remains http://pareonline.net/getvn.asp?v=16&n=1.
Entries by Nathan Thompson, PhD
Item response theory is the predominant psychometric paradigm for mid- or large-scale assessment. As noted in my introductory blog post, it is actually a family of models. In this post, we discuss the two-parameter IRT model (2PL). The 2PL is described by the following equation (simplified from Hambleton & Swaminathan, 1985, Eq. 3.3): […]
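The full equation is in the post itself, but for readers who want to see the 2PL in action, here is a minimal sketch of the standard two-parameter logistic response function (the function and variable names are mine, not from the post):

```python
import math

def p_correct(theta, a, b):
    """Probability of a correct response under the 2PL model.

    theta: examinee ability
    a: item discrimination
    b: item difficulty
    """
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))
```

For example, an examinee at theta = 0 facing an item with difficulty b = 0 has a 0.5 probability of a correct response, regardless of the discrimination parameter.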
Item response theory (IRT) is an extremely powerful psychometric paradigm that addresses many of the inadequacies of classical test theory (CTT). If you are new to the topic, there is a broad intro here, where you will learn that IRT is actually a family of mathematical models rather than a single model. Today, I’m talking […]
Classical test theory is a century-old paradigm for psychometrics: the use of quantitative, scientifically based processes to develop and analyze assessments and maximize their quality. (Nobody likes unfair tests!) The most basic and most frequently used item statistic from classical test theory is the P-value. It is usually called item difficulty but is sometimes called item facility, […]
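Since the classical P-value is simply the proportion of examinees who answer an item correctly, it takes only a line of code to compute. A quick illustrative sketch (the names are my own, not from the post):

```python
def p_value(responses):
    """Classical item difficulty: proportion of correct responses.

    responses: list of 0/1 scores for one item across examinees.
    """
    return sum(responses) / len(responses)
```

With four examinees scoring 1, 1, 0, 1 on an item, the P-value is 0.75: a relatively easy item, since higher values mean more examinees got it right.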
The modified-Angoff method is arguably the most common method of setting a cutscore on a test. The Angoff cutscore is legally defensible and meets international standards such as AERA/APA/NCME, ISO 17024, and NCCA. It also has the benefit that it does not require the test to be administered to a sample of candidates first; methods like […]
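As a rough illustration of the modified-Angoff arithmetic (a simplified sketch of the general approach, not any specific implementation): each rater estimates the probability that a minimally competent candidate would answer each item correctly, and the raw cutscore is the average, across raters, of each rater's summed ratings.

```python
def angoff_cutscore(ratings):
    """Raw modified-Angoff cutscore.

    ratings: one list per rater, giving each rater's estimated probability
    that a minimally competent candidate answers each item correctly.
    Returns the mean across raters of the summed item ratings.
    """
    rater_sums = [sum(rater) for rater in ratings]
    return sum(rater_sums) / len(rater_sums)
```

For instance, two raters who judge a two-item test at (0.7, 0.8) and (0.6, 0.9) both sum to 1.5, so the raw cutscore is 1.5 out of 2 points.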
Linear on the fly testing (LOFT) is an approach to delivering assessments to examinees. In general, there are two families of test delivery. Static approaches deliver the same test form or forms to everyone; this is the ubiquitous and traditional “linear” method of testing. Algorithmic approaches deliver the test to each examinee based on a […]
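A toy sketch of the LOFT idea (the pool structure and function here are hypothetical, not from the post): each examinee receives a unique fixed-length form assembled on the fly by random draws from content-blueprinted item pools.

```python
import random

def assemble_loft_form(pools, blueprint, seed=None):
    """Assemble one LOFT form for one examinee.

    pools: dict mapping content domain -> list of item IDs
    blueprint: dict mapping content domain -> number of items required
    seed: optional seed so a form can be reproduced for review
    """
    rng = random.Random(seed)
    form = []
    for domain, n_items in blueprint.items():
        # Draw the required number of items for this domain, without replacement
        form.extend(rng.sample(pools[domain], n_items))
    rng.shuffle(form)
    return form
```

Each call yields a form meeting the same blueprint, but different examinees see different item combinations, which reduces item exposure compared to a single static form.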
The field of Psychometrics is definitely a small niche in the world, even though it touches almost every person at some point in their lives. When I’m trying to explain what I do to people from outside the field, I’m often asked something like, “Where do you even go to study something like that?” I’m […]
Have you heard about standard setting approaches such as the Hofstee method, or perhaps the Angoff, Ebel, Nedelsky, or Bookmark methods? There are certainly various ways to set a defensible cutscore on a professional credentialing or pre-employment test. Today, we are going to discuss the Hofstee method. Why Standard Setting? Certification organizations that care about […]
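For the curious, here is a simplified sketch of the Hofstee compromise (my own illustration, assuming percent-correct scores; the names are not from the post): judges supply minimum and maximum acceptable cutscores (k_min, k_max) and minimum and maximum acceptable failure rates (f_min, f_max), and the cutscore is taken where the observed failure rate comes closest to the line running from (k_min, f_max) to (k_max, f_min).

```python
def hofstee_cutscore(scores, k_min, k_max, f_min, f_max):
    """Hofstee compromise cutscore on a percent-correct scale.

    scores: observed examinee scores (percent correct)
    k_min, k_max: judges' min/max acceptable cutscores (integers)
    f_min, f_max: judges' min/max acceptable failure rates (proportions)
    """
    best_c, best_gap = k_min, float("inf")
    for c in range(k_min, k_max + 1):
        # Proportion of examinees who would fail at cutscore c
        observed_fail = sum(s < c for s in scores) / len(scores)
        # Judges' acceptability line from (k_min, f_max) to (k_max, f_min)
        line_fail = f_max - (f_max - f_min) * (c - k_min) / (k_max - k_min)
        gap = abs(observed_fail - line_fail)
        if gap < best_gap:
            best_c, best_gap = c, gap
    return best_c
```

The result always lands inside the judges' acceptable cutscore range, which is exactly the appeal of the method: it balances absolute judgments against real score data.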
The Spearman-Brown Prediction Formula, also known as the Spearman-Brown Prophecy Formula or Correction, is a method used in evaluating test reliability. It is based on the idea that split-half reliability has better assumptions than coefficient alpha, but only estimates reliability for a half-length test, so we need to implement a correction that steps it up […]
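The correction itself is a one-liner. Here is an illustrative sketch of the general Spearman-Brown formula (my own naming), where k is the factor by which test length changes; k = 2 steps a half-length reliability up to the full-length test:

```python
def spearman_brown(r, k=2.0):
    """Predict reliability when test length is multiplied by k.

    r: observed reliability (e.g., a split-half correlation)
    k: length factor; k=2 corrects a half-length estimate to full length
    """
    return k * r / (1.0 + (k - 1.0) * r)
```

For example, a split-half reliability of 0.60 predicts a full-length reliability of 0.75.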
Artificial intelligence (AI) and machine learning (ML) have become buzzwords over the past few years. As I have already written, they are actually old news in the field of psychometrics. Factor analysis is a classical example of ML, and item response theory also qualifies as ML. Computerized adaptive testing is actually an application of AI […]