I recently received an email from a researcher who wanted to implement item response theory but was not sure where to start. It occurred to me that there are plenty of resources out there which describe IRT but few, if any, that provide guidance for how someone new to the topic could apply IRT. That is, plenty […]
About Nathan Thompson, PhD
I am a psychometrician, software developer, author, and researcher, currently serving as Chief Product Officer for Assessment Systems Corporation (ASC). My mission is to elevate the profession of psychometrics by using software to automate the menial stuff like job analysis and Angoff studies, so we can focus on more innovative work. My core goal is to improve assessment throughout the world.
I was originally trained as a psychometrician, doing an undergrad at Luther College in Math/Psych/Latin and then a PhD in Psychometrics at the University of Minnesota. I then worked multiple roles in the testing industry, including item writer, test development manager, essay test marker, consulting psychometrician, software developer, project manager, and business leader.
Research and innovation are incredibly important to me. In addition to my own research, I am cofounder and Membership Director at the International Association for Computerized Adaptive Testing. You can often find me at other important conferences like ATP, ICE, CLEAR, and NCME. I've published many papers and presentations, and my favorite remains http://pareonline.net/getvn.asp?v=16&n=1.
Entries by Nathan Thompson, PhD
Guttman errors are a concept derived from the Guttman Scaling approach to evaluating assessments. There are a number of ways that they can be used. Meijer (1994) suggests an evaluation of Guttman errors as a way to flag aberrant response data, such as cheating or low motivation. He quantified this with two different indices, G […]
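As a minimal illustration of the idea in this excerpt, the sketch below counts Guttman errors in a single examinee's dichotomous response vector. It assumes items are characterized by known difficulty values; a Guttman error is any pair of items where the easier item is answered incorrectly while the harder item is answered correctly. The function name and signature are hypothetical, not from Meijer (1994).

```python
def guttman_errors(responses, difficulties):
    """Count Guttman errors for one examinee.

    responses: list of 0/1 scores, one per item
    difficulties: list of item difficulty values (same order as responses)
    """
    # Reorder the responses from easiest item to hardest item
    ordered = [r for _, r in sorted(zip(difficulties, responses))]
    # A Guttman error: an easier item wrong (0) while a harder item is right (1)
    return sum(
        1
        for i in range(len(ordered))
        for j in range(i + 1, len(ordered))
        if ordered[i] == 0 and ordered[j] == 1
    )
```

A perfect Guttman pattern (all successes on the easiest items, all failures on the hardest) yields zero errors, while aberrant patterns such as answering only the hardest items correctly produce many, which is why counts like this can flag cheating or low motivation.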
What is a rubric? It’s a rule for converting unstructured responses on an assessment into structured data that we can use psychometrically. Why do we need rubrics? Measurement is a quantitative endeavor. In psychometrics, we are trying to measure things like knowledge, achievement, aptitude, or skills. So we need a way to convert qualitative data […]
Why online essay marking? Essay questions and other extended constructed response (ECR) items remain a mainstay of educational assessment. From a purely psychometric perspective, they are usually not beneficial – that is, from an item response theory paradigm, the amount of information added per minute of testing time will be less than other item types […]
Test security is an increasingly important topic. There are several causes, including globalization, technological enhancements, and the move to a gig-based economy driven by credentials. Any organization that sponsors assessments that have any stakes tied to them must be concerned with security, as the greater the stakes, the greater the incentive to cheat. And any […]
Today I read an article in The Industrial-Organizational Psychologist (the colloquial journal published by the Society for Industrial Organizational Psychology) that really resonated with me. Has Industrial-Organizational Psychology Lost Its Way? -Deniz S. Ones, Robert B. Kaiser, Tomas Chamorro-Premuzic, Cicek Svensson Why? Because I think a lot of the points they are making are also true […]
I just returned from the 2017 Innovations in Testing conference in Scottsdale, AZ, organized by the Association of Test Publishers. This is one of my favorite conferences because it contains a compelling blend of ingredients – quality psychometrics, innovative technology, and the business of assessment – and the 2017 edition definitely did not disappoint. This year I was […]
There are a number of acceptable methodologies in the psychometric literature for standard setting studies, also known as cutscores or passing points. Examples include Angoff, modified-Angoff, Bookmark, Contrasting Groups, and Borderline. The modified-Angoff approach is by far the most commonly used, yet it remains a black box to many professionals in the testing industry, especially non-psychometricians […]
I often hear this question, especially regarding the scaled scoring functionality found in software like FastTest and Xcalibre. The following is adapted from lecture notes I wrote while teaching a course in Measurement and Assessment at the University of Cincinnati. Scaling: Sort of a Tale of Two Cities Scaling at the test level really has […]