ASC hosted a webinar on June 14, 2022, as part of its AI In Assessment Series.  This edition is an interview with the father of computerized adaptive testing, Prof. David J. Weiss.

Learn more about his journey in the world of psychology, education, and assessment, from his initial interest in measuring individual differences to early exposure to computers, to the initial applications of item response theory in the development of the CAT approach as part of a research team at the University of Minnesota. He also discusses his recent research and expectations for the future of AI in Assessment.

 

Prof. David J. Weiss

Dr. David J. Weiss is widely regarded as the father of computerized adaptive testing (CAT) and was one of the first people to use computers to deliver assessments. Dr. Weiss has been incredibly impactful in the field of psychometrics and assessment:

– Professor at the University of Minnesota and longtime director of the Psychometric Methods program, where he supervised 37 PhDs

– Founder of the journal Applied Psychological Measurement

– Founder of the Journal of Computerized Adaptive Testing (JCAT)

– Co-Founder of Assessment Systems Corporation (assess.com)

– Co-Founder of Insurance Testing Corporation

– Co-Founder of the International Association for Computerized Adaptive Testing (iacat.org)

– Published hundreds of journal articles, reports, and conference presentations

ASC partners with Escuela Superior de Administración Pública (ESAP) to develop and securely deliver nationwide civil service exams in Colombia as well as admissions exams for educational programs in the field of civil service.  We worked together to provide a comprehensive solution to modernize these assessments when the COVID-19 pandemic hit.  Learn more below!

Requirements For Escuela Superior de Administración Pública

Escuela Superior de Administración Pública (ESAP) is a public Colombian institution that offers bachelor’s and postgraduate degree programs, but it is also involved in developing custom tests for governmental pre-employment programs. That means they are constantly searching for better solutions to meet their assessment needs. The outbreak of the pandemic in 2020 pushed them to look for online solutions for their civil service exams, but just as importantly, for solutions simple enough to assess all kinds of populations, from prospective students to high-level professionals filling senior governmental positions all across the country.

When they faced this immediate need, ESAP understood that their best bet would be to look for existing solutions rather than developing their own. However, after trying other online services, they discovered they had very specific needs that required a strategic ally rather than a mere deliverer of prebuilt tests.  Requirements included professional item banking, online test delivery, and modern psychometrics such as item response theory and adaptive testing.  They also required a candidate interface in Spanish, and the option for remote proctoring in Spanish.  ASC’s FastTest platform was an ideal fit.


Project Approach

Psychometrician and decision-maker teams from Escuela Superior de Administración Pública met with our expert bilingual staff to make sure their needs were fully understood, as they were not fluent English speakers. From there, we could offer the tools that best fit them and support them throughout their planning and design process. When we started working together, ESAP already had upcoming test deliveries, so the best option was to look for configurations as clean and straightforward as possible, given that ESAP constantly assesses populations all across the country and their stakeholders always look for successful projects, where the absenteeism rate is not a hurdle to their objectives.

Implementation

The key evidence of the successful implementation of the FastTest platform into ESAP’s daily work is that they were able to keep up with their tight agenda, which includes parallel projects and activities of item banking, online test delivery, scoring, and report generation. This experience made ESAP pioneers in Colombia in using tools designed specifically for these purposes, instead of adapting other platforms they previously used and spending their team’s valuable time on operational activities. Our support team took ESAP’s needs very seriously, lending a hand with some of the operational duties when a large number of different tests were being created.

Furthermore, these accomplishments encouraged ESAP to take on more projects, and the satisfaction of their contracting institutions has been the best introduction for earning the trust of new clients.

Regional Challenges

Colombia is a country located in South America, and its only official language is Spanish. Some inhabitants of the main cities have skills in other languages, but the vast majority of the population does not. This challenge required a service provider with Spanish language support, presenting all dialogs and navigation options in Spanish. This strategy also had to go along with a solid communication campaign to make the whole assessment process easy and even familiar to the examinees. This was key to ensuring a positive attitude, as online assessments have been used in Colombia in various contexts, but never before in projects as big and high-stakes as these civil service exams and admissions tests.

Technical / Digital Challenges

One of the main concerns the Escuela Superior de Administración Pública had when considering a move to an online solution was whether it would be harder for participants to take their exams, given that Colombia faces great challenges in internet coverage and a large portion of the population is not used to handling computers. From the very first test delivery, the absenteeism rate was equal to that observed in the paper-and-pencil assessments they had led before, suggesting that the online strategy, with its straightforward user interface, was easy for examinees to understand.

The support that ASC provides to ESAP includes bilingual staff to handle any questions they may have as administrators, requests from their stakeholders, and legal requirements, as well as assistance for examinees who might need help or guidance with their processes and procedures. We have also set up a communication strategy between ESAP and ASC so we know when the big test deliveries are coming and can adjust our servers and configurations accordingly.

Scale and Complexity

To understand the nature and complexity of ESAP’s activities, we need to mention that they are usually involved in many different, parallel projects. That means they must design a workflow to accomplish their goals in projects where they need to assess different traits in different demographic populations. They manage quick projects to select their provisional staff, assessments to select their prospective students, and high-stakes projects to select the best candidates for all kinds of governmental positions.

To select their students, ESAP uses tests that include general and specific competencies related to the field they intend to study. For the pre-employment programs, ESAP designs and delivers hybrid tests to assess both the personal competencies required for the job and the behaviors expected from the candidates. The objectives of the assessment are defined by the kind of process, field, and level (or background) required.

Benefits for Escuela Superior de Administración Pública

The flexible online testing solution has allowed Escuela Superior de Administración Pública to successfully deliver exams for top-level positions with only 1 applicant, all the way up to sessions with more than 12,000 examinees simultaneously.  ASC has been working together with ESAP for two years now, and we have delivered around 90,000 online tests during that time.  The project was successful enough that other institutions in Colombia have invited ESAP to serve as a consultant or vendor to them.

 

Authors: Nathan Thompson & Leonardo Copete Parra

 

ASC is proud to announce that we have successfully passed an audit for FISMA / FedRAMP Moderate, demonstrating our extremely high security standards for our online assessment platform!  FISMA and FedRAMP are both security protocols that are required to provide cloud-based software to the United States government, based on the National Institute of Standards and Technology (NIST) controls, outlined in NIST SP 800-53.  More information is below.

What does it mean that we achieved FISMA / FedRAMP?

If you are a US state or federal government entity, this means that it is much easier to utilize ASC’s powerful software for item banking, online testing, and psychometrics.  You can serve as a sponsor for an Authority to Operate (ATO).  If you are not such an entity, it means you can rest assured that ASC’s commitment to security is strong enough that it meets such stringent standards.

There are many aspects that go into building a platform of this quality, including code quality, code review, user roles, separation of concerns, staff access to servers, and tracking of tickets and code releases.  Then, of course, there is the substantial investment of a third-party audit.

More information on FISMA / FedRAMP

https://www.paloaltonetworks.com/cyberpedia/difference-between-fisma-and-fedramp

https://www.fedramp.gov/program-basics/

https://foresite.com/blog/fisma-vs-fedramp-and-nist-making-sense-of-government-compliance-standards/

Yes, I’d like to learn more.

Please contact us for a software demo or request a trial account.  We’d love to hear your requirements!

 

The California Department of Human Resources (CalHR, calhr.ca.gov/) has selected Assessment Systems Corporation (ASC, assess.com) as its vendor for an online assessment platform. CalHR is responsible for the personnel selection and hiring of many job roles for the State, and delivers hundreds of thousands of tests per year to job applicants. CalHR seeks to migrate to a modern cloud-based platform that allows it to manage large item banks, quickly publish new test forms, and deliver large-scale assessments that align with modern psychometrics like item response theory (IRT) and computerized adaptive testing (CAT).

Assess.ai as a solution

ASC’s landmark assessment platform Assess.ai was selected as a solution for this project. ASC has been providing computerized assessment platforms with modern psychometric capabilities since the 1980s, and released Assess.ai in 2019 as a successor to its industry-leading platform FastTest. It includes modules for item authoring, item review, automated item generation, test publishing, online delivery, and automated psychometric reporting.

Read the full article here.


Nathan Thompson, Ph.D., was recently invited to talk about ASC and the future of educational assessment on the EdNorth EdTech Podcast.

EdNorth is an association dedicated to continuing the long history of innovation in educational technology that has been rooted in the Twin Cities of Minnesota (Minneapolis / Saint Paul). Click below to listen online, or find it on Apple or other podcast aggregators.

Dr. Thompson discusses the history of ASC, ASC’s mission to improve assessment with quality psychometrics, and how AI and automation are being used more often – even though they have been part of the psychometrics field for a century.

Thank you to Dave Swerdlick and the team at EdNorth for the opportunity to speak!

Last week, I had the opportunity to attend the 2017 Conference on Test Security (COTS), hosted by the University of Wisconsin-Madison.  If your organization has any concerns about test security (that is, you have any sort of real stakes tied to your test!), I recommend that you attend COTS.  It has a great mix of psychometric research with practical discussions such as policies and procedures.  While it was originally titled “Conference on Statistical Detection of Test Fraud” it has since expanded its scope and thankfully reduced the number of syllables in the name.

The venue was the Pyle Center on the shores of Lake Mendota, just one block from the famous State Street.  Madison is a beautiful city, situated on an isthmus between two large lakes; great for visuals but not so great for traffic patterns.  The location was incredibly convenient for me, as it is driving distance from my home in Minnesota, and allowed me to stay with my family in nearby Watertown, watch my brother coach a high school football game in Columbus, and stop at the CamRock mountain bike trails that I’ve always wanted to try (highly recommend!).

One highlight of the conference was the chance to present with my friend, former colleague, and graduate school office-mate Jennifer Davis from the National Association of Boards of Pharmacy.  We compared three software programs for psychometric forensics: SIFT, CopyDetect, and the Outlier Detection Tool.  SIFT and CopyDetect both provide several collusion indices, but SIFT provides more and is far faster (CopyDetect took 2 hours to run 134 examinees).  The Outlier Detection Tool is an internal spreadsheet used by NABP that serves a slightly different purpose; for more information, contact them.

The best part of the Conference on Test Security, just like the IACAT conference I just attended, was the chance to spend time with old friends that I only see once every year or two, as well as make new friends such as a researcher from ASC’s partner Ascend Learning.  In fact, I didn’t even get a chance to attend any sessions on the second day, I instead spent the time talking to colleagues.

Biggest disappointment?  I didn’t hang around until Saturday to attend the Badger game and join the traditional “Jump Around!”

ASC attended the 2016 Conference on Test Security (COTS), held October 18-20 in Cedar Rapids, IA, and graciously hosted by Pearson. The conference brings together thought leaders on all aspects of test security, including statistical detection of test fraud, management of test centers, candidate agreements, investigations, and legal implications. ASC was lucky enough to win three presentation spots.  Please check out the abstracts below.  If you are interested in learning more, please get in touch!

 

SIFT: Software for Investigating Fraud in Testing
Nathan Thompson & Terry Ausman

SIFT is a software program specifically designed to bring data forensics to more practitioners. Widespread application of data forensics, like other advanced psychometric topics, is somewhat limited when an organization’s only options are to hire outside consultants or attempt to write code themselves. SIFT enables organizations with smaller budgets to apply data forensics by automating the calculation of complex indices as well as simpler yet important statistics, in a user-friendly interface.

The most complex portion is a set of 10 collusion indices (more in development) from which the user can choose. SIFT also provides functionality for response time analysis, including the Response Time Effort index (Wise & Kong). More common analyses include classical item statistics, mean test times, score gains, and pass rates. All indices are also rolled up into two nested levels of groups (for example, school and district, or country and city) to facilitate identification of locations with issues.

All output is provided in spreadsheets for easy viewing, manipulation, and secondary analysis. This allows, for example, a small certification organization to obtain all of this output in only a few hours of work, and quickly investigate locations before a test is further compromised.
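For readers unfamiliar with the Response Time Effort index mentioned above, here is a minimal sketch of the computation. The 10-second threshold and the data are purely illustrative assumptions; in practice, thresholds are set per item from the response-time distribution.

```python
# Minimal sketch of the Response Time Effort (RTE) index (Wise & Kong).
# An item response shows "solution behavior" when its response time is at
# or above the item's threshold; RTE is the proportion of such items.
# The threshold below is an illustrative assumption, not an operational value.

def response_time_effort(response_times, thresholds):
    """Proportion of items answered with solution behavior."""
    solution = [rt >= cutoff for rt, cutoff in zip(response_times, thresholds)]
    return sum(solution) / len(solution)

# Example: an examinee who rapid-guesses on 2 of 5 items.
times = [42.0, 3.1, 55.8, 2.4, 30.0]        # seconds per item
cutoffs = [10.0] * 5                        # hypothetical 10-second cutoff
rte = response_time_effort(times, cutoffs)  # 3 of 5 items -> 0.6
```

Examinees whose RTE falls below a policy cutoff (often something like 0.90) are typically flagged for possible rapid guessing or low motivation.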

 

Statistical Detection: Where Do I Start?
Nathan Thompson & Terry Ausman

How can statistical detection of test fraud be better directed, or test security practices in general for that matter? This presentation will organize various types of analysis into a framework by aligning each with the hypothesis it intends to test, show how this framework can be used to direct efforts, and then provide some real experience by applying these analyses to real data sets from K-12 education and professional certification.

In the first section, we will start by identifying the common hypotheses to be tested, including: examinee copying, brain dump makers, brain dump takers, proctor/teacher involvement, low motivation, and compromised locations. Next, we match up analyses, such as how collusion indices are designed to elucidate copying but can also help find brain dump takers. We also provide deeper explanations on the specific analyses.

In the second section, we apply this framework to the analysis of real data sets. This will show how the framework can be useful in directing data forensics work rather than aimlessly poking around. It will also demonstrate usage of the statistical analyses, facilitating learning of the approaches as well as driving discussions of practical issues faced by attendees. The final portion of the presentation will then be just such a discussion.

 

Statistical Methods of Detecting Test Fraud: Can We Get More Practitioners on Board?
Nathan Thompson

Statistical methods of detecting test fraud have been around since the 1970s, but are still not in general use by most practitioners, instead being limited to a few specialists.  Similarly, best practices in test security are still not commonly used except at large organizations with big stakes in play.  First, we will discuss the sorts of hurdles that can prevent more professionals from learning about the topic, or prevent knowledgeable professionals from applying best practices.  Next, we will discuss some potential solutions to each of those hurdles.  The goal is to increase the validity of scores being reported throughout the industry by elevating the profession.

 

Every Spring, the Association of Test Publishers (ATP) hosts its annual conference, Innovations in Testing.  This is the leading conference in the testing industry, with nearly 1,000 people from major testing vendors and a wide range of test sponsors, from school districts to certification boards to employment testing companies.  While the technical depth is much lower than pure-scholar conferences like NCME and IACAT, it is the top conference for networking, business contacts, and discussion of practical issues.

The conference is typically held in a warm location at the end of a long winter.  This year did not disappoint, with Orlando providing us with a sunny 75 degrees each day!

Interested in attending a conference on assessment and psychometrics?  We provide this list to help you decide.

ATP Presentations

Here are the four presentations that were presented by the Assessment Systems team:

Let the CAT out of the Bag: Making Adaptive Testing more Accessible and Feasible

This session explored the barriers to implementing adaptive testing and how to address them.  Adaptive testing remains underutilized, and it is our job as a profession to fix that.

FastTest: A Comprehensive System for Assessment

FastTest revolutionizes how assessments are developed and delivered, with scalable security.  We provided a demo session with an open discussion.  Want to see a demo yourself?  Get in touch!

Is Remote Proctoring Really Less Secure?

Rethink security. How can we leverage technology to improve proctoring?  Is the 2,000-year-old approach still the most valid?  Let’s critically evaluate remote proctoring and how technology can make it better than one person watching a room of 30 computers.  Privacy is another important consideration.

The Best of Both Worlds: Leveraging TEIs without Sacrificing Psychometrics

Anyone can dream up a drag-and-drop item. But can you make it automatically scored with IRT in real time?  We discussed our partnership with Charles County (MD) Public Schools to develop a system that allowed a single school district to develop quality assessments on par with what a major testing company would do. Learn more about our item authoring platform.

The topic of test security is an emerging field within the assessment industry.  Test fraud is one of the most salient threats to score validity, so methods to combat it are extremely important to all of us.  So important, in fact, that an annual conference has been established that is devoted to the topic: the Conference on Test Security (COTS).

The 2015 Conference on Test Security

The 2015 edition of the Conference on Test Security was hosted by the Center for Educational Testing and Evaluation at the University of Kansas.  Assessment Systems had the privilege of presenting two full sessions: One Size Does Not Fit All: Making Test Security Configurable and Scalable, and Let’s Rethink How Technology Can Improve Proctoring.  Abstracts for these are below; if you would like to learn more, please contact us.  Additionally, we had the opportunity to give a product demonstration of our upcoming data forensics software, SIFT (more on that below).

The Conference on Test Security kicked off with a keynote on the now-famous Atlanta cheating scandal.  This scandal was unique in that it was systematic and top-down rather than bottom-up.  The follow-up message stressed the difference between assessment and accountability: the fact that a test is tied to accountability standards does not mean the test itself is bad, or that all testing is bad.

One of the most commonly presented topics at the conference is data forensics.  In fact, the Conference on Test Security used to be called the Conference on Statistical Detection of Test Fraud.  But while there has been research on statistical detection of test fraud for more than 50 years, it is effectively a much younger topic and we are still learning a lot.  Moreover, there are no good software programs that are publicly available to help organizations implement best practices in data forensics.  This is where SIFT comes in.

What is Data Forensics?

In the realm of test security, data forensics refers to analysis of data to find evidence of various types of test fraud.  There are a few big types, and the approach to analysis can be quite different.  Here are some descriptions, though this is far from a complete treatment of the topic!

Answer-changing: In Atlanta, teachers and administrators would change answers on student bubble sheets after the test was turned in.  This involves quantification of the changes, and then analysis of right-to-wrong vs. wrong-to-right changes, amongst other things.  This, of course, is primarily relevant for paper-based tests, but some answer-changing can happen on computer-based tests.

Preknowledge: If an examinee purchases a copy of the test off the internet from an illegal “brain dump” website, they will get a surprisingly high score while taking less time than expected.  This could be on all items or a subset.

Item harvesting: An examinee is paid to memorize as many items as they can.  They might spend 5 minutes each on the first 15 items and not even look at the remainder.

Collusion:  The age-old issue of Student A copying off Student B is collusion, but it can also involve multiple students and even teachers or other people.  Statistical analysis looks at unusual patterns in the response data.
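As a toy illustration of the collusion idea (not one of SIFT’s indices), a naive pairwise screen can count how often two examinees chose the same wrong option; published indices go further and model how much of that agreement would be expected by chance. The data and the cutoff below are hypothetical.

```python
# Naive pairwise collusion screen: count identical incorrect answers.
# This is only the raw signal behind real collusion indices, which model
# chance agreement; the flag_at cutoff here is a hypothetical placeholder.

from itertools import combinations

def identical_incorrect(resp_a, resp_b, key):
    """Count items where two examinees chose the same wrong option."""
    return sum(
        1 for a, b, k in zip(resp_a, resp_b, key)
        if a == b and a != k
    )

def screen_pairs(responses, key, flag_at=3):
    """Flag examinee pairs whose identical-incorrect count meets the cutoff."""
    flags = []
    for (i, ra), (j, rb) in combinations(responses.items(), 2):
        matches = identical_incorrect(ra, rb, key)
        if matches >= flag_at:
            flags.append((i, j, matches))
    return flags

key = ["A", "C", "B", "D", "A"]
responses = {
    "ex1": ["A", "B", "B", "C", "B"],  # three wrong answers
    "ex2": ["A", "B", "B", "C", "B"],  # identical, including wrong options
    "ex3": ["A", "C", "B", "D", "A"],  # perfect score
}
print(screen_pairs(responses, key))  # [('ex1', 'ex2', 3)]
```

Note that the pair of examinees with a perfect score never triggers a flag: agreeing on correct answers is expected, so only shared wrong options carry evidentiary weight.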

How can I implement some of this myself?

Unfortunately, there is no publicly available software that is adequately devoted to data forensics.  Existing software is very limited in the analysis it provides and/or its usability.  For example, there are some packages available in the R programming language, but you need to learn to program in R!  Therefore, Assessment Systems has developed our own system, entitled Software for Investigating Test Fraud (SIFT), to meet this market need.


 

SIFT will provide a wide range of analyses, including a number of collusion indices (with more to come!), flagging of possible preknowledge or item harvesting, unusual time patterns, and more.  It will also aggregate the analyses up a few levels, for example flagging test centers or schools that have unusually high numbers of students with unusually high collusion or time-pattern flags.

A beta version will be available in December, with a full version available in 2016.  If you are interested, please contact us!

Presentation Abstracts

One Size Does Not Fit All: Making Test Security Configurable and Scalable

Development of an organization’s test security plan involves many choices, an important aspect of which is the test development, publishing, and delivery process.  Much of this process is now browser-based for many organizations.  While there are risks involved with this approach, it provides much more flexibility and control for organizations, plus additional advantages such as immediate republishing.  This is especially useful because different programs/tests within an organization might vary widely.  It is therefore ideal to have an assessment platform that maximizes the configurability of security.

This presentation will provide a model to evaluate security risks, determine relevant tactics, and design your delivery solution by configuring test publishing/delivery option around these tactics to ensure test integrity.  Key configurations include:

  • Regular browser vs. lockdown browser
  • No proctor, webcam proctor, or live proctor
  • Login processes such as student codes, proctor codes, and ID verification
  • Delivery approach: linear, LOFT, CAT
  • Practical constraints like setting delivery windows, time limits, and allowing review
  • Complete event tracking during the exam
  • Data forensics within the system

In addition, we invite attendees to discuss technological approaches they have taken to addressing test security risks, and how they fit into the general model.

Let’s Rethink How Technology Can Improve Proctoring

Technology has revolutionized much of assessment.  However, a large proportion of proctoring is still done the same way it was 30 years ago.  How can we best leverage technology to improve test security by improving the proctoring of an assessment?  Much of this discussion revolves around remote proctoring (RP), but there are other aspects.  For example, consider a candidate focusing on memorizing 10 items: can this be better addressed by real-time monitoring of irregular response times with RP than by a single in-person proctor on the other side of the room?  Or by LOFT/CAT delivery?

This presentation discusses the security risks and validity threats that are intended to be addressed by proctors and how they might be instead addressed by technology in some way.  Some of the axes of comparison include:

  • Confirming ID of examinee
  • Provision of instructions
  • Confirmation of clean test area with only allowed materials
  • Monitoring of examinee actions during test time
  • Maintaining standardized test environment
  • Protecting test content
  • Monitoring irregular time patterns

In addition, we can consider how we can augment the message of deterrence with tactics like data forensics, strong agreements, possibility of immediate test shutdown, and more secure delivery methods like LOFT.


Contact Us To Get an Assessment Solution