Nikole Gregg

Nikole Gregg, a third-year student in the Assessment and Measurement PhD program, completed a 10-week internship at Cambium Assessment under the supervision of Dr. Sue Lottridge, an alumna of the program. During the internship, Nikole completed a project investigating the fairness of Cambium's automated scoring engine for English Language Learners. Automated essay scoring is an increasingly common method for scoring short answers and essays in K-12 assessments. Though automated scoring engines perform well in general, they are susceptible to producing biased scores that undermine fairness in testing. Nikole worked with her colleagues to define and implement a validation approach for investigating score bias across the entire automated scoring engine. She is currently putting together a proposal to present this project at the annual meeting of the National Council on Measurement in Education (NCME), as well as a paper for publication.
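One common building block in fairness investigations of this kind is comparing engine scores to human scores within examinee subgroups. The sketch below illustrates that idea only; it is not Cambium's actual validation approach, and the scores are hypothetical. It computes a standardized mean difference (SMD) between engine and human scores for a subgroup, a statistic often used to flag potential score bias (a published rule of thumb flags absolute values above roughly 0.15).

```python
import statistics

def standardized_mean_difference(engine_scores, human_scores):
    """Standardized mean difference between engine and human scores.

    Values near 0 suggest the engine neither over- nor under-scores the
    subgroup relative to human raters; large absolute values flag
    potential bias worth investigating further.
    """
    mean_diff = statistics.mean(engine_scores) - statistics.mean(human_scores)
    # Pool the two standard deviations to express the gap on a common scale.
    pooled_sd = ((statistics.variance(engine_scores)
                  + statistics.variance(human_scores)) / 2) ** 0.5
    return mean_diff / pooled_sd

# Hypothetical rubric scores for one subgroup (e.g., English Language Learners).
ell_engine = [2, 3, 3, 2, 4, 3, 2, 3]
ell_human = [3, 3, 4, 2, 4, 3, 3, 3]
print(f"ELL SMD: {standardized_mean_difference(ell_engine, ell_human):+.3f}")
```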

Beth Perkins

Beth Perkins, a fourth-year student in the Assessment and Measurement PhD program, completed a summer internship at the National Board of Medical Examiners (NBME). Below she reflects on her internship experience.

This past summer, I had a remote internship at the National Board of Medical Examiners, located in Philadelphia, PA. The NBME's internship mainly focuses on completing a research project, but I was also able to learn a lot about their test development, administration, scoring, and reporting processes. I was one of four interns, which allowed me to connect with graduate students from other programs. Everyone involved with the internship was so welcoming, knowledgeable, and encouraging. I felt like part of the team even though I wasn't working in the same physical space as everyone else.

I had two project mentors, Jerusha Henderek, Ph.D., and Thai Ong, Ph.D. Jerusha and Thai are both alumni of the Assessment and Measurement program here at JMU. They were both so supportive and willing to answer any questions that I had! During the 8-week internship, I (in conjunction with Jerusha and Thai) examined the variability in the time it takes raters to grade patient notes from the Step 2 Clinical Skills Exam (a high-stakes performance assessment). Grading time could vary substantially across raters who are not scoring in a centralized location. Thus, understanding whether grading time varies, and which factors influence it, can inform the rater assignment process. We used multilevel modeling to examine variability in grading time of patient notes, contributing to the limited research in this area. Differences in grading time were found for rater specialty and gender, as well as for the presence of physical exam findings in the case. Two raters grading a similar number of notes could spend substantially different amounts of time over the course of a year depending on the presence of physical exam findings in the cases. Considering note characteristics when assigning patient notes to raters could make the total time spent rating more consistent across raters.
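For readers unfamiliar with the technique, below is a minimal sketch of a multilevel (mixed-effects) model of grading time, with notes nested within raters. The variable names and data are hypothetical, not NBME's, and the actual study's model specification is not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per graded patient note (log seconds to grade).
df = pd.DataFrame({
    "rater_id": [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 4],
    "log_time": [4.1, 4.3, 3.9, 4.2, 5.0, 5.2, 4.8, 5.1,
                 4.4, 4.6, 4.2, 4.5, 3.8, 4.0, 3.7, 3.9],
    "specialty": ["surgery"] * 8 + ["internal"] * 8,
    "phys_exam": [1, 0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 1],
})

# Random intercepts for raters capture between-rater variability in grading
# time; fixed effects test whether rater specialty and the presence of
# physical exam findings in the case predict how long a note takes to grade.
model = smf.mixedlm("log_time ~ specialty + phys_exam",
                    data=df, groups=df["rater_id"])
result = model.fit()
print(result.summary())
```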

During this internship I strengthened existing skills and learned new techniques. I had to teach myself about a specific multilevel model of which I had only introductory knowledge. I enjoyed collaborating with Jerusha and Thai on how best to present the results of the study and on writing a proposal to present the work at a national professional conference. Every opportunity I have had to write and present research with different people has revealed so much about my strengths and the areas I need to improve. I'm looking forward to continuing our work on this project as we write a full manuscript, which we hope to submit for publication. The internship gave me the opportunity to interact with many professionals in the field and to see the wide range of opportunities that will be available after graduate school!

Paulius Satkus

Paulius Satkus, a second-year student in the Assessment and Measurement PhD program, completed a summer internship at the American Board of Surgery (ABS). Below he reflects on his internship experience.

In June 2020, I had the pleasure of joining the American Board of Surgery (ABS) as the psychometrics intern. ABS is a non-profit organization responsible for administering two certification tests for licensed surgeons who seek to become board-certified. The psychometrics team consists of two professional psychometricians (both graduates of JMU's Assessment & Measurement Ph.D. program) and an experienced data analyst. My role on this team was to conduct a research study related to automated test assembly (ATA). Specifically, ABS had a research question regarding the use of item discrimination indices with ATA models to improve the classification accuracy of their extensive medical knowledge test. During the 10-week internship, I reviewed the literature on ATA and carried out a simulation study examining the performance of several ATA models. I presented the results of the study to the psychometrics team at the end of the internship program, and we hope to present our findings at a national conference in the upcoming year.
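To give a flavor of the underlying idea (operational ATA is usually formulated as a mixed-integer program with many content constraints; this toy sketch keeps only a test-length constraint and is not ABS's actual model or simulation), the example below shows why item discrimination matters for classification accuracy: under the 2PL model, an item's Fisher information at the cut score grows with the square of its discrimination parameter, so selecting items informative near the cut sharpens pass/fail decisions. The item pool and parameters are hypothetical.

```python
import math

def info_2pl(a, b, theta):
    """Fisher information of a 2PL item at ability level theta."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

def greedy_assemble(pool, test_length, cut_score):
    """Pick the items most informative at the cut score.

    Operational ATA engines solve a constrained optimization problem;
    this toy version keeps only a length constraint to show the role of
    discrimination: information scales with a**2 near the cut, which is
    what sharpens pass/fail classification.
    """
    ranked = sorted(pool,
                    key=lambda item: info_2pl(item["a"], item["b"], cut_score),
                    reverse=True)
    return ranked[:test_length]

# Hypothetical item pool with 2PL discrimination (a) and difficulty (b).
pool = [
    {"id": "i1", "a": 1.4, "b": 0.1},
    {"id": "i2", "a": 0.6, "b": 0.0},
    {"id": "i3", "a": 1.1, "b": -0.2},
    {"id": "i4", "a": 0.9, "b": 1.5},
    {"id": "i5", "a": 1.3, "b": 0.3},
]
form = greedy_assemble(pool, test_length=3, cut_score=0.0)
print("Assembled form:", [item["id"] for item in form])
```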

The internship experience was rewarding to me in a few ways. First, I was able to directly apply the knowledge and skills that I developed in my graduate courses and experiences in a real-world setting. Being part of a testing organization for the summer gave me an opportunity to reflect on my experiences in the Assessment & Measurement Ph.D. program and to develop plans to deepen my knowledge in several areas while I'm still in graduate school. Second, it was rewarding to observe operational psychometric work and the types of daily activities and challenges involved. Moreover, I learned more about the standards and expectations of the profession. The summer internship solidified my professional and career goals for the future.
