Sarah Alahmadi, Ph.D.
Psychometrician
Sarah has diverse assessment consulting experience spanning projects to develop and maintain psychometrically sound assessment programs, including program effectiveness studies, item writing, statistical and psychometric analyses, and the facilitation of job task analysis and standard-setting workshops.
She received her Ph.D. in Assessment and Measurement from James Madison University, her M.S. in Experimental Psychology from Villanova University, and her B.A. in Psychology from the University of Virginia. With a background that combines the sciences of psychology, statistics, and measurement, Sarah strives to produce accessible research that offers practical solutions for assessment and testing programs and their stakeholders.
Her research, presented at national and international conferences and published in peer-reviewed journals, answers questions relevant to both high- and low-stakes assessments: What are the best existing methods for improving the accuracy of equating in the presence of item drift? What impact does low examinee test-taking effort have on the trustworthiness of policy and psychometric decisions, and how can it be counteracted? Are there interdisciplinary theories we can use to enhance our standard-setting approaches? Sarah is committed to staying at the forefront of the field, continually seeking out new knowledge and innovations in assessment practices and psychometrics.
Outside of her professional life, Sarah likes to immerse herself in all forms of art, from literary works to painting and music. Her favorite way to end a long and productive workday is a boxing workout. A self-proclaimed “foodie,” she is always looking for the latest culinary hotspot in town or whipping up new recipes from different world cuisines herself.
For a copy of Dr. Alahmadi’s resume, click here.
Dr. Alahmadi’s Selected Scholarly Research
Selected Publications
Alahmadi, S., & DeMars, C. E. (2025). From item estimates to test operations: The cascading effect of rapid guessing. Journal of Educational Measurement. https://doi.org/10.1111/jedm.70010
Alahmadi, S., & DeMars, C. E. (2024). Comparing examinee-based and response-based motivation filtering methods in remote, low-stakes testing. Applied Measurement in Education, 37(1), 43-56. https://doi.org/10.1080/08957347.2024.2311927
Alahmadi, S., Jones, A. T., Barry, C. L., & Ibáñez, B. (2023). Comparing drift detection methods for accurate Rasch equating in different sample sizes. Applied Measurement in Education, 36(2), 157-170. https://doi.org/10.1080/08957347.2023.2201704
Alahmadi, S., & DeMars, C. E. (2022). Large-scale assessment during a pandemic: Results from James Madison University’s Assessment Day. Research & Practice in Assessment, 17(1), 5-15.
Finney, S. J., Gilmore, G. R., & Alahmadi, S. (2021). “What’s a good measure of that outcome?” Resources to find existing and psychometrically sound measures. Research & Practice in Assessment, 16(2), 46-58.
Selected Presentations
Alahmadi, S., & Buckendahl, C. W. (2024, April). Priors and evidence: Introducing Bayesian reasoning to standard setting. Poster presented at the annual meeting of the National Council on Measurement in Education (NCME), Philadelphia, PA.
Alahmadi, S. (2023, September). Rethinking precedents and reimagining norms in current assessment practice. Invited presentation for the Center for Assessment and Research Studies (CARS) at James Madison University, Harrisonburg, VA.
Alahmadi, S., Goodman, J. T., Jackson, J., Li, X., & Runyon, C. R. (2023, April). In B. C. Leventhal (moderator), Internships in the measurement profession: A discussion among organizers, mentors, and students. Panel presented at the annual meeting of NCME, Chicago, IL.
Alahmadi, S., & DeMars, C. E. (2022, October). What if we ignore non-effortful responses: The impact of rapid guessing on item parameter estimates. Poster presented at the annual meeting of the Northeastern Educational Research Association (NERA), Trumbull, CT.
Alahmadi, S. (2022, March). Using process data for motivation filtering in remote, low-stakes assessment. Award-winning research presented at the Association of Test Publishers’ annual Innovation in Testing conference, Orlando, FL.