Post-bacc

Project
PREP0003674
Overview

This project focuses on using Large Language Models (LLMs) to annotate evaluation data (i.e., LLM as judge) and on the design of an Inter-Annotator Agreement study to assess the reliability of both human and LLM annotations. The candidate will explore how to define indicators of a given AI-related risk, how to identify them, and how to provide annotators with examples for marking the presence of various risks. The project aims to develop an annotation framework for AI risk assessment and to establish metrics for data quality in AI risk research, supporting broader NIST work on assessing and measuring the validity and reliability of AI-related risks in data annotation.
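
For illustration, agreement between a human annotator and an LLM judge on the same items is often quantified with a chance-corrected statistic such as Cohen's kappa. The short Python sketch below is a minimal example of that computation; the label set and data are hypothetical and not part of the project.

    from collections import Counter

    def cohens_kappa(labels_a, labels_b):
        """Cohen's kappa for two annotators labeling the same items (nominal labels)."""
        assert len(labels_a) == len(labels_b)
        n = len(labels_a)
        # Observed agreement: fraction of items both annotators labeled identically.
        p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
        # Expected chance agreement, from each annotator's marginal label distribution.
        freq_a, freq_b = Counter(labels_a), Counter(labels_b)
        p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
        return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

    # Hypothetical labels: presence or absence of a given AI-related risk indicator.
    human_labels = ["risk", "no_risk", "risk", "risk", "no_risk", "no_risk"]
    llm_labels = ["risk", "no_risk", "no_risk", "risk", "no_risk", "risk"]
    print(f"Cohen's kappa (human vs. LLM judge): {cohens_kappa(human_labels, llm_labels):.2f}")

Values near 1 indicate agreement well beyond chance, while values near 0 indicate agreement no better than chance; a study with more than two annotators would typically use a multi-rater statistic such as Fleiss' kappa or Krippendorff's alpha.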

Reliability of Human and LLM Annotations for AI Risk Assessment

Qualifications
  • Background in Computer Science, Data Science, or a related field
  • Education level: undergraduate or graduate student
  • Strong interest in data annotation and AI risks
  • Familiarity with scientific reading and technical writing

Research Proposal

Key Responsibilities

  • Gain familiarity with existing literature on data annotation and the LLM-as-judge approach
  • Understand NIST’s role and ongoing efforts in assessing and measuring the validity and reliability of AI-related risks in data annotation
  • Contribute to developing an annotation framework for AI risk assessment
  • Collaborate effectively with cross-functional and interdisciplinary stakeholders to ensure successful project outcomes

Deliverables

  • Contributions to a NIST report that supports ongoing NIST AI evaluation efforts focused on the design of an Inter-Annotator Agreement study to assess the reliability of both human and LLM annotations.
NIST Sponsor
Mark A. Przybocki
Group
Information Access - HQ
Schedule of Appointment
Part time
Start Date
Sponsor email
Work Location
Onsite NIST
Salary / Hourly rate (Max)
$30.00
Total Hours per week
20
End Date