Odixcity Consulting

LLM Evaluator

Software & Data

2 weeks ago

Job summary

We are seeking a detail-oriented and analytical LLM Evaluator to assess, analyze, and improve the performance of large language models (LLMs). In this role, you will evaluate AI-generated content for accuracy, coherence, factual reliability, bias, safety, and alignment with defined guidelines.

Min Qualification: Degree
Experience Level: Mid level
Experience Length: 4 years

Job description & requirements

Responsibilities:

  • Evaluate and rank model-generated text against complex rubrics covering dimensions such as factuality, coherence, safety, instruction-following, and creativity.
  • Review multiple model responses to the same prompt and determine which output a human would prefer, providing justifications for your choices.
  • Provide clear, concise feedback to the modeling and training teams regarding recurring failure modes observed during evaluation sessions.
  • Attempt to “break” the model by crafting prompts designed to elicit biased, harmful, or insecure outputs to help patch safety vulnerabilities.
  • Collaborate with the quality assurance team to suggest improvements to evaluation guidelines when you encounter ambiguous or unclassifiable edge cases.
  • Participate in regular “cross-checking” sessions with other evaluators to calibrate scoring standards and ensure inter-rater reliability across the global team.
  • When a model underperforms, dig deeper than the surface score to hypothesize “why” the model made a specific error (e.g., training data vs. prompt misinterpretation).
  • Identify and flag novel or unexpected model behaviors to the research team, contributing to a living library of unique model outputs and failure modes.
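The pairwise-comparison workflow described above can be sketched as a small data model. The rubric dimensions, 1–5 scale, and helper names below are illustrative assumptions, not the actual rubric used on the job:

```python
from dataclasses import dataclass

@dataclass
class RubricScore:
    """One evaluator's scores for a single model response (hypothetical 1-5 scale)."""
    factuality: int
    coherence: int
    safety: int
    instruction_following: int
    justification: str = ""  # the written rationale reviewers are asked to provide

def total(score: RubricScore) -> int:
    """Sum the rubric dimensions into a single comparable number."""
    return (score.factuality + score.coherence
            + score.safety + score.instruction_following)

def prefer(score_a: RubricScore, score_b: RubricScore) -> str:
    """Naive pairwise preference: higher total wins, ties default to 'A'."""
    return "A" if total(score_a) >= total(score_b) else "B"
```

In practice, preference judgments are rarely a plain sum (a safety violation usually vetoes an otherwise strong response), but a structure like this keeps scores and written justifications attached to each comparison for downstream RLHF data collection.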


Requirements:

  • Minimum of 4 years of professional experience in a relevant field such as Computational Linguistics, Data Analysis, Technical Writing, Quality Assurance (specifically for NLP/AI), or Cognitive Science.
  • Bachelor’s degree in Computer Science or a related field.
  • Deep understanding of how to craft prompts to elicit specific behaviors and test model limits.
  • Ability to look at a text output and explain “why” it is “good” or “bad” based on logic, tone, factuality, and instruction adherence.
  • Experience working with Reinforcement Learning from Human Feedback (RLHF) data collection.
  • Proven experience monitoring and improving consistency across evaluation teams, including analyzing inter-annotator agreement (IAA) scores and running calibration sessions to align judgments.
  • Experience sourcing, cleaning, and annotating datasets specifically for fine-tuning or evaluating LLMs. Understanding of data distribution and its impact on model performance.
  • Familiarity with A/B testing concepts applied to AI. Ability to help design experiments to test if a new model version is truly “better” than the previous one.
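One common metric behind the inter-annotator agreement requirement is Cohen’s kappa, which corrects raw agreement between two raters for the agreement expected by chance. A minimal sketch, assuming two raters labeling the same set of outputs:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items.

    kappa = (p_observed - p_expected) / (1 - p_expected)
    """
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items given identical labels.
    p_obs = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_exp = sum((counts_a[lbl] / n) * (counts_b[lbl] / n)
                for lbl in set(rater_a) | set(rater_b))
    if p_exp == 1:  # degenerate case: both raters use a single identical label
        return 1.0
    return (p_obs - p_exp) / (1 - p_exp)
```

Values near 1 indicate strong agreement, while low values are typically the trigger for a calibration session. Production pipelines would normally use a library implementation such as scikit-learn’s `cohen_kappa_score` rather than hand-rolled code.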

 

Remuneration: NGN 500,000 monthly

Important safety tips

  • Do not make any payment without confirming with the Jobberman Customer Support Team.
  • If you think this advert is not genuine, please report it via the Report Job link below.


