AI Safety & Policy Annotator
Job summary
We are looking for a highly analytical and principled AI Safety & Policy Annotator to help ensure AI systems operate within established safety and compliance guidelines. This role focuses on reviewing AI-generated outputs, identifying risks, and enforcing policy standards across training and evaluation workflows.
Job description & requirements
Responsibilities:
- Review AI-generated outputs for safety violations and harmful content
- Tag disallowed, restricted, or sensitive content according to policy guidelines
- Evaluate outputs for policy compliance and alignment
- Annotate ambiguous or edge-case scenarios requiring nuanced judgment
- Classify risk severity levels and provide structured compliance scoring
- Document findings clearly to support model improvement and safety reinforcement
Requirements:
- Bachelor’s degree in Computer Science, Information Technology, or equivalent professional experience
- 2-4 years of practical experience with AI safety policies
- Strong ethical reasoning and sound judgment
- High consistency in decision-making across repeated evaluations
- Ability to interpret nuanced or context-dependent content
- Exceptional attention to detail