
C++ Software Engineer - AI Training

YO IT Consulting

Engineering & Technology


Job Description & Requirements


Work Mode: Remote
Engagement Type: Independent Contractor
Schedule: Full-Time or Part-Time Contract
Language Requirement: Fluent English
Role Overview
We partner with leading AI teams to improve the quality, usefulness, and reliability of general-purpose conversational AI systems.
This project focuses specifically on evaluating and improving how AI systems reason about code, generate programming solutions, and explain technical concepts across various complexity levels.
The role involves rigorous technical evaluation of AI-generated responses in coding and software engineering contexts.
What You’ll Do
Evaluate LLM-generated responses to coding and software engineering queries for accuracy, reasoning, clarity, and completeness
Conduct fact-checking using trusted public sources and authoritative references
Conduct accuracy testing by executing code and validating outputs using appropriate tools
Annotate model responses by identifying strengths, areas for improvement, and factual or conceptual inaccuracies
Assess code quality, readability, algorithmic soundness, and explanation quality
Ensure model responses align with expected conversational behavior and system guidelines
Apply consistent evaluation standards by following clear taxonomies, benchmarks, and detailed evaluation guidelines
Who You Are
You hold a BS, MS, or PhD in Computer Science or a closely related field
You have significant real-world experience in software engineering or related technical roles
You are an expert in at least one relevant programming language (e.g., Python, Java, C++, JavaScript, Go, Rust)
You are able to solve HackerRank or LeetCode Medium and Hard-level problems independently
You have experience contributing to well-known open-source projects, including merged pull requests
You have significant experience using LLMs while coding and understand their strengths and failure modes
You have strong attention to detail and are comfortable evaluating complex technical reasoning and identifying subtle bugs or logical flaws
Nice-to-Have Specialties
Prior experience with RLHF, model evaluation, or data annotation work
Track record in competitive programming
Experience reviewing code in production environments
Familiarity with multiple programming paradigms or ecosystems
Experience explaining complex technical concepts to non-expert audiences
What Success Looks Like
You identify incorrect logic, inefficiencies, edge cases, or misleading explanations in model-generated code, technical concepts, and system design discussions
Your feedback improves the correctness, robustness, and clarity of AI coding outputs
You deliver reproducible evaluation artifacts that strengthen model performance
