Rise Data Labs builds high-fidelity data services to power AI systems. We partner with leading research teams and industry stakeholders to improve model quality through human oversight, evaluation, and annotation. Our work spans image, language, and multimodal systems, with a focus on real-world applicability, trust, and safety.
As an Image Evaluator, you will join a project supporting an AI research initiative. Your role involves:
• Reviewing and assessing outputs generated by AI systems.
• Judging how well outputs align with given prompts and established quality standards.
• Using guidelines and rubrics to provide ratings and structured feedback.
• Helping refine evaluation criteria over time as part of the research collaboration.
What we're looking for:
• Comfort judging visual content, with a good eye for detail, quality, and alignment.
• Ability to follow guidelines and apply them consistently.
• Reliability, clear communication, and attention to detail.
• Curiosity about AI and enthusiasm for contributing to its improvement.
• A stable internet connection and a computer capable of handling visual tasks.
• Ability to meet productivity and quality thresholds.
• Familiarity or comfort with AI/generative models, visual content, or creative/graphic domains (helpful, not required).
• Willingness to work under confidentiality agreements and evolving guidelines.
Preferred qualifications:
• Prior experience working with data vendors such as Scale AI, Surge AI, Sama, Labelbox, Appen, or Lionbridge.
• Background in content evaluation, annotation, or quality assurance for AI/ML projects.
• Exposure to research-oriented environments or human-in-the-loop workflows.
• Ability to adapt quickly to changing rubrics and instructions.
What we offer:
• Direct collaboration with an AI research team shaping cutting-edge models.
• Meaningful projects with real impact in the AI space.
• Flexible remote work arrangements.
• Experience in evaluation processes central to the future of AI.
Ready to contribute your expertise to this project? Apply today, and you'll be guided through our expert onboarding process.
We are seeking an expert to evaluate and improve our AI models through comprehensive testing and analysis. You will be responsible for designing evaluation frameworks, conducting model assessments, and providing actionable insights for model improvement.
Key Responsibilities:
• Design and implement evaluation metrics for AI models
• Conduct thorough testing of model performance across different scenarios
• Analyze model outputs for bias, fairness, and accuracy
• Collaborate with ML engineers to implement improvements
• Document findings and recommendations
Ideal Candidate:
• PhD or Master's in Computer Science, ML, or a related field
• 5+ years of experience in AI/ML model evaluation
• Strong background in statistical analysis
• Experience with evaluation frameworks and metrics
• Excellent communication skills
We need legal experts to help train our AI systems for legal document analysis. This role involves reviewing, categorizing, and annotating legal documents to create training data for our legal AI platform.
Responsibilities:
• Review and categorize legal documents
• Identify key legal concepts and entities
• Annotate contracts and legal texts
• Ensure accuracy and legal compliance
• Provide feedback on AI model outputs
Qualifications:
• JD from an accredited law school
• 3+ years practicing law
• Experience with contract review
• Understanding of legal AI applications
• Strong analytical skills
Help us build the next generation of financial risk assessment AI. You will work with financial data to train and validate AI models for risk prediction and analysis.
Key Tasks:
• Analyze financial datasets for risk patterns
• Label and categorize financial transactions
• Validate AI model predictions
• Identify edge cases and anomalies
• Contribute to model improvement strategies
Requirements:
• CFA, FRM, or a similar financial qualification
• 5+ years in financial risk management
• Experience with quantitative analysis
• Knowledge of regulatory requirements
• Strong analytical and communication skills