| 65 Views |
0 Applicants |
Job description:
About the team

As a core member of our Seed Global Data Team, you'll be at the heart of our operations. Gain first-hand experience with the intricacies of training Large Language Models (LLMs) on diverse data sets. As a project intern, you will have the opportunity to engage in impactful short-term projects that give you a glimpse of real-world professional experience. You will gain practical skills through on-the-job learning in a fast-paced work environment and develop a deeper understanding of your career interests.

Applications will be reviewed on a rolling basis - we encourage you to apply early. Successful candidates must be able to commit to an internship period of at least 3 months.

Your role will involve:
1. Helping to conduct research on the latest developments in AI safety across academia and industry, and supporting the identification of limitations in existing evaluation paradigms.
2. Aiding in the design and continuous refinement of safety evaluation frameworks for multimodal models to assess safety-related behaviors, failure modes, and alignment with responsible AI principles.
3. Supporting projects to enforce safety training or evaluate safety data, which may include data analysis to uncover insights that inform model iteration and product design improvements.

Please note that this role may involve exposure to potentially harmful or sensitive content, either as a core function, through ad hoc project participation, or via escalated cases. This may include, but is not limited to, text, images, or videos depicting:
- Hate speech or harassment
- Self-harm or suicide-related content
- Violence or cruelty
- Child safety

Support resources and resilience training will be provided to support employee well-being.
Candidate profile:
Minimum qualifications:
1. Currently pursuing a Bachelor's or Master's degree in AI policy, Computer Science, Engineering, Journalism, International Relations, Law, Regional Studies, or a related discipline.
2. Strong analytical skills, with the ability to interpret both qualitative and quantitative data and translate them into clear insights.
3. A creative problem-solving mindset, with comfort working under ambiguity and leveraging tools and technology to improve processes and outputs.

Preferred qualifications:
1. Experience in AI safety, Trust & Safety, risk consulting, or risk management is highly desirable.
2. A growth mindset, with genuine receptiveness to and enthusiasm for continuous learning; readiness to actively solicit and apply constructive feedback. Intellectually curious, self-motivated, detail-oriented, and team-oriented.
3. A deep interest in emerging technologies, user behavior, and the human impact of AI systems. Enthusiasm for learning from real-world case studies and applying insights in a high-impact setting.

By submitting an application for this role, you accept and agree to our global applicant privacy policy, which may be accessed here: https://jobs.bytedance.com/en/legal/privacy

If you have any questions, please reach out to us at apac-earlycareers@bytedance.com
| Source: | Company website |
| Posted: | 18 Dec 2025 (verified 27 Dec 2025) |
| Job type: | Internship |
| Sector: | Internet / New Media |
| Duration: | 3 months |
| Languages: | English |