Student Researcher [Seed Vision - Multimodal Video Generation] - 2026 Start (PhD)

TikTok
San Jose, United States
Internship, Science/Research, English
Job Description:

About the team
The Seed Vision Team focuses on foundational models for visual generation, developing multimodal generative models and carrying out leading research and application development to solve fundamental computer vision challenges in GenAI. The team's work spans researching and developing foundational models for visual generation (images and videos), ensuring high interactivity and controllability in visual generation, understanding patterns in videos, and exploring various visual-oriented tasks based on generative foundational models.

We are looking for talented individuals to join us for an internship in 2026. PhD internships at ByteDance give students the opportunity to actively contribute to our products and research, as well as to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis - we encourage you to apply early. Please state your availability (start date and end date) clearly in your resume.

Responsibilities
- Conduct research on multimodal video generation, with a focus on improving semantic alignment between inputs and generated content.
- Integrate vision-language models (e.g., CLIP, pre/post-trained VLMs) into video generation architectures to enhance input understanding.
- Explore and implement joint training or fine-tuning approaches that couple VLMs with video generation backbones.
- Evaluate model performance on tasks requiring high-level reasoning or detailed semantic control over generation.
- Collaborate with researchers and engineers to iterate on prototypes within an existing infrastructure.

Candidate Requirements:

Minimum Qualifications:
- Currently pursuing a PhD in Computer Vision, Machine Learning, or a related field.
- Research experience in one or more of the following areas: vision-language models (VLMs); multimodal or joint model training; video generation.
- Solid coding ability and a clean research implementation style; expected to work with a production-grade codebase (e.g., PyTorch).
- Demonstrated research ability, with first-author publications in top-tier ML/CV/AI conferences such as CVPR, ICCV, ECCV, and ICLR.

Preferred Qualifications:
- Experience in training or fine-tuning autoregressive or diffusion-based video generation models.
- Background in multimodal instruction-following, alignment, or conditioning for generation tasks.
- Understanding of evaluation techniques for assessing semantic consistency in generated video.

Source: Company website
Posted on: 07 Dec 2025  (verified 15 Dec 2025)
Type of offer: Internship
Industry: Internet / New Media
Languages: English