
Student Researcher [Seed Vision - World Models for Video Generation] - 2026 Start (PhD)

TikTok
San Jose, United States
Internship, Science/Research, English

Job description:

About the team
The Seed Vision Team focuses on foundational models for visual generation, developing multimodal generative models and carrying out leading research and application development to solve fundamental computer vision challenges in GenAI. The team researches and develops foundational models for visual generation (images and videos), ensures high interactivity and controllability in visual generation, studies patterns in videos, and explores various visual-oriented tasks based on generative foundational models.

PhD internships at ByteDance give students the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies. Our internship experience blends hands-on learning, community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis; we encourage you to apply early. Please state your availability clearly in your resume (start date, end date).

Responsibilities
- Develop world models that simulate physical, embodied, or abstract environments through video generation.
- Learn latent dynamics from video sequences that support controllable or goal-conditioned generation.
- Model spatial-temporal consistency, causality, and state transitions in realistic or synthetic video datasets.
- Explore integration of control signals (e.g., actions, prompts, scene graphs) into video generation pipelines.
- Build benchmarks and analysis tools to measure dynamics understanding and causal consistency.

Candidate requirements:

Minimum Qualifications:
- Currently pursuing a PhD in Computer Vision, Machine Learning, or a related field.
- Research experience in video generation, world models, or dynamics modeling.
- First-author publications in CVPR, ICCV, ECCV, NeurIPS, ICLR, or ICML.
- Familiarity with generative frameworks (e.g., VAE, diffusion, transformers) and temporal modeling.

Preferred Qualifications:
- Experience with model-based RL, embodied video prediction, or interactive generative agents.
- Understanding of structured representations (e.g., latent state, object-centric video models).
- Familiarity with datasets such as Physion, CATER, or RLBench for grounded video modeling.

Source: Company website
Posted on: 08 Dec 2025 (verified on 14 Dec 2025)
Employment type: Internship
Sector: Internet / New Media
Languages: English