Description:
About the team:
The Seed Vision Team focuses on foundational models for visual generation, developing multimodal generative models and carrying out leading research and application development to solve fundamental computer vision challenges in GenAI. The team researches and develops foundational models for visual generation (images and videos), ensures high interactivity and controllability in visual generation, understands patterns in videos, and explores various visual-oriented tasks built on generative foundational models.

PhD internships at ByteDance give students the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis - we encourage you to apply early. Please state your availability clearly in your resume (start date, end date).

Responsibilities:
- Develop world models that simulate physical, embodied, or abstract environments through video generation.
- Learn latent dynamics from video sequences that support controllable or goal-conditioned generation.
- Model spatial-temporal consistency, causality, and state transitions in realistic or synthetic video datasets.
- Explore integration of control signals (e.g., actions, prompts, scene graphs) into video generation pipelines.
- Build benchmarks and analysis tools to measure dynamics understanding and causal consistency.
Your profile:
Minimum qualifications:
- Currently pursuing a PhD in Computer Vision, Machine Learning, or a related field.
- Research experience in video generation, world models, or dynamics modeling.
- First-author publications in CVPR, ICCV, ECCV, NeurIPS, ICLR, or ICML.
- Familiarity with generative frameworks (e.g., VAE, diffusion, transformers) and temporal modeling.

Preferred qualifications:
- Experience with model-based RL, embodied video prediction, or interactive generative agents.
- Understanding of structured representations (e.g., latent state, object-centric video models).
- Familiarity with datasets such as Physion, CATER, or RLBench for grounded video modeling.
| Source: | Company website |
| Date: | 08 Dec 2025 (verified on 09 Dec 2025) |
| Job type: | Internship |
| Field: | Internet / New Media |
| Language skills: | English |