Job description:
About the team

The Seed Vision Team focuses on foundational models for visual generation, developing multimodal generative models and carrying out leading research and application development to solve fundamental computer vision challenges in GenAI. The team's work spans researching and developing foundational models for visual generation (images and videos), ensuring high interactivity and controllability in visual generation, understanding patterns in videos, and exploring various visual-oriented tasks based on generative foundational models.

PhD internships at ByteDance give students the opportunity to contribute actively to our products and research, and to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications are reviewed on a rolling basis, so we encourage you to apply early. Please state your availability clearly in your resume (start date, end date).

Responsibilities:
- Conduct research on joint training of vision, language, and video models under a unified architecture.
- Develop scalable and efficient methods for autoregressive-style multimodal pretraining, supporting both understanding and generation.
- Explore cross-modal tokenization, alignment, and shared representation strategies.
- Investigate instruction tuning, captioning, and open-ended generation capabilities across modalities.
- Contribute to system-level improvements in data curation, model optimization, and evaluation pipelines.
Candidate requirements:
Minimum qualifications:
- Currently pursuing a PhD in Computer Vision, Machine Learning, NLP, or a related field.
- Research experience in multimodal learning, large-scale pretraining, or vision-language modeling.
- Proficiency in deep learning frameworks such as PyTorch or JAX.
- Demonstrated ability to conduct independent research, with publications in top-tier conferences such as CVPR, ICCV, ECCV, NeurIPS, ICML, or ICLR.

Preferred qualifications:
- Experience with autoregressive LLM training, especially in multimodal or unified modeling settings.
- Familiarity with instruction tuning, vision-language generation, or unified token space design.
- Background in model scaling, efficient training, or data mixture strategies.
- Ability to work closely with infrastructure teams to deploy large-scale training workflows.
| Source: | Company website |
| Posted: | 09 Dec 2025 (verified 14 Dec 2025) |
| Employment type: | Internship |
| Industry: | Internet / New Media |
| Languages: | English |