Job details

Student Researcher [Seed Vision - Multimodal Joint Modeling] - 2026 Start (PhD)

TikTok
San Jose, United States
Internship, Science/Research, English

Job description:

About the team:
The Seed Vision Team focuses on foundational models for visual generation, developing multimodal generative models and carrying out leading research and application development to solve fundamental computer vision challenges in GenAI. The team's work spans researching and developing foundational models for visual generation (images and videos), ensuring high interactivity and controllability in visual generation, understanding patterns in videos, and exploring various visual-oriented tasks based on generative foundational models.

PhD internships at ByteDance provide students with the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis; we encourage you to apply early. Please state your availability clearly in your resume (start date, end date).

Responsibilities:
- Conduct research on joint training of vision, language, and video models under a unified architecture.
- Develop scalable and efficient methods for autoregressive-style multimodal pretraining, supporting both understanding and generation.
- Explore cross-modal tokenization, alignment, and shared representation strategies.
- Investigate instruction tuning, captioning, and open-ended generation capabilities across modalities.
- Contribute to system-level improvements in data curation, model optimization, and evaluation pipelines.

Required candidate profile:

Minimum Qualifications:
- Currently pursuing a PhD in Computer Vision, Machine Learning, NLP, or a related field.
- Research experience in multimodal learning, large-scale pretraining, or vision-language modeling.
- Proficiency in deep learning frameworks such as PyTorch or JAX.
- Demonstrated ability to conduct independent research, with publications in top-tier conferences such as CVPR, ICCV, ECCV, NeurIPS, ICML, or ICLR.

Preferred Qualifications:
- Experience with autoregressive LLM training, especially in multimodal or unified modeling settings.
- Familiarity with instruction tuning, vision-language generation, or unified token space design.
- Background in model scaling, efficient training, or data mixture strategies.
- Ability to work closely with infrastructure teams to deploy large-scale training workflows.

Source: Company website
Published: 09 Dec 2025 (verified 14 Dec 2025)
Position type: Internship
Sector: Internet / New Media
Languages: English