Job Details

Student Researcher [Seed Multimodality & World Model - RL + Streaming Video Understanding] - 2026 Start (PhD)

TikTok
San Jose, United States
Internship, IT/Technology, English

Job description:

About the team

The Seed Multimodal Interaction and World Model team is dedicated to developing models with human-level multimodal understanding and interaction capabilities. The team also aims to advance the exploration and development of multimodal assistant products. We are looking for talented individuals to join us for an internship in 2026. PhD internships at ByteDance give students the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts. Applications will be reviewed on a rolling basis - we encourage you to apply early. Please state your availability clearly in your resume (start date, end date).

Responsibilities

This role focuses on building real-time multimodal LLM-based agents for streaming video tasks, tackling unique challenges in designing novel model architectures, scalable data pipelines, and reinforcement learning algorithms for streaming interactions.
- Conduct research on streaming video understanding, especially for first-person or long-horizon applications, where the agent must continuously observe, interpret, and act.
- Apply reinforcement learning to improve the real-time perception and planning capabilities of streaming agents, including learning from human feedback, demonstrations, and/or verifiable rewards.
- Build or enhance scalable data pipelines that convert offline video datasets into streaming-compatible formats, enabling the development of new agent capabilities.
- Design and evaluate video agents that integrate LLMs/VLMs with decision-making components for downstream applications (e.g., tool use, retrieval, resolution switching).

Candidate requirements:

Minimum Qualifications:
- Currently pursuing a PhD in Computer Vision, Machine Learning, or a related field.
- Research experience in video generation, world models, or dynamics modeling.
- First-author publications in CVPR, ICCV, ECCV, NeurIPS, ICLR, or ICML.
- Research experience in one or more of the following areas:
  - Streaming video understanding, online video processing, or sequential decision making from continuous visual inputs.
  - Reinforcement learning (RL), especially combined with LLMs or multimodal models (e.g., decision-making with VLMs, generative agents, action planning).
  - Data engineering, such as synthetic data generation, prompt engineering, and scalable data pipeline curation.

Preferred Qualifications:
- Strong software engineering skills and the ability to work within existing infrastructure (e.g., PyTorch, distributed training frameworks).
- Familiarity with streaming video processing in multimodal LLMs.
- Experience working with RL for LLMs or multimodal LLMs.
- Experience working with large-scale data pipelines, including multimodal dataset processing and task-specific synthetic data generation.

Source: Company website
Posted: 25 Sep 2025 (verified 15 Dec 2025)
Job type: Internship
Sector: Internet / New Media
Languages: English