Job details

Student Researcher - (Seed Infra-Compiler) - 2026 Start (PhD)

TikTok
San Jose, United States
Internship, IT/Technology, English

Job description:

About the Team
The Seed Infrastructures team oversees the distributed training, reinforcement learning framework, high-performance inference, and heterogeneous hardware compilation technologies for AI foundation models. We are looking for talented individuals to join us for an internship in 2026. PhD internships at our company provide students with the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis - we encourage you to apply early. Please state your availability clearly in your resume (start date, end date).

Responsibilities
- Contribute to AI compiler optimizations for training and inference workloads
- Develop and extend MLIR-based compiler passes for graph lowering, optimization, and code generation
- Optimize model execution on GPU and NPU accelerators, focusing on performance, memory efficiency, and scalability
- Support model deployment pipelines, including compilation, packaging, and runtime integration
- Assist with distributed training and inference acceleration, such as parallel execution, communication optimization, and runtime scheduling
- Benchmark, profile, and analyze the performance of large-scale models across different hardware backends
- Collaborate with researchers and engineers to translate model and system requirements into compiler and runtime improvements

Candidate profile:

Minimum Qualifications
- Currently pursuing a PhD degree in Computer Science, Electrical Engineering, or a related technical field
- Experience using or developing open-source frameworks for LLM inference, such as vLLM or SGLang
- Proficiency in at least one deep learning framework (e.g., PyTorch, Megatron, DeepSpeed, JAX), with experience in model inference workflows
- Understanding of modern computing systems, including hardware, storage, and networking, and how they impact ML workloads
- Familiarity with compilers or model optimization pipelines (e.g., PyTorch Dynamo), or related model execution workflows
- Able to commit to working for 12 weeks in 2026

Preferred Qualifications
- Experience with distributed or large-scale ML systems, including training or inference pipelines and related optimizations (e.g., FSDP, DeepSpeed, Megatron, GSPMD)
- Experience with GPU/TPU/NPU programming and performance optimization, or high-performance computing and communication (e.g., CUDA, Triton, NCCL, RDMA)
- Understanding of AI compiler and model optimization stacks (e.g., torch.fx, PyTorch Dynamo, XLA, MLIR)

Source: Company website
Published: 11 Apr 2026 (verified 15 Apr 2026)
Job type: Internship
Sector: Internet / New Media
Languages: English