Job description:
About the team

The Seed Infrastructures team oversees the distributed training, reinforcement learning framework, high-performance inference, and heterogeneous hardware compilation technologies for AI foundation models. We are looking for talented individuals to join us for an internship in 2026.

PhD internships at our company give students the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies. Our dynamic internship experience blends hands-on learning, enriching community-building and development events, and collaboration with industry experts.

Applications will be reviewed on a rolling basis; we encourage you to apply early. Please state your availability (start date and end date) clearly in your resume.

Responsibilities

As an Infrastructure Intern, you may work on one or more of the following areas:
- Design and optimize large-scale distributed training systems (e.g., data/model/pipeline parallelism, memory efficiency, fault tolerance)
- Contribute to reinforcement learning training frameworks and large-scale post-training systems
- Improve inference performance, latency, and throughput for foundation models
- Develop compiler or runtime optimizations for heterogeneous hardware (GPUs and accelerators)
- Work on system-level performance analysis, profiling, and bottleneck diagnosis
- Build tooling and automation to improve developer productivity and system reliability
Required candidate profile:
Minimum Qualifications:
- Currently pursuing a PhD in Computer Science, Electrical Engineering, or a related technical field
- Strong programming skills in Python and/or C++
- Solid understanding of systems, distributed computing, machine learning systems, or performance optimization
- Experience with one or more of the following: distributed training frameworks (e.g., PyTorch FSDP, Megatron-style parallelism); reinforcement learning training systems; GPU programming (CUDA, Triton) or compiler technologies; large-scale inference optimization; performance profiling and systems debugging
- Strong problem-solving skills and the ability to work in fast-paced, research-driven environments

Preferred Qualifications:
- Experience working on large-scale ML systems or infrastructure projects
- Contributions to open-source ML systems or performance tooling
- Publications in ML systems, distributed systems, or related areas (a plus, but not required)
| Source: | Company website |
| Posted: | 16 Mar 2026 (verified 15 Apr 2026) |
| Job type: | Internship |
| Sector: | Internet / New Media |
| Languages: | English |