Job description:
We are looking for talented individuals to join us for a Student Researcher opportunity in 2025. Student Researcher opportunities at ByteDance aim to offer students industry exposure and hands-on experience. Turn your ambitions into reality as your inspiration brings infinite opportunities at ByteDance. The Student Researcher position provides unique opportunities that go beyond the constraints of our standard internship program, allowing for flexibility in duration, time commitment, and location of work.

Candidates can apply to a maximum of two positions and will be considered for jobs in the order they apply. The application limit applies to ByteDance and its affiliates' jobs globally. Applications are reviewed on a rolling basis, so we encourage you to apply early.

Responsibilities:
- Research and develop our machine learning systems, including heterogeneous computing architecture, management, scheduling, and monitoring.
- Manage cross-layer optimization of systems, AI algorithms, and hardware for machine learning (GPU, ASIC).
- Implement both general-purpose training framework features and model-specific optimizations (e.g. LLMs, diffusion models).
- Improve efficiency and stability for extremely large-scale distributed training jobs (a minimal sketch of this kind of training step follows the list).
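The last responsibility rests on collective-communication primitives like the one below. This is a minimal sketch, not the team's actual code: it runs a single training step under PyTorch DistributedDataParallel as one gloo-backend CPU process so it stays self-contained, whereas a real job of the kind described would launch one NCCL-backed process per GPU via torchrun. The toy model, sizes, and port are illustrative assumptions.

```python
# Minimal DDP sketch: single gloo-backend process standing in for a
# multi-GPU NCCL job. Model, data, and port number are placeholders.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Real jobs get these from the launcher (torchrun); hardcoded here
    # so the example runs standalone with world_size=1.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29501")
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

    # DDP all-reduces gradients across ranks during backward.
    model = DDP(nn.Linear(8, 1))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    x, y = torch.randn(16, 8), torch.randn(16, 1)
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()  # gradient all-reduce overlaps with backward here
    opt.step()
    print(f"loss={loss.item():.4f}")

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```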
Candidate requirements:
Minimum Qualifications:
- Currently enrolled in a PhD program, with a solid grasp of distributed and parallel computing principles and knowledge of recent advances in computing, storage, networking, and hardware technologies.
- Familiar with machine learning algorithms, platforms, and frameworks such as PyTorch and JAX.
- Basic understanding of how GPUs and/or ASICs work.
- Expert in at least one or two programming languages in a Linux environment: C/C++, CUDA, Python.
- Must obtain work authorization in the country of employment at the time of hire and maintain ongoing work authorization during employment.

Preferred Qualifications (the following experience will be a big plus):
- GPU-based high-performance computing and RDMA high-performance networking (MPI, NCCL, ibverbs).
- Distributed training framework optimizations such as DeepSpeed, FSDP, Megatron, GSPMD.
- AI compiler stacks such as torch.fx, XLA, and MLIR.
- Large-scale data processing and parallel computing.
- Experience designing and operating large-scale systems in cloud computing or machine learning.
- Experience with in-depth CUDA programming and performance tuning (CUTLASS, Triton).
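To make one preferred item above concrete: torch.fx, listed under AI compiler stacks, captures a module's forward pass as a graph IR that compiler passes can then rewrite. A minimal sketch, assuming a hypothetical toy module (not from the posting):

```python
# Minimal torch.fx sketch: symbolically trace a toy module and inspect
# the captured graph IR. The Toy module is an illustrative placeholder.
import torch
import torch.fx
from torch import nn

class Toy(nn.Module):
    def forward(self, x):
        # relu then scale; fx records each op as a graph node
        return torch.relu(x) * 2.0

traced = torch.fx.symbolic_trace(Toy())
print(traced.graph)  # nodes: placeholder x -> relu -> mul -> output
print(traced.code)   # Python source regenerated from the graph
out = traced(torch.randn(3))  # the traced GraphModule is callable
```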
| Source: | Company website |
| Published: | 16 Oct 2025 (checked on 12 Dec 2025) |
| Offer type: | Internship |
| Sector: | Internet / New Media |
| Languages: | English |