| 6 Views |
| 0 Applicants |
Job Description
As an intern in the field of Vision-Language-Action Models, your responsibilities will be the following:
* Conduct advanced research on LLMs/VLMs/Diffusion models for autonomous driving.
* Design and implement supervised and reinforcement fine-tuning algorithms to optimize LLMs/VLMs for autonomous driving tasks.
* Collaborate with mentors and team members to refine research goals, discuss technical challenges, and explore extensions such as closed-loop fine-tuning and RL integration.
* Regularly report research progress through meetings, written updates, and technical presentations.
* Analyze experimental results, document methodologies, and summarize findings in clear and reproducible formats.
* Contribute to the preparation of research papers, technical reports, or potential submissions to top conferences.
Qualifications
Basic Qualifications
* Ph.D. student in Computer Science, Robotics, or related fields.
* Hands-on experience developing algorithms, with a focus on at least two of the following areas: multimodal foundation models, diffusion models, world models, autonomous driving, reinforcement learning, and robotic navigation or planning.
* Solid Python skills and proficiency with libraries such as PyTorch.
* Minimum GPA of 3.0.
Preferred Qualifications
* Publication record in top venues, including CVPR, ICCV, ECCV, ICLR, ICRA, and IROS.
* Familiarity with CARLA or NavSim.
* Ability to work independently, with strong research and problem-solving skills.
* Good communication and teamwork skills.
| Source: | Company website |
| Posted on: | 25 Mar 2026 (verified 28 Mar 2026) |
| Employment type: | Internship |
| Sector: | Consumer electronics |
| Languages: | English |