| 97 Views | 0 Applicants |
Job description:
About the team: Nestled within the Flow AI Organization, the Security AI team explores cutting-edge technology to enhance the security of Large Language Models (LLMs) and of the applications serving the company's global products. Tasked with building, implementing, and maintaining secure frameworks, the team pioneers a new frontier in AI security research and invites talented individuals with a background in AI to join through the student researcher program. Through collaboration and commitment, the team ensures a safe and secure digital experience for users worldwide while offering a stimulating environment for LLM enthusiasts to thrive and shape the future of AI security.

We are looking for talented individuals to join us for an internship in 2026. PhD internships at our company give students the opportunity to actively contribute to our products and research, and to the organization's future plans and emerging technologies. Our internship experience blends hands-on learning, community-building and development events, and collaboration with industry experts. Applications are reviewed on a rolling basis, so we encourage you to apply early. Please state your availability clearly in your resume (start date, end date).

Job Responsibilities:
- Explore, design, and develop security solutions for Large Language Models (LLMs), such as techniques to enhance LLM robustness against threats like prompt injection and private-information disclosure.
- Conduct comprehensive security risk assessments and provide actionable recommendations for mitigating potential vulnerabilities in LLMs.
- Stay well informed about existing AI risks to ensure the highest level of security across all AI projects.
- Keep abreast of the latest AI technologies and trends, proactively identifying opportunities to strengthen the security of AI products.
- Foster strategic collaborations with external organizations and experts to remain at the forefront of best practices and advancements in AI security.
- Continuously evaluate and optimize internal processes and procedures related to AI security to maintain a robust and resilient framework.
- Provide leadership and guidance to clients, empowering them to understand and adopt secure AI practices.
- Develop and deliver comprehensive training and educational programs on secure AI principles and best practices to team members, promoting a culture of AI security awareness and excellence.
Candidate requirements:
Minimum Qualifications:
- Currently pursuing a PhD in Computer Science or a related discipline.
- Excellent knowledge of the theory and practice of LLMs and foundation models.
- Strong publication record at conferences (NeurIPS, ICML, ICLR, ACL, EMNLP, etc.).

Preferred Qualifications:
- Good communication and collaboration skills; able to explore new technologies with the team and promote technological progress.
- Demonstrated software engineering, natural language processing, or deep learning experience from previous internships, work experience, coding competitions, or publications.
- High levels of creativity and quick problem-solving capabilities.
- Excellent coding ability, familiarity with data structures, and fundamental algorithm skills; proficient in Python, Go, or Java. Winners of competitions such as ACM/ICPC, USACO/NOI/IOI, TopCoder, or Kaggle are preferred.
| Source: | Company website |
| Published: | 19 Jan 2026 (verified 17 Feb 2026) |
| Employment type: | Internship |
| Sector: | Internet / New Media |
| Languages: | English |