Job description:
Imagine what you could do here. At Apple, new ideas become great products, services, and customer experiences quickly. Bring focus, rigor, and curiosity to your work and help shape the future of Apple's data infrastructure. Apple is seeking an experienced software engineer to join the Data Solutions team within Data Services, responsible for building and evolving the distributed data systems that power Apple's most critical consumer services - including Apple Music, TV, and Podcasts. You'll work on high-scale streaming and data processing platforms, solving hard problems around cross-datacenter replication and getting user data to the edge as fast as possible. Your work will directly impact hundreds of millions of Apple users and the teams that build experiences for them.
Apple's Data Solutions team, within the broader ASE Data Services organization, builds and operates data infrastructure that is reliable, scalable, and low-latency. The team focuses on real-time data streaming, large-scale data processing, and optimizing cross-datacenter replication to bring user data to the edge with the smallest possible latency. Apache Kafka plays a central role in our streaming and replication infrastructure, and familiarity with its ecosystem is a meaningful advantage. Our systems sit at the heart of services like Apple Music, TV, and Podcasts, ensuring that data is available where and when it's needed - anywhere in the world. A key focus for this team is automating existing systems and integrating them into a centralized cloud platform - modernizing the infrastructure that underpins these services and building the tooling that lets us operate them at scale. Engineers on this team own their platforms end-to-end: from internals and protocol-level work to operational tooling, observability, and multi-region deployment.

As a member of this team, you will build and evolve foundational components of Apple's data replication platform. Areas of work include:
* Real-time data streaming and event-driven pipelines
* Cross-datacenter replication and consistency
* Edge delivery optimization for lowest-latency data access
* Platform reliability, observability, and incident response
* Automation of existing systems and integration into centralized cloud platforms
Minimum qualifications:
* Proficiency in Java and/or C++, with a strong understanding of concurrency, memory management, and performance in distributed systems or data infrastructure at scale.
* Experience designing, building, and operating large-scale distributed systems.
* Solid understanding of data structures, algorithms, fault tolerance, and system performance.
* Experience with RESTful API design and service-oriented architectures.
* Bachelor's degree in Computer Science or equivalent practical experience.
Preferred qualifications:
* Experience with distributed data systems such as Cassandra, Redis, Kafka, or similar platforms.
* Experience with Apache Kafka - including broker internals, producers/consumers, and ecosystem tooling - is a strong plus.
* Experience with multi-datacenter deployments, replication strategies, and consistency models.
* Hands-on experience with cloud platforms and container orchestration (e.g. Kubernetes, AWS, GCP, or similar).
* Exposure to observability practices, including monitoring, alerting, and performance benchmarking.
* Experience with fault injection, chaos engineering, or property-based testing methodologies.
* Contributions to open-source projects, especially in the data infrastructure ecosystem.
| Source: | Company website |
| Posted: | 10 Apr 2026 (verified 13 Apr 2026) |
| Position type: | Job |
| Industry: | Consumer electronics |
| Languages: | English |