Job Description
* Designing and implementing data pipelines from multiple sources using Azure Databricks or the Hadoop ecosystem
* Developing scalable and reusable frameworks for ingesting data sets
* Integrating end-to-end data pipelines to move data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times
* Working with event-based, streaming, and scheduling technologies to ingest and process data
* Writing Python scripts to process data according to the needs of other engineers (a minimal illustrative sketch follows this list)
* Analyzing data and building machine learning models, as well as applications that use large language models
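As an illustration of the kind of pipeline and scripting work described above, here is a minimal, hypothetical PySpark sketch: it reads a CSV source from an assumed landing path, applies basic quality checks, and writes to an assumed curated path. The paths, column name, and app name are illustrative assumptions and are not part of the posting.

```python
# Hypothetical sketch of a simple ingestion step with PySpark.
# Paths and the "event_id" column are assumptions for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-ingestion").getOrCreate()

# Read raw records from an assumed landing zone.
raw = (spark.read
       .option("header", "true")
       .csv("/mnt/landing/events/"))

# Basic quality and consistency checks: drop rows missing the key, deduplicate.
clean = (raw
         .filter(F.col("event_id").isNotNull())
         .dropDuplicates(["event_id"]))

# Write to an assumed target repository in Parquet format.
clean.write.mode("overwrite").parquet("/mnt/curated/events/")
```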
Qualifications
* Final-year student majoring in computer science, able to commit to a full-time 6-month internship.
* Strong ownership mindset: willing to take on challenging tasks, actively work through obstacles, and come up with proposals.
* Strong computational problem-solving ability (analyze complex problems, build hypotheses with logical reasoning, verify hypotheses, and implement solutions in code).
* Proficient in programming languages such as PySpark, Python, and SQL.
* Strong foundation in data structures and algorithms
* Experience designing, developing, deploying, and/or supporting data pipelines using Databricks, Azure, or the Hadoop ecosystem
* Knowledgeable about CSS, APIs, and SQL databases.
* Willingness to learn and apply new technologies; able to work independently
| Source: | Company website |
| Published: | 10 Oct 2025 (checked on 13 Dec 2025) |
| Offer type: | Internship |
| Sector: | Consumer Electronics |
| Languages: | English |