Job Description
* Designing and implementing data pipelines from multiple sources using Azure Databricks or the Hadoop ecosystem
* Developing scalable and reusable frameworks for ingesting data sets
* Integrating end-to-end data pipelines to take data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times
* Working with event-based, streaming, and scheduling technologies to ingest and process data
* Writing Python scripts to process data according to the needs of other engineers
* Analyzing data and building machine learning models, as well as applications that use large language models
Qualifications
* Final-year student majoring in computer science, able to commit to a full-time 6-month internship.
* Strong ownership mindset: willing to take on challenging tasks, actively work through obstacles, and come up with proposals.
* Strong computational problem-solving ability (analyze complex problems, build hypotheses with logical reasoning, verify them, and implement solutions in code).
* Proficient in programming languages and tools such as PySpark, Python, and SQL.
* Strong foundation in data structures and algorithms.
* Experience designing, developing, deploying, and/or supporting data pipelines using Databricks, Azure, or the Hadoop ecosystem.
* Knowledgeable about CSS, APIs, and SQL databases.
* Willingness to learn and apply new technologies; able to work independently.
| Source: | Company website |
| Published on: | 10 Oct 2025 (verified on 13 Dec 2025) |
| Employment type: | Internship |
| Sector: | Consumer electronics |
| Languages: | English |