Job Description:
* Designing and implementing data pipelines from multiple sources using Azure Databricks or the Hadoop ecosystem
* Developing scalable, reusable frameworks for ingesting data sets
* Integrating end-to-end data pipelines to move data from source systems to target data repositories, ensuring data quality and consistency are maintained at all times
* Working with event-based, streaming, and scheduling technologies to ingest and process data
* Writing Python scripts to process data according to the needs of other engineers
* Building dashboards using Power BI or Tableau to meet end-user needs
* Evaluating the performance and applicability of multiple tools for automated data pipelines
* Contributing to application development using low-code platforms such as PowerApps and OutSystems
* Contributing to AI solution development in the organization, with a focus on building Python scripts for data processing
Candidate Requirements:
Qualifications
* Final-year student majoring in computer science or an equivalent field
* Basic experience in designing, developing, deploying, and/or supporting data pipelines using Databricks, Azure, OpenShift, or virtual machines
* Basic experience in building dashboards using Power BI or Tableau
* Proficient in Python and PySpark
* Strong foundation in data structures and algorithms
* Good understanding of SQL and APIs
* Good at computational problem solving
* Willingness to learn and apply new technology, able to work independently
* Knowledge of and experience with knowledge graphs is a plus
| Source: | Company website |
| Posted on: | 22 Oct 2025 (verified 14 Dec 2025) |
| Type of offer: | Internship |
| Industry: | Consumer Electronics |
| Languages: | English |