Do you want the chance to be one of the engineers dedicated to running advanced ETL jobs across 100+ machines, bridging the gap between Business Intelligence and Data Engineering, and enabling (COMPANY NAME)'s BI teams to deliver insights, data and reports to internal and external stakeholders?
Our BI team is looking for skilled and courageous engineers who aren't scared off by huge datasets and complex structures, but can't wait to get stuck in.
If this sounds like you, then read on…
- What you'll do:
* Design, implement and schedule reporting pipelines.
* Maintain and optimize existing data transformation processes.
* Troubleshoot issues and fix bugs on production data systems and pipelines.
* Support Analysts in using tools such as Spark, Hive, or Impala.
* Align with Data Engineering for new solutions and applications to solve new reporting challenges.
* Build and maintain ETL (extract, transform, load) processes.
- What you'll definitely need:
* At least 1 year's experience in IT, Operations, DBA work or report development, ideally with a focus on data.
* Expert knowledge of SQL and at least basic familiarity with the general tools and principles of software engineering.
* An educational background in Software Engineering, at apprenticeship level or higher.
* Soft skills (personal traits suited to the role: data-driven, team player, multi-tasker, etc.).
* To speak fluent English (our company language).
- What we'd love you to have:
* 1+ years' work experience in Software Engineering, Data Analytics, QA, or a relevant field; internships also count.
* Good knowledge of Hive, Oozie and Sqoop.
* Basic knowledge of Git and Shell.
* An understanding of Java, YARN and Spark.