Job description:
You are a goal-oriented, proactive team player with an engineering mindset, focused on delivering efficient solutions to business problems.
What we offer
* Be part of a growing team making fast decisions, where you can watch your ideas and actions come to fruition. This is what adds up to time well-spent - not mere billable hours.
* Creative ownership - drive the business forward with your ideas, launch projects from the ground up and see them through from inception to completion, giving your input and galvanising your colleagues while you bring out the best in each other.
* An opportunity to be the change you want to see: at (COMPANY NAME) you can use your skills to not only make a good living, but to enhance the transparency of the financial ecosystem.
* Diversity: intellectual as well as cultural - join a welcoming international team of smart and open-minded people, where it's easy to make friends.
* A Personal Development Plan, along with access to dedicated resources to ensure that you can be the best in your role.
* Work-family-friends balance - step off the treadmill and feel like a human again: our office is all about maintaining a healthy balance between a results-driven work environment and your all-around wellbeing.
What you will do
* Develop a scalable, extensible and easily maintainable data platform that allows the business to grow effectively
* Own data pipelines and support cross-functional data and product teams to ensure smooth product development
* Support the data science team in integrating machine learning services into the product
What you bring on board
* A completed university degree plus 2+ years of work experience, or 5+ years of work experience, in a software engineering or data engineering role in a fast-paced environment (e.g. a startup)
* Python (6+ on a scale of 1 to 10, where 10 is the level of Maxime Beauchemin)
* SQL (able to read, debug and write complex queries)
* Good understanding of transactional and analytical databases
* Work experience in Linux/Unix environment
* Work experience building scalable data pipelines
* TDD, BDD, DDD, extreme programming, lean development
* Gitlab CI/CD
* AWS (S3, RDS, EKS, Fargate)
* Being a programming polyglot, e.g. experience with Go, Node.js, R, etc.
* Experience working with Apache Airflow
* FinTech experience is a plus
What we are doing
(COMPANY NAME) is developing a platform to connect institutional sellers (originators) and buyers (investors) of loans at large scale. Originators provide historical data that is used to build prediction models. Based on the predictions, investors can run several types of analytics to define their investment criteria. Once an investment is made, the platform is fed with updates on loan performance to support further portfolio analysis. This gives investors a high degree of transparency, helping to prevent crashes like that of 2008. As a further step, purchased loans can be resold on the platform, creating a liquid market for loans and enabling a better flow of capital between countries in the European Union.
The Data Platform team is one of the most critical business units. We develop the infrastructure to integrate originators, perform data science tasks, and provide reporting solutions and analytical instruments that deliver the best value to our customers. We follow a lean development approach and rely on the following tech stack:
* AWS and Kubernetes, as the host and orchestrator for our platform
* Transactional and analytical databases, e.g. Postgres, Redshift, Snowflake
* Python, as the main language to prototype and productionize data platform services
* SQL, as a universal instrument for analysing structured data at scale