Big Data Engineer (Hadoop/Spark/Python)
I am currently working with a multinational client based in Munich who is looking for a Data Engineer for a largely remote, long-term contract.
Key duties include:
- Design, develop and operate data pipelines within a multidisciplinary agile team.
- Analyse requirements in order to understand the design options.
- Understand and communicate the pros and cons of different technologies and approaches.
- Collaborate with architects and operations engineers to propose and deploy cloud or on-premises infrastructure.
Ideal Experience:
- Large scale data processing platforms, typically based on Hadoop/Spark
- Business intelligence/analytics products or frameworks
- Data visualisation frameworks
- Different types of databases: relational, document, graph, columnar, key-value.
- Ability to write high-quality code in Scala, incorporating disciplines such as Test-Driven Development and structured version control; familiarity with Python is a bonus.
If you or any of your colleagues are interested, please do not hesitate to reach out using the following:
T: +44 (0)207 014 0231
E: zackt@montash.de
(We also offer €350 in Amazon Vouchers for successfully placed referred candidates)
