Big Data Engineer (Spark, Kafka, Hive, T-SQL, Python, Big Data)
I am working with a leading financial services client based in Frankfurt who are looking for a Big Data Engineer (Spark, Kafka, Hive, T-SQL, Python, Big Data) to be responsible for designing, developing and delivering end-to-end solutions within the Markets Data Lake for work funded by the Data Theme or associated programs.
The successful Big Data Engineer will be responsible for the end-to-end delivery of processes that ingest real-time, batch and file-based data from the appropriate middleware/Kafka sources, for the transformation and persistence of this data, and for making it available to the specific business/application processes that require it.
Skills required for the Big Data Engineer (Spark, Kafka, Hive, T-SQL, Python, Big Data):
- Proficient in designing, developing and delivering Big Data solutions on a Hadoop/Hortonworks platform
- Expertise and experience with Hadoop-based platforms and associated technologies: Spark, Kafka, Hive, HBase, YARN, Phoenix, Oozie, Python
- End-to-end delivery from PoC through to production service
- Strong developer/coding skills that encompass:
  - Architecture and solution design
  - T-SQL experience
  - Python development
- Design and engineering decisions that cater for both functional and non-functional requirements
- Knowledge of industry and technology trends around Big Data
- Enterprise solutions
Desirable skills for the Big Data Engineer (Spark, Kafka, Hive, T-SQL, Python, Big Data):
- Investment Banking experience (products and their lifecycles)
- Understanding of Banking regulation projects
- Experience working in a Scrum/Agile delivery team
If this sounds like you, then please apply with your up-to-date CV. In addition, we offer a €350 Amazon voucher for each successfully placed referred candidate, so if you think any of your friends or colleagues would be interested, feel free to put them in touch.