Join one of Austria's leading financial service providers in a stable corporate environment and work on data engineering projects with a focus on Databricks.
Tasks
- Develop, optimize, and operate data processing pipelines in PySpark.
- Incorporate streaming data processing technologies such as Kafka and Spark Streaming.
- Design cloud-based solutions and ensure their integration with on-premise systems.
- Contribute to the architectural aspects of solution design.
- Implement software development standards and best practices within the team.
- Provide support and mentoring to team members.
- Take on development and leadership responsibilities.
Requirements
- Minimum 2 years of development experience on the Databricks platform.
- Hands-on experience with Apache Spark and message broker technologies.
- Proficiency in SQL, including development and optimization of complex queries.
- Experience setting up and configuring CI/CD pipelines (e.g., GitHub, GitHub Actions).
- Willingness to mentor less experienced colleagues in code development.
- Willingness to facilitate pair programming, code reviews, and joint refactoring.
- Proficiency in English is mandatory; knowledge of German is a plus.
Home office regulations
- 1 day/week onsite in Vienna