
Raiffeisen Bank
Job title:
DATA ENGINEER (Data Science Enablement)
Company:
Raiffeisen Bank
Job description
What will your tasks be?
- Participate in the lifecycle of data science projects, incl. design and development of data processing and monitoring pipelines
- Work with state-of-the-art cloud infrastructure (AWS, Databricks)
- Assemble large, complex data sets to meet functional / non-functional business requirements
- Develop, maintain and optimize ELT and ETL pipelines (incl. incident investigation and writing postmortems)
- Continuously support internal consumers (data analysts, data scientists) in best data engineering practices and automation of development pipelines
- Find and adopt best practices, share your knowledge with the team, and constantly learn from others
Why is it worth working with us?
- You'll work in both a local and an international analytics team at a leading bank
- Be part of a large international Data Team community that regularly shares learnings, best practices, and use cases
- Data science courses provided by the group
- Flexible Home Office opportunity (up to 80%)
- You can choose from 16 types of Cafeteria benefits: SZÉP card, extra day off, commuting support, health insurance, and tickets to cultural and sports events
- We provide housing loan support, an employee account package
- A modern working environment, dining rooms and cafes await you
- As part of a well-being program, we pay attention to the physical and mental health of our colleagues
Application requires
- Structured and conceptual mindset coupled with strong quantitative and analytical problem-solving attitude.
- Professional experience in designing and developing production-ready data applications and pipelines in a cloud ecosystem.
- Software engineering excellence: understanding of SDLC, Unit / Integration tests, Data Lake architecture.
- Knowledge of PySpark and SQL (DDL, analytical functions, sub-queries, performance-optimization principles).
- Experience working within agile (scrum) methodology.
- Fluent English, spoken and written.
Will be a plus:
- BSc in Computer Science, Informatics, Software Engineering or related major
- Solid knowledge of ML principles, frameworks, and analytical libraries (e.g., pandas, NumPy, scikit-learn)
- Knowledge of dbt for ETL pipelines
- Good comprehension of data warehousing principles, MDM, data models (LDM/PDM)
- Experience with other programming languages beyond Python
- Experience in developing CI/CD/CT pipelines (we're using GitHub Actions)
Expected salary
Location
Budapest
Job date
Sun, 11 May 2025 04:45:43 GMT
To help us track our recruitment effort, please indicate in your email/cover letter where (vacanciesin.eu) you saw this job posting.