Development of the data models underlying our BI and analytics solutions
Coding in SQL & Python on Palantir Foundry
Assemble large, complex data sets that meet business requirements or advanced analytics use cases
Build robust and automated pipelines to ingest and process structured and unstructured data from source systems into analytical platforms
Collaborate within a cross-functional, international team to design and implement data-driven workflow solutions
Your profile
Hands-on experience with PySpark, Python and SQL
Hands-on experience with tools from the Hadoop Big Data ecosystem, such as Spark, Pig or HDFS
Experience with AWS, Azure or GCP services
Experience with agile methodology, Scrum and problem-centric solution design
Proven knowledge of data engineering best practices
Fluent in English; German is a plus
Additional Assets
Previous exposure to Palantir Foundry and AIP
Experience with Kubernetes or Kafka
Media/Advertising/Marketplaces industry domain knowledge
Recruiting Process
After we have reviewed your application documents, we will arrange a short call to clarify initial questions and schedule the next steps
This is followed by a video call with Kalin Ivanov, our Head of Data Engineering (approx. 45 to 60 minutes)
The next step is a second, on-site interview, where you will also get to know the team
If everything goes well, we will then make you a contract offer
Does this sound like you? Then we look forward to receiving your application. We show our true colours and are convinced that diversity is an opportunity. Everyone (*all) should be able to be exactly who they are, and we create a working environment that makes this possible. For this position we only consider direct applications.