Employment Information
Trustly is leading the human-centric payments revolution. To us, this means passionately building the most convenient, intelligent and responsible way of paying for things online. Whether it's shopping, paying subscriptions, funding trading accounts, booking airfare, playing online games, or much more -- we're all about a better way to pay. At our core, we are a tech company with industry-leading capabilities, but it's the ingenuity of our people that makes us leaders in our field, and our appetite for innovation will never be anything less than fierce. Trustly is growing steadily, connecting thousands of businesses with hundreds of millions of people. With a strong presence across Europe and the Americas, we are leading the human-centric payments revolution as a truly global team.
About the Data domain
The Data domain is part of the Product and Tech function, with the mission of transforming Trustly into a truly data-driven organization. We are responsible for every aspect of the data journey, from collection and modeling to analysis and visualization. We provide the whole organization with data and analytics artifacts that improve our decisions, our operations, and the customer experience.
About the role
We are seeking a talented Machine Learning Engineer to join our Machine Learning team. In this role, you will build and deploy machine learning models that help drive and automate critical business decisions within our product services. You will have the opportunity to work on cutting-edge ML applications, collaborate with a multidisciplinary team, and take ownership of the end-to-end machine learning lifecycle. If you're passionate about ML and AI, problem-solving, and building scalable solutions, we'd love to have you on board.
What you'll do
- Model Development & Optimization: Develop end-to-end pipelines to deploy machine learning models into production environments, and continuously improve model performance through feature selection, hyperparameter tuning, and model lifecycle management.
- Data Pipeline Design & Management: Build and maintain efficient data pipelines for data collection, preprocessing, and feature transformation to ensure high-quality near real-time input for ML models.
- Performance Monitoring: Monitor and evaluate feature drift and the performance of deployed models, ensuring they meet business requirements and making adjustments as necessary to maintain a high level of model performance over time.
- Collaborate Across Teams: Work closely with data scientists, software engineers, architects, and product managers to understand business needs and translate them into scalable machine learning products.
- Stay Ahead of Trends: Keep up with the latest advancements in machine learning and AI technologies, applying new methodologies and tools to enhance the effectiveness of your work.
Who you are
- Experience: You have 5+ years of hands-on experience developing and deploying machine learning models in a production environment with near real-time latency.
- Education: You have a Master's degree in Computer Science, Engineering, Mathematics, or a related field.
- Technical Skills:
- Proficiency in programming languages such as Python, Java, or C++.
- Experience with ML frameworks and platforms such as SageMaker, MLflow, TensorFlow, Keras, or PyTorch.
- Strong understanding of machine learning algorithms, statistics, and optimization techniques.
- Experience with MLOps, the model lifecycle, infrastructure as code, and CI/CD pipelines for model deployment.
- Familiarity with cloud platforms such as AWS or Google Cloud, and experience deploying ML models at scale on them.
- Experience with big data tools and distributed computing frameworks (e.g., Spark, Apache Beam, Flink).
- Experience with streaming data pipelines in Kafka or similar.
- Knowledge of version control systems (e.g., Git) and containerization tools (e.g., Docker).
- Experience in open banking is a plus but not required.