Data Engineer

Full-time

Employment Information

About Us
Zuru Tech is digitalizing the construction process of buildings all around the world. Our multi-national team is developing the world's first digital building fabrication platform: you design it, we build it!
At ZURU we develop the Zuru Home app, BIM software for the general public, architects, and engineers alike. With it, anyone can buy, design, and send to manufacturing any type of building with complete design freedom. Welcome to the future!
What are you Going to do?

📌Be responsible for developing and maintaining complex data pipelines, ETL, data models, and standards for various data integration and data warehousing projects, from sources to sinks, on SaaS/PaaS platforms - Snowflake, Databricks, Azure Synapse, Redshift, BigQuery, etc.
📌Develop scalable, secure, and optimized data transformation pipelines and integrate them with downstream sinks (a sketch follows this list).
📌Support technical solutions from a data-flow design and architecture perspective, keep them headed in the right direction, and propose resolutions to potential data pipeline problems.
📌Participate in developing proofs of concept (PoCs) of key technology components for project stakeholders.
📌Develop scalable and reusable frameworks for ingesting geospatial data sets.
📌Develop connectors to extract data from sources and use event/streaming services to persist and process data into sinks (see the streaming sketch under Required Skills).
📌Collaborate with other project team members (architects, senior data engineers) to support the delivery of additional project components (such as API interfaces, search, and visualization).
📌Participate in evaluating data integration tools on the market and creating proofs of value (PoVs) around their performance against customer requirements.
📌Work within an Agile delivery/DevOps methodology to deliver proofs of concept and production implementations in iterative sprints.
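
As a rough illustration of the transformation pipelines mentioned above, here is a minimal PySpark batch ETL sketch. The connection details, table names, and storage paths are hypothetical placeholders, not a description of our actual stack:

```python
# A minimal batch ETL sketch: extract over JDBC, transform, load to object storage.
# All names (URL, tables, bucket paths) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read a source table over JDBC (requires the matching JDBC driver).
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://source-db:5432/sales")  # assumed source
    .option("dbtable", "public.orders")
    .option("user", "etl_user")
    .option("password", "...")  # fetch from a secret store in practice
    .load()
)

# Transform: deduplicate, derive a date column, and drop invalid rows.
cleaned = (
    orders.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .filter(F.col("amount") > 0)
)

# Load: write partitioned Parquet to an object-store sink (placeholder path).
(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://warehouse/bronze/orders/")
)
```
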
What are we Looking for?

✔ Must have 3+ years of experience working as a Data Engineer on cloud transformation projects delivering data management solutions - AWS, Azure, GCP, Snowflake, Databricks, etc.
✔ Must have hands-on experience in at least one end-to-end implementation of a data lake/warehouse project using PaaS and SaaS offerings - such as Snowflake, Databricks, Redshift, Synapse, BigQuery, etc. - plus 6+ months of on-premise data warehouse/data lake implementation.
✔ Experience in data pipeline development, ETL/ELT, implementing complex stored procedures, and standard DWH and ETL concepts.
✔ Experience in setting up resource monitors, RBAC controls, warehouse sizing, query performance tuning, IAM policies, and cloud networking (VPC, Virtual Network, etc.).
✔ Experience in data migration from on-premise RDBMS to cloud data warehouses.
✔ Good understanding of relational as well as NoSQL data stores, methods, and approaches (star and snowflake schemas, dimensional modeling - a modeling sketch follows this list).
✔ Hands-on programming experience in Python and PySpark for data integration projects.
✔ Good experience in cloud data storage (Blob, S3, object stores, MinIO), data pipeline services, data integration services, and data visualization.
✔ Ability to resolve a wide range of complex data pipeline problems, both proactively and as issues surface.
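
To illustrate the dimensional modeling mentioned above, here is a minimal star-schema fact load in PySpark. The staging, dimension, and fact tables (stg_orders, dim_date, dim_customer, dim_product, fact_sales) are hypothetical names, assumed to already exist in the metastore:

```python
# A minimal star-schema load: resolve surrogate keys by joining staged rows
# to conformed dimensions, then append to the fact table. Names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_load").getOrCreate()

# Each join swaps a natural key from the source for the dimension's surrogate key.
fact_sales = spark.sql("""
    SELECT d.date_key,
           c.customer_key,
           p.product_key,
           o.quantity,
           o.amount
    FROM stg_orders o
    JOIN dim_date     d ON d.calendar_date = o.order_date
    JOIN dim_customer c ON c.customer_id   = o.customer_id
    JOIN dim_product  p ON p.product_id    = o.product_id
""")

# insertInto appends by default, matching the fact table's existing schema.
fact_sales.write.insertInto("fact_sales")
```
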

Required Skills

✔ AWS / Azure / GCP (any one of the three) native data engineering / architecture
✔ Apache Kafka, ELK, Grafana (see the streaming sketch after this list)
✔ Snowflake / Databricks / Redshift
✔ Hadoop
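
Since Kafka and PySpark both appear above, here is a minimal sketch of a streaming ingest path using Spark Structured Streaming to read from Kafka and persist to object storage. It assumes the spark-sql-kafka connector package is on the classpath; broker addresses, topic, and paths are hypothetical:

```python
# A minimal streaming ingest sketch: Kafka source -> Parquet sink with
# checkpointing. Broker, topic, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # assumed broker
    .option("subscribe", "building-events")              # assumed topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast the payload before processing.
parsed = events.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Persist the stream with a checkpoint location for exactly-once recovery.
query = (
    parsed.writeStream.format("parquet")
    .option("path", "s3a://warehouse/raw/events/")        # placeholder sink
    .option("checkpointLocation", "s3a://warehouse/chk/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```
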

What do we Offer?
💰 Competitive compensation
💰 Annual Performance Bonus
⌛️ 5 Working Days with Flexible Working Hours
🌎 Annual trips & Team outings
🚑 Medical Insurance for self & family
🚩 Training & skill development programs
🤘🏼 Work with a global team and make the most of its diverse knowledge
🍕 Plenty of discussions over multiple pizza parties

A lot more! Come and discover us!
