Job Details
All about Zeta Suite: Zeta is the world's first and only Omni Stack for banks and fintechs. We are rethinking payments from core to the edge, led by the vision to augment the purpose of money and banking with technology: a single, modern software stack comprising processing, loans, customizable mobile and web apps, a fraud engine, and rewards for retail banking. We are a new-age, high-growth startup (and a unicorn!) founded in 2015 by two visionary leaders whose entrepreneurial legacy and excellence have put us at the top of the global fintech ecosystem. Zeta counts amongst its customers over 10 banks and 25 fintechs across 8 countries. Some of our notable clients include Sodexo, a leading issuer of employee benefits and rewards with over 30 million global users, and HDFC Bank, the 14th largest bank by market cap in the world.
What would you do here
Lead a team of data engineers in the design, development, and maintenance of data architecture and data models to support data ingestion, transformation, and analysis using tools such as Apache Spark, Flink, Airflow, Trino, and DBT.
Develop, maintain, and optimize ETL pipelines and data integration workflows using various programming languages, and ensure high-quality and timely delivery of data.
Manage the team's workflow, including setting priorities, managing resources, and ensuring timely delivery of high-quality projects.
Monitor and troubleshoot data quality and performance issues, identifying opportunities for improvement and implementing best practices using tools like Prometheus and Grafana.
Design and implement data security and privacy measures to ensure data is protected and compliant with relevant regulations using tools such as Apache Ranger and AWS IAM.
Collaborate with data scientists, analysts, and business stakeholders to understand their data needs and provide scalable solutions using tools such as Jupyter Notebooks.
Stay up to date with industry trends and emerging technologies to continuously improve the data engineering capabilities of the organization.
Mentor and guide data engineers, sharing your expertise in data engineering best practices, tools, and technologies.
Work with stakeholders to define and document data governance policies and ensure adherence to them.
Recruit, onboard, and train new team members as needed.
What are we looking for
Bachelor's degree in Computer Science, Information Systems, or a related field.
12+ years of experience in data engineering, with a strong focus on data architecture, ETL, and data modeling using tools such as Apache Spark, Flink, Trino, Airflow, DBT, and Python.
4+ years of experience managing a team of data engineers, with a track record of successful delivery of complex data projects and strong leadership skills.
Experience leading the design and implementation of complex data systems and architectures, with a deep understanding of data warehousing concepts and cloud-based data platforms (e.g., AWS, GCP, Azure) and associated technologies such as S3, Redshift, and BigQuery.
Expertise in programming languages like Python, Java, and Scala, as well as one or more big data technologies (e.g., Delta Lake, Iceberg, Hadoop, Spark, Kafka, Airflow).
Excellent SQL skills are mandatory.
Experience with data security and privacy practices, as well as regulatory compliance (e.g., GDPR, CCPA) and tools such as Apache Ranger and AWS IAM.
Familiarity with streaming data architectures and tools like Debezium for change data capture and Apache Kafka for messaging.
Experience with distributed query engines like Trino (formerly Presto) for high-performance querying of big data.
Strong leadership skills and experience mentoring and guiding data engineers.
Excellent problem-solving skills and the ability to work independently or as part of a team.
Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.