Job Details
All about Zeta Suite: Zeta is the world's first and only Omni Stack for banks and fintechs. We are rethinking payments from core to the edge, led by the vision to augment the purpose of money and banking with technology: a single, modern software stack comprising processing, loans, customizable mobile and web apps, a fraud engine, and rewards for retail banking. We are a new-age, high-growth startup (and a unicorn!) founded in 2015 by two visionary leaders whose entrepreneurial legacy and excellence have put us at the top of the global fintech ecosystem. Zeta counts amongst its customers over 10 banks and 25 fintechs across 8 countries. Some of our notable clients include Sodexo, a leading issuer of employee benefits and rewards with over 30 million global users, and HDFC Bank, the 14th largest bank in the world by market cap.
What would you do here?
- Lead the design, development, and maintenance of data architecture and data models to support data ingestion, transformation, and analysis using tools such as Debezium, Kafka, Apache Spark, Flink, Airflow, Trino, and DBT.
- Develop, maintain, and optimize ETL pipelines and data integration workflows using programming languages like Python, Java, and Scala to ensure high-quality and timely delivery of data.
- Monitor and troubleshoot data quality and performance issues, identifying opportunities for improvement and implementing best practices using tools like Prometheus and Grafana.
- Design and implement data security and privacy measures to ensure data is protected and compliant with relevant regulations, using tools such as Apache Ranger and AWS IAM.
- Stay up-to-date with industry trends and emerging technologies to continuously improve the data engineering capabilities of the organization.
- Mentor and guide junior data engineers, sharing your expertise in data engineering best practices, tools, and technologies.
- Work with stakeholders to define and document data governance policies and ensure adherence to them.
- Collaborate with data scientists, analysts, and business stakeholders to understand their data needs and provide scalable solutions.

What are we looking for?

- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 8+ years of experience in data engineering, with a strong focus on data architecture, ETL, and data modeling using tools such as Apache Spark, Trino (formerly Presto), Airflow, DBT, and Python.
- Experience leading the design and implementation of complex data systems and architectures, with a deep understanding of data warehousing concepts and cloud-based data platforms (e.g., AWS, GCP, Azure) and associated technologies such as S3, Redshift, and BigQuery.
- Expertise in programming languages like Python, Java, and Scala, as well as one or more big data technologies (e.g., Hadoop, Spark, Kafka, Airflow). Excellent SQL skills are mandatory.
- Experience with data security and privacy practices, as well as regulatory compliance (e.g., GDPR, CCPA) and tools such as Apache Ranger and AWS IAM.
- Familiarity with streaming data architectures and tools like Debezium for change data capture and Apache Kafka for messaging.
- Experience with distributed query engines like Trino (formerly Presto) for high-performance querying of big data.
- Strong leadership skills and experience mentoring and guiding junior data engineers.
- Excellent problem-solving skills and the ability to work independently or as part of a team.
- Strong communication and interpersonal skills to collaborate effectively with cross-functional teams.