Job Details
Job Description: We are looking to hire a Data Engineer who thrives on challenges and wants to make a real difference in the business world. With an environment of extraordinary innovation and unprecedented growth, this is an exciting opportunity for a self-starter who enjoys working in a fast-paced, quality-oriented team environment.

What you should have
- Minimum 3+ years of mandatory experience building ETL and data engineering pipelines with the following skills, tools, and technologies:
  - AWS Data & Analytics services: Athena, Glue, EMR, DynamoDB, Redshift, Kinesis, Lambda
  - PySpark and Spark SQL
- 3+ years of coding experience with a modern programming or scripting language (Python)
- Expert-level skills in writing and optimizing SQL
- Sound knowledge of distributed systems and data architecture (Lambda architecture): able to design and implement batch and stream data processing pipelines, and to optimize the distribution, partitioning, and MPP execution of high-level data structures (see the PySpark sketch after this list)
- Experience with the full software development life cycle, including coding standards, code reviews, source control management, build processes, and testing
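As a rough illustration of the batch-pipeline and partitioning skills listed above, a minimal PySpark job might look like the sketch below. Every bucket, path, and column name (example-raw-bucket, order_id, order_ts, and so on) is a hypothetical placeholder, and a streaming counterpart would use Structured Streaming against Kinesis instead of a batch read.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-batch-pipeline").getOrCreate()

    # Ingest raw JSON events landed in S3 (hypothetical bucket and path).
    raw = spark.read.json("s3://example-raw-bucket/orders/")

    # Curate: type the timestamp, derive a date partition key, deduplicate.
    curated = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .dropDuplicates(["order_id"])
    )

    # Repartition on the partition key so each output date is written by a
    # bounded set of tasks, then persist as partitioned Parquet that Athena
    # or Redshift Spectrum can query directly.
    (
        curated.repartition("order_date")
               .write.mode("overwrite")
               .partitionBy("order_date")
               .parquet("s3://example-curated-bucket/orders/")
    )

Repartitioning on the same column used in partitionBy keeps small files under control, which is the usual lever for tuning distribution and partitioning in pipelines like this.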
What you will do
- Explore and learn the latest AWS Data & Analytics and Databricks platform features/technologies to provide new capabilities and increase efficiency.
- Build and operationalize large-scale enterprise data solutions and applications using one or more AWS data and analytics services in combination with third-party tools: Databricks, Spark, EMR, DynamoDB, Redshift, Kinesis, Lambda, Glue, Athena.
- Design and build production data pipelines/ETL jobs, from ingestion to consumption, within a big data architecture, using Python and PySpark.
- Implement data engineering, ingestion, and curation functions on the AWS cloud (see the Glue job sketch after this list).
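For the ingestion-and-curation duties above, an AWS Glue PySpark job typically follows the skeleton below. This is a sketch under stated assumptions, not this team's actual pipeline: the database, table, and bucket names (example_db, raw_events, s3://example-curated-bucket/) are hypothetical placeholders.

    import sys
    from awsglue.utils import getResolvedOptions
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from pyspark.context import SparkContext

    # Standard Glue job bootstrap: resolve arguments and initialize contexts.
    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glueContext = GlueContext(SparkContext())
    job = Job(glueContext)
    job.init(args["JOB_NAME"], args)

    # Ingest: read the raw table registered in the Glue Data Catalog
    # (hypothetical database and table names).
    dyf = glueContext.create_dynamic_frame.from_catalog(
        database="example_db", table_name="raw_events"
    )

    # Curate: switch to a DataFrame for Spark SQL-style cleanup.
    df = dyf.toDF().filter("event_type IS NOT NULL").dropDuplicates(["event_id"])

    # Persist curated output to S3 as partitioned Parquet, queryable via Athena.
    df.write.mode("append").partitionBy("event_date").parquet(
        "s3://example-curated-bucket/events/"
    )

    job.commit()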