We're looking for experienced Python Data Engineers to join our dynamic team.
Responsibilities:
• Develop and maintain robust, scalable data pipelines using Python and SQL scripting.
• Orchestrate data workflows leveraging Apache Airflow for job scheduling and automation.
• Implement distributed data processing using Apache Spark.
• Build and optimize data lakes, ensuring data accuracy and availability.
• Work extensively with AWS services, including S3, Lambda, and Glue, for data ingestion, processing, and transformation.
• Optimize data ingestion, transformation, and consumption processes for large-scale datasets.
Requirements:
• ++ years of hands-on experience in Python and SQL scripting.
• Proven expertise in Apache Airflow and Apache Spark.
• Hands-on experience with AWS services such as S3, Glue, and Lambda.
• Strong problem-solving skills and a proactive, collaborative attitude.