Required:
- 5+ years of experience with Apache Spark
- Strong programming skills in Scala, Java, or Python
- Experience with big data tools and technologies such as Hadoop, Hive, Kafka, and HDFS
- Proficiency in SQL and experience with relational databases
- Familiarity with cloud platforms like AWS, Azure, or Google Cloud
- Strong problem-solving skills and attention to detail
- Excellent communication and teamwork skills

Preferred:
- Master’s degree in Computer Science or related field
- Experience with streaming technologies such as Spark Streaming or Kafka Streams
- Familiarity with DevOps practices and tools (e.g., Docker, Kubernetes)
- Knowledge of data warehousing and data modeling
- Experience with machine learning frameworks and libraries