
Data Engineer - Spark / Kafka / AWS

Eliassen Group
life insurance, 401(k)
United States, New York, New York
61 Broadway
Nov 11, 2024

Description:

Hybrid role in New York City. Our client, a major US-based Sports & Entertainment organization, is looking to engage an experienced Data Engineer to join a group responsible for developing and extending their next-generation data platform. This platform will enable fans to access personalized content and related offerings through advanced analytics. The platform is being built leveraging state-of-the-art cloud components and capabilities.

In this role, you will be responsible for building scalable solutions for data ingestion, processing, and analytics that cut across data engineering, architecture, and software development. You will be involved in designing and implementing cloud-native solutions for ingestion, processing, and compute at scale.

Due to client requirements, applicants must be willing and able to work on a W2 basis. For our W2 consultants, we offer a great benefits package that includes Medical, Dental, and Vision benefits, 401(k) with company matching, and life insurance.

Rate: $70 - $80/hr. W2



Responsibilities:

  • Design, implement, document, and automate scalable, production-grade, end-to-end data pipelines, including API and ingestion, transformation, processing, monitoring, and analytics capabilities, while adhering to best practices in software development.
  • Build data-intensive application solutions on top of cloud platforms such as AWS, leveraging state-of-the-art solutions including distributed compute, lakehouse architecture, and real-time streaming, while enforcing best practices for data modeling and engineering.
  • Work with the infrastructure engineering team to set up the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.



Experience Requirements:

  • Bachelor's degree in computer science or a related field required.
  • Minimum of 2 years of related experience with a track record of building production software.
  • Hands-on experience building and delivering cloud-native data solutions (AWS preferred).
  • Solid computer science fundamentals with experience across a range of disciplines, with one or more areas of deep knowledge, and experience in an advanced programming language.
  • Working experience with distributed processing systems, including Apache Spark.
  • Hands-on experience with lakehouse architecture, open table formats such as Hudi, orchestration frameworks such as Airflow, real-time streaming with Apache Kafka, and container technologies.
  • Deep understanding of software best practices and their application in data engineering.
  • Familiarity with data science and machine learning workflows and frameworks.
  • Ability to work independently and collaborate with cross-functional teams to complete projects.
  • Experience leading integration of technical components with other teams.
