We’re partnering with a high-growth, forward-thinking organization that specializes in digital innovation and marketing across international markets. The company is on an exciting journey, rapidly scaling its capabilities and leveraging advanced technology to deliver cutting-edge solutions. Join a dynamic team within a business that values innovation, supports professional development, and offers exceptional career progression.

The Role:

We’re seeking a Lead Data Engineer to take a hands-on role in designing and delivering robust, real-time data pipelines and infrastructure in a Google Cloud Platform (GCP) environment. The company is particularly interested in candidates with strong expertise in SQL. As the Lead Data Engineer, you’ll play a critical role in shaping their data architecture and driving transformation. You’ll partner closely with engineering, product, and analytics teams to ensure efficient, high-performance data systems that enable the business to thrive in a fast-paced environment.

Key Responsibilities:

  • Design, develop, and maintain scalable data pipelines and infrastructure in a GCP environment.
  • Integrate multiple data sources to ensure seamless data flow across the organization.
  • Build and optimize data models for querying and analytics use cases.
  • Develop fault-tolerant, highly available data ingestion and processing pipelines.
  • Continuously monitor and improve pipeline performance for low-latency and high-throughput operations.
  • Ensure data quality, integrity, and security across all systems.
  • Implement effective monitoring, logging, and alerting mechanisms.
  • Collaborate with product, engineering, and analytics teams to deliver tailored solutions that meet business needs.

About You:

  • Strong hands-on experience in data engineering with expertise in Python.
  • Proven track record of building and managing data pipelines.
  • In-depth experience with Google Cloud Platform (GCP) and its associated tools for data ingestion and processing.
  • Familiarity with distributed streaming platforms such as Kafka or similar technologies.
  • Advanced knowledge of SQL.
  • Experience with data orchestration tools.
  • Ability to optimize and refactor data pipelines for improved performance and scalability.
  • Strong problem-solving skills and the ability to thrive in a collaborative, fast-paced environment.
