Enso Recruitment is on the hunt for a Data Engineer to join a dynamic community of data experts, working closely with our prestigious client. This role offers the chance to be a key contributor to the Data Analytics portfolio within the client's organization.
As a Data Engineer, you'll play a vital role in expanding and optimizing data and data pipeline architecture. You'll be responsible for improving data flow and collection and for supporting cross-functional teams. If you're passionate about building data solutions from the ground up and optimizing them, we want to hear from you. Your responsibilities will include collaborating with software developers, database architects, data analysts, and data scientists to ensure a consistent data delivery architecture across projects.
Key Responsibilities:
Provide technical expertise in designing, developing, and supporting data solutions for the client's business partners.
Build and maintain data pipelines using Snowflake, dbt, Azure Cloud, Python, Docker, and SQL.
Engage in Agile/DevOps methodologies, utilizing Azure DevOps for development.
Translate user requirements into actionable analytics, including data modeling.
Act as a subject matter expert for data integration from enterprise applications, enhancing data solutions for global stakeholders.
Who We're Seeking:
Enso Recruitment is looking for intelligent, driven individuals with a passion for their work and strong teamwork skills. While the ideal candidate should possess the skills and experience listed below, we welcome those with a desire to learn and grow.
At least 4 years' experience in Data Engineering or Software Engineering with a data focus.
Proficiency in a programming language (e.g., Python, Java, Go).
Experience with public cloud platforms (Azure, AWS, or GCP).
Proficiency in writing complex SQL queries.
Knowledge of ETL tools, data modeling, data warehousing, and large-scale datasets.
Familiarity with DevOps and lean development principles, including Continuous Integration and Continuous Delivery/Deployment.
Experience designing and implementing modern data pipelines and data streams.
Nice to Have:
Familiarity with dbt (Data Build Tool).
Experience with containerization (Docker, Kubernetes).
Knowledge of big data technologies (e.g., Kafka, Spark).
Experience supporting data and processing needs of enterprise applications.
Background in semiconductor or high-tech manufacturing.
Familiarity with BI tools (e.g., Power BI, Tableau, Looker).