25 March 2026
Senior Data Engineer

About the Role:
In this role, you will design, develop, and maintain robust and scalable data solutions. You will work closely with stakeholders to translate business requirements into data pipelines, enabling advanced analytics, reporting, and data-driven decision-making.

Key Responsibilities:

  • Design, develop, deploy, and maintain robust data pipelines and ETL (Extract, Transform, Load) processes to transform data into actionable insights
  • Collaborate with stakeholders to understand reporting requirements and translate them into scalable pipelines, queries, and Power BI reports
  • Identify and implement best practices for Spark application development, data ingestion, and data transformation
  • Implement data quality checks to ensure data accuracy, consistency, and integrity
  • Optimise Spark and MS Fabric jobs for performance, scalability, and reliability
  • Collaborate with DevOps teams to automate deployment and management of Spark notebooks and pipelines

Experience:

  • Proven experience as an Apache Spark Engineer, Senior Data Engineer, or similar role, working with large-scale analytics deployments using metadata-driven development frameworks
  • Experience in designing and building solutions with a mindset of reusability, re-runnability, and modularity
  • Experience in modelling and optimising Power BI semantic models and reports
  • Proven expertise in designing, building, operating, and maintaining scalable data warehousing and business intelligence solutions using ETL processes
  • Proven experience with DevOps, DataOps, and CI/CD to develop and deploy data pipelines
  • Knowledge of data ingestion and processing tools and technologies such as Apache Kafka, Confluent Kafka, and Databricks
  • Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams
  • Experience with cloud platforms and services such as Azure or AWS is a plus
  • Insurance/finance domain experience is desirable but not essential

Skills:

  • Administration and support of Microsoft Fabric / Synapse, Snowflake, Databricks, or AWS data stack
  • PySpark (including understanding of Spark internals, not just notebook usage)
  • Basic understanding of how Kafka clients work
  • Strong understanding of data architectures to troubleshoot failures and interpret new project requirements
  • Data extraction, manipulation, and report writing using Power BI and Microsoft SQL Server

Get in touch for a confidential chat on 0211242749, or email me at miguel@consult.co.nz.



Job Details

Job ID (JID):
11301

Location:
Auckland CBD

Category:
IT & Digital

Type:
Contract & Temporary


Scam Alert: We have been made aware of an increased number of scammers on Facebook, Instagram and WhatsApp posing as Consult Recruitment employees. We will never contact you on these platforms about job opportunities. Please do not respond to anyone who does, and report it to us at info@consult.co.nz.