- As a DevOps Engineer, you will undertake a range of duties relating to the development of data capabilities, including the Platform, Data Structures, Pipelines, tools and code
- Be accountable for the delivery of effective and efficient data capabilities that enable and optimise the data structures stakeholders use to perform their activities
- Deliver the detailed design, development and build of the data structures required by the stakeholders to perform all their activities (e.g. Data Quality, Decisioning, Analytics and reporting)
What Will You Be Doing? & Role Accountabilities
- Complete detailed and thorough impact assessments of stakeholder requirements to enable the design and build of effective and efficient data capabilities
- Perform requirements gathering and documentation for development activities, ensuring accuracy, clarity and appropriate levels of brevity or detail as required
- Drive development through design and build to a successful conclusion, including supporting, and often devising and completing, performance and system testing of the delivered development, and gaining sign-off and user acceptance from the appropriate stakeholders
- Deliver data engineering activities for data structures (e.g. design and build)
- Collaborate actively with stakeholder teams to translate business requirements into technical solutions
- Deliver consistent practices for the management of unit and integration testing, including defining, performing and documenting the appropriate testing in the relevant environments
- Ensure that all development maintains the code repository, version control and audit trails for all data structures
- Act as lead contributor to the knowledge management framework for all development activities, ensuring data structure documentation is published to give Customer Insight teams a clear understanding of the data structures
- Act as the lead on specific recurring problems, determining and resolving their root causes
About You: Knowledge & Experience
- Extensive knowledge of Data Engineering
- Experience building and maintaining data pipelines on cloud-based big data platforms
- Expertise in relational and non-relational databases and data management
- Expert-level programming and coding skills across multiple traditional and newer technologies
- Demonstrable experience translating business terminology into technical terms, and vice versa
- Knowledge of typical data platform uses and activities, including data quality, decisioning and analytics
- Experience implementing and maintaining Continuous Integration and Delivery (CI/CD) pipelines and the associated technologies
- Professional experience and expertise as a Data Engineer on big data platforms, using multiple technologies (e.g. SQL, Spark, Python, Git, HDFS, Hadoop, Hive, CI/CD, MapReduce) to deliver innovative solutions that enable stakeholder teams to use curated data
- A systematic, disciplined and analytical approach to problem solving, with close attention to detail
- Good communication, presentation, explanation and influencing skills
- Experienced in process, project, change and time management, with strong problem-solving skills
- Excellent prioritisation and organisational skills
At Centrica we embrace diversity and actively seek to attract individuals with unique backgrounds and perspectives. To build a more sustainable future, we need the best team: a team with a diverse mix of people and skills, where everyone feels welcome and able to succeed. We are dedicated to helping close the diversity gap across the technology sector and would love to see more women, people of colour and LGBTQ+ employees, as well as people from a variety of cultures and ethnicities, veterans and the differently abled. Supporting diversity and inclusion is a big part of who we are; we are not looking for people to fit into our culture but to add to it!
PLEASE APPLY ONLINE by hitting the 'Apply' button.
Applications will ONLY be accepted via the 'Apply' button.
This role is being handled by the Centrica recruitment team and NO agency contact is required.