Overview of the role
You will be responsible for defining and executing R&D work to implement algorithms for visual, lidar and radar perception for our novel uncrewed air vehicle, Stork.
As a Principal Machine Learning/Computer Vision Engineer, you will provide expertise in modern computer vision and deep learning methods for perception systems. You will be able to demonstrate experience deploying systems in the real world as a product or service, rather than just proofs of concept working in a lab or demo setting.
If you’re excited to work on novel sensing intelligence in the aerial space and to get your algorithms deployed on cutting-edge aerial robotics, then this role is for you. We’re starting to build out the wider Autonomy team, so it’s a chance not just to contribute but to shape future direction.
Responsibilities… what will you do?
As a Principal Machine Learning/Computer Vision Engineer you will be responsible for:
- Developing and investigating visual perception algorithms for the aircraft, including landing-site detection, airborne detection and situational awareness
- Conducting R&D into lidar and other multimodal sensing (e.g. radar)
- Progressing R&D efforts into implementation and deployment
- Contributing to dataset capture for development and validation
- Helping develop the approach to validating and verifying ML/vision systems for safe flight
- Supporting simulator development for perception systems
- Supporting flight testing of perception systems
- Providing support to the Autonomy team to overcome complex issues
- Liaising with key stakeholders, including mechanical, systems, electronics, testing and verification, and project management
- Being a positive role model, behaving proactively to improve our processes and ways of working, and providing mentorship to other engineers
Essential skills and experience… what are we looking for?
You will have the following skills and experience:
- Experience with deployed machine learning or computer vision: you’ve implemented systems working on real data that have been part of a product or service operating in the real world
- Deep knowledge in one or more of semantic segmentation, object detection, motion estimation, calibration, multi-view geometry or 3D vision
- Experience of the entire design and implementation process for ML systems, from early proof of concept through data collection, labelling, training, implementation and performance analysis to production deployment and monitoring
- Strong skills in at least one major deep learning framework (e.g. PyTorch, TensorFlow) or common computer vision/sensing libraries (e.g. OpenCV, PCL)
- Strong C++ or Python skills with experience in a commercial or team setting.
- Comfort starting from scratch: planning, designing and implementing new systems
- A demonstrable proactive approach to resolving complex issues
- Excellent communication skills – written and oral
Desirable skills and experience:
- R&D Experience in an academic or industrial setting.
- Lidar or radar systems experience.
- Robotics or UAV experience.
- Use of simulators for developing and validating systems.
- An understanding of MLOps concepts.
Benefits… what do you get?
- Competitive salary up to £80k, dependent on experience
- EMI share options scheme
- Flexible working
- 25 days annual leave, bank holidays, plus 3 days additional leave for Christmas closure
- Company discount scheme
- Pension scheme
- Career Development Framework to support your professional development
This role is advertised as full-time, but we want to hire the best person and would therefore consider part-time options.
Note to all applicants
Due to the security classification of the work we do, we will undertake a routine Baseline Personnel Security Standard check on the successful applicant. This is a standard process which includes reference and career-history checks.
For full details of how we will use your personal data please see our Recruitment Privacy Statement available at www.animal-dynamics.com/rps