We are an engineering and IT recruitment agency retained to place a Computer Vision Engineer with one of our clients, a fast-growing robotics company developing highly mobile, intelligent robots capable of advanced manipulation and autonomous operation in complex environments. Their mission is to solve some of the toughest challenges in robotics: creating systems that work alongside humans, adapt to dynamic spaces, and improve safety, efficiency, and quality of life in industries ranging from logistics and manufacturing to disaster response and healthcare.

With a strong emphasis on real-world deployment, AI-enabled autonomy, and cutting-edge hardware design, the client’s team is seeking a Computer Vision Engineer to help develop perception systems that enable its robots to see, interpret, and interact with the world around them.

Typical Duties and Responsibilities

As a Computer Vision Engineer, you’ll join a collaborative team of perception and autonomy specialists building the visual intelligence layer of robotic systems. You’ll work at the intersection of software, sensors, and machine learning, developing robust systems that support object detection, pose estimation, semantic segmentation, and 3D scene understanding.

Key Responsibilities Include:

  • Design and implement real-time computer vision algorithms for object recognition, tracking, segmentation, and SLAM (Simultaneous Localization and Mapping)
  • Develop depth and multi-modal perception pipelines using RGB, stereo, LiDAR, and depth cameras
  • Optimize models and vision pipelines for deployment on embedded platforms such as NVIDIA Jetson/Drive, Qualcomm Robotics RB5, or similar edge hardware
  • Utilize deep learning frameworks (e.g., PyTorch, TensorFlow) to develop and train custom CNNs and vision transformers for perception tasks
  • Integrate visual systems with robot control software via ROS/ROS2, ensuring robust data flow from camera input to autonomous decision-making modules (a minimal sketch of this pattern follows this list)
  • Build tools and workflows for data collection, labeling, and validation using popular platforms such as CVAT, Supervisely, or Labelbox
  • Conduct performance tuning, benchmarking, and model quantization to ensure low-latency performance in real-world robotics deployments
  • Collaborate with mechanical, electrical, and autonomy teams to validate vision algorithms in simulation (e.g., Gazebo, Isaac Sim, Unity) and on physical hardware
  • Support continuous integration and test automation for computer vision components in the robotics stack
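
For context, here is a minimal sketch of the ROS 2 integration pattern referenced above: a node that subscribes to camera frames, runs a placeholder vision step, and publishes detections for downstream autonomy modules. It assumes the standard rclpy, cv_bridge, vision_msgs, and OpenCV APIs; the topic names are hypothetical, not specific to the client's stack.

```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Image
from vision_msgs.msg import Detection2DArray
from cv_bridge import CvBridge
import cv2


class PerceptionNode(Node):
    """Skeleton perception node: camera frames in, detections out."""

    def __init__(self):
        super().__init__('perception_node')
        self.bridge = CvBridge()
        # Hypothetical topic names; real deployments would remap these.
        self.sub = self.create_subscription(
            Image, '/camera/image_raw', self.on_image, 10)
        self.pub = self.create_publisher(
            Detection2DArray, '/perception/detections', 10)

    def on_image(self, msg: Image):
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        # Placeholder processing; a real node would run a trained detector
        # (CNN or vision transformer) here and fill in bounding boxes.
        _ = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        detections = Detection2DArray()
        detections.header = msg.header  # preserve timestamps for fusion
        self.pub.publish(detections)


def main():
    rclpy.init()
    rclpy.spin(PerceptionNode())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

In a production pipeline, the placeholder step would be replaced by an optimized model (quantized and accelerated for the embedded targets named above), with latency benchmarked end to end.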

Education

  • Bachelor’s degree in Computer Science, Robotics, Electrical Engineering, or a related field is required
  • Master’s degree or equivalent research experience preferred, especially in areas such as Computer Vision, Robotics, or Artificial Intelligence

Required Skills and Experience

  • 3–5 years of hands-on experience developing computer vision solutions for robotics, embedded systems, or autonomous platforms
  • Strong proficiency in Python and C++, with experience building real-time perception systems
  • Practical experience with OpenCV, CUDA, and GStreamer, especially in latency-sensitive applications
  • Familiarity with ROS or ROS2 for integrating vision algorithms into robotic systems
  • Experience working with 3D sensors (e.g., Intel RealSense, Ouster, Velodyne, ZED) and multi-camera rigs
  • Demonstrated ability to train, evaluate, and deploy models using TensorFlow, PyTorch, or Keras
  • Understanding of geometric computer vision principles: camera calibration, epipolar geometry, depth estimation, optical flow (camera calibration is sketched after this list)
  • Comfort working in Linux environments, including scripting, debugging, and hardware integration
  • Experience with version control (Git), CI/CD pipelines, and modern software engineering best practices
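
As a concrete illustration of the geometric vision fundamentals listed above, below is a minimal chessboard calibration sketch using standard OpenCV calls. The board dimensions and image glob are assumptions for illustration, not client specifics.

```python
import glob

import cv2
import numpy as np

pattern = (9, 6)  # inner-corner count of an assumed calibration chessboard
# 3D object points for one view, with z = 0 on the board plane.
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob('calib_images/*.png'):  # hypothetical image set
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine detected corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

assert obj_points, 'no usable calibration views found'
# Recover the intrinsic matrix K and lens distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('RMS reprojection error:', rms)
print('Intrinsics:\n', K)
```

The recovered intrinsics and distortion model feed directly into stereo rectification, depth estimation, and the multi-camera rigs mentioned above.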

Preferred Qualifications

  • Experience with SLAM, multi-view geometry, or visual-inertial odometry (VIO)
  • Background in 3D reconstruction, mesh generation, or point cloud registration using tools like PCL, Open3D, or MeshLab (a registration sketch follows this list)
  • Exposure to robot simulation environments such as Gazebo, Isaac Sim, or Unity Robotics Hub
  • Familiarity with hardware acceleration techniques (e.g., TensorRT, ONNX Runtime, or GPU-based inference optimization)
  • Contributions to open-source robotics or vision libraries (e.g., OpenCV, ROS perception stack)
  • Experience deploying models on edge platforms or embedded AI chipsets (Jetson Nano/Xavier, Coral Edge TPU, etc.)
  • Prior work in real-world robotics deployments—especially in unstructured or dynamic environments (warehouses, construction, agriculture, etc.)
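
And as a small example of the point cloud work named in the 3D-reconstruction bullet, here is a point-to-point ICP registration sketch using Open3D's registration pipeline; the file names, distance threshold, and identity initialization are illustrative assumptions.

```python
import numpy as np
import open3d as o3d

# Hypothetical overlapping scans to align.
source = o3d.io.read_point_cloud('scan_a.pcd')
target = o3d.io.read_point_cloud('scan_b.pcd')

threshold = 0.05  # max correspondence distance, in the clouds' units
init = np.eye(4)  # start from the identity transform

# Point-to-point ICP refines the rigid transform aligning source to target.
result = o3d.pipelines.registration.registration_icp(
    source, target, threshold, init,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())
print('Fitness:', result.fitness)
print('Estimated transform:\n', result.transformation)
```

In practice, a coarse global registration (e.g., feature-based) would typically seed ICP rather than the identity transform.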