You will work on the compute and sensor integration layer inside HoloTotem, HoloTable and HoloCabine — the hardware units that capture and render real-time volumetric humans. This means everything from RGBD camera bring-up and sensor driver development to the real-time multi-camera processing pipelines that feed into our streaming stack.
You will be on-site at our Barcelona lab, working directly with the hardware and getting your hands on prototype revisions before they go into production. This is a role for someone who enjoys the full stack from schematic review to real-time Linux pipelines.
Responsibilities
Integrate and calibrate RGBD camera arrays — colour, depth and infrared channels
Develop and maintain sensor drivers and hardware abstraction layers on embedded Linux
Build real-time multi-camera synchronisation and preprocessing pipelines
Manage hardware bring-up for new device revisions — SBCs, SoCs and companion boards
Profile and optimise compute pipelines under embedded hardware constraints
Write BSP documentation, hardware integration guides and driver test suites
Requirements
4+ years in embedded Linux or firmware development (C/C++)
Experience with USB3 or PCIe camera interfaces, and with V4L2 or proprietary camera SDKs
Hands-on familiarity with RGBD sensors — RealSense, Kinect, Orbbec or similar
Understanding of real-time constraints and task scheduling on embedded Linux
Comfortable reading hardware schematics and working alongside electrical engineers
Nice to have
Experience with ISP pipelines or factory camera calibration (OpenCV, Kalibr, etc.)
Background in SoC bring-up — Qualcomm, NVIDIA Jetson, Rockchip or similar
Prior work on multi-sensor time synchronisation or time-of-flight depth processing
Familiarity with ROS or robotics middleware for sensor fusion
Experience with FPGA prototyping for hardware acceleration