Senior Machine Learning Engineer

Samsara
Full-time
Remote
United States
$135,000 - $227,000 USD yearly
Software / IT

We’re looking for a motivated Senior Machine Learning Engineer I to join our Safety In-Vehicle Experience team, which shapes how millions of drivers interact with Samsara on the road. From real-time safety alerts and AI-driven insights to intuitive in-cab experiences, our work directly improves driver safety, engagement, and trust.

This role calls for an experienced Edge AI engineer passionate about developing and optimizing ML models for highly constrained embedded environments. You will bridge the gap between research and production by deploying efficient, reliable, and scalable AI models that power the next generation of Samsara’s in-vehicle intelligence.

This is a remote position open to candidates residing in the US.

You should apply if:

  • You want to impact the industries that run our world: The software, firmware, and hardware you build will result in real-world impact—helping to keep the lights on, get food into grocery stores, and most importantly, ensure workers return home safely.
  • You want to build for scale: With over 2.3 million IoT devices deployed to our global customers, you will work on a range of new and mature technologies that drive scalable innovation for customers across the industries that run the world's physical operations.
  • You are a lifelong learner: We have ambitious goals. Every Samsarian has a growth mindset as we work with a wide range of technologies, challenges, and customers that push us to learn on the go.
  • You believe customers are more than a number: Samsara engineers enjoy a rare closeness to the end user, and you will have the opportunity to participate in customer interviews, collaborate with customer success and product managers, and use metrics to ensure our work is translating into better customer outcomes.
  • You are a team player: Working on our Samsara Engineering teams requires a mix of independent effort and collaboration. Motivated by our mission, we’re all racing toward our connected operations vision, and we intend to win—together.

In this role, you will: 

  • Design, optimize, and deploy computer vision and multimodal ML models that run efficiently on constrained edge platforms powering Samsara’s in-vehicle camera systems.
  • Apply advanced model optimization techniques—such as quantization, pruning, and distillation—to achieve real-time inference under strict CPU, memory, and thermal constraints.
  • Partner with ML research and product teams to translate new AI detections into deployable, maintainable edge models.
  • Collaborate with firmware, ML research, and hardware teams to productize our ML runtime pipeline, bringing scalable, reliable, and testable on-device inference to production.
  • Develop performance benchmarking, profiling, and validation frameworks for edge-deployed models to ensure robustness across millions of deployed devices.
  • Drive continuous improvement of our edge ML toolchain and advocate for best practices in model optimization, inference reliability, and deployment efficiency.
  • Mentor peers on efficient inference design and collaborate cross-functionally to accelerate feature delivery for safety and driver experience programs.
  • Champion, role model, and embed Samsara’s cultural principles (Focus on Customer Success, Build for the Long Term, Adopt a Growth Mindset, Be Inclusive, Win as a Team) as we scale globally and across new offices.

Minimum requirements for the role:

  • 5+ years of experience developing and deploying deep learning models for edge, embedded, or real-time systems.
  • Strong background in computer vision or multimodal ML (e.g., 2D/3D CNNs, Transformers) using industry-standard deep learning frameworks.
  • Proficiency in Python and C++, with hands-on experience optimizing inference runtimes and applying model optimization techniques for edge deployment.
  • Deep understanding of performance tuning, including compiler- or DSP-level optimizations, runtime profiling, latency analysis, and memory management on constrained hardware.
  • Familiarity with middleware or streaming frameworks used in real-time perception pipelines.
  • Excellent cross-functional communication and collaboration skills, especially across ML, firmware, and product domains.

An ideal candidate also has:

  • Experience bringing ML infrastructure or runtime systems from prototype to production at scale.
  • Background in multimodal ML (e.g., audio + vision fusion) or event-based detection systems.
  • Experience validating AI models across large, diverse fleets of deployed devices in real-world environments.