Frigate

Frigate is a self-hosted, open-source video surveillance system.

It is capable of:

  • streaming video from RTSP/RTMP
  • performing real-time object detection or simple motion detection
  • keeping track of video clips and image snapshots with a retention policy
  • integrating with MQTT for event messaging

This page contains some of my notes on using Frigate.

Setup

Frigate is distributed only as a Docker image and must be run as a Docker container.
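
A minimal docker-compose sketch to get started (the paths and shm_size value are placeholders to adjust; see the official installation docs for the full set of options):

services:
  frigate:
    container_name: frigate
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "256mb"  # scale this up with camera count and resolution
    volumes:
      - ./config:/config              # Frigate configuration
      - ./storage:/media/frigate      # recordings and snapshots
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "5000:5000"      # web interface
      - "8554:8554"      # RTSP restream (go2rtc)
      - "8555:8555/tcp"  # WebRTC
      - "8555:8555/udp"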

Object detection

The object detection system requires a detector to be set up. By default, Frigate enables a CPU detector. This isn't recommended: CPU-based detection is slow while consuming a lot of CPU cycles. Frigate recommends a USB Google Coral for this task, but you can also use an NVIDIA GPU with TensorRT (supported on post-Pascal GPUs only).
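
For reference, the default CPU detector corresponds to a config section like this (the detector name and num_threads value here are illustrative):

detectors:
  cpu1:
    type: cpu
    num_threads: 3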

Due to the chip shortage, Google Corals aren't easy to come by. The best alternative is to pick up a cheap NVIDIA Quadro GPU such as the P400. I got a second-hand P400 for around $80 CAD.

TensorRT

See the documentation on this at: https://docs.frigate.video/configuration/object_detectors/#nvidia-tensorrt-detector

According to the official documentation, there is a long list of models that can be used. I find that the newer YOLO models detect best, but they run slower and require more GPU memory. The YOLOv4-tiny model is very fast and light on GPU memory, but it had trouble detecting cars in my garage. I eventually settled on the YOLOv7-320 model, which is slower but more accurate; we'll set it up now.

First, we'll have to convert the YOLO model into a TensorRT model that works with our GPU. Keep in mind that the TensorRT .trt model file has to be built on the same GPU that you will use for detection. We'll generate the model file with NVIDIA's tensorrt container image:

## Store the output models and the tensorrt_demos repo
# mkdir trt-models tensorrt_demos

## Download the tensorrt_models.sh script from Frigate. When run, it will download
## the YOLO models and then generate the .trt model file.
## If you _don't_ want it to download all the YOLO models, you'll have to interrupt
## the script and edit its 'download_yolo.sh' script manually.
# wget https://github.com/blakeblackshear/frigate/raw/master/docker/tensorrt_models.sh

## Launch the tensorrt container with the script and output directories mounted in
# docker run --gpus=all -it -v `pwd`/trt-models:/tensorrt_models -v `pwd`/tensorrt_demos:/tensorrt_demos -v `pwd`/tensorrt_models.sh:/tensorrt_models.sh nvcr.io/nvidia/tensorrt:22.07-py3 bash

## Generate the model file for yolov7-320
container# YOLO_MODELS="yolov7-320" bash /tensorrt_models.sh

With the yolov7-320.trt file generated, we'll configure the TensorRT-based detector with the following configuration lines (this assumes the host trt-models directory is mounted into the Frigate container at /trt-models):

detectors:
  tensorrt:
    type: tensorrt
    device: 0

model:
  path: /trt-models/yolov7-320.trt
  input_tensor: nchw
  input_pixel_format: rgb
  width: 320
  height: 320

If you are using a different model, make sure the width/height values match the input size the model was built with for best results.

Tuning

If you're getting false positives, you may want to adjust the filter thresholds to tune them out.

For example:

cameras:
  front:
...
    detect:
      enabled: True
    objects:
      track:
        - person
        - bus
      filters:
        person:
          min_score: 0.6 # min score for object to initiate tracking (default: 0.5)
          threshold: 0.8 # min decimal percentage for tracked object's computed score to be considered a true positive (default: 0.7)

Alternatively, if you're using a GPU, you may want to swap out the TensorRT model for another one to see if that helps with detection.
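
For example, to try the lighter yolov4-tiny-416 model (another of the models listed in the Frigate docs), regenerate the engine in the same conversion container and update the model settings to match:

container# YOLO_MODELS="yolov4-tiny-416" bash /tensorrt_models.sh

model:
  path: /trt-models/yolov4-tiny-416.trt
  width: 416
  height: 416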

You may also want to add object masks to ignore the areas that are causing the false positives.
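
Object masks are defined per object under the camera's filters. A sketch with illustrative coordinates (x,y pairs tracing the region to ignore):

cameras:
  front:
    objects:
      filters:
        person:
          mask:
            - 0,0,640,0,640,100,0,100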

go2rtc

go2rtc is a service built into the Frigate container that can be used to restream RTSP video from all your cameras. This way, your ffmpeg processes stream from the internal go2rtc server rather than from the cameras directly. The added bonus is that you can use the WebRTC option in the Frigate web interface for higher-FPS live video.

My configuration with my Wyze Cams running the RTSP beta firmware looks like this:

go2rtc:
  streams:
    entrance: rtsp://admin:wyzecam@10.1.x.x:554/live
    backyard: rtsp://admin:wyzecam@10.1.x.x:554/live
    front: rtsp://admin:wyzecam@10.1.x.x:554/live
    garage: rtsp://admin:wyzecam@10.1.x.x:554/live
  webrtc:
    candidates:
      - 10.1.x.x:8555
      - stun:8555
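
With the restream in place, each camera's ffmpeg input points at the local go2rtc server instead of the camera itself. A sketch for one camera, assuming your Frigate version has the preset-rtsp-restream input preset:

cameras:
  entrance:
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/entrance
          input_args: preset-rtsp-restream
          roles:
            - detect
            - record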

General tips

Some tips I wish I had known earlier when using Frigate.

  • Use a GPU or Coral if possible. CPU detection works, but it burns a lot of CPU cycles, which in turn means higher power usage. A cheap Quadro is both faster and more efficient than a CPU. Plus, a GPU lets you use hardware-accelerated video decoding of your camera streams (with ffmpeg), which further reduces CPU usage.
  • Object detection only happens when Frigate detects motion (that is, changed pixels) in a frame. It then sends the portion of the frame where motion was detected for object detection. As a result, using motion masks on areas you aren't interested in can reduce the amount of object detection being performed (see the sketch after this list).
  • If you want to save clips when a specific object appears (such as a 'person' appearing in the garage) while also having Frigate track persistent objects without recording them (such as a 'car' in the garage), the best approach I've found is to create two cameras in Frigate: one for 'person' with clip recording enabled and another for 'car' with clip recording disabled.
    • This was done so that I can count how many 'cars' are in the garage for Home Assistant to act on, without constant video clips of a stationary 'car'.
  • Enable go2rtc. You'll get higher framerates when viewing live video from the Frigate web interface, since it won't be limited to the object detection fps.
  • Enabling object detection automatically has the camera report its objects to Home Assistant (via MQTT). If you're monitoring for 'car', you'll get a 'car' count in Home Assistant which you can automate on.
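
As a sketch of the motion mask tip above, a mask is defined per camera under motion. The coordinates here are illustrative; they trace the area to exclude from motion detection:

cameras:
  front:
    motion:
      mask:
        - 0,0,1920,0,1920,100,0,100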

Troubleshooting

ffmpeg frequently OOMs

One of the cameras' ffmpeg processes periodically chews up all the memory assigned to the container and then gets OOM killed. This issue seems to happen to others as well.

Things I tried that did not help with the OOMs:

  • Disabling the NVIDIA hwaccel flags passed to ffmpeg - ffmpeg continued to OOM periodically
  • Updating go2rtc from 1.2.0 to 1.6.2 - no change

What did stop the OOMs completely was to have that camera's Frigate config stream from the camera directly rather than through go2rtc.
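
In config terms, that means pointing the problem camera's input back at the camera itself instead of the go2rtc restream. A sketch (the camera name and address are placeholders following the same pattern as above):

cameras:
  front:
    ffmpeg:
      inputs:
        - path: rtsp://admin:wyzecam@10.1.x.x:554/live
          roles:
            - detect
            - record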

Speculation: all the other cameras are Wyze Cam v3s with the same firmware and they have no issues. The only difference might be a weak Wi-Fi connection causing unexpected latency or corruption that makes go2rtc and ffmpeg go haywire.