
OpenCV allows you to do sophisticated processing of the received frames using various algorithms to filter the images, detect edges, and more. Unfortunately, image processing is very CPU intensive and the RoboRIO is not really up to the task: the $435 RoboRIO has a 667MHz dual-core processor with 256MB of RAM, while a $40 Raspberry Pi 3B has a 1.2GHz quad-core processor with 1GB of RAM, nearly 4x the computing power. If you try to do significant video processing on the RoboRIO, it slows to a crawl and can no longer perform its other robot-control duties well.
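
To make the "CPU intensive" point concrete, here is a minimal sketch of the kind of per-frame work involved. It is an illustration, not 2537's actual pipeline: it grabs frames from a USB camera, filters them by color in HSV space, and pulls out candidate target contours. The camera index, resolution, HSV range, and area cutoff are placeholder assumptions.

{{{
// Minimal OpenCV frame-processing sketch (not 2537's actual code).
// Assumes a USB camera at index 0 and a bright green target; the HSV
// range and area cutoff below are placeholder values.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <vector>

int main() {
    cv::VideoCapture cap(0);                 // open the first USB camera
    if (!cap.isOpened()) {
        std::cerr << "Could not open camera\n";
        return 1;
    }
    cap.set(cv::CAP_PROP_FRAME_WIDTH, 320);  // keep resolution low to save CPU
    cap.set(cv::CAP_PROP_FRAME_HEIGHT, 240);

    cv::Mat frame, hsv, mask;
    while (cap.read(frame)) {
        // Filter: convert to HSV and keep only bright green pixels
        cv::cvtColor(frame, hsv, cv::COLOR_BGR2HSV);
        cv::inRange(hsv, cv::Scalar(50, 100, 100), cv::Scalar(90, 255, 255), mask);

        // Detect: find the outlines of the remaining blobs
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

        for (const auto& c : contours) {
            if (cv::contourArea(c) < 100.0) continue;   // ignore small noise blobs
            cv::Rect box = cv::boundingRect(c);
            std::cout << "candidate target at x=" << box.x + box.width / 2
                      << " y=" << box.y + box.height / 2 << "\n";
        }
    }
    return 0;
}
}}}

Every stage of this loop touches every pixel of every frame, which is why even a simple pipeline like this can saturate the RoboRIO while a Pi-class co-processor handles it comfortably.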

For this reason, 2537, like most other teams, does little video processing on the RoboRIO; instead, the video is processed on a separate platform (e.g. a Raspberry Pi) that sends concise results (e.g. a target angle) back to the RoboRIO (a minimal sketch of that handoff follows the list below). Choices for video co-processors include:

  • Raspberry Pi 3B or 3B+ (there is also a Raspberry Pi 4, but it requires active cooling)
  • NVIDIA Jetson TX2 ($400) or Jetson Nano ($100) - uses the same cameras and GPIO as the Pi.
  • Sending the video to the driver-station laptop for processing (consider using tools like GRIP)
  • Off-the-shelf solutions: PixyCam ($60), Limelight 2 ($400), etc.
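
To make the "concise results" handoff concrete, here is a hedged sketch of one common approach: the co-processor packs the computed target angle into a small UDP datagram addressed to the RoboRIO. This is only an illustration, not 2537's actual transport (NetworkTables is another popular option); the address 10.25.37.2 follows the usual FRC 10.TE.AM.2 convention for team 2537, and port 5800 is a placeholder.

{{{
// Hedged sketch: send one target-angle reading to the RoboRIO as a small
// UDP datagram. The address and port below are assumptions for illustration;
// NetworkTables would work just as well.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in rio{};
    rio.sin_family = AF_INET;
    rio.sin_port = htons(5800);                       // placeholder port
    inet_pton(AF_INET, "10.25.37.2", &rio.sin_addr);  // RoboRIO's static address

    double targetAngleDegrees = -3.7;                 // produced by the vision loop
    std::string msg = "angle=" + std::to_string(targetAngleDegrees);
    sendto(sock, msg.c_str(), msg.size(), 0,
           reinterpret_cast<sockaddr*>(&rio), sizeof(rio));

    close(sock);
    return 0;
}
}}}

On the RoboRIO side, the robot program only has to parse a few bytes per loop iteration, which is a negligible load compared with processing the video itself.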

Each method has its advantages; 2537 has traditionally used a Raspberry Pi running custom C++ OpenCV software, because Java adds enough overhead that it significantly reduces the achievable processing resolution and framerate. The system is described in detail here.