OpenCV allows you to do sophisticated processing of the received frames using various algorithms to filter the images, detect edges, and more. Unfortunately, image processing is very CPU intensive and the RoboRIO is not really up to the task. The $435 RoboRIO has a 667MHz dual-core processor with 256MB of RAM. For comparison, a $40 Raspberry Pi 3B has a 1.2GHz quad-core processor with 1GB of RAM, nearly 4x the computing power. If you try to do much video processing on the RoboRIO, it slows to a crawl and can't perform its other robot-control duties well.
For this reason, 2537, like most other teams, does little video processing on the RoboRIO. Instead, the video is processed on a separate platform (e.g. a Raspberry Pi) and the concise results (e.g. a target angle) are sent to the RoboRIO. Choices for vision co-processors include:
- Raspberry Pi 3B or 3B+ (there is also a Raspberry Pi 4, but it requires active cooling)
- NVidia Jetson TX2 ($400) or Jetson Nano ($100) - can use the RPi camera and has the same GPIO pinout as the Pi (but with lower drive capability)
- Sending the video to the driver-station laptop for processing (consider using tools like GRIP)
- Off-the-shelf solutions: PixyCam ($60), Limelight 2 ($400), etc.
2537 has traditionally used a Raspberry Pi running custom C++ OpenCV software, because Java adds enough overhead to significantly reduce the achievable resolution and framerate. The system is described in detail here.
When using a co-processor, there are multiple ways to send the results back to the RoboRIO including:
- Network Tables (for use on Pi see here)
- UDP (for an example see here)
- PWM (read about Semi-period mode)
- Serial communications
To learn more about vision processing, see ScreenStepsLive and this video from the RoboJackets.