OpenCV allows you to do sophisticated processing of the received frames using various algorithms to filter the images, detect edges, and more. Unfortunately, image processing is very CPU-intensive and the RoboRIO is not really up to the task. The $435 RoboRIO [https://forums.ni.com/t5/FIRST-Robotics-Competition/roboRIO-Details-and-Specifications/ta-p/3494658?profile.language=en specifications] show that it has a 667MHz dual-core processor with 256MB of RAM. For comparison, a $40 Raspberry Pi 3B ([https://www.raspberrypi.org/products/raspberry-pi-3-model-b/ specifications]) has a 1.2GHz quad-core processor with 1GB of RAM, or nearly 4x the computing power. If you try to do much video processing on the RoboRIO, you'll find that it slows to a crawl and can't perform its other robot-control duties well. For this reason, 2537, like most other teams, does not do much video processing on the RoboRIO. Instead, the video processing runs on a separate platform (e.g. a Raspberry Pi), which sends only the concise results (e.g. the target angle) to the RoboRIO.

Choices for video co-processors include:

* Raspberry Pi [https://www.amazon.com/Raspberry-Pi-MS-004-00000024-Model-Board/dp/B01LPLPBS8/ 3B] or 3B+ (there is also a Raspberry Pi 4, but it requires active cooling)
* NVIDIA Jetson [https://developer.nvidia.com/embedded/jetson-tx2 TX2] ($400) or Jetson [https://developer.nvidia.com/embedded/jetson-nano-developer-kit Nano] ($100)
* Sending the video to the driver-station laptop for processing
* Off-the-shelf solutions: [https://limelightvision.io/ Limelight 2] ($400), [https://pixycam.com/ PixyCam] ([https://www.amazon.com/gp/product/B07D1CLYD2 $60]), etc.

2537 has traditionally used a Raspberry Pi running custom C++ software, because Java adds enough overhead that it significantly reduces the achievable processing resolution and framerate. The system is described in detail [[VisionFramework|here]].
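To make the "CPU-intensive" point concrete: even a basic edge filter applies a 3x3 kernel at every pixel, so the work grows with resolution times framerate. The sketch below is a pure-Python Sobel gradient on a tiny synthetic image, purely for illustration (a real pipeline would use OpenCV's optimized routines such as cv2.Sobel or cv2.Canny, and 2537's actual code is C++, not Python):

```python
def sobel_edges(img):
    """Sobel gradient magnitude (|gx| + |gy|) of a 2D grayscale image,
    given as a list of lists of ints. Illustrates the per-pixel work an
    edge filter does; borders are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A vertical step edge: left half dark, right half bright.
img = [[0, 0, 255, 255] for _ in range(4)]
edges = sobel_edges(img)
```

Nine multiply-adds per kernel per pixel, at 320x240 and 30fps, is already over 40 million operations per second for this one step, which is why an underpowered processor (or an interpreted inner loop) quickly becomes the bottleneck.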
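The "concise result" mentioned above is often just a heading angle derived from where the target appears in the frame. As a minimal sketch of that conversion, assuming a simple pinhole-camera model (the 320-pixel width and 60-degree field of view below are illustrative values, not 2537's measured ones, and the team's real pipeline is C++):

```python
import math

def pixel_to_angle(target_x, image_width=320, horizontal_fov_deg=60.0):
    """Convert a target's pixel x-coordinate into a heading angle in
    degrees. Positive means the target is right of image center.

    image_width and horizontal_fov_deg are assumed example values;
    they must match the actual camera for real use.
    """
    center_x = image_width / 2.0
    # Focal length in pixels, derived from the horizontal field of view.
    focal_px = center_x / math.tan(math.radians(horizontal_fov_deg / 2.0))
    return math.degrees(math.atan2(target_x - center_x, focal_px))

# A target at dead center gives 0 degrees; one at the right edge
# gives half the field of view.
print(pixel_to_angle(160))
```

A single number like this is cheap to ship to the RoboRIO every frame, which is the whole point of off-loading the heavy per-pixel work to the co-processor.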