[https://first.wpi.edu/FRC/roborio/release/docs/java/ OpenCV] allows you to do sophisticated processing of the received frames, using various algorithms to filter the images, detect edges, and more. Unfortunately, image processing is very CPU intensive, and the RoboRIO is not really up to the task. The $435 [https://forums.ni.com/t5/FIRST-Robotics-Competition/roboRIO-Details-and-Specifications/ta-p/3494658?profile.language=en RoboRIO] has a 667MHz dual-core processor with 256MB of RAM. For comparison, a $40 [https://www.raspberrypi.org/products/raspberry-pi-3-model-b/ Raspberry Pi 3B] has a 1.2GHz quad-core processor with 1GB of RAM, or nearly 4x the computing power. If you try to do much video processing on the RoboRIO, it slows to a crawl and can't perform its other robot-control duties well. For this reason, 2537, like most other teams, does little video processing on the RoboRIO, instead doing the video processing on a separate platform (e.g. a Raspberry Pi) and sending the concise results (e.g. the target angle) to the RoboRIO.

Choices for video co-processors include:
* Raspberry Pi ($40) [https://www.amazon.com/Raspberry-Pi-MS-004-00000024-Model-Board/dp/B01LPLPBS8/ 3B] or 3B+ (there is also a Raspberry Pi 4, but it requires active cooling)
* NVIDIA Jetson [https://developer.nvidia.com/embedded/jetson-nano-developer-kit Nano] ($100), which can use an [https://www.jetsonhacks.com/2019/04/02/jetson-nano-raspberry-pi-camera/ RPi camera] and has the same GPIO pinout as the Pi (but with lower drive capability)
* Sending the video to the driver-station laptop for processing (consider using tools like [https://wpilib.screenstepslive.com/s/currentCS/m/vision/l/463566-introduction-to-grip GRIP])
* Off-the-shelf solutions: [https://pixycam.com/ PixyCam] ([https://www.amazon.com/gp/product/B07D1CLYD2 $60]), [https://limelightvision.io/ Limelight 2] ($400), etc.

2537 has traditionally used a Raspberry Pi running custom C++ OpenCV software, because Java adds enough overhead that it significantly reduces the achievable processing resolution and framerate. The system is described in detail [VisionFramework here].

When using a co-processor, there are multiple ways to send the results back to the RoboRIO, including:
* [https://wpilib.screenstepslive.com/s/currentCS/m/75361/l/843361-what-is-networktables Network Tables] (for use on a Pi, see [https://www.chiefdelphi.com/t/has-anybody-gotten-networktables-to-work-on-a-raspberrypi/147256/13 here])
* UDP (for an example, see [http://einsteiniumstudios.com/using-the-roborio-with-the-beaglebone.html here])
* PWM (read about [https://wpilib.screenstepslive.com/s/currentCS/m/java/l/599716-counters-measuring-rotation-counting-pulses-and-more Semi-period mode])
* Serial communications

Minimal code sketches illustrating OpenCV processing and the NetworkTables, UDP, and PWM approaches appear at the bottom of this page.

To learn more about vision processing, see [https://wpilib.screenstepslive.com/s/currentCS/m/vision/l/682117-strategies-for-vision-programming ScreenStepsLive] and [https://www.youtube.com/watch?v=ZNIlhVzC-4g this video] from the RoboJackets.
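
To give a taste of what the OpenCV processing mentioned above looks like, here is a minimal Java sketch (a sketch only; the team's production vision code is C++, and the input filename and Canny thresholds below are arbitrary illustrations) that filters a frame and detects edges:

<syntaxhighlight lang="java">
import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class EdgeDemo {
    public static void main(String[] args) {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        Mat frame = Imgcodecs.imread("frame.jpg");  // stand-in for a camera frame

        // Convert to grayscale and blur to filter out pixel noise.
        Mat gray = new Mat();
        Imgproc.cvtColor(frame, gray, Imgproc.COLOR_BGR2GRAY);
        Mat blurred = new Mat();
        Imgproc.GaussianBlur(gray, blurred, new Size(5, 5), 0);

        // Canny edge detection; the two values are the hysteresis thresholds.
        Mat edges = new Mat();
        Imgproc.Canny(blurred, edges, 50.0, 150.0);

        Imgcodecs.imwrite("edges.jpg", edges);
    }
}
</syntaxhighlight>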
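
For the NetworkTables option, here is a minimal sketch assuming the 2019-era ntcore Java API; the table name <code>vision</code> and key <code>targetAngle</code> are made-up examples, not standard names:

<syntaxhighlight lang="java">
import edu.wpi.first.networktables.NetworkTableEntry;
import edu.wpi.first.networktables.NetworkTableInstance;

public class VisionPublisher {
    public static void main(String[] args) throws InterruptedException {
        // On the co-processor, run NetworkTables as a client of the
        // robot's server; startClientTeam resolves the RoboRIO by team number.
        NetworkTableInstance inst = NetworkTableInstance.getDefault();
        inst.startClientTeam(2537);

        NetworkTableEntry angle = inst.getTable("vision").getEntry("targetAngle");
        while (true) {
            double result = 3.7;       // stand-in for the computed target angle (degrees)
            angle.setDouble(result);   // publish the latest result
            Thread.sleep(20);
        }
    }
}

// On the RoboRIO (which runs the NetworkTables server), robot code reads it back with:
//   double targetAngle = NetworkTableInstance.getDefault()
//       .getTable("vision").getEntry("targetAngle").getDouble(0.0);
</syntaxhighlight>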
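
For the UDP option, a sketch using plain Java sockets. The address 10.25.37.2 follows the FRC 10.TE.AM.2 convention for team 2537's RoboRIO, port 5800 is in the 5800-5810 range left open for team use on the field, and the message format here is an arbitrary choice:

<syntaxhighlight lang="java">
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

public class UdpSender {
    public static void main(String[] args) throws Exception {
        byte[] msg = "targetAngle=3.7".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            // Fire-and-forget datagram to the RoboRIO; UDP has no delivery
            // guarantee, which is fine for results that are refreshed every frame.
            DatagramPacket packet = new DatagramPacket(
                msg, msg.length, InetAddress.getByName("10.25.37.2"), 5800);
            socket.send(packet);
        }
    }
}
</syntaxhighlight>

The RoboRIO side would run a matching <code>DatagramSocket</code> bound to the same port and call <code>receive()</code> in a background thread.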
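
Finally, for the PWM option, a sketch of the RoboRIO side using WPILib's <code>Counter</code> in semi-period mode (see the Semi-period mode link above). The idea is that the co-processor encodes its result as the width of a high pulse on a digital line; DIO channel 0 and the pulse-width-to-angle mapping are assumptions you would pick for your own wiring and encoding:

<syntaxhighlight lang="java">
import edu.wpi.first.wpilibj.Counter;

public class PwmResultReader {
    // DIO channel 0: wherever the co-processor's signal wire lands.
    private final Counter counter = new Counter(0);

    public PwmResultReader() {
        // Semi-period mode measures the duration of each high pulse
        // rather than counting edges.
        counter.setSemiPeriodMode(true);
    }

    /** Width of the most recent high pulse, in seconds. */
    public double getPulseWidthSeconds() {
        return counter.getPeriod();
    }
}
</syntaxhighlight>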