Detecting and tracking objects on video

Smart farming animal monitoring

Animal monitoring is a crucial aspect of modern smart farming, leveraging computer vision and AI to enhance animal welfare and farm management. By employing advanced object detection and pose estimation models, farmers can monitor the health, behavior, and well-being of their livestock in real time.

Smart farming systems utilize computer vision to analyze video feeds from cameras placed in the field, enabling farmers to track animal movements, detect signs of distress or illness, and assess overall herd health. These systems can automatically identify individual animals, monitor their behavior, and even detect anomalies such as lameness or injuries.

Smart agriculture crop health monitoring

Object detection models can identify and classify various objects in the field, such as crops, weeds, and pests. This information can be used to optimize resource allocation, reduce chemical usage, and improve overall crop yield. For example, farmers can use object detection to pinpoint areas of a field that require additional irrigation or pest control, allowing for targeted interventions rather than blanket treatments.

By integrating object detection and computer vision into smart farming practices, farmers can enhance productivity, reduce costs, and promote sustainable agriculture. These technologies enable real-time monitoring and analysis, empowering farmers to make data-driven decisions that improve crop health and yield while minimizing environmental impact.

How to detect and track objects on video

Object detection is a computer vision task that involves identifying and locating objects within an image or video. It has numerous applications, including surveillance, autonomous vehicles, and robotics. In this tutorial, we will demonstrate how to use the YOLOv8 object detection model to detect objects in a video using the Abraia Vision SDK.

1. Install the Abraia Vision SDK

You can install the package on Windows, Mac, and Linux:

python -m pip install -U abraia

2. Load and run the object detection model

Import the "Model" and "Tracker" classes from the inference module and load the "yolov8n" detection model. Then use the model's "run" method together with the "Video" class to detect objects in each frame of the video. To show the results on the image, simply use the "render_results" function.

from abraia.inference import Model, Tracker
from abraia.utils import Video, render_results

model = Model("multiple/models/yolov8n.onnx")

video = Video('images/people-walking.mp4')
tracker = Tracker(frame_rate=video.frame_rate)
for frame in video:
    results = model.run(frame, conf_threshold=0.5, iou_threshold=0.5)
    results = tracker.update(results)
    frame = render_results(frame, results)
    video.show(frame)

You can even run the model directly on a camera stream: just use "Video(0)" to read from your webcam.
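The "iou_threshold" parameter passed to the model's "run" method controls how much two detections may overlap before the lower-confidence one is suppressed. That overlap is measured as Intersection over Union (IoU). As a self-contained illustration (independent of the Abraia SDK), a minimal sketch of IoU between two axis-aligned boxes given as (x1, y1, x2, y2):

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))   # partial overlap
print(iou((0, 0, 10, 10), (20, 20, 30, 30)))  # disjoint boxes -> 0.0
```

With iou_threshold=0.5, detections whose boxes overlap more than 50% by this measure are merged into a single result during non-maximum suppression.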

3. Record the YOLOv8 detection results on video

To record the output video, just add the "output" parameter when you create the video object, and use the video's "write" method instead of "show".

from abraia.inference import Model, Tracker
from abraia.utils import Video, render_results

model = Model("multiple/models/yolov8n.onnx")

video = Video('images/people-walking.mp4', output='people-detected.mp4')
tracker = Tracker(frame_rate=video.frame_rate)
for frame in video:
    results = model.run(frame, conf_threshold=0.5, iou_threshold=0.5)
    results = tracker.update(results)
    frame = render_results(frame, results)
    video.write(frame)

Congratulations! You have successfully run the "yolov8n" object detection model on a video. This tutorial provides a basic overview, and you can further explore advanced features, optimize your model, and fine-tune parameters based on your specific use case.
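One common next step is to post-process the tracking results yourself, for example to count how many distinct people passed through the video. The sketch below does not use the Abraia SDK's actual result schema; it assumes hypothetical per-detection dicts with "label", "confidence", and "track_id" keys, which you would adapt to the fields the tracker really returns:

```python
def count_tracks(all_results, label, min_confidence=0.5):
    """Count distinct track ids seen for a label above a confidence threshold."""
    track_ids = set()
    for frame_results in all_results:  # one list of detections per frame
        for det in frame_results:
            if det["label"] == label and det["confidence"] >= min_confidence:
                track_ids.add(det["track_id"])
    return len(track_ids)

# Toy data: three frames of hypothetical tracked detections
frames = [
    [{"label": "person", "confidence": 0.9, "track_id": 1}],
    [{"label": "person", "confidence": 0.8, "track_id": 1},
     {"label": "person", "confidence": 0.6, "track_id": 2}],
    [{"label": "person", "confidence": 0.3, "track_id": 3}],  # filtered out
]
print(count_tracks(frames, "person"))  # track 1 and track 2 pass the filter
```

Because the tracker keeps ids stable across frames, counting unique ids gives you a count of individuals rather than a count of per-frame detections.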

