Video-based automatic lameness detection of dairy cows using pose estimation and multiple locomotion traits

This study presents an automated lameness detection system that uses deep-learning image processing to extract multiple locomotion traits associated with lameness. Using the T-LEAP pose estimation model, the motion of nine keypoints was extracted from videos of walking cows. The videos were recorded outdoors under varying illumination conditions, and T-LEAP correctly detected 99.6% of the keypoints. The keypoint trajectories were then used to compute six locomotion traits: back posture measurement, head bobbing, tracking distance, stride length, stance duration, and swing duration. The three most important traits were back posture measurement, head bobbing, and tracking distance. For the ground truth, we showed that a thoughtful merging of the observers' scores could improve intra-observer reliability and agreement. Including multiple locomotion traits improved the classification accuracy from 76.6% with a single trait to 79.9% with the three most important traits, and to 80.1% with all six locomotion traits.
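
The abstract describes a pipeline in which keypoint trajectories from T-LEAP are turned into locomotion traits such as head bobbing and stride length. As a rough, non-authoritative illustration of that step, the Python sketch below derives two simple trait proxies from 2-D keypoint trajectories; the function names, the detrending window, and the ground-contact threshold are assumptions made here for illustration and are not the trait definitions used in the study.

```python
"""Illustrative sketch: deriving locomotion-trait proxies from 2-D keypoint
trajectories. All names and thresholds are assumptions, not the paper's."""
import numpy as np


def head_bobbing_amplitude(head_y: np.ndarray, window: int = 15) -> float:
    """Vertical oscillation of the head keypoint, measured as the standard
    deviation of the detrended vertical trajectory (a rough proxy for head
    bobbing; the window size is an arbitrary choice)."""
    trend = np.convolve(head_y, np.ones(window) / window, mode="same")
    return float(np.std(head_y - trend))


def stride_lengths(hoof_x: np.ndarray, hoof_y: np.ndarray,
                   ground_tol: float = 2.0) -> np.ndarray:
    """Approximate stride lengths from a single hoof keypoint trajectory.
    The hoof is assumed to be in stance when its frame-to-frame vertical
    motion is below `ground_tol` pixels; a stride is taken as the horizontal
    displacement between the starts of successive stance phases
    (a simplification, not the study's exact method)."""
    vy = np.abs(np.diff(hoof_y))
    on_ground = np.concatenate(([False], vy < ground_tol))
    # Frame indices where a new stance phase begins (False -> True transition).
    stance_starts = np.flatnonzero(on_ground[1:] & ~on_ground[:-1]) + 1
    return np.abs(np.diff(hoof_x[stance_starts]))


if __name__ == "__main__":
    # Synthetic example: 200 frames of a walking-like hoof and head motion.
    t = np.arange(200)
    hoof_x = 3.0 * t + 40 * np.maximum(0, np.sin(2 * np.pi * t / 50))
    hoof_y = 300 - 20 * np.maximum(0, np.sin(2 * np.pi * t / 50))
    head_y = 100 + 5 * np.sin(2 * np.pi * t / 50)

    print("head bobbing amplitude:", head_bobbing_amplitude(head_y))
    print("stride lengths:", stride_lengths(hoof_x, hoof_y))
```

In the same spirit, stance and swing duration could be read off the lengths of the detected stance and non-stance phases, but the exact rules used in the paper are not given in this abstract.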

Bibliographic Details
Main Authors: Russello, Helena, van der Tol, Rik, Holzhauer, Menno, van Henten, Eldert J., Kootstra, Gert
Format: Article/Letter to editor
Language: English
Subjects: Cows, Deep-learning, Detection, Lameness, Locomotion, Pose-estimation
Online Access: https://research.wur.nl/en/publications/video-based-automatic-lameness-detection-of-dairy-cows-using-pose
