Recall is a classification performance metric that measures how well a model identifies all positive instances. Formally, recall = TP / (TP + FN): the fraction of actual positive cases that the model correctly predicts as positive. High recall means the model misses very few positives, i.e., it produces few false negatives. Recall is also known as sensitivity or the true positive rate (TPR). This metric is critical in scenarios where failing to detect a positive case is costly (e.g., medical diagnostics or fraud detection), as it emphasizes capturing as many positives as possible. There is often a trade-off between recall and precision: maximizing recall can increase false positives, so the appropriate balance depends on the application's requirements.
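As a minimal sketch of the formula above, the toy example below computes recall both by hand from the TP and FN counts and with scikit-learn's `recall_score`; the labels are made up for illustration.

```python
from sklearn.metrics import recall_score

# Hypothetical ground-truth labels and model predictions (1 = positive, 0 = negative)
y_true = [1, 1, 1, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 1, 0, 0]

# Count true positives (actual 1, predicted 1) and false negatives (actual 1, predicted 0)
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

# Recall = TP / (TP + FN)
print(tp / (tp + fn))                 # 0.6, computed manually
print(recall_score(y_true, y_pred))   # 0.6, same value via scikit-learn
```

Note that the two false positives in `y_pred` do not affect recall at all; they would lower precision instead, which is exactly the trade-off described above.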