Cross-entropy loss, also known as log loss, is a widely used loss function in machine learning, particularly for classification problems. It quantifies the difference between two probability distributions: the true labels and the predicted probabilities from a model. For binary classification, binary cross-entropy is used, measuring the performance of a classification model whose output is a probability value between 0 and 1. For multi-class classification, categorical cross-entropy is employed. The loss increases as the predicted probability diverges from the actual label. Specifically, it penalizes predictions that are confident but incorrect more heavily than predictions that are less confident but still incorrect. Minimizing cross-entropy loss during training encourages the model to output probability distributions that closely match the true label distributions, making it a critical component in training neural networks for classification tasks.
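To make the penalty behavior concrete, here is a minimal NumPy sketch of both variants; the function names and the small clipping constant `eps` are illustrative choices, not a reference to any particular library's API.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    """Average binary cross-entropy between labels in {0, 1} and predicted probabilities."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

def categorical_cross_entropy(y_true_onehot, y_pred_probs, eps=1e-12):
    """Average categorical cross-entropy between one-hot labels and predicted class probabilities."""
    y_pred_probs = np.clip(y_pred_probs, eps, 1.0)
    return -np.mean(np.sum(y_true_onehot * np.log(y_pred_probs), axis=1))

# A confident but wrong prediction is penalized far more heavily
# than an uncertain wrong one, and a confident correct prediction
# incurs almost no loss.
print(binary_cross_entropy(np.array([1.0]), np.array([0.05])))  # ~3.00 (confidently wrong)
print(binary_cross_entropy(np.array([1.0]), np.array([0.45])))  # ~0.80 (uncertain, still wrong)
print(binary_cross_entropy(np.array([1.0]), np.array([0.95])))  # ~0.05 (confidently right)
```

In practice, deep learning frameworks apply the same clipping or compute the loss directly from logits for numerical stability, but the quantity being minimized is the one shown here.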
Data Selection & Data Viewer
Get data insights and find the perfect selection strategy
Learn MoreSelf-Supervised Pretraining
Leverage self-supervised learning to pretrain models
Learn MoreSmart Data Capturing on Device
Find only the most valuable data directly on device
Learn More