Interpreting the Report

General Information

This part of the report summarizes the key numbers of the filtering process:

  • How long did the filtering process take?
  • How many samples have been removed?
  • How much money did I save?

The goal is to give you a simple overview of the most important numbers. The estimated savings deserve a closer look, so we explain them in more detail below:

Data Annotation Savings:

Computed using the following formula:

            savings = #samples x annotation_cost

For the annotation cost, we use typical industry figures per sample:

  • Classification: $0.30
  • Object Detection: $1.20
  • Segmentation: $6.00

You can customize the report with your own internal figures.
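
To make the formula concrete, here is a minimal Python sketch of the calculation. The function and variable names are illustrative assumptions, not part of any product API:

    # Minimal sketch: illustrative names, not an official API.
    DEFAULT_ANNOTATION_COST_USD = {  # industry figures per sample, from above
        "classification": 0.30,
        "object_detection": 1.20,
        "segmentation": 6.00,
    }

    def annotation_savings(num_removed_samples, task, cost_per_sample=None):
        # savings = #samples x annotation_cost
        if cost_per_sample is None:
            cost_per_sample = DEFAULT_ANNOTATION_COST_USD[task]
        return num_removed_samples * cost_per_sample

    # Example: filtering out 100k redundant samples before object detection labeling
    print(annotation_savings(100_000, "object_detection"))        # 120000.0 USD
    # Same run with a custom internal rate of $0.90 per sample:
    print(annotation_savings(100_000, "object_detection", 0.90))  # 90000.0 USD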

CO2 Savings:

Computed using the following formula:

            savings = #samples x CO2_emission_per_sample

We derive the CO2 emission per sample from Energy and Policy Considerations for Deep Learning in NLP (Strubell et al., 2019).

Using the reported emission factor of 0.954 lbs CO2e per kWh together with average training times for common models, we can estimate the CO2 emission per sample. Note that this factor is given in pounds per kilowatt-hour, whereas CO2 equivalents (CO2e) are typically reported in kilograms. Since one pound is about 0.45 kg, we end up with 0.4289 kg CO2e/kWh.

For the training time and power consumption, we use the following numbers from our experiments (note that we don't train models to peak accuracy as in research, but rather aim for a good balance between training time and accuracy):

  • Training an image classifier on 800k samples for 8 days on 1 P100 GPU --> 24.7kg CO2e (8 days x 24h x 300W x 0.4289 kg CO2e/kWh)
  • Training an object detection model on 110k samples for 4 days on 1 P100 GPU --> 12.35kg CO2e (4 days x 24h x 300W x 0.4289 kg CO2e/kWh)
  • Training an image segmentation model on 3k samples for 2 days on 1 P100 GPU --> 6.18kg CO2e (2 days x 24h x 300W x 0.4289 kg CO2e/kWh)
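
The same arithmetic can be expressed as a short Python sketch; the constants simply mirror the numbers above and are not an official API:

    # Minimal sketch of the energy-to-CO2e arithmetic above.
    CO2E_KG_PER_KWH = 0.4289  # emission factor derived from 0.954 lbs CO2e/kWh
    GPU_POWER_KW = 0.3        # one P100 GPU drawing 300 W

    def training_co2e_kg(days, num_gpus=1, power_kw=GPU_POWER_KW):
        # days x 24h x power (kW) x emission factor (kg CO2e/kWh)
        return days * 24 * num_gpus * power_kw * CO2E_KG_PER_KWH

    print(training_co2e_kg(8))  # classification:   ~24.7  kg CO2e
    print(training_co2e_kg(4))  # object detection: ~12.35 kg CO2e
    print(training_co2e_kg(2))  # segmentation:     ~6.18  kg CO2e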

The resulting per-sample CO2 emissions are:

  • Classification: 0.0000308 kg
  • Object detection: 0.00011 kg
  • Segmentation: 0.0021 kg
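
Putting everything together, a minimal sketch (again with illustrative names) derives the per-sample emissions and applies the CO2 savings formula from above:

    # Minimal sketch: per-sample emission = training CO2e / dataset size.
    PER_SAMPLE_CO2E_KG = {
        "classification": 24.7 / 800_000,     # ~0.0000308 kg
        "object_detection": 12.35 / 110_000,  # ~0.00011 kg
        "segmentation": 6.18 / 3_000,         # ~0.0021 kg
    }

    def co2_savings_kg(num_removed_samples, task):
        # savings = #samples x CO2_emission_per_sample
        return num_removed_samples * PER_SAMPLE_CO2E_KG[task]

    # Example: removing 100k samples from an object detection training set
    print(co2_savings_kg(100_000, "object_detection"))  # ~11.2 kg CO2e saved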

As expected, the CO2 emission scales only with the number of GPUs and the number of training days. The three scenarios above correspond to typical setups for the reported dataset sizes and training times: classification on ImageNet with a ResNet-50, object detection on MS COCO with a RetinaNet, and segmentation on Cityscapes with an ENet.
