Data Curation for Machine Learning

Improve your machine learning models by using the right data

How it works

Get from big data to high-quality data by curating your unlabeled data

Find data redundancy and bias

Find and remove redundancy and bias introduced by the data collection process to reduce overfitting and improve ML model generalization.
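One common way to surface this kind of redundancy is to compare samples in an embedding space. The sketch below is an illustrative approach, not Lightly's actual implementation: it flags a sample as a near-duplicate when its cosine similarity to an already-kept sample exceeds a threshold.

```python
import numpy as np

def find_near_duplicates(embeddings: np.ndarray, threshold: float = 0.95):
    """Split samples into kept samples and near-duplicates.

    `embeddings` is an (n_samples, dim) array of image embeddings, e.g.
    produced by a self-supervised model. A sample is flagged as a duplicate
    when its cosine similarity to any previously kept sample exceeds
    `threshold`. (Hypothetical helper for illustration.)
    """
    # Normalize rows so a dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept, duplicates = [], []
    for i, vec in enumerate(normed):
        if kept and np.max(normed[kept] @ vec) > threshold:
            duplicates.append(i)
        else:
            kept.append(i)
    return kept, duplicates

# Example: two nearly identical vectors (e.g. consecutive video frames)
# and one distinct vector.
emb = np.array([[1.0, 0.0], [0.999, 0.01], [0.0, 1.0]])
kept, dupes = find_near_duplicates(emb)
# kept -> [0, 2], dupes -> [1]
```

Dropping the flagged duplicates before labeling is what saves labeling cost while leaving the dataset's coverage of the input space intact.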

10x more efficient

Cut data-related costs by removing redundancies

Increased accuracy

Reduce overfitting and improve generalization by diversifying your dataset

Manage everything in one place

Understand your data within minutes after collection and before any data labeling.
We use self-supervised learning combined with active-learning to accelerate your data preparation pipeline.

Data Selection

Most companies only use between 0.1% and 10% of their data for machine learning. Use our state-of-the-art methods to select the most relevant samples. Let Lightly handle the selection of the data for you while you focus on the training process.
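As a rough illustration of diversity-driven selection (a simplified stand-in, not Lightly's actual method), greedy farthest-point sampling repeatedly picks the sample farthest from everything selected so far, so the chosen subset spreads out over the embedding space:

```python
import numpy as np

def select_diverse(embeddings: np.ndarray, n_select: int) -> list:
    """Greedy farthest-point sampling over an (n, dim) embedding array.

    A classic coreset-style heuristic: each step adds the sample whose
    distance to its nearest already-selected sample is largest.
    (Illustrative sketch only.)
    """
    selected = [0]  # start from an arbitrary sample
    # Distance from every sample to its nearest selected sample.
    dist = np.linalg.norm(embeddings - embeddings[0], axis=1)
    while len(selected) < n_select:
        idx = int(np.argmax(dist))  # farthest from the current selection
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(embeddings - embeddings[idx], axis=1))
    return selected

# Four points: two near the origin, two far away in different directions.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [10.0, 0.0]])
chosen = select_diverse(pts, n_select=3)
# chosen -> [0, 3, 2]: the near-duplicate at index 1 is skipped
```

Note how the redundant sample (index 1) is never picked: selecting for coverage implicitly deprioritizes duplicates.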

Smart Data Pool

Keep track of the data your team is working on. Our algorithms help you add only relevant data to the existing pool. We store only non-sensitive meta-information on our servers, so you don't have to worry about transfer costs or privacy issues.

Data Analytics

Use our deep data analytics framework to analyze your raw datasets. Get insights about the distribution, diversity, and other key metrics. Find dataset bias before training and evaluating your model.
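A simple diversity metric of this kind (illustrative only, not Lightly's exact analytics) is the mean pairwise distance between embeddings, estimated over random pairs; a value near zero hints at a highly redundant dataset:

```python
import numpy as np

def dataset_diversity(embeddings: np.ndarray, n_pairs: int = 10_000, seed: int = 0) -> float:
    """Estimate diversity as the mean distance between random embedding pairs.

    A low score suggests many redundant samples; comparing scores across
    data sources can surface collection bias. (Hypothetical metric for
    illustration.)
    """
    rng = np.random.default_rng(seed)
    n = len(embeddings)
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    return float(np.mean(np.linalg.norm(embeddings[i] - embeddings[j], axis=1)))

# A fully redundant set scores 0; a spread-out set scores noticeably higher.
redundant = np.zeros((50, 2))
diverse = np.random.default_rng(1).normal(size=(50, 2))
```

Tracking such a metric per data source, per day, or per sensor makes shifts in the collection process visible before any labeling budget is spent.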

Use Cases


Autonomous Vehicles

Make your vehicles autonomous on the street, at sea, or in the air.


Shipping, Logistics, Airline, Defense & Military


Visual Inspection

Detect defects in infrastructure or manufactured products, or find infected plants.


Railways & Roads, Infrastructure, Manufacturing, Agriculture, Surveillance & Security


Medical Imaging

Find abnormalities in medical images such as X-rays, MRIs, microscope images, and other medical scans.


Health/Life Science, Biotechnology, and Digital Diagnostics/Pathology


Space Data

Improve space products and achieve better results.


Satellite Imaging, Visual Inspection for Space Components, Autonomous Systems


Our Interfaces

Web App
  • <100'000 samples
  • Drag & drop (no coding required)
  • 2048-bit SSL encryption
  • Visual Analytics
Python PIP Package (CLI)
  • <100'000 samples
  • Train custom embedding models using self-supervised learning
  • Option to only upload non-sensitive metadata
On-Premise (Docker)
  • Already used by Fortune 500 companies to process >1'000'000 samples
  • Neither your raw data nor metadata leave your server
  • Analytics reports

Customer Case Studies

AI Retailer Systems

Learn how AI Retailer Systems was able to reduce the data required to train an object detection model by 85% with almost no loss in accuracy thanks to Lightly.

"I was truly amazed once we received the results from Lightly. We knew we had a lot of similar images due to our video feed, but the results showed us how we can work more efficiently by selecting the right data."

Alejandro Garcia, CEO 


"After training a model on the filtered data suggested by Lightly, I saw a dramatic increase in performance on our key metrics. Part of this is certainly because this was the first time we trained a model on any data that we've collected, but I'm fairly certain that performance would not have been as good if we had chosen what data to label at random."

Angelo Stekardis, Computer Vision Lead


"Lightly helped us understand more about our own data gathering process. Through their service, we were able to see that a lot of the data being collected was not meaningful enough for training an accurate model. This led us to change the way we gathered data and ultimately allowed us to create a much more information-dense and higher-quality dataset overall. Needless to say, the performance of our final model was greatly improved."

Nasib Adriano Naimi, Autonomy and Robotics Engineer

Our Blog

Data Preparation Tools for Computer Vision

This article provides a data preparation tool landscape for computer vision. The intention is to give an overview of the available solutions which machine learning engineers can use to build better models.

Sustainable AI and the New Data Pipeline

Deep learning's requirement of Big and Smart Data is currently met by labor-intensive processes of data labeling and cleaning. Self-supervised learning challenges this paradigm by enabling a more sustainable data pipeline.

The Advantage of Self-Supervised Learning

A few personal thoughts on why self-supervised learning will have a strong impact on AI, from recent NLP and computer vision papers.

As seen on

Improve your data
Today is the day to get the most out of your data. Share our mission with the world — unleash your data's true potential.
Contact us