What is Data Curation and Why is it Important?


Humans generate enormous amounts of data through everything from mobile applications to edge computing devices such as smartwatches and smart cars. These devices produce massive volumes of data in a variety of formats, usually referred to as Big Data. Big Data is large and varied, spanning structured, unstructured, and semi-structured formats. To gain insights from this data and train machine learning models on it, it must be curated; otherwise, the data is too messy to extract information from or to train a robust machine learning model with. But what is data curation? In short, data curation refers to the process of organizing, filtering, and managing data so that users and businesses can obtain insights from it. 

In this post, we'll talk about data curation and why you should care about it. 

More specifically, we'll cover 

  • what data curation is,
  • how data curation works,
  • the problems associated with uncurated data,
  • how to apply data curation, and
  • modes of curation, including manual and AI-based approaches.

What Is Data Curation?

The word curation comes from the Latin curare, which means "to take care of." To gain a deeper understanding of data curation, consider a curator in a museum or library. Their main responsibility is to acquire, arrange, and present a collection of artwork or books in such a way that people may readily access it. 

The same principles apply to data curation in machine learning. If the data is not well filtered, distributed, and annotated, the machine learning model will struggle to learn from it. 

Data curation is the process of creating, organizing, and managing data collections so anyone who's looking for them can find them. It entails gathering, arranging, filtering, indexing, annotating, and categorizing data for users within a company or for public use. 

Data curation is the active and continued management of data throughout its lifespan. 

How Data Curation Works

Data curation works by iterating through the following processes to manage the data's lifecycle effectively: 

Data collection

This entails gathering the data required to complete a task. Without data, there is nothing to curate. 

Metadata

This process involves describing the data, or collecting additional metadata that describes the collected data. To describe data, one must use appropriate metadata. Such descriptions should always include information about the data's context, quality, and state or features. 
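A metadata record can be as simple as a structured description kept alongside the data. The sketch below is purely illustrative: the field names and values are hypothetical, and real projects often follow established vocabularies such as Dublin Core or schema.org.

```python
# A minimal, hypothetical metadata record describing a collected dataset.
# All field names and values here are illustrative, not a standard.
dataset_metadata = {
    "name": "road-scenes-v1",
    "description": "Dashcam images collected for object detection",
    "source": "fleet of test vehicles",
    "created": "2022-06-01",
    "format": "JPEG, 1920x1080",
    "num_samples": 120_000,
    "quality_notes": "~2% of frames are blurred or overexposed",
    "license": "internal use only",
}

def describe(meta):
    """Render a short human-readable summary of a metadata record."""
    return f"{meta['name']}: {meta['num_samples']} samples ({meta['format']})"

print(describe(dataset_metadata))
```

Keeping context, quality, and provenance notes in one place like this is what lets a later user (or curator) judge whether the data fits their task.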

Understanding Data

This entails looking for quality-assurance and usability issues, such as missing values, skewed data distributions, poor label quality, and data bias. It also involves attempting to detect and extract "hidden information" from the data that could be useful. 

Furthermore, understanding data includes assessing whether the data documentation is adequate for a user with qualifications comparable to the data author's to comprehend and reuse the data. You might think of data documentation as a data guide. It contains all of the information needed to understand the data. It explains, among other things, how the data was created, its structure, and its contents. 
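Checking for missing values and inspecting the label distribution can be sketched in a few lines. The toy records and field names below are hypothetical, assuming a simple image-classification dataset:

```python
from collections import Counter

# Toy records standing in for a collected dataset; None marks a missing value.
records = [
    {"image": "img1.jpg", "label": "cat"},
    {"image": "img2.jpg", "label": "dog"},
    {"image": "img3.jpg", "label": None},   # missing label
    {"image": "img4.jpg", "label": "cat"},
    {"image": "img5.jpg", "label": "cat"},
]

def profile(rows, field):
    """Count missing values and the distribution of a field's values."""
    values = [r[field] for r in rows]
    missing = sum(v is None for v in values)
    distribution = Counter(v for v in values if v is not None)
    return missing, distribution

missing, dist = profile(records, "label")
print(f"missing labels: {missing}")         # 1
print(f"label distribution: {dict(dist)}")  # {'cat': 3, 'dog': 1} -> imbalanced
```

A profile like this immediately surfaces the issues named above: one missing label and a class imbalance that would bias a model toward predicting "cat".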

Curation

The curation process involves cleaning or wrangling the data, validating that it is free of missing values, assigning an information representation, and ensuring an acceptable data structure or file format. It also includes, for example, rebalancing distributions, filling gaps by collecting more data or generating synthetic data, and fixing label mistakes. 

Annotation

The data is transformed into a format that both the user and the model can understand. For example, if you want to develop a model that predicts whether a picture shows a cat or a dog, you'll need to enrich the images with data annotations, so-called labels, so that the model can interpret them. Most people have already done data annotation at least once in their lives by filling out captchas on websites.
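In practice, annotation often ends with class names being mapped to the numeric labels a model consumes. The mapping and filenames below are hypothetical, continuing the cat-vs-dog example:

```python
# Hypothetical label mapping for the cat-vs-dog example above.
CLASS_TO_ID = {"cat": 0, "dog": 1}

annotations = [
    ("img1.jpg", "cat"),
    ("img2.jpg", "dog"),
    ("img3.jpg", "cat"),
]

def encode(pairs, mapping):
    """Convert (filename, class-name) pairs into (filename, label-id) pairs."""
    return [(fname, mapping[name]) for fname, name in pairs]

print(encode(annotations, CLASS_TO_ID))
# [('img1.jpg', 0), ('img2.jpg', 1), ('img3.jpg', 0)]
```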

Evaluation

In this phase, you check whether the data is helping the machine learning algorithm learn appropriately. By training the algorithm on the curated data, you can see how well it performs based on what it learned. Based on the results, you can then adjust the data curation process. 
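The simplest form of this check is holding out part of the data, training something on the rest, and measuring accuracy. As a minimal sketch (with a deliberately trivial majority-class "model" standing in for a real one):

```python
from collections import Counter

def majority_baseline(train_labels):
    """A trivial 'model' that always predicts the most common training label."""
    return Counter(train_labels).most_common(1)[0][0]

def accuracy(predictions, truths):
    """Fraction of predictions that match the ground truth."""
    correct = sum(p == t for p, t in zip(predictions, truths))
    return correct / len(truths)

# Hold out part of the curated data to check whether it actually helps.
train = ["cat", "cat", "dog", "cat"]
test = ["cat", "dog", "cat"]

model = majority_baseline(train)  # predicts "cat"
preds = [model] * len(test)
print(f"baseline accuracy: {accuracy(preds, test):.2f}")  # 0.67
```

If a real model trained on the curated data can't beat a baseline like this, that is a strong signal to revisit the earlier curation steps.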


Problems Associated with Uncurated Data

Looking at the problems associated with uncurated data can best explain why data curation is important. 

With uncurated data, it's harder to access the right information, and as a result machine learning models perform worse. 

Let's continue with the library scenario. A book that's not properly cataloged and indexed will not be placed on the appropriate shelf. Therefore, readers who wish to get their hands on that information will have a hard time doing so. Additionally, having many similar books on one shelf won't help a reader learn more. 

The same scenario occurs with uncurated data. Users who want to access the data will have a difficult time doing so. Uncurated data is also difficult to interpret: because it's neither structured nor organized, users interacting with it may find it challenging to gain insights from it. The same is true for machine learning models that learn from data. The data should be diverse and accurately annotated for a model to learn the most from it. 

Applying Data Curation

Data curation is applied where Big Data is concerned. 

For example, biomedical companies such as pharmaceutical companies use data curation to shorten the time it takes to deliver drugs to market and to cut costs. 

Car manufacturers can develop safer autonomous driving and driving assistance systems because they are able to train their machine learning systems with better curated data. 

Media industries also apply data curation, using it to organize large, unstructured data collections and to improve information accessibility and visibility. In computer vision, data curation is used to organize and structure large image and video datasets. 

Data curation also is applied in biocuration, which is a branch of biology devoted to organizing biomedical data and information into spreadsheets, tables, and graphs. 

Modes of Data Curation

There are two modes of data curation: manual and AI-based. 

Manual Curation

Manual data curation entails hiring a curator to collect, maintain, and organize data that's accessible anytime it's needed. As data becomes more tedious and voluminous, this technique can be costly and time-consuming. It can also be prone to mistakes. 

Furthermore, it can be difficult to maintain and may result in difficulties for a curator who wants to work with it. 

AI-Based Curation

AI-based curation involves employing an AI-based solution tool to perform a curator's work. In this case, an AI tool completes data curation, thus making it more efficient and faster to access. 

AI tools make it easy to work with huge volumes of data. When employing an AI tool, a curator doesn't have to worry about the data's volume or complexity. One such data curation tool is Lightly, a machine learning tool that scales to tens of millions of samples and uses self-supervised learning to locate clusters of similar data within a dataset. 
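To give an intuition for what "locating clusters of similar data" means (this sketch is a generic illustration, not Lightly's actual algorithm or API): samples are first mapped to embedding vectors, and similar samples end up close together, so a clustering step can group them. Below is a tiny hand-rolled 2-means clustering on made-up 2-D "embeddings"; real embeddings have hundreds of dimensions and real tools use far more sophisticated methods.

```python
import math

def two_means(points, iters=10):
    """A tiny 2-means clustering sketch: seed one centroid at the first
    point and one at the point farthest from it, then alternate between
    assigning points and recomputing centroids."""
    a = points[0]
    b = max(points, key=lambda p: math.dist(p, a))
    centroids = [a, b]
    assignment = []
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        assignment = [
            0 if math.dist(p, centroids[0]) <= math.dist(p, centroids[1]) else 1
            for p in points
        ]
        # Move each centroid to the mean of its assigned points.
        for c in (0, 1):
            members = [p for p, lbl in zip(points, assignment) if lbl == c]
            if members:
                centroids[c] = tuple(sum(d) / len(members) for d in zip(*members))
    return assignment

# Two obvious groups of made-up 2-D "embeddings".
embeddings = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0), (5.0, 5.1), (5.2, 4.9)]
print(two_means(embeddings))  # [0, 0, 0, 1, 1]
```

Once clusters of near-duplicate samples are identified, a curator can keep a few representatives from each cluster instead of labeling thousands of redundant images.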

Conclusion

Data curation basically refers to managing Big Data during its lifetime. 

Data curation is a critical consideration when working with Big Data. It's often desirable to employ AI-based techniques to obtain accurate and efficient outcomes. Lightly is an AI-based data curation tool that works with machine learning, computer vision, autonomous vehicles, video analytics, video inspection, and geospatial data. 

Sign up for free here and try it out! 

This post was written by Ibrahim Ogunbiyi. Ibrahim is an entry-level IoT enthusiast and a machine learning engineer with skills in Python, C++, data analysis, data visualization, and machine learning algorithms. He is also a technical author.
