Data version control

Data version control is a method of working with data sets. It is similar to the version control systems used in traditional software development, but is optimized to allow better processing of data and collaboration in the context of data analytics, research, and any other form of data analysis. Data version control may also include specific features and configurations designed to facilitate work with large data sets and data lakes.[1]

History

Background

As early as 1985, researchers recognized the need for defining timing attributes in database tables, which would be necessary for tracking changes to databases.[2] This research continued into the 1990s, and the theory was formalized into practical methods for managing data in relational databases,[3] providing some of the foundational concepts for what would later become data version control.

In the early 2010s the size of data sets was rapidly expanding, and relational databases were no longer sufficient to manage the amounts of data organizations were accumulating. The Apache Hadoop ecosystem, with HDFS as its storage layer, and later object storage became dominant in big data operations.[4] Research into data management tools and data version control systems increased sharply, along with demand for such tools from academia and from the private and public sectors.[5]

Version controlled databases

The first versioned database was proposed in 2012 for the SciDB database; the proposal demonstrated that chains and trees of different versions of a database could be created while reducing both the overall storage size and the access times associated with previous methods.[6] In 2014, a proposal was made to generalize these principles into a platform that could be used for any application.[7]

In 2016, a prototype for a data version control system was developed during a Kaggle competition. This software was later used internally at an AI firm, and eventually spun off as a startup.[8] Since then, a number of data version control systems, both open and closed source, have been developed and offered commercially,[9] with a subset dedicated specifically to machine learning.[10]

Use cases

Reproducibility

A wide range of scientific disciplines have adopted automated analysis of large quantities of data, including astrophysics, seismology, biology and medicine, social sciences and economics, and many other fields. The principle of reproducibility is an important aspect of formalizing findings in scientific disciplines, and in the context of data science it presents a number of challenges. Most datasets are constantly changing, whether through the addition of more data or through changes in the structure and format of the data, and small changes can have significant effects on the outcome of experiments. Data version control allows the exact state of a data set at a particular moment in time to be recorded, making it easier to reproduce and understand experimental outcomes.[11] If data practitioners can only know the present state of the data, they may face challenges such as difficulty debugging problems or complying with data audits.
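
The underlying mechanism can be illustrated with a short Python sketch. The dataset directory ("data/"), the metadata directory (".data_commits"), and the function names are illustrative assumptions rather than the interface of any particular tool: the state of a data set is recorded as a content-addressed manifest, and the hash of that manifest serves as a commit identifier that an experiment can later cite.

    # Minimal sketch: record the exact state of a dataset directory as a
    # content-addressed "commit". Real data version control tools differ in detail.
    import hashlib
    import json
    from pathlib import Path

    def file_digest(path: Path) -> str:
        """Return the SHA-256 digest of a file's contents."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def snapshot(dataset_dir: str, manifest_dir: str = ".data_commits") -> str:
        """Record every file's path and digest; the manifest's hash is the commit id."""
        root = Path(dataset_dir)
        files = sorted(p for p in root.rglob("*") if p.is_file())
        manifest = {str(p.relative_to(root)): file_digest(p) for p in files}
        payload = json.dumps(manifest, sort_keys=True).encode()
        commit_id = hashlib.sha256(payload).hexdigest()
        out = Path(manifest_dir)
        out.mkdir(exist_ok=True)
        (out / f"{commit_id}.json").write_bytes(payload)
        return commit_id

    # commit_id = snapshot("data/")  # cite this id when reporting an experiment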

Development and testing

Data version control is sometimes used in the testing and development of applications that interact with large quantities of data. Some data version control tools allow users to create replicas of their production environment for testing purposes. This approach allows them to test data integration processes such as extract, transform, load (ETL) and to understand the changes made to the data without negatively affecting consumers of the production data.
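
A hedged illustration of this workflow in Python (the "replica/" and "etl_output/" paths, the CSV layout, and the transform step are assumptions for the example, not features of a specific tool): the ETL step is exercised against a replica of the production data, so the production copy is never touched.

    # Sketch: run an example ETL transform against a versioned replica of
    # production data and write the result to a separate output location.
    import csv
    from pathlib import Path

    def transform(row: dict) -> dict:
        """Example transform: normalize a column and drop empty records."""
        if not row.get("value"):
            return {}
        row["value"] = row["value"].strip().lower()
        return row

    def run_etl(source_dir: str, dest_dir: str) -> int:
        """Extract CSV rows from the replica, transform them, and load the result."""
        Path(dest_dir).mkdir(parents=True, exist_ok=True)
        written = 0
        for src in Path(source_dir).glob("*.csv"):
            with src.open(newline="") as f:
                rows = [t for r in csv.DictReader(f) if (t := transform(dict(r)))]
            if rows:
                with (Path(dest_dir) / src.name).open("w", newline="") as out:
                    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
                    writer.writeheader()
                    writer.writerows(rows)
                written += len(rows)
        return written

    # run_etl("replica/", "etl_output/")  # production data is never modified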

Machine learning and artificial intelligence

In the context of machine learning, data version control can be used to optimize the performance of models. It can allow the process of analyzing outcomes with different versions of a data set to be automated, so that performance improves continuously.[12] It is possible that open source data version control software could eliminate the need for proprietary AI platforms by extending tools like Git and CI/CD for use by machine learning engineers.[13] Many open-source solutions build on Git-like semantics to provide these capabilities, since Git itself was designed for small text files and does not handle the very large files typical of machine learning datasets.
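
One way this pattern is often realized can be sketched in Python. The file name "experiments.jsonl", the "dataset_commit" field, and the accuracy metric are illustrative assumptions: each training run records the identifier of the data version it used, so model results can later be compared across versions of the data set.

    # Sketch: tie each ML experiment to the version of the data it was trained on.
    import json
    import time
    from pathlib import Path

    def log_experiment(dataset_commit: str, params: dict, metrics: dict,
                       log_file: str = "experiments.jsonl") -> None:
        """Append one experiment record linking metrics to a dataset version."""
        record = {
            "timestamp": time.time(),
            "dataset_commit": dataset_commit,  # id produced by the versioning tool
            "params": params,
            "metrics": metrics,
        }
        with Path(log_file).open("a") as f:
            f.write(json.dumps(record) + "\n")

    def best_by_dataset(log_file: str = "experiments.jsonl") -> dict:
        """For each dataset version, keep the run with the highest accuracy."""
        best = {}
        for line in Path(log_file).read_text().splitlines():
            run = json.loads(line)
            key = run["dataset_commit"]
            if key not in best or run["metrics"]["accuracy"] > best[key]["metrics"]["accuracy"]:
                best[key] = run
        return best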

CI/CD for data

CI/CD methodologies can be applied to datasets using data version control.[14] Version control enables users to integrate with automation servers and establish a CI/CD process for data. By adding testing platforms to the process, they can ensure the quality of the data product. In this scenario, teams execute continuous integration (CI) tests on data and put checks in place to ensure the data is promoted to production only if all of the defined data quality and data governance criteria are met.
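
A minimal sketch of such a gate in Python, assuming an illustrative CSV data product and example checks (required columns and a maximum null rate) rather than the API of any particular CI platform: the candidate version is promoted only if every check passes.

    # Sketch: data quality checks used as a CI gate before promoting a dataset.
    import csv
    from pathlib import Path

    def load_rows(path: str) -> list:
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    def check_not_empty(rows) -> bool:
        return len(rows) > 0

    def check_required_columns(rows, required=("id", "timestamp", "value")) -> bool:
        return bool(rows) and all(col in rows[0] for col in required)

    def check_null_rate(rows, column="value", max_null_rate=0.01) -> bool:
        if not rows:
            return False
        nulls = sum(1 for r in rows if not r.get(column))
        return (nulls / len(rows)) <= max_null_rate

    def promote_if_valid(candidate: str, production: str) -> bool:
        """Run all checks; copy ("promote") the candidate only if all of them pass."""
        rows = load_rows(candidate)
        if all([check_not_empty(rows), check_required_columns(rows), check_null_rate(rows)]):
            Path(production).write_bytes(Path(candidate).read_bytes())
            return True
        return False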

Experimentation in isolated environments

To experiment on a dataset without impacting production data, one can use data version control to create replicas of the production environment in which tests can be carried out. Such replicas allow changes to the data to be tested and understood safely.

Data version control tools make it possible to replicate environments without time- and resource-consuming copying and maintenance; instead, the underlying data objects are shared between environments using metadata.
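
A rough Python sketch of this metadata-based sharing (the ".data_refs" directory and the branch names are illustrative assumptions): a replica is simply a new branch name that points at an existing commit, so no data files are copied when it is created.

    # Sketch: a "branch" is only a pointer to a commit id, so creating a replica
    # writes metadata rather than copying the underlying data objects.
    from pathlib import Path

    REFS = Path(".data_refs")  # maps branch names to commit ids

    def create_branch(name: str, commit_id: str) -> None:
        """Point a new branch at an existing commit; no dataset files are copied."""
        REFS.mkdir(exist_ok=True)
        (REFS / name).write_text(commit_id)

    def resolve(name: str) -> str:
        """Return the commit id a branch currently points to."""
        return (REFS / name).read_text().strip()

    # create_branch("etl-test", resolve("production"))  # a zero-copy test replica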

Rollback

Continuous changes in data sets can sometimes cause functionality issues or lead to undesired outcomes, especially when applications rely on the data. Data version control tools make it possible to roll back a data set to an earlier state. This can be used to restore or improve the functionality of an application, or to correct errors or bad data that has been mistakenly included.[15]
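
A hedged sketch of how a rollback can work, assuming the versioning tool keeps a content-addressed object store (files saved under their digest) alongside manifests like the one in the earlier snapshot example; the paths and layout are illustrative. Rolling back rebuilds the working data set from the state recorded at an earlier commit.

    # Sketch: restore a dataset directory to the state recorded in an older manifest.
    import json
    import shutil
    from pathlib import Path

    def rollback(manifest_path: str, object_store: str, dataset_dir: str) -> None:
        """Rebuild dataset_dir so it matches the state recorded in the manifest."""
        manifest = json.loads(Path(manifest_path).read_text())
        target = Path(dataset_dir)
        # Remove files that are not part of the older version.
        for p in [p for p in target.rglob("*") if p.is_file()]:
            if str(p.relative_to(target)) not in manifest:
                p.unlink()
        # Restore every file to the content recorded at that commit.
        for rel_path, digest in manifest.items():
            dest = target / rel_path
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copyfile(Path(object_store) / digest, dest)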

Examples

Version controlled data sources:

Data version control for data lakes:

ML-Ops systems that implement data version control:

See also

References

  1. ^ "A guide to open source data version control - Fuzzy Labs". www.fuzzylabs.ai. Retrieved 2023-01-05.
  2. ^ Snodgrass, Richard; Ahn, Ilsoo (1985-05-01). "A taxonomy of time databases". ACM SIGMOD Record. 14 (4): 236–246. doi:10.1145/971699.318921. ISSN 0163-5808.
  3. ^ Temporal databases : theory, design, and implementation. Redwood City, Calif.: Benjamin/Cummings Pub. Co. 1993. ISBN 978-0-8053-2413-6.
  4. ^ "Apache Hadoop turns 10: The Rise and Glory of Hadoop". ProjectPro. Retrieved 2023-01-18.
  5. ^ Bryan, Jennifer (2018-01-02). "Excuse Me, Do You Have a Moment to Talk About Version Control?". The American Statistician. 72 (1): 20–27. doi:10.1080/00031305.2017.1399928. ISSN 0003-1305. S2CID 40975582.
  6. ^ Seering, Adam; Cudre-Mauroux, Philippe; Madden, Samuel; Stonebraker, Michael (2012-04-01). "Efficient Versioning for Scientific Array Databases". 2012 IEEE 28th International Conference on Data Engineering. pp. 1013–1024. doi:10.1109/ICDE.2012.102. hdl:1721.1/90380. ISBN 978-0-7695-4747-3. S2CID 9144420.
  7. ^ Bhardwaj, Anant; Bhattacherjee, Souvik; Chavan, Amit; Deshpande, Amol; Elmore, Aaron J.; Madden, Samuel; Parameswaran, Aditya G. (2014-09-02). "DataHub: Collaborative Data Science & Dataset Version Management at Scale". arXiv:1409.0798 [cs.DB].
  8. ^ "neptune.ai | About us, our story, team and Neptune in the news". neptune.ai. Retrieved 2023-01-04.
  9. ^ StartupStash. "Top 16 Data Versioning Tools". Startup Stash. Retrieved 2023-01-04.
  10. ^ Aryan Jadon (26 December 2022). "Survey of Data Versioning Tools for Machine Learning Operations". Medium. Retrieved 2023-06-27.
  11. ^ National Academies of Sciences, Engineering, and Medicine (2019). Reproducibility and Replicability in Science. Washington, DC: National Academies Press. p. 114. ISBN 978-0-309-48617-0. OCLC 1122461743.
  12. ^ "Versionskontrolle für Machine-Learning-Projekte". Informatik Aktuell (in German). Retrieved 2023-01-05.
  13. ^ "Streamlining data science with open source: Data version control and continuous machine learning". ZDNET. Retrieved 2023-01-05.
  14. ^ "The Ultimate Guide to Database Version Control, CI/CD, and Deployment". Database Star. 2020-02-01. Retrieved 2023-01-18.
  15. ^ "Version Control for Data — The Turing Way". the-turing-way.netlify.app. Retrieved 2023-01-05.
  16. ^ "Day 1: Data Versioning & Creating Datasets". kaggle.com. Retrieved 2023-01-18.
  17. ^ "Quilt Data". Quilt Data. Retrieved 2023-01-18.
  18. ^ Hall, Susan (2020-08-19). "Dolt, a Relational Database with Git-Like Cloning Features". The New Stack. Retrieved 2023-01-05.
  19. ^ "X-MOL". en.x-mol.com. Retrieved 2023-01-18.
  20. ^ "Treeverse raises $23M to bring Git-like version control to data lakes". VentureBeat. 2021-07-28. Retrieved 2023-01-05.
  21. ^ "About Nessie - Project Nessie: Transactional Catalog for Data Lakes with Git-like semantics". projectnessie.org. Retrieved 2023-01-18.
  22. ^ "Git Large File Storage". Git Large File Storage. Retrieved 2023-01-05.
  23. ^ Lardinois, Frederic (2022-06-01). "Iterative launches MLEM, a tool to simplify ML model deployment". TechCrunch. Retrieved 2023-01-18.
  24. ^ "Top AI startup news of the week: InstaDeep, DeepL, Pachyderm and more". VentureBeat. 2023-01-13. Retrieved 2023-01-18.
  25. ^ Ingle, Prathamesh (2022-10-21). "Top Data Version Control Tools for Machine Learning Research in 2022". MarkTechPost. Retrieved 2023-01-18.
  26. ^ Miller, Ron (2021-11-02). "Activeloop snags $5M seed to build streaming database for AI applications". TechCrunch. Retrieved 2023-01-18.
  27. ^ "Edward Cui, Founder & CEO of Graviti - Interview Series - Unite.AI". www.unite.ai. Retrieved 2023-01-18.
  28. ^ Ingle, Prathamesh (2022-10-06). "Top Tools for Machine Learning (ML) Experiment Tracking and Management". MarkTechPost. Retrieved 2023-01-18.
  29. ^ "How to Set Yourself Apart from Other Applicants with Data-Centric AI". KDnuggets. Retrieved 2023-01-18.
  30. ^ Shields, Ronan (2023-01-05). "The Trade Desk attempts to woo advertisers at CES with 'Galileo' — a bid to chart the 'Open Internet' without cookies". Digiday. Retrieved 2023-01-18.
  31. ^ Wiggers, Kyle (2022-09-21). "Voxel51 lands funds for its platform to manage unstructured data". TechCrunch. Retrieved 2023-01-18.
  32. ^ Cheptsov, Andrey. "Reproducible ML workflows for teams - dstack". docs.dstack.ai. Retrieved 2023-01-18.
  33. ^ Katz, William T.; Plaza, Stephen M. (2019). "DVID: Distributed Versioned Image-Oriented Dataservice". Frontiers in Neural Circuits. 13: 5. doi:10.3389/fncir.2019.00005. ISSN 1662-5110. PMC 6371063. PMID 30804760.

Licensed under CC BY-SA 3.0 | Source: https://en.wikipedia.org/wiki/Data_version_control