Update: Some thoughts on the future of version control

A project log for DupVer

Minimalist deduplicating version control for large binary files

Kumar • 01/02/2021 at 14:06

In response to an HN post, some thoughts on the future of version control. The full discussion is here: https://news.ycombinator.com/item?id=25535844

The state of the art for backup is deduplicating software (Borg, Restic, Duplicacy). Gripes about Git's UI choices aside, Git was designed around human-readable text files and just doesn't do large binary files well. Sure, there's Git-LFS, but it sucks. The future of version control will:

  1. Make use of deduplication to handle large binary files (a chunking sketch follows this list)
  2. Natively support remotes via cloud storage
  3. Keep no state in the working directory, so that projects can live in a Dropbox/OneDrive/iCloud folder without corrupting the repo
  4. Be truly cross-platform with minimal POSIX dependencies. I love Linux, but I'm a practicing engineer, and the reality is that engineering software is a market where traditional Windows desktop software still rules.
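
For anyone who hasn't looked inside these tools: deduplication works by splitting files into chunks at content-defined boundaries and storing each chunk under its hash. Here's a minimal Go sketch of the idea; the rolling hash, window size, and boundary mask are toy parameters picked for illustration, not what Restic, Borg, or DupVer actually use.

```go
package main

import (
	"bufio"
	"crypto/sha256"
	"fmt"
	"io"
	"os"
)

const (
	windowSize = 64
	maskBits   = 20            // cut when the low 20 bits are zero => ~1 MiB average chunks
	multiplier = 1099511628211 // any odd 64-bit multiplier works for a sketch
)

func chunkFile(r io.Reader) error {
	br := bufio.NewReader(r)

	// pow = multiplier^windowSize (mod 2^64), used to remove the byte
	// leaving the window from the rolling hash.
	var pow uint64 = 1
	for i := 0; i < windowSize; i++ {
		pow *= multiplier
	}

	var (
		window [windowSize]byte
		hash   uint64
		chunk  []byte
	)
	emit := func() {
		// In a dedup store the chunk's hash becomes its storage key:
		// a chunk that is already present is never written twice.
		sum := sha256.Sum256(chunk)
		fmt.Printf("chunk %x (%d bytes)\n", sum[:8], len(chunk))
		chunk = chunk[:0]
		hash = 0
		window = [windowSize]byte{}
	}

	for {
		b, err := br.ReadByte()
		if err == io.EOF {
			break
		}
		if err != nil {
			return err
		}
		// Rabin-Karp rolling hash over the last windowSize bytes.
		i := len(chunk) % windowSize
		hash = hash*multiplier + uint64(b) - pow*uint64(window[i])
		window[i] = b
		chunk = append(chunk, b)

		if len(chunk) >= windowSize && hash&(1<<maskBits-1) == 0 {
			emit()
		}
	}
	if len(chunk) > 0 {
		emit()
	}
	return nil
}

func main() {
	f, err := os.Open(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	if err := chunkFile(f); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

Because boundaries depend on content rather than offsets, inserting bytes near the start of a file only changes the chunks around the edit; everything downstream re-synchronizes to the same boundaries and deduplicates away.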

Another thought I've been having for some time is whether I could have gotten away with file-level deduplication, as Boar (and, IIRC, Git) does, and dropped compression. That would probably be a significant simplification, particularly for copying between repos. For most users it wouldn't change disk usage much, since the bulk of their files already have compression built in, and the trend is for new file formats to adopt compression as well (a format-sniffing sketch follows this list). This includes:

  1. Audio/image/video files with (usually) lossy compression. Surprisingly (to me), this also includes raster image editor formats such as Paint.net's .pdn, which wraps everything in a gzip stream.
  2. MS Office documents, structured as a hierarchy of zipped XML files. More recently the same zipped-XML approach has been adopted by Matlab's .slx Simulink format and .mlx notebook format.
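
Since most of these containers announce themselves in their first few bytes, a repo tool that wanted to skip recompression could cheaply sniff for them. A minimal sketch; looksCompressed is a hypothetical helper (not part of DupVer), and the magic list is obviously incomplete:

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// Magic numbers of container formats that are already compressed.
// Zip covers MS Office .docx/.xlsx as well as Matlab .slx/.mlx;
// gzip covers .gz streams and formats that wrap their payload in one.
var compressedMagics = [][]byte{
	{0x1f, 0x8b},           // gzip
	{'P', 'K', 0x03, 0x04}, // zip local file header
	{0xff, 0xd8, 0xff},     // JPEG
	{0x89, 'P', 'N', 'G'},  // PNG (deflate-compressed image data)
}

// looksCompressed reports whether a file's leading bytes match a known
// already-compressed container, so a repo tool could skip recompressing
// its chunks.
func looksCompressed(path string) (bool, error) {
	f, err := os.Open(path)
	if err != nil {
		return false, err
	}
	defer f.Close()

	header := make([]byte, 8)
	n, err := f.Read(header)
	if err != nil && err != io.EOF {
		return false, err
	}
	header = header[:n]

	for _, magic := range compressedMagics {
		if bytes.HasPrefix(header, magic) {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	for _, path := range os.Args[1:] {
		compressed, err := looksCompressed(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			continue
		}
		fmt.Printf("%s: compressed=%v\n", path, compressed)
	}
}
```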

The gotcha is that this is an 80% solution. Plenty of file formats are still uncompressed text, even newer ones such as JSON/YAML/TOML, and there are a number of uncompressed binary formats such as MessagePack, though most of those tend to be some sort of database, such as the Geodatabase .gdb format (which is based on SQLite3) or PowerWorld's .pwb format. There is also the corner case of metadata in media files such as EXIF: modify one tag and, without chunking, the entire file contents get stored again (the sketch below makes this concrete). So I'm sticking with chunking for the time being.
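
To make the EXIF gotcha concrete: under file-level dedup the storage key is a hash of the entire file, so a one-byte metadata edit changes the key and all the pixel data gets stored again, while under chunking only the chunk containing the edit gets a new key. A toy demonstration (fixed-size chunks for brevity; the "photo" and its sizes are made up):

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Fake a 4 MiB "photo", flip one byte in its metadata region, and compare
// how many storage keys change under file-level vs. chunk-level dedup.
// Fixed-size chunks are used for simplicity; a real chunker is content-defined.
const chunkSize = 1 << 20 // 1 MiB

func chunkKeys(data []byte) [][32]byte {
	var keys [][32]byte
	for off := 0; off < len(data); off += chunkSize {
		end := off + chunkSize
		if end > len(data) {
			end = len(data)
		}
		keys = append(keys, sha256.Sum256(data[off:end]))
	}
	return keys
}

func main() {
	photo := make([]byte, 4<<20)
	for i := range photo {
		photo[i] = byte(i)
	}

	before := chunkKeys(photo)
	fileBefore := sha256.Sum256(photo)

	photo[100] ^= 0xff // simulate editing one EXIF byte near the header

	after := chunkKeys(photo)
	fileAfter := sha256.Sum256(photo)

	// File-level dedup: the whole-file key changed, so all 4 MiB is re-stored.
	fmt.Println("whole-file key changed:", fileBefore != fileAfter)

	// Chunk-level dedup: only the chunk containing the edit gets a new key.
	changed := 0
	for i := range before {
		if before[i] != after[i] {
			changed++
		}
	}
	fmt.Printf("chunks changed: %d of %d\n", changed, len(before))
}
```

Note that fixed-size chunks only survive in-place edits; an insertion would shift every later boundary, which is exactly what the content-defined chunker sketched earlier avoids.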

This is all pretty opinionated, so feel free to prove me wrong. It wouldn't be Hackaday without internet arguments, right?
