
RESEARCH PRODUCT

Sorted deduplication: How to process thousands of backup streams

André Brinkmann, Tim Süß, Lars Nagel, Jürgen Kaiser

subject

File system, Distributed computing, Computer science, Data Domain, Fingerprint (computing), Search engine indexing, Sorting, Networking & telecommunications, Parallel computing, Backup, Server, Data deduplication

description

The requirements of deduplication systems have changed in recent years. Early deduplication systems had to process dozens to hundreds of backup streams at the same time, while today they must handle hundreds to thousands of them. Traditional approaches rely on stream locality, which supports parallelism but easily leads to many non-contiguous disk accesses, as each stream competes with all other streams for the available resources. This paper presents a new exact deduplication approach designed to process thousands of backup streams at the same time on the same fingerprint index. The approach deliberately gives up the traditionally exploited temporal chunk locality and creates a new locality by sorting fingerprints. The sorting leads to perfectly sequential disk access patterns on the backup servers, while only slightly increasing the load on the clients. In our experiments, the new approach generates up to 113 times fewer I/Os than the exact Data Domain deduplication file system and up to 12 times fewer I/Os than the approximate Sparse Indexing, while consuming less memory at the same time.
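The core idea described above, creating locality by sorting fingerprints instead of relying on per-stream chunk locality, can be illustrated with a small sketch. The Python snippet below is an illustrative assumption, not the authors' implementation: it batches chunk fingerprints from many concurrent streams, sorts the batch, and resolves duplicates with a single sequential merge pass over a sorted fingerprint index standing in for the on-disk index on the backup server. All names (dedup_batch, fingerprint, on_disk_index) are hypothetical.

import hashlib
from bisect import insort

def fingerprint(chunk: bytes) -> bytes:
    # Chunk fingerprint; SHA-1 is used here purely as an example hash.
    return hashlib.sha1(chunk).digest()

def dedup_batch(stream_chunks, on_disk_index):
    # stream_chunks: list of (stream_id, seq_no, chunk_bytes) gathered from
    #                many backup streams.
    # on_disk_index: sorted list of already known fingerprints, standing in
    #                for the sequentially readable index on the backup server.
    # Returns the set of fingerprints that are new and must be stored.

    # 1. Fingerprint every incoming chunk (this work can be done on the
    #    clients) and sort the whole batch by fingerprint.
    requests = sorted(
        (fingerprint(chunk), stream_id, seq_no)
        for stream_id, seq_no, chunk in stream_chunks
    )

    # 2. Merge the sorted batch with the sorted index: both sides are
    #    traversed front to back, so the index is read strictly in order.
    new_fps = set()
    i = 0
    for fp, _stream, _seq in requests:
        while i < len(on_disk_index) and on_disk_index[i] < fp:
            i += 1
        if i >= len(on_disk_index) or on_disk_index[i] != fp:
            new_fps.add(fp)  # chunk not known yet -> must be stored

    # 3. Fold the new fingerprints back into the index, keeping it sorted
    #    for the next batch.
    for fp in sorted(new_fps):
        insort(on_disk_index, fp)
    return new_fps

# Example: two streams send the same chunk, one sends a different chunk.
index = []
batch = [(0, 0, b"alpha"), (1, 0, b"beta"), (2, 0, b"alpha")]
new = dedup_batch(batch, index)  # "alpha" is stored once, "beta" once

Even in this toy form, the merge in step 2 shows why the disk pattern becomes sequential: the index pointer only ever moves forward within a batch, regardless of how many streams contributed fingerprints.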

https://doi.org/10.1109/msst.2016.7897082