No Repetition: Fast Streaming with Highly Concentrated Hashing
Research output: Working paper › Research
Standard
No Repetition: Fast Streaming with Highly Concentrated Hashing. / Aamand, Anders; Das, Debarati; Kipouridis, Evangelos; Knudsen, Jakob Bæk Tejs; Rasmussen, Peter Michael Reichstein; Thorup, Mikkel.
arXiv.org, 2020.
RIS
TY - UNPB
T1 - No Repetition
T2 - Fast Streaming with Highly Concentrated Hashing
AU - Aamand, Anders
AU - Das, Debarati
AU - Kipouridis, Evangelos
AU - Knudsen, Jakob Bæk Tejs
AU - Rasmussen, Peter Michael Reichstein
AU - Thorup, Mikkel
PY - 2020
Y1 - 2020
N2 - To get estimators that work within a certain error bound with high probability, a common strategy is to design one that works with constant probability, and then boost the probability using independent repetitions. Important examples of this approach are small-space algorithms for estimating the number of distinct elements in a stream, or estimating the set similarity between large sets. Using standard strongly universal hashing to process each element, we get a sketch-based estimator where the probability of a too-large error is, say, 1/4. By performing $r$ independent repetitions and taking the median of the estimators, the error probability falls exponentially in $r$. However, running $r$ independent experiments increases the processing time by a factor $r$. Here we make the point that if we have a hash function with strong concentration bounds, then we get the same high-probability bounds without any need for repetitions. Instead of $r$ independent sketches, we have a single sketch that is $r$ times bigger, so the total space is the same. However, we only apply a single hash function, so we save a factor $r$ in time, and the overall algorithms just get simpler. Fast practical hash functions with strong concentration bounds were recently proposed by Aamand {\em et al.} (to appear in {\em STOC 2020}). Using their hashing schemes, the algorithms thus become very fast and practical, suitable for online processing of high-volume data streams.
M3 - Working paper
BT - No Repetition
PB - arXiv.org
ER -
ID: 257869626
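The abstract's baseline, boosting a constant-probability estimator by taking the median of $r$ independent repetitions, can be sketched as follows. This is a rough illustration of that classical trick using a KMV ("k minimum values") distinct-elements estimator, not the paper's concentrated-hashing scheme; the hash construction (Python's built-in tuple hash) and the parameters `k` and `r` are illustrative assumptions.

```python
import random
import statistics

def distinct_estimate(stream, seed, k=32):
    # KMV sketch: hash each element to [0, 1) and keep the k smallest
    # distinct hash values; with v_k the k-th smallest, (k - 1) / v_k
    # estimates the number of distinct elements in the stream.
    values = sorted({hash((seed, x)) % (2**31) / 2**31 for x in stream})[:k]
    if len(values) < k:
        return len(values)  # fewer than k distinct elements: count is exact
    return (k - 1) / values[-1]

def median_of_repetitions(stream, r=9, k=32):
    # The classical boosting step from the abstract: r independent sketches
    # (here simulated by r different hash seeds), then the median of the
    # r estimates, which drives the error probability down exponentially in r.
    return statistics.median(distinct_estimate(stream, seed=i, k=k)
                             for i in range(r))

random.seed(0)
stream = [random.randrange(10_000) for _ in range(50_000)]
truth = len(set(stream))
estimate = median_of_repetitions(stream)
```

Note that the loop over `r` seeds means every element is hashed `r` times. The paper's point is that with a strongly concentrated hash function, a single sketch of size `r * k` built with one hash evaluation per element gives comparable high-probability guarantees in the same total space.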