So "Subset selective" means that you select a subset of characters for which you use a weighted coding, while the other characters are coded with the classic static coding?
I would appreciate if you can explain the "tuned" variant in more detail.
When talking about skip sizes f(s), you mean an equidistant selection of characters of a ranked alphabet,
meaning f(s) = k <=> the ranks of two selected characters are at a distance of at least k?
In that sense, the evaluated times following a geometric distribution seem plausible if updating the weights is the bottleneck in the computation.
Speaking about the time plots, can you explain what α is?
I assume that compression ratio is the fraction of the compressed file over the input length, so smaller ratio equals better compression.
Can I assume that the used test file is so small that a zeroth order entropy encoder such as Huffman can be better than gzip as seen on the left end of the left plot?
Comments
About Skip Sizes
So "Subset selective" means that you select a subset of characters for which you use a weighted coding, while the other characters are coded with the classic static coding?
I would appreciate it if you could explain the "tuned" variant in more detail.
When you talk about skip sizes f(s), do you mean an equidistant selection of characters from a ranked alphabet, i.e. f(s) = k <=> the ranks of two consecutively selected characters are at a distance of at least k?
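Concretely, the selection rule I have in mind looks like the following (again only my own sketch; the ranking and f are of course whatever you define):

def select_by_skip(ranked_alphabet, k):
    # Pick characters from the frequency-ranked alphabet so that the ranks of
    # two consecutively selected characters are k apart (hence at least k),
    # i.e. an equidistant selection over the ranks 0, k, 2k, ...
    return ranked_alphabet[::k]

# e.g. select_by_skip("etaoinshrdlu", 3) returns "eosd"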
In that sense, the reported running times following a geometric distribution seem plausible if updating the weights is the bottleneck of the computation.
Speaking of the time plots, can you explain what α is?
I assume that the compression ratio is the size of the compressed file divided by the input length, so a smaller ratio means better compression.
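For example, a 1,000,000 byte input compressed to 400,000 bytes would then have a ratio of 0.4.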
Can I assume that the test file used is so small that a zeroth-order entropy encoder such as Huffman coding can outperform gzip, as seen at the left end of the left plot?