  • What do you mean by "dyadic"? Huffman coding works when some symbols are more common than others, so the more common symbols are given shorter "abbreviations"; this is true of human languages. A simple algorithm is to try the encoding at different word widths and see which is optimal; it depends on the data, of course. Commented Jun 1, 2014 at 3:42
  • @vzn A dyadic distribution is one in which the symbol frequencies are $2^{-i}$ for $i$ between 1 and $n$, plus one additional symbol with frequency $2^{-n}$, so that the frequencies sum to 1. This is the optimal case for Huffman coding, since the encoding tree is maximally skewed. Regarding your brute-force solution: it certainly works, but it isn't very satisfying. Commented Jun 1, 2014 at 4:14
  • I disagree that Huffman coding is optimal specifically for dyadic frequencies (I'm not sure where that assertion comes from; it would be better if you cited something for it). Every compression algorithm does best when there is a great deal of redundancy. As far as I know, Huffman coding is generally applied to language text, where there are natural coding "widths" (character sizes); other applications are not very common. Also, not having to determine an optimal width, and instead assuming one, is a "feature" rather than a "bug" of this compression method. Lempel-Ziv is used more for variable coding widths; try that one! Commented Jun 1, 2014 at 18:11
  • @vzn See the Wikipedia page, under "main properties". It's pretty clear that a dyadic distribution leads to the most compression, since the codes then fill the Huffman trie entirely, weighted toward the top. I'm not sure how LZW is relevant to my question, though. I'm not asking because I have a file to compress; I'm investigating this particular compression algorithm. Commented Jun 1, 2014 at 20:48
  • You ask whether there is an algorithm to find the optimal word width, and the answer is trivially yes: just try all possibilities. If that is not a suitable answer, then you haven't posed your question carefully enough; you must have some additional requirements you didn't list in the question. What are they, and what is the motivation for them? Commented Jun 2, 2014 at 6:19
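The brute-force approach suggested in the comments can be sketched as follows. This is an illustrative sketch, not anyone's production implementation: the names `huffman_encoded_bits` and `best_width` are hypothetical, the input is assumed to be a string of `'0'`/`'1'` characters for simplicity, and the code measures only the encoded payload, ignoring the cost of storing the codebook. It uses the standard trick that the total Huffman-encoded length equals the sum of the merged node weights created while building the tree.

```python
import heapq
from collections import Counter

def huffman_encoded_bits(weights):
    """Total bits used by an optimal Huffman code for the given symbol counts.

    Equals the weighted path length of the Huffman tree: each time two
    subtrees are merged, every symbol beneath them gains one bit, so
    accumulating the merge weights gives the total encoded size.
    """
    heap = list(weights)
    if len(heap) < 2:
        return sum(heap)  # degenerate case: one distinct symbol, 1 bit each
    heapq.heapify(heap)
    total = 0
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        total += a + b  # this merge adds one bit to every symbol below it
        heapq.heappush(heap, a + b)
    return total

def best_width(bits, widths=range(1, 17)):
    """Brute-force the symbol width (in bits) minimizing the Huffman size.

    Chunks the bit string into w-bit symbols for each candidate w and
    returns (best_width, encoded_bits). The trailing partial chunk, if
    any, is treated as its own symbol.
    """
    best = None
    for w in widths:
        symbols = [bits[i:i + w] for i in range(0, len(bits), w)]
        counts = Counter(symbols)
        size = huffman_encoded_bits(counts.values())
        if best is None or size < best[1]:
            best = (w, size)
    return best
```

As a sanity check on the dyadic claim: counts of 8, 4, 2, 1, 1 (probabilities 1/2, 1/4, 1/8, 1/16, 1/16) yield code lengths 1, 2, 3, 4, 4, for a total of 30 bits, which exactly matches the entropy of the source. Note that on highly repetitive input the brute force will simply favor the largest width that makes every chunk identical, which is why accounting for codebook overhead matters in practice.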