TY - GEN
T1 - The deterministic information bottleneck
AU - Strouse, D. J.
AU - Schwab, David J.
N1 - Funding Information:
The authors would like to thank Richard Turner, Bill Bialek, Stephanie Palmer, and Gordon Berman for helpful conversations, and the Hertz Foundation, DOE CSGF (DJ Strouse), and NIH grant K25 GM098875-06 (David Schwab) for funding.
PY - 2016
Y1 - 2016
N2 - Lossy compression fundamentally involves a decision about what is relevant and what is not. The information bottleneck (IB) by Tishby, Pereira, and Bialek formalized this notion as an information-theoretic optimization problem and proposed an optimal tradeoff between throwing away as many bits as possible and selectively keeping those that are most important. Here, we introduce an alternative formulation, the deterministic information bottleneck (DIB), that we argue better captures this notion of compression. As suggested by its name, the solution to the DIB problem is a deterministic encoder, as opposed to the stochastic encoder that is optimal under the IB. We then compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB vastly outperforms the IB in terms of the DIB cost function. Moreover, the DIB offered a 1-2 orders of magnitude speedup over the IB in our experiments. Our derivation of the DIB also offers a method for continuously interpolating between the soft clustering of the IB and the hard clustering of the DIB.
UR - http://www.scopus.com/inward/record.url?scp=85001907650&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85001907650&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:85001907650
T3 - 32nd Conference on Uncertainty in Artificial Intelligence 2016, UAI 2016
SP - 696
EP - 705
BT - 32nd Conference on Uncertainty in Artificial Intelligence 2016, UAI 2016
A2 - Janzing, Dominik
A2 - Ihler, Alexander
PB - Association For Uncertainty in Artificial Intelligence (AUAI)
T2 - 32nd Conference on Uncertainty in Artificial Intelligence 2016, UAI 2016
Y2 - 25 June 2016 through 29 June 2016
ER -