TY - JOUR
T1 - The Deterministic Information Bottleneck
AU - Strouse, D. J.
AU - Schwab, David J.
N1 - Funding Information:
For insightful discussions, we thank Richard Turner, Máté Lengyel, Bill Bialek, Stephanie Palmer, Gordon Berman, Zack Nichols, and Spotify NYC's Paradox Squad. We also acknowledge financial support from NIH K25 GM098875 (Schwab), the Hertz Foundation (Strouse), and the Department of Energy Computational Sciences Graduate Fellowship (Strouse).
Publisher Copyright:
© 2017 Massachusetts Institute of Technology.
PY - 2017/6/1
Y1 - 2017/6/1
N2 - Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek (1999) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwing away as many bits as possible and selectively keeping those that are most important. In the IB, compression is measured by mutual information. Here, we introduce an alternative formulation that replaces mutual information with entropy, which we call the deterministic information bottleneck (DIB) and argue better captures this notion of compression. As suggested by its name, the solution to the DIB problem turns out to be a deterministic encoder, or hard clustering, as opposed to the stochastic encoder, or soft clustering, that is optimal under the IB. We compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB significantly outperforms the IB in terms of the DIB cost function. We also empirically find that the DIB offers a considerable gain in computational efficiency over the IB, across a range of convergence parameters. Our derivation of the DIB also suggests a method for continuously interpolating between the soft clustering of the IB and the hard clustering of the DIB.
AB - Lossy compression and clustering fundamentally involve a decision about which features are relevant and which are not. The information bottleneck method (IB) by Tishby, Pereira, and Bialek (1999) formalized this notion as an information-theoretic optimization problem and proposed an optimal trade-off between throwing away as many bits as possible and selectively keeping those that are most important. In the IB, compression is measured by mutual information. Here, we introduce an alternative formulation that replaces mutual information with entropy, which we call the deterministic information bottleneck (DIB) and argue better captures this notion of compression. As suggested by its name, the solution to the DIB problem turns out to be a deterministic encoder, or hard clustering, as opposed to the stochastic encoder, or soft clustering, that is optimal under the IB. We compare the IB and DIB on synthetic data, showing that the IB and DIB perform similarly in terms of the IB cost function, but that the DIB significantly outperforms the IB in terms of the DIB cost function. We also empirically find that the DIB offers a considerable gain in computational efficiency over the IB, across a range of convergence parameters. Our derivation of the DIB also suggests a method for continuously interpolating between the soft clustering of the IB and the hard clustering of the DIB.
UR - http://www.scopus.com/inward/record.url?scp=85019908114&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85019908114&partnerID=8YFLogxK
U2 - 10.1162/NECO_a_00961
DO - 10.1162/NECO_a_00961
M3 - Letter
C2 - 28410050
AN - SCOPUS:85019908114
SN - 0899-7667
VL - 29
SP - 1611
EP - 1630
JO - Neural Computation
JF - Neural Computation
IS - 6
ER -