TY - GEN
T1 - Training overhead for decoding random linear network codes
AU - Riemensberger, Maximilian
AU  - Sagduyu, Yalin Evren
AU - Honig, Michael L.
AU - Utschick, Wolfgang
PY - 2008
Y1 - 2008
N2 - We consider multicast communications from a single source to multiple destinations over a network of erasure channels. Linear network coding maximizes the achievable (min-cut) rate, and a distributed code assignment can be realized by choosing codes randomly at the intermediate nodes. It is typically assumed that the coding information (combining coefficients) at each node is included in the packet overhead, and forwarded to the destination. Instead, we assume that the network coding matrix is communicated to the destinations by appending training bits to the data bits at the source. End-to-end channel coding can then be applied to the training and data either separately, or jointly, by coding across both training and information bits. Ideally, the training overhead should balance the reliability of communicating the network matrix with the reliability of data detection. We maximize data throughput as a function of the training overhead, and show how it depends on the network size, erasure probability, number of independent messages, and field size. The combination network is used to illustrate our results, and shows under what conditions throughput is limited by training overhead.
UR - http://www.scopus.com/inward/record.url?scp=62349089623&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=62349089623&partnerID=8YFLogxK
DO - 10.1109/MILCOM.2008.4753084
M3 - Conference contribution
AN - SCOPUS:62349089623
SN - 9781424426775
T3 - Proceedings - IEEE Military Communications Conference MILCOM
BT - 2008 IEEE Military Communications Conference, MILCOM 2008 - Assuring Mission Success
T2 - 2008 IEEE Military Communications Conference, MILCOM 2008 - Assuring Mission Success
Y2 - 17 November 2008 through 19 November 2008
ER -