Ting-Lan Lin

In this paper, we propose a generalized linear model for video packet loss visibility that is applicable to different group-of-picture structures. We develop the model using three subjective experiment data sets that span various encoding standards (H.264 and MPEG-2), group-of-picture structures, and decoder error concealment choices. We consider factors …
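A generalized linear model of this kind can be sketched as a logistic regression over packet-level factors. The feature names, weights, and bias below are hypothetical stand-ins for illustration, not the fitted coefficients from the paper:

```python
import math

def loss_visibility(features, weights, bias):
    """Sketch of a generalized linear (logistic) packet-loss visibility
    model: packet-level factors are combined linearly and squashed to a
    probability that viewers notice the loss.  All coefficients here are
    illustrative placeholders, not values fitted to subjective data."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))
```

Larger motion or residual energy raises the linear score and hence the predicted visibility, matching the intuition that losses in busy regions are easier to spot.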
In an error-prone communication channel, more important video packets should be assigned stronger channel codes. With various packet sizes and distortions for each packet, we use the subgradient method to search in the dual domain for the optimal RCPC channel code rate allocation for each packet, to minimize the end-to-end video quality degradation for an …
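The dual-domain subgradient search can be sketched as follows: relax the bit-budget constraint with a Lagrange multiplier, solve the decoupled per-packet subproblems exactly, and update the multiplier by a subgradient step. The packet sizes, distortions, per-rate loss probabilities, and budget in this sketch are made-up placeholders:

```python
def allocate_rates(sizes, distortions, loss_prob, budget, code_rates,
                   steps=200, lam=1.0):
    """Dual subgradient sketch for per-packet channel code rate allocation.
    Minimizes expected distortion subject to a total transmitted-bit budget;
    loss_prob maps a code rate to its packet-loss probability (illustrative,
    not a real RCPC characterization).  Transmitting a packet of s source
    bits at code rate r costs s / r channel bits."""
    best = None
    for t in range(1, steps + 1):
        # For a fixed multiplier the relaxed problem decouples per packet:
        # pick the rate minimizing  loss_prob(r) * D_i + lam * bits_i(r).
        choice, used = [], 0.0
        for s, d in zip(sizes, distortions):
            _, r = min((loss_prob[r] * d + lam * s / r, r) for r in code_rates)
            choice.append(r)
            used += s / r
        # Subgradient step on the multiplier: raise lam when over budget,
        # lower it (toward zero) when under budget.
        lam = max(0.0, lam + (1.0 / t) * (used - budget))
        if used <= budget:
            exp_d = sum(loss_prob[r] * d for r, d in zip(choice, distortions))
            if best is None or exp_d < best[0]:
                best = (exp_d, list(choice))
    return best
```

Keeping the best feasible primal point seen during the iterations is a standard way to recover a usable allocation even when the dual iterates oscillate.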
Our work builds a general visibility model of video packets that is applicable to various GOP (Group of Pictures) structures. The data used to analyze and build the model come from three subjective experiment sets with different encoding and decoding parameters on H.264 and MPEG-2 videos. We consider factors not only within a packet but also across its …
We conduct subjective experiments on visual quality following packet loss, and then construct models to predict these visual importance scores. The models are fully self-contained at the packet level, meaning that they use only information within one packet to predict the importance of that packet, requiring neither frame-level reconstruction nor any information …
We propose a packet dropping algorithm for various packet loss rates. A network-based packet loss visibility model is used to evaluate the visual importance of each H.264 packet inside the network. During network congestion, based on the estimated loss visibility of each packet, we drop the least visible frames and/or the least visible packets until the …
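A minimal sketch of such visibility-driven dropping, assuming each packet already carries an estimated loss-visibility score from the model (the packet tuples and link capacity below are made up):

```python
def drop_until_fit(packets, capacity):
    """Greedy congestion-relief sketch: drop packets in increasing order of
    estimated loss visibility until the surviving bits fit the link.
    Each packet is (packet_id, size_bits, visibility); lower visibility
    means viewers are less likely to notice the loss."""
    keep = sorted(packets, key=lambda p: p[2], reverse=True)  # most visible first
    total = sum(p[1] for p in keep)
    dropped = []
    while total > capacity and keep:
        victim = keep.pop()            # least visible survivor goes first
        total -= victim[1]
        dropped.append(victim[0])
    return [p[0] for p in keep], dropped
```

Sorting once and popping from the tail keeps the in-network logic cheap, which matters since the decision is made inside the network rather than at the encoder.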
Individual packet losses can have differing impacts on video quality. Simple factors such as packet size, average motion, and DCT coefficient energy can be extracted from an individual compressed video packet inside the network without any inverse transforms or pixel-level decoding. Using only such factors that are self-contained within packets, we aim to …
Whole-frame losses are introduced in H.264 compressed videos, which are then decoded by two different decoders using different common concealment methods. The videos are watched by human observers who respond to each glitch they spot. We found that about 38% of whole-frame losses of B frames are not observed by any of the subjects, and well over 58% of the B …
We conduct an objective experiment in which Video Quality Metric (VQM) scores are computed on compressed video GOPs following fixed-size IP packet loss, and then construct a network-based model to predict these VQM scores. The model is created for H.264 SDTV videos using a no-reference method, meaning that we use only information from the bitstream but …
In error-prone channels, forward error correction is necessary for protecting important data. In this paper, we use a packet loss visibility model to evaluate the visual importance of video packets to be transmitted. With the loss visibility of each packet, we use the Branch and Bound method to optimally allocate rates of Rate-Compatible Punctured …
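The Branch and Bound allocation can be sketched as a depth-first search over per-packet rate choices with an optimistic pruning bound. The rate set, loss probabilities, and costs below are illustrative placeholders, not a real RCPC code family:

```python
def bnb_allocate(sizes, dists, loss_prob, budget, code_rates):
    """Branch-and-bound sketch over per-packet channel code rate choices:
    branch depth-first on each packet's rate, and prune any partial
    assignment whose optimistic bound (best-case loss probability for all
    remaining packets, cheapest possible remaining bits) cannot beat the
    incumbent or fit the bit budget."""
    n = len(sizes)
    min_loss = min(loss_prob[r] for r in code_rates)  # optimistic loss prob
    max_rate = max(code_rates)                        # cheapest bits per packet
    best = {"d": float("inf"), "choice": None}

    def recurse(i, used, d, choice):
        opt = d + sum(min_loss * dists[j] for j in range(i, n))
        need = used + sum(sizes[j] / max_rate for j in range(i, n))
        if opt >= best["d"] or need > budget:
            return  # pruned: cannot improve on incumbent, or cannot fit
        if i == n:
            best["d"], best["choice"] = d, choice[:]
            return
        for r in code_rates:
            choice.append(r)
            recurse(i + 1, used + sizes[i] / r,
                    d + loss_prob[r] * dists[i], choice)
            choice.pop()

    recurse(0, 0.0, 0.0, [])
    return best["d"], best["choice"]
```

Unlike the subgradient relaxation, this exhaustive-with-pruning search returns a provably optimal allocation, at the cost of worst-case exponential branching.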
For videos transmitted over an error-prone network, it is necessary to protect the source bitstream. Based on our packet loss visibility model, we minimize the end-to-end video quality degradation over an AWGN channel using Rate-Compatible Punctured Convolutional codes for a given channel rate budget. We transform the original problem into a …