Ting-Lan Lin

In this paper, we propose a generalized linear model for video packet loss visibility that is applicable to different group-of-picture structures. We develop the model using three subjective experiment data sets that span various encoding standards (H.264 and MPEG-2), group-of-picture structures, and decoder error concealment choices. We consider factors …
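As an illustration of the kind of model the abstract describes, the sketch below implements a logistic (generalized linear) predictor over packet-level factors. The factor names, weights, and bias here are assumptions chosen for demonstration, not the paper's fitted coefficients.

```python
import math

def visibility_probability(factors, weights, bias=0.0):
    """Return P(packet loss is visible) under a logistic GLM.

    factors: dict of factor name -> measured value for one packet
    weights: dict of factor name -> fitted coefficient (hypothetical here)
    """
    # Linear predictor over the packet-level factors, then logistic link.
    z = bias + sum(weights[name] * value for name, value in factors.items())
    return 1.0 / (1.0 + math.exp(-z))

# Example with made-up factors and coefficients: more motion in a packet
# should push the predicted visibility of its loss upward.
p = visibility_probability({"motion": 2.0, "packet_size": 0.5},
                           {"motion": 1.2, "packet_size": 0.4},
                           bias=-1.0)
```

The logistic link keeps the output in (0, 1), so it can be read directly as the probability that a viewer notices the loss.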
In an error-prone communication channel, more important video packets should be assigned stronger channel codes. With various packet sizes and distortions for each packet, we use the subgradient method to search in the dual domain for the optimal RCPC channel code rate allocation for each packet, to minimize the end-to-end video quality degradation for an …
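The dual-domain subgradient idea can be sketched as follows: for a fixed Lagrange multiplier, each packet's code-rate choice decouples and minimizes distortion plus multiplier-weighted parity cost; the multiplier is then stepped along the subgradient (the budget violation). The data structures and step size below are illustrative assumptions, not the paper's formulation.

```python
def allocate_code_rates(packets, budget, steps=200):
    """packets: list of dicts with parallel lists 'distortion' and 'cost',
    one entry per candidate RCPC code rate for that packet.
    Returns (total_distortion, chosen_indices) for the best feasible
    allocation found, or None if none was feasible."""
    lam, best = 1.0, None
    for t in range(1, steps + 1):
        choices, used, dist = [], 0.0, 0.0
        for p in packets:
            # For fixed lam, each packet independently minimizes its
            # Lagrangian term distortion[j] + lam * cost[j].
            j = min(range(len(p["cost"])),
                    key=lambda k: p["distortion"][k] + lam * p["cost"][k])
            choices.append(j)
            used += p["cost"][j]
            dist += p["distortion"][j]
        # Keep the best feasible primal point seen so far.
        if used <= budget and (best is None or dist < best[0]):
            best = (dist, choices)
        # Subgradient step on the dual variable: the budget violation.
        lam = max(0.0, lam + (1.0 / t) * (used - budget))
    return best

# Toy example: two packets, two candidate code rates each
# (index 0 = more protection, higher cost not needed here; data is made up).
pkts = [{"distortion": [5.0, 2.0], "cost": [1.0, 2.0]},
        {"distortion": [4.0, 3.5], "cost": [1.0, 2.0]}]
dist, choices = allocate_code_rates(pkts, budget=3.0)
```

Because the per-packet choices decouple at each multiplier value, each dual iteration is linear in the number of packets times candidate rates.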
Our work builds a general visibility model of video packets that is applicable to various types of GOP (Group of Pictures). The data used for analysis and model building come from three subjective experiment sets with different encoding and decoding parameters on H.264 and MPEG-2 videos. We consider factors not only within a packet but also across its …
We conduct subjective experiments on visual quality following packet loss, and then construct models to predict these visual importance scores. The models are fully self-contained at the packet level, meaning that they use only information within one packet to predict the importance of that packet, requiring no frame-level reconstruction nor any information …
We propose a packet dropping algorithm for various packet loss rates. A network-based packet loss visibility model is used to evaluate the visual importance of each H.264 packet inside the network. During network congestion, based on the estimated loss visibility of each packet, we drop the least visible frames and/or the least visible packets until the …
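A minimal sketch of this congestion-time dropping policy, assuming each packet already carries an estimated loss-visibility score: discard the least visible packets first until the remaining traffic fits the available rate. The field names and data are illustrative, not the paper's implementation.

```python
def drop_until_fits(packets, available_rate):
    """Return (kept, dropped): drop least-visible packets first until
    the kept packets' total size fits available_rate."""
    # Candidate drop order: ascending estimated loss visibility.
    order = sorted(range(len(packets)), key=lambda i: packets[i]["visibility"])
    rate = sum(p["size"] for p in packets)
    dropped = set()
    for i in order:
        if rate <= available_rate:
            break  # remaining packets already fit the available rate
        dropped.add(i)
        rate -= packets[i]["size"]
    kept = [p for i, p in enumerate(packets) if i not in dropped]
    return kept, [packets[i] for i in sorted(dropped)]

# Toy example: 300 units of traffic must fit into 250.
pkts = [{"size": 100, "visibility": 0.9},
        {"size": 100, "visibility": 0.1},
        {"size": 100, "visibility": 0.5}]
kept, dropped = drop_until_fits(pkts, available_rate=250)
```

Sorting by visibility makes the policy greedy: the perceptually cheapest packets are sacrificed first, which matches the abstract's least-visible-first rule.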
Individual packet losses can have differing impact on video quality. Simple factors such as packet size, average motion, and DCT coefficient energy can be extracted from an individual compressed video packet inside the network without any inverse transforms or pixel-level decoding. Using only such factors that are self-contained within packets, we aim to …
Whole frame losses are introduced in H.264 compressed videos, which are then decoded by two different decoders with different common concealment effects. The videos are seen by human observers who respond to each glitch they spot. We found that about 38% of whole frame losses of B frames are not observed by any of the subjects, and well over 58% of the B …
We conduct an objective experiment in which Video Quality Metric (VQM) scores are computed on compressed video GOPs following fixed-size IP packet loss, and then construct a network-based model to predict these VQM scores. The model is created for H.264 SDTV videos using a no-reference method, meaning that we only use the information from the bitstream but …
When video packets are lost in congested networks, one loss pattern creates a different visual impact than another. We conduct a subjective experiment with H.264 videos and conclude that isolated losses are better than bursty losses in terms of perceptual video quality. A network-implementable video quality model is developed for a router to drop packets so …
In video communications, the compressed video stream is very likely to be corrupted by channel errors. Recently, many error concealment algorithms have been proposed to combat channel errors. In this paper, we have combined two state-of-the-art motion vector recovery algorithms into an even better algorithm; we have modified the hybrid motion vector …
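For context, a common baseline that motion vector recovery schemes (including hybrid ones like the abstract's) are measured against is simply replacing a lost block's motion vector with the component-wise median of its correctly received neighbors. The sketch below shows that baseline only; it is not the paper's hybrid algorithm.

```python
def recover_motion_vector(neighbor_mvs):
    """Estimate a lost block's motion vector as the component-wise
    median of its received neighbors' (dx, dy) vectors."""
    xs = sorted(mv[0] for mv in neighbor_mvs)
    ys = sorted(mv[1] for mv in neighbor_mvs)
    mid = len(neighbor_mvs) // 2  # median index for an odd-length list
    return (xs[mid], ys[mid])

# Three neighboring blocks' motion vectors (made-up values).
mv = recover_motion_vector([(1, 0), (4, 2), (2, 5)])
```

Taking the median per component (rather than the mean) makes the estimate robust to a single outlier neighbor, which is why it is a popular concealment default.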