Corpus ID: 237940342

A Video Summarization Method Using Temporal Interest Detection and Key Frame Prediction

@article{An2021AVS,
  title={A Video Summarization Method Using Temporal Interest Detection and Key Frame Prediction},
  author={Yubo An and Shenghui Zhao},
  journal={ArXiv},
  year={2021},
  volume={abs/2109.12581}
}
  • Yubo An, Shenghui Zhao
  • Published 26 September 2021
  • Computer Science, Engineering
  • ArXiv
In this paper, a Video Summarization Method using Temporal Interest Detection and Key Frame Prediction is proposed for supervised video summarization, where video summarization is formulated as a combination of sequence labeling and temporal interest detection problems. In our method, we first build a flexible universal network framework that simultaneously predicts frame-level importance scores and temporal interest segments, and then combine the two components with different weights to achieve a…
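The abstract describes fusing two outputs, frame-level importance scores and temporal interest segments, via a weighted combination. As a rough illustration only (not the authors' implementation; the function name, segment format, and weights below are hypothetical), such a fusion might look like this:

```python
import numpy as np

def fuse_scores(frame_scores, segments, segment_confidences,
                n_frames, w_frame=0.5, w_segment=0.5):
    """Hypothetical fusion of frame-level importance scores with
    temporal-interest segment confidences (weights are illustrative)."""
    # Spread each detected segment's confidence over the frames it covers.
    segment_scores = np.zeros(n_frames)
    for (start, end), conf in zip(segments, segment_confidences):
        segment_scores[start:end + 1] = np.maximum(segment_scores[start:end + 1], conf)

    # Weighted combination of the two components.
    return w_frame * np.asarray(frame_scores) + w_segment * segment_scores

# Toy example: 10 frames, one detected interest segment covering frames 3-6.
fused = fuse_scores(
    frame_scores=np.random.rand(10),
    segments=[(3, 6)],
    segment_confidences=[0.9],
    n_frames=10,
)
key_frame = int(np.argmax(fused))  # e.g. pick the highest-scoring frame
```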


References

Showing 1–10 of 26 references
Video Summarization Using Fully Convolutional Sequence Networks
TLDR
This paper first establishes a novel connection between semantic segmentation and video summarization, then adapts popular semantic segmentation networks for video summarization, and proposes fully convolutional sequence models to solve video summarization.
A Novel Key-Frames Selection Framework for Comprehensive Video Summarization
  • C. Huang, Hongmei Wang
  • Computer Science
  • IEEE Transactions on Circuits and Systems for Video Technology
  • 2020
TLDR
A novel framework for efficient video content summarization as well as video motion summarization is proposed, using a capsule network as a spatiotemporal information extractor and a self-attention model to select key-frame sequences inside the shots.
DSNet: A Flexible Detect-to-Summarize Network for Video Summarization
TLDR
This paper proposes a Detect-to-Summarize network (DSNet) framework for supervised video summarization that contains anchor-based and anchor-free counterparts, and provides a dense sampling of temporal interest proposals with multi-scale intervals that accommodate interest variations in length.
Video Summarization via Semantic Attended Networks
TLDR
A semantic attended video summarization network (SASUM) is proposed, which consists of a frame selector and a video descriptor that select an appropriate number of video shots by minimizing the distance between the generated description sentence of the summarized video and the human-annotated text of the original video.
User-Ranking Video Summarization With Multi-Stage Spatio–Temporal Representation
TLDR
This paper presents a novel supervised video summarization scheme based on three-stage deep neural networks, and proposes a simple but effective user-ranking method to cope with the labeling subjectivity problem of user-created video summaries, leading to labeling-quality refinement for robust supervised learning.
Multi-video summarization based on Video-MMR
  • Yingbo Li, B. Mérialdo
  • Computer Science
  • 11th International Workshop on Image Analysis for Multimedia Interactive Services WIAMIS 10
  • 2010
TLDR
This paper presents a novel and effective approach for multi-video summarization, Video-MMR, which extends a classical text summarization algorithm, Maximal Marginal Relevance, and compares it with the popular K-means algorithm, supported by user-made summaries.
Video Summarization With Attention-Based Encoder–Decoder Networks
TLDR
This paper proposes a novel video summarization framework named attentive encoder–decoder networks for video summarization (AVS), in which the encoder uses a bidirectional long short-term memory (BiLSTM) network to encode the contextual information among the input video frames.
Video Summarization with Long Short-Term Memory
TLDR
Long short-term memory (LSTM), a special type of recurrent neural network, is used to model the variable-range dependencies entailed in the task of video summarization, improving summarization by reducing the discrepancies in statistical properties across datasets.
A General Framework for Edited Video and Raw Video Summarization
TLDR
A general summarization framework for both edited-video and raw-video summarization, designed to capture the properties of good video summaries: containing important people and objects, being representative of the video content, avoiding similar key-shots, and offering diversity and smoothness of the storyline.
Creating Summaries from User Videos
TLDR
This paper proposes a novel approach and a new benchmark for video summarization, focusing on user videos (raw videos containing a set of interesting events), and generates high-quality results comparable to manual, human-created summaries.