Video summarisation can be costly and time-consuming, given the increasing amount and diversity of multimedia content produced by the media industry. Recent automatic film summarisation approaches focus on content-based video retrieval, or on cross-media retrieval through text summarisation based on statistical approaches. Since a range of texts describing film content is freely available on the Internet or provided by video production organisations, this paper introduces a cross-media method that starts from existing textual summaries to produce the corresponding video summaries. A multi-disciplinary approach is proposed, combining cross-document coreference with information extraction and retrieval techniques, to produce film summaries on demand. The method is informed by lexical analysis of a corpus of plot summaries, which give short overviews of the film story, and a corpus of audio descriptions, which give time-coded, detailed expert narrations of what is happening on screen. A user evaluation of the method on one summary shows encouraging results for the precision and selection of the retrieved video shots. The suggested approach may also be customised for visually impaired people, adapted to other kinds of data, and evaluated in other contexts, such as virtual-meeting summarisation and browsing.
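The abstract describes aligning sentences from a plot summary with time-coded audio-description segments so that the matching video shots can be retrieved. The paper does not give an algorithm at this level of detail, but the core idea can be sketched with a simple lexical-similarity matcher; the data format `(start, end, narration)` and the cosine scoring below are illustrative assumptions, not the authors' method:

```python
from collections import Counter
import math
import re

def tokenize(text):
    # Lowercase word tokens; a crude stand-in for the paper's lexical analysis.
    return re.findall(r"[a-z']+", text.lower())

def cosine(tokens_a, tokens_b):
    # Cosine similarity between two bag-of-words count vectors.
    ca, cb = Counter(tokens_a), Counter(tokens_b)
    dot = sum(ca[t] * cb[t] for t in ca)
    norm_a = math.sqrt(sum(v * v for v in ca.values()))
    norm_b = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def align(plot_sentence, ad_segments):
    """Return the time-coded audio-description segment whose narration
    is lexically closest to a plot-summary sentence.
    ad_segments: list of (start_sec, end_sec, narration) tuples (assumed format)."""
    plot_tokens = tokenize(plot_sentence)
    return max(ad_segments, key=lambda seg: cosine(plot_tokens, tokenize(seg[2])))

# Hypothetical audio-description segments for illustration only.
segments = [
    (12.0, 15.5, "A man walks alone down a rainy street at night."),
    (40.2, 44.0, "She opens the letter and begins to cry."),
]
best = align("He wanders the rainy streets at night.", segments)
# best carries the time codes (12.0, 15.5) needed to cut the matching shot.
```

In practice the paper's approach layers cross-document coreference on top of such matching, so that entities mentioned differently across the two corpora (e.g. a character's name versus "the man") can still be linked.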
Keywords: Cross-Document Coreference, Cross-Media Summarisation, Collateral Text, Information Extraction, Intelligent Multimedia Information Retrieval
The Open University, Milton Keynes, UK