Show simple item record

dc.contributor.author: Kanehira, Atsushi
dc.contributor.author: Van Gool, Luc
dc.contributor.author: Ushiku, Yoshitaka
dc.contributor.author: Harada, Tatsuya
dc.date.accessioned: 2022-03-24T06:57:58Z
dc.date.available: 2019-02-20T05:51:29Z
dc.date.available: 2019-03-01T11:26:04Z
dc.date.available: 2022-03-24T06:57:58Z
dc.date.issued: 2018
dc.identifier.isbn: 978-1-5386-6420-9
dc.identifier.isbn: 978-1-5386-6421-6
dc.identifier.other: 10.1109/CVPR.2018.00776
dc.identifier.uri: http://hdl.handle.net/20.500.11850/326367
dc.description.abstract: This paper introduces a novel variant of video summarization, namely building a summary that depends on the particular aspect of a video the viewer focuses on. We refer to this as the viewpoint. To infer what the desired viewpoint may be, we assume that several other videos are available, especially groups of videos, e.g., as folders on a person's phone or laptop. The semantic similarity between videos within a group versus the dissimilarity between groups is used to produce viewpoint-specific summaries. To account for similarity while avoiding redundancy, the output summary should be (A) diverse, (B) representative of the videos in the same group, and (C) discriminative against videos in different groups. To satisfy requirements (A)-(C) simultaneously, we propose a novel video summarization method that operates on multiple groups of videos. Inspired by Fisher's discriminant criterion, it selects a summary by optimizing a combination of three terms, the (a) inner-summary, (b) inner-group, and (c) between-group variances defined on the feature representation of the summary, which directly capture (A)-(C). Moreover, we developed a novel dataset to investigate how well a generated summary reflects the underlying viewpoint. Quantitative and qualitative experiments conducted on this dataset demonstrate the effectiveness of the proposed method.
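
A rough sketch of how the three variance terms named in the abstract could combine into a single selection objective, in the spirit of Fisher's discriminant criterion. The additive form and the weights \lambda_1, \lambda_2 below are assumptions for illustration only; the abstract states merely that the summary is selected by optimizing a combination of the three terms.

\[
S^{*} \;=\; \arg\max_{S \subseteq V_g}\;
\underbrace{\mathrm{Var}_{\mathrm{sum}}(S)}_{\text{(A) diversity}}
\;-\;\lambda_{1}\,\underbrace{\mathrm{Var}_{\mathrm{group}}(S;\,V_g)}_{\text{(B) representativeness}}
\;+\;\lambda_{2}\,\underbrace{\mathrm{Var}_{\mathrm{between}}(S;\,\{V_{g'}\}_{g'\neq g})}_{\text{(C) discriminativeness}}
\]

Here V_g denotes the group of videos from which the summary S is drawn, and the variances are computed on the feature representations of the selected segments, as described in the abstract.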
dc.language.iso: en
dc.publisher: IEEE
dc.title: Viewpoint-Aware Video Summarization
dc.type: Conference Paper
dc.date.published: 2018-12-17
ethz.book.title: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition
ethz.pages.start: 7435
ethz.pages.end: 7444
ethz.event: 31st IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
ethz.event.location: Salt Lake City, UT, USA
ethz.event.date: June 18-23, 2018
ethz.identifier.wos:
ethz.identifier.scopus:
ethz.publication.place: Piscataway, NJ
ethz.publication.status: published
ethz.date.deposited: 2019-02-20T05:51:30Z
ethz.source: WOS
ethz.eth: yes
ethz.availability: Metadata only
ethz.rosetta.installDate: 2019-03-01T11:26:21Z
ethz.rosetta.lastUpdated: 2022-03-29T20:46:54Z
ethz.rosetta.versionExported: true

Files in this item


There are no files associated with this item.
