Semantic video segmentation: Exploring inference efficiency

Research output: Contribution to journal › Conference article › Research › peer-review

We explore the efficiency of CRF inference beyond image-level semantic segmentation and perform joint inference over video frames. The key idea is to combine the best of two worlds: semantic co-labeling and more expressive models. Our formulation enables inference over ten thousand images within seconds and makes the system well suited to efficient video semantic segmentation. On the CamVid dataset, with TextonBoost unaries, the proposed method achieves up to 8% improvement in accuracy over individual semantic image segmentation without additional time overhead. The source code is available at https://github.com/subtri/video_inference.
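To make the notion of joint inference concrete, a generic dense-CRF-style energy over an entire clip (a sketch of the general form only, not necessarily the exact model used in the paper) couples per-pixel unaries such as TextonBoost scores with pairwise and higher-order clique terms that can span frames:

$$
E(\mathbf{x}) \;=\; \sum_{i \in \mathcal{V}} \psi_i(x_i) \;+\; \sum_{(i,j) \in \mathcal{E}} \psi_{ij}(x_i, x_j) \;+\; \sum_{c \in \mathcal{C}} \psi_c(\mathbf{x}_c),
$$

where \(\mathcal{V}\) indexes the pixels of all frames in the clip, \(\mathcal{E}\) contains spatial as well as temporal neighbour pairs, and \(\mathcal{C}\) is a set of higher-order cliques (for example, regions tracked across frames). Minimizing one such energy for the whole clip with approximate inference, rather than one energy per image, is what lets co-labeling amortize the inference cost across frames.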

Original language: English
Journal: ISOCC 2015 - International SoC Design Conference: SoC for Internet of Everything (IoE)
Pages (from-to): 157-158
Number of pages: 2
DOIs
Publication status: Published - 8 Feb 2016
Externally published: Yes
Event: 12th International SoC Design Conference, ISOCC 2015 - Gyeongju, Korea, Republic of
Duration: 2 Nov 2015 - 5 Nov 2015

Conference

Conference: 12th International SoC Design Conference, ISOCC 2015
Country: Korea, Republic of
City: Gyeongju
Period: 02/11/2015 - 05/11/2015

Bibliographical note

Publisher Copyright: © 2015 IEEE.

Research areas

• approximate inference, co-labelling, higher-order-clique, semantic segmentation
