A Vision-free Baseline for Multimodal Grammar Induction

Research output: Working paper › Preprint › Research

Documents

  • Fulltext

    Final published version, 750 KB, PDF document

Authors

  • Boyi Li
  • Rodolfo Corona
  • Karttikeya Mangalam
  • Catherine Chen
  • Daniel Flaherty
  • Serge Belongie
  • Kilian Q. Weinberger
  • Jitendra Malik
  • Trevor Darrell
  • Dan Klein
Abstract

Past work has shown that paired vision-language signals substantially improve grammar induction on multimodal datasets such as MSCOCO. We investigate whether advances in large language models (LLMs) trained only on text can provide strong assistance for grammar induction in multimodal settings. We find that our text-only approach, an LLM-based C-PCFG (LC-PCFG), outperforms previous multimodal methods and achieves state-of-the-art grammar induction performance across various multimodal datasets. Compared to image-aided grammar induction, LC-PCFG outperforms the prior state of the art by 7.9 Corpus-F1 points, with an 85% reduction in parameter count and 1.7x faster training. Across three video-assisted grammar induction benchmarks, LC-PCFG outperforms the prior state of the art by up to 7.7 Corpus-F1 points, with 8.8x faster training. These results suggest that text-only language models may encode visually grounded cues that aid grammar induction in multimodal contexts, and they underscore the importance of establishing a strong vision-free baseline when evaluating the benefit of multimodal approaches.
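To make the idea concrete, the sketch below illustrates the general pattern behind a compound-PCFG-style model conditioned on a sentence-level embedding; in LC-PCFG that embedding would come from a frozen text-only LLM rather than from image or video features. This is a minimal, hypothetical PyTorch sketch, not the paper's implementation: `GrammarHead`, the toy grammar sizes, the 768-dimensional embedding, and the random stand-in vector are all assumptions, and a full system would additionally score root rules and run the inside algorithm over these rule probabilities to train without parse supervision.

```python
# Hypothetical sketch (not the paper's code): conditioning PCFG rule
# probabilities on a sentence embedding, as a text-only stand-in for the
# image/video features used in prior multimodal grammar induction work.
import torch
import torch.nn as nn


class GrammarHead(nn.Module):
    """Maps a sentence embedding z to log-probabilities over grammar rules."""

    def __init__(self, embed_dim: int, n_nonterm: int, n_preterm: int, vocab: int):
        super().__init__()
        n_sym = n_nonterm + n_preterm
        # Binary rules A -> B C: one distribution over symbol pairs per nonterminal A.
        self.binary = nn.Linear(embed_dim, n_nonterm * n_sym * n_sym)
        # Terminal-emission rules T -> w: one distribution over the vocabulary per preterminal T.
        self.unary = nn.Linear(embed_dim, n_preterm * vocab)
        self.shapes = (n_nonterm, n_preterm, n_sym, vocab)

    def forward(self, z: torch.Tensor):
        n_nonterm, n_preterm, n_sym, vocab = self.shapes
        binary_logits = self.binary(z).view(-1, n_nonterm, n_sym * n_sym)
        unary_logits = self.unary(z).view(-1, n_preterm, vocab)
        # Normalize per left-hand-side symbol so each row is a valid rule distribution.
        return binary_logits.log_softmax(dim=-1), unary_logits.log_softmax(dim=-1)


if __name__ == "__main__":
    # In LC-PCFG the conditioning vector would be produced by a frozen text-only LLM;
    # a random vector stands in for that embedding here. Root rules and the inside
    # algorithm (needed for unsupervised training) are omitted for brevity.
    z = torch.randn(1, 768)
    head = GrammarHead(embed_dim=768, n_nonterm=10, n_preterm=20, vocab=1000)
    binary_lp, unary_lp = head(z)
    print(binary_lp.shape, unary_lp.shape)  # torch.Size([1, 10, 900]) torch.Size([1, 20, 1000])
```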
Original language: English
Publisher: arXiv.org
Number of pages: 12
Publication status: Published - 2023
