Text-Driven Stylization of Video Objects

Research output: Contribution to conference › Paper › Research

Documents

  • Fulltext: Final published version, 1.23 MB, PDF document

We tackle the task of stylizing video objects in an intuitive and semantic manner following a user-specified text prompt. This is a challenging task, as the resulting video must satisfy multiple properties: (1) it has to be temporally consistent and avoid jittering or similar artifacts, (2) the resulting stylization must preserve both the global semantics of the object and its fine-grained details, and (3) it must adhere to the user-specified text prompt. To this end, our method stylizes an object in a video according to two target texts. The first target text prompt describes the global semantics and the second target text prompt describes the local semantics. To modify the style of an object, we harness the representational power of CLIP to get a similarity score between (1) the local target text and a set of local stylized views, and (2) the global target text and a set of stylized global views. We use a pretrained atlas decomposition network to propagate the edits in a temporally consistent manner. We demonstrate that our method can generate consistent style changes over time for a variety of objects and videos that adhere to the specification of the target texts. We also show how varying the specificity of the target texts and augmenting the texts with a set of prefixes results in stylizations with different levels of detail. Full results are given in the supplementary material and in full resolution on the project webpage: https://sloeschcke.github.io/Text-Driven-Stylization-of-Video-Objects/.
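The two-level CLIP objective described above can be sketched as a weighted sum of cosine-similarity terms: one between embeddings of local stylized views and the local target text, and one between embeddings of global stylized views and the global target text. The sketch below is a minimal illustration, not the paper's implementation; the embedding arrays stand in for the outputs of CLIP's image and text encoders, and the weights `w_local` and `w_global` are hypothetical names for the balancing coefficients.

```python
import numpy as np

def cosine_sim(views, text):
    """Cosine similarity between each view embedding (rows) and a text embedding."""
    views = views / np.linalg.norm(views, axis=-1, keepdims=True)
    text = text / np.linalg.norm(text)
    return views @ text

def clip_style_loss(local_views, local_text, global_views, global_text,
                    w_local=1.0, w_global=1.0):
    """Combined objective: push stylized views toward their target texts.

    Each term is (1 - mean cosine similarity), so the loss is zero when
    every view embedding is perfectly aligned with its target text embedding.
    """
    local_term = 1.0 - cosine_sim(local_views, local_text).mean()
    global_term = 1.0 - cosine_sim(global_views, global_text).mean()
    return w_local * local_term + w_global * global_term

# Toy example: 4 local crops and 2 global views, 8-dim embeddings.
rng = np.random.default_rng(0)
loss = clip_style_loss(rng.normal(size=(4, 8)), rng.normal(size=8),
                       rng.normal(size=(2, 8)), rng.normal(size=8))
```

In the actual method these embeddings would come from a frozen CLIP model, and the loss would be backpropagated through the stylization network; the atlas decomposition then propagates the per-atlas edit to every frame.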
Original language: English
Publication date: 2022
Number of pages: 17
Publication status: Published - 2022
Event: CVEU @ ECCV 2022: AI for Creative Video Editing and Understanding, ECCV Workshop - Tel Aviv, Israel
Duration: 24 Mar 2024 → …

