Single Image Texture Translation for Data Augmentation

Publication: Working paper › Preprint › Research

Standard

Single Image Texture Translation for Data Augmentation. / Belongie, Serge; Cui, Yin; Lin, Tsung Yi; Li, Boyi.

2021.


Harvard

Belongie, S, Cui, Y, Lin, TY & Li, B 2021 'Single Image Texture Translation for Data Augmentation'. <https://arxiv.org/pdf/2106.13804.pdf>

APA

Belongie, S., Cui, Y., Lin, T. Y., & Li, B. (2021). Single Image Texture Translation for Data Augmentation. https://arxiv.org/pdf/2106.13804.pdf

Vancouver

Belongie S, Cui Y, Lin TY, Li B. Single Image Texture Translation for Data Augmentation. 2021 Jun 25.

Author

Belongie, Serge ; Cui, Yin ; Lin, Tsung Yi ; Li, Boyi. / Single Image Texture Translation for Data Augmentation. 2021.

Bibtex

@techreport{81793e55c4ab445790e4238b13560211,
title = "Single Image Texture Translation for Data Augmentation",
abstract = "Recent advances in image synthesis enable one to translate images by learning the mapping between a source domain and a target domain. Existing methods tend to learn the distributions by training a model on a variety of datasets, with results evaluated largely in a subjective manner. Relatively few works in this area, however, study the potential use of semantic image translation methods for image recognition tasks. In this paper, we explore the use of Single Image Texture Translation (SITT) for data augmentation. We first propose a lightweight model for translating texture to images based on a single input of source texture, allowing for fast training and testing. Based on SITT, we then explore the use of augmented data in long-tailed and few-shot image classification tasks. We find the proposed method is capable of translating input data into a target domain, leading to consistently improved image recognition performance. Finally, we examine how SITT and related image translation methods can provide a basis for a data-efficient, augmentation engineering approach to model training.",
author = "Serge Belongie and Yin Cui and Lin, {Tsung Yi} and Boyi Li",
year = "2021",
month = jun,
day = "25",
language = "English",
type = "WorkingPaper",

}

RIS

TY - UNPB

T1 - Single Image Texture Translation for Data Augmentation

AU - Belongie, Serge

AU - Cui, Yin

AU - Lin, Tsung Yi

AU - Li, Boyi

PY - 2021/6/25

Y1 - 2021/6/25

N2 - Recent advances in image synthesis enable one to translate images by learning the mapping between a source domain and a target domain. Existing methods tend to learn the distributions by training a model on a variety of datasets, with results evaluated largely in a subjective manner. Relatively few works in this area, however, study the potential use of semantic image translation methods for image recognition tasks. In this paper, we explore the use of Single Image Texture Translation (SITT) for data augmentation. We first propose a lightweight model for translating texture to images based on a single input of source texture, allowing for fast training and testing. Based on SITT, we then explore the use of augmented data in long-tailed and few-shot image classification tasks. We find the proposed method is capable of translating input data into a target domain, leading to consistently improved image recognition performance. Finally, we examine how SITT and related image translation methods can provide a basis for a data-efficient, augmentation engineering approach to model training.

AB - Recent advances in image synthesis enable one to translate images by learning the mapping between a source domain and a target domain. Existing methods tend to learn the distributions by training a model on a variety of datasets, with results evaluated largely in a subjective manner. Relatively few works in this area, however, study the potential use of semantic image translation methods for image recognition tasks. In this paper, we explore the use of Single Image Texture Translation (SITT) for data augmentation. We first propose a lightweight model for translating texture to images based on a single input of source texture, allowing for fast training and testing. Based on SITT, we then explore the use of augmented data in long-tailed and few-shot image classification tasks. We find the proposed method is capable of translating input data into a target domain, leading to consistently improved image recognition performance. Finally, we examine how SITT and related image translation methods can provide a basis for a data-efficient, augmentation engineering approach to model training.

UR - https://arxiv.org/abs/2106.13804

M3 - Preprint

BT - Single Image Texture Translation for Data Augmentation

ER -
