Leveraging tensor kernels to reduce objective function mismatch in deep clustering

Publication: Contribution to journal › Journal article › Research › Peer-reviewed

Documents

  • Fulltext

    Publisher's published version, 1.61 MB, PDF document

Objective Function Mismatch (OFM) occurs when the optimization of one objective has a negative impact on the optimization of another objective. In this work we study OFM in deep clustering, and find that the popular autoencoder-based approach to deep clustering can lead both to reduced clustering performance and to a significant amount of OFM between the reconstruction and clustering objectives. To reduce the mismatch while maintaining the structure-preserving property of an auxiliary objective, we propose a set of new auxiliary objectives for deep clustering, referred to as the Unsupervised Companion Objectives (UCOs). The UCOs rely on a kernel function to formulate a clustering objective on intermediate representations in the network. Generally, intermediate representations can include other dimensions, for instance spatial or temporal, in addition to the feature dimension. We therefore argue that the naïve approach of vectorizing and applying a vector kernel is suboptimal for such representations, as it ignores the information contained in the other dimensions. To address this drawback, we equip the UCOs with structure-exploiting tensor kernels, designed for tensors of arbitrary rank. The UCOs can thus be adapted to a broad class of network architectures. We also propose a novel, regression-based measure of OFM, allowing us to accurately quantify the amount of OFM observed during training. Our experiments show that the OFM between the UCOs and the main clustering objective is lower than in a comparable autoencoder-based model. Further, we illustrate that the UCOs improve the clustering performance of the model, in contrast to the autoencoder-based approach. The code for our experiments is available at https://github.com/danieltrosten/tk-uco.
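To make the vectorization argument concrete, the sketch below contrasts a naïve vector kernel on flattened representations with a simple structure-aware alternative that compares feature vectors location by location across the spatial axes. This is only an illustrative assumption for intuition: the function names, the RBF choice, and the per-location averaging are hypothetical, not the tensor kernels defined in the paper (see the linked repository for the actual implementation).

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    # Standard RBF kernel on flat vectors.
    sq_dist = np.sum((x - y) ** 2)
    return np.exp(-sq_dist / (2.0 * sigma ** 2))

def naive_kernel(X, Y, sigma=1.0):
    # Naïve approach: flatten the (H, W, C) tensors and apply a
    # vector kernel, discarding the spatial arrangement entirely.
    return rbf(X.ravel(), Y.ravel(), sigma)

def structured_kernel(X, Y, sigma=1.0):
    # Structure-aware sketch: evaluate the RBF kernel on the
    # C-dimensional feature vector at each spatial position and
    # average, so spatial layout influences the comparison.
    H, W, _ = X.shape
    vals = [rbf(X[i, j], Y[i, j], sigma)
            for i in range(H) for j in range(W)]
    return float(np.mean(vals))

# Two random rank-3 intermediate representations.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 4, 3))
Y = rng.normal(size=(4, 4, 3))

print(naive_kernel(X, X))       # self-similarity is 1.0
print(structured_kernel(X, Y))  # lies in (0, 1]
```

Both kernels agree that a tensor is maximally similar to itself; they differ in how cross-tensor similarity is aggregated, which is the degree of freedom the paper's structure-exploiting tensor kernels are designed around.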

Original language: English
Article number: 110229
Journal: Pattern Recognition
Volume: 149
Number of pages: 10
ISSN: 0031-3203
DOI
Status: Published - 2024

Bibliographic note

Funding Information:
This work was financially supported by the Research Council of Norway (RCN), through its Centre for Research-based Innovation funding scheme (Visual Intelligence, grant no. 309439), and Consortium Partners. It was further funded by RCN FRIPRO, Norway grant no. 315029, RCN IKTPLUSS, Norway grant no. 303514, and the UiT Thematic Initiative, Norway “Data-Driven Health Technology”.

Publisher Copyright:
© 2023 The Authors

ID: 380422262