Cross-view image geolocalization

Research output: Contribution to journal › Conference article › Research › peer-review

Standard

Cross-view image geolocalization. / Lin, Tsung Yi; Belongie, Serge; Hays, James.

In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2013, p. 891-898.

Harvard

Lin, TY, Belongie, S & Hays, J 2013, 'Cross-view image geolocalization', Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 891-898. https://doi.org/10.1109/CVPR.2013.120

APA

Lin, T. Y., Belongie, S., & Hays, J. (2013). Cross-view image geolocalization. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 891-898. [6618964]. https://doi.org/10.1109/CVPR.2013.120

Vancouver

Lin TY, Belongie S, Hays J. Cross-view image geolocalization. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2013;891-898. 6618964. https://doi.org/10.1109/CVPR.2013.120

Author

Lin, Tsung Yi ; Belongie, Serge ; Hays, James. / Cross-view image geolocalization. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2013 ; pp. 891-898.

Bibtex

@inproceedings{501b7fe469ad49a1a15d9b714d90e346,
title = "Cross-view image geolocalization",
abstract = "The recent availability of large amounts of geotagged imagery has inspired a number of data driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km$^{2}$ region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest.",
author = "Lin, {Tsung Yi} and Serge Belongie and James Hays",
year = "2013",
doi = "10.1109/CVPR.2013.120",
language = "English",
pages = "891--898",
booktitle = "IEEE Conference on Computer Vision and Pattern Recognition. Proceedings",
issn = "1063-6919",
publisher = "Institute of Electrical and Electronics Engineers",
note = "26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013 ; Conference date: 23-06-2013 Through 28-06-2013",

}

RIS

TY - GEN

T1 - Cross-view image geolocalization

AU - Lin, Tsung Yi

AU - Belongie, Serge

AU - Hays, James

PY - 2013

Y1 - 2013

N2 - The recent availability of large amounts of geotagged imagery has inspired a number of data driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km2 region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest.

AB - The recent availability of large amounts of geotagged imagery has inspired a number of data driven solutions to the image geolocalization problem. Existing approaches predict the location of a query image by matching it to a database of georeferenced photographs. While there are many geotagged images available on photo sharing and street view sites, most are clustered around landmarks and urban areas. The vast majority of the Earth's land area has no ground level reference photos available, which limits the applicability of all existing image geolocalization methods. On the other hand, there is no shortage of visual and geographic data that densely covers the Earth - we examine overhead imagery and land cover survey data - but the relationship between this data and ground level query photographs is complex. In this paper, we introduce a cross-view feature translation approach to greatly extend the reach of image geolocalization methods. We can often localize a query even if it has no corresponding ground level images in the database. A key idea is to learn the relationship between ground level appearance and overhead appearance and land cover attributes from sparsely available geotagged ground-level images. We perform experiments over a 1600 km2 region containing a variety of scenes and land cover types. For each query, our algorithm produces a probability density over the region of interest.

UR - http://www.scopus.com/inward/record.url?scp=84887356836&partnerID=8YFLogxK

U2 - 10.1109/CVPR.2013.120

DO - 10.1109/CVPR.2013.120

M3 - Conference article

AN - SCOPUS:84887356836

SP - 891

EP - 898

JO - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

JF - IEEE Conference on Computer Vision and Pattern Recognition. Proceedings

SN - 1063-6919

M1 - 6618964

T2 - 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013

Y2 - 23 June 2013 through 28 June 2013

ER -