Rules Acceptance Deadline: 2 months
Teams: 85
Competitors: 90
Entries: 281
Points: This competition awards standard ranking points
Tiers: This competition counts towards tiers
Tags: image data (data type), computer vision (technique), custom metric (automatic tag)
In this competition, you are asked to take test images and recognize which landmarks (if any) are depicted in them. The training set is available in the train/ folder, with corresponding landmark labels in train.csv. The test set images are listed in the test/ folder. Each image has a unique id. Since there are a large number of images, each image is placed within three subfolders according to the first three characters of the image id (e.g. image abcdef.jpg is placed in a/b/c/abcdef.jpg).
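The three-level directory scheme described above can be sketched as a small helper. The function name `image_path` is hypothetical, not part of the competition's code; it simply follows the layout rule stated in the text:

```python
from pathlib import Path

def image_path(root: str, image_id: str) -> Path:
    """Resolve an image id to its path under the three-level layout,
    using the first three characters of the id as nested folder names,
    e.g. 'abcdef' -> root/a/b/c/abcdef.jpg."""
    a, b, c = image_id[0], image_id[1], image_id[2]
    return Path(root) / a / b / c / f"{image_id}.jpg"

print(image_path("train", "abcdef").as_posix())  # train/a/b/c/abcdef.jpg
```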
This is a synchronous rerun code competition. The provided test set is a representative set of files to demonstrate the format of the private test set. When you submit your notebook, Kaggle will rerun your code on the private dataset. This competition also has two unique characteristics:
- To facilitate recognition-by-retrieval approaches, the private training set contains only a 100k subset of the total public training set. This 100k subset contains all of the training set images associated with the landmarks in the private test set. You may still attach the full training set as an external data set if you wish.
- Submissions are given 12 hours to run, as compared to the site-wide session limit of 9 hours. While your commit must still finish within the 9-hour limit to be eligible for submission, the rerun may take the full 12 hours.
GLDv2
The training data for this competition comes from a cleaned version of the Google Landmarks Dataset v2 (GLDv2), which is available here. Please refer to the paper for more details on the dataset construction and how to use it. See this code example for an example of a pretrained model.
If you make use of this dataset in your research, please consider citing:
"Google Landmarks Dataset v2 - A Large-Scale Benchmark for Instance-Level Recognition and Retrieval", T. Weyand, A. Araujo, B. Cao and J. Sim, Proc. CVPR'20
Data Explorer
Summary: 1.59m files, 4 columns
- test/
- train/
- sample_submission.csv (292.99 KB)
- train.csv
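A minimal sketch of inspecting the label file with pandas. A toy DataFrame stands in for the real train.csv here, and the column names (`id`, `landmark_id`) are an assumption based on the GLDv2 convention, not something stated on this page:

```python
import pandas as pd

# Toy stand-in for train.csv; the real file is assumed to map an image
# `id` to a `landmark_id` (GLDv2 convention -- column names are an assumption).
train = pd.DataFrame({
    "id": ["abcdef", "abc123", "xyz789"],
    "landmark_id": [7, 7, 42],
})

# Images per landmark -- useful for gauging class imbalance before training.
counts = train["landmark_id"].value_counts()
print(counts.to_dict())  # {7: 2, 42: 1}
```

With the real file, the same two lines (`pd.read_csv("train.csv")` followed by `value_counts()`) reveal how skewed the landmark distribution is.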