Google Landmarks 2020 - Triplet Loss tfrecords
License: CC-BY-SA 4.0

Description

Context

The latest Google Landmark Retrieval competition comes with a very large dataset (around 1.5 million images) and requires participants to submit notebooks. TPUs are a great way to train models quickly on that volume of data, and to realise the full potential of a TPU with TensorFlow it is worth feeding it the data as TFRecords.
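As a rough illustration of that point, here is a minimal sketch of how you might initialise a TPU in a notebook and stream TFRecord files into a tf.data pipeline. The bucket path and batch size are placeholders, not part of this dataset.

```python
import tensorflow as tf

# Standard TPU initialisation inside a Kaggle/Colab notebook (a sketch;
# the resolver configuration depends on the runtime you are using).
resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)
strategy = tf.distribute.TPUStrategy(resolver)

# Placeholder path -- point this at wherever the tfrecords are stored.
filenames = tf.io.gfile.glob("gs://your-bucket/landmark-triplets/*.tfrec")

# Read records in parallel and prefetch so the TPU is never starved for data.
dataset = (
    tf.data.TFRecordDataset(filenames, num_parallel_reads=tf.data.AUTOTUNE)
    .batch(128, drop_remainder=True)  # TPUs need a static batch shape
    .prefetch(tf.data.AUTOTUNE)
)
```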

Content

This dataset contains a sample of the total dataset, transformed into TFRecords. As I created it for use with a model that uses triplet loss, each example contains three images (i.e. a triplet). If you'd like to find out more about how the dataset was formed, you can check out the notebook I used to create it here.
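To give a sense of how a triplet example might be consumed, here is a hedged parsing sketch. The feature keys ("anchor", "positive", "negative") and the image size are assumptions for illustration only; check the creation notebook for the actual schema.

```python
import tensorflow as tf

# Assumed feature names for the three images in each example -- verify
# against the notebook that generated the tfrecords.
FEATURE_DESCRIPTION = {
    "anchor":   tf.io.FixedLenFeature([], tf.string),
    "positive": tf.io.FixedLenFeature([], tf.string),
    "negative": tf.io.FixedLenFeature([], tf.string),
}

def parse_triplet(serialized_example):
    example = tf.io.parse_single_example(serialized_example, FEATURE_DESCRIPTION)
    # Each field is assumed to hold encoded JPEG bytes; decode and resize
    # to a fixed shape so batches have static dimensions.
    images = [
        tf.image.resize(tf.image.decode_jpeg(example[key], channels=3), (224, 224))
        for key in ("anchor", "positive", "negative")
    ]
    return tuple(images)

# Usage: triplets = tf.data.TFRecordDataset(filenames).map(
#     parse_triplet, num_parallel_calls=tf.data.AUTOTUNE)
```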

Acknowledgements

The notebook I used to create this dataset was largely inspired by Chris Deotte's notebook, so this is me saying thanks.

Data Summary
Data format: image
Data volume: 16
File size: 479.49 MB
Published by: Matt