CrowdPose
Key Points
Pose Estimation
License: Unknown

Overview

Multi-person pose estimation is fundamental to many computer vision tasks and has made significant progress in recent years. However, few previous methods have explored pose estimation in crowded scenes, even though crowding is unavoidable in many scenarios and remains challenging. Moreover, current benchmarks cannot provide an appropriate evaluation for such cases. In this paper, we propose a novel and efficient method to tackle the problem of pose estimation in the crowd, together with a new dataset to better evaluate algorithms. Our model consists of two key components: joint-candidate single person pose estimation (SPPE) and global maximum joints association. With multi-peak prediction for each joint and global association using a graph model, our method is robust to the inevitable interference in crowded scenes and very efficient at inference. The proposed method surpasses the state-of-the-art methods on the CrowdPose dataset by 5.2 mAP, and results on the MSCOCO dataset demonstrate the generalization ability of our method. Source code and dataset will be made publicly available.
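
The global association step can be read as an assignment problem: each joint candidate should be matched to at most one person so that the total connection score is maximized, rather than being attached greedily to the nearest proposal. The snippet below is only a toy sketch of that idea using maximum-weight bipartite matching from SciPy; it is not the paper's graph model, and the score matrix in it is entirely made up.

import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical connection scores: scores[i, j] is how well joint candidate i
# fits person proposal j. These numbers are invented for illustration only.
scores = np.array([
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.1],
    [0.1, 0.3, 0.7],
    [0.6, 0.5, 0.1],   # an extra candidate competing for persons 0 and 1
])

# Maximizing the total score is the same as minimizing its negation; the
# matching is solved globally instead of greedily per person, which is the
# point of a global association step.
cand_idx, person_idx = linear_sum_assignment(-scores)
for c, p in zip(cand_idx, person_idx):
    print(f"joint candidate {c} -> person {p} (score {scores[c, p]:.2f})")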

Instructions

We provide evaluation tools for the CrowdPose dataset. Our evaluation tools are developed based on @cocodataset/cocoapi. The source code of our model has been integrated into AlphaPose.

CrowdPose API (based on COCO API)

To install:

  • For Python, run "sh install.sh" under coco/PythonAPI
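
Once installed, evaluation follows the familiar COCO API workflow. The sketch below shows how keypoint results might be scored against the CrowdPose annotations; the module name (crowdposetools) and the file paths are assumptions made for illustration and should be adapted to the installed package.

from crowdposetools.coco import COCO          # assumed import name (pycocotools-style)
from crowdposetools.cocoeval import COCOeval

# Hypothetical paths to the ground-truth annotations and to detection results
# saved in COCO keypoint format.
gt_path = "annotations/crowdpose_val.json"
dt_path = "results/keypoint_predictions.json"

coco_gt = COCO(gt_path)               # load ground truth
coco_dt = coco_gt.loadRes(dt_path)    # load predicted keypoints

# Standard COCO-style keypoint evaluation: match, accumulate, summarize.
evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()                 # prints the AP / AR summary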

Citation

If you find our work useful in your research, please consider citing:

@article{li2018crowdpose,
  title={CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark},
  author={Li, Jiefeng and Wang, Can and Zhu, Hao and Mao, Yihuan and Fang, Hao-Shu and Lu, Cewu},
  journal={arXiv preprint arXiv:1812.00324},
  year={2018}
} 

Data Summary

Data format: image
Data volume: 20K
File size: 2.27 GB
Publisher: Machine Vision and Intelligence Group @ SJTU

Our research group works on cutting-edge artificial intelligence and deep learning research, publishes its results at top academic conferences such as CVPR/ICCV/NIPS/ECCV, and collaborates with the world's leading laboratories, including the Stanford Vision Lab, MIT AI Lab, and DeepMind.