Microsoft Applied Robotics Research Library

Affordance-based grasp-type recognizer

This repository provides the source code for affordance-based grasp-type recognition, proposed in the paper cited below.

Installation:

pip install . -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html --use-feature=in-tree-build

Both CUDA and CPU are supported. We recommend using CUDA.
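
To confirm after installation whether a CUDA device is visible to PyTorch, a generic PyTorch check (not part of this repository) works:

import torch

# True if a CUDA-capable GPU is usable; the package also runs on CPU.
print('CUDA available:', torch.cuda.is_available())
if torch.cuda.is_available():
    print('GPU:', torch.cuda.get_device_name(0))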

Usage:

Recognize a grasp type from an image

from arr_gtr.grasp import GraspTypeRecognitionModule, grasp_types
import numpy as np
from PIL import Image

nn = GraspTypeRecognitionModule()
# To load custom weights instead of the default model:
# nn = GraspTypeRecognitionModule(pretrained_model=<path to a pth model>)

img = np.asarray(Image.open('sample/image.jpg'))
result = nn.inference(img)
print('Inference result (without affordance): ' + grasp_types[np.argmax(result)])

result = nn.inference_with_affordance(img, 'Apple', affordance_type='varied')
print('Inference result (inference with varied affordance): ' + grasp_types[np.argmax(result)])

result = nn.inference_with_affordance(img, 'Apple', affordance_type='uniformal')
print('Inference result (inference with uniformal affordance): ' + grasp_types[np.argmax(result)])

result = nn.inference_from_affordance('Apple', affordance_type='varied')
print('Inference result (inference only with varied affordance): ' + grasp_types[np.argmax(result)])

result = nn.inference_from_affordance('Apple', affordance_type='uniformal')
print('Inference result (inference only with uniformal affordance): ' + grasp_types[np.argmax(result)])
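
Each inference call returns a score vector over the supported grasp types, and np.argmax picks the top one. To inspect the full ranking instead, a short sketch (assuming the result aligns index-by-index with grasp_types):

scores = np.ravel(result)  # flatten in case a batch dimension is present
# Print every grasp type from highest to lowest score.
for idx in np.argsort(scores)[::-1]:
    print(grasp_types[idx], float(scores[idx]))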

Create or pretrain a model

from arr_gtr.grasp import ModelTrainingModule
from arr_gtr.grasp import grasp_types

# The dataset directory must follow the structure described below.
nn = ModelTrainingModule(<path to a dataset directory>, pretrained_model_path=None, batch_size=512)
nn.train(num_epochs=200, save_path=<path to a pth model, e.g., './out/model.pth'>)

nn.visualize_result()
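
A model saved by train() can then be passed back to the recognizer through the pretrained_model argument shown earlier; a minimal sketch, assuming the saved pth file is compatible:

from arr_gtr.grasp import GraspTypeRecognitionModule, grasp_types
import numpy as np
from PIL import Image

# Load the weights written by ModelTrainingModule.train() (example path).
nn = GraspTypeRecognitionModule(pretrained_model='./out/model.pth')
img = np.asarray(Image.open('sample/image.jpg'))
result = nn.inference(img)
print('Top grasp type: ' + grasp_types[np.argmax(result)])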

Preparing your own dataset

The expected dataset structure is as follows:

dataset_path
├── train
│   ├── grasp_type (e.g., AdductedThumb)
│   │   ├── 1.jpg (any file name is fine)
│   │   ├── 2.jpg
│   │   └── ...
└── valid
    ├── grasp_type
    │   ├── 1.jpg
    │   ├── 2.jpg
    │   └── ...
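
To scaffold this layout before copying images in, a small helper like the following can be used (a sketch; it assumes the per-class folder names should match the entries of grasp_types, as in the AdductedThumb example above):

from pathlib import Path
from arr_gtr.grasp import grasp_types

def scaffold_dataset(root):
    # Create one folder per grasp type under train/ and valid/.
    for split in ('train', 'valid'):
        for grasp_type in grasp_types:
            Path(root, split, grasp_type).mkdir(parents=True, exist_ok=True)

scaffold_dataset('dataset_path')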

Citation:

@article{wake2021object,
  title={Object affordance as a guide for grasp-type recognition},
  author={Wake, Naoki and Saito, Daichi and Sasabuchi, Kazuhiro and Koike, Hideki and Ikeuchi, Katsushi},
  journal={arXiv preprint arXiv:2103.00268},
  year={2021}
}

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.