# PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud

Code release for the paper PointRCNN: 3D Object Proposal Generation and Detection from Point Cloud, CVPR 2019.

Authors: Shaoshuai Shi, Xiaogang Wang, Hongsheng Li.

New: We have provided another implementation of PointRCNN for joint training with multiple classes in a general 3D object detection toolbox.

In this work, we propose the PointRCNN 3D object detector, which directly generates accurate 3D box proposals from the raw point cloud in a bottom-up manner; the proposals are then refined in canonical coordinates using the proposed bin-based 3D box regression loss. To the best of our knowledge, PointRCNN is the first two-stage 3D object detector that uses only the raw point cloud as input. PointRCNN is evaluated on the KITTI dataset and achieves state-of-the-art performance on the KITTI 3D object detection leaderboard among all published works at the time of submission.

For more details of PointRCNN, please refer to our paper or project page.

Faster PointNet++ inference and training are supported by Pointnet2.PyTorch.

All the codes are tested in the following environment:

For the offline GT sampling augmentation, the default setting to train the RCNN network is `RCNN.ROI_SAMPLE_JIT=True`, which means that the RoIs are sampled and their GTs calculated on the GPU. To train the RCNN network in offline mode, pass the saved RoI and feature directories (`output/rpn/default/eval/epoch_200/train_aug/detections/data` and `output/rpn/default/eval/epoch_200/train_aug/features`) via `--rcnn_training_roi_dir` and `--rcnn_training_feature_dir`:

```
python train_rcnn.py --cfg_file cfgs/default.yaml --batch_size 4 --train_mode rcnn_offline --epochs 30 --ckpt_save_interval 1 --rcnn_training_roi_dir output/rpn/default/eval/epoch_200/train_aug/detections/data --rcnn_training_feature_dir output/rpn/default/eval/epoch_200/train_aug/features
```
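As a rough illustration of the RoI-to-GT assignment that `RCNN.ROI_SAMPLE_JIT=True` performs on the GPU, here is a simplified NumPy sketch (not the repository's code): each sampled RoI is matched to the ground-truth box it overlaps most and labeled foreground or background by an IoU threshold. It uses axis-aligned 2D BEV boxes `(x1, y1, x2, y2)` instead of the rotated 3D boxes used in practice, and the 0.55 foreground threshold is an assumed example value, not the repository's configured one.

```python
import numpy as np

def iou_2d(roi, gts):
    """Axis-aligned IoU between one box `roi` and an (N, 4) array of GT boxes."""
    x1 = np.maximum(roi[0], gts[:, 0])
    y1 = np.maximum(roi[1], gts[:, 1])
    x2 = np.minimum(roi[2], gts[:, 2])
    y2 = np.minimum(roi[3], gts[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_roi = (roi[2] - roi[0]) * (roi[3] - roi[1])
    area_gts = (gts[:, 2] - gts[:, 0]) * (gts[:, 3] - gts[:, 1])
    return inter / (area_roi + area_gts - inter)

def assign_gts(rois, gts, fg_thresh=0.55):
    """For each RoI, return the index of its best-overlapping GT and a
    foreground mask (IoU >= fg_thresh)."""
    ious = np.stack([iou_2d(r, gts) for r in rois])  # (num_rois, num_gts)
    best_gt = ious.argmax(axis=1)
    fg_mask = ious.max(axis=1) >= fg_thresh
    return best_gt, fg_mask
```

Foreground RoIs then receive regression targets from their matched GT, while background RoIs contribute only to the classification loss.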
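PointRCNN's bin-based 3D box regression loss splits each localization target into a coarse bin classification plus a fine within-bin residual regression, instead of regressing a single continuous offset. The following is a minimal sketch of that target encoding, with an assumed search range and bin count (the repository's actual hyper-parameters may differ):

```python
import numpy as np

# Example hyper-parameters (assumptions for illustration, not the repo's values).
SEARCH_RANGE = 3.0                      # offsets assumed to lie in [-3, 3)
NUM_BINS = 12
BIN_SIZE = 2 * SEARCH_RANGE / NUM_BINS  # width of each bin

def encode(offset):
    """Map a continuous offset to (bin index, normalized in-bin residual)."""
    shifted = offset + SEARCH_RANGE     # shift into [0, 2 * SEARCH_RANGE)
    bin_idx = int(np.floor(shifted / BIN_SIZE))
    bin_idx = min(max(bin_idx, 0), NUM_BINS - 1)
    # Residual is measured from the bin center and normalized by the bin size.
    residual = (shifted - (bin_idx + 0.5) * BIN_SIZE) / BIN_SIZE
    return bin_idx, residual

def decode(bin_idx, residual):
    """Invert encode(): recover the continuous offset from bin + residual."""
    return (bin_idx + 0.5) * BIN_SIZE + residual * BIN_SIZE - SEARCH_RANGE
```

During training, the bin index is supervised with a classification loss and the residual with a regression loss; constraining the regression target to a small normalized range is what makes the localization more stable than direct offset regression.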