TY - GEN
T1 - Object classification from 3D volumetric data with 3D capsule networks
AU - Kakillioglu, Burak
AU - Ahmad, Ayesha
AU - Velipasalar, Senem
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - The proliferation of 3D sensors has spurred 3D computer vision research for many application areas, including virtual reality, autonomous navigation, and surveillance. Recently, different methods have been proposed for 3D object classification. Many of the existing 2D and 3D classification methods rely on convolutional neural networks (CNNs), which are very successful in extracting features from the data. However, CNNs cannot sufficiently capture the spatial relationships between features due to their max-pooling layers, and they require vast amounts of training data. In this paper, we propose a model architecture for 3D object classification, which is an extension of Capsule Networks (CapsNets) to 3D data. Our proposed architecture, called 3D CapsNet, takes advantage of the fact that a CapsNet preserves the orientation and spatial relationships of the extracted features, and thus requires less data to train the network. We compare our approach with ShapeNet on the ModelNet database, and show that our method provides a performance improvement, especially as the training data size decreases.
AB - The proliferation of 3D sensors has spurred 3D computer vision research for many application areas, including virtual reality, autonomous navigation, and surveillance. Recently, different methods have been proposed for 3D object classification. Many of the existing 2D and 3D classification methods rely on convolutional neural networks (CNNs), which are very successful in extracting features from the data. However, CNNs cannot sufficiently capture the spatial relationships between features due to their max-pooling layers, and they require vast amounts of training data. In this paper, we propose a model architecture for 3D object classification, which is an extension of Capsule Networks (CapsNets) to 3D data. Our proposed architecture, called 3D CapsNet, takes advantage of the fact that a CapsNet preserves the orientation and spatial relationships of the extracted features, and thus requires less data to train the network. We compare our approach with ShapeNet on the ModelNet database, and show that our method provides a performance improvement, especially as the training data size decreases.
KW - 3D object
KW - Capsule networks
KW - Classification
KW - Deep learning
KW - ModelNet
UR - http://www.scopus.com/inward/record.url?scp=85063086612&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85063086612&partnerID=8YFLogxK
U2 - 10.1109/GlobalSIP.2018.8646333
DO - 10.1109/GlobalSIP.2018.8646333
M3 - Conference contribution
AN - SCOPUS:85063086612
T3 - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
SP - 385
EP - 389
BT - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2018 IEEE Global Conference on Signal and Information Processing, GlobalSIP 2018
Y2 - 26 November 2018 through 29 November 2018
ER -