TY - GEN
T1 - MAGNet
T2 - 25th International Conference on Pattern Recognition, ICPR 2020
AU - Shrestha, Amar
AU - Pugdeethosapol, Krittaphat
AU - Fang, Haowen
AU - Qiu, Qinru
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2020
Y1 - 2020
N2 - Grounding free-form textual queries necessitates an understanding of these textual phrases and their relation to visual cues in order to reliably reason about the described locations. Spatial attention networks are known to learn this relationship and focus their gaze on salient objects in the image. Thus, we propose to utilize spatial attention networks for image-level visual-textual fusion that preserves local (word) and global (phrase) information, refining region proposals with an in-network Region Proposal Network (RPN) and detecting single or multiple regions for a phrase query. We focus only on the phrase query - ground truth pair (referring expression) so that the model is independent of dataset-specific constraints, i.e., additional attributes, context, etc. On the ReferIt Game referring expression dataset, our Multi-region Attention-assisted Grounding network (MAGNet) achieves over 12% improvement over the state-of-the-art. Without the context from image captions and attribute information in Flickr30k Entities, we still achieve competitive results compared to the state-of-the-art.
AB - Grounding free-form textual queries necessitates an understanding of these textual phrases and their relation to visual cues in order to reliably reason about the described locations. Spatial attention networks are known to learn this relationship and focus their gaze on salient objects in the image. Thus, we propose to utilize spatial attention networks for image-level visual-textual fusion that preserves local (word) and global (phrase) information, refining region proposals with an in-network Region Proposal Network (RPN) and detecting single or multiple regions for a phrase query. We focus only on the phrase query - ground truth pair (referring expression) so that the model is independent of dataset-specific constraints, i.e., additional attributes, context, etc. On the ReferIt Game referring expression dataset, our Multi-region Attention-assisted Grounding network (MAGNet) achieves over 12% improvement over the state-of-the-art. Without the context from image captions and attribute information in Flickr30k Entities, we still achieve competitive results compared to the state-of-the-art.
UR - http://www.scopus.com/inward/record.url?scp=85110479921&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85110479921&partnerID=8YFLogxK
U2 - 10.1109/ICPR48806.2021.9412473
DO - 10.1109/ICPR48806.2021.9412473
M3 - Conference contribution
AN - SCOPUS:85110479921
T3 - Proceedings - International Conference on Pattern Recognition
SP - 8275
EP - 8282
BT - Proceedings of ICPR 2020 - 25th International Conference on Pattern Recognition
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 10 January 2021 through 15 January 2021
ER -