Available under a Creative Commons Attribution Non-Commercial Share Alike 4.0 International Licence
A pressing limitation of current Image Captioning models is their tendency to produce generic captions that omit the distinctive details that make each image unique. To address this limitation, we propose an approach that enforces a stronger alignment between image regions and specific segments of text. The model architecture is composed of a visual region proposer, a region-order planner, and a region-guided caption generator. The region-guided caption generator incorporates a novel information gate that allows visual and textual inputs of different frequencies and dimensionalities to be combined in a Recurrent Neural Network.
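The abstract's information gate can be illustrated with a minimal sketch. This is not the authors' implementation; all dimensions, weight names, and the gating form below are illustrative assumptions. The sketch shows the core idea: a learned elementwise gate blends a region feature (which changes only when the planner moves to a new region) with a word embedding (which changes at every timestep), after both are projected into a shared hidden space.

```python
import numpy as np

# Hypothetical sketch of an "information gate" fusing a visual region
# vector and a word embedding of different dimensionalities in one RNN
# step. Sizes and weight matrices are illustrative, not from the paper.
rng = np.random.default_rng(0)

D_VIS, D_TXT, D_HID = 512, 300, 256

# Projections bring both modalities into the shared hidden space.
W_vis = rng.standard_normal((D_HID, D_VIS)) * 0.01
W_txt = rng.standard_normal((D_HID, D_TXT)) * 0.01
W_h   = rng.standard_normal((D_HID, D_HID)) * 0.01
W_g   = rng.standard_normal((D_HID, 2 * D_HID)) * 0.01  # gate weights

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_step(h, vis, word):
    """One RNN step: the gate decides, per hidden unit, how much visual
    versus textual information enters the hidden state."""
    v = W_vis @ vis                              # project region feature
    t = W_txt @ word                             # project word embedding
    g = sigmoid(W_g @ np.concatenate([v, t]))    # elementwise gate in (0, 1)
    fused = g * v + (1.0 - g) * t                # gated mixture of modalities
    return np.tanh(W_h @ h + fused)

# The region feature is held fixed across several word timesteps,
# mimicking the different update frequencies of the two inputs.
h = np.zeros(D_HID)
region = rng.standard_normal(D_VIS)
for _ in range(5):                  # five word timesteps, one region
    word = rng.standard_normal(D_TXT)
    h = gated_step(h, region, word)

print(h.shape)
```

The key design point the sketch captures is that the gate operates after projection, so inputs of unequal dimensionality (512 vs. 300 here) can still be mixed elementwise in the hidden space.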
Lindh, A., Ross, R. J., & Kelleher, J. D. (2018). Entity-Grounded Image Captioning. ECCV 2018 Workshop on Shortcomings in Vision and Language (SiVL), Munich, Germany, September 8, 2018. doi:10.21427/D7ZN6Q
ADAPT Centre for Digital Content Technology