
Link of the Paper: http://papers.nips.cc/paper/4470-im2text-describing-images-using-1-million-captioned-photographs.pdf

Main Points:

  1. A large, novel data set of images collected from the web with associated captions written by people, filtered so that the descriptions are likely to refer to visual content.
  2. A description generation method that utilizes global image representations to retrieve and transfer captions from their data set to a query image: the authors compute the global similarity (a sum of gist similarity and tiny-image color similarity) between the query image and their large web collection of captioned images, find the closest matching image (or images), and simply transfer the matching image's description to the query image (see the sketch after this list).
  3. A description generation method that utilizes both global representations and direct estimates of image content (objects, actions, stuff, attributes, and scenes) to produce relevant image descriptions.
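A minimal sketch of the retrieve-and-transfer step in point 2, assuming each image in the collection is stored as a dict with a precomputed gist descriptor, a tiny-image color descriptor, and its caption; the descriptor helper and field names here are illustrative, not the authors' code.

```python
import numpy as np
from PIL import Image

def tiny_image_descriptor(image_array, size=32):
    # Downsample to a small fixed-size color image and flatten it; this crude
    # "tiny image" captures coarse color and layout information.
    small = Image.fromarray(image_array).convert("RGB").resize((size, size))
    vec = np.asarray(small, dtype=np.float32).ravel()
    return vec / (np.linalg.norm(vec) + 1e-8)

def global_similarity(query, candidate):
    # Global similarity = gist similarity + tiny-image color similarity.
    # Both descriptors are assumed to be L2-normalized, so a dot product
    # serves as the similarity measure.
    return float(np.dot(query["gist"], candidate["gist"]) +
                 np.dot(query["tiny"], candidate["tiny"]))

def transfer_captions(query, collection, k=1):
    # Rank the captioned web collection by global similarity to the query
    # image and transfer the caption(s) of the closest match(es).
    ranked = sorted(collection, key=lambda c: global_similarity(query, c), reverse=True)
    return [c["caption"] for c in ranked[:k]]
```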

Other Key Points:

  1. Image captioning will help advance progress toward more complex human recognition goals, such as telling the story behind an image.
  2. An approach from "Every Picture Tells a Story: Generating Sentences from Images" produces image descriptions via a retrieval method, by translating both images and text descriptions to a shared meaning space represented by a single <object, action, scene> tuple. A description for a query image is produced by retrieving whole image descriptions via this meaning space from a set of existing descriptions (see the sketch after this list).
  3. The authors' retrieval method relies on collecting and filtering a large data set of images from the internet to produce a novel web-scale captioned photo collection.
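A minimal sketch of the shared meaning-space retrieval described in point 2 above, with the <object, action, scene> tuple reduced to exact string matching; the class and the matching rule are illustrative stand-ins for the learned mapping used in that paper.

```python
from dataclasses import dataclass

@dataclass
class Meaning:
    # A point in the shared <object, action, scene> meaning space.
    obj: str
    action: str
    scene: str

def meaning_similarity(a: Meaning, b: Meaning) -> int:
    # Count how many of the three slots agree; the original work scores
    # tuples with learned potentials rather than exact matches.
    return (a.obj == b.obj) + (a.action == b.action) + (a.scene == b.scene)

def retrieve_description(query_meaning: Meaning, description_pool):
    # description_pool: list of (Meaning, sentence) pairs built from existing
    # image descriptions; return the sentence whose meaning tuple best
    # matches the tuple predicted for the query image.
    _, best_sentence = max(description_pool,
                           key=lambda pair: meaning_similarity(query_meaning, pair[0]))
    return best_sentence

# Hypothetical usage:
pool = [(Meaning("dog", "run", "park"), "A dog runs through the park."),
        (Meaning("boat", "sail", "sea"), "A small boat sails on the sea.")]
print(retrieve_description(Meaning("dog", "play", "park"), pool))
```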
posted on 2018-08-10 14:54 by LZ_Jaja