Alex He

...Always keep hope and passion... a rendezvous with a stronger future self...


Machine Reading

The operational warfighter is inundated with information in narrative text, such as reports, email, and chat, and the resulting task overload can preclude timely processing and exploitation of information. Artificial intelligence (AI) offers a promising approach to this problem; however, the cost of hand-crafting information within the narrow confines of first-order logic or other AI formalisms is currently prohibitive for many applications.

The Machine Reading program aims to address this issue by replacing experts and associated knowledge engineers with unsupervised or self-supervised learning systems that can "read" natural text and insert it into AI knowledge bases (i.e., data stores especially encoded to support subsequent machine reasoning). If successful, the Machine Reading program will produce language-understanding technology that will automatically process text in timelines consistent with operational tempo.
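To make the idea of a machine-reasoning-ready knowledge base concrete, here is a minimal sketch: facts are stored as (subject, relation, object) triples, and a simple inference routine chains "is-a" links. The schema, relation name, and example facts are illustrative assumptions, not part of the DARPA program.

```python
# Toy triple-store knowledge base with one reasoning step:
# transitive closure over a hypothetical "is-a" relation.
kb = set()

def tell(subj, rel, obj):
    """Insert one extracted fact into the knowledge base."""
    kb.add((subj, rel, obj))

def ask_isa(subj, obj):
    """Can subj reach obj through a chain of is-a links?"""
    frontier, seen = {subj}, set()
    while frontier:
        node = frontier.pop()
        if node == obj:
            return True
        seen.add(node)
        # Follow outgoing is-a edges we have not visited yet.
        frontier |= {o for (s, r, o) in kb
                     if s == node and r == "is-a" and o not in seen}
    return False

tell("sparrow", "is-a", "bird")
tell("bird", "is-a", "animal")
```

With these two facts, `ask_isa("sparrow", "animal")` succeeds even though that triple was never stored explicitly, which is the point of encoding text as a reasoning-ready store rather than flat annotations.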

The Machine Reading program is in its final phase and is expected to conclude at the end of FY 2012. The program developed and evaluated numerous innovative prototypes and has laid the technical foundation for future research and development in operational-scale language-understanding capabilities.

http://www.darpa.mil/Our_Work/I2O/Programs/Machine_Reading.aspx


The time is ripe for the AI community to set its sights on machine reading—the automatic, unsupervised understanding of text. Over the last two decades or so, natural language processing (NLP) has developed powerful methods for low-level syntactic and semantic text processing tasks such as parsing, semantic role labeling, and text categorization. Over the same period, the fields of machine learning and probabilistic reasoning have yielded important breakthroughs as well. It is now time to investigate how to leverage these advances to understand text.

Machine reading (MR) is very different from current semantic NLP research areas such as information extraction (IE) or question answering (QA). Many NLP tasks utilize supervised learning techniques, which rely on hand-tagged training examples. For example, IE systems often utilize extraction rules learned from example extractions of each target relation. Yet MR is not limited to a small set of target relations. In fact, the relations encountered when reading arbitrary text are not known in advance! Thus, it is impractical to generate a set of hand-tagged examples of each relation of interest. In contrast to many NLP tasks, MR is inherently unsupervised.
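The contrast can be made concrete with a toy sketch of open-style extraction: instead of learned rules for a fixed relation inventory, the extractor pulls whatever (argument, relation phrase, argument) triples the text offers, in the spirit of unsupervised open IE systems such as TextRunner. The capitalized-span heuristic standing in for noun phrases is a deliberate simplification; real systems use POS tagging and learned confidence models.

```python
import re

def open_ie_triples(sentence):
    """Toy open-IE sketch: extract (arg1, relation, arg2) triples
    without any predefined relation inventory.
    Heuristic assumption: runs of capitalized words stand in for
    noun-phrase arguments; the lowercase words between two such
    runs stand in for the relation phrase."""
    np = r"(?:[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)"
    pattern = re.compile(rf"({np})\s+((?:[a-z]+\s+)*[a-z]+)\s+({np})")
    return [(m.group(1), m.group(2), m.group(3))
            for m in pattern.finditer(sentence)]

triples = open_ie_triples("Edison invented the phonograph in Menlo Park")
# → [('Edison', 'invented the phonograph in', 'Menlo Park')]
```

The relation string "invented the phonograph in" was never anticipated by any schema; it is whatever the text happened to assert, which is exactly the regime a supervised, per-relation IE system cannot cover.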

Another important difference is that IE and QA focus on isolated “nuggets” obtained from text whereas MR is about forging and updating connections between beliefs. While MR will build on NLP techniques, it is a holistic process that synthesizes information gleaned from text with the machine’s existing knowledge.

http://www.aaai.org/Press/Reports/Symposia/Spring/ss-07-06.php

Posted on 2013-01-27 20:40 by Alex木头
