Stanford University professor Chris Manning is working to enable computers to process human language well enough to use the information it conveys. Manning says that as computers improve, and are better able to understand online content, they will be able to deliver more relevant search results and help summarize and act on information that is most important to the user. He says the fundamental challenge is getting computers to understand at least a reasonable amount of what they read.
Manning, along with Stanford professor Dan Jurafsky, has been developing software that uses probabilistic machine learning to understand sentences by recognizing parts of speech and sentence structure. The researchers have also created software that examines a word's context to decide what the word means; the technique, called joint inference, looks for other words in the sentence that statistical evidence shows to be relevant.
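The idea of choosing a word's meaning from statistically relevant context words can be sketched in a few lines. The sense labels, context counts, and scoring rule below are entirely illustrative, not the Stanford software's model:

```python
from collections import defaultdict

# Toy word-sense disambiguation by context. Each sense of the ambiguous
# word "bank" is paired with counts of context words seen alongside it
# in hypothetical training text.
sense_context_counts = {
    "bank/finance": {"money": 12, "loan": 9, "deposit": 7, "river": 1},
    "bank/river":   {"river": 11, "water": 8, "fishing": 5, "money": 1},
}

def disambiguate(context_words):
    """Pick the sense whose context distribution best matches the
    observed context words (a simple additive score, not a full
    probabilistic model)."""
    scores = defaultdict(float)
    for sense, counts in sense_context_counts.items():
        total = sum(counts.values())
        for w in context_words:
            # Add-one smoothing so an unseen word doesn't zero out a sense.
            scores[sense] += (counts.get(w, 0) + 1) / (total + len(counts))
    return max(scores, key=scores.get)

print(disambiguate(["loan", "money"]))   # favors the finance sense
print(disambiguate(["river", "water"]))  # favors the river sense
```

The point is only that surrounding words carry statistical evidence about meaning; a real system would learn these counts from large corpora and combine them with parse structure.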
Another technology under development is robust textual inference, which can read a passage of text and determine whether a stated conclusion is supported by it.
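A crude baseline conveys the shape of the textual-inference task: count the hypothesis as supported if most of its words also appear in the passage. The tokenizer and threshold below are arbitrary illustrative choices, not the Stanford system's method:

```python
def tokens(text):
    """Lowercase word tokens; real systems use proper tokenizers and parsers."""
    return set(text.lower().replace(".", "").split())

def is_supported(passage, hypothesis, threshold=0.75):
    """Return True if at least `threshold` of the hypothesis words
    also occur in the passage (a word-overlap baseline)."""
    p, h = tokens(passage), tokens(hypothesis)
    if not h:
        return False
    return len(h & p) / len(h) >= threshold

passage = "Stanford researchers built software that reads text and checks conclusions."
print(is_supported(passage, "Researchers built software."))  # True
print(is_supported(passage, "The software wrote a novel."))  # False
```

Word overlap is a weak signal (it ignores negation and word order entirely), which is why robust textual inference is framed as a research problem rather than a solved one.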
Abstracts Copyright © 2010 Information Inc., Bethesda, Maryland, USA