
Communications of the ACM

ACM TechNews

Technology Merges Images, Data and Knowledge to Produce Smarter Searches

[Photo: Latifur Khan and his team of Ph.D. students. Credit: The University of Texas at Dallas]

University of Texas at Dallas researchers are working to improve the results of searches that request multiple pieces of information, such as finding an apartment that is close to several different locations at once. Professor Latifur Khan and a team of graduate students (pictured) are working to create tools that take Web searches to the next level. "The tools we are developing utilize information regarding all the things that various people look for when searching for a place to live," Khan says. He says the key is merging evolving semantic Web technology with geospatial information systems.
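The article does not describe the team's algorithms, but the kind of multi-constraint location query it mentions can be illustrated with a minimal sketch: filter candidate apartments so that each one satisfies every proximity constraint. The coordinates, place names, and distance thresholds below are all hypothetical, and the great-circle (haversine) distance stands in for whatever geospatial reasoning the actual system uses.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6371 km

def near_all(candidate, constraints):
    """True if the candidate satisfies every ((lat, lon), max_km) constraint."""
    lat, lon = candidate
    return all(haversine_km(lat, lon, p_lat, p_lon) <= max_km
               for (p_lat, p_lon), max_km in constraints)

# Hypothetical listings and constraints (illustrative coordinates only).
apartments = {
    "A": (32.99, -96.75),   # near the campus area
    "B": (32.78, -96.80),   # downtown, much farther away
}
constraints = [
    ((32.9857, -96.7502), 3.0),  # within 3 km of a campus (hypothetical point)
    ((32.9800, -96.7700), 5.0),  # within 5 km of a transit stop (hypothetical)
]

matches = [name for name, loc in apartments.items()
           if near_all(loc, constraints)]
```

A real system would resolve place names to coordinates through semantically annotated data sources rather than hard-coded points, which is where the semantic Web layer the researchers describe comes in.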

Combining semantic Web technology with maps, photos, and other visual information could significantly simplify complex searches. Khan's goal is to develop software that extracts useful information from a variety of unstructured databanks and documents. The project is funded by the National Geospatial-Intelligence Agency, the Intelligence Advanced Research Projects Activity, and Raytheon, and is being conducted in collaboration with researchers from the University of Minnesota.

The researchers have already developed a semantic Web framework called Discovering Annotated Geospatial Information Services to handle queries related to police blotter data, and they are now developing algorithms to integrate additional sources of geospatial data.

From The University of Texas at Dallas

Abstracts Copyright © 2009 Information Inc., Bethesda, Maryland, USA


