Information retrieval
Information retrieval (IR) is the science of searching for information in documents, searching for documents themselves, searching for metadata that describe documents, or searching within databases, whether stand-alone relational databases or hypertext networked databases such as the Internet or intranets, for text, sound, images or data. Data retrieval, document retrieval, information retrieval, and text retrieval are often confused with one another, although each has its own body of literature, theory, praxis and technology.
The term "information retrieval" was coined by Calvin Mooers in 1948-50.
IR is a broad interdisciplinary field that draws on many other disciplines. Indeed, because it is so broad, it is often poorly understood, typically being approached from only one perspective or another. It stands at the junction of many established fields and draws upon cognitive psychology, information architecture, information design, human information behaviour, linguistics, semiotics, information science, computer science, librarianship and statistics.
Automated information retrieval (IR) systems were originally developed to manage the information explosion in scientific literature over the last few decades. Many universities and public libraries use IR systems to provide access to books, journals, and other documents. An IR system revolves around two notions: objects and queries. A query is a formal statement of an information need that a user puts to the IR system. An object is an entity that stores information in a database; user queries are matched against objects stored there. A document is therefore a data object. Often the documents themselves are not stored directly in the IR system, but are instead represented in the system by document surrogates.
In 1992 the US Department of Defense, together with the National Institute of Standards and Technology (NIST), cosponsored the Text Retrieval Conference (TREC) as part of the TIPSTER text program. Its aim was to support the information retrieval community by supplying the infrastructure needed for large-scale evaluation of text retrieval methodologies.
Web search engines such as Google and Lycos are amongst the most visible applications of information retrieval research.
Performance measures
There are various ways to measure how well the retrieved information matches the information the user intended to find:
Precision
The proportion of retrieved documents that are relevant, out of all documents retrieved:

precision = |{relevant documents} ∩ {retrieved documents}| / |{retrieved documents}|
In binary classification, precision is analogous to positive predictive value. Precision can also be evaluated at a given cut-off rank, denoted P@n, instead of all retrieved documents.
Note that the meaning and usage of "precision" in the field of Information Retrieval differs from the definition of accuracy and precision within other branches of science and technology.
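As a concrete illustration (a minimal sketch; the function names and example data are ours, not part of any standard library), precision and P@n can be computed from a ranked result list and a set of relevant document identifiers:

```python
def precision(retrieved, relevant):
    """Fraction of the retrieved documents that are relevant."""
    retrieved = list(retrieved)
    if not retrieved:
        return 0.0
    hits = sum(1 for doc in retrieved if doc in relevant)
    return hits / len(retrieved)

def precision_at(n, ranked, relevant):
    """Precision computed over only the top-n ranked results (P@n)."""
    return precision(ranked[:n], relevant)

ranked = ["d3", "d1", "d7", "d2", "d5"]
relevant = {"d1", "d2", "d9"}
print(precision(ranked, relevant))        # 2 relevant among 5 retrieved -> 0.4
print(precision_at(2, ranked, relevant))  # top 2 are d3, d1: 1 of 2 -> 0.5
```

Note that P@n ignores everything below the cut-off, which is why it is popular for evaluating the first page of web search results.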
Recall
The proportion of relevant documents that are retrieved, out of all relevant documents available:

recall = |{relevant documents} ∩ {retrieved documents}| / |{relevant documents}|
In binary classification, recall is called sensitivity.
Fall-Out
The proportion of irrelevant documents that are retrieved, out of all irrelevant documents available:

fall-out = |{irrelevant documents} ∩ {retrieved documents}| / |{irrelevant documents}|
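A minimal sketch of recall and fall-out over a small collection (function names and example data are ours; note that fall-out, unlike precision and recall, needs the whole collection so the irrelevant set is known):

```python
def recall(retrieved, relevant):
    """Fraction of all relevant documents that were retrieved."""
    if not relevant:
        return 0.0
    hits = sum(1 for doc in relevant if doc in set(retrieved))
    return hits / len(relevant)

def fall_out(retrieved, relevant, collection):
    """Fraction of the irrelevant documents in the collection that were retrieved."""
    irrelevant = set(collection) - set(relevant)
    if not irrelevant:
        return 0.0
    return sum(1 for doc in set(retrieved) if doc in irrelevant) / len(irrelevant)

collection = ["d%d" % i for i in range(1, 11)]   # 10 documents d1..d10
retrieved = ["d3", "d1", "d7", "d2", "d5"]
relevant = {"d1", "d2", "d9"}
print(recall(retrieved, relevant))                # 2 of 3 relevant found -> 2/3
print(fall_out(retrieved, relevant, collection))  # 3 of 7 irrelevant retrieved -> 3/7
```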
F-measure
The weighted harmonic mean of precision and recall, the traditional F-measure or balanced F-score, is:

F = 2 · (precision · recall) / (precision + recall)

This is also known as the F_1 measure, because recall and precision are evenly weighted.
The general formula for non-negative real β is:

F_β = (1 + β²) · (precision · recall) / (β² · precision + recall)

Two other commonly used F measures are the F_0.5 measure, which weights precision twice as much as recall, and the F_2 measure, which weights recall twice as much as precision.
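A small sketch of the F-measure family (our own function name; the β convention weights recall β times as much as precision, so the recall-favouring variant scores higher when recall exceeds precision):

```python
def f_beta(precision, recall, beta=1.0):
    """F-measure: weighted harmonic mean of precision and recall.

    beta > 1 weights recall more heavily; beta < 1 weights precision more.
    """
    if precision == 0 and recall == 0:
        return 0.0
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

p, r = 0.4, 2 / 3
print(f_beta(p, r))        # balanced F1 -> 0.5
print(f_beta(p, r, 2.0))   # F2: favours recall, so higher here
print(f_beta(p, r, 0.5))   # F0.5: favours precision, so lower here
```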
Mean average precision
Over a set of queries, take the mean of the average precision per query, where average precision is the average of the precision values obtained after each relevant document is retrieved:

AveP = ( Σ_{r=1..N} P(r) × rel(r) ) / (number of relevant documents)

where r is the rank, N the number of documents retrieved, rel(r) a binary function indicating the relevance of the document at rank r, and P(r) the precision at cut-off rank r.
This method emphasizes returning more relevant documents earlier.
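A sketch of average precision and MAP (function names and toy data are ours; we normalise by the total number of relevant documents, as in the usual TREC definition, so relevant documents that are never retrieved pull the score down):

```python
def average_precision(ranked, relevant):
    """Average of P@r taken at each rank r where a relevant document appears."""
    hits, total = 0, 0.0
    for r, doc in enumerate(ranked, start=1):
        if doc in relevant:
            hits += 1
            total += hits / r          # precision at this cut-off
    return total / len(relevant) if relevant else 0.0

def mean_average_precision(runs):
    """runs: iterable of (ranked_list, relevant_set) pairs, one per query."""
    runs = list(runs)
    return sum(average_precision(rk, rel) for rk, rel in runs) / len(runs)

# Relevant docs found at ranks 2 and 4, one relevant doc missed:
# AP = (1/2 + 2/4) / 3 = 1/3
print(average_precision(["d3", "d1", "d7", "d2", "d5"], {"d1", "d2", "d9"}))
```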
Model types

For successful IR, it is necessary to represent the documents in some way. There are a number of models for this purpose. They can be classified along two dimensions: the mathematical basis and the properties of the model (a categorization due to Dominik Kuropka).
First dimension: mathematical basis
- Set-theoretic models represent documents as sets. Similarities are usually derived from set-theoretic operations on those sets. Common models are:
  - Standard Boolean model
  - Extended Boolean model
  - Fuzzy retrieval
- Algebraic models represent documents and queries as vectors, matrices or tuples, which are transformed by a finite number of algebraic operations into a one-dimensional similarity measure. Common models are:
  - Vector space model
  - Generalized vector space model
  - Topic-based vector space model (literature: [1], [2])
  - Enhanced topic-based vector space model (literature: [3], [4])
  - Latent semantic indexing, also known as latent semantic analysis
- Probabilistic models treat the process of document retrieval as a multistage random experiment, so similarities are represented as probabilities. Probabilistic theorems such as Bayes' theorem are often used in these models. Common models are:
  - Binary independence retrieval
  - Uncertain inference
  - Language models
  - Divergence-from-randomness models
Second dimension: properties of the model
- Models without term interdependencies treat different terms/words as independent. This is usually represented in vector space models by the orthogonality assumption for term vectors, and in probabilistic models by an independence assumption for term variables.
- Models with immanent term interdependencies allow a representation of interdependencies between terms. However, the degree of interdependency between two terms is defined by the model itself, usually derived directly or indirectly (e.g. by dimensionality reduction) from the co-occurrence of those terms in the whole set of documents.
- Models with transcendent term interdependencies allow a representation of interdependencies between terms, but do not allege how the interdependency between two terms is defined; they rely on an external source (for example, a human or a sophisticated algorithm) for the degree of interdependency between two terms.
Open source information retrieval systems
- ASPseek
- DataparkSearch, search engine written in C
- Fluid Dynamics Search Engine (FDSE), an open source search engine written in Perl; freeware and shareware versions are available
- GalaTex XQuery Full-Text Search (XML query text search)
- ht://dig Open source web crawling software
- Glimpse and Webglimpse [5], advanced site search software
- iHOP Information retrieval system for the biomedical domain
- EBIMed Information retrieval (and extraction) system over Medline
- Information Storage and Retrieval Using Mumps (Online GPL Text)
- Lemur Language Modelling IR Toolkit
- Lucene [6] Apache Jakarta project
- mnoGoSearch, an SQL-based search engine
- MG full-text retrieval system, now maintained by the Greenstone Digital Library Software Project
- SMART Early IR engine from Cornell University
- Sphinx Free open-source SQL full-text search engine
- Terrier Information Retrieval Platform
- Tiny Search Engine written in C++
- Wumpus multi-user information retrieval system
- Xapian Open source IR platform based on Muscat
- Zebra GPL structured text/XML/MARC boolean search IR engine supporting Z39.50 and Web Services
- Zettair, compact and fast search engine written in C, able to handle large amounts of text
Major information retrieval research groups
- Center for Intelligent Information Retrieval at UMASS
- Information Retrieval at the Language Technologies Institute, Carnegie Mellon University
- Information Retrieval at Microsoft Research Cambridge
- Glasgow Information Retrieval Group
- CIR Centre for Information Retrieval
- Centre for Interactive Systems Research at City University, London
- IIT Information Retrieval Lab
- Information Retrieval Group at Université de Neuchâtel
- PSU Intelligent Systems Research Laboratory
Major figures in information retrieval
- Calvin Mooers
- Eugene Garfield
- Gerard Salton
- Hans Peter Luhn
- W. Bruce Croft
- Karen Spärck Jones
- C. J. van Rijsbergen
- Stephen E. Robertson
- Abraham Bookstein
- Vannevar Bush
- Don Swanson
- Stephen P Harter
- David Blair
- Paul deMaine
Awards in this field include the Tony Kent Strix award.
ACM SIGIR Gerard Salton Award
- 1983 - Gerard Salton, Cornell University
- "About the future of automatic information retrieval"
- 1988 - Karen Sparck Jones, University of Cambridge
- "A look back and a look forward"
- 1991 - Cyril Cleverdon, Cranfield Institute of Technology
- "The significance of the Cranfield tests on index languages"
- 1994 - William S. Cooper, University of California, Berkeley
- "The formalism of probability theory in IR: a foundation or an encumbrance?"
- 1997 - Tefko Saracevic, Rutgers University
- "Users lost: reflections on the past, future, and limits of information science"
- 2000 - Stephen E. Robertson, City University, London
- "On theoretical argument in information retrieval"
- 2003 - W. Bruce Croft, University of Massachusetts, Amherst
- "Information retrieval and computer science: an evolving relationship"
- 2006 - C. J. van Rijsbergen, University of Glasgow, UK
- "Quantum haystacks"
See also
- Controlled vocabulary
- Cross Language Evaluation Forum
- Cross-language information retrieval
- Digital libraries
- Document classification
- Educational psychology
- Free text search
- Geographic information system
- Information extraction
- Information science
- Knowledge visualization
- Search engines
- Search index
- Spoken document retrieval
- tf-idf
- SP theory
External links
- ACM SIGIR: Information Retrieval Special Interest Group
- BCS IRSG: British Computer Society - Information Retrieval Specialist Group
- The Anatomy of a Large-Scale Hypertextual Web Search Engine
- Text Retrieval Conference (TREC)
- Chinese Web Information Retrieval Forum (CWIRF)
- Information Retrieval (online book) by C. J. van Rijsbergen
- International Conference on Image and Video retrieval, July 21-23, 2004
- Glasgow Information Retrieval Group Wiki
- An introduction to IR
- Innovations in Search Conference, September 27-29, 2005
- Measuring Search Effectiveness
- Resources for Text, Speech and Language Processing
- Stanford CS276 course - Information Retrieval and Web Mining
- Usability and Accessibility in the Information Retrieval Process (in Spanish)
- Standards and documents for information retrieval (In Spanish)
- List of Open source Search engines