
Enhancing access to the Bibliome: the TREC 2004 Genomics Track

Abstract

Background

The goal of the TREC Genomics Track is to improve information retrieval in the area of genomics by creating test collections that will allow researchers to improve and better understand failures of their systems. The 2004 track included an ad hoc retrieval task, simulating use of a search engine to obtain documents about biomedical topics. This paper describes the Genomics Track of the Text Retrieval Conference (TREC) 2004, a forum for evaluation of IR research systems, where retrieval in the genomics domain has recently begun to be assessed.

Results

A total of 27 research groups submitted 47 different runs. The most effective runs, as measured by the primary evaluation measure of mean average precision (MAP), used a combination of domain-specific and general techniques. The best MAP obtained by any run was 0.4075. Techniques that expanded queries with gene name lists as well as words from related articles had the best efficacy. However, many runs performed more poorly than a simple baseline run, indicating that careful selection of system features is essential.

Conclusion

The various approaches to ad hoc retrieval achieved a wide range of efficacy. The TREC Genomics Track and its test collection resources provide tools that allow improvement of information retrieval systems.

Background

The growing amount of scientific research in genomics and related biomedical disciplines has led to a corresponding growth in the amount of on-line data and information, including scientific literature. A growing challenge for biomedical researchers is how to access and manage this ever-increasing quantity of information. A recent bioinformatics textbook notes, "Few areas of biological research call for a broader background in biology than the modern approach to genetics. This background is tested to the extreme in the selection of candidate genes for involvement with a disease process... Literature is the most powerful resource to support this process, but it is also the most complex and confounding data source to search" [1].

This situation presents opportunities and challenges for the information retrieval (IR) field. IR is the discipline concerned with the indexing and retrieval of information. While it has historically focused most of its research on textual documents, the field has expanded in recent years with the growth of new information needs (e.g., question-answering, cross-language), data types (e.g., sequence data, video) and platforms (e.g., the Web) [2]. An accompanying tutorial describes the basic terms and concepts of IR [3].

Biomedical motivations

With the advent of new technologies for sequencing the genome and proteome, along with other tools for identifying the expression of genes, structures of proteins, and so forth, the face of biological research has become increasingly data-intensive, creating great challenges for scientists who formerly dealt with relatively modest amounts of data in their research. The growth of biological data has resulted in a correspondingly large increase in scientific knowledge in what biologists sometimes call the bibliome or literature of biology. A great number of biological information resources have become available in recent years [4].

Probably the most important of these are from the National Center for Biotechnology Information (NCBI), a division of the National Library of Medicine (NLM) that maintains most of the NLM's genomics-related databases [5]. As IR has historically focused on text-based data, the NCBI resources of most interest to the IR community include MEDLINE (the bibliographic database of medical literature, accessed by PubMed and other systems) and textbooks such as Online Mendelian Inheritance in Man (OMIM). However, recognizing that literature is often a starting point for data exploration, there is also great interest in resources such as Entrez Gene [6], which serves as a switchboard to integrate gene information as well as provide annotation of its function using the widely accepted Gene Ontology (GO) [7]. PubMed also provides linkages to full-text journal articles on the Web sites of publishers. Additional genomics resources exist beyond the NCBI, such as the model organism genome databases [8]. As with the NCBI resources, these resources provide rich linkage and annotation.

Because of the growing size and complexity of the biomedical literature, there is increasing effort devoted to structuring knowledge in databases. The use of these databases has become pervasive due to the growth of the Internet and Web as well as a commitment of the research community to put as much data as possible into the public domain. Figure 1 depicts the overall process of "funneling" the literature towards structured knowledge, showing the information system tasks used at different levels along the way. This figure shows our view of the optimal uses for IR and the related areas of information extraction and text mining.

Figure 1

Steps in deriving knowledge from the biomedical literature and related application areas. This figure depicts the "funnel" of literature that occurs when a user seeks information and knowledge. The related information applications are shown to the right. The step of going from the entire literature to possibly relevant references is usually performed by an information retrieval system, whereas the step of identifying definitely relevant references and knowledge within them is the task of an information extraction system (or a person, since the state of information extraction is less developed than information retrieval).

Both the IR and bioinformatics communities have long histories of forums for evaluation of methods. The latter has the well-known Critical Assessment of Methods of Protein Structure Prediction (CASP) initiative for protein structure prediction [9, 10]. More recently, challenge evaluations have been initiated for researchers interested in information extraction (IE) [11], including the Knowledge Discovery from Databases (KDD) Cup [12] and the BioCreAtIvE initiative [13].

Text retrieval conference

The IR community has had an evaluation forum in the Text Retrieval Conference (TREC, trec.nist.gov) since 1992. TREC is an annual activity of the IR research community sponsored by the National Institute of Standards and Technology (NIST) that aims to provide a forum for evaluation of IR systems and users [14]. A key feature of TREC is that research groups work on a common source of data and a common set of queries or tasks. The goal is to allow comparisons across systems and approaches in a research-oriented, collegial manner. TREC activity is organized into "tracks" of common interest, such as question-answering, multi-lingual IR, Web searching, and interactive retrieval. TREC generally works on an annual cycle, with data distributed in the spring, experiments run in the summer, and the results presented at the annual conference that usually takes place in November.

Evaluation in TREC is based on the "Cranfield paradigm," which measures system success by the quantity of relevant documents retrieved, in particular with the metrics of recall and precision [2]. Operationally, recall and precision are calculated using a test collection of known documents, topics, and judgments of relevance between them. In most TREC tracks, the two are combined into a single measure of performance, mean average precision (MAP). The first step in determining MAP is to calculate the average precision for each topic, which is the average of the precision values measured at the rank of each relevant document retrieved (relevant documents that are never retrieved contribute a precision of zero). The mean of these average precision values over all topics is the MAP.
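To make this calculation concrete, the following minimal sketch computes average precision for one topic and MAP over a set of topics, assuming binary relevance judgments; the function names and data structures are ours for illustration and this is not the trec_eval implementation.

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for one topic: the mean of the precision values
    measured at the rank of each relevant document retrieved.  Relevant
    documents that are never retrieved contribute a precision of zero."""
    relevant = set(relevant_ids)
    hits = 0
    precision_sum = 0.0
    for rank, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / len(relevant) if relevant else 0.0


def mean_average_precision(run_by_topic, relevant_by_topic):
    """MAP: the mean of the per-topic average precision values."""
    aps = [average_precision(run_by_topic.get(topic, []), rel)
           for topic, rel in relevant_by_topic.items()]
    return sum(aps) / len(aps)
```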

TREC Genomics Track

The goal of the TREC Genomics Track is to create test collections for evaluation of information retrieval (IR) and related tasks in the genomics domain. The Genomics Track differs from all other TREC tracks in that it is focused on retrieval in a specific domain as opposed to general retrieval tasks, such as Web searching or question answering. The 2004 track was the second year of the TREC Genomics Track. This year was different from the first year, as we had resources available to us from a National Science Foundation (NSF) Information Technology Research (ITR) grant that allowed for programming support and relevance judgments. In contrast, for the 2003 track we had to rely on proxies for relevance judgments and other gold standard data [15]. The Genomics Track is overseen by a steering committee of individuals with a background in IR and/or genomics.

The TREC 2004 Genomics Track consisted of two tasks. The first task was a standard ad hoc retrieval task using topics obtained from real biomedical research scientists and documents from a large subset of the MEDLINE bibliographic database. The second task focused on categorization of full-text documents, simulating the task of curators of the Mouse Genome Informatics (MGI) system and consisting of three subtasks. The second task is described in a companion paper [16]. A total of 33 groups participated in the 2004 Genomics Track, making it the track with the most participants in all of TREC 2004. The remainder of this paper describes the methods and results from the ad hoc retrieval task, expanding upon the original report from the conference proceedings [17].

Methods

The goal of the ad hoc task was to mimic conventional searching. The use case was a scientist with a specific information need, searching the MEDLINE bibliographic database to find relevant articles to retrieve.

Documents

The document collection for the ad hoc retrieval task was a 10-year subset of MEDLINE. We contemplated the use of full-text documents in this task but were unable to procure an adequate amount to represent real-world searching. Therefore, we chose to use MEDLINE. As noted above, however, despite the widespread availability of on-line full-text scientific journals, at present most searchers of the biomedical literature still use MEDLINE as an entry point. Consequently, there is great value in being able to search MEDLINE effectively.

The subset of MEDLINE used for the track consisted of 10 years of completed citations, from 1994 to 2003 inclusive. Records were extracted using the Date Completed (DCOM) field for all references in the range 19940101 – 20031231, providing a total of 4,591,008 records. Because we used the DCOM field rather than the Date Published (DP) field, some records in the collection were published before 1994 even though they were completed within the range; specifically, the collection had:

• 2,814 (0.06%) DPs prior to 1980

• 8,388 (0.18%) DPs prior to 1990

• 138,384 (3.01%) DPs prior to 1994

The remaining 4,452,624 records (96.99%) had DPs in the period 1994–2004.

The data was made available in two formats:

• MEDLINE – the standard NLM format in ASCII text with fields indicated and delimited by 2–4 character abbreviations (uncompressed – 9,587,370,116 bytes, gzipped – 2,797,589,659 bytes)

• XML – the newer NLM XML format (uncompressed – 20,567,278,551 bytes, gzipped – 3,030,576,659 bytes)
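As a rough sketch of how such a subset can be extracted, the code below scans records in the MEDLINE ASCII format (each field line beginning with its 2–4 character tag, records separated by blank lines) and keeps those whose DCOM value falls in the stated range. The field handling is simplified and the file name in the usage comment is hypothetical.

```python
def _dcom(record):
    """Extract the DCOM value from a list of MEDLINE field lines, if present."""
    for line in record:
        if line.startswith("DCOM"):
            return line.split("-", 1)[1].strip()
    return None


def filter_by_dcom(lines, start="19940101", end="20031231"):
    """Yield MEDLINE ASCII records whose Date Completed lies in [start, end].
    Records are assumed to be separated by blank lines; each field line begins
    with its tag, e.g. 'PMID- ...' or 'DCOM- 20031231'."""
    record = []
    for line in lines:
        if line.strip():
            record.append(line)
        elif record:
            dcom = _dcom(record)
            if dcom and start <= dcom <= end:
                yield "".join(record)
            record = []
    if record:  # final record if the file does not end with a blank line
        dcom = _dcom(record)
        if dcom and start <= dcom <= end:
            yield "".join(record)


# Hypothetical usage:
# with open("medline.txt") as fh:
#     kept = sum(1 for _ in filter_by_dcom(fh))
```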

Topics

The topics for the ad hoc retrieval task were developed from the information needs of real biologists and modified as little as possible to create needs statements with a reasonable estimated number of relevant articles (i.e., more than zero but fewer than one thousand). The information needs were captured through interviews conducted by 12 volunteers who sought out biologists in their local environments. A total of 43 interviews yielded 74 information needs. Some of these volunteers, as well as an additional four individuals, created topics in the proposed format from the original interview data.

We aimed to have each information need reviewed more than once but were only able to do this for some, ending up with a total of 91 draft topics. The same individuals were then assigned different draft topics to search on PubMed so the topics could be modified to generate final topics with a reasonable number of relevant articles. The track chair made one last pass to make the formatting consistent and selected the 50 that seemed most suitable as topics for the track.

The topics were formatted in XML and had the following fields:

• ID – 1 to 50

• Title – abbreviated statement of information need

• Information need – full statement of the information need

• Context – background information to place information need in context

We created an additional five "sample" topics, e.g., topic 51:

<TOPIC>

<ID>51</ID>

<TITLE>pBR322 used as a gene vector</TITLE>

<NEED>Find information about base sequences and restriction maps in plasmids that are used as gene vectors.</NEED>

<CONTEXT>The researcher would like to manipulate the plasmid by removing a particular gene and needs the original base sequence or restriction map information of the plasmid.</CONTEXT>

</TOPIC>
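A minimal sketch for reading a topic in this format follows, using the sample above; it assumes each topic is parsed as a single well-formed TOPIC element, and the function name is ours.

```python
import xml.etree.ElementTree as ET

topic_xml = """<TOPIC>
<ID>51</ID>
<TITLE>pBR322 used as a gene vector</TITLE>
<NEED>Find information about base sequences and restriction maps in plasmids
that are used as gene vectors.</NEED>
<CONTEXT>The researcher would like to manipulate the plasmid by removing a
particular gene and needs the original base sequence or restriction map
information of the plasmid.</CONTEXT>
</TOPIC>"""


def parse_topic(xml_text):
    """Return the four topic fields as a dictionary keyed by lowercase name."""
    elem = ET.fromstring(xml_text)
    return {field.lower(): (elem.findtext(field) or "").strip()
            for field in ("ID", "TITLE", "NEED", "CONTEXT")}


topic = parse_topic(topic_xml)
print(topic["id"], "-", topic["title"])
```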

Relevance judgments

Relevance judgments were done using the conventional "pooling method" whereby a fixed number of top-ranking documents from each official run were pooled and provided to an individual (blinded to the number of groups who retrieved the document and what their search statements were). The relevance assessor then judged each document for the specific topic query as definitely relevant (DR), possibly relevant (PR), or not relevant (NR). For the official results, which required binary relevance judgments, documents that were rated DR or PR were considered relevant.

The pools were built as follows. Each of the 27 groups designated a top-precedence run that would be used for relevance judgments, typically what they thought would be their best-performing run. We took, on average, the top 75 documents for each topic from these 27 runs and eliminated the duplicates to create a single pool for each topic. The average pool size (average number of documents judged per topic) was 976, with a range of 476–1450.
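The pooling step can be pictured with the short sketch below: for each topic, the top-ranked documents from each group's precedence run are merged and duplicates removed. The depth of 75 reflects the average described above, and the data structures are illustrative.

```python
def build_pool(precedence_runs, depth=75):
    """Build judgment pools from ranked runs.

    precedence_runs: iterable of dicts, one per group, each mapping a topic id
        to its list of document ids ordered by rank.
    Returns a dict mapping topic id -> set of unique document ids to judge."""
    pools = {}
    for run in precedence_runs:
        for topic_id, ranked_docs in run.items():
            pools.setdefault(topic_id, set()).update(ranked_docs[:depth])
    return pools
```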

The relevance judgments were done by two individuals with backgrounds in biology. One was a PhD biologist and the other an undergraduate biology student. Each topic was judged fully by one of the judges. In addition, to assess interjudge agreement, we selected every tenth article in the pool from six topics for duplicate judgment, allowing calculation of the kappa statistic for chance-corrected agreement [18].
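For reference, the kappa statistic corrects raw agreement between two judges for the agreement expected by chance from their marginal label frequencies. Below is a minimal sketch for binary relevance labels; it is Cohen's kappa in its simplest form and not necessarily the exact formulation of [18].

```python
from collections import Counter


def cohens_kappa(judge_a, judge_b):
    """Chance-corrected agreement between two equal-length lists of labels."""
    assert len(judge_a) == len(judge_b)
    n = len(judge_a)
    # Observed agreement: fraction of items with identical labels.
    observed = sum(a == b for a, b in zip(judge_a, judge_b)) / n
    # Expected agreement from each judge's marginal label frequencies.
    freq_a, freq_b = Counter(judge_a), Counter(judge_b)
    expected = sum((freq_a[label] / n) * (freq_b[label] / n)
                   for label in set(judge_a) | set(judge_b))
    return (observed - expected) / (1 - expected)


# e.g. cohens_kappa(["R", "R", "N", "N"], ["R", "N", "N", "N"]) -> 0.5
```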

Evaluation measures

The primary evaluation measure for the task was mean average precision (MAP). Results were calculated using the trec_eval program, a standard scoring system for TREC. A statistical analysis was performed using a repeated measures analysis of variance, with post hoc Tukey tests for pairwise comparisons. In addition to analyzing MAP, we also assessed precision at 10 and at 100 documents retrieved.
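Precision at a fixed document cutoff is simpler than MAP; a small sketch of the two cutoffs used here (the function name is ours):

```python
def precision_at_k(ranked_ids, relevant_ids, k):
    """Fraction of the top k positions occupied by relevant documents;
    positions beyond the end of the ranking count as non-relevant."""
    relevant = set(relevant_ids)
    return sum(doc_id in relevant for doc_id in ranked_ids[:k]) / k


# Precision at 10 and at 100 documents for one topic:
# p10 = precision_at_k(run, judged_relevant, 10)
# p100 = precision_at_k(run, judged_relevant, 100)
```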

Results

A total of 27 research groups submitted 47 different runs. Table 1 shows the pool size, number of relevant documents, mean average precision (MAP), average precision at 10 documents, and average precision at 100 documents for each topic. (Precision at 100 documents is potentially compromised for topics having many fewer than 100 relevant documents, since such topics cannot score well on this measure no matter how effectively relevant documents are ranked at the top of the list; for example, a topic with only 40 relevant documents caps precision at 100 documents at 0.40 for every run. However, as noted in Table 1, the mean and median number of relevant documents across topics were over 100, and all runs were affected equally by this limitation.)

Table 1 Topics and associated data. Ad hoc retrieval topics, number of relevant documents, and average results for all runs.

The results of the duplicate judgments for the kappa statistic are shown in Table 2. The resulting kappa was 0.51, indicating only a "fair" level of agreement, though not very different from the agreement seen in similar relevance judgment activities in other domains, e.g., [19]. In general, the PhD biologist assigned more articles to the relevant category than the undergraduate did.

Table 2 Kappa results. Kappa results for inter-judge agreement in relevance judgments for the ad hoc retrieval task.

The results of all participating groups are shown in Table 3. The statistical analysis of MAP demonstrated significant differences across the runs; pairwise significance relative to the top run (pllsgen4a2) was not reached until the run RMITa, about one-quarter of the way down the results.

Table 3 Ad hoc retrieval results. All runs, sorted by mean average precision.

The best official run was achieved by Patolis Corp., with a MAP of 0.4075 [20]. This run used a combination of Okapi weighting (BM25 for term frequency but with standard inverse document frequency), Porter stemming, expansion of symbols using LocusLink and MeSH records, blind relevance feedback (also known as blind query expansion), and use of all three fields of the topic (title, need, and context). This group also reported a post-submission run that added the language modelling technique of Dirichlet-prior smoothing, achieving an even higher MAP of 0.4264. (See the accompanying paper by Zhou et al. [3] for definitions of some of these terms.)
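As background on the weighting scheme mentioned above, Okapi BM25 combines a saturating term-frequency component with an inverse document frequency weight; the sketch below shows a common textbook form with parameters k1 and b. It illustrates the general scheme only and is not the exact configuration or IDF variant used in the Patolis run.

```python
import math
from collections import Counter


def bm25_score(query_terms, doc_terms, doc_freq, num_docs, avg_doc_len,
               k1=1.2, b=0.75):
    """Okapi BM25 score of one document for a bag-of-words query.

    doc_freq: dict mapping a term to the number of documents containing it."""
    doc_len = len(doc_terms)
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        if term not in tf or term not in doc_freq:
            continue
        # Inverse document frequency component (one common smoothed form).
        idf = math.log((num_docs - doc_freq[term] + 0.5) /
                       (doc_freq[term] + 0.5) + 1)
        # Saturating term frequency with document-length normalization.
        norm_tf = (tf[term] * (k1 + 1)) / (
            tf[term] + k1 * (1 - b + b * doc_len / avg_doc_len))
        score += idf * norm_tf
    return score
```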

The next best run was achieved by the University of Waterloo [21]. This group used a variety of approaches, including Okapi weighting, blind relevance feedback, and various forms of domain-specific query expansion. Their blind relevance feedback made use of conventional whole-document feedback as well as feedback from passages. Their domain-specific query expansion included expanding lexical variants as well as acronym, gene, and protein name synonyms.
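Blind (pseudo-) relevance feedback treats the top-ranked documents from an initial retrieval as if they were relevant and expands the query with terms drawn from them. The sketch below uses simple frequency-based term selection and is only a generic illustration of the idea, not the specific passage-based method used by this group.

```python
from collections import Counter


def blind_feedback_expand(query_terms, initial_ranking, doc_tokens,
                          top_docs=10, expansion_terms=20):
    """Expand a query with frequent terms from the top of an initial ranking.

    initial_ranking: document ids from the first-pass retrieval, best first.
    doc_tokens: dict mapping a document id to its list of tokens."""
    counts = Counter()
    for doc_id in initial_ranking[:top_docs]:
        counts.update(doc_tokens[doc_id])
    original = set(query_terms)
    new_terms = [term for term, _ in counts.most_common()
                 if term not in original][:expansion_terms]
    return list(query_terms) + new_terms
```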

A number of groups used boosting of word weights in queries or documents. Tsinghua University boosted words in titles and abstracts, along with using blind query expansion [22]. Alias-i Corp. boosted query words in the title and need statements [23]. University of Tampere found value in identifying and using bi-gram phrases [24].
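Boosting can be as simple as giving terms from different topic fields different weights when the query is built. A minimal sketch follows, with illustrative weights rather than the values any group actually used.

```python
def boost_query_terms(title_terms, need_terms, context_terms,
                      weights=(3.0, 2.0, 1.0)):
    """Combine topic fields into a weighted bag of query terms.

    The field weights here are purely illustrative; each group tuned its own."""
    boosted = {}
    for terms, weight in zip((title_terms, need_terms, context_terms), weights):
        for term in terms:
            boosted[term] = boosted.get(term, 0.0) + weight
    return boosted  # term -> query weight applied at scoring time
```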

A number of groups, however, implemented techniques that were detrimental, as evidenced by the many runs that scored below the OHSU baseline runs, which simply used the Lucene system "out of the box" with its default TF*IDF weighting [25]. Approaches that attempted to map to controlled vocabulary terms did not fare as well, such as those of Indiana University [26], the University of California Berkeley [27], and the National Library of Medicine [28]. Many groups tried a variety of approaches, beneficial or otherwise, but usually without comparing against a common baseline or running exhaustive experiments, making it difficult to discern exactly which techniques provided benefit. Figure 2 shows the official results graphically, annotating the highest-ranked run that was statistically significantly different from the top run as well as the OHSU "baseline."

Figure 2

Ad hoc retrieval runs sorted by MAP. This figure shows all of the runs (x-axis) sorted by MAP (y-axis). The highest-ranked run that differed from the top run (pllsgen4a2) with statistical significance (RMITa) and the "out of the box" TF*IDF run (OHSUNeeds) are annotated. (Only every fifth run identifier is shown to make the x-axis more readable.)

As typically occurs in TREC ad hoc tasks, there was a great deal of variation across individual topics, as seen in Table 1. Figure 3 shows the average MAP across groups for each topic. Figure 4 presents the same data sorted to give a better indication of the variation across topics. There was a fairly strong relationship between the average and maximum MAP for each topic (Figure 5), while the number of relevant documents per topic was less strongly associated with MAP (Figure 6).

Figure 3

MAP by topic for the ad hoc task.

Figure 4

MAP by topic for the ad hoc task sorted by MAP.

Figure 5

The maximum MAP plotted vs. average MAP for the ad hoc retrieval task runs.

Figure 6

The number of relevant documents per topic plotted vs. MAP for the ad hoc retrieval task.

Discussion

The TREC 2004 Genomics Track was very successful, with a great deal of enthusiastic participation. In all of the tasks, a diversity of approaches was used, resulting in wide variation in the results. Discerning the relative value of these approaches is challenging, since few groups performed parameterized experiments or used common baselines.

In the ad hoc retrieval task, the best approaches employed techniques known to be effective in non-biomedical TREC IR tasks, including Okapi weighting, blind relevance feedback, and language modelling. Some domain-specific approaches also appeared to be beneficial, such as expanding queries with synonyms from widely available controlled vocabularies. There also appeared to be some benefit to boosting parts of the queries. However, it was easy for many groups to do detrimental things, as evidenced by the fact that the OHSU run of a TF*IDF system "out of the box" scored well above the median.

How well do systems in the Genomics Track, i.e., systems focused on IR in the genomics domain, perform relative to systems in other domains? This is of course a challenging question to answer, since differing results may not only be due to different systems, but also different test collections, topics, and/or relevance judgments. The most comprehensive analysis of this issue to date has come from Buckley and Voorhees, who compared various yearly tasks and best performing systems with the general TREC ad hoc task data [29]. Tasks provided with greater topic elaboration performed better (MAP around 0.35–0.40) than those with shorter topics. The Genomics Track topics could be considered comparable to these topics, with comparable results. It has been noted that TREC tracks with far larger document collections, e.g., the Terabyte Track [30] and the Web Track [31], achieve much lower best MAP scores, with none better than 0.28. Although we did not address this issue explicitly, the data obtained through the experiments of the Genomics Track should allow further investigation of attributes that make genomics IR harder or easier than other IR task domains.

This work, and IR evaluation using test collections generally, has a number of methodological limitations. Evaluation using test collections is better suited to assessing IR systems in isolation than to assessing such systems in the hands of real users. TREC does have a small history of interactive IR evaluation [32], with the results showing that successful use of a system is not necessarily associated with better recall and precision [33].

Another limitation of evaluation using test collections is inconsistency of relevance judgments. This problem is well known in the construction of test collections [19], but research has generally shown that using different judgments affects absolute but not relative performance [34]. In other words, different judgments lead to different MAP and other scores, but systems that perform well with one set of judgments tend to perform relatively well with others. Unfortunately, we did not perform enough duplicate judgments to assess the impact of different judgments in the 2004 track. We aim to perform this analysis in future offerings of the track.

Despite these limitations, the test collection and the results obtained provide substantial data for further research. A variety of additional issues can be investigated, such as attributes of documents and topics (including linguistic aspects like words and concepts present or absent) that are associated with relevance. In addition, a 2005 offering of the Genomics Track will take place, providing additional data for further research.

Conclusion

The ad hoc retrieval task of the TREC Genomics Track has developed resources that allow researchers to assess systems and algorithms for search in the genomics domain. The data for the 2004 track has been released to the general community for continued experimentation, and further annual offerings of the track will enhance these tools. The lessons learned from the 2004 track will guide both the operation and research of future offerings of the track in 2005 and beyond.

References

  1. Barnes MR, Gray IC: Bioinformatics for Geneticists. 2003, West Sussex, England, John Wiley & Sons


  2. Hersh WR: Information Retrieval: A Health and Biomedical Perspective (Second Edition). 2003, New York, Springer-Verlag


  3. Zhou W, Smalheiser NR, Yu C: From PubMed to Google: basic terms and concepts of information retrieval. Journal of Biomedical Discovery and Collaboration. 2006,


  4. Galperin MY: The Molecular Biology Database Collection: 2005 update. Nucleic Acids Research. 2005, 33: D5-D24. 10.1093/nar/gki139.


  5. Wheeler DL, Barrett T, Benson DA, Bryant SH, Canese K, Church DM, DiCuccio M, Edgar R, Federhen S, Helmberg W, Kenton DL, Khovayko O, Lipman DJ, Madden TL, Maglott DR, Ostell J, Pontius JU, Pruitt KD, Schuler GD, Schriml LM, Sequeira E, Sherry ST, Sirotkin K, Starchenko G, Suzek TO, Tatusov R, Tatusova TA, Wagner L, Yaschenko E: Database resources of the National Center for Biotechnology Information. Nucleic Acids Research. 2005, 33: 39-45. 10.1093/nar/gki062.


  6. Maglott D, Ostell J, Kim D, Pruitt KD, Tatusova T: Entrez Gene: gene-centered information at NCBI. Nucleic Acids Research. 2005, 33: D54-D58. 10.1093/nar/gki031.


  7. Anonymous: The Gene Ontology (GO) database and informatics resource. Nucleic Acids Research. 2004, 32: D258-D261. 10.1093/nar/gkh036.


  8. Bahls C, Weitzman J, Gallagher R: Biology's models. The Scientist. 2003, 17: 5-


  9. Moult J, Fidelis K, Zemla A, Hubbard T: Critical assessment of methods of protein structure prediction (CASP)-round V. Proteins. 2003, 53: 334-339. 10.1002/prot.10556.


  10. Venclovas C, Zemla A, Fidelis K, Moult J: Assessment of progress over the CASP experiments. Proteins. 2003, 53: 585-595. 10.1002/prot.10530.


  11. Hirschman L, Park JC, Tsujii J, Wong L, Wu CH: Accomplishments and challenges in literature data mining for biology. Bioinformatics. 2002, 18: 1553-1561. 10.1093/bioinformatics/18.12.1553.


  12. Yeh AS, Hirschman L, Morgan AA: Evaluation of text data mining for database curation: lessons learned from the KDD Challenge Cup. Bioinformatics. 2003, 19: I331-I339. 10.1093/bioinformatics/btg1046.


  13. Hirschman L, Yeh A, Blaschke C, Valencia A: Overview of BioCreAtIvE: critical assessment of information extraction for biology. BMC Bioinformatics. 2005, 6: S1-10.1186/1471-2105-6-S1-S1.


  14. Voorhees EM, Harman DK: TREC: Experiment and Evaluation in Information Retrieval. 2005, Cambridge, MA, MIT Press


  15. Hersh WR, Bhupatiraju RT: TREC genomics track overview: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2003, NIST, 14-23.


  16. Cohen AM, Hersh WR: The TREC 2004 Genomics Track categorization task: classifying full-text biomedical documents. Journal of Biomedical Discovery and Collaboration. 2006, submitted-


  17. Hersh W, Bhuptiraju RT, Ross L, Johnson P, Cohen AM, Kraemer DF: TREC 2004 genomics track overview: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, NIST


  18. Kramer MS, Feinstein AR: Clinical biostatistics: LIV. The biostatistics of concordance. Clinical Pharmacology and Therapeutics. 1981, 29: 111-123.


  19. Hersh WR, Buckley C, Leone TJ, Hickam DH: OHSUMED: an interactive retrieval evaluation and new large test collection for research: ; Dublin, Ireland. Edited by: Croft WB and vanRijsbergen CJ. 1994, Springer-Verlag, 192-201.


  20. Fujita S: Revisiting again document length hypotheses - TREC 2004 Genomics Track experiments at Patolis: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  21. Buttcher S, Clarke CLA, Cormack GV: Domain-specific synonym expansion and validation for biomedical information retrieval (MultiText experiments for TREC 2004): ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  22. Li J, Zhang X, Zhang M, Zhu X: THUIR at TREC 2004: Genomics Track: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  23. Carpenter B: Phrasal queries with LingPipe and Lucene: ad hoc genomics text retrieval: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  24. Pirkola A: TREC 2004 Genomics Track experiments at UTA: the effects of primary keys, bigram phrases and query expansion on retrieval performance: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  25. Cohen AM, Bhuptiraju RT, Hersh W: Feature generation, feature selection, classifiers, and conceptual drift for biomedical document triage: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  26. Seki K, Costello JC, Singan VR, Mostafa J: TREC 2004 Genomics Track experiments at IUB: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  27. Nakov PI, Schwartz AS, Stoica E, Hearst MA: BioText team experiments for the TREC 2004 Genomics Track: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  28. Aronson AR, Demmer D, Humphrey SH, Ide NC, Kim W, Loane RR, Mork JG, Smith LH, Tanabe LK, Wilbur WJ, Xie N, Demner D, Liu H: Knowledge-intensive and statistical approaches to the retrieval and annotation of genomics MEDLINE citations: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  29. Buckley C, Voorhees EM: Retrieval System Evaluation. TREC: Experiment and Evaluation in Information Retrieval. Edited by: Voorhees EM and Harman DK. 2005, Cambridge, MA, MIT Press, 53-75.


  30. Clarke C, Craswell N, Soboroff I: Overview of the TREC 2004 Terabyte Track: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards & Technology, 80-88.


  31. Craswell N, Hawking D: Overview of the TREC 2004 Web Track: ; Gaithersburg, MD. Edited by: Voorhees E and Buckland LP. 2004, National Institute of Standards & Technology, 89-97.


  32. Hersh WR: Interactivity at the Text Retrieval Conference (TREC). Information Processing and Management. 2001, 37: 365-366. 10.1016/S0306-4573(00)00052-2.


  33. Hersh W, Turpin A, Price S, Kraemer D, Olson D, Chan B, Sacherek L: Challenging conventional assumptions of automated information retrieval with real users: Boolean searching and batch retrieval evaluations. Information Processing and Management. 2001, 37: 383-402. 10.1016/S0306-4573(00)00054-6.


  34. Voorhees EM: Variations in relevance judgments and the measurement of retrieval effectiveness: ; Melbourne, Australia. Edited by: Croft WB, Moffat A, vanRijsbergen C, Wilkinson R and Zobel J. 1998, ACM Press, 315-323.


  35. Darwish K, Madkour A: The GUC goes to TREC 2004: using whole or partial documents for retrieval and classification in the Genomics Track: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  36. Kraaij W, Raaijmakers S, Weeber M, Jelier R: MeSH based feedback, concept recognition and stacked classification for curation tasks: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  37. Crangle C, Zbyslaw A, Cherry M, Hong E: Concept extraction and synonymy management for biomedical information retrieval: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  38. Billerbeck B, Cannane A, Chattaraj A, Lester N, Webber W, Williams HE, Yiannis J, Zobel J: RMIT University at TREC 2004: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  39. Tong RM: Information needs and automatic queries: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  40. Eichmann D, Zhang Y, Bradshaw S, Qiu XY, Zhou L, Srinivasan P, Sehgal AK, Wong H: Novelty, question answering and genomics: the University of Iowa response: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  41. Bacchin M, Melucci M: Expanding queries using stems and symbols: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  42. Ruiz ME, Srikanth M, Srihari R: UB at TREC 13: Genomics Track: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  43. Huang X, Huang YR, Zhong M, Wen M: York University at TREC 2004: HARD and Genomics Tracks: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  44. Dayanik A, Fradkin D, Genkin A, Kantor P, Madigan D, Lewis DD, Menkov V: DIMACS at the TREC 2004 Genomics Track: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  45. Yang K, Yu N, Wead A, LaRowe G, Li YH, Friend C, Lee Y: WIDIT in TREC 2004 Genomics, Hard, Robust and Web Tracks: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  46. Blott S, Camous F, Ferguson P, Gaughan G, Gurrin C, Jones JGF, Murphy N, O'Connor N, Smeaton AF, Wilkins P, Boydell O, Smyth B: Experiments in terabyte searching, genomic retrieval and novelty detection for TREC 2004: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  47. Guo Y, Harkema H, Gaizauskas R: Sheffield University and the TREC 2004 Genomics Track: query expansion using synonymous terms: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  48. Tomiyama T, Karoji K, Kondo T, Kakuta Y, Takagi T: Meiji University Web, Novelty and Genomic Track experiments: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  49. Guillen R: Categorization of genomics text based on decision rules: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology


  50. Sinclair G, Webber B: TREC Genomics 2004: ; Gaithersburg, MD. Edited by: Voorhees EM and Buckland LP. 2004, National Institute of Standards and Technology



Acknowledgements

The TREC 2004 Genomics Track was supported by NSF Grant ITR-0325160. We gratefully acknowledge the help of Ellen Voorhees and NIST in running TREC and the following individuals who interviewed biologists to obtain topics for the ad hoc searching task: Shannon Bradshaw, Marie Brandt, Rose Campbell, Marc Colosimo, Colleen Crangle, Anne-Marie Currie, Dina Demner-Fushman, Elizabeth Horn, Rob Jelier, Phoebe Johnson, Mike Kroeger, Marc Light, Rose Oughtred, Gail Sinclair, Lynne Sopchak, and Lorrie Tanabe.

The test collection described in this paper is available on the TREC Genomics Track Web site http://ir.ohsu.edu/genomics and can be downloaded after signing the Data Usage Agreement and returning it to NIST.

Author information


Corresponding author

Correspondence to William R Hersh.

Additional information

Competing interests

The author(s) declare that they have no competing interests.

Authors' contributions

WRH conceptualized the TREC Genomics Track, directed the organization of the ad hoc retrieval task, and drafted this paper. RTB did all of the programming and collation of data. LR and PR performed the relevance judgments. AMC provided feedback and critical insights in the running of the track and the editing of the paper. DFK performed the statistical analyses.


Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Hersh, W.R., Bhupatiraju, R.T., Ross, L. et al. Enhancing access to the Bibliome: the TREC 2004 Genomics Track. J Biomed Discov Collaboration 1, 3 (2006). https://doi.org/10.1186/1747-5333-1-3


