NCBO Ontology Recommender 2.0: an enhanced approach for biomedical ontology recommendation

Background: Ontologies and controlled terminologies have become increasingly important in biomedical research. Researchers use ontologies to annotate their data with ontology terms, enabling better data integration and interoperability across disparate datasets. However, the number, variety and complexity of current biomedical ontologies make it cumbersome for researchers to determine which ones to reuse for their specific needs. To overcome this problem, in 2010 the National Center for Biomedical Ontology (NCBO) released the Ontology Recommender, a service that receives a biomedical text corpus or a list of keywords and suggests ontologies appropriate for referencing the indicated terms.

Methods: We developed a new version of the NCBO Ontology Recommender. Called Ontology Recommender 2.0, it uses a novel recommendation approach that evaluates the relevance of an ontology to biomedical text data according to four criteria: (1) the extent to which the ontology covers the input data; (2) the acceptance of the ontology in the biomedical community; (3) the level of detail of the ontology classes that cover the input data; and (4) the specialization of the ontology to the domain of the input data.

Results: Our evaluation shows that the enhanced recommender provides higher-quality suggestions than the original approach: better coverage of the input data, more detailed information about its concepts, increased specialization for the domain of the input data, and greater acceptance and use in the community. In addition, it provides users with more explanatory information, along with suggestions of not only individual ontologies but also groups of ontologies to use together. It can also be customized to fit the needs of different ontology recommendation scenarios.

Conclusions: Ontology Recommender 2.0 suggests relevant ontologies for annotating biomedical text data. It combines the strengths of its predecessor with a range of adjustments and new features that improve its reliability and usefulness. Ontology Recommender 2.0 recommends over 500 biomedical ontologies from the NCBO BioPortal platform, where it is openly available, both via the user interface at http://bioportal.bioontology.org/recommender and via a Web service API.

Electronic supplementary material: The online version of this article (doi:10.1186/s13326-017-0128-y) contains supplementary material, which is available to authorized users.


Background
During the last two decades, the biomedical community has grown progressively more interested in ontologies. Ontologies provide the common terminology necessary for biomedical researchers to describe their datasets, enabling better data integration and interoperability, and therefore facilitating translational discoveries [1,2].
BioPortal [3], developed by the National Center for Biomedical Ontology (NCBO) [4], serves as one of the primary platforms for hosting and sharing biomedical ontologies. BioPortal users can publish their ontologies as well as submit new versions. They can browse, search, review, and comment on ontologies, both interactively through a Web interface and programmatically via Web services. In 2008, BioPortal contained 72 ontologies and 300,000 ontology classes. As of 2016, the number of ontologies exceeds 500, with more than 8.1 million classes, making it an indispensable resource for biomedical researchers.
The great number, complexity, and variety of ontologies in the biomedical field present a challenge for researchers: how to identify those ontologies that are most relevant for annotating, mining, or indexing particular datasets. To address this problem, in 2010 the NCBO released the first version of its Ontology Recommender (henceforth 'Ontology Recommender 1.0' or 'original Ontology Recommender') [5], which informed the user of the most appropriate ontologies in BioPortal to annotate textual data. It was, to the best of our knowledge, the first biomedical ontology recommendation service, and it became widely known and used by the community. However, the service has some limitations, and a significant amount of work has been done in the field of ontology recommendation since its release. This motivated us to analyze its weaknesses and to design a new recommendation approach. This paper presents our new approach for biomedical ontology recommendation, which we have used to implement the NCBO Ontology Recommender 2.0 (henceforth 'Ontology Recommender 2.0' or 'new Ontology Recommender'). Given biomedical text data as input, our enhanced approach makes it possible to identify the most appropriate ontologies for annotating the input data. Our research is relevant both to researchers and to developers who need to identify ontologies that are best suited to specific datasets.

Related work
Much theoretical work has been done over the past two decades in the fields of ontology evaluation, selection, search, and recommendation. Ontology evaluation has been defined as the problem of assessing a given ontology from the point of view of a particular criterion, typically in order to determine which of several ontologies would best suit a particular purpose [6]. As a consequence, ontology recommendation is fundamentally an ontology evaluation task because it addresses the problem of evaluating and consequently selecting the most appropriate ontologies for a specific context or goal [7,8].
Early contributions in the field of ontology evaluation date back to the early 1990s and were motivated by the necessity of having evaluation strategies to guide and improve the ontology engineering process [9][10][11]. Some years later, with the birth of the Semantic Web [12], the need for reusing ontologies across the Web motivated the development of the first ontology search engines [13][14][15], which made it possible to retrieve all ontologies satisfying some basic requirements (e.g., find all ontologies that contain the class gestational diabetes).
The process of recommending ontologies involves more than traditional ontology search, however. It is a complex process that comprises not only enumerating a list of ontologies with class names matching a specific term, but also evaluating all candidate ontologies according to a variety of criteria, such as coverage, richness of the ontology structure [16][17][18], correctness, frequency of use [19], connectivity [16], formality, user ratings [20], and their suitability for the task at hand.
In biomedicine, the great number, size, and complexity of ontologies have motivated strategies to help researchers find the best ontologies to describe their datasets. Tan and Lambrix [21] proposed a theoretical framework for selecting the best ontology for a particular text-mining application and manually applied it to a gene-normalization task. Alani et al. [22] developed an ontology-search strategy that uses query-expansion techniques to find ontologies related to a particular domain (e.g., Anatomy). Maiga and Williams [23] conceived a semi-automatic tool that makes it possible to find the ontologies that best match a list of user-defined task requirements.
The most relevant alternative to the NCBO Ontology Recommender is BiOSS [19,24], which was released in 2011 by some of the authors of this paper. BiOSS evaluates each candidate ontology according to three criteria: (1) the input coverage; (2) the semantic richness of the ontology for the input; and (3) the acceptance of the ontology. However, this system has some weaknesses that make it insufficient to satisfy many ontology reuse needs in biomedicine. BiOSS' ontology repository is not updated regularly, so it does not take into account the most recent revisions to biomedical ontologies. Also, BiOSS evaluates ontology acceptance by counting the number of mentions of the ontology name in Web 2.0 resources, such as Twitter and Wikipedia. However, this method is not always appropriate, because a large number of mentions does not always correspond to a high level of acceptance by the community (e.g., an ontology may be "popular" on Twitter because of a high number of negative comments about it). Another drawback is that the input to BiOSS is limited to comma-delimited keywords; it is not possible to suggest ontologies to annotate raw text, which is a very common use case in biomedical informatics.
In this work, we have applied our previous experience in the development of the original Ontology Recommender and the BiOSS system to conceive a new approach for biomedical ontology recommendation. The new approach has been used to design and implement the Ontology Recommender 2.0. The new system combines the strengths of previous methods with a range of enhancements, including new recommendation strategies and the ability to handle new use cases.
Because it is integrated within the NCBO BioPortal, this system works with a large corpus of current biomedical ontologies and can therefore be considered the most comprehensive biomedical ontology recommendation system developed to date.
Our recommendations for the choice of appropriate ontologies center on the use of ontologies to annotate textual data. We define an annotation as a correspondence or relationship between a term and an ontology class that specifies the semantics of that term. For instance, an annotation might relate leucocyte in some text to a particular ontology class leucocyte in the Cell Ontology. The annotation process will also relate textual data such as white blood cell and lymphocyte to the class leucocyte in the Cell Ontology, via synonym and subsumption relationships, respectively.

Description of the original approach
The original NCBO Ontology Recommender supported two primary use cases: (1) corpus-based recommendation, and (2) keyword-based recommendation. In these scenarios, the system recommended appropriate ontologies from the BioPortal ontology repository to annotate a text corpus or a list of keywords, respectively.
The NCBO Ontology Recommender invoked the NCBO Annotator [25] to identify all annotations for the input data. The NCBO Annotator is a BioPortal service that annotates textual data with ontology classes. The Ontology Recommender then scored all BioPortal ontologies as a function of the number and relevance of the annotations found, and ranked the ontologies according to those scores. The first ontology in the ranking would be the most appropriate for the input data.
The score for each ontology was computed according to the following formula:

score(o, t) = ( Σ_{a ∈ annotations(o, t)} annotationScore(a) × hierarchyLevel(a) ) / log(|o|)

Here o is the ontology that is being evaluated, t is the input text, annotationScore(a) is the relevance score for the annotation a, hierarchyLevel(a) is the position of the matched class in the ontology tree, |o| is the number of classes in o, and annotations(o, t) is the list of annotations a of t with o returned by the NCBO Annotator.
The annotationScore(a) depended on whether the annotation was achieved with a class 'preferred name' or with a class synonym: it was equal to 10 if the annotation was achieved with a class preferred name, and to 8 if it was achieved with a class synonym. A preferred name is the human-readable label that the authors of the ontology suggested to be used when referring to the class (e.g., vertebral column), whereas synonyms are alternate names for the class (e.g., spinal column, backbone, spine). Each class in BioPortal has a single preferred name and may have any number of synonyms. Because synonyms can be imprecise, this approach favored matches on preferred names.
The normalization by ontology size was intended to discriminate between large ontologies that offer good coverage of the input data, and small ontologies with both correct coverage and better specialization for the input data's domain. The granularities of the matched classes (i.e., hierarchyLevel(a)) were also considered, so that annotations performed with granular classes (e.g., epithelial cell proliferation) would receive higher scores than those performed with more abstract classes (e.g., biological process).
For example, Table 1 shows the top five suggestions of the original Ontology Recommender for the text Melanoma is a malignant tumor of melanocytes which are found predominantly in skin but also in the bowel and the eye. In this example, the system considered that the best ontology for the input data is the National Cancer Institute Thesaurus (NCIT).

Note that this scoring formula is slightly different from the method presented in the paper describing the original Ontology Recommender Web service [5]. It corresponds to an upgrade of the recommendation algorithm made in December 2011, when BioPortal 3.5 was released, whose description and methodology were never published. The normalization strategy was improved by applying a logarithmic transformation to the ontology size to avoid a negative effect on very large ontologies. Mappings between ontologies, previously used to favor reference ontologies, were discarded because of the small number of manually created and curated mappings that could be used for such a purpose. The hierarchy-based semantic expansion was replaced by the position of the matched class in the ontology hierarchy.
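To make the original scoring scheme concrete, the following Python sketch reproduces it (the annotation tuples, function names, and the base-10 logarithm are illustrative assumptions; only the 10/8 annotation scores and the logarithmic normalization come from the description above):

```python
import math

def annotation_score(match_type):
    # Preferred-name matches scored 10; synonym matches scored 8.
    return 10 if match_type == "PREF" else 8

def original_score(annotations, ontology_size):
    """Original (1.0) ontology score: sum of annotation scores weighted by
    the hierarchy level of each matched class, normalized by the logarithm
    of the ontology size. `annotations` is a list of (match_type, level)."""
    total = sum(annotation_score(m) * level for m, level in annotations)
    return total / math.log10(ontology_size)
```

For instance, two matches (one on a preferred name at hierarchy level 5, one on a synonym at level 3) in a hypothetical 10,000-class ontology would score (10·5 + 8·3) / log10(10000) = 18.5.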
In the following sections, we summarize the most relevant shortcomings of the original approach, addressing input coverage, coverage of multi-word terms, input types and output information.

Input coverage
Input coverage refers to the fraction of input data that is annotated with ontology classes. Given that the goal is to find the best ontologies to annotate the user's data, high input coverage is the main requirement for ontology-recommendation systems. One of the shortcomings of the original approach is that it did not ensure that ontologies that provide high input coverage were ranked higher than ontologies with lower coverage. The approach was strongly based on the total number of annotations returned by the NCBO Annotator. However, a large number of annotations does not always imply high coverage. Ontologies with low input coverage can contain a great many classes that match only a few input terms, or match many repeated terms in a large text corpus.
In the previous example (see Table 1), the Human Developmental Anatomy ontology (EHDA) illustrates this problem: it provides a large number of annotations for the input even though it covers only part of it. Since the recommendation score computed using the original approach is directly influenced by the number of annotations, EHDA obtains a high relevance score and thus the second position in the ranking. This issue was also identified by López-García et al. in their study of the efficiency of automatic summarization techniques [26]. These authors noticed that EHDA was the most recommended ontology for a broad range of topics that the ontology actually did not cover well.

Multi-word terms
Biomedical texts frequently contain terms composed of several words, such as distinctive arrangement of microtubules or dental disclosing preparation. Annotating a multi-word phrase or multi-word keyword with an ontology class that completely represents its semantics is a much better choice than annotating each word separately. The original recommendation approach was not designed to select the longest matches, and the quality of its results suffered as a consequence.
As an example, Table 2 shows the top five ontologies suggested by the original Ontology Recommender for the phrase embryonic cardiac structure. Ideally, the first ontology in the ranking (SWEET) would contain the class embryonic cardiac structure. However, the SWEET ontology covers only the term structure. It was ranked in the first position because it contains 3 classes matching the term structure and also because it is a small ontology (4549 classes). Furthermore, SNOMEDCT, which does contain a class that provides a precise representation of the input, was ranked in the fifth position. There are 3 other ontologies in BioPortal that contain the class embryonic cardiac structure: EP, BIOMODELS and FMA. However, they were ranked 8th, 11th and 32nd, respectively. The recommendation algorithm should assign a higher score to an annotation that covers all the words in a multi-word term than to several annotations that cover those words separately.

Input types
Related work in ontology recommendation highlights the importance of addressing two different input types: text corpora and lists of keywords [27]. The original Ontology Recommender, while offering users the possibility of selecting between these two recommendation scenarios, treated the input data in the same manner in both cases. To satisfy users' expectations, the system should process these two input types differently, to better reflect the information the input encodes about multi-word term boundaries.

Output information
The output provided by the original Ontology Recommender consisted of a list of ontologies ranked by relevance score. For each ontology, the Web-based user interface displayed the number of classes matched and the size of each recommended ontology. In contrast, the Web service could additionally return the particular classes matched in each ontology. This information proved insufficient to assure users that a recommended ontology was appropriate and better than the alternatives. For example, it was not possible to know what specific input terms were covered by each class. The system should provide enough detail both to reassure users, and to give them information about alternative ontologies.
In this section we have described the fundamental limitations of the original Ontology Recommender and suggested methods for their improvement. Our goal is to design a new approach that enhances the advantages of the original one while addressing its shortcomings. The strategy for evaluating input coverage must be improved to ensure that ontologies that provide high input coverage are highly ranked. Annotations that cover all the words in multi-word keywords should be prioritized over annotations that cover each word separately. The new approach must also be able to accept both plain text and keywords as input, but it should process the two input types differently to better satisfy users' expectations. Additionally, there is a diversity of other recently proposed evaluation techniques [7,17,24] that could enhance our strategy. In particular, there are two evaluation criteria that could substantially improve the output provided by the system: (1) ontology acceptance, which represents the degree of acceptance of the ontology by the community; and (2) ontology detail, which refers to the level of detail of the classes that cover the input data.

Description of the new approach
In this section, we present our new approach to biomedical ontology recommendation. First, we describe our ontology evaluation criteria and explain how the recommendation process works. We then provide some implementation details and discuss improvements to the user interface.
The execution starts from the input data and a set of configuration settings. The NCBO Annotator [25] is then used to obtain all annotations for the input using BioPortal ontologies. Those ontologies that do not provide annotations for the input data are considered irrelevant and are ignored in further processing. The ontologies that provide annotations are evaluated one by one according to four evaluation criteria that address the following questions:

1. Coverage: To what extent does the ontology represent the input data?
2. Acceptance: How well known and trusted is the ontology by the biomedical community?
3. Detail: How rich is the ontology representation for the input data?
4. Specialization: How specialized is the ontology to the domain of the input data?
According to our analysis of related work, these are the most relevant criteria for ontology recommendation. Note that other authors have referred to the coverage criterion as term matching [5], class match measure [17] and topic coverage [27]. Acceptance is related to criteria such as popularity [19,24,27], connectivity [5] and connectedness [16]. Detail is similar to structure measure [5], semantic richness [19,24], structure [16], and granularity [23].
For each of these evaluation criteria, a score in the interval [0,1] is obtained. Then, all the scores for a given ontology are aggregated into a composite relevance score, also in the interval [0,1]. This score represents the appropriateness of that ontology to describe the input data. The individual scores are combined in accordance with the following expression:

score(o, t) = w_c × coverage(o, t) + w_a × acceptance(o) + w_d × detail(o, t) + w_s × specialization(o, t)

where o is the ontology that is being evaluated, t represents the input data, and {w_c, w_a, w_d, w_s} are a set of predefined weights that are used to give more or less importance to each evaluation criterion, such that w_c + w_a + w_d + w_s = 1. Note that acceptance is the only criterion independent of the input data. Ultimately, the system returns a list of ontologies ranked according to their relevance scores.
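The weighted aggregation of the four criterion scores can be sketched as follows (the default weight values in this sketch are illustrative assumptions, not the system's actual configuration):

```python
def relevance_score(coverage, acceptance, detail, specialization,
                    weights=(0.55, 0.15, 0.15, 0.15)):
    """Combine the four criterion scores (each in [0, 1]) into a composite
    relevance score in [0, 1]. weights = (w_c, w_a, w_d, w_s)."""
    w_c, w_a, w_d, w_s = weights
    # The weights must sum to 1 so the composite score stays in [0, 1].
    assert abs((w_c + w_a + w_d + w_s) - 1.0) < 1e-9
    return (w_c * coverage + w_a * acceptance
            + w_d * detail + w_s * specialization)
```

Because the weights sum to 1 and each criterion score lies in [0, 1], the composite score is guaranteed to stay in [0, 1] as well.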

Ontology evaluation criteria
The relevance score of each candidate ontology is calculated based on coverage, acceptance, detail, and specialization. We now describe these criteria in more detail.

Ontology coverage
It is crucial that ontology recommendation systems suggest ontologies that provide high coverage of the input data. As with the original approach, the new recommendation process is driven by the annotations provided by the NCBO Annotator, but the method used to evaluate the candidate ontologies is different. In the new algorithm, each annotation is assigned a score (computed by a function called annotationScore2, to differentiate it from the original annotationScore function) in accordance with the following expression:

annotationScore2(a) = annotationTypeScore(a) + multiWordScore(a)

In this expression, annotationTypeScore(a) is a score based on the annotation type, which can be either 'PREF', if the annotation has been performed with a class preferred name, or 'SYN', if it has been performed with a class synonym. Our method assigns higher relevance to annotations done with class preferred names than to those made with class synonyms because we have seen that many BioPortal ontologies contain synonyms that are not reliable (e.g., Other variants as a synonym of Other Variants of Basaloid Follicular Neoplasm of the Mouse Skin in the NCI Thesaurus).
The multiWordScore(a) score rewards multi-word annotations. It gives more importance to classes that annotate multi-word terms than to classes that annotate individual words separately (e.g., blood cell versus blood and cell). Such classes better reflect the input data than do classes that represent isolated words.
The annotatedWords(a) function represents the number of words matched by the annotation (e.g., 2 for the term blood cell).
Sometimes, an ontology provides overlapping annotations for the same input data. For instance, the text white blood cell may be covered by two different classes, white blood cell and blood cell. In the original approach, ontologies with low input coverage were sometimes ranked among the top positions because they had multiple classes matching a few input terms, and all those annotations contributed to the final score. Our new approach addresses this issue: if an ontology provides several annotations for the same text fragment, only the annotation with the highest score is selected to contribute to the coverage score.
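One possible way to resolve such overlaps is to keep, among any group of overlapping annotations, only the highest-scoring one. A minimal sketch (the (start, end, score) tuple representation is an assumption made for illustration):

```python
def select_annotations(annotations):
    """Discard overlapping annotations, keeping the highest-scoring
    annotation for each overlapping region. Each annotation is a
    (start, end, score) tuple with [start, end) character offsets."""
    chosen = []
    # Visit annotations from highest to lowest score; keep each one only
    # if it does not overlap an already-selected annotation.
    for start, end, score in sorted(annotations, key=lambda a: -a[2]):
        if all(end <= s or start >= e for s, e, _ in chosen):
            chosen.append((start, end, score))
    return chosen
```

For white blood cell, an annotation covering the whole phrase would suppress a lower-scoring annotation covering only blood cell, while annotations on disjoint text fragments are all retained.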
The coverage score for each ontology is computed as the sum of all the selected annotation scores, as follows:

coverageScore(o, t) = norm( Σ_{a ∈ selectedAnnotations(A)} annotationScore2(a) )

where A is the set of annotations performed with the ontology o for the input t, selectedAnnotations(A) is the set of annotations that remain after discarding overlapping annotations, and norm is a function that normalizes the coverage score to the interval [0,1].
As an example, Table 3 shows the annotations performed with SNOMEDCT for the input A thrombocyte is a kind of blood cell. In this example, the coverage score for SNOMEDCT would be calculated as 5 + 26 = 31, which would be normalized to the interval [0,1] by dividing it by the maximum coverage score. The maximum coverage score is obtained by adding the scores of all the annotations performed with all BioPortal ontologies, after discarding overlapping annotations. It is important to note that this evaluation of ontology coverage takes into account term frequency. That is, matched terms with several occurrences are considered more relevant to the input data than terms that occur less frequently. If an ontology covers a term that appears several times in the input, its corresponding annotation score will be counted for each occurrence, and the coverage score for the ontology will accordingly be higher. In addition, because we select only the matches with the highest score, the frequencies are not distorted by terms embedded in one another (e.g., white blood cell and blood cell).
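Continuing the example, the coverage computation can be sketched as follows, counting each annotation score once per occurrence of the matched term (the (score, occurrences) representation is an illustrative assumption):

```python
def coverage_score(selected_annotations, max_coverage):
    """selected_annotations: (score, occurrences) pairs for the
    non-overlapping annotations an ontology provides for the input.
    max_coverage: maximum possible coverage score, used to normalize
    the result to [0, 1]."""
    total = sum(score * occurrences
                for score, occurrences in selected_annotations)
    return total / max_coverage
```

With the SNOMEDCT annotations above (scores 5 and 26, one occurrence each) and a maximum coverage score of 31, the normalized coverage score is 1.0.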
Our approach accepts two input types: free text and comma-delimited keywords. For the keyword input type, only those annotations that cover all the words in a multi-word term are considered. Partial annotations are immediately discarded.

Ontology acceptance
In biomedicine, some ontologies have been developed and maintained by widely known institutions or research projects. The content of these ontologies is periodically curated, and they are extensively used and accepted by the community. Examples of broadly accepted ontologies are SNOMEDCT [28] and the Gene Ontology [29]. Some ontologies uploaded to BioPortal may be relatively less reliable, however. They may contain incorrect or poor-quality content, or simply be insufficiently up to date.
It is important that an ontology recommender be able to distinguish between ontologies that are accepted as trustworthy and those that are less so.
Our approach estimates the degree of acceptance of each BioPortal ontology based on two factors:

1. The number of visits (pageviews) to the ontology in BioPortal in a recent period of time (e.g., the last 6 months). Ontologies that receive more visits in BioPortal are presumed to be more relevant to the biomedical community than ontologies that receive fewer visits. This method takes into account changes in ontology popularity over time.
2. The presence of the ontology in the Unified Medical Language System (UMLS) [30].
UMLS is a collection of more than 100 biomedical ontologies and controlled terminologies, developed and maintained by the National Library of Medicine (NLM). The ontologies included in UMLS are generally widely accepted and subject to revision by the NLM. The current version of BioPortal contains 527 ontologies, 31 of which (5.9%) belong to UMLS.
The acceptance score for each ontology is calculated according to the following expression:

acceptanceScore(o) = w_pv × pageviewsScore(o) + w_umls × umlsScore(o)

where pageviewsScore(o) represents the number of visits to the ontology in BioPortal, normalized to the interval [0,1]; umlsScore(o) is 1 if the ontology is included in UMLS and 0 if it is not; and w_pv and w_umls are weights that are used to give more or less importance to each factor, with w_pv + w_umls = 1.

Figure 1 shows the top 20 accepted BioPortal ontologies according to our approach at the time of writing this paper. Estimating the acceptance of an ontology by the community is inherently subjective, but this ranking shows that our approach provides reasonable results. All ontologies in the ranking are widely known and accepted biomedical ontologies that are used in a variety of projects and applications.
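The acceptance computation can be sketched as follows (the weight values and the normalization against the most-visited ontology are illustrative assumptions):

```python
def acceptance_score(pageviews, max_pageviews, in_umls,
                     w_pv=0.5, w_umls=0.5):
    """pageviews: visits to the ontology in BioPortal in the recent period;
    max_pageviews: visits to the most-visited ontology (for normalization);
    in_umls: whether the ontology is included in UMLS."""
    pageviews_score = pageviews / max_pageviews  # normalized to [0, 1]
    umls_score = 1.0 if in_umls else 0.0
    return w_pv * pageviews_score + w_umls * umls_score
```

With equal weights, an ontology with half the pageviews of the most-visited ontology that also belongs to UMLS would score 0.5 × 0.5 + 0.5 × 1 = 0.75.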

Ontology detail
Ontologies containing a richer representation for a specific input are potentially more useful to describe the input than less detailed ontologies. As an example, the class melanoma in the Human Disease Ontology contains a definition, two synonyms, and twelve properties. However, the class melanoma from the GALEN ontology does not contain any definition, synonyms, or properties. If a user needs an ontology to represent that concept, the Human Disease Ontology would probably be more useful than the GALEN ontology because of this additional information. An ontology recommender should be able to analyze the level of detail of the classes that cover the input data and to give more or less weight to the ontology according to the degree to which its classes have been specified.
We evaluate the richness of the ontology representation for the input data based on a simplification of the "semantic richness" metric used by BiOSS [24]. For each annotation selected during the coverage evaluation step, we calculate the detail score as follows:

detailScore(a) = ( definitionsScore(a) + synonymsScore(a) + propertiesScore(a) ) / 3

where detailScore(a) is a value in the interval [0,1] that represents the level of detail provided by the annotation a. This score is based on three functions that evaluate the detail of the knowledge representation according to the number of definitions, synonyms, and other properties of the matched class:

definitionsScore(a) = min(|D| / k_d, 1)
synonymsScore(a) = min(|S| / k_s, 1)
propertiesScore(a) = min(|P| / k_p, 1)

where |D|, |S| and |P| are the number of definitions, synonyms, and other properties of the matched class, and k_d, k_s and k_p are predefined constants that represent the number of definitions, synonyms, and other properties, respectively, necessary to get the maximum detail score. For example, using k_s = 4 means that, if the class has 4 or more synonyms, it will be assigned the maximum synonyms score, which is 1. If it has fewer than 4 synonyms, for example 3, the synonyms score will be computed proportionally according to the expression above (i.e., 3/4). Finally, the detail score for the ontology is calculated as the sum of the detail scores of the annotations done with the ontology, normalized to [0,1]:

detailScore(o, t) = norm( Σ_{a ∈ selectedAnnotations(A)} detailScore(a) )

Example: Suppose that, for the input t = Penicillin is an antibiotic used to treat tonsillitis, there are two ontologies O1 and O2 with the classes shown in Table 4, and that k_d = 1, k_s = 4 and k_p = 10. Given that O1 annotates the input with two classes that provide more detailed information than the classes from O2, the detail score for O1 is higher.
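The per-class detail computation can be sketched as follows (assuming, as described above, that each component is the ratio |X|/k_x capped at 1, and that the three components are averaged so the result stays in [0, 1]):

```python
def class_detail_score(num_defs, num_syns, num_props,
                       k_d=1, k_s=4, k_p=10):
    """Level of detail of a matched class, based on its numbers of
    definitions, synonyms, and other properties. k_d, k_s and k_p are
    the counts needed to reach the maximum score for each component."""
    definitions = min(num_defs / k_d, 1.0)
    synonyms = min(num_syns / k_s, 1.0)
    properties = min(num_props / k_p, 1.0)
    return (definitions + synonyms + properties) / 3
```

For instance, a class with 1 definition, 3 synonyms, and 10 other properties would score (1 + 3/4 + 1) / 3 ≈ 0.92, while a class exceeding all three thresholds scores exactly 1.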

Ontology specialization
Some biomedical ontologies aim to represent detailed information about specific subdomains or particular tasks. Examples include the Ontology for Biomedical Investigations [31], the Human Disease Ontology [32] and the Biomedical Resource Ontology [33]. These ontologies are usually much smaller than more general ones, with only several hundred or a few thousand classes, but they provide comprehensive knowledge for their fields.
To evaluate ontology specialization, an ontology recommender needs to quantify the extent to which a candidate ontology fits the specialized nature of the input data.To do that, we reused the evaluation approach applied by the original Ontology Recommender-which was designed to identify small, specialized ontologies-and adapted it to the new annotation scoring strategy.The specialization score for each candidate ontology is calculated according to the following expression: where o is the ontology being evaluated, t is the input text, annotationScore2(a) is the function that calculates the relevance score of an annotation (see Section 2.1.1),hierarchyLevel(a) returns the level of the matched class in the ontology hierarchy, and A is the set of all the annotations done with the ontology o for the input t.Unlike the coverage and detail criteria, which consider only selectedAnnotations(A), the specialization criterion takes into account all the annotations returned by the Annotator (i.e., A).This is appropriate because an ontology that provides multiple annotations for a specific text fragment is likely to be more specialized for that text than an ontology that provides only one annotation for it.The normalization by ontology size aims to assign a higher score to smaller, more specialized ontologies.Applying a logarithmic function decreases the impact of ontologies with a very large size.Finally, the norm function normalizes the score to the interval [0,1].Using the same hypothetical ontologies, input, and annotations from the previous example, and taking into account the size and annotation details shown in Table 5, the specialization score for O1 and O2 would be calculated as follows: It is possible to see that the classes from O2 are located deeper in the hierarchy than are those from O1. 
Also, O2 is a much smaller ontology than O1. As a consequence, according to our ontology specialization method, O2 would be considered more specialized for the input than O1, and would be assigned a higher specialization score.
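As a rough illustration, the computation above can be sketched in Python (the production system is written in Ruby); the annotation scores, hierarchy levels, and ontology sizes below are hypothetical stand-ins for the values in Table 5, not the actual ones.

```python
import math

def specialization_score(annotations, ontology_size):
    # Sum each annotation's relevance score weighted by the depth of the
    # matched class, then damp the total by the logarithm of the ontology
    # size so that small, specialized ontologies score higher.
    raw = sum(score * level for score, level in annotations)
    return raw / math.log10(ontology_size)

def normalize(raw_scores):
    # norm: scale raw scores to [0, 1] relative to the best candidate.
    top = max(raw_scores.values())
    return {o: s / top for o, s in raw_scores.items()}

# Hypothetical (annotationScore2, hierarchyLevel) pairs: O1 is large with
# shallow matches; O2 is small, with deeper matches and an extra annotation.
raw = {
    "O1": specialization_score([(10, 1), (10, 2)], ontology_size=50000),
    "O2": specialization_score([(10, 4), (10, 5), (8, 6)], ontology_size=800),
}
scores = normalize(raw)
# O2's deeper classes and smaller size give it the higher specialization score.
assert scores["O2"] == 1.0 and scores["O1"] < scores["O2"]
```

Note how both effects described above show up: the hierarchy levels reward deep, specific classes, and the logarithmic size term penalizes very large ontologies.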

Summary of the approach
In this section, we presented the four evaluation criteria used in our approach to recommending ontologies: ontology coverage, ontology acceptance, ontology detail, and ontology specialization. We explained how to calculate the evaluation score corresponding to each criterion and described the method used to aggregate the resulting four evaluation scores into a composite relevancy score for each ontology. The composite score represents the appropriateness of that ontology to describe the input data and is used to generate the ranking of recommended ontologies that is returned to the user.
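The aggregation step can be sketched as a weighted linear combination. This is a minimal Python sketch; the weight values are illustrative defaults, not necessarily the system's actual configuration.

```python
def composite_score(criterion_scores, weights=None):
    # Combine the four criterion scores (each in [0, 1]) into one relevance
    # score. A weighted sum is assumed; coverage is weighted most heavily
    # here because it is the primary criterion. Weights sum to 1.0.
    weights = weights or {"coverage": 0.55, "acceptance": 0.15,
                          "detail": 0.15, "specialization": 0.15}
    return sum(weights[c] * criterion_scores[c] for c in weights)

# An ontology with strong coverage and moderate scores elsewhere:
score = composite_score({"coverage": 0.9, "acceptance": 0.6,
                         "detail": 0.5, "specialization": 0.4})
```

Because the weights are an explicit parameter, the same function also models the customization described later, where users can reweight the criteria for their own recommendation scenario.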

Evaluation of ontology sets
When annotating a biomedical text corpus or a list of biomedical keywords, it is often difficult to identify a single ontology that covers all terms. In practice, it is more likely that several ontologies will jointly cover the input [7]. Suppose that a researcher needs to find the best ontologies for a list of biomedical terms. If no single ontology provides acceptable coverage, the recommender should then evaluate different combinations of ontologies and return a ranked list of ontology sets that, together, provide higher coverage. This is a multi-criteria optimization problem in which an ontology recommender should attempt to find the minimum set of ontologies that provides the maximum coverage. For instance, in our previous example (Penicillin is an antibiotic used to treat tonsillitis), O1 covers the terms penicillin and antibiotic and O2 covers penicillin and tonsillitis. Neither of those ontologies provides full coverage of all the relevant input terms. However, by using O1 and O2 together, it is possible to cover penicillin, antibiotic, and tonsillitis.
Our method to evaluate ontology sets is based on the "ontology combinations" approach used by the BiOSS system [19]. The system generates all possible sets of 2 and 3 candidate ontologies (3 being the default maximum, though users may modify this limit according to their specific needs) and evaluates them using the criteria presented previously. To improve performance, we use some heuristic optimizations to discard certain ontology sets without performing the full evaluation process for them. For example, a set containing two ontologies that cover exactly the same terms will be immediately discarded, because that set's coverage will not be higher than that provided by each ontology individually.
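A minimal sketch of this enumeration-with-pruning step, assuming per-ontology coverage is available as sets of covered terms:

```python
from itertools import combinations

def candidate_sets(coverage_by_ontology, max_size=3):
    # Enumerate all 2..max_size ontology sets (BiOSS-style "ontology
    # combinations") and prune sets that cannot improve on a member's
    # individual coverage, e.g. members covering identical terms.
    names = sorted(coverage_by_ontology)
    for k in range(2, max_size + 1):
        for combo in combinations(names, k):
            covered = [coverage_by_ontology[o] for o in combo]
            union = set().union(*covered)
            # Heuristic: keep the set only if it covers strictly more
            # terms than its best single member.
            if len(union) > max(len(c) for c in covered):
                yield combo

# The running example: together O1 and O2 cover all three terms.
cov = {"O1": {"penicillin", "antibiotic"},
       "O2": {"penicillin", "tonsillitis"}}
assert list(candidate_sets(cov)) == [("O1", "O2")]
```

The pruning rule shown is only the example the text gives; a real implementation would likely apply additional heuristics before running the full evaluation on each surviving set.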
The relevance score for each set of ontologies is calculated using the same approach as for single ontologies, in accordance with the following expression:

finalScoreSet(t, O) = w_cov × coverageSet(t, O) + w_acc × acceptanceSet(t, O) + w_det × detailSet(t, O) + w_spe × specializationSet(t, O)

where O = {o | o is an ontology} and |O| > 1. The scores for the different evaluation criteria are calculated as follows:
• coverageSet: computed the same way as for a single ontology, but taking into account all the annotations performed with all the ontologies in the ontology set. The system selects the best annotations, and the set's input coverage is computed based on them.
• acceptanceSet, detailSet, and specializationSet: for each ontology, the system calculates its coverage contribution (as a percentage) to the set's coverage score. The recommender then uses this contribution to calculate all the other scores proportionally. With this method, the impact (in terms of acceptance, detail and specialization) of a particular ontology on the set score varies according to the coverage provided by that ontology.
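The proportional blending of the non-coverage criteria can be sketched as follows; the field names and example values are assumptions, not the system's actual data model:

```python
def set_criterion_scores(members):
    # Blend acceptance, detail and specialization across the set, weighting
    # each ontology by its share of the set's coverage: an ontology that
    # contributes more coverage has proportionally more impact.
    total = sum(m["coverage_contribution"] for m in members)
    blended = {"acceptance": 0.0, "detail": 0.0, "specialization": 0.0}
    for m in members:
        share = m["coverage_contribution"] / total
        for criterion in blended:
            blended[criterion] += share * m[criterion]
    return blended

# O1 contributes two of the three covered terms, O2 one, so O1's scores
# weigh twice as much in the blended result.
blended = set_criterion_scores([
    {"coverage_contribution": 2, "acceptance": 0.9, "detail": 0.6, "specialization": 0.3},
    {"coverage_contribution": 1, "acceptance": 0.3, "detail": 0.9, "specialization": 0.9},
])
```

In this example the blended acceptance is (2/3)·0.9 + (1/3)·0.3 = 0.7, illustrating how a low-acceptance ontology that covers little of the input cannot drag the set score down much.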

Implementation details
The backend has been developed using Ruby and interacts with the BioPortal infrastructure to obtain all the information required to provide a recommendation. This information includes all annotations for the input data (generated by the Annotator service) and details about the candidate ontologies and their classes, such as the number of classes in each ontology, the number of properties for each class, and the number of user visits to each ontology in a period of time. The system has four independent evaluation modules that assess each candidate ontology according to four criteria: coverage, acceptance, detail and specialization. Because of the system's modular design, new ontology evaluation components can be plugged in easily. Ontology Recommender 2.0 works with all the ontologies available in the BioPortal ontology repository. The source code is available on GitHub under a BSD License.
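A hypothetical sketch of what such a pluggable evaluation module could look like (the real backend is written in Ruby; this Python interface shape is an assumption made to illustrate the modular design, not the actual code):

```python
class CoverageEvaluator:
    # One of the four evaluation modules. Each module exposes a name and
    # returns a single score in [0, 1] for an ontology and its annotations.
    name = "coverage"

    def evaluate(self, ontology, annotations):
        # Placeholder logic: fraction of (at most 10) input terms annotated.
        return len(annotations) / 10.0

def evaluate_ontology(evaluators, ontology, annotations):
    # One independent score per plugged-in module; adding a new criterion
    # means registering one more evaluator, with no other changes.
    return {e.name: e.evaluate(ontology, annotations) for e in evaluators}

scores = evaluate_ontology([CoverageEvaluator()], "SNOMEDCT", ["a1", "a2"])
```

The point of the sketch is the shape: the aggregation code iterates over whatever evaluators are registered, which is what makes new criteria easy to plug in.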

User Interface
Figure 3 shows the Ontology Recommender 2.0 user interface. The system supports two input types: plain text and comma-separated keywords. It also provides two kinds of output: ranked ontologies and ranked ontology sets. As previously explained, the ranked ontology sets can be useful for researchers who seek maximum coverage of the input data, even if several ontologies have to be used. The advanced options section, which is initially hidden, allows the user to customize (1) the weights applied to the evaluation criteria, (2) the maximum number of ontologies in each set (when using the ontology sets output), and (3) the list of candidate ontologies to be evaluated. Figure 4 shows an example of the system's output when selecting "keywords" as input and "ontologies" as output. For each ontology in the output, the user interface shows its final score, the scores for the four evaluation criteria used, and the number of annotations performed with the ontology on the input. For instance, the most highly recommended ontology in Figure 4 is the Symptom Ontology (SYMP), which covers 17 of the 21 input keywords. The column "highlight annotations" allows the user to select any of the suggested ontologies and see which specific input terms are covered. Also, clicking on a particular term in the input reveals the details of the matched class in BioPortal. All scores are translated from the interval [0, 1] to [0, 100] for better readability.
Figure 5 shows the "Ontology sets" output for the same keywords displayed in Figure 4. In this case, the system looks for the minimum number of ontologies that provide the highest input coverage. The output shows that, using three ontologies (SYMP, SNOMEDCT and MEDDRA), it is possible to cover all the input keywords. Different colors for the input terms and for the recommended ontologies in Figure 5 distinguish the specific terms covered by each ontology in the selected set.

Evaluation
To evaluate our approach, we compared the performance of Ontology Recommender 2.0 to Ontology Recommender 1.0 using data from a variety of well-known public biomedical databases.
Examples of these databases are PubMed, which contains bibliographic information for the fields of biomedicine and health; the Gene Expression Omnibus (GEO), which is a repository of gene expression data; and ClinicalTrials.gov, which is a registry of clinical trials. We used the API provided by the NCBO Resource Index [34] to programmatically extract data from those databases.

Experiment 1: Input Coverage
We selected 12 widely known biomedical databases and extracted 600 biomedical texts from them, with 127 words on average, and 600 lists of biomedical keywords, with 17 keywords on average, producing a total of 1200 inputs (100 inputs per database).The databases used are listed in Table 6.
Given the importance of input coverage, we first executed both systems for all inputs and compared the coverage provided by the top-ranked ontology. We focused on the top-ranked ontology because the majority of users always select the first result obtained [35]. The strategy we used to calculate the ontology coverage differed depending on the input type:
• For texts, the coverage was computed as the percentage of input words covered by the ontology with respect to the total number of words that could be covered using all BioPortal ontologies together.
• For keywords, the coverage was computed as the percentage of input keywords covered by the ontology with respect to the total number of keywords.
Figure 6 and Figure 7 show a representation of the coverage provided by both systems for each database and input type. Table 7 and Table 8 provide a summary of the evaluation results. For some inputs, the first ontology suggested by Ontology Recommender 1.0 provides very low coverage (under 20%). This results from one of the shortcomings previously described: Ontology Recommender 1.0 occasionally assigns a high score to ontologies that provide low coverage because they contain several classes matching the input. The new recommendation approach used by Ontology Recommender 2.0 addresses this problem: virtually none of its executions provide such low coverage.
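The two coverage metrics can be sketched directly from their definitions; note the different denominators (achievable words for texts, all keywords for keyword lists):

```python
def text_coverage(covered_words, coverable_words):
    # Text inputs: covered words relative to the words that could be
    # covered using all BioPortal ontologies together (an upper bound).
    return 100.0 * len(covered_words) / len(coverable_words)

def keyword_coverage(covered_keywords, all_keywords):
    # Keyword inputs: covered keywords relative to the full input list.
    return 100.0 * len(covered_keywords) / len(all_keywords)

# The SYMP example from the interface section: 17 of 21 keywords covered.
symp = keyword_coverage(range(17), range(21))  # roughly 81%
```

The text metric is deliberately relative to what is achievable: words no BioPortal ontology can annotate (stop words, numbers, etc.) do not count against any candidate.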
For example, Table 9 shows the ontologies recommended if we input the following description of a disease, extracted from the Integrated Disease View (IDV) database: Chronic fatigue syndrome refers to severe, continued tiredness that is not relieved by rest and is not directly caused by other medical conditions. See also: Fatigue. The exact cause of chronic fatigue syndrome (CFS) is unknown. The following may also play a role in the development of CFS: CFS most commonly occurs in women ages 30 to 50.
Ontology Recommender 1.0 suggests the Bone Dysplasia Ontology (BDO), whereas Ontology Recommender 2.0 suggests the NCI Thesaurus (NCIT). Because BDO covers only 4 of the input terms, while NCIT covers 17, the recommendation provided by Ontology Recommender 2.0 is more appropriate than that of its predecessor. Ontology Recommender 2.0 also provides better mean coverage for both input types (i.e., text and keywords) across all the biomedical databases included in the evaluation. Compared to Ontology Recommender 1.0, the mean coverage reached using Ontology Recommender 2.0 was 14.9% higher for texts and 19.3% higher for keywords. That increase was even greater using the "ontology sets" output type provided by Ontology Recommender 2.0, which reached a mean coverage of 92.1% for texts (31.3% higher than the Ontology Recommender 1.0 ratings) and 89.8% for keywords (26.9% higher).

Experiment 2: Refining Recommendations
Our second experiment set out to examine whether Ontology Recommender 2.0 is effective at making meaningful recommendations when ontologies exhibit similar coverage of the input text. Specifically, we were interested in analyzing how the new version uses ontology acceptance, detail and specialization to prioritize the most appropriate ontologies.
We started with the 1200 inputs (600 texts and 600 lists of keywords) from the previous experiment, and selected those inputs for which the two versions of Ontology Recommender suggested different ontologies with similar coverage. We considered two coverage values similar if the difference between them was less than 10%. This yielded a total of 284 inputs (32 input texts and 252 lists of keywords). We executed both systems for those 284 inputs and analyzed the ontologies obtained in terms of their acceptance, detail and specialization scores.
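This selection step amounts to a simple filter over the per-input results; the record layout below is an assumption for illustration:

```python
def select_disagreements(results, threshold=10.0):
    # Keep inputs where the two recommender versions suggest DIFFERENT top
    # ontologies whose coverage values differ by less than the threshold
    # (10 percentage points, per the experiment design).
    return [r for r in results
            if r["top_v1"] != r["top_v2"]
            and abs(r["cov_v1"] - r["cov_v2"]) < threshold]

inputs = [
    {"top_v1": "BDO", "top_v2": "NCIT", "cov_v1": 62.0, "cov_v2": 68.0},   # kept
    {"top_v1": "SYMP", "top_v2": "SYMP", "cov_v1": 80.0, "cov_v2": 80.0},  # same suggestion
    {"top_v1": "BDO", "top_v2": "NCIT", "cov_v1": 20.0, "cov_v2": 75.0},   # coverage too different
]
selected = select_disagreements(inputs)
```

Restricting the comparison to near-ties in coverage is what isolates the effect of the other three criteria: any remaining ranking differences must come from acceptance, detail, or specialization.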
Figure 8 and Table 10 show the results obtained. The ontologies suggested by Ontology Recommender 2.0 have higher acceptance (87.1) and detail scores (72.1) than those suggested by Ontology Recommender 1.0. Importantly, the graphs show peaks of low acceptance (<30%) and detail (<20%) for Ontology Recommender 1.0 that are addressed by Ontology Recommender 2.0.
The ontologies suggested by Ontology Recommender 2.0 have, on average, lower specialization scores (65.1) than those suggested by Ontology Recommender 1.0 (95.1). This is an expected result, given that the recommendation approach used by Ontology Recommender 1.0 is based on the relation between the number of annotations provided by each ontology and its size, which is our measure for ontology specialization.
Ontology Recommender 1.0 is better than Ontology Recommender 2.0 at finding small ontologies that provide multiple annotations for the user's input. However, those ontologies are not necessarily the most appropriate to describe the input data. As we have seen (see Section 1.2.1), a large number of annotations does not always indicate a high input coverage. Ontology Recommender 1.0 sometimes suggests ontologies with high specialization scores but with very low input coverage, which makes the ontologies inappropriate for the user's input. The multi-criteria evaluation approach used by Ontology Recommender 2.0 has been designed to address this issue by evaluating ontology specialization in combination with other criteria, including ontology coverage.

Discussion
Recommending biomedical ontologies is a challenging task. The great number, size, and complexity of biomedical ontologies, as well as the diversity of user requirements and expectations, make it difficult to identify the most appropriate ontologies to annotate biomedical data. The analysis of the results demonstrates that ontologies suggested using our new recommendation approach are more appropriate than those recommended using the original method. Our acceptance evaluation method has proved to be successful, and it is currently used not only by the Ontology Recommender, but also by the BioPortal search engine. The classes returned when searching in BioPortal are ordered according to the general acceptance of the ontologies to which they belong.
We note that, because the system is designed in a modular way, it will be easy to add new evaluation criteria to extend its functionality. As a first priority, we intend to improve and extend the evaluation criteria currently used. In addition, we will investigate the effect of extending the Ontology Recommender to include relevant features not yet considered, such as the frequency of an ontology's updates, its levels of abstraction, formality, and granularity, and the language in which the ontology is expressed.
Indeed, using metadata information is a simple but often ignored approach to selecting ontologies. Coverage-based approaches often miss relevant results because they focus on the content of ontologies and ignore more general information about the ontology. For example, applying the new Ontology Recommender to the Wikipedia definition of anatomy will return some widely known ontologies that contain the terms anatomy, structure, organism and biology, but the Foundational Model of Anatomy (FMA), which is the reference ontology for human anatomy, will not show up in the top 25 results. To address this issue, we are currently refining, in collaboration with the AgroPortal ontology repository [36], the way BioPortal handles metadata for ontologies in order to support, in the future, even more ontology recommendation scenarios.
Our coverage evaluation approach may be further enhanced by complementing our annotation scoring method (i.e., annotationScore2) with term extraction techniques. We plan to analyze the application of a term extraction measure called C-value [37], which is specialized for multi-word term extraction and has already been applied to the results of the NCBO Annotator, leading to significant improvements [38].
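For orientation, a hedged sketch of the C-value idea: candidate terms score by their length (in words) and frequency, discounted by how often they appear nested inside longer candidate terms. This is an approximation of the measure in [37], not a faithful implementation.

```python
import math

def c_value(term_words, freq, nested_in_freqs=()):
    # term_words: length of the candidate term in words (> 1 for the
    # multi-word case this measure targets); freq: its corpus frequency;
    # nested_in_freqs: frequencies of the longer candidates containing it.
    base = math.log2(term_words)
    if not nested_in_freqs:
        return base * freq
    # Discount by the mean frequency of the containing candidates, so a
    # term that mostly occurs inside longer terms scores lower.
    return base * (freq - sum(nested_in_freqs) / len(nested_in_freqs))

# A two-word candidate seen 10 times, never nested:
assert c_value(2, 10) == 10.0
# The same candidate nested inside two longer candidates (freqs 4 and 6):
assert c_value(2, 10, (4, 6)) == 5.0
```

Scores like these could, in principle, be combined with annotationScore2 so that annotations matching strong multi-word terms weigh more in the coverage computation.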
There are some possible avenues for enhancing our assessment of ontology acceptance. These include considering the number of projects that use a specific ontology, the number of mappings created manually that point to a particular ontology, the number of user contributions (e.g., mappings, notes, comments), the metadata available per ontology, and the number, publication date and publication frequency of ontology versions. There are other indicators external to BioPortal that could be useful for performing a more comprehensive evaluation of ontology acceptance, such as the number of Google results when searching for the ontology name or the number of PubMed publications that contain the ontology name [19].
The current version of Ontology Recommender uses a set of default parameters to control how the different evaluation scores are calculated, weighted and aggregated. These parameters provide acceptable results for general ontology recommendation scenarios, but some users may need to modify the default settings to match their needs. In the future, we would like the system to use an automatic weight adjustment approach. We will investigate whether it is possible to develop methods of adjusting the weights dynamically for specific scenarios.
Ontology Recommender helps to identify all the ontologies that would be suitable for semantic annotation. However, given the number of ontologies in BioPortal, it would be difficult, computationally expensive, and often useless to annotate user inputs with all the ontologies in the repository. Ontology Recommender could function within BioPortal as a means to screen ontologies for use with the NCBO Annotator. A user might be offered the possibility to "Run the Ontology Recommender first" before actually calling the Annotator. Then only the top-ranked ontologies would be used for annotations.
A user-based evaluation would help us understand the system's utility in real-world settings. Our experience evaluating the original Ontology Recommender and BiOSS showed us that obtaining a user-based evaluation of an ontology recommender system is a challenging task. For example, the evaluators of BiOSS reported that they would need at least 50 minutes to perform a high-quality evaluation of the system for each test case. We plan to investigate whether crowdsourcing methods, as an alternative, can be useful for evaluating ontology recommendation systems from a user-centered perspective.
Our approach to ontology recommendation was designed for the biomedical field, but it can be adapted to work with ontologies from other domains, so long as they have a resource equivalent to the NCBO Annotator, an API to obtain basic information about all the candidate ontologies and their classes, and alternative resources for extracting information about the acceptance of each ontology. For example, AgroPortal [36] is an ontology repository based on NCBO BioPortal technology. AgroPortal uses Ontology Recommender 2.0 in the context of plant, agronomic and environmental sciences.

Conclusions
Biomedical ontologies are crucial for representing knowledge and annotating data. However, the large number, complexity, and variety of biomedical ontologies make it difficult for researchers to select the most appropriate ontologies for annotating their data. In this paper, we presented a novel approach for recommending biomedical ontologies. This approach has been implemented as release 2.0 of the NCBO Ontology Recommender, a system that is able to find the best ontologies for a biomedical text or set of keywords. Ontology Recommender 2.0 combines the strengths of its predecessor with a range of adjustments and new features that improve its reliability and usefulness.
Our evaluation shows that, on average, the new system is able to suggest ontologies that provide better input coverage, contain more detailed information, are more specialized, and are more widely accepted than those suggested by the original Ontology Recommender. In addition, the new version is able to evaluate not only individual ontologies, but also different ontology sets, in order to maximize input coverage. The new system can be customized to specific user needs and it provides more explanatory output information than its predecessor, helping users to understand the results returned. The new service, embedded into the NCBO BioPortal, will be a more valuable resource to the community of researchers, scientists, and developers working with ontologies.

Figure 1.
Figure 1. Top 20 BioPortal ontologies according to their acceptance scores. The x-axis shows the acceptance score in the interval [0, 100]. The y-axis shows the ontology acronyms. These acceptance scores were obtained by assigning the same weight to pageviewsScore(o) and umlsScore(o) (w_pv = 0.5, w_umls = 0.5).

Figure 2
Figure 2 shows the architecture and information flow of Ontology Recommender 2.0. Like its predecessor, it has two interfaces: a REST Web service interface, which makes it possible to invoke the recommender programmatically, and a Web user interface.

Figure 2.
Figure 2. An overview of the architecture and workflow of Ontology Recommender 2.0. (1) The input data and parameter settings are received through any of the system interfaces (i.e., Web service or Web UI) and are sent to the system's backend. (2) The evaluation process starts. The NCBO Annotator is invoked to retrieve all annotations for the input data. The system uses these annotations to evaluate BioPortal ontologies, one by one, according to four criteria: coverage, acceptance, detail and specialization. Because of the system's modular design, additional evaluation criteria can be easily added. The system uses BioPortal services to retrieve any additional information required by the evaluation process. For example, evaluation of ontology acceptance requires the number of visits to the ontology in BioPortal (pageviews) and checking whether the ontology is present in the Unified Medical Language System (UMLS). Four independent evaluation scores are returned for each ontology (one per evaluation criterion). (3) The scores obtained are combined into a relevance score for the ontology. (4) The relevance scores are used to generate a ranked list of ontologies or ontology sets, which (5) is returned via the corresponding system interface.

Figure 3.
Figure 3. Ontology Recommender 2.0 user interface. The user interface has buttons to select the input type (i.e., text or keywords) and output type (i.e., ontologies and ontology sets). A text area enables the user to enter the input data. The "Get Recommendations" button triggers the execution. The "advanced options" button shows additional settings to customize the recommendation process.

Figure 4.
Figure 4. Example of the "Ontologies" output. The user interface shows the top recommended ontologies. For each ontology, it shows the position of the ontology in the ranking, the ontology acronym, the final recommendation score, the scores for each evaluation criterion (i.e., coverage, acceptance, detail, and specialization), and the number of annotations performed with the ontology. The "highlight annotations" button highlights the input terms covered by the ontology.

Figure 5.
Figure 5. Example of the "Ontology sets" output. The user interface shows the top recommended ontology sets. For each set, it shows its position in the ranking, the acronyms of the ontologies that belong to it, the final recommendation score, the scores for each evaluation criterion (i.e., coverage, acceptance, detail, and specialization), and the number of annotations performed with all the ontologies in the ontology set. The "highlight annotations" button highlights the input terms covered by the ontology set.

Figure 6.
Figure 6. Coverage distribution for the first ontology suggested by Ontology Recommender 1.0 (dashed red line) and 2.0 (solid blue line), using the individual ontologies output, for 600 texts extracted from 6 widely known databases (100 texts each). Vertical lines represent the mean coverage provided by the first ontology returned by Ontology Recommender 1.0 (dotted red line) and 2.0 (dashed-dotted blue line). The x-axis indicates the percentage of words covered by the ontology. The y-axis displays the number of inputs for which a particular coverage percentage was obtained. AUTDB: Autism Database; GEO: Gene Expression Omnibus; GM: ARRS GoldMiner; IDV: Integrated Disease View; PM: PubMed; PMH: PubMed Health Drugs.

Figure 7.
Figure 7. Coverage distribution for the first ontology suggested by Ontology Recommender 1.0 (dashed red line) and 2.0 (solid blue line), using the individual ontologies output, for 600 lists of keywords extracted from 6 widely known databases (100 lists of keywords each). Vertical lines represent the mean coverage provided by the first ontology returned by Ontology Recommender 1.0 (dotted red line) and 2.0 (dashed-dotted blue line). The x-axis indicates the percentage of input keywords covered by the ontology. The y-axis displays the number of inputs for which a particular coverage percentage was obtained. AERS: Adverse Event Reporting System; AGDB: AgingGenesDB; CT: ClinicalTrials.gov; DBK: DrugBank; PGGE: PharmGKB-Gene; UPKB: UniProt KB.

Figure 8.
Figure 8. Acceptance, detail and specialization distribution for the first ontology suggested by Ontology Recommender 1.0 (dashed red line) and 2.0 (solid blue line), for the 284 inputs selected. Vertical lines represent the mean acceptance, detail and specialization scores provided by Ontology Recommender 1.0 (dotted red line) and 2.0 (dashed-dotted blue line). The x-axis indicates the acceptance, detail and specialization score provided by the top-ranked ontology. The y-axis displays the number of inputs for which a particular score was obtained.

Table 1. Ontologies suggested by the original Ontology Recommender for the sample input text Melanoma is a malignant tumor of melanocytes which are found predominantly in skin but also in the bowel and the eye. For each ontology, the table shows its position in the ranking, the acronym of the ontology in BioPortal, the number of annotations returned by the NCBO Annotator for the sample input, the terms annotated (or 'covered') by those annotations, and the ontology score.

Table 2. Top 5 ontologies suggested by Ontology Recommender 1.0 for the sample input text embryonic cardiac structure. For each ontology, the table shows its position in the ranking, the acronym of the ontology in BioPortal, the number of annotations returned by the NCBO Annotator for the sample input, the terms annotated (or 'covered') by those annotations, and the ontology score.

Table 3. SNOMEDCT annotations for the input A thrombocyte is a kind of blood cell. The table shows the text fragment covered by each annotation, the name and type of the matched class, the annotation score, and the annotations selected to compute the relevance score for SNOMEDCT.

Table 4. Example of ontology classes for the input Penicillin is an antibiotic used to treat tonsillitis. The table shows the ontology name, the class name, and the number of class definitions, synonyms and properties.

Table 5. Ontology size and annotation details for the ontologies in Table 4. This table shows the number of classes (size) of each ontology, the class names, the annotation types, the annotation scores, and the level of each class in the ontology hierarchy, such that '1' corresponds to the root (or top) level, '2' corresponds to the level below the root classes, '3' to the next level, and so on.

Table 6. Databases used for experiment 1. The table shows the database name, its acronym, the main topic of the database, the specific field from which the information was extracted, and the type of textual data extracted (i.e., text or keywords).

Table 7. Summary of evaluation results for text inputs. a Mean number of words for the inputs extracted from the database. b Percentage of executions where the coverage of the top recommended ontology was lower than 20%. c Mean coverage provided by the top-ranked ontology. * Ontology Recommender 1.0; ** Ontology Recommender 2.0; *** Ontology Recommender 2.0 (ontology sets output).

Table 8. Summary of evaluation results for keyword inputs. a Mean number of words for the inputs extracted from the database. b Percentage of executions where the coverage of the top recommended ontology was lower than 20%. c Mean coverage provided by the top-ranked ontology. * Ontology Recommender 1.0; ** Ontology Recommender 2.0; *** Ontology Recommender 2.0 (ontology sets output).

Table 11. Experiment 3 results. The table shows the input size (number of keywords) and domain, as well as the first ontology suggested by Ontology Recommender 1.0 and Ontology Recommender 2.0. The size of each ontology (number of classes) and the coverage provided are also shown. The best results for each input (lowest ontology size and highest coverage) are highlighted in bold.