BioHackathon series in 2011 and 2012: penetration of ontology and linked data in life science domains
- Toshiaki Katayama1,
- Mark D Wilkinson2,
- Kiyoko F Aoki-Kinoshita3,
- Shuichi Kawashima1,
- Yasunori Yamamoto1,
- Atsuko Yamaguchi1,
- Shinobu Okamoto1,
- Shin Kawano1,
- Jin-Dong Kim1,
- Yue Wang1,
- Hongyan Wu1,
- Yoshinobu Kano4,
- Hiromasa Ono1,
- Hidemasa Bono1,
- Simon Kocbek1,
- Jan Aerts5, 6,
- Yukie Akune3,
- Erick Antezana7,
- Kazuharu Arakawa8,
- Bruno Aranda9,
- Joachim Baran10,
- Jerven Bolleman11,
- Raoul JP Bonnal12,
- Pier Luigi Buttigieg13,
- Matthew P Campbell14,
- Yi-an Chen15,
- Hirokazu Chiba16,
- Peter JA Cock17,
- K Bretonnel Cohen18,
- Alexandru Constantin19,
- Geraint Duck19,
- Michel Dumontier20,
- Takatomo Fujisawa21,
- Toyofumi Fujiwara22,
- Naohisa Goto23,
- Robert Hoehndorf24,
- Yoshinobu Igarashi15,
- Hidetoshi Itaya8,
- Maori Ito15,
- Wataru Iwasaki25,
- Matúš Kalaš26,
- Takeo Katoda3,
- Taehong Kim27,
- Anna Kokubu3,
- Yusuke Komiyama28,
- Masaaki Kotera29,
- Camille Laibe30,
- Hilmar Lapp31,
- Thomas Lütteke32,
- M Scott Marshall33,
- Takaaki Mori3,
- Hiroshi Mori34,
- Mizuki Morita35,
- Katsuhiko Murakami36,
- Mitsuteru Nakao37,
- Hisashi Narimatsu38,
- Hiroyo Nishide16,
- Yosuke Nishimura29,
- Johan Nystrom-Persson15,
- Soichi Ogishima39,
- Yasunobu Okamura40,
- Shujiro Okuda41,
- Kazuki Oshita8,
- Nicki H Packer42,
- Pjotr Prins43,
- Rene Ranzinger44,
- Philippe Rocca-Serra45,
- Susanna Sansone45,
- Hiromichi Sawaki38,
- Sung-Ho Shin27,
- Andrea Splendiani46, 47,
- Francesco Strozzi48,
- Shu Tadaka40,
- Philip Toukach49,
- Ikuo Uchiyama16,
- Masahito Umezaki50,
- Rutger Vos51,
- Patricia L Whetzel52,
- Issaku Yamada53,
- Chisato Yamasaki15, 36,
- Riu Yamashita54,
- William S York44,
- Christian M Zmasek55,
- Shoko Kawamoto1 and
- Toshihisa Takagi56
© Katayama et al.; licensee BioMed Central Ltd. 2014
Received: 16 May 2013
Accepted: 26 November 2013
Published: 5 February 2014
The application of semantic technologies to the integration of biological data and the interoperability of bioinformatics analysis and visualization tools has been the common theme of a series of annual BioHackathons hosted in Japan for the past five years. Here we provide a review of the activities and outcomes from the BioHackathons held in 2011 in Kyoto and 2012 in Toyama. In order to efficiently implement semantic technologies in the life sciences, participants formed various sub-groups and worked on the following topics: Resource Description Framework (RDF) models for specific domains, text mining of the literature, ontology development, essential metadata for biological databases, platforms to enable efficient Semantic Web technology development and interoperability, and the development of applications for Semantic Web data. In this review, we briefly introduce the themes covered by these sub-groups. The observations made, conclusions drawn, and software development projects that emerged from these activities are discussed.
KeywordsBioHackathon Bioinformatics Semantic Web Web services Ontology Visualization Knowledge representation Databases Semantic interoperability Data models Data sharing Data integration
In life sciences, the Semantic Web is an enabling technology which could significantly improve the quality and effectiveness of the integration of heterogeneous biomedical resources. The first wave of life science Semantic Web publishing focused on availability - exposing data as RDF without significant consideration for the quality of the data or the adequacy or accuracy of the RDF model used. This allowed a proliferation of proof-of-concept projects that highlighted the potential of Semantic technologies. However, now that we are entering a phase of adoption of Semantic Web technologies in research, quality of data publication must become a serious consideration. This is a prerequisite for the development of translational research and for achieving ambitious goals such as personalized medicine.
While Semantic technologies, in and of themselves, do not fully solve the interoperability and integration problem, they provide a framework within which interoperability is dramatically facilitated by requiring fewer pre-coordinated agreements between participants and enabling unanticipated post hoc integration of their resources. Nevertheless, certain choices must be made, in a harmonized manner, to maximize interoperability. The yearly BioHackathon series [1–3] of events attempts to provide the environment within which these choices can be explored, evaluated, and then implemented on a collaborative and community-guided basis. These BioHackathons were hosted by the National Bioscience Database Center (NBDC)  and the Database Center for Life Science (DBCLS)  as a part of the Integrated Database Project to integrate life science databases in Japan. In order to take advantage of the latest technologies for the integration of heterogeneous life science data, researchers and developers from around the world were invited to these hackathons.
This paper contains an overview of the activities and outcomes of two highly interrelated BioHackathon events which took place in 2011  and 2012 . The themes of these two events focused on representation, publication, and exploration of bioinformatics data and tools using standards and guidelines set out by the Linked Data and Semantic Web initiatives.
Summary of investigated issues and results covered during BioHackathons 2011 and 2012
Domain specific models
Genome and proteome data
Issue: No standard RDF data model and tools existed for major genomic data
Result: Created FALDO, INSDC, GFF, GVF ontologies and developed converters
Software: Converters are now packaged in the BioInterchange tool; improved PSICQUIC service
Issue: Glycome and proteome databases are not effectively linked
Result: Developed a standard RDF representation for carbohydrate structures by BCSDB, GlycomeDB, GLYCOSCIENCES.de, JCGGDB, MonosaccharideDB, RINGS, UniCarbKB and UniProt developers
Software: RDFized data from these databases, stored them in Virtuoso and tested SPARQL queries among the different data resources
Text extraction from PDF and metadata retrieval
Issue: Text for mining is often buried in the PDF formatted literature and requires preprocessing
Result: Incorporated a tool for text extraction combined with a metadata retrieval service for DOIs or PMIDs
Software: Used PDFX for text extraction; retrieved metadata by the TogoDoc service
Named entity recognition and RDF generation
Issue: No standard existed for combining the results of various NER tools
Result: Developed a system for combining, viewing, and editing the extracted gene names to provide RDF data
Software: Extended SIO ontology for NER and newly developed the BioInterchange tool for RDF generation
Natural language query conversion to SPARQL
Issue: Automatic conversion of natural language queries to SPARQL queries is necessary to develop a human friendly interface
Result: Incorporated the SNOMED-CT dataset to answer biomedical questions and improved linguistic analysis
Software: Improved the in-house LODQA system; used ontologies from BioPortal
IRI mapping and normalization
Issue: IRIs for entities automatically generated by BioPortal do not always match with submitted RDF-based ontologies
Result: Normalized IRIs in the BioPortal SPARQL endpoint as either the provider IRI, the Identifiers.org IRI, or the Bio2RDF IRI
Software: Used services of BioPortal, the MIRIAM registry, Identifiers.org and Bio2RDF
Environmental ontologies for metagenomics
Issue: Semantically controlled description of a sample’s original environment is needed in the domain of metagenomics
Result: Developed the Metagenome Environment Ontology (MEO) for the MicrobeDB project
Software: References the Environment Ontology (EnvO) and other ontologies
Ontology for lexical resources
Issue: Standard machine-readable English-Japanese / Japanese-English dictionaries are required for multilingual utilization of RDF data
Result: Developed ontology for LSD to serialize the lexical resource in RDF and published it at a SPARQL endpoint
Software: Data provided by the Life Science Dictionary (LSD) project
Enzyme reaction equations
Issue: New ontology must be developed to represent incomplete enzyme reactions which are not supported by IUBMB
Result: Designed semantic representation of incomplete reactions with terms to describe chemical transformation patterns
Software: Obtained data from the KEGG database and the result is available at GenomeNet
Service quality indicators
Issue: Quality of the published datasets (SPARQL endpoints) is not clearly measured
Result: Measured the availability, response time, content amount and other quality metrics of SPARQL endpoints
Software: A Web site is under development to present summaries of the periodic measurements
Database content descriptors
Issue: The core attributes of biological databases should be described uniformly and semantically
Result: Developed the RDF Schema for the BioDBCore and improved the BioDBCore Web interface for submission and retrieval
Software: Evaluated identifiers for DBs in NAR, DBpedia, Identifiers.org and ORCID and vocabularies from Biositemaps, EDAM, BRO and OBI
Generic metadata for dataset description
Issue: Database catalogue metadata needs to be machine-readable for enabling automatic discovery
Result: Conventions to describe the nature and availability of datasets will be formalized as a community agreement
Software: Members from the W3C HCLS, DBCLS, MEDALS, BioDBCore, Biological Linked Open Data, Biositemaps, UniProt, Bio2RDF, Biogateway, Open PHACTS, EURECA, and Identifiers.org continue the discussion in teleconferences
RDFization tools
Issue: RDF generation tools supporting various data formats and data sources are not yet sufficient
Result: Tools to generate RDF from CSV, TSV, XML, GFF3, GVF and other formats including text mining results were developed
Software: BioInterchange can be used as a tool, Web services and libraries; bio-table is a generic tool for tabular data
Scalable triple stores
Issue: Survey is needed to test scalability of distributed/cluster-based triple stores for multi-resource integration
Result: Hadoop-based and cluster-based triple stores were still immature, and federated queries on OWLIM-SE were still inefficient
Software: HadoopRDF, SHARD and WebPIE for Hadoop-based triple stores; 4store and bigdata for cluster-based triple stores
Semantic Web exploration and visualization
Issue: Interactive exploration and visualization tools for Semantic Web resources are required to make effective queries
Result: Tools were reviewed from the viewpoints of requirements and availability, features, assistance and support, technical aspects, and specificity to life science use cases
Software: More than 30 tools currently available are reviewed and classified for benchmarking and evaluations in the future
Ontology mapping visualization
Issue: Visualization of ontology mapping is required to understand how different ontologies with related concepts are interconnected
Result: Ontology mappings of all BioPortal ontologies and a subset of BioPortal ontologies suitable for OntoFinder/Factory were visualized
Software: The applicability of Google Fusion Tables and Gephi was investigated
Identifier conversion service
Issue: Multiple synonyms for the same data inhibit cross-resource querying and data mining
Result: Developed a new service to extract cross references from UniProt and KEGG databases, eliminate redundancy and visualize the result
Software: G-Links resolves and retrieves all corresponding resource URIs
Semantic query via voice recognition
Issue: Intuitive search interface similar to “Siri for biologists” would be useful
Result: Developed a context-aware virtual research assistant Genie which recognizes spoken English and replies in a synthesized voice
Software: The G-language GAE, G-language Maps, KBWS EMBASSY and EMBOSS, and G-Links are used for Genie
RDF data were generated for genomic and glycomic databases (domain-specific models) and from the literature using text-processing technologies. We describe these two subcategories here.
Domain specific models
Genome and proteome data Due to the high-throughput generation of genomic data, it is a high priority to define RDF models for both nucleotide sequence annotations and amino acid sequence annotations. To date, nucleotide sequence annotations have been provided in a variety of formats such as the International Nucleotide Sequence Database Collaboration (INSDC), Generic Feature Format (GFF) and Genome Variation Format (GVF). By RDFizing this information, all of the annotations from various sequencing projects can be integrated in a straightforward manner. This would in turn accommodate the data integration requirements of the H-InvDB. In general, given the large variety of possible genomic annotations, it was decided that in the first iteration of a genomic RDF model, opaque Universally Unique IDentifiers (UUIDs) would be used to represent sequence features. Each UUID would then be typed with its appropriate ontology, such as the Sequence Ontology (SO), and sequence locations would be specified using the Feature Annotation Location Description Ontology (FALDO) [12, 13]. FALDO was newly developed at the BioHackathon 2012 by representatives of UniProt, DDBJ and genome scientists for the purpose of generically locating regions on biological sequences (e.g., modification sites on a protein sequence or fuzzy promoter locations on a DNA sequence). A locally defined vocabulary was used to annotate other aspects such as sequence version and synonymy. Thus, a generic system for nucleotide and amino acid sequence annotations could be proposed. Converters that output compatible RDF documents were also developed, including HMMER3, GenBank/DDBJ, GTF and GFF2OWL converters.
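As a sketch of the model described above, a sequence feature named by an opaque UUID might be typed with a SO term and located with FALDO roughly as follows. The UUID, chromosome IRI and coordinates are invented for illustration, and FALDO's stranded position types are omitted for brevity:

```turtle
@prefix faldo: <http://biohackathon.org/resource/faldo#> .
@prefix obo:   <http://purl.obolibrary.org/obo/> .

# A gene feature identified by an opaque UUID (invented here), typed with
# the Sequence Ontology and located via FALDO; the reference IRI is illustrative.
<urn:uuid:9b2f4e0c-1d3a-4e5b-8c7d-2a1b3c4d5e6f>
    a obo:SO_0000704 ;                        # SO term for "gene"
    faldo:location [
        a faldo:Region ;
        faldo:begin [ a faldo:ExactPosition ;
                      faldo:position 7565097 ;
                      faldo:reference <http://example.org/hg19/chr17> ] ;
        faldo:end   [ a faldo:ExactPosition ;
                      faldo:position 7590856 ;
                      faldo:reference <http://example.org/hg19/chr17> ]
    ] .
```

Keeping positions in dedicated resources, rather than literals, is what lets queries range over begin/end coordinates independently of the feature type.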
The RDF output for the Proteomics Standard Initiative Common QUery InterfaCe (PSICQUIC), a tool to retrieve molecular interaction data from multiple repositories with more than 150 million interactions available at the time of writing, was modified during the BioHackathon 2011 to improve the mapping of identifiers and ontologies. Identifiers.org was chosen as the provider of the new IRIs for the interacting proteins and ontology terms to allow better integration with other sources. The PSICQUIC RDF output is based on the popular BioPAX format for interactions and pathways.
Glycome data The Glycomics working group consisted of developers from the major glycomics databases including the Bacterial Carbohydrate Structure Database (BCSDB), GlycomeDB [23, 24], GLYCOSCIENCES.de, the Japan Consortium for Glycobiology and Glycotechnology Database (JCGGDB), MonosaccharideDB, the Resource for INformatics of Glycomes at Soka (RINGS), and UniCarbKB. These databases contain information about glycan structures, or complex carbohydrates, which are often covalently linked to proteins to form glycoproteins. Connections between glycomics and proteomics databases are required to accurately describe the properties and potential biological functions of glycoproteins. In order to establish such a connection, this working group cooperated with UniProt developers present at the BioHackathon to agree upon and develop a standard RDF representation for carbohydrate structures, along with the relevant biological and bibliographic annotations and experimental evidence. Data from the individual databases have been exported in the newly developed RDF format (version 0.1) and stored in a triple store, allowing for cross-database queries. Several proof-of-concept queries were tested to show that federated queries could be made across multiple databases, demonstrating the potential of this technology for glycomics research. For example, both UniProt and JCGGDB are important databases in their respective domains of protein sequences and glycomics data, and UniCarbKB is becoming an important glycomics resource as well. However, since UniCarbKB is not linked with JCGGDB, a SPARQL query was written to find the JCGGDB entries for each respective UniCarbKB entry. As described in Aoki-Kinoshita et al., 2013, this was made possible by the integration of UniCarbKB, JCGGDB and GlycomeDB data, with GlycomeDB serving as the link between the former two datasets.
This would not have been possible without agreement upon the standardization of the pertinent glycomics data in each database, discussed at BioHackathons.
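A proof-of-concept query of the kind described above might be phrased roughly as follows. The vocabulary namespace and predicate name are placeholders for illustration, not the terms agreed in the RDF representation (version 0.1):

```sparql
# Find JCGGDB entries corresponding to UniCarbKB entries via shared
# GlycomeDB cross-references (namespaces and predicate are illustrative).
PREFIX glycan: <http://example.org/glycan-vocab#>
SELECT ?unicarbkb ?jcggdb WHERE {
  ?unicarbkb glycan:glycomedb_xref ?structure .
  ?jcggdb    glycan:glycomedb_xref ?structure .
  FILTER(STRSTARTS(STR(?unicarbkb), "http://unicarbkb.org/"))
  FILTER(STRSTARTS(STR(?jcggdb),    "http://jcggdb.jp/"))
}
```

The join variable ?structure plays the role GlycomeDB played in the example: a shared structure identifier that connects two databases which have no direct cross-references.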
The Data Mining and Natural Language Processing (NLP) groups focused their efforts in two primary domains: information extraction from scientific text - particularly from PDF articles - in the form of ontology-grounded triples, and the conversion of natural language questions into triples and/or SPARQL queries. Both of these were pursued with an eye to standardization and interoperability between life science databases.
Text extraction from PDF and metadata retrieval The first step in information extraction is ensuring that accurate plain-text representations of scientific documents are available. A widely recognized “choke point” that inhibits the processing and mining of vast biomedical document stores has been the fact that the bulk of information within them is often available only as PDF-formatted documents. Access to this information is crucial for a variety of needs, including accessibility to model organism database curators and the population of RDF triple stores. In confronting this issue, the BioHackers worked on a novel software project called PDFX [31, 32], which automatically converts the PDF scientific articles to XML form. The general use case was to include PDFX as a pre-processing step within a wide variety of more involved processing pipelines, such as the additional concerns of the BioHackathon data mining and NLP groups presented next. Complementing text extraction from PDF documents, when this process is employed, it also becomes necessary to retrieve relevant metadata information. This was done using DBCLS’s TogoDoc  literature management and recommendation system, which detects the Digital Object Identifier (DOI) or PubMed identifiers of PDF submissions in order to retrieve metadata information such as MeSH terms and make recommendations to users.
Named entity recognition and RDF generation Once text is in processable form, the next phase of information extraction is entity recognition within the text. The field of gene name extraction suffers from a prevalence of diverse annotation schemata, ontologies, definitions of semantic classes, and standards regarding where the edges of gene names should be marked within a corpus (an annotated collection of topic-specific text). In 2011, the NLP/text mining group worked on an application for combining, viewing and editing the outputs of a variety of gene-mention-detection systems, with the goal of providing RDF outputs of protein/gene annotation tools such as GNAT, GeneTUKit, and BANNER. The Annotation Ontology was used to represent these metadata. However, at the 2012 event, the SIO ontology was extended to enable representation of entity-recognition outputs directly in RDF: resources were described in terms of a number of novel relation types (properties) and incorporated into an inheritance and partonomy hierarchy. Using these various components as a proof of concept, the NLP sub-group began developing a generic RDFization framework, BioInterchange, comprising three pipelined steps - data deserialization, object model generation, and RDF serialization - to enable easy data conversion into RDF with automatic ontological mappings, primarily to SIO and secondarily to other ontologies.
Natural language query conversion to SPARQL The final activity within the NLP theme was the conversion of natural language queries to SPARQL queries. SPARQL queries are a natural interface to RDF triple-store endpoints, but they remain challenging to construct, even for those with intimate knowledge of the target data schema. It would be easier, for example, to enable users to ask a question such as “What is the sequence length for human TP53?” and receive an answer from the UniProt database, based on a SPARQL query that the system constructs automatically. A pre-existing tool from the DBCLS that can accomplish natural-language-to-SPARQL conversion was targeted and customized for the SNOMED-CT dataset in BioPortal. A large set of natural language test queries was developed, and for a subset of those queries the post-conversion output was analyzed and compared to a manually created gold-standard output; subsequently, the group undertook a linguistic analysis of what conversions would have to be carried out to transform the current system output into the gold standard. These efforts included using natural language generation technology to build a Python solution that generates hundreds of morphological and syntactic variants of various natural language question types.
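For the example question above, an automatically constructed query might look roughly like the following. The property names are simplified placeholders rather than the actual UniProt core vocabulary, and the mnemonic lookup is an assumption about how "human TP53" would be resolved:

```sparql
# "What is the sequence length for human TP53?" (predicates are placeholders)
PREFIX up: <http://purl.uniprot.org/core/>
SELECT ?length WHERE {
  ?protein up:mnemonic "P53_HUMAN" .   # hypothetical resolution of "human TP53"
  ?protein up:sequence ?seq .
  ?seq     up:length   ?length .       # hypothetical property
}
```

The difficulty the group analyzed lies precisely in this gap: mapping free-text phrases ("sequence length", "human TP53") onto the schema-specific graph patterns an endpoint actually understands.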
IRI mapping and normalization
The first step in any semantic integration activity is to agree on the identifiers for various concepts. BioPortal, a central repository for biomedical ontologies, allows users to download original ontology files in a variety of formats (OWL , OBO , etc.), but also makes these ontologies available using RDF through a Web service and SPARQL endpoint . In RDF, entities (classes, relations and individuals) are identified using an Internationalized Resource Identifier (IRI); however, the identifiers that are automatically generated by BioPortal do not always match with those used in submitted RDF-based ontologies, thereby impeding integration across ontologies. Moreover, since ontologies are also used to semantically annotate biomedical data, there is a lack of semantic integration between data and ontology. BioHackathon activities included surveying, mapping, and normalizing the IRIs present in the RDF-based ontologies found in the BioPortal SPARQL endpoint to a canonical set of IRIs in a custom dataset and namespace registry, primarily used by the Bio2RDF project . This registry is being integrated with the MIRIAM Registry  which powers Identifiers.org, thereby enabling users to select either the provider IRI (if available), the Identifiers.org IRI (if available), or the Bio2RDF IRI (for all data and ontologies) .
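The normalization described above can be sketched as a simple prefix rewrite. The mapping table below is a two-entry illustration, not the actual Bio2RDF/MIRIAM registry, and the function name is invented:

```python
# Sketch of IRI normalization of the kind performed over the BioPortal
# SPARQL endpoint; the prefix table is illustrative, not the real registry.
PREFIX_MAP = {
    "http://purl.obolibrary.org/obo/GO_": "http://identifiers.org/go/GO:",
    "http://purl.obolibrary.org/obo/CHEBI_": "http://identifiers.org/chebi/CHEBI:",
}

def normalize_iri(iri):
    """Rewrite a provider IRI to its Identifiers.org form when a mapping is known."""
    for provider_prefix, idorg_prefix in PREFIX_MAP.items():
        if iri.startswith(provider_prefix):
            return idorg_prefix + iri[len(provider_prefix):]
    return iri  # no mapping known: keep the original IRI

print(normalize_iri("http://purl.obolibrary.org/obo/GO_0008150"))
```

A real registry additionally records which of the provider, Identifiers.org and Bio2RDF forms exist for each namespace, so the caller can choose the canonical form per use case.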
Environmental ontologies for metagenomics
In the domain of metagenomics, establishing a semantically controlled description of a sample’s original environment is essential for reliably archiving and retrieving relevant datasets. The BioHackathon resulted in a strategy for the re-engineering of the Metagenome Environment Ontology (MEO), closely linked to the MicrobeDB project, to serve as a community-specific portal to resources such as the Environment Ontology (EnvO). In this role, MEO will deliver curated, high-value subsets of such resources to the (meta)genomics community for use in efficient, semantically controlled annotation of sample environments. Additionally, MEO will enrich and shape the ontologies and vocabularies it references by persistently consolidating and submitting feedback from its users.
An ontology for lexical resources
The Life Science Dictionary (LSD) consists of various lexical resources including English-Japanese/Japanese-English dictionaries with >230,000 terms, a thesaurus using the MeSH vocabulary [51, 52], and co-occurrence data showing how often a pair of terms appears in a MEDLINE entry. LSD has been edited and maintained by the LSD project since 1993 and provides a search service on the Web, as well as a downloadable version. To assist with machine-readability of this important lexical resource, the group developed an ontology for this dataset, and an RDF serialization of the LSD was designed and coded at the BioHackathon. As a result, a total of 5,600,000 triples were generated and made available at the SPARQL endpoint.
An ontology for incomplete enzyme reaction equations
Incomplete enzyme reactions fall outside the scope of the International Union of Biochemistry and Molecular Biology (IUBMB; which manages EC numbers), but are common in metabolomics. Enzymes and reactions are described in the Gene Ontology (GO) and the Enzyme Mechanism Ontology (EMO), but these simply follow the IUBMB classification. It would be helpful to establish a structured representation that describes the available knowledge about a reaction of interest even when its equation is incomplete. A semantic representation of incomplete enzyme reaction equations was therefore designed based on ontological principles. About 6,800 complete reaction equations taken from the KEGG [59, 60] database were decomposed into 13,733 incomplete reactions, from which 2,748 chemical transformation patterns were obtained. These were classified into a semantic data structure consisting of about 1,100 terms (functional groups, substructures and reaction types) commonly used in organic chemistry and biochemistry. We continue to curate the ontology for incomplete enzyme reaction equations with a view to its use in metabolomics and other omics-level research (available at GenomeNet).
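The decomposition of complete equations into incomplete reactions can be illustrated with a toy sketch. The equation syntax and the drop-one-participant rule below are simplifying assumptions for illustration, not the actual KEGG-based procedure:

```python
# Toy sketch: derive "incomplete" reactions from a complete equation by
# omitting one participant at a time (a simplification of the real method).
def decompose(equation):
    """Return incomplete reactions obtained by dropping one participant."""
    left, right = [side.strip() for side in equation.split("<=>")]
    substrates = [c.strip() for c in left.split("+")]
    products = [c.strip() for c in right.split("+")]
    partial = []
    for i in range(len(substrates)):
        rest = substrates[:i] + substrates[i + 1:]
        partial.append(" + ".join(rest or ["?"]) + " <=> " + " + ".join(products))
    for i in range(len(products)):
        rest = products[:i] + products[i + 1:]
        partial.append(" + ".join(substrates) + " <=> " + " + ".join(rest or ["?"]))
    return partial

for p in decompose("ATP + Glucose <=> ADP + Glucose-6-phosphate"):
    print(p)
```

Each incomplete reaction still exposes a chemical transformation pattern (here, e.g., phosphorylation of glucose) that can be classified against the term hierarchy described above.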
Metadata activities at the BioHackathon could be grouped into three areas of focus: service quality indicators, database content descriptors, and a broader inclusive discussion of generic metadata that could be used to characterize datasets in a database catalogue for enhanced data discovery, assessment, and access (not limited to but still useful for biodatabases).
Service quality indicators
With respect to data quality, the BioHackers coined the phrase “Yummy Data” as a shorthand way of expressing not only data quality, but more importantly, the ability to explicitly determine the quality of a given dataset. While the quality of published data is an important issue, it depends as much on the underlying biological experiments as on the code that analyses them. The data quality working group at the BioHackathon therefore focused on testing the quality of the published data endpoint itself, with respect to endpoint availability and other metrics. To this end, the Yummy Data project was initiated, which periodically inspects the availability, response time, content amount and a few quality metrics for a selection of SPARQL endpoints of interest to biomedical investigators. While it neither defines nor executes an exhaustive set of useful quality measurements, it is hoped that this software may act as a starting point that encourages others to measure the “yumminess” of the data they provide, and thereby improve the quality of published semantic resources for the global community.
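A minimal sketch of the kind of periodic summary such a monitor computes, assuming each probe is recorded as an (HTTP status, response time) pair; the metric names are illustrative, not Yummy Data's actual schema:

```python
# Sketch of an endpoint quality summary in the spirit of Yummy Data;
# the probe format and metric names here are assumptions.
def summarize(probes):
    """probes: list of (http_status, response_seconds) from periodic checks."""
    ok_times = [t for status, t in probes if status == 200]
    availability = len(ok_times) / len(probes)
    mean_time = sum(ok_times) / len(ok_times) if ok_times else None
    return {"availability": availability, "mean_response_s": mean_time}

# Two successful probes and one outage:
print(summarize([(200, 0.4), (200, 0.6), (503, 12.0)]))
```

Content-amount metrics would be gathered analogously, e.g. by periodically issuing a COUNT query against the endpoint and recording the result alongside the latency.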
Database content descriptors
The BioDBCore project [63, 64] has created a community-defined, uniform, generic description of the core attributes of biological databases that allows potential users of a database to determine its suitability for the task at hand (e.g. taxonomic range, update frequency, etc.). The proposed BioDBCore descriptors are overseen by the International Society for Biocuration (ISB), in collaboration with the BioSharing initiative. One of the key activities of the BioDBCore discussion at the BioHackathon was to define the RDF Schema and the annotation vocabularies and ontologies capable of representing the nature of biological data resources. As mentioned above, RDF representations necessitate the choice of a stable URI for each resource. The persistent identifiers considered for biological databases included the NAR database collection [67, 68], DBpedia [69, 70], Identifiers.org and ORCID, while vocabularies from Biositemaps, EMBRACE Data and Methods (EDAM), the Biomedical Resource Ontology (BRO) and the Ontology for Biomedical Investigations (OBI) were evaluated for describing features such as resource and data types and area of research. The exploration involved several specific use cases, including the METI Life science integrated database portal (MEDALS) and NBDC/DBCLS. Another key activity at the hackathons focused on the BioDBCore Web interface, both for submission and retrieval. Open issues include how to specify useful interconnectivity between databases, for example when planning cross-resource queries, and how to describe the content of biological resources in a machine-readable way so that it can easily be queried with SPARQL regardless of the vocabularies a given resource uses. Currently, the group is considering the idea of using the named graph of a resource to store these kinds of metadata.
There was also inter-group discussion of how to integrate BioDBCore with other projects such as DRCAT , which defines a similar, overlapping set of biological resources and their features.
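A BioDBCore-style record serialized in RDF might look roughly like the following. The biodbcore: namespace and property names are placeholders for illustration, not the schema agreed at the event:

```turtle
# Illustrative database description; the biodbcore: vocabulary is a
# placeholder, and the subject IRI and values are invented.
@prefix dcterms:   <http://purl.org/dc/terms/> .
@prefix biodbcore: <http://example.org/biodbcore#> .

<http://example.org/db/exampledb>
    dcterms:title             "ExampleDB" ;
    biodbcore:taxonomicRange  "cellular organisms" ;
    biodbcore:updateFrequency "every 4 weeks" ;
    biodbcore:dataType        "protein sequence" .
```

Storing such a record in the database's own named graph, as the group is considering, would let a single SPARQL query retrieve both a resource's content and its self-description.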
Generic metadata for dataset description
The generic metadata discussion started by defining the problem of making database catalogue metadata machine-readable, so that a given dataset is automatically discoverable and accessible by machine agents using SPARQL. We discussed a set of conventions to describe the nature and availability of datasets on the emerging life science Semantic Web. In addition to basic descriptions, we focused our effort on elements of origin, licensing, (re-)distribution, update frequency, data formats and availability, language, vocabulary and content summaries. We expect that adherence to a small number of simple conventions will not only facilitate discovery of independently generated and published data, but also create the basis for the emergence of a data marketplace, a competitive environment to offer redundant access to ever higher quality data. These discussions have continued in teleconferences hosted by the W3C Health Care and Life Sciences Interest Group (HCLSIG) , and included at various times stakeholders such as DBCLS, MEDALS, BioDBCore, Biological Linked Open Data (BioLOD) , Biositemaps, UniProt, Bio2RDF, Biogateway , Open PHACTS , EURECA  and Identifiers.org.
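Conventions of this kind can build on the W3C VoID vocabulary for dataset description; a minimal description covering several of the elements listed above might look like the following (the subject IRI and values are illustrative):

```turtle
# A minimal VoID dataset description of the kind discussed above.
@prefix void:    <http://rdfs.org/ns/void#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

<http://example.org/dataset/mydb>
    a void:Dataset ;
    dcterms:license     <http://creativecommons.org/licenses/by/4.0/> ;
    void:sparqlEndpoint <http://example.org/sparql> ;
    void:dataDump       <http://example.org/dumps/mydb.ttl.gz> ;
    void:triples        5600000 .
```

Because such descriptions are themselves RDF, a machine agent can discover, assess and access datasets with the same SPARQL machinery it uses to query them.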
Generation of RDF data often requires iterative trials. In the early stages of prototyping RDF data, it is recommended to use OpenRefine (formerly known as Google Refine) with the RDF extension for correcting inconsistencies in the data, generating URIs from ID literals and eventually converting tabular data into RDF. To automate the procedure, various hackathon initiatives generated RDFization tools and libraries, particularly for the Bio* projects. A generic tool, bio-table, can be used for converting tabular data into RDF, using powerful filters and overrides. This command-line tool is freely available as a biogem package and was expanded during the BioHackathon to include support for named columns. Another Ruby biogem binary and library called bio-rdf utilizes bio-table and generates RDF data from the results of genomic analyses including gene enrichment, QTL and other protocols implemented in R/Bioconductor. BioInterchange was conceived and designed during BioHackathon 2012 as a tool, web services and libraries for the Ruby, Python and Java languages to create RDF triples from files in TSV, XML, GFF3, GVF and other formats, including text mining results. Users can specify external ontologies for the conversion, and the project also developed the biomedical ontologies needed for GFF3 and GVF data. ONTO-PERL, a tool to handle ontologies represented in the OBO format, was extended to allow conversion of Gene Ontology (GO) annotations to RDF (GOA2RDF). Moreover, given that most legacy data resources have a corresponding XML schema, some effort was put into exploring and coding automated Schema-to-RDF translation tools for many of the widely used bioinformatics data formats such as BioXSD.
After working with the EDAM developers at the BioHackathon to modify their URI format to fit more naturally with an RDF representation, the EDAM ontology was successfully used to annotate the relevant portions of an automated BioXSD transformation, suggesting that significantly greater interoperability between bioinformatics resources should soon be enabled.
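The core of the tabular-to-RDF conversion performed by tools like bio-table can be sketched in a few lines. The base and vocabulary namespaces below are invented for illustration, and real converters add type mapping, URI generation rules and filters:

```python
# Minimal sketch of tabular-to-RDF conversion in the spirit of bio-table /
# BioInterchange; namespaces are invented for illustration.
import csv
import io

BASE = "http://example.org/gene/"
PRED = "http://example.org/vocab/"

def table_to_ntriples(tsv_text):
    """Convert a TSV table with a header row into N-Triples, one triple per cell."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    lines = []
    for row in reader:
        subject = "<%s%s>" % (BASE, row["id"])  # "id" column names the subject
        for column, value in row.items():
            if column == "id":
                continue
            lines.append('%s <%s%s> "%s" .' % (subject, PRED, column, value))
    return "\n".join(lines)

tsv = "id\tsymbol\tchromosome\nTP53\tTP53\t17\n"
print(table_to_ntriples(tsv))
```

The hard part, which this sketch elides, is exactly what the hackathon tools address: mapping column names onto terms from shared ontologies rather than an ad hoc namespace.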
Moving from individual endpoints to multi-resource integration, the BioHackathon working group on triplestores also explored the problem of deploying, and searching over, multiple interdependent and distributed triplestores. This included the examination of cluster-based triplestores, Hadoop-based triplestores [92–94], and emergent federated search systems. The group determined that Hadoop-based stores were not yet mature enough for production use, because they handle only a limited range of data types and lack functionality such as a SPARQL endpoint and a user interface. Regarding cluster-based triplestores, the group found the installation documentation insufficient, so these could not be tested adequately. Federated search using SPARQL 1.1 could only be tested on OWLIM at the time, and queries were found not to execute efficiently across multiple endpoints. Thus, while single-source semantic publication seems to be well supported, the technologies backing distributed semantic datasets - from both the publisher's and the consumer's perspective - are lacking at this time.
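The federated search that the group tested relies on the SPARQL 1.1 SERVICE keyword, which delegates part of a query's graph pattern to a remote endpoint. A minimal sketch of the pattern follows; the remote endpoint URL and the choice of predicates are illustrative only, not real services tested at the hackathon.

```sparql
# Join triples in the local store with labels fetched from a remote endpoint.
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX up:   <http://purl.uniprot.org/core/>

SELECT ?protein ?label
WHERE {
  ?protein a up:Protein .                  # matched against the local store
  SERVICE <http://example.org/sparql> {    # delegated to the remote endpoint
    ?protein rdfs:label ?label .
  }
}
```

The performance problems observed at the hackathon stem largely from this delegation: a naive evaluation ships intermediate bindings to the remote endpoint one batch at a time, so query cost grows quickly with the size of the local result set.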
Semantic Web exploration and visualization
The Semantic Web simplifies the integration of heterogeneous information without the need for a pre-coordinated comprehensive schema. As a trade-off, querying Semantic Web resources poses particular challenges: how can researchers understand what is in a knowledge base, and how can they understand its information structure well enough to formulate effective queries? Interactive exploration and visualization tools offer intuitive approaches to information discovery and can help applied researchers make effective use of Semantic Web resources. In the previous edition of the BioHackathon, a working group focused on the development of prototypes for visualizing RDF knowledge bases. As Semantic Web and Linked Data resources become more available, in the life sciences and beyond, several new tools (interactive or not) for visualizing these kinds of resources have been proposed. At the 2011 edition of the BioHackathon we created a review of such available tools, with a view to their applicability in the biomedical domain. Through inspections and surveys we gathered basic information on more than 30 tools currently available. In particular, we gathered information on:
Requirements and availability: The operating systems supported, hardware requirements, licensing and costs. As is relevant to the applied biomedical domain, we also considered the availability of simplified installation procedures.
Features: The type of data access supported (e.g., via SPARQL endpoint or file-based), the type of query formulation supported (creation of graph patterns, text-based queries, boolean queries), and whether reasoning services are provided or exploited. Where possible, we also recorded the type of user interaction offered (e.g., browsing versus link discovery).
Assistance and support: Whenever possible, we collected information on the availability of community-based or commercial support, the availability of documentation, the frequency of software updates, and the availability of user groups and mailing lists, for which we sketched approximate activity metrics.
Technical aspects: Whether the surveyed tools can be embedded in other systems, whether they provide a plugin architecture, the language in which they are developed where relevant, and which standards they support (e.g., VoID, SPARQL 1.1).
Specificity to life sciences use cases: Finally, we tried to collect information highlighting the usability of these tools in life sciences research (e.g., bundled life sciences datasets, relevant demo cases, citations per research area).
This collection of information is useful for deciding which tools are potentially usable given technical, expertise, or reliability constraints. Following this data collection exercise, we began to devise a classification of the tools by identifying their key defining characteristics. For instance, one key characteristic of the surveyed tools is their approach to data: some focus more on instance data and tend to provide a graph-like metaphor, while others focus more on classes and relations and tend to present class-based access. Another key aspect is the degree to which visualization tools aim to support data exploration rather than explanation. Based on our classification, we aim to choose a few representative tools, provide some benchmarking, and evaluate how effective the different types of tools are at simple tasks.
Ontology mapping visualization
Ontology mapping deals with relating concepts from different ontologies and is typically concerned with the representation and storage of mappings between those concepts. BioPortal ontologies are usually interconnected, and mappings between them are available, although a visualization of these mappings does not currently exist. Two types of mapping visualization were explored at the BioHackathon: (1) a visualization of the ontology mappings across all BioPortal ontologies, and (2) a visualization of a subset of BioPortal ontologies that would be useful in OntoFinder/Factory - a tool for finding relevant BioPortal ontologies and also for building new ontologies. The hackers investigated the applicability and utility of two tools/environments: Google Fusion Tables and Gephi. This work is ongoing.
Identifier conversion service
Example queries using G-Links
Natural language semantic query via voice recognition
Finally, the project that generated the most “buzz” among the participants in BioHackathon 2012 was Genie - a “Siri for Biologists”. The G-language Project members undertook the development of a virtual research assistant for bioinformatics, designed to be an intuitive entry-level gateway for database searches. The prototype developed and demonstrated at the BioHackathon was limited to gene- and genome-centric questions. Users communicate with Genie in spoken English, and Genie replies in a synthesized voice. Genie can find information in three main categories:
1. Anything about a gene of interest, such as its sequence, function, cellular localization, pathway, related diseases, related SNPs and polymorphisms, interactions, regulation, and expression levels.
2. Anything about a set of genes selected by multiple criteria - for example, all SNPs in genes that are related to cancer, act as transferases, are expressed in the cytoplasm, and have orthologs in mice.
3. Anything about a genome, such as the production of different types of visual maps, calculation of GC skews, prediction of the origin and terminus of replication, and calculation of codon usage bias.
Genie uses an NLP- and dictionary-based approach, with the species name as a top-level filter to reduce the search/retrieval space: annotations are fetched for that species, and a dictionary of gene names is created dynamically. To implement integrated information retrieval, the following software systems were used:
- The G-language Genome Analysis Environment and its REST service, which allows extremely rapid genome-centric information retrieval.
- G-language Maps (Genome Projector and Pathway Projector, as well as the Chaos Game Representation REST Service), which visualize the genomic information.
- The Keio Bioinformatics Web Services EMBASSY package and EMBOSS, which provide more than 400 tools that can be applied to the information.
- G-Links, an extremely rapid gene-centric data aggregator.
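The species-filtered, dictionary-based matching step described above can be caricatured in a few lines of Ruby. The annotation table, method name, and matching rule here are all invented for illustration and bear no relation to Genie's actual implementation.

```ruby
# Toy annotation data, keyed by species (the top-level filter).
ANNOTATIONS = {
  'Escherichia coli' => { 'thrA' => 'aspartokinase I',
                          'lacZ' => 'beta-galactosidase' },
  'Homo sapiens'     => { 'BRCA2' => 'breast cancer 2' }
}

# First narrow the search space to one species, then match gene names
# from the dynamically built dictionary against the spoken query text.
def lookup(species, query)
  dictionary = ANNOTATIONS.fetch(species, {})
  hits = dictionary.keys.select { |gene| query.include?(gene) }
  hits.map { |gene| [gene, dictionary[gene]] }.to_h
end

lookup('Escherichia coli', 'what is the function of lacZ')
# => {"lacZ"=>"beta-galactosidase"}
```

The design point this illustrates is the one the authors emphasize: applying the species filter first keeps the gene-name dictionary small, which makes naive substring matching over a spoken query tractable.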
Conclusions
The BioHackathon series started out with the Integrated Database Project of Japan, which aims to integrate all life science databases in Japan. Initially, the focus was on Web services and workflows to enable efficient data retrieval. However, the focus eventually shifted towards Semantic Web technologies due to the increasing heterogeneity and interlinked nature of the data at hand, arising, for example, from the accumulation of next-generation sequencing data and their annotations. From this, the community recognized the importance of RDF and ontology development - fundamental Semantic Web technologies that have also come to gain the attention of other domains in the life sciences, including genome science, the glycosciences and protein science. For example, BioMart and InterMine, which were initially developed to aid the integration of life science data, have now started to support Semantic Web technologies. These hackathons have served as a driving force towards the integration of data “islands” that have slowly started linking to one another through RDF development. However, insufficient guidelines, ontologies and tools to support RDF development have hampered true integration. The development of such guidelines, ontologies and tools has been the central focus of these hackathons, bringing the community together on a consistent basis, and these efforts have finally begun to bud. We expect them to bear fruit in the near future through the development of biomedical and metagenome applications on top of these foundations. Moreover, we expect that text mining will become increasingly vital to enriching life science Semantic Web data with the knowledge currently hidden within the literature.
Abbreviations
- ASCII: American Standard Code for Information Interchange
- BCSDB: Bacterial Carbohydrate Structure Database
- BRO: Biomedical Resource Ontology
- CAI: Codon Adaptation Index
- DBCLS: Database Center for Life Science
- DOI: Digital Object Identifier
- DRCAT: Data Resource CATalogue
- EDAM: EMBRACE Data And Methods
- EMBOSS: European Molecular Biology Open Software Suite
- EURECA: Enabling information re-Use by linking clinical REsearch and Care
- FALDO: Feature Annotation Location Description Ontology
- Fop: Frequency of OPtimal codons
- GAE: Genome Analysis Environment
- GFF3: Generic Feature Format version 3
- GFF2OWL: Generic Feature Format to Web Ontology Language
- GOA2RDF: Gene Ontology Annotations to RDF
- INSDC: International Nucleotide Sequence Database Collaboration
- IRI: Internationalized Resource Identifier
- ISB: International Society for Biocuration
- IUBMB: International Union of Biochemistry and Molecular Biology
- JCGGDB: Japan Consortium for Glycobiology and Glycotechnology Database
- KEGG: Kyoto Encyclopedia of Genes and Genomes
- LSD: Life Science Dictionary
- MEDALS: METI DAtabase portal for Life Science
- MEO: Metagenome Environment Ontology
- MeSH: Medical Subject Headings
- MIRIAM: Minimal Information Required In the Annotation of Models
- NBDC: National Bioscience Database Center
- NLP: Natural Language Processing
- NCBO: National Center for Biomedical Ontology
- OBI: Ontology for Biomedical Investigations
- OBO: Open Biomedical Ontology
- Open PHACTS: Open Pharmaceutical Triple Store
- OWL: Web Ontology Language
- PDF: Portable Document Format
- PHX: Predicted Highly eXpressed genes
- RDF: Resource Description Framework
- REST: REpresentational State Transfer
- RINGS: Resource for INformatics of Glycomes at Soka
- SIO: Semanticscience Integrated Ontology
- SNP: Single Nucleotide Polymorphism
- SPARQL: SPARQL Protocol and RDF Query Language
- UUID: Universally Unique Identifier
- XML: eXtensible Markup Language
Acknowledgements
BioHackathon 2011 and 2012 were supported by the Integrated Database Project (Ministry of Education, Culture, Sports, Science and Technology of Japan) and hosted by the National Bioscience Database Center (NBDC) and the Database Center for Life Science (DBCLS).
References
- Katayama T, Arakawa K, Nakao M: The DBCLS BioHackathon: standardization and interoperability for bioinformatics web services and workflows. J Biomed Semantics. 2010, 1: 8. 10.1186/2041-1480-1-8.
- Katayama T, Wilkinson MD, Vos R: The 2nd DBCLS BioHackathon: interoperable bioinformatics Web services for integrated applications. J Biomed Semantics. 2011, 2: 4. 10.1186/2041-1480-2-4.
- Katayama T, Wilkinson MD, Micklem G: The 3rd DBCLS BioHackathon: improving life science data integration with semantic Web technologies. J Biomed Semantics. 2013, 4: 6. 10.1186/2041-1480-4-6.
- BioHackathon 2011. http://2011.biohackathon.org/
- BioHackathon 2012. http://2012.biohackathon.org/
- Nakamura Y, Cochrane G, Karsch-Mizrachi I: The international nucleotide sequence database collaboration. Nucleic Acids Res. 2013, 41: D21-D24. 10.1093/nar/gks1084.
- Reese MG, Moore B, Batchelor C: A standard variation file format for human genome sequences. Genome Biol. 2010, 11: R88. 10.1186/gb-2010-11-8-r88.
- Takeda J-I, Yamasaki C, Murakami K: H-InvDB in 2013: an omics study platform for human functional gene and transcript discovery. Nucleic Acids Res. 2013, 41: D915-D919. 10.1093/nar/gks1245.
- Bolleman J, Mungall CJ, Strozzi F: FALDO: a semantic standard for describing the location of nucleotide and protein feature annotation. bioRxiv. doi:10.1101/002121.
- UniProt Consortium: Reorganizing the protein space at the universal protein resource (UniProt). Nucleic Acids Res. 2012, 40: D71-D75.
- Ogasawara O, Mashima J, Kodama Y: DDBJ new system and service refactoring. Nucleic Acids Res. 2013, 41: D25-D29. 10.1093/nar/gks1152.
- BioHackathon/HMMER3 to RDF. https://github.com/dbcls/bh11/wiki/Hmmer3-rdf-xml
- BioHackathon/INSDC to RDF. https://github.com/dbcls/bh11/wiki/OpenBio
- BioHackathon/GTF to RDF. https://github.com/dbcls/bh12/wiki/Cufflinks-rdf
- BioHackathon/GFF3 to OWL. https://code.google.com/p/gff3-to-owl/source/browse/trunk/GFF2OWL.groovy
- Aranda B, Blankenburg H, Kerrien S: PSICQUIC and PSISCORE: accessing and scoring molecular interactions. Nat Methods. 2011, 8: 528-529. 10.1038/nmeth.1637.
- Demir E, Cary MP, Paley S: The BioPAX community standard for pathway data sharing. Nat Biotechnol. 2010, 28: 935-942. 10.1038/nbt.1666.
- Toukach PV: Bacterial carbohydrate structure database 3: principles and realization. J Chem Inf Model. 2011, 51: 159-170. 10.1021/ci100150d.
- Ranzinger R, Frank M, von der Lieth C-W, Herget S: Glycome-DB.org: a portal for querying across the digital world of carbohydrate sequences. Glycobiology. 2009, 19: 1563-1567. 10.1093/glycob/cwp137.
- Ranzinger R, Herget S, von der Lieth C-W, Frank M: GlycomeDB - a unified database for carbohydrate structures. Nucleic Acids Res. 2011, 39: D373-D376. 10.1093/nar/gkq1014.
- Lütteke T, Bohne-Lang A, Loss A: GLYCOSCIENCES.de: an Internet portal to support glycomics and glycobiology research. Glycobiology. 2006, 16: 71R-81R. 10.1093/glycob/cwj049.
- Akune Y, Hosoda M, Kaiya S, Shinmachi D, Aoki-Kinoshita KF: The RINGS resource for glycome informatics analysis and data mining on the Web. Omics. 2010, 14: 475-486. 10.1089/omi.2009.0129.
- Campbell MP, Hayes CA, Struwe WB: UniCarbKB: putting the pieces together for glycomics research. Proteomics. 2011, 11: 4117-4121. 10.1002/pmic.201100302.
- Aoki-Kinoshita KF, Bolleman J, Campbell MP: Introducing glycomics data into the Semantic Web. J Biomed Semantics. 2013, 4: 39. 10.1186/2041-1480-4-39.
- Constantin A, Pettifer S, Voronkov A: PDFX: fully-automated PDF-to-XML conversion of scientific literature. Proceedings of the 13th ACM Symposium on Document Engineering: 10-13 September 2013; Florence, Italy. 2013, 177-180.
- Iwasaki W, Yamamoto Y, Takagi T: TogoDoc server/client system: smart recommendation and efficient management of life science literature. PLoS One. 2010, 5: e15305. 10.1371/journal.pone.0015305.
- Hakenberg J, Gerner M, Haeussler M: The GNAT library for local and remote gene mention normalization. Bioinformatics. 2011, 27: 2769-2771. 10.1093/bioinformatics/btr455.
- Huang M, Liu J, Zhu X: GeneTUKit: a software for document-level gene normalization. Bioinformatics. 2011, 27: 1032-1033. 10.1093/bioinformatics/btr042.
- Leaman R, Gonzalez G: BANNER: an executable survey of advances in biomedical named entity recognition. Pac Symp Biocomput. 2008, 13: 652-663.
- Stearns MQ, Price C, Spackman KA, Wang AY: SNOMED clinical terms: overview of the development process and project status. Proceedings of AMIA Symposium: 3-7 November 2001; Washington, DC. 2001, 662-666.
- Whetzel PL, Noy NF, Shah NH: BioPortal: enhanced functionality via new web services from the national center for biomedical ontology to access and use ontologies in software applications. Nucleic Acids Res. 2011, 39: W541-W545. 10.1093/nar/gkr469.
- BioPortal SPARQL endpoint. http://sparql.bioontology.org/
- Callahan A, Cruz-Toledo J, Dumontier M: Ontology-based querying with Bio2RDF’s linked open data. J Biomed Semantics. 2012, 4: S1.
- Juty N, Novère NL, Laibe C: Identifiers.org and MIRIAM registry: community resources to provide persistent identification. Nucleic Acids Res. 2012, 40: D580-D586. 10.1093/nar/gkr1097.
- Juty N, Le NN, Hermjakob H, Laibe C: Towards the collaborative curation of the registry underlying Identifiers.org. Database. 2013, 2013: bat017.
- MicrobeDB. http://microbedb.jp/
- Rogers FB: Medical subject headings. Bull Med Libr Assoc. 1963, 51: 114-116.
- LSD ontology. http://purl.jp/bio/10/lsd/ontology/201209
- LSD SPARQL endpoint. http://purl.jp/bio/10/lsd/sparql
- McDonald AG, Boyce S, Tipton KF: ExplorEnz: the primary source of the IUBMB enzyme list. Nucleic Acids Res. 2009, 37: D593-D597. 10.1093/nar/gkn582.
- Ashburner M, Ball CA, Blake JA: Gene Ontology: tool for the unification of biology. Nat Genet. 2000, 25: 25-29. 10.1038/75556.
- Muto A, Kotera M, Tokimatsu T: Modular architecture of metabolic pathways revealed by conserved sequences of reactions. J Chem Inf Model. 2013, 53: 613-622. 10.1021/ci3005379.
- Kanehisa M, Goto S, Sato Y, Furumichi M, Tanabe M: KEGG for integration and interpretation of large-scale molecular data sets. Nucleic Acids Res. 2012, 40: D109-D114. 10.1093/nar/gkr988.
- Gaudet P, Bairoch A, Field D: Towards BioDBcore: a community-defined information specification for biological databases. Nucleic Acids Res. 2011, 39: D7-D10. 10.1093/nar/gkq1173.
- Gaudet P, Bairoch A, Field D: Towards BioDBcore: a community-defined information specification for biological databases. Database. 2011, 2011: baq027.
- Baker NA, Klemm JD, Harper SL: Standardizing data. Nat Nanotechnol. 2013, 8: 73-74.
- Fernández-Suárez XM, Galperin MY: The 2013 nucleic acids research database issue and the online molecular biology database collection. Nucleic Acids Res. 2013, 41: D1-D7. 10.1093/nar/gks1297.
- NAR Database summary paper. http://www.oxfordjournals.org/nar/database/cap/
- Yamamoto Y, Yamaguchi A, Yonezawa A: Building linked open data towards integration of biomedical scientific literature with DBpedia. J Biomed Semantics. 2013, 4: 8. 10.1186/2041-1480-4-8.
- Ison J, Kalas M, Jonassen I: EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics, and formats. Bioinformatics. 2013, 29: 1325-1332. 10.1093/bioinformatics/btt113.
- LSDB catalog. http://integbio.jp/dbcatalog/?lang=en
- BioDBCore web interface. http://biosharing.org/biodbcore
- W3C HCLSIG. http://www.w3.org/wiki/HCLSIG
- Open PHACTS. http://www.openphacts.org/
- OpenRefine RDF extension. http://refine.deri.ie/
- GFF3 and GVF ontology. http://www.biointerchange.org/ontologies.html
- Antezana E, Egaña M, De Baets B, Kuiper M, Mironov V: ONTO-PERL: an API for supporting the development and analysis of bio-ontologies. Bioinformatics. 2008, 24: 885-887. 10.1093/bioinformatics/btn042.
- Kalaš M, Puntervoll P, Joseph A: BioXSD: the common data-exchange format for everyday bioinformatics web services. Bioinformatics. 2010, 26: i540-i546. 10.1093/bioinformatics/btq391.
- SHARD Triple-Store. http://www.avometric.com/shard.shtml
- SPARQL 1.1. http://www.w3.org/TR/sparql11-query/
- Granitzer M, Sabol V, Onn KW, Lukose D, Tochtermann K: Ontology alignment - a survey with focus on visually supported semi-automatic techniques. Future Internet. 2010, 2: 238-258. 10.3390/fi2030238.
- Google Fusion Tables. http://www.google.com/fusiontables/
- Arakawa K, Mori K, Ikeda K: G-language genome analysis environment: a workbench for nucleotide sequence data mining. Bioinformatics. 2003, 19: 305-306. 10.1093/bioinformatics/19.2.305.
- Arakawa K, Kido N, Oshita K, Tomita M: G-language genome analysis environment with REST and SOAP web service interfaces. Nucleic Acids Res. 2010, 38: W700-W705. 10.1093/nar/gkq315.
- Croft D, O’Kelly G, Wu G: Reactome: a database of reactions, pathways and biological processes. Nucleic Acids Res. 2011, 39: D691-D697. 10.1093/nar/gkq1018.
- Caspi R, Altman T, Dreher K: The MetaCyc database of metabolic pathways and enzymes and the BioCyc collection of pathway/genome databases. Nucleic Acids Res. 2012, 40: D742-D753. 10.1093/nar/gkr1014.
- G-Links demo 1. http://ws.g-language.org/toys/bh11/
- G-Links demo 2. http://ws.g-language.org/toys/bh11/index2.html
- Rice P, Longden I, Bleasby A: EMBOSS: the European molecular biology open software suite. Trends Genet. 2000, 16: 276-277. 10.1016/S0168-9525(00)02024-2.
- Genie video. http://www.youtube.com/watch?v=V4jsuIOAwyM
This article is published under license to BioMed Central Ltd. This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.