ALBERT

All Library Books, Journals and Electronic Records Telegrafenberg

Filter
  Collection
    • Articles  (4,442)
  Publisher
    • BioMed Central  (2,158)
    • IOS Press  (1,873)
    • MDPI Publishing  (236)
    • American Geophysical Union
    • American Institute of Physics (AIP)
    • American Meteorological Society
    • American Physical Society (APS)
    • Emerald
    • Reed Business Information
  Publication period
    • 2010-2014  (4,442)
    • 1985-1989
    • 1955-1959
    • 1950-1954
    • 1935-1939
  Year
    • 2014  (2,537)
    • 2011  (1,905)
  Topic
    • Computer science  (4,267)
    • General technology  (359)
  • 1
    Publication date: 2014-12-14
    Description: Background: Biomedical ontologies are increasingly instrumental in the advancement of biological research, primarily through their use to efficiently consolidate large amounts of data into structured, accessible sets. However, ontology development and usage can be hampered by the segregation of knowledge by domain that occurs due to independent development and use of the ontologies. The ability to infer data associated with one ontology to data associated with another ontology would prove useful in expanding information content and scope. We here focus on relating two ontologies: the Gene Ontology (GO), which encodes canonical gene function, and the Mammalian Phenotype Ontology (MP), which describes non-canonical phenotypes, using statistical methods to suggest GO functional annotations from existing MP phenotype annotations. This work is in contrast to previous studies that have focused on inferring gene function from phenotype primarily through lexical or semantic similarity measures. Results: We have designed and tested a set of algorithms that represents a novel methodology to define rules for predicting gene function by examining the emergent structure and relationships between the gene functions and phenotypes rather than inspecting the terms semantically. The algorithms inspect relationships among multiple phenotype terms to deduce if there are cases where they all arise from a single gene function. We apply this methodology to data about genes in the laboratory mouse that are formally represented in the Mouse Genome Informatics (MGI) resource. From the data, 7444 rule instances were generated from five generalized rules, resulting in 4818 unique GO functional predictions for 1796 genes. Conclusions: We show that our method is capable of inferring high-quality functional annotations from curated phenotype data. As well as creating inferred annotations, our method has the potential to allow for the elucidation of unforeseen, biologically significant associations between gene function and phenotypes that would be overlooked by a semantics-based approach. Future work will include the implementation of the described algorithms for a variety of other model organism databases, taking full advantage of the abundance of available high-quality curated data.
    Online ISSN: 1471-2105
    Topic: Biology, Computer science
    Published by BioMed Central
  • 2
    Publication date: 2014-12-14
    Description: This paper describes the Ontologies of Linguistic Annotation (OLiA) as one of the data sets currently available as part of the Linguistic Linked Open Data (LLOD) cloud. Within the LLOD cloud, the OLiA ontologies serve as a reference hub for annotation terminology for linguistic phenomena across a great bandwidth of languages. They have been used to facilitate interoperability and information integration of linguistic annotations in corpora, NLP pipelines, and lexical-semantic resources, and mediate their linking with multiple community-maintained terminology repositories.
    DOI: 10.3233/SW-140167
    Authors: Christian Chiarcos and Maria Sukhareva, Applied Computational Linguistics (ACoLi), Department of Computer Science and Mathematics, Goethe-University Frankfurt am Main, Germany, http://acoli.cs.uni-frankfurt.de
    Journal: Semantic Web
    Print ISSN: 1570-0844
    Online ISSN: 2210-4968
    Topic: Computer science
    Published by IOS Press
  • 3
    Publication date: 2014-12-14
    Description: This paper describes the publication and linking of (parts of) PAROLE SIMPLE CLIPS (PSC), a large-scale Italian lexicon, to the Semantic Web and the Linked Data cloud using the lemon model. The main challenge of the conversion is discussed, namely the reconciliation between the PSC semantic structure, which contains richly encoded semantic information following the qualia structure of the Generative Lexicon theory, and the lemon view of lexical sense as a reified pairing of a lexical item and a concept in an ontology. The result is two datasets: one consists of a list of lemon lexical entries with their lexical properties, relations and senses; the other consists of a list of OWL individuals representing the referents for the lexical senses. These OWL individuals are linked to each other by a set of semantic relations and mapped onto the SIMPLE OWL ontology of higher-level semantic types.
    DOI: 10.3233/SW-140168
    Authors: Riccardo Del Gratta, Francesca Frontini, Fahad Khan and Monica Monachini, Istituto Di Linguistica Computazionale ‘A. Zampolli’, Consiglio Nazionale delle Ricerche, Via Moruzzi 1, Pisa, Italy. E-mail: first.last@ilc.cnr.it
    Journal: Semantic Web
    Print ISSN: 1570-0844
    Online ISSN: 2210-4968
    Topic: Computer science
    Published by IOS Press
  • 4
    Publication date: 2014-12-18
    Description: Decentralized many-to-many negotiation for resource allocation in Cloud and multi-agent systems presents numerous challenges, including ones related to the buyer strategy, which is the focus of the present paper. Current approaches for deriving the resources each bid must ask for are not an optimal choice in all market cases. For this reason, we have proposed a hybrid negotiation strategy that combines two negotiation modes to generate the required resources of each bid in parallel: the first is an existing fixed negotiation strategy and the second is a learning selection strategy over the buyer's agreement space. Moreover, acting dynamically in the marketplace, by appropriately adjusting the buyer's resource provisioning times and calling for proposals to hand over contracted resources in order to break deadlocks involving buyers' tasks, has been shown via simulation results to achieve better performance in terms of both social welfare and buyer utility.
    Pages: 165-183
    DOI: 10.3233/MGS-140221
    Authors: Mohamed Raouf Habes and Habiba Belleili-Souici, Department of Computer Science, University of Badji Mokhtar, Annaba, Algeria; Laurent Vercouter, INSA de Rouen, Saint-Etienne du Rouvray, France
    Journal: Multiagent and Grid Systems, Volume 10, Number 3 / 2014
    Print ISSN: 1574-1702
    Online ISSN: 1875-9076
    Topic: Computer science
    Published by IOS Press
  • 5
    Publication date: 2014-12-18
    Beschreibung: In future energy systems, peaks in the daily electricity generation and consumption are expected to increase. The "smart grid" concept aims to maintain high levels of efficiency in the energy system by establishing distributed intelligence. Software agents (operating on devices with unknown computational capabilities) can implement dynamic and autonomous decision making about energy usage and generation, e.g. in domestic households, farms or offices. To reach satisfactory levels of efficiency and reliability, it is crucial to include planning-ahead of the energy-involving activities. Market mechanisms are a promising approach for large-scale coordination problems about energy supply and demand, but existing electricity markets either do not involve planning-ahead sufficiently or require a high level of sophistication and computing power from participants, which is not suitable for smart grid settings. This paper proposes a new market mechanism for smart grids, ABEM (Ahead- and Balancing Energy Market). ABEM performs an ahead market and a last-minute balancing market, where planning-ahead in the ahead market supports both binding ahead-commitments and reserve capacities in bids (which can be submitted as price functions). These features of planning-ahead reflect the features in modern wholesale electricity markets. However, constructing bids in ABEM is straightforward and fast. We also provide a model of a market with the features mentioned above, which a strategic agent can use to construct a bid (e.g. in ABEM), using a decision-theoretic approach. We evaluate ABEM experimentally in various stochastic scenarios and show favourable outcomes in comparison with a benchmark mechanism. 
    Pages: 137-163
    DOI: 10.3233/MGS-140220
    Authors: Nicolas Höning and Han La Poutré, Centrum Wiskunde en Informatica, Amsterdam, The Netherlands
    Journal: Multiagent and Grid Systems, Volume 10, Number 3 / 2014
    Print ISSN: 1574-1702
    Online ISSN: 1875-9076
    Topic: Computer science
    Published by IOS Press
  • 6
    Publication date: 2014-12-18
    Description: The Belief-Desire-Intention (BDI) model of agency provides a powerful technique for describing goal-based behavior for both individual agents and, more recently, agent teams. Numerous frameworks have been developed since the model was first proposed in the early 1980s. However, none of these frameworks has explicitly represented intention, which has meant that intention-based reasoning has had no direct framework support. Given the importance of this in the development of practical agent applications, we consider this to be a major shortcoming of existing frameworks. This paper explores how explicitly represented goals can be used both as a unifying modeling concept for the management of intention and as the basis for a BDI framework. The exploration is grounded both in software, a recently developed BDI framework called GORITE, and in application, an execution system for a robotic assembly cell. Both are discussed in detail.
    Pages: 119-136
    DOI: 10.3233/MGS-140219
    Authors: D. Jarvis and J. Jarvis, School of Engineering and Technology, Central Queensland University, Brisbane, Queensland, Australia; R. Rönnquist, Intendico Pty. Ltd., Carlton, Australia
    Journal: Multiagent and Grid Systems, Volume 10, Number 3 / 2014
    Print ISSN: 1574-1702
    Online ISSN: 1875-9076
    Topic: Computer science
    Published by IOS Press
  • 7
    Publication date: 2014-12-18
    Description: Background: Identification of individual components in complex mixtures is an important and sometimes daunting task in several research areas like metabolomics and natural product studies. NMR spectroscopy is an excellent technique for analysis of mixtures of organic compounds and gives a detailed chemical fingerprint of most individual components above the detection limit. For the identification of individual metabolites in metabolomics, correlation or covariance between peaks in 1H NMR spectra has previously been successfully employed. Similar correlation of 2D 1H-13C Heteronuclear Single Quantum Correlation spectra was recently applied to investigate the structure of heparin. In this paper, we demonstrate how a similar approach can be used to identify metabolites in human biofluids (post-prostatic palpation urine). Results: From 50 1H-13C Heteronuclear Single Quantum Correlation spectra, 23 correlation plots resembling pure metabolites were constructed. The identities of these metabolites were confirmed by comparing the correlation plots with reported NMR data, mostly from the Human Metabolome Database. Conclusions: Correlation plots prepared by statistically correlating 1H-13C Heteronuclear Single Quantum Correlation spectra from human biofluids provide unambiguous identification of metabolites. The correlation plots highlight cross-peaks belonging to each individual compound and, unlike conventional NMR experiments, are not limited by long-range magnetization transfer.
    Online ISSN: 1471-2105
    Topic: Biology, Computer science
    Published by BioMed Central
  • 8
    Publication date: 2014-12-18
    Description: Background: Alternative splicing (AS), as a post-transcriptional regulation mechanism, is an important application of RNA-seq studies in eukaryotes. A number of software packages and computational methods have been developed for detecting AS. Most of the methods, however, are designed and tested on animal data, such as human and mouse. Plant genes differ from those of animals in many ways, e.g., the average intron size and preferred AS types. These differences may require different computational approaches and raise questions about their effectiveness on plant data. The goal of this paper is to benchmark existing computational differential splicing (or transcription) detection methods so that biologists can choose the most suitable tools to accomplish their goals. Results: This study compares eight popular, publicly available software packages for differential splicing analysis using both simulated and real Arabidopsis thaliana RNA-seq data. All packages are freely available. The study examines the effect of varying the AS ratio, read depth, dispersion pattern, AS types, sample sizes and the influence of annotation. Using real data, the study looks at the consistency between the packages and verifies a subset of the detected AS events using PCR studies. Conclusions: No single method performs best in all situations. The accuracy of annotation has a major impact on which method should be chosen for AS analysis. DEXSeq performs well on the simulated data when the AS signal is relatively strong and annotation is accurate. Cufflinks achieves a better tradeoff between precision and recall and turns out to be the best one when incomplete annotation is provided. Some methods perform inconsistently for different AS types. Complex AS events that combine several simple AS events pose problems for most methods, especially for MATS. MATS stands out in the analysis of real RNA-seq data when all the AS events being evaluated are simple AS events.
    Online ISSN: 1471-2105
    Topic: Biology, Computer science
    Published by BioMed Central
  • 9
    Publication date: 2014-11-07
    Description: Background: PGxClean is a new web application that performs quality control analyses for data produced by the Affymetrix DMET chip or other candidate gene technologies. Importantly, the software does not assume that variants are biallelic single-nucleotide polymorphisms, but can be used on the variety of variant characteristics included on the DMET chip. Once quality control analyses have been completed, the associated PGxClean-Viz web application performs principal component analyses and provides tools for characterizing and visualizing population structure. Findings: The PGxClean web application accepts genotype data from the Affymetrix DMET chip or the PLINK PED format with genotypes annotated as (A,C,G,T or 1,2,3,4). Options for removing missing data and calculating genotype and allele frequencies are offered. Data can be subdivided by cohort characteristics, such as family ID, sex, phenotype, or case-control status. Once the data have been processed through the PGxClean web application, the output files can be entered into the PGxClean-Viz web application for performing principal component analysis to visualize population substructure. Conclusions: The PGxClean software provides rapid quality-control processing, data analysis, and data visualization for the Affymetrix DMET chip or other candidate gene technologies while improving on common analysis platforms by not assuming that variants are biallelic. The web application is available at www.pgxclean.com.
    Online ISSN: 1756-0381
    Topic: Biology, Computer science
    Published by BioMed Central
  • 10
    Publication date: 2014-11-07
    Description: In recent years, Bayesian networks have become an important modeling method for decision-making problems in real-world applications. In this paper, learning the parameters of a fuzzy Bayesian network (BN) based on imprecise/fuzzy observations is considered, where imprecise observations refer in particular to triangular fuzzy numbers. To achieve this, an extension to fuzzy probability theory based on imprecise observations is proposed which employs both the "truth" concept of Yager and the extension principle of fuzzy set theory. In addition, some examples are given to demonstrate the concepts of the proposed idea. The aim of our suggestion is to be able to estimate the joint fuzzy probability and the conditional probability tables (CPTs) of a Bayesian network based on imprecise observations. Two real-world datasets, Car Evaluation Database (CED) and Extending Credibility (EC), are employed, where some of the attributes have crisp (exact) and some have fuzzy observations. Estimated parameters of the CED's corresponding network, using our extension, are shown in tables. Then, using Kullback-Leibler divergence, two scenarios are considered to show that fuzzy parameters preserve more knowledge than crisp parameters. This also holds in cases where there are only a small number of observations. Finally, to compare a network with fuzzy parameters against the network with crisp parameters, prediction accuracy results are provided, which show improvements in the predictions.
    Pages: 167-180
    DOI: 10.3233/KES-140296
    Authors: Mostafa Ghazizadeh-Ahsaee, Mahmoud Naghibzadeh and Bahram Sadeghpour Gildeh, Ferdowsi University of Mashhad, Mashhad, Iran
    Journal: International Journal of Knowledge-Based and Intelligent Engineering Systems, Volume 18, Number 3 / 2014
    Print ISSN: 1327-2314
    Online ISSN: 1875-8827
    Topic: Computer science
    Published by IOS Press
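The Kullback-Leibler comparison mentioned in the abstract above can be illustrated with a minimal sketch. The distributions below are invented toy values standing in for one CPT column estimated two ways; they are not taken from the paper:

```python
import math

def kl_divergence(p, q):
    """Discrete Kullback-Leibler divergence D(P || Q), in nats."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical CPT column for one attribute, estimated two ways:
p_fuzzy = [0.5, 0.3, 0.2]   # parameters learned from fuzzy observations
p_crisp = [0.6, 0.3, 0.1]   # parameters learned from crisp observations

divergence = kl_divergence(p_fuzzy, p_crisp)  # 0 only when the two agree
```

A smaller divergence between the learned parameters and a reference distribution is what the paper uses to argue that fuzzy parameters preserve more knowledge.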
  • 11
    Publication date: 2014-11-07
    Description: This paper presents an interactive verifier for logic programs. These logic programs are constructed by a schema-based method. Each program is associated with proof schemes due to the program development method. The correctness proof of a program is guided by its associated proof schemes. The main components of the verifier are the prover, which carries out the proof steps; the knowledge base (KB), which includes representations of all theories and transformation rules; the KB update, which supports the update of the KB; and the graphical user interface (GUI). The emphasis in the design of this proof checker is on effective guidance of the proof based on the activated proof schemes and on performance by the verifier of tedious, trivial and time-consuming tasks. The difficult proof decisions are taken by the user; the proof checker then applies them. The design of the interface is based on providing the user the required support for the proof of a theorem and for the update of the KB. This system is an effective and useful tool for the interactive verification of non-trivial logic programs.
    Pages: 143-156
    DOI: 10.3233/KES-140294
    Authors: Emmanouil Marakakis, Haridimos Kondylakis and Nikos Papadakis, Department of Informatics Engineering, Technological Educational Institute of Crete, Heraklion, Greece
    Journal: International Journal of Knowledge-Based and Intelligent Engineering Systems, Volume 18, Number 3 / 2014
    Print ISSN: 1327-2314
    Online ISSN: 1875-8827
    Topic: Computer science
    Published by IOS Press
  • 12
    Publication date: 2014-11-07
    Description: We address the issue of preference recommendation towards a more reliable recommender system. A general recommender system provides a collection of items, or the several best of them, on the basis of a fixed preference constraint. However, realistic user preferences may be so complicated that they make a conventional recommender system unreliable. In real-world applications, e.g. travel planning or hotel selection, specific constraints may be involved, such as a limited travel time or an appropriate budget for hotel accommodation. Motivated by these applications, we investigate constrained preference recommendation (CPR), of which two main types are studied: Threshold-CPR (T-CPR) and Range-CPR (R-CPR). We first analyze and define the related problems. Then, we provide solutions with an illustration of the problem-solving procedure. The results are compared with other representative techniques. Finally, we provide an extension study on the general CPR problems, in which we first propose a normalized system for constraint representation and then use it to analyze representative literature.
    Pages: 157-165
    DOI: 10.3233/KES-140295
    Authors: Anming Li and Junyi Chai, Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Hong Kong, China
    Journal: International Journal of Knowledge-Based and Intelligent Engineering Systems, Volume 18, Number 3 / 2014
    Print ISSN: 1327-2314
    Online ISSN: 1875-8827
    Topic: Computer science
    Published by IOS Press
  • 13
    Publication date: 2014-11-07
    Description: The purpose of security checks at airports is to achieve a reduction in the risk of malevolent attacks on the aviation system. The introduction of new security measures aims at reducing this perceived level of risk, and often takes place as a direct reaction to (attempted) attacks. This procedure means that offenders remain one step ahead of security agents. The aim of the approach presented here is to overcome this shortfall by supporting decision-making in the context of airport security with a systematically created knowledge base. The combination of two well-accepted methods, scenario analysis and structural complexity management, supports a structured knowledge acquisition process that serves as a basis for the proactive identification of system weaknesses. Furthermore, this combination of methods can be applied to the search for optimisation potentials concerned with possible future threats. The basis for the approach is composed of threat scenario components, security measures and dependencies between these elements. A Multiple-Domain Matrix is applied for system modelling. Clustering of threat scenarios and intensity of relations to security measures are used for analysis. The interpretation of findings makes use of portfolio representations.
    Pages: 191-200
    DOI: 10.3233/KES-140300
    Authors: Mara Cole, Bauhaus Luftfahrt e.V., Munich, Germany; Maik Maurer, Institute of Product Development, Technische Universität München, Garching, Germany
    Journal: International Journal of Knowledge-Based and Intelligent Engineering Systems, Volume 18, Number 3 / 2014
    Print ISSN: 1327-2314
    Online ISSN: 1875-8827
    Topic: Computer science
    Published by IOS Press
  • 14
    Publication date: 2014-11-07
    Description: A generalized Bayesian inference nets model (GBINM) is proposed to aid developers in constructing self-adaptive Bayesian inference nets for various applications, together with a new approach for defining and assigning the statistical parameters of Bayesian inference nodes needed to calculate the propagation of probabilities and address uncertainties. GBINM and the proposed approach are applied to design an intelligent medical system for diagnosing cardiovascular diseases. Thousands of site-sampled clinical records are used for designing and testing the constructed system. The preliminary diagnostic results show that the proposed methodology has salient validity and effectiveness.
    Pages: 181-190
    DOI: 10.3233/KES-140299
    Authors: Booma Devi Sekar and Mingchui Dong, Department of ECE, Faculty of Science and Technology, University of Macau, Macau, China
    Journal: International Journal of Knowledge-Based and Intelligent Engineering Systems, Volume 18, Number 3 / 2014
    Print ISSN: 1327-2314
    Online ISSN: 1875-8827
    Topic: Computer science
    Published by IOS Press
  • 15
    Publication date: 2014-11-07
    Description: This paper presents a cost optimization model for scheduling scientific workflows on IaaS clouds such as Amazon EC2 or RackSpace. We assume multiple IaaS clouds with heterogeneous virtual machine instances, with a limited number of instances per cloud and hourly billing. Input and output data are stored on a cloud object store such as Amazon S3. Applications are scientific workflows modeled as DAGs, as in the Pegasus Workflow Management System. We assume that tasks in the workflows are grouped into levels of identical tasks. Our model is specified using mathematical programming languages (AMPL and CMPL) and allows us to minimize the cost of workflow execution under deadline constraints. We present results obtained using our model and the benchmark workflows representing real scientific applications in a variety of domains. The data used for evaluation come from the synthetic workflows, from general-purpose cloud benchmarks, as well as from the data measured in our own experiments with Montage, an astronomical application, executed on the Amazon EC2 cloud. We indicate how this model can be used for scenarios that require resource planning for scientific workflows and their ensembles.
    DOI: 10.3233/SPR-140406
    Authors: Maciej Malawski, Kamil Figiela and Marian Bubak, Department of Computer Science AGH, Kraków, Poland; Ewa Deelman, USC Information Sciences Institute, Marina del Rey, CA, USA; Jarek Nabrzyski, Center for Research Computing, University of Notre Dame, Notre Dame, IN, USA. E-mails: malawski@agh.edu.pl, kfigiela@agh.edu.pl, bubak@agh.edu.pl, deelman@isi.edu, naber@nd.edu
    Journal: Scientific Programming
    Print ISSN: 1058-9244
    Online ISSN: 1875-919X
    Topic: Computer science
    Published by IOS Press
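The core trade-off in the abstract above, minimizing cost under a deadline with hourly billing, can be sketched in miniature. The instance names, prices, and speed factors below are invented for illustration and are not taken from the paper's model:

```python
import math

# Hypothetical instance catalogue: (name, hourly price in $, relative speed)
instances = [
    ("small",  0.10, 1.0),
    ("medium", 0.20, 2.0),
    ("large",  0.40, 4.0),
]

def cheapest_within_deadline(work_hours, deadline_hours, instances):
    """Return (name, cost) of the cheapest instance type that finishes
    `work_hours` of baseline work before the deadline, or None."""
    best = None
    for name, price, speed in instances:
        hours = math.ceil(work_hours / speed)   # hourly billing rounds up
        if hours <= deadline_hours:
            cost = hours * price
            if best is None or cost < best[1]:
                best = (name, cost)
    return best
```

For 8 baseline hours of work and a 4-hour deadline, the small instance misses the deadline, while medium and large both cost the same; the sketch keeps the first cheapest option found. The real model optimizes over whole workflow levels and multiple clouds with an LP/MIP solver rather than this greedy per-level choice.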
  • 16
    Publication date: 2014-11-09
    Description: Semantic Web services frameworks provide the means to automatically discover, rank, compose and invoke services according to user requirements and preferences. However, current preference models offer limited expressiveness, and they are tightly coupled with the underlying discovery and ranking mechanisms. Furthermore, these mechanisms present performance, interoperability and integration issues that prevent the uptake of semantic technologies in these scenarios. In this work, we discuss three interrelated contributions on preference modeling, discovery optimization, and flexible, integrated ranking, tackling the identified challenges in those areas using a lightweight approach.
    DOI: 10.3233/AIC-140644
    Authors: José María García, University of Seville, Sevilla, Spain. E-mail: josemgarcia@us.es
    Journal: AI Communications
    Print ISSN: 0921-7126
    Online ISSN: 1875-8452
    Topic: Computer science
    Published by IOS Press
  • 17
    Publication date: 2014-11-09
    Description: Background: The rapid accumulation of whole-genome data has renewed interest in the study of using gene-order data for phylogenetic analyses and ancestral reconstruction. Current software and web servers typically do not support duplication and loss events along with rearrangements. Results: MLGO (Maximum Likelihood for Gene-Order Analysis) is a web tool for the reconstruction of phylogeny and/or ancestral genomes from gene-order data. MLGO is based on likelihood computation and shows advantages over existing methods in terms of accuracy, scalability and flexibility. Conclusions: To the best of our knowledge, it is the first web tool for the analysis of large-scale genomic changes including not only rearrangements but also gene insertions, deletions and duplications. The web tool is available at http://www.geneorder.org/server.php.
    Online ISSN: 1471-2105
    Topic: Biology, Computer science
    Published by BioMed Central
  • 18
    Publikationsdatum: 2014-11-09
    Beschreibung: This work proposes novel methodologies to improve the use of Light Detection And Ranging (LiDAR) for environmental purposes, especially for thematic mapping (LiDAR only or fused with other remote sensors) and the estimation of forest variables. The methodologies make use of well-known techniques from soft computing (machine learning and evolutionary computation) and their adaptation to develop LiDAR-derived products. DOI: 10.3233/AIC-140643. Author: Jorge Garcia-Gutierrez, Department of Computer Languages and Systems, University of Seville, Seville, Spain (jorgarcia@us.es). Journal: AI Communications.
    Print ISSN: 0921-7126
    Digitale ISSN: 1875-8452
    Thema: Informatik
    Publiziert von IOS Press
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 19
    Publikationsdatum: 2014-11-05
    Beschreibung: Background: The major histocompatibility complex (MHC) is responsible for presenting antigens (epitopes) on the surface of antigen-presenting cells (APCs). When pathogen-derived epitopes are presented by MHC class II on an APC surface, T cells may be able to trigger a specific immune response. Prediction of MHC-II epitopes is particularly challenging because the open binding cleft of the MHC-II molecule allows epitopes to bind beyond the peptide binding groove; therefore, the molecule is capable of accommodating peptides of variable length. Among the methods proposed to predict MHC-II epitopes, artificial neural networks (ANNs) and support vector machines (SVMs) are the most effective. We propose a novel classification algorithm for MHC-II prediction, called sparse representation via l1-minimization. Results: We obtained a collection of experimentally confirmed MHC-II epitopes from the Immune Epitope Database and Analysis Resource (IEDB) and applied our l1-minimization algorithm. To benchmark the performance of our proposed algorithm, we compared our predictions against an SVM classifier. We measured sensitivity, specificity and accuracy; we then used Receiver Operating Characteristic (ROC) analysis to evaluate the performance of our method. The prediction performance of the l1-minimization algorithm on MHC-II epitopes was generally comparable and, in some cases, superior to the standard SVM classification method, and it overcame the lack of robustness of other methods with respect to outliers. While our method consistently favored DPPS encoding with the alleles tested, SVM showed slightly better accuracy when "11-factor" encoding was used. Conclusions: l1-minimization achieves accuracy similar to SVM and has additional advantages, such as robustness with respect to outliers. With l1-minimization, no model selection dependency is involved.
    Digitale ISSN: 1756-0381
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 20
    Publikationsdatum: 2014-11-05
    Beschreibung: The constantly growing amount of Web content and the success of the Social Web lead to increasing needs for Web archiving. These needs go beyond the pure preservation of Web pages. Web archives are turning into “community memories” that aim at building a better understanding of the public view on, e.g., celebrities, court decisions and other events. Due to the size of the Web, the traditional “collect-all” strategy is in many cases not the best method to build Web archives. In this paper, we present the ARCOMEM (From Collect-All Archives to Community Memories) architecture and implementation, which uses semantic information, such as entities, topics and events, complemented with information from the Social Web, to guide a novel Web crawler. The resulting archives are automatically enriched with semantic meta-information to ease access and allow retrieval based on conditions that involve high-level concepts.
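The semantically guided crawling idea above can be sketched as a priority-queue crawler in which a relevance score replaces breadth-first order. The scoring function, link extractor, toy link graph and budget below are all illustrative stand-ins, not ARCOMEM's actual components:

```python
import heapq

def semantic_priority_crawl(seed_urls, relevance, fetch_links, budget=10):
    """ARCOMEM-style focused crawl sketch: pop URLs in order of a
    semantic relevance score instead of breadth-first. `relevance`
    and `fetch_links` are caller-supplied stand-ins for the semantic
    analysis and fetching components of the real system."""
    frontier = [(-relevance(url), url) for url in seed_urls]
    heapq.heapify(frontier)
    seen = set(seed_urls)
    archived = []
    while frontier and len(archived) < budget:
        _, url = heapq.heappop(frontier)
        archived.append(url)
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-relevance(link), link))
    return archived

# toy link graph and relevance scores for the sketch
links = {"a": ["b", "c"], "b": [], "c": ["d"], "d": []}
scores = {"a": 0.9, "b": 0.2, "c": 0.8, "d": 0.5}
order = semantic_priority_crawl(["a"], scores.get, lambda u: links[u], budget=4)
```

Highly scored pages ("c", then "d") are archived before the weakly relevant "b", which a collect-all breadth-first crawl would have visited earlier.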
    Digitale ISSN: 1999-5903
    Thema: Informatik
    Publiziert von MDPI Publishing
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 21
    Publikationsdatum: 2014-12-16
    Beschreibung: Background: Genomic selection (GS) promises to improve accuracy in estimating breeding values and genetic gain for quantitative traits compared to traditional breeding methods. Its reliance on high-throughput genome-wide markers and statistical complexity, however, is a serious challenge in data management, analysis, and sharing. A bioinformatics infrastructure for data storage and access, and user-friendly web-based tool for analysis and sharing output is needed to make GS more practical for breeders. Results: We have developed a web-based tool, called solGS, for predicting genomic estimated breeding values (GEBVs) of individuals, using a Ridge-Regression Best Linear Unbiased Predictor (RR-BLUP) model. It has an intuitive web-interface for selecting a training population for modeling and estimating genomic estimated breeding values of selection candidates. It estimates phenotypic correlation and heritability of traits and selection indices of individuals. Raw data is stored in a generic database schema, Chado Natural Diversity, co-developed by multiple database groups. Analysis output is graphically visualized and can be interactively explored online or downloaded in text format. An instance of its implementation can be accessed at the NEXTGEN Cassava breeding database, http://cassavabase.org/solgs. Conclusions: solGS enables breeders to store raw data and estimate GEBVs of individuals online, in an intuitive and interactive workflow. It can be adapted to any breeding program.
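The RR-BLUP model behind solGS shrinks marker effects with a ridge penalty. The toy sketch below solves the ridge system (Z'Z + λI)u = Z'y for just two markers via Cramer's rule; the genotype coding, phenotypes and λ value are invented for illustration, and solGS itself fits the full mixed model:

```python
def rr_blup_two_markers(Z, y, lam):
    """Toy RR-BLUP: solve (Z'Z + lam*I) u = Z'y for exactly two
    markers via Cramer's rule, then return marker effects u and
    GEBVs Z @ u. A sketch of the model class solGS fits, not its
    code; real implementations handle thousands of markers with
    proper linear algebra and mixed-model machinery."""
    a = sum(z[0] * z[0] for z in Z) + lam
    b = sum(z[0] * z[1] for z in Z)
    d = sum(z[1] * z[1] for z in Z) + lam
    r0 = sum(z[0] * yi for z, yi in zip(Z, y))
    r1 = sum(z[1] * yi for z, yi in zip(Z, y))
    det = a * d - b * b
    u = ((d * r0 - b * r1) / det, (a * r1 - b * r0) / det)
    gebv = [z[0] * u[0] + z[1] * u[1] for z in Z]
    return u, gebv

# invented genotypes coded -1/0/1 and phenotypes for four individuals
u, gebv = rr_blup_two_markers(
    [[1, 0], [0, 1], [1, 1], [-1, 0]],
    [1.0, 0.5, 1.5, -1.0],
    1.0,
)
```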
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 22
    Publikationsdatum: 2014-12-16
    Beschreibung: Background: According to Regulation (EU) No 619/2011, trace amounts of non-authorised genetically modified organisms (GMO) in feed are tolerated within the EU if certain prerequisites are met. Tolerable traces must not exceed the so-called 'minimum required performance limit' (MRPL), which was defined according to the mentioned regulation to correspond to a 0.1% mass fraction per ingredient. Therefore, not yet authorised GMO (and some GMO whose approvals have expired) have to be quantified at a very low level following the qualitative detection in genomic DNA extracted from feed samples. As the results of quantitative analysis can imply severe legal and financial consequences for producers or distributors of feed, the quantification results need to be utterly reliable. Results: We developed a statistical approach to investigate the experimental measurement variability within one 96-well PCR plate. This approach visualises the frequency distribution of the zygosity-corrected relative content of genetically modified material resulting from different combinations of transgene and reference gene Cq values. One application of it is the simulation of the consequences of varying parameters on measurement results. Parameters could be, for example, replicate numbers or baseline and threshold settings; measurement results could be, for example, the median (class) and relative standard deviation (RSD). All calculations can be done using the built-in functions of Excel without any need for programming. The developed Excel spreadsheets are available (see section 'Availability of supporting data' for details). In most cases, the combination of four PCR replicates for each of the two DNA isolations already resulted in a relative standard deviation of 15% or less. Conclusions: The aims of the study are scientifically based suggestions for minimisation of the uncertainty of measurement, especially in (but not limited to) the field of GMO quantification at low concentration levels.
Four PCR replicates for each of the two DNA isolations seem to be a reasonable minimum number to narrow down the possible spread of results.
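A minimal sketch of the kind of replicate-combination analysis described above, in Python rather than Excel. The delta-Cq relation 2^(Cq_ref − Cq_transgene) and the unit zygosity factor are illustrative assumptions, not the paper's exact formulas:

```python
import itertools
import statistics

def gm_content(cq_transgene, cq_ref, zygosity_factor=1.0):
    """Relative GM content (%) from one transgene/reference Cq pair
    via the delta-Cq relation 100 * 2**(Cq_ref - Cq_transgene); the
    zygosity factor rescales for ploidy (both are illustrative)."""
    return 100.0 * 2.0 ** (cq_ref - cq_transgene) * zygosity_factor

def combination_spread(cq_transgenes, cq_refs):
    """Enumerate every transgene/reference replicate pairing, as the
    paper's spreadsheets do for one plate, and summarise the spread
    by median and relative standard deviation (RSD, %)."""
    values = [gm_content(t, r) for t, r in itertools.product(cq_transgenes, cq_refs)]
    median = statistics.median(values)
    rsd = 100.0 * statistics.stdev(values) / statistics.mean(values)
    return values, median, rsd

# four PCR replicates per target, mimicking the suggested minimum
values, median, rsd = combination_spread(
    [33.1, 33.3, 33.0, 33.4],  # transgene Cq replicates
    [23.2, 23.3, 23.1, 23.4],  # reference gene Cq replicates
)
```

With these invented Cq values the pairings cluster around the 0.1% MRPL, and the spread of the 16 combinations gives a direct picture of the measurement variability.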
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 23
    Publikationsdatum: 2014-12-16
    Beschreibung: Background: The latest generations of Single Nucleotide Polymorphism (SNP) arrays allow the study of copy-number variations in addition to genotyping measures. Results: MPAgenomics, standing for multi-patient analysis (MPA) of genomic markers, is an R package devoted to: (i) efficient segmentation and (ii) selection of genomic markers from multi-patient copy number and SNP data profiles. It provides wrappers for commonly used packages to streamline their repeated (sometimes difficult) manipulation, offering an easy-to-use pipeline for beginners in R. The segmentation of successive multiple profiles (finding losses and gains) is performed with an automatic choice of the parameters involved in the wrapped packages. Considering multiple profiles at the same time, MPAgenomics wraps efficient penalized regression methods to select relevant markers associated with a given outcome. Conclusions: MPAgenomics provides an easy tool to analyze data from SNP arrays in R. The R package MPAgenomics is available on CRAN.
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 24
    Publikationsdatum: 2014-12-16
    Beschreibung: Background: With the ever increasing use of computational models in the biosciences, the need to share models and reproduce the results of published studies efficiently and easily is becoming more important. To this end, various standards have been proposed that can be used to describe models, simulations, data or other essential information in a consistent fashion. These constitute various separate components required to reproduce a given published scientific result. Results: We describe the Open Modeling EXchange format (OMEX). Together with the use of other standard formats from the Computational Modeling in Biology Network (COMBINE), OMEX is the basis of the COMBINE Archive, a single file that supports the exchange of all the information necessary for a modeling and simulation experiment in biology. An OMEX file is a ZIP container that includes a manifest file, listing the content of the archive, an optional metadata file adding information about the archive and its content, and the files describing the model. The content of a COMBINE Archive consists of files encoded in COMBINE standards whenever possible, but may include additional files defined by an Internet Media Type. Several tools that support the COMBINE Archive are available, either as independent libraries or embedded in modeling software. Conclusions: The COMBINE Archive facilitates the reproduction of modeling and simulation experiments in biology by embedding all the relevant information in one file. Having all the information stored and exchanged at once also helps in building activity logs and audit trails. We anticipate that the COMBINE Archive will become a significant help for modellers, as the domain moves to larger, more complex experiments such as multi-scale models of organs, digital organisms, and bioengineering.
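A COMBINE Archive is, structurally, a ZIP container with a manifest. The sketch below builds such a container in memory; the manifest element names and format URIs follow the OMEX specification only loosely and should be treated as illustrative, not as a conformant writer:

```python
import io
import zipfile

def write_combine_archive(stream, model_files):
    """Build an OMEX-style archive: a ZIP container whose manifest
    lists every entry with a format identifier. Element names and
    format URIs are simplified from the OMEX spec for illustration."""
    entries = "\n".join(
        '  <content location="./{0}" format="{1}"/>'.format(name, fmt)
        for name, fmt in model_files
    )
    manifest = (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<omexManifest xmlns="http://identifiers.org/combine.specifications/omex-manifest">\n'
        + entries
        + "\n</omexManifest>\n"
    )
    with zipfile.ZipFile(stream, "w") as zf:
        zf.writestr("manifest.xml", manifest)
        for name, _fmt in model_files:
            zf.writestr(name, "<!-- placeholder model content -->")

buffer = io.BytesIO()
write_combine_archive(buffer, [
    ("model.xml", "http://identifiers.org/combine.specifications/sbml"),
    ("simulation.sedml", "http://identifiers.org/combine.specifications/sed-ml"),
])
```

Everything needed to rerun the experiment, model plus simulation description, travels in one file, which is exactly the reproducibility point the abstract makes.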
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 25
    Publikationsdatum: 2014-12-16
    Beschreibung: Background: Management of diabetes mellitus is complex and involves controlling multiple risk factors that may lead to complications. Given that patients provide most of their own diabetes care, patient self-management training is an important strategy for improving quality of care. Web-based interventions have the potential to bridge gaps in diabetes self-care and self-management. The objective of this study was to determine the effect of a web-based patient self-management intervention on psychological (self-efficacy, quality of life, self-care) and clinical (blood pressure, cholesterol, glycemic control, weight) outcomes. Methods: For this cohort study we used repeated-measures modelling and qualitative individual interviews. We invited patients with type 2 diabetes to use a self-management website and asked them to complete questionnaires assessing self-efficacy (primary outcome) every three weeks for nine months before and nine months after they received access to the website. We collected clinical outcomes at three-month intervals over the same period. We conducted in-depth interviews at study conclusion to explore acceptability, strengths and weaknesses, and mediators of use of the website. We analyzed the data using a qualitative descriptive approach and inductive thematic analysis. Results: Eighty-one participants (mean age 57.2 years, standard deviation 12) were included in the analysis. The self-efficacy score did not improve significantly more than expected after nine months (absolute change 0.12; 95% confidence interval -0.028, 0.263; p = 0.11), nor did clinical outcomes. Website usage was limited (average 0.7 logins/month). Analysis of the interviews (n = 21) revealed four themes: 1) mediators of website use; 2) patterns of website use, including the role of the blog in driving site traffic; 3) feedback on the website; and 4) potential mechanisms for the website's effect.
Conclusions: A self-management website for patients with type 2 diabetes did not improve self-efficacy. Website use was limited. Although its perceived reliability, the availability of a blog and emailed reminders drew people to the website, participants' struggles with type 2 diabetes, competing priorities in their lives, and website accessibility were barriers to its use. Future interventions should aim to integrate the intervention seamlessly into the daily routine of end users such that it is not seen as yet another chore.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 26
    Publikationsdatum: 2014-12-15
    Beschreibung: Background: With the advent of low-cost, fast sequencing technologies, metagenomic analyses have become possible. The large data volumes gathered by these techniques and the unpredictable diversity captured in them are still, however, a challenge for computational biology. Results: In this paper we address the problem of rapid taxonomic assignment with small and adaptive data models (< 5 MB) and present the accelerated k-mer explorer (AKE). Acceleration in AKE's taxonomic assignments is achieved by a special machine learning architecture, which is well suited to model data collections that are intrinsically hierarchical. We observed reasonably good classification accuracy for ranks down to order in a study on real-world data (Acid Mine Drainage, Cow Rumen). Conclusion: We show that the execution time of this approach is orders of magnitude shorter than that of competitive approaches and that accuracy is comparable. The tool is presented to the public as a web application (url: https://ani.cebitec.uni-bielefeld.de/ake/, username: bmc, password: bmcbioinfo).
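The feature side of a k-mer based classifier like AKE can be illustrated with a normalised k-mer profile; k = 4 and the normalisation scheme are arbitrary choices for this sketch, not the tool's actual settings:

```python
from collections import Counter

def kmer_profile(sequence, k=4):
    """Count overlapping k-mers in a read and normalise to
    frequencies, so reads of different length yield comparable
    fixed-vocabulary feature vectors for a downstream classifier."""
    sequence = sequence.upper()
    counts = Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))
    total = sum(counts.values())
    return {kmer: n / total for kmer, n in counts.items()}

profile = kmer_profile("ACGTACGTACGT", k=4)
```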
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 27
    Publikationsdatum: 2014-12-15
    Beschreibung: Background: Next generation sequencing produces base calls with low quality scores that can affect the accuracy of identifying simple nucleotide variation calls, including single nucleotide polymorphisms and small insertions and deletions. Here we compare the effectiveness of two data preprocessing methods, masking and trimming, and the accuracy of simple nucleotide variation calls on whole-genome sequence data from Caenorhabditis elegans. Masking substitutes low quality base calls with 'N's (undetermined bases), whereas trimming removes low quality bases, resulting in shorter read lengths. Results: We demonstrate that masking is more effective than trimming in reducing the false-positive rate in single nucleotide polymorphism (SNP) calling. However, neither preprocessing method affected the false-negative rate in SNP calling with statistical significance compared to the data analysis without preprocessing. The false-positive and false-negative rates for small insertions and deletions did not show differences between masking and trimming. Conclusions: We recommend masking over trimming as a more effective preprocessing method for next generation sequencing data analysis, since masking reduces the false-positive rate in SNP calling without sacrificing the false-negative rate, although trimming is currently the more commonly used method in the field. The perl script for masking is available at http://code.google.com/p/subn/. The sequencing data used in the study were deposited in the Sequence Read Archive (SRX450968 and SRX451773).
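The difference between the two preprocessing strategies can be made concrete in a few lines; the Phred threshold of 20 and the trim-at-first-low-base rule are simplifying assumptions (real trimmers typically use sliding windows):

```python
def mask_low_quality(read, quals, threshold=20):
    """Masking: replace each base whose Phred quality is below the
    threshold with 'N', keeping the read length unchanged."""
    return "".join(b if q >= threshold else "N" for b, q in zip(read, quals))

def trim_low_quality(read, quals, threshold=20):
    """Trimming (simplified): cut the read at the first low-quality
    base, so downstream bases are lost even when their quality is high."""
    for i, q in enumerate(quals):
        if q < threshold:
            return read[:i]
    return read

read = "ACGTACGT"
quals = [30, 30, 10, 30, 30, 30, 5, 30]
masked = mask_low_quality(read, quals)
trimmed = trim_low_quality(read, quals)
```

Masking keeps the five high-quality bases that follow the first bad call; trimming discards them, which illustrates why masking can preserve more usable signal for variant calling.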
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 28
    Publikationsdatum: 2014-12-19
    Beschreibung: Due to instability and poor identification ability of single pyroelectric infrared (PIR) detector for human target identification, this paper proposes a new approach to fuse the information collected from multiple PIR sensors for human identification. Firstly, Fast Fourier Transform (FFT), Short Time Fourier Transform (STFT), Wavelet Transform (WT) and Wavelet Packet Transform (WPT) are adopted to extract features of the human body, which can be achieved by single PIR sensor. Then, we apply Principal Component Analysis (PCA) and Support Vector Machine (SVM) to reduce the characteristic dimensions and to classify the human targets, respectively. Finally, Fuzzy Comprehensive Evaluation (FCE) is utilized to fuse recognition results from multiple PIR sensors to finalize human identification. The pyroelectric characteristics under scenarios with different people and/or different paths are analyzed by various experiments, and the recognition results with/without fusion procedure are also shown and compared. The experimental results demonstrate our scheme has improved efficiency for human identification.
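The final fusion step can be illustrated with the simplest fuzzy comprehensive evaluation operator, a weighted sum of per-sensor membership vectors; the weights and membership values below are invented for the sketch, and the paper's FCE operator may differ:

```python
def fuzzy_comprehensive_evaluation(sensor_memberships, sensor_weights):
    """Fuse per-sensor class membership vectors by a weighted sum and
    pick the class with the highest fused membership. Each row of
    sensor_memberships holds one PIR sensor's membership degree for
    every candidate person."""
    n_classes = len(sensor_memberships[0])
    fused = [0.0] * n_classes
    for weight, memberships in zip(sensor_weights, sensor_memberships):
        for j, m in enumerate(memberships):
            fused[j] += weight * m
    return fused.index(max(fused)), fused

# three sensors voting over two candidate persons (toy values)
decision, fused = fuzzy_comprehensive_evaluation(
    [[0.7, 0.3], [0.4, 0.6], [0.8, 0.2]],
    [0.5, 0.2, 0.3],
)
```

Even though the second sensor favours person 1, the weighted fusion settles on person 0, showing how multiple unreliable PIR readings can yield a more stable joint decision.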
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Publiziert von MDPI Publishing
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 29
    Publikationsdatum: 2014-12-09
    Beschreibung: Background: Online cancer information can support patients in making treatment decisions. However, such information may not be adequately tailored to the patient's perspective, particularly if healthcare professionals do not sufficiently engage patient groups when developing online information. We applied qualitative user testing during the development of a patient information website on stereotactic ablative radiotherapy (SABR), a new guideline-recommended curative treatment for early-stage lung cancer. Methods: We recruited 27 participants, comprising patients referred for SABR and their relatives. A qualitative user test of the website was performed with 18 subjects, followed by an additional evaluation by users after website redesign (N = 9). We primarily used the 'thinking aloud' approach and semi-structured interviewing. Qualitative data analysis was performed to assess the main findings reported by the participants. Results: Study participants preferred receiving information different from that initially provided. Problems identified with the online information related to comprehending medical terminology, understanding the scientific evidence regarding SABR, and appreciating the side-effects associated with SABR. Following redesign of the website, participants reported fewer problems with understanding content, and some additional recommendations for better online information were identified. Conclusions: Our findings indicate that input from patients and their relatives allows for a more comprehensive and usable website for providing treatment information. Such a website can facilitate improved patient participation in treatment decision-making for cancer.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 30
    Publikationsdatum: 2014-12-01
    Beschreibung: Background: The identification of new diagnostic or prognostic biomarkers is one of the main aims of clinical cancer research. Technologies like mass spectrometry are commonly being used in proteomic research. Mass spectrometry signals show the proteomic profiles of the individuals under study at a given time. These profiles correspond to the recording of a large number of proteins, much larger than the number of individuals. These variables supplement or complement the classical clinical variables. The objective of this study is to evaluate and compare the predictive ability of new and existing models combining mass spectrometry data and classical clinical variables. This study was conducted in the context of binary prediction. Results: To achieve this goal, simulated data as well as a real dataset dedicated to the selection of proteomic markers of steatosis were used to evaluate the methods. The proposed methods meet the challenge of high-dimensional data and the selection of predictive markers by using penalization methods (Ridge, Lasso) and dimension reduction techniques (PLS), as well as a combination of both strategies through sparse PLS, in the context of binary class prediction. The methods were compared in terms of mean classification rate and their ability to select the true predictive values. These comparisons were done on clinical-only models, mass-spectrometry-only models and combined models. Conclusions: It was shown that models which combine both types of data can be more efficient than models that use only clinical or mass spectrometry data when the sample size of the dataset is large enough.
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 31
    Publikationsdatum: 2014-12-01
    Beschreibung: Background: In order to extract meaningful information from electronic medical records, such as signs and symptoms, diagnoses, and treatments, it is important to take into account the contextual properties of the identified information: negation, temporality, and experiencer. Most work on automatic identification of these contextual properties has been done on English clinical text. This study presents ContextD, an adaptation of the English ConText algorithm to the Dutch language, and a Dutch clinical corpus. We created a Dutch clinical corpus containing four types of anonymized clinical documents: entries from general practitioners, specialists' letters, radiology reports, and discharge letters. Using a Dutch list of medical terms extracted from the Unified Medical Language System, we identified medical terms in the corpus with exact matching. The identified terms were annotated for negation, temporality, and experiencer properties. To adapt the ConText algorithm, we translated English trigger terms to Dutch and added several general and document-specific enhancements, such as negation rules for general practitioners' entries and a regular-expression-based temporality module. Results: The ContextD algorithm utilized 41 unique triggers to identify the contextual properties in the clinical corpus. For the negation property, the algorithm obtained an F-score from 87% to 93% for the different document types. For the experiencer property, the F-score was 99% to 100%. For the historical and hypothetical values of the temporality property, F-scores ranged from 26% to 54% and from 13% to 44%, respectively. Conclusions: ContextD showed good performance in identifying negation and experiencer property values across all Dutch clinical document types. Accurate identification of the temporality property proved to be difficult and requires further work.
The anonymized and annotated Dutch clinical corpus can serve as a useful resource for further algorithm development.
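A minimal illustration of the ConText/ContextD idea, assuming a toy trigger list and a crude "everything before the term" scope; the real algorithm uses curated per-language trigger phrases and termination rules that bound the scope:

```python
import re

# toy negation triggers (Dutch and English) in the spirit of ContextD;
# the actual algorithm uses 41 curated triggers across three properties
NEGATION_TRIGGERS = [r"\bgeen\b", r"\bniet\b", r"\bno\b", r"\bwithout\b"]

def is_negated(sentence, term):
    """Mark a matched medical term as negated if a negation trigger
    occurs anywhere before it in the sentence (a crude scope model)."""
    idx = sentence.lower().find(term.lower())
    if idx < 0:
        return False
    window = sentence.lower()[:idx]
    return any(re.search(trigger, window) for trigger in NEGATION_TRIGGERS)

negated = is_negated("Patient heeft geen koorts", "koorts")
affirmed = is_negated("Patient heeft koorts", "koorts")
```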
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 32
    Publikationsdatum: 2014-12-06
    Beschreibung: Background: Early recognition of severe sepsis and septic shock is challenging. The aim of this study was to determine the diagnostic accuracy of an electronic alert system in detecting severe sepsis or septic shock among emergency department (ED) patients. Methods: An electronic sepsis alert system was developed as part of a quality-improvement project for severe sepsis and septic shock. The system screened all adult ED patients for a combination of systemic inflammatory response syndrome and organ dysfunction criteria (hypotension, hypoxemia or lactic acidosis). This study included all patients older than 14 years who presented to the ED of a tertiary care academic medical center from Oct. 1, 2012 to Jan. 31, 2013. As a comparator, emergency medicine physicians or the critical care physician identified the patients with severe sepsis or septic shock. In the ED, vital signs were manually entered into the hospital electronic health record every hour in the critical care area and every two hours in other areas. We also calculated the time from the alert to the intensive care unit (ICU) referral. Results: Of the 49,838 patients who presented to the ED, 222 (0.4%) were identified to have severe sepsis or septic shock. The electronic sepsis alert had a sensitivity of 93.18% (95% CI, 88.78% to 96.00%), specificity of 98.44% (95% CI, 98.33% to 98.55%), positive predictive value of 20.98% (95% CI, 18.50% to 23.70%) and negative predictive value of 99.97% (95% CI, 99.95% to 99.98%) for severe sepsis and septic shock. The alert preceded ICU referral by a median of 4.02 hours (Q1-Q3: 1.25-8.55). Conclusions: Our study shows that the electronic sepsis alert tool has high sensitivity and specificity in recognizing severe sepsis and septic shock, which may improve early recognition and management.
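The screening logic described above (SIRS criteria combined with organ dysfunction) can be sketched as a simple rule check; the thresholds are textbook SIRS values and need not match the deployed system's exact rules:

```python
def sirs_criteria(temp_c, heart_rate, resp_rate, wbc):
    """Count the four classic SIRS criteria (textbook thresholds,
    used here for illustration only)."""
    met = 0
    met += temp_c > 38.0 or temp_c < 36.0
    met += heart_rate > 90
    met += resp_rate > 20
    met += wbc > 12.0 or wbc < 4.0  # white cells, x10^9 per litre
    return met

def sepsis_alert(temp_c, heart_rate, resp_rate, wbc,
                 hypotension=False, hypoxemia=False, lactic_acidosis=False):
    """Fire when at least two SIRS criteria coincide with any organ
    dysfunction criterion, mirroring the screening combination the
    abstract describes (a sketch, not the deployed system)."""
    organ_dysfunction = hypotension or hypoxemia or lactic_acidosis
    return sirs_criteria(temp_c, heart_rate, resp_rate, wbc) >= 2 and organ_dysfunction

alarm = sepsis_alert(38.6, 112, 24, 13.5, hypotension=True)
```

Requiring organ dysfunction on top of SIRS is what keeps the false-positive load manageable, which is reflected in the study's low positive predictive value despite high sensitivity.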
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 33
    Publikationsdatum: 2014-01-14
    Beschreibung: Background: Gene selection is an important part of microarray data analysis because it provides information that can lead to a better mechanistic understanding of an investigated phenomenon. At the same time, gene selection is very difficult because of the noisy nature of microarray data. As a consequence, gene selection is often performed with machine learning methods. The Random Forest method is particularly well suited for this purpose. In this work, four state-of-the-art Random Forest-based feature selection methods were compared in a gene selection context. The analysis focused on the stability of selection because, although it is necessary for determining the significance of results, it is often ignored in similar studies. Results: The comparison of post-selection accuracy in the validation of Random Forest classifiers revealed that all investigated methods were equivalent in this context. However, the methods substantially differed with respect to the number of selected genes and the stability of selection. Of the analysed methods, the Boruta algorithm predicted the most genes as potentially important. Conclusions: The post-selection classifier error rate, which is a frequently used measure, was found to be a potentially deceptive measure of gene selection quality. When the number of consistently selected genes was considered, the Boruta algorithm was clearly the best. Although it was also the most computationally intensive method, the Boruta algorithm's computational demands could be reduced to levels comparable to those of other algorithms by replacing the Random Forest importance with a comparable measure from Random Ferns (a similar but simplified classifier). Despite their design assumptions, the minimal-optimal selection methods were found to select a high fraction of false positives.
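Selection stability, the property the study emphasises, can be quantified for example as the average pairwise Jaccard similarity between the gene sets selected in repeated runs; this is one common choice, and the index used in such studies varies:

```python
from itertools import combinations

def jaccard(a, b):
    """Jaccard similarity of two selected-gene sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def selection_stability(runs):
    """Average pairwise Jaccard similarity over repeated selection
    runs: a simple way to quantify how consistently a feature
    selection method picks the same genes."""
    pairs = list(combinations(runs, 2))
    return sum(jaccard(a, b) for a, b in pairs) / len(pairs)

# three hypothetical selection runs on resampled data
stability = selection_stability([
    {"TP53", "BRCA1", "EGFR"},
    {"TP53", "BRCA1", "MYC"},
    {"TP53", "EGFR", "MYC"},
])
```

A method can score well on post-selection accuracy while this index stays low, which is exactly why the authors argue accuracy alone is a deceptive quality measure.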
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 34
    Publikationsdatum: 2014-01-15
    Beschreibung: Background: The Kruskal-Wallis test is a popular non-parametric statistical test for identifying expression quantitative trait loci (eQTLs) from genome-wide data due to its robustness against variations in the underlying genetic model and expression trait distribution, but testing billions of marker-trait combinations one-by-one can become computationally prohibitive. Results: We developed kruX, an algorithm implemented in Matlab, Python and R that uses matrix multiplications to simultaneously calculate the Kruskal-Wallis test statistic for several millions of marker-trait combinations at once. kruX is more than ten thousand times faster than computing associations one-by-one on a typical human dataset. We used kruX and a dataset of more than 500k SNPs and 20k expression traits measured in 102 human blood samples to compare eQTLs detected by the Kruskal-Wallis test to eQTLs detected by the parametric ANOVA and linear model methods. We found that the Kruskal-Wallis test is more robust against data outliers and heterogeneous genotype group sizes and detects a higher proportion of non-linear associations, but is more conservative for calling additive linear associations. Conclusion: kruX enables the use of robust non-parametric methods for massive eQTL mapping without the need for a high-performance computing infrastructure and is freely available from http://krux.googlecode.com.
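For a single marker-trait pair the Kruskal-Wallis statistic is simple to compute directly; kruX's contribution is obtaining the per-group rank sums for millions of pairs at once through a genotype-indicator matrix product. A scalar sketch of the statistic itself (assuming no tied values):

```python
def kruskal_wallis_h(groups):
    """Kruskal-Wallis H for one marker-trait pair: rank all trait
    values jointly, then H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1),
    where R_i is the rank sum of genotype group i. Mid-ranks for
    ties are omitted for brevity."""
    pooled = sorted(v for g in groups for v in g)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # assumes unique values
    n_total = len(pooled)
    rank_sum_term = sum(sum(rank[v] for v in g) ** 2 / len(g) for g in groups)
    return 12.0 * rank_sum_term / (n_total * (n_total + 1)) - 3.0 * (n_total + 1)

# three genotype groups (e.g. AA / Aa / aa) for one expression trait
h_stat = kruskal_wallis_h([[2.1, 2.5, 3.0], [3.2, 3.8], [4.1, 4.4, 5.0]])
```

Because H depends on the data only through the group rank sums R_i, stacking the indicator vectors of many markers into one matrix lets a single matrix multiplication produce all R_i at once, which is the trick behind kruX's speedup.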
    Digitale ISSN: 1471-2105
    Thema: Biologie , Informatik
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 35
    Publikationsdatum: 2014-01-17
    Beschreibung: To meet the increasing challenges of global warming and customer demands, the current hierarchical structure of the electric power grid is undergoing rapid change focused on high efficiency, reliability and flexibility. To address such challenges, the new concept of a smart grid with a sophisticated communication infrastructure has emerged. While power grid systems are at present based on traditional information architectures, the evolving smart grid has a strong need for communication connections among a huge number of distributed elements, such as generators, substations, monitoring sensors and customers. However, the power industry faces the dilemma that communication resources are limited due to the lack of wireless spectrum resources and the restrictions of wired applications. This paper first presents the architecture of a wide-area monitoring system for the smart grid that applies the spectrum sensing and sharing techniques of cognitive radio, based on an overview of current communication technologies. Afterwards, the paper presents a feasible application of the cognitive communication architecture to the existing power quality monitoring system by combining long-term and short-term monitoring approaches. Pages: 335-350. DOI: 10.3233/MGS-130212. Authors: Ling Luo (State Grid Shanghai Electric Power Research Institute), Yucan Yang (State Grid Shanghai Training Center), Jian Zhou (State Grid Shanghai Electric Power Research Institute), Qi Zhao (State Grid Shanghai Maintenance Company), Shanghai, China. Journal: Multiagent and Grid Systems, Volume 9, Number 4 / 2013.
    Print ISSN: 1574-1702
    Electronic ISSN: 1875-9076
    Subject: Computer Science
    Published by IOS Press
  • 36
    Publication date: 2014-01-17
    Description: Criminal behaviour often involves a combination of physical, mental, social and environmental (multi-)agent aspects, such as neurological deviations, hormones, arousal, (non)empathy, targets and social control. To study the dynamics of these aspects, this paper contributes a dynamical agent-based approach for the analysis and simulation of criminal behaviour. It involves dynamically generated desires and beliefs in opportunities within the social environment, both based on the literature on criminal behaviour. The approach is illustrated for the case of an Intermittent Explosive Disorder. Content Type: Journal Article. Pages: 315-334. DOI: 10.3233/MGS-130211. Authors: Tibor Bosse, Charlotte Gerritsen and Jan Treur (Department of Artificial Intelligence, Vrije Universiteit Amsterdam, Amsterdam, the Netherlands). Journal: Multiagent and Grid Systems. Online ISSN: 1875-9076. Print ISSN: 1574-1702. Volume 9, Number 4 / 2013.
    Print ISSN: 1574-1702
    Electronic ISSN: 1875-9076
    Subject: Computer Science
    Published by IOS Press
  • 37
    Publication date: 2014-01-19
    Description: Background: Glioblastoma is the most aggressive primary central nervous system tumor and carries a very poor prognosis. Invasion precludes effective treatment and virtually assures tumor recurrence. In the current study, we applied analytical and bioinformatics approaches to identify a set of microRNAs (miRs) from several different human glioblastoma cell lines that exhibit significant differential expression between migratory (edge) and migration-restricted (core) cell populations. The hypothesis of the study is that differential expression of miRs provides an epigenetic mechanism to drive cell migration and invasion. Results: Our research data comprise gene expression values for a set of 805 human miRs collected from matched pairs of migratory and migration-restricted cell populations from seven different glioblastoma cell lines. We identified 62 down-regulated and 2 up-regulated miRs that exhibit significant differential expression in the migratory (edge) cell population compared to matched migration-restricted (core) cells. We then conducted target prediction and pathway enrichment analysis with these miRs to investigate potential associated gene and pathway targets. Several miRs in the list appear to directly target apoptosis-related genes. The analysis identifies a set of genes that are predicted by 3 different algorithms, further emphasizing the potential role of these miRs in promoting glioblastoma. Conclusions: The results of this study identify a set of miRs with potential for decreased expression in invasive glioblastoma cells. The verification of these miRs and their associated targeted proteins provides new insights for further investigation into therapeutic interventions. The methodological approaches employed here could be applied to the study of other diseases to provide biomedical researchers and clinicians with increased opportunities for therapeutic interventions.
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 38
    Publication date: 2014-01-21
    Description: Background: The comparative modeling approach to protein structure prediction inherently relies on a template structure. Before building a model, such a template protein has to be found and aligned with the query sequence. Any error made at this stage may dramatically affect the quality of the result. There is a need, therefore, to develop accurate and sensitive alignment protocols. Results: The BioShell threading software is a versatile tool for aligning protein structures, protein sequences or sequence profiles and query sequences to a template structure. The software is also capable of suboptimal alignment generation. It can be executed as an application from the UNIX command line, or as a set of Java classes called from a script or a Java application. The implemented Monte Carlo search engine greatly facilitates the development and benchmarking of new alignment scoring schemes even when the functions exhibit non-deterministic polynomial-time complexity. Conclusions: Numerical experiments indicate that the new threading application offers template detection abilities and provides much better alignments than other methods. The package along with documentation and examples is available at: http://bioshell.pl/threading3d
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 39
    Publication date: 2014-01-15
    Description: Background: Breast cancer risk reduction has the potential to decrease the incidence of the disease, yet remains underused. We report on the development of a web-based tool that provides automated risk assessment and personalized decision support designed for collaborative use between patients and clinicians. Methods: Under Institutional Review Board approval, we evaluated the decision tool through a patient focus group, usability testing, and provider interviews (including breast specialists, primary care physicians, and genetic counselors). This included demonstrations and data collection at two scientific conferences (2009 International Shared Decision Making Conference, 2009 San Antonio Breast Cancer Symposium). Results: Overall, the evaluations were favorable. The patient focus group evaluations and usability testing (N = 34) provided qualitative feedback about format and design; 88% of these participants found the tool useful and 94% found it easy to use. 91% of the providers (N = 23) indicated that they would use the tool in their clinical setting. Conclusion: BreastHealthDecisions.org represents a new approach to breast cancer prevention care and a framework for high-quality preventive healthcare. The ability to integrate risk assessment and decision support in real time will allow for informed, value-driven, and patient-centered breast cancer prevention decisions. The tool is being further evaluated in the clinical setting.
    Electronic ISSN: 1472-6947
    Subject: Computer Science, Medicine
    Published by BioMed Central
  • 40
    Publication date: 2014-01-16
    Description: Background: Independent data sources can be used to augment post-marketing drug safety signal detection. The vast amount of publicly available biomedical literature contains rich side effect information for drugs at all clinical stages. In this study, we present a large-scale signal boosting approach that combines over 4 million records in the US Food and Drug Administration (FDA) Adverse Event Reporting System (FAERS) and over 21 million biomedical articles. Results: The datasets comprise 4,285,097 records from FAERS and 21,354,075 MEDLINE articles. We first extracted all drug-side effect (SE) pairs from FAERS. Our study implemented a total of seven signal ranking algorithms. We then compared these different ranking algorithms before and after they were boosted with signals from MEDLINE sentences or abstracts. Finally, we manually curated all drug-cardiovascular (CV) pairs that appeared in both data sources and investigated whether our approach can detect many true signals that have not been included in FDA drug labels. We extracted a total of 2,787,797 drug-SE pairs from FAERS with a low initial precision of 0.025. The ranking algorithm combining signals from both FAERS and MEDLINE significantly improved the precision from 0.025 to 0.371 for top-ranked pairs, representing a 13.8 fold elevation in precision. We showed by manual curation that drug-SE pairs that appeared in both data sources were highly enriched with true signals, many of which have not yet been included in FDA drug labels. Conclusions: We have developed an efficient and effective drug safety signal ranking and strengthening approach. We demonstrate that combining information from FAERS and the biomedical literature on a large scale can significantly contribute to drug safety surveillance.
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 41
    Publication date: 2014-01-16
    Description: Background: Computational methods for the prediction of protein features from sequence are a long-standing focus of bioinformatics. A key observation is that several protein features are closely inter-related, that is, they are conditioned on each other. Researchers have invested a lot of effort into designing predictors that exploit this fact. Most existing methods leverage inter-feature constraints by including known (or predicted) correlated features as inputs to the predictor, thus conditioning the result. Results: By including correlated features as inputs, existing methods only rely on one side of the relation: the output feature is conditioned on the known input features. Here we show how to jointly improve the outputs of multiple correlated predictors by means of a probabilistic-logical consistency layer. The logical layer enforces a set of weighted first-order rules encoding biological constraints between the features, and improves the raw predictions so that they least violate the constraints. In particular, we show how to integrate three stand-alone predictors of correlated features: subcellular localization (Loctree [J Mol Biol 348:85-100, 2005]), disulfide bonding state (Disulfind [Nucleic Acids Res 34:W177-W181, 2006]), and metal bonding state (MetalDetector [Bioinformatics 24:2094-2095, 2008]), in a way that takes into account their respective strengths and weaknesses, and does not require any change to the predictors themselves. We also compare our methodology against two alternative refinement pipelines based on state-of-the-art sequential prediction methods. Conclusions: The proposed framework is able to improve the performance of the underlying predictors by removing rule violations. We show that different predictors offer complementary advantages, and our method is able to integrate them using non-trivial constraints, generating more consistent predictions. In addition, our framework is fully general, and could in principle be applied to a vast array of heterogeneous predictions without requiring any change to the underlying software. On the other hand, the alternative strategies are more specific and tend to favor one task at the expense of the others, as shown by our experimental evaluation. The ultimate goal of our framework is to seamlessly integrate full prediction suites, such as Distill [BMC Bioinformatics 7:402, 2006] and PredictProtein [Nucleic Acids Res 32:W321-W326, 2004].
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 42
    Publication date: 2014-01-17
    Description: This paper presents the Juste-Neige system for predicting snow height on the ski runs of a resort using agent-based simulation software. The aim of Juste-Neige is to facilitate snow cover management in order to i) reduce the production cost of artificial snow and improve the profit margin for the companies managing the ski resorts; and ii) reduce water and energy consumption, and thus the environmental impact, by producing only the snow needed for a good skiing experience. The software provides maps with the predicted snow heights for the forecast days. On these maps, the areas most exposed to snow erosion are highlighted. The software proceeds in three steps: i) interpolation of snow height measurements with a neural network; ii) local meteorological forecasts for every ski resort; iii) simulation of the impact caused by skiers using a multi-agent system. The software has been evaluated in the ski resort of Verbier in Switzerland and provides predictions that are useful for the management of the ski runs. This paper presents the software in general and the agent-based simulation in particular. Content Type: Journal Article. Pages: 279-299. DOI: 10.3233/MGS-130209. Authors: Marc Revilloud (French Institute of Science and Technology for Transport, Development and Networks, Versailles, France), Jean-Christophe Loubier, Marut Doctor and Michael Ignaz Schumacher (University of Applied Sciences and Arts Western Switzerland, Sierre, Switzerland), Mikhail Kanevski and Vadim Timonin (University of Lausanne, Lausanne, Switzerland). Journal: Multiagent and Grid Systems. Online ISSN: 1875-9076. Print ISSN: 1574-1702. Volume 9, Number 4 / 2013.
    Print ISSN: 1574-1702
    Electronic ISSN: 1875-9076
    Subject: Computer Science
    Published by IOS Press
  • 43
    Publication date: 2014-01-11
    Description: The next generation of the Internet Protocol (IPv6) is currently being introduced in many organizations. However, its security features are still a very novel area of expertise for many practitioners. This study evaluates guidelines for the secure deployment of IPv6, published by the U.S. NIST and the German federal agency BSI, for topicality, completeness and depth. The latter two are scores defined in this paper, based on the Requests for Comments relevant to IPv6, which were categorized, weighted and ranked for importance using an expert survey. Both guides turn out to be of practical value, but each has a specific focus and is directed towards a different audience. Moreover, recommendations for possible improvements are presented. Our results could also support strategic management decisions on security priorities as well as the choice of security guidelines for IPv6 roll-outs.
    Electronic ISSN: 1999-5903
    Subject: Computer Science
    Published by MDPI Publishing
  • 44
    Publication date: 2014-01-14
    Description: Background: Logos are commonly used in molecular biology to provide a compact graphical representation of the conservation pattern of a set of sequences. They render the information contained in sequence alignments or profile hidden Markov models by drawing a stack of letters for each position, where the height of the stack corresponds to the conservation at that position, and the height of each letter within a stack depends on the frequency of that letter at that position. Results: We present a new tool and web server, called Skylign, which provides a unified framework for creating logos for both sequence alignments and profile hidden Markov models. In addition to static image files, Skylign creates a novel interactive logo plot for inclusion in web pages. These interactive logos enable scrolling, zooming, and inspection of underlying values. Skylign can avoid sampling bias in sequence alignments by down-weighting redundant sequences and by combining observed counts with informed priors. It also simplifies the representation of gap parameters, and can optionally scale letter heights based on alternate calculations of the conservation of a position. Conclusion: Skylign is available as a website, a scriptable web service with a RESTful interface, and as a software package for download. Skylign's interactive logos are easily incorporated into a web page with just a few lines of HTML markup. Skylign may be found at http://skylign.org.
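    The letter-stack computation the abstract describes — stack height from per-column conservation, letter height from per-column frequency — can be sketched with the classic information-content scaling. This is an assumption for illustration; Skylign itself also supports HMM-derived probabilities and alternative height calculations.

```python
import math
from collections import Counter

def logo_column_heights(column, alphabet="ACGT"):
    """Letter heights for one alignment column using information-content
    scaling: stack height = log2(|alphabet|) - entropy, letter height =
    frequency * stack height (a sketch, not Skylign's implementation)."""
    counts = Counter(c for c in column if c in alphabet)
    n = sum(counts.values())
    freqs = {a: counts[a] / n for a in alphabet if counts[a]}
    entropy = -sum(p * math.log2(p) for p in freqs.values())
    info = math.log2(len(alphabet)) - entropy   # bits of information
    return {a: p * info for a, p in freqs.items()}
```

A fully conserved DNA column yields a 2-bit stack for its single letter, while a uniform column yields a stack of height zero.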
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 45
    Publication date: 2014-01-15
    Description: Background: Gene set analysis (GSA) is useful in deducing the biological significance of gene lists using a priori defined gene sets such as gene ontology (GO) or pathways. Phenotypic annotation is sparse for human genes, but is far more abundant for model organisms such as mouse, fly, and worm. Often, GSA needs to be done highly interactively by combining or modifying gene lists or inspecting gene-gene interactions in a molecular network. Description: We developed gsGator, a web-based platform for functional interpretation of gene sets with useful features such as cross-species GSA, simultaneous analysis of multiple gene sets, and a fully integrated network viewer for visualizing both GSA results and molecular networks. An extensive set of gene annotation information is amassed, including GO & pathways, genomic annotations, protein-protein interaction, transcription factor-target (TF-target), miRNA targeting, and phenotype information for various model organisms. By combining the functionalities of Set Creator, Set Operator and Network Navigator, users can perform highly flexible and interactive GSA by creating a new gene list from any combination of existing gene sets (intersection, union and difference) or by expanding genes interactively along molecular networks such as protein-protein interaction and TF-target. We also demonstrate the utility of the interactive and cross-species GSA implemented in gsGator by several usage examples for interpreting genome-wide association study (GWAS) results. gsGator is freely available at http://gsGator.ewha.ac.kr. Conclusions: Interactive and cross-species GSA in gsGator greatly extends the scope and utility of GSA, leading to novel insights via conserved functional gene modules across different species.
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 46
    Publication date: 2014-01-15
    Description: Background: Interpretation of binding modes of protein-small ligand complexes from 3D structure data is essential for understanding selective ligand recognition by proteins. It is often performed by visual inspection and sometimes largely depends on a priori knowledge about typical interactions such as hydrogen bonds and pi-pi stacking. Because it can introduce some biases due to scientists' subjective perspectives, more objective viewpoints considering a wide range of interactions are required. Description: In this paper, we present a web server for analyzing protein-small ligand interactions on the basis of patterns of atomic contacts, or "interaction patterns", obtained from the statistical analyses of 3D structures of protein-ligand complexes in our previous study. This server can guide visual inspection by providing information about interaction patterns for each atomic contact in 3D structures. Users can visually investigate which atomic contacts in user-specified 3D structures of protein-small ligand complexes are statistically overrepresented. The server consists of two main components: "Complex Analyzer" and "Pattern Viewer". The former provides a 3D structure viewer with annotations of interacting amino acid residues, ligand atoms, and interacting pairs of these. In the annotations of interacting pairs, the assignment of each contact to an interaction pattern and the statistical preferences of the patterns are presented. The "Pattern Viewer" provides details of each interaction pattern. Users can see visual representations of probability density functions of interactions, and a list of protein-ligand complexes showing similar interactions. Conclusions: Users can interactively analyze protein-small ligand binding modes with statistically determined interaction patterns rather than relying on their a priori knowledge, by using our new web server named GIANT, which is freely available at http://giant.hgc.jp/.
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 47
    Publication date: 2014-01-16
    Description: Background: Different methods have been proposed for analyzing differentially expressed (DE) genes in microarray data. Methods based on statistical tests that incorporate expression level variability are used more commonly than those based on fold change (FC). However, FC-based results are more reproducible and biologically relevant. Results: We propose a new method based on fold change rank ordering statistics (FCROS). We exploit the variation in calculated FC levels using combinatorial pairs of biological conditions in the datasets. A statistic is associated with the ranks of the FC values for each gene, and the resulting probability is used to identify the DE genes within an error level. The FCROS method is deterministic, requires a low computational runtime and also solves the problem of multiple tests which usually arises with microarray datasets. Conclusion: We compared the performance of FCROS with those of other methods using synthetic and real microarray datasets. We found that FCROS is well suited for DE gene identification from noisy datasets when compared with existing FC-based methods.
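    The rank-ordering idea can be illustrated as follows: fold changes are computed for every cross-condition sample pair, genes are ranked within each pair, and extreme mean ranks flag differentially expressed genes. This is a toy sketch of the principle with a crude normal approximation for the probability; the published FCROS method derives its statistic and error levels more carefully.

```python
import numpy as np
from itertools import product
from scipy.stats import rankdata, norm

def fcros_stat(x1, x2):
    """Mean fold-change rank per gene over all cross-condition sample pairs
    (a sketch of the FCROS idea, not the published implementation).
    x1, x2: genes x samples expression matrices for the two conditions."""
    n_genes = x1.shape[0]
    ranks = []
    for i, j in product(range(x1.shape[1]), range(x2.shape[1])):
        fc = x2[:, j] / x1[:, i]              # per-gene fold change for this pair
        ranks.append(rankdata(fc) / n_genes)  # normalised ranks in (0, 1]
    r_bar = np.mean(ranks, axis=0)            # extreme mean rank => DE candidate
    # Crude normal approximation: standardise mean ranks, small p => down-regulated
    z = (r_bar - r_bar.mean()) / r_bar.std(ddof=1)
    return r_bar, norm.cdf(z)
```

Genes whose fold change is consistently extreme across all sample pairings end up with mean ranks near 0 or 1, which is what makes the statistic robust to single-sample outliers.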
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 48
    Publication date: 2014-01-19
    Description: Background: Physiologic signals, such as cardiac interbeat intervals, exhibit complex fluctuations. However, capturing important dynamical properties, including nonstationarities, may not be feasible from conventional time series graphical representations. Methods: We introduce a simple-to-implement visualisation method, termed dynamical density delay mapping (the "D3-Map" technique), that provides an animated representation of a system's dynamics. The method is based on a generalization of conventional two-dimensional (2D) Poincaré plots, which are scatter plots where each data point, x(n), in a time series is plotted against the adjacent one, x(n + 1). First, we divide the original time series, x(n) (n = 1,..., N), into a sequence of segments (windows). Next, for each segment, a three-dimensional (3D) Poincaré surface plot of x(n), x(n + 1), h[x(n),x(n + 1)] is generated, in which the third dimension, h, represents the relative frequency of occurrence of each (x(n),x(n + 1)) point. This 3D Poincaré surface is then chromatised by mapping the relative frequency h values onto a colour scheme. We also generate a colourised 2D contour plot from each time series segment using the same colourmap scheme as for the 3D Poincaré surface. Finally, the original time series graph, the colourised 3D Poincaré surface plot, and its projection as a colourised 2D contour map for each segment are animated to create the full "D3-Map". Results: We first exemplify the D3-Map method using the cardiac interbeat interval time series from a healthy subject during sleeping hours. The animations uncover complex dynamical changes, such as transitions between states, and the relative amount of time the system spends in each state. We also illustrate the utility of the method in detecting hidden temporal patterns in the heart rate dynamics of a patient with atrial fibrillation. The videos, as well as the source code, are made publicly available. Conclusions: Animations based on density delay maps provide a new way of visualising dynamical properties of complex systems not apparent in time series graphs or standard Poincaré plot representations. Trainees in a variety of fields may find the animations useful as illustrations of fundamental but challenging concepts, such as nonstationarity and multistability. For investigators, the method may facilitate data exploration.
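    The h[x(n), x(n+1)] surface described in the Methods — the relative frequency of each delay-map point within a window — reduces to a 2D histogram of successive pairs. A minimal sketch for one window (the function name and bin count are illustrative, not from the paper's released code):

```python
import numpy as np

def poincare_density(x, bins=50):
    """Relative-frequency surface h over the (x(n), x(n+1)) plane for one
    time-series window: the quantity the D3-Map renders as the third
    dimension / colour. Returns h plus the bin edges."""
    pairs = np.column_stack([x[:-1], x[1:]])        # (x(n), x(n+1)) points
    h, xedges, yedges = np.histogram2d(pairs[:, 0], pairs[:, 1], bins=bins)
    return h / h.sum(), xedges, yedges              # normalise to frequencies
```

Sliding a window over the series and recomputing h for each segment gives the sequence of frames that the animation chromatises and plays back.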
    Electronic ISSN: 1472-6947
    Subject: Computer Science, Medicine
    Published by BioMed Central
  • 49
    Publication date: 2014-01-23
    Description: We describe a flow model related to ordinary network flows the same way as stable matchings are related to maximum matchings in bipartite graphs. We prove that there always exists a stable flow and generalize the lattice structure of stable marriages to stable flows. Our main tool is a straightforward reduction of the stable flow problem to stable allocations. For the sake of completeness, we prove the results we need on stable allocations as an application of Tarski's fixed point theorem.
    Electronic ISSN: 1999-4893
    Subject: Computer Science
    Published by MDPI Publishing
  • 50
    Publication date: 2014-01-23
    Description: Background: The interest of the scientific community in investigating the impact of rare variants on complex traits has stimulated the development of novel statistical methodologies for association studies. The fact that many of the recently proposed methods for association studies suffer from low power to identify a genetic association motivates the incorporation of prior knowledge into statistical tests. Results: In this article we propose a methodology to incorporate prior information into the region-based score test. Within our framework, prior information is used to partition variants within a region into several groups, following which asymptotically independent group statistics are constructed and then combined into a global test statistic. Under the null hypothesis the distribution of our test statistic has fewer degrees of freedom than that of the region-based score statistic. Theoretical power comparison, population genetics simulations and results from analysis of the GAW17 sequencing data set suggest that under some scenarios our method may perform as well as or outperform the score test and other competing methods. Conclusions: An approach which uses prior information to improve the power of the region-based score test is proposed. Theoretical power comparison, population genetics simulations and the results of GAW17 data analysis showed that for some scenarios the power of our method is comparable to or higher than that of the score test and other methods.
    Electronic ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 51
    Publication date: 2014-01-23
    Description: Background: Interactive multimedia is an emerging technology that is being used to facilitate interactions between patients and health professionals. The purpose of this review was to identify and evaluate the impact of multimedia interventions (MIs), delivered in the context of paediatric healthcare, in order to inform the development of a MI to promote the communication of dietetic messages with overweight preadolescent children. Of particular interest were the effects of these MIs on child engagement and participation in treatment, and the subsequent effect on health-related treatment outcomes. Methods: An extensive search of 12 bibliographic databases was conducted in April 2012. Studies were included if: one or more child-participant was 7 to 11 years of age; a MI was used to improve health-related behaviour; child-participants were diagnosed with a health condition and were receiving treatment for that condition at the time of the study. Data describing study characteristics and intervention effects on communication, satisfaction, knowledge acquisition, changes in self-efficacy, healthcare utilisation, and health outcomes were extracted and summarised using qualitative and quantitative methods. Results: A total of 14 controlled trials, published between 1997 and 2006, met the selection criteria. Several MIs had the capacity to facilitate engagement between the child and a clinician, but only one sought to utilise the MI to improve communication between the child and health professional. In spite of concerns over the quality of some studies and small study populations, MIs were found useful in educating children about their health, and they demonstrated potential to improve children's health-related self-efficacy, which could make them more able partners in face-to-face communications with health professionals.
Conclusions: The findings of this review suggest that MIs have the capacity to support preadolescent child-clinician communication, but further research in this field is needed. Particular attention should be given to designing appropriate MIs that are clinically relevant.
    Electronic ISSN: 1472-6947
    Subject: Computer Science, Medicine
    Published by BioMed Central
  • 52
    Publication date: 2014-01-23
    Description: A novel learning methodology based on a hybrid mechanism for training interval singleton type-2 Takagi-Sugeno-Kang fuzzy logic systems uses recursive orthogonal least-squares to tune the type-1 consequent parameters and the steepest descent method to tune the interval type-2 antecedent parameters. The proposed hybrid-learning algorithm changes the interval type-2 model parameters adaptively to minimize some criteria function as new information becomes available and to match desired input-output data pairs. Its antecedent sets are type-2 fuzzy sets, its consequent sets are type-1 fuzzy sets, and its inputs are singleton fuzzy numbers without uncertain standard deviations. As reported in the literature, the performance indices of hybrid models have proved to be better than those of the individual training mechanisms used alone. Experiments were carried out involving the application of hybrid interval type-2 Takagi-Sugeno-Kang fuzzy logic systems for modeling and prediction of the scale-breaker entry temperature in a hot strip mill for three different types of coils. The results demonstrate how the interval type-2 fuzzy system learns from selected input-output data pairs and improves its performance as hybrid training progresses. Content Type: Journal Article. Pages: 125-135. DOI: 10.3233/HIS-130188. Authors: Gerardo M. Méndez and David S. González (Centro de Manufactura Avanzada, Corporación Mexicana de Investigación en Materiales SA de CV – COMIMSA, Saltillo, Coah., México), J. Cruz Martinez and F. Javier Rendón-Espinoza (Departamento de Economía y Administración, Instituto Tecnológico de Nuevo León, Cd. Guadalupe, N.L., México). Journal: International Journal of Hybrid Intelligent Systems. Online ISSN: 1875-8819. Print ISSN: 1448-5869. Volume 11, Number 2 / 2014.
    Print ISSN: 1448-5869
    Electronic ISSN: 1875-8819
    Subject: Computer Science
    Published by IOS Press
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 53
Publication date: 2014-01-23
Description: Obstacle detection is a fundamental issue in robot navigation, and several methods have been proposed for this problem. In this paper, we propose a new approach to detecting obstacles in depth-camera streams. The proposed approach consists of three stages. First, a preprocessing stage removes noise. Second, the different depths in a frame are clustered with the Interval Type-2 Fuzzy Subtractive Clustering algorithm. Third, the objects of interest are detected from the obtained clusters. In addition, we improve the Interval Type-2 Fuzzy Subtractive Clustering algorithm to reduce its running time: in theory it is at least 3700 times faster than the original, and approximately 980100 times in practice on our depth frames. The results obtained on these frames demonstrate that the retrieved distance from the camera to objects is accurate enough for indoor robot navigation problems. Content Type Journal Article Pages 97-107 DOI 10.3233/HIS-130186 Authors Mau Uyen Nguyen, Department of Information Systems, Le Quy Don Technical University, Hanoi, Vietnam Long Thanh Ngo, Department of Information Systems, Le Quy Don Technical University, Hanoi, Vietnam Thanh Tinh Dao, Department of Information Systems, Le Quy Don Technical University, Hanoi, Vietnam Journal International Journal of Hybrid Intelligent Systems Online ISSN 1875-8819 Print ISSN 1448-5869 Journal Volume Volume 11 Journal Issue Volume 11, Number 2 / 2014
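For orientation, here is the classic type-1 subtractive clustering that the interval type-2 variant generalizes (after Chiu's formulation): each point's potential is a sum of Gaussian kernels over the data, the highest-potential point becomes a centre, and the neighbourhood of each chosen centre has its potential subtracted. The radius and acceptance threshold below are illustrative.

```python
import numpy as np

def subtractive_clustering(X, ra=0.5, accept=0.5):
    """Type-1 subtractive clustering: pick centres by maximum potential,
    subtracting each chosen centre's influence before the next pick.
    Stops when the best remaining potential drops below `accept` times
    the first centre's potential (a simplified termination rule)."""
    alpha = 4.0 / ra ** 2
    beta = 4.0 / (1.5 * ra) ** 2
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    P = np.exp(-alpha * d2).sum(1)                        # initial potentials
    centres = []
    p_first = P.max()
    while True:
        k = P.argmax()
        if P[k] < accept * p_first:
            break
        centres.append(X[k])
        # revise potentials around the newly selected centre
        P = P - P[k] * np.exp(-beta * ((X - X[k]) ** 2).sum(-1))
    return np.array(centres)

# two well-separated blobs should yield exactly two centres
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
centres = subtractive_clustering(X, ra=1.0)
```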
Print ISSN: 1448-5869
Online ISSN: 1875-8819
Subject: Computer Science
Published by IOS Press
  • 54
Publication date: 2014-01-23
Description: In this paper we study L-fuzzy proximity spaces, where L is a completely distributive lattice. We investigate the level decomposition of an L-fuzzy proximity on X and the corresponding L-fuzzy proximity continuous maps. In addition, we establish representation theorems for L-fuzzy proximities on X. Content Type Journal Article Pages 137-144 DOI 10.3233/HIS-130189 Authors M. El-Dardery, Department of Mathematics, Faculty of Science, Fayoum University, Fayoum, Egypt J. Zhang, College of Science, North China University of Technology, Beijing, China Journal International Journal of Hybrid Intelligent Systems Online ISSN 1875-8819 Print ISSN 1448-5869 Journal Volume Volume 11 Journal Issue Volume 11, Number 2 / 2014
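For orientation (standard background, not taken from the paper): the classical, crisp notion being generalized is a proximity relation δ satisfying the Efremovič axioms; L-fuzzy proximity replaces this two-valued relation by an L-valued one.

```latex
% Efremovic axioms for a crisp proximity \delta on a set X
\begin{align*}
&\text{(P1)}\quad A\,\delta\,B \implies B\,\delta\,A\\
&\text{(P2)}\quad (A\cup B)\,\delta\,C \iff A\,\delta\,C \text{ or } B\,\delta\,C\\
&\text{(P3)}\quad A\,\delta\,B \implies A\neq\varnothing \text{ and } B\neq\varnothing\\
&\text{(P4)}\quad A\cap B\neq\varnothing \implies A\,\delta\,B\\
&\text{(P5)}\quad A \mathrel{\not\delta} B \implies \exists\, C\subseteq X:\;
  A \mathrel{\not\delta} C \text{ and } (X\setminus C) \mathrel{\not\delta} B
\end{align*}
```

Roughly, a level decomposition then relates an L-valued proximity to the family of its cuts at each lattice element; the paper makes this precise for completely distributive L.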
Print ISSN: 1448-5869
Online ISSN: 1875-8819
Subject: Computer Science
Published by IOS Press
  • 55
Publication date: 2014-01-23
Description: This paper revisits a class of recently proposed so-called invariant manifold methods for zero finding of ill-posed problems, showing that they can be profitably viewed as homotopy methods in which the homotopy parameter is interpreted as a learning parameter. Moreover, it is shown that this learning parameter can be chosen in a natural manner via a control Liapunov function (CLF) approach. From this viewpoint, maintaining manifold invariance is equivalent to ensuring that the CLF satisfies a certain ordinary differential equation, involving the learning parameter, that allows an estimate of the rate of convergence. To illustrate this approach, algorithms recently proposed using the invariant manifold approach are rederived, via CLFs, in a unified manner. Adaptive regularization parameters for solving linear algebraic ill-posed problems are also proposed. This paper also shows that discretizations of the ODEs for the zero-finding problem, together with the different adaptive choices of the regularization parameter, yield iterative methods for linear systems, which are likewise derived using the Liapunov optimizing control (LOC) method. Content Type Journal Article Pages 109-123 DOI 10.3233/HIS-130187 Authors Fernando Pazos, Department of Electrical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil Amit Bhaya, Department of Electrical Engineering, Federal University of Rio de Janeiro, Rio de Janeiro, Brazil Journal International Journal of Hybrid Intelligent Systems Online ISSN 1875-8819 Print ISSN 1448-5869 Journal Volume Volume 11 Journal Issue Volume 11, Number 2 / 2014
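As a sketch of this viewpoint (a generic CLF construction, not necessarily the authors' exact equations): for the zero-finding problem F(x) = 0, take the squared residual as the control Liapunov function and a continuous-time method whose gain μ(t) plays the role of the learning parameter,

```latex
% V is the control Liapunov function, mu(t) the learning parameter
V(x) \;=\; \tfrac{1}{2}\,\lVert F(x)\rVert^{2},
\qquad
\dot{x} \;=\; -\,\mu(t)\, J(x)^{\mathsf T} F(x),
\qquad
J(x) \;=\; \frac{\partial F}{\partial x},

% along trajectories the CLF decays at a rate set by mu(t):
\dot{V} \;=\; F(x)^{\mathsf T} J(x)\,\dot{x}
\;=\; -\,\mu(t)\,\bigl\lVert J(x)^{\mathsf T} F(x)\bigr\rVert^{2}
\;\le\; 0 .
```

In this reading, choosing μ(t) fixes the decay rate of V, i.e. the rate of convergence, which is how the learning parameter enters the convergence estimate.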
Print ISSN: 1448-5869
Online ISSN: 1875-8819
Subject: Computer Science
Published by IOS Press
  • 56
Publication date: 2014-01-23
Description: In this paper, Particle Swarm Optimization with a Constriction Factor and Inertia Weight Approach is hybridized with a Wavelet Mutation strategy (PSOCFIWA-WM) for the optimal design of linear-phase FIR filters. A real-coded genetic algorithm (RGA), particle swarm optimization (PSO) and particle swarm optimization with constriction factor and inertia weight (PSOCFIWA) have also been adopted for comparison. PSOCFIWA-WM incorporates a new definition of swarm updating in PSOCFIWA with the help of wavelet-based mutation. Wavelet mutation enhances the ability of PSOCFIWA to explore the multidimensional solution space more effectively. In this design approach, the filter length, pass band and stop band edge frequencies, and feasible pass band and stop band ripple sizes are specified. A comparison of simulation results reveals the optimization superiority of the proposed technique over the other optimization techniques for the design of FIR low pass (LP), high pass (HP), band pass (BP) and band stop (BS) filters. Content Type Journal Article Pages 81-96 DOI 10.3233/HIS-130185 Authors S.K. Saha, Department of Electronics and Communication Engineering, National Institute of Technology, Durgapur, West Bengal, India R. Kar, Department of Electronics and Communication Engineering, National Institute of Technology, Durgapur, West Bengal, India D. Mandal, Department of Electronics and Communication Engineering, National Institute of Technology, Durgapur, West Bengal, India S.P. Ghoshal, Department of Electrical Engineering, National Institute of Technology, Durgapur, West Bengal, India Journal International Journal of Hybrid Intelligent Systems Online ISSN 1875-8819 Print ISSN 1448-5869 Journal Volume Volume 11 Journal Issue Volume 11, Number 2 / 2014
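The velocity update at the heart of PSOCFIWA combines Clerc's constriction factor with a linearly decreasing inertia weight. Below is a sketch of that update; the wavelet mutation is approximated by a Morlet-shaped perturbation whose amplitude shrinks over the run, the sphere objective stands in for the paper's FIR ripple-error cost, and all constants are illustrative.

```python
import numpy as np

def sphere(x):
    """Simple stand-in objective (the paper minimizes filter ripple error)."""
    return float((x ** 2).sum())

def pso_cfiwa(f, dim=5, n=20, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    c1 = c2 = 2.05
    phi = c1 + c2
    chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))  # Clerc's constriction factor
    lo, hi = -5.0, 5.0
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros((n, dim))
    pbest = x.copy()
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters                 # linearly decreasing inertia weight
        r1, r2 = rng.random((2, n, dim))
        v = chi * (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        x = np.clip(x + v, lo, hi)
        if rng.random() < 0.2:                    # Morlet-shaped mutation, shrinking over time
            i = rng.integers(n)
            u = rng.uniform(-2.5, 2.5, dim)
            x[i] = np.clip(x[i] + np.exp(-u ** 2 / 2) * np.cos(5 * u)
                           * (hi - lo) * (1 - t / iters) * 0.1, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

best, val = pso_cfiwa(sphere)
```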
Print ISSN: 1448-5869
Online ISSN: 1875-8819
Subject: Computer Science
Published by IOS Press
  • 57
Publication date: 2014-01-24
Description: Background: An ion mobility (IM) spectrometer coupled with a multi-capillary column (MCC) measures volatile organic compounds (VOCs) in the air or in exhaled breath. This technique is utilized in several biotechnological and medical applications. Each peak in an MCC/IM measurement represents a certain compound, which may be known or unknown. For clustering and classification of measurements, the raw data matrix must be reduced to a set of peaks. Each peak is described by its coordinates (retention time in the MCC and reduced inverse ion mobility) and shape (signal intensity, further shape parameters). This fundamental step is referred to as peak extraction. It is the basis for identifying discriminating peaks, and hence putative biomarkers, between two classes of measurements, such as a healthy control group and a group of patients with a confirmed disease. Current state-of-the-art peak extraction methods require human interaction, such as hand-picking approximate peak locations, assisted by a visualization of the data matrix. In a high-throughput context, however, it is preferable to have robust methods for fully automated peak extraction. Results: We introduce PEAX, a modular framework for automated peak extraction. The framework consists of several steps in a pipeline architecture. Each step performs a specific sub-task and can be instantiated by different methods implemented as modules. We provide open-source software for the framework and several modules for each step, along with an interface that allows easy extension by new modules. Combining the modules in all reasonable ways leads to a large number of peak extraction methods. We evaluate all combinations using intrinsic error measures and by comparing the resulting peak sets with an expert-picked one. Conclusions: Our software PEAX is able to automatically extract peaks from MCC/IM measurements within a few seconds. The automatically obtained results keep up with those provided by current state-of-the-art peak extraction methods. This opens the door to high-throughput applications in the MCC/IM field. Our software is available at http://www.rahmannlab.de/research/ims.
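The pipeline architecture can be sketched as named step slots filled by interchangeable modules; the step names and toy modules below are illustrative, not PEAX's actual interfaces.

```python
from dataclasses import dataclass
from typing import Any, Callable

Step = Callable[[Any], Any]

@dataclass
class Pipeline:
    """Each slot holds one module; any module with the same callable
    interface can be swapped in, so all combinations can be enumerated."""
    denoise: Step
    smooth: Step
    pick: Step

    def run(self, matrix):
        return self.pick(self.smooth(self.denoise(matrix)))

# toy modules operating on a 1-D list of intensities
drop_low  = lambda m: [v if v >= 2 else 0 for v in m]   # crude noise removal
identity  = lambda m: m                                 # no-op smoothing
local_max = lambda m: [i for i in range(1, len(m) - 1)  # indices of local maxima
                       if m[i] > m[i - 1] and m[i] >= m[i + 1]]

pipe = Pipeline(denoise=drop_low, smooth=identity, pick=local_max)
peaks = pipe.run([0, 1, 5, 3, 0, 4, 9, 4, 1])
```

Enumerating all module combinations (e.g. with `itertools.product` over the candidate modules for each slot) yields the space of peak extraction methods that the evaluation sweeps over.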
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 58
Publication date: 2014-03-12
Description: Contributing reviewers: The editors of BMC Bioinformatics would like to thank all our reviewers who contributed their time to the journal in Volume 14 (2013).
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 59
Publication date: 2014-03-13
Description: Background: Mandatory deposit of raw microarray data files for public access, prior to study publication, provides significant opportunities to conduct new bioinformatics analyses within and across multiple datasets. Analysis of raw microarray data files (e.g. Affymetrix .cel files) can be time consuming and complex, and requires fundamental computational and bioinformatics skills. The development of analytical workflows to automate these tasks simplifies the processing of, improves the efficiency of, and serves to standardize multiple and sequential analyses. Once installed, workflows facilitate the tedious steps required to run rapid intra- and inter-dataset comparisons. Results: We developed a workflow to facilitate and standardize Meta-Analysis of Affymetrix Microarray Data analysis (MAAMD) in Kepler. Two freely available stand-alone software tools, R and AltAnalyze, were embedded in MAAMD. The inputs of MAAMD are user-editable csv files, which contain sample information and parameters describing the locations of input files and required tools. MAAMD was tested by analyzing 4 different GEO datasets from mice and Drosophila. MAAMD automates data downloading, data organization, data quality control assessment, differential gene expression analysis, clustering analysis, pathway visualization, gene-set enrichment analysis, and cross-species orthologous-gene comparisons. MAAMD was utilized to identify gene orthologues responding to hypoxia or hyperoxia in both mice and Drosophila. The entire set of analyses for the 4 datasets (34 total microarrays) finished in about one hour. Conclusions: MAAMD saves time, minimizes the required computer skills, and offers a standardized procedure for users to analyze microarray datasets and make new intra- and inter-dataset comparisons.
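The CSV-driven design can be sketched as a sample sheet feeding a fixed sequence of steps; the column names, step names and file names below are hypothetical, not MAAMD's actual schema.

```python
import csv
import io

# hypothetical sample sheet in the spirit of MAAMD's user-editable csv inputs
sample_sheet = io.StringIO(
    "sample,species,condition,cel_file\n"
    "s1,mouse,hypoxia,GSM001.cel\n"
    "s2,mouse,control,GSM002.cel\n"
)

# hypothetical ordered analysis steps driven by the sheet
steps = ["download", "qc", "differential_expression", "pathway_visualization"]

def run_workflow(sheet, steps):
    """Parse the sample sheet and apply every step to every sample,
    returning the parsed samples and an execution log."""
    samples = list(csv.DictReader(sheet))
    log = []
    for step in steps:
        for s in samples:
            log.append(f"{step}:{s['sample']}")
    return samples, log

samples, log = run_workflow(sample_sheet, steps)
```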
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 60
Publication date: 2014-03-15
Description: Background: Modeling high-dimensional data involving thousands of variables is particularly important for gene expression profiling experiments; nevertheless, it remains a challenging task. One of the challenges is to implement an effective method for selecting a small set of relevant genes buried in high-dimensional irrelevant noise. RELIEF is a popular and widely used approach for feature selection owing to its low computational cost and high accuracy. However, RELIEF-based methods suffer from instability, especially in the presence of noisy and/or high-dimensional outliers. Results: We propose an innovative feature weighting algorithm, called LHR, to select informative genes from highly noisy data. LHR is based on RELIEF for feature weighting using classical margin maximization. The key idea of LHR is to estimate the feature weights through local approximation rather than global measurement, which is typically used in existing methods. The weights obtained by our method are very robust against degradation from noisy features, even in vast dimensions. To demonstrate the performance of our method, extensive experiments involving classification tests were carried out on both synthetic and real microarray benchmark datasets by combining the proposed technique with standard classifiers, including the support vector machine (SVM), k-nearest neighbor (KNN), hyperplane k-nearest neighbor (HKNN), linear discriminant analysis (LDA) and naive Bayes (NB). Conclusion: Experiments on both synthetic and real-world datasets demonstrate the superior performance of the proposed feature selection method combined with supervised learning in three aspects: 1) high classification accuracy, 2) excellent robustness to noise and 3) good stability across various classification algorithms.
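For background, here is the classic RELIEF weighting that LHR builds on (a binary-class baseline sketch, not the authors' LHR code): for a sampled instance, each feature's weight grows with its distance to the nearest miss (other class) and shrinks with its distance to the nearest hit (same class).

```python
import numpy as np

def relief(X, y, n_iter=100, seed=0):
    """Classic RELIEF feature weighting for binary labels."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    span = X.max(0) - X.min(0)              # per-feature range for normalization
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(1)      # L1 distances to the sampled instance
        dist[i] = np.inf                    # exclude the instance itself
        hit = np.where(y == y[i], dist, np.inf).argmin()
        miss = np.where(y != y[i], dist, np.inf).argmin()
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

# an informative feature should outscore a pure-noise feature
rng = np.random.default_rng(2)
y = np.array([0] * 50 + [1] * 50)
X = np.column_stack([y + rng.normal(0.0, 0.1, 100),   # tracks the class label
                     rng.uniform(0.0, 1.0, 100)])     # irrelevant noise
weights = relief(X, y)
```

LHR's departure from this baseline, per the abstract, is to estimate such weights through local approximation rather than this kind of global measurement.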
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 61
Publication date: 2014-03-15
Description: Background: Detecting transposition events of transposable elements (TEs) in a genome using short reads from next-generation sequencing (NGS) is difficult, because the nucleotide sequence of a TE is itself repetitive, making it hard for NGS alignment programs to identify the locations of insertions. We have developed a program with a new algorithm to detect transpositions from NGS data. Results: In the process of tool development, we used NGS data of derivative lines (ttm2 and ttm5) of japonica rice cv. Nipponbare, regenerated through cell culture. The new program, called Transposon Insertion Finder (TIF), was applied to detect the de novo transpositions of Tos17 in the regenerated lines. TIF searched 300 million reads of a line within 20 min, identifying 4 and 12 de novo transpositions in the ttm2 and ttm5 lines, respectively. All of the transpositions were confirmed by PCR/electrophoresis and sequencing. Using the program, we also detected new insertions of the P-element from NGS data of Drosophila melanogaster. Conclusion: TIF can find transpositions of any element, provided that target site duplications (TSDs) are generated by its transpositions.
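A toy illustration of the TSD idea with made-up sequences (not TIF's implementation): junction reads carry genome sequence abutting a transposon end, and the head-side and tail-side genomic coordinates overlap by exactly the TSD length, which pinpoints the insertion site.

```python
# made-up reference and transposon terminal sequences
GENOME = "ACGTACGGATTACAGGCATGCAAGCTTGGCA"
TE_HEAD, TE_TAIL = "TGTTAAA", "TTTAACA"

def junction_flanks(reads, k=8):
    """Collect k-bp genomic flanks adjacent to the transposon ends."""
    heads, tails = [], []
    for r in reads:
        if TE_HEAD in r:                      # genome | TE-head junction
            left = r.split(TE_HEAD)[0]
            if len(left) >= k:
                heads.append(left[-k:])
        if TE_TAIL in r:                      # TE-tail | genome junction
            right = r.split(TE_TAIL)[-1]
            if len(right) >= k:
                tails.append(right[:k])
    return heads, tails

def insertion_site(genome, heads, tails):
    """An insertion is called where head- and tail-side flanks overlap
    by a plausible TSD length."""
    for h in heads:
        for t in tails:
            i, j = genome.find(h), genome.find(t)
            if i < 0 or j < 0:
                continue
            tsd = (i + len(h)) - j            # overlap length = TSD size
            if 2 <= tsd <= 10:
                return j, i + len(h), genome[j:i + len(h)]
    return None

# simulated junction reads for an insertion with a 5-bp TSD at GENOME[12:17]
reads = [GENOME[4:17] + TE_HEAD, TE_TAIL + GENOME[12:24], GENOME[0:12]]
heads, tails = junction_flanks(reads)
site = insertion_site(GENOME, heads, tails)
```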
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 62
Publication date: 2014-05-06
Description: Stochastic diffusion search (SDS) is a multi-agent global optimisation technique based on the behaviour of ants, rooted in the partial evaluation of an objective function and direct communication between agents. Standard SDS, the fundamental algorithm at work in all SDS processes, is presented here. Parameter estimation is the task of suitably fitting a model to given data; some form of parameter estimation is a key element of many computer vision processes. Here, the task of hyperplane estimation in many dimensions is investigated. Following RANSAC (random sample consensus), a widely used technique that is standard for many parameter estimation problems, increasingly sophisticated data-driven forms of SDS are developed. The performance of these SDS algorithms and of RANSAC is analysed and compared for a hyperplane estimation task. SDS is shown to perform similarly to RANSAC, with potential for tuning to particular search problems for improved results.
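For reference, the RANSAC baseline against which SDS is compared can be sketched for the two-dimensional case (a line being the hyperplane of the plane): repeatedly fit a hypothesis to a random minimal sample and keep the one with the largest consensus set. The constants are illustrative.

```python
import numpy as np

def ransac_line(pts, iters=200, tol=0.1, seed=0):
    """Minimal RANSAC for 2-D line (hyperplane) estimation."""
    rng = np.random.default_rng(seed)
    best_count, best_model = 0, None
    for _ in range(iters):
        i, j = rng.choice(len(pts), 2, replace=False)   # minimal sample
        p, q = pts[i], pts[j]
        d = q - p
        n = np.array([-d[1], d[0]])                     # normal to the line
        norm = np.linalg.norm(n)
        if norm == 0:
            continue
        n = n / norm
        dist = np.abs((pts - p) @ n)                    # point-to-line distances
        count = int((dist < tol).sum())                 # consensus set size
        if count > best_count:
            best_count, best_model = count, (p, n)
    return best_model, best_count

# 80 inliers near y = 2x plus 20 uniform outliers
xs = np.linspace(0.0, 10.0, 80)
rng = np.random.default_rng(3)
inliers = np.column_stack([xs, 2.0 * xs + rng.normal(0.0, 0.01, 80)])
outliers = rng.uniform([0.0, 0.0], [10.0, 20.0], (20, 2))
pts = np.vstack([inliers, outliers])
(p0, n_hat), n_inliers = ransac_line(pts)
d_true = np.array([1.0, 2.0]) / 5 ** 0.5    # true line direction
```

The data-driven SDS variants in the paper replace this full-consensus scoring with partial evaluation of the objective by communicating agents.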
Online ISSN: 1999-4893
Subject: Computer Science
Published by MDPI Publishing
  • 63
Publication date: 2014-04-30
Description: Background: In silico biology is increasingly important and is often based on public data. While the problem of contamination is well recognised in microbiology labs, the corresponding problem of database corruption has received less attention. Results: Mapping 50 billion next-generation DNA sequences from The Thousand Genome Project against published genomes reveals many that match one or more Mycoplasma but are not included in the reference human genome GRCh37.p5. Many of these are of low quality, but NCBI BLAST searches confirm that some high-quality, high-entropy sequences match Mycoplasma but no human sequences. Conclusions: It appears that at least 7% of 1000G samples are contaminated.
Online ISSN: 1756-0381
Subject: Biology, Computer Science
Published by BioMed Central
  • 64
Publication date: 2014-05-04
Description: Background: The continued democratization of DNA sequencing has sparked a new wave of development of genome assembly and assembly validation methods. As individual research labs, rather than centralized centers, begin to sequence the majority of new genomes, it is important to establish best practices for genome assembly. However, recent evaluations such as GAGE and the Assemblathon have concluded that there is no single best approach to genome assembly. Instead, it is preferable to generate multiple assemblies and validate them to determine which is most useful for the desired analysis; this is a labor-intensive process that is often infeasible. Results: To encourage best practices supported by the community, we present iMetAMOS, an automated ensemble assembly pipeline; iMetAMOS encapsulates the process of running, validating, and selecting a single assembly from multiple assemblies. iMetAMOS packages several leading open-source tools into a single binary that automates parameter selection and execution of multiple assemblers, scores the resulting assemblies based on multiple validation metrics, and annotates the assemblies for genes and contaminants. We demonstrate the utility of the ensemble process on 225 previously unassembled Mycobacterium tuberculosis genomes as well as a Rhodobacter sphaeroides benchmark dataset. On these real data, iMetAMOS reliably produces validated assemblies and identifies potential contamination without user intervention. In addition, intelligent parameter selection produces assemblies of R. sphaeroides comparable to or exceeding the quality of those from the GAGE-B evaluation, affecting the relative ranking of some assemblers. Conclusions: Ensemble assembly with iMetAMOS provides users with multiple, validated assemblies for each genome. Although computationally limited to small or mid-sized genomes, this approach is the most effective and reproducible means for generating high-quality assemblies and enables users to select an assembly best tailored to their specific needs.
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 65
Publication date: 2014-05-03
Description: Background: Computational discovery of microRNAs (miRNA) is based on pre-determined sets of features from miRNA precursors (pre-miRNA). Some feature sets are composed of sequence-structure patterns commonly found in pre-miRNAs, while others are a combination of more sophisticated RNA features. In this work, we analyze the discriminant power of seven feature sets, which are used in six pre-miRNA prediction tools. The analysis is based on the classification performance achieved with these feature sets for the training algorithms used in these tools. We also evaluate feature discrimination through the F-score and feature importance in the induction of random forests. Results: Small or non-significant differences were found among the estimated classification performances of classifiers induced using sets with diversification of features, despite the wide differences in their dimension. Inspired by these results, we obtained a lower-dimensional feature set, which achieved a sensitivity of 90% and a specificity of 95%. These estimates are within 0.1% of the maximal values obtained with any feature set (SELECT, Section "Results and discussion") while being 34 times faster to compute. Even compared to another feature set (FS2, see Section "Results and discussion"), which is the computationally least expensive feature set from the literature that performs within 0.1% of the maximal values, it is 34 times faster to compute. The results obtained by the tools used as references in the experiments carried out showed that five out of these six tools have lower sensitivity or specificity. Conclusion: In miRNA discovery the number of putative miRNA loci is on the order of millions. Analysis of putative pre-miRNAs using a computationally expensive feature set would be wasteful or even infeasible for large genomes. In this work, we propose a relatively inexpensive feature set and explore most of the learning aspects implemented in current ab-initio pre-miRNA prediction tools, which may lead to the development of efficient ab-initio pre-miRNA discovery tools. The material to reproduce the main results from this paper can be downloaded from http://bioinformatics.rutgers.edu/Static/Software/discriminant.tar.gz.
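The F-score used above for feature discrimination can be sketched as follows (the common Fisher-style definition: between-class separation of the feature means over within-class variance; a sketch, not the authors' code).

```python
import numpy as np

def f_score(X, y):
    """Per-feature F-score for binary labels: larger means the feature
    separates the two classes better relative to its spread."""
    pos, neg = X[y == 1], X[y == 0]
    m, mp, mn = X.mean(0), pos.mean(0), neg.mean(0)
    num = (mp - m) ** 2 + (mn - m) ** 2          # between-class separation
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)  # within-class variance
    return num / den

# a class-separated feature should outscore a noise feature
rng = np.random.default_rng(4)
y = np.array([0] * 40 + [1] * 40)
X = np.column_stack([2.0 * y + rng.normal(0.0, 0.1, 80),  # separated by class
                     rng.normal(0.0, 1.0, 80)])           # pure noise
scores = f_score(X, y)
```

Ranking features by such a score, then keeping only the top few, is one route to the kind of lower-dimensional feature set the abstract describes.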
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 66
Publication date: 2014-05-07
Description: Background: It is currently accepted that the perturbation of complex intracellular networks, rather than the dysregulation of a single gene, is the basis for phenotypical diversity. High-throughput gene expression data allow investigation of changes in gene expression profiles among different conditions. Recently, many efforts have been made to identify which biological pathways are perturbed, given a list of differentially expressed genes (DEGs). In order to understand these mechanisms, it is necessary to unveil the variation of genes in relation to each other, considering the different phenotypes. In this paper, we illustrate a pipeline based on Structural Equation Modeling (SEM) that investigates pathway modules, considering not only deregulated genes but also the connections between the perturbed ones. Results: The procedure was tested on microarray experiments for two neurological diseases: frontotemporal lobar degeneration with ubiquitinated inclusions (FTLD-U) and multiple sclerosis (MS). Starting from DEGs and dysregulated biological pathways, a model for each pathway was generated using database information from STRING and KEGG, in order to specify how the DEGs are connected in a causal structure. Subsequently, SEM analysis tested whether pathways differ globally, between groups, and for specific path relationships. The results confirmed the importance of certain genes in the analyzed diseases and unveiled which connections among them are modified. Conclusions: We propose a framework to perform differential gene expression analysis on microarray data based on SEM, which is able to: 1) find relevant genes and perturbed biological pathways; 2) investigate putative sub-pathway models based on the concept of a disease module; 3) test and improve the generated models; 4) identify a differential expression level of one gene, and a differential connection between two genes. This could shed light not only on the mechanisms affecting variations in gene expression, but also on the causes of gene-gene relationship modifications in diseased phenotypes.
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 67
Publication date: 2014-03-20
Description: Background: Chromothripsis, a newly discovered type of complex genomic rearrangement, has been implicated in the evolution of several types of cancers. To date, it has been described in bone cancer, SHH-medulloblastoma and acute myeloid leukemia, amongst others; however, there are still no formal or automated methods for detecting or annotating it in high-throughput sequencing data. As such, findings of chromothripsis are difficult to compare and many cases likely escape detection altogether. Results: We introduce ShatterProof, a software tool for detecting and quantifying chromothriptic events. ShatterProof takes structural variation calls (translocations, copy-number variations, short insertions and loss of heterozygosity) produced by any algorithm and, using an operational definition of chromothripsis, performs robust statistical tests to accurately predict the presence and location of chromothriptic events. Validation of our tool was conducted using clinical data sets, including matched normal, prostate cancer samples, in addition to the colorectal cancer and SCLC data sets used in the original description of chromothripsis. Conclusions: ShatterProof is computationally efficient, having low memory requirements and near-linear computation time. This allows it to become a standard component of sequencing analysis pipelines, enabling researchers to routinely and accurately assess samples for chromothripsis. Source code and documentation can be found at http://search.cpan.org/~sgovind/Shatterproof.
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 68
Publication date: 2014-03-20
Description: Background: Metagenomics, based on culture-independent sequencing, is a well-suited approach to provide insights into the composition, structure and dynamics of environmental viral communities. Following recent advances in sequencing technologies, new challenges arise for existing bioinformatic tools dedicated to viral metagenome (i.e. virome) analysis, as (i) the number of viromes is rapidly growing and (ii) large genomic fragments can now be obtained by assembling the huge amount of sequence data generated for each metagenome. Results: To face these challenges, a new version of Metavir was developed. First, all Metavir tools have been adapted to support comparative analysis of viromes in order to improve the analysis of multiple datasets. In addition to the sequence comparison previously provided, viromes can now be compared through their k-mer frequencies, their taxonomic compositions, recruitment plots and phylogenetic trees containing sequences from different datasets. Second, a new section has been specifically designed to handle assembled viromes made of thousands of large genomic fragments (i.e. contigs). This section includes an annotation pipeline for uploaded viral contigs (gene prediction, similarity search against reference viral genomes and protein domains) and an extensive comparison between contigs and reference genomes. Contigs and their annotations can be explored on the website through specifically developed dynamic genomic maps and interactive networks. Conclusions: The new features of Metavir 2 allow users to explore and analyze viromes composed of raw reads or assembled fragments through a set of adapted tools and a user-friendly interface.
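Comparing viromes through k-mer frequencies can be sketched as follows; the cosine similarity is an illustrative choice of comparison metric, not necessarily the one Metavir uses.

```python
import math
from collections import Counter

def kmer_profile(seqs, k=4):
    """Normalized k-mer frequency profile of a set of sequences."""
    counts = Counter()
    for s in seqs:
        for i in range(len(s) - k + 1):
            counts[s[i:i + k]] += 1
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p.get(x, 0.0) * q.get(x, 0.0) for x in set(p) | set(q))
    norm_p = math.sqrt(sum(v * v for v in p.values()))
    norm_q = math.sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

# toy "viromes": lists of read/contig sequences
virome_a = ["ACGTACGTAC", "GGGTACGTTT"]
virome_b = ["TTTTTTTTTT"]
pa, pb = kmer_profile(virome_a), kmer_profile(virome_b)
```

A virome compared with itself scores 1, while viromes sharing no k-mers score 0, so such profiles support dataset-versus-dataset comparison without any alignment.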
Online ISSN: 1471-2105
Subject: Biology, Computer Science
Published by BioMed Central
  • 69
Publication date: 2014-03-20
Description: Background: An increasing number of older adults drive automobiles. Given that the prevalence of dementia is rising, it is necessary to address the issue of driving retirement. The purpose of this study is to evaluate how a self-administered decision aid contributed to decision making about driving retirement by individuals living with dementia. The primary outcome measure in this study was decisional conflict. Knowledge, decision, satisfaction with decision, booklet use and booklet acceptability were the secondary outcome measures. Methods: A mixed-methods approach was adopted. Drivers with dementia were recruited from an Aged Care clinic and a Primary Care center in NSW, Australia. Telephone surveys were conducted before and after participants read the decision aid. Results: Twelve participants were recruited (mean age 75, SD 6.7). The primary outcome measure, decisional conflict, improved following use of the decision aid. Most participants felt that the decision aid: (i) was balanced; (ii) presented information well; and (iii) helped them decide about driving. In addition, mean knowledge scores improved after booklet use. Conclusions: This decision aid shows promise as an acceptable, useful and low-cost tool for drivers with dementia. A self-administered decision aid can be used to assist individuals with dementia to decide about driving retirement. A randomized controlled trial is underway to evaluate the effectiveness of the tool.
Online ISSN: 1472-6947
Subject: Computer Science, Medicine
Published by BioMed Central
  • 70
Publication date: 2014-03-20
Description: Background: Emerging developments in nanomedicine allow the development of genome-based technologies for non-invasive and individualised screening for diseases such as colorectal cancer. The main objective of this study was to measure user preferences for colorectal cancer screening using a nanopill. Methods: A discrete choice experiment was used to estimate the preferences for five competing diagnostic techniques, including the nanopill and iFOBT. Alternative screening scenarios were described using five attributes: preparation involved, sensitivity, specificity, complication rate and testing frequency. Fourteen random and two fixed choice tasks, each consisting of three alternatives, were offered to 2225 individuals. Data were analysed using the McFadden conditional logit model. Results: Thirteen hundred and fifty-six respondents completed the questionnaire. The most important attributes (and preferred levels) were the screening technique (nanopill), sensitivity (100%) and preparation (no preparation). Stated screening uptake for the nanopill was 79%, compared to 76% for iFOBT. In the case of screening with the nanopill, the percentage of people preferring not to be screened would be reduced from 19.2% (iFOBT) to 16.7%. Conclusions: Although the expected benefits of nanotechnology-based colorectal cancer screening are improved screening uptake, assuming more accurate test results and less preparation involved, the relative preference for the nanopill is only slightly higher than for iFOBT. Estimating user preferences during the development of diagnostic technologies could be used to identify relative performance, including perceived benefits and harms compared to competitors, allowing significant changes to be made throughout the process of development.
Online ISSN: 1472-6947
Subject: Computer Science, Medicine
Published by BioMed Central
  • 71
    Publication date: 2014-03-21
    Description: Background: Clinical decision support (CDS) has been shown to be effective in improving medical safety and quality, but there is little information on how telephone triage benefits from CDS. The aim of our study was to compare triage documentation quality associated with the use of a clinical decision support tool, ExpertRN(C). Methods: We examined 50 triage documents before and after a CDS tool was used in nursing triage. To control for the effects of CDS training we had an additional control group of triage documents created by nurses who were trained in the CDS tool, but who did not use it in selected notes. The CDS intervention cohort of triage notes was compared to both the pre-CDS notes and the CDS-trained (but not using CDS) cohort. Cohorts were compared using the documentation standards of the Association of American Ambulatory Care Nurses (AAACN). We also compared triage note content (documentation of associated positive and negative features relating to the symptoms, self-care instructions, and warning signs to watch for), and documentation defects pertinent to triage safety. Results: Three of five AAACN documentation standards were significantly improved with CDS. There was a mean of 36.7 symptom features documented in triage notes for the CDS group but only 10.7 symptom features in the pre-CDS cohort (p < 0.0001) and 10.2 for the cohort that was CDS-trained but not using CDS (p < 0.0001). The difference between the mean of 10.7 symptom features documented in the pre-CDS cohort and the mean of 10.2 documented by the CDS-trained cohort not using CDS was not statistically significant (p = 0.68). Conclusions: CDS significantly improves triage note documentation quality. CDS-aided triage notes had significantly more information about symptoms, warning signs and self-care. The changes in triage documentation appeared to be the result of the CDS alone and not due to any CDS training that came with the CDS intervention. Although this study shows that CDS can improve documentation, further study is needed to determine if it results in improved care.
    Online ISSN: 1472-6947
    Subject: Computer Science, Medicine
    Published by BioMed Central
  • 72
    Publication date: 2014-03-12
    Description: Background: Information about drug-target relations is at the heart of drug discovery. There are now dozens of databases providing drug-target interaction data with varying scope and focus. Due to the large chemical space, the overlap of the different data sets is surprisingly small. As searching through these sources manually is cumbersome, time-consuming and error-prone, integrating all the data is highly desirable. Despite a few attempts, integration has been hampered by the diversity of descriptions of compounds, and by the fact that the reported activity values, coming from different data sets, are not always directly comparable due to the usage of different metrics or data formats. Description: We have built Drug2Gene, a knowledge base which combines the compound/drug-gene/protein information from 19 publicly available databases. A key feature is our rigorous unification and standardization process, which makes the data truly comparable on a large scale, allowing for the first time effective data mining in such a large knowledge corpus. As of version 3.2, Drug2Gene contains 4,372,290 unified relations between compounds and their targets, most of which include reported bioactivity data. We extend this set with putative (i.e. homology-inferred) relations where sufficient sequence homology between proteins suggests they may bind to similar compounds. Drug2Gene provides powerful search functionalities, very flexible export procedures, and a user-friendly web interface. Conclusions: Drug2Gene v3.2 has become a mature and comprehensive knowledge base providing unified, standardized drug-target related information gathered from publicly available data sources. It can be used to integrate proprietary data sets with publicly available data sets. Its main goal is to be a 'one-stop shop' to identify tool compounds targeting a given gene product or for finding all known targets of a drug. Drug2Gene with its integrated data set of public compound-target relations is freely accessible without restrictions at http://www.drug2gene.com.
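The unification step this record describes can be illustrated with a toy merge: source-specific compound and gene names are mapped onto canonical keys before relations from different databases are pooled. All identifiers and the synonym table below are invented, not Drug2Gene's actual mappings:

```python
# Toy synonym table standing in for the real identifier-unification resources.
SYNONYMS = {
    "aspirin": "CHEMBL25", "acetylsalicylic acid": "CHEMBL25",
    "ptgs2": "PTGS2", "cox-2": "PTGS2",
}

def canonical(name):
    """Map a source-specific name to a canonical identifier."""
    return SYNONYMS.get(name.strip().lower(), name.upper())

def unify(*sources):
    """Merge (compound, gene, activity) relations from several sources."""
    merged = {}
    for source in sources:
        for compound, gene, activity in source:
            key = (canonical(compound), canonical(gene))
            merged.setdefault(key, []).append(activity)
    return merged

db_a = [("Aspirin", "COX-2", 1.2)]
db_b = [("acetylsalicylic acid", "PTGS2", 1.5)]
relations = unify(db_a, db_b)
```

After unification, the two differently named records collapse into one compound-target pair carrying both reported activity values.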
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 73
    Publication date: 2014-03-05
    Description: Background: Many biomedical relation extraction systems are machine-learning based and have to be trained on large annotated corpora that are expensive and cumbersome to construct. We developed a knowledge-based relation extraction system that requires minimal training data, and applied the system to the extraction of adverse drug events from biomedical text. The system consists of a concept recognition module that identifies drugs and adverse effects in sentences, and a knowledge-base module that establishes whether a relation exists between the recognized concepts. The knowledge base was filled with information from the Unified Medical Language System. The performance of the system was evaluated on the ADE corpus, consisting of 1644 abstracts with manually annotated adverse drug events. Fifty abstracts were used for training; the remaining abstracts were used for testing. Results: The knowledge-based system obtained an F-score of 50.5%, which was 34.4 percentage points better than the co-occurrence baseline. Increasing the training set to 400 abstracts improved the F-score to 54.3%. When the system was compared with a machine-learning system, jSRE, on a subset of the sentences in the ADE corpus, our knowledge-based system achieved an F-score 7 percentage points higher than that of jSRE trained on 50 abstracts, and still 2 percentage points higher than jSRE trained on 90% of the corpus. Conclusion: A knowledge-based approach can be successfully used to extract adverse drug events from biomedical text without the need for a large training set. Whether the use of a knowledge base is equally advantageous for other biomedical relation-extraction tasks remains to be investigated.
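The two-module design described above (concept recognition, then a knowledge-base check) can be sketched in a few lines. The drug list, effect list and knowledge-base pairs are toy stand-ins for the UMLS content the system actually uses:

```python
# Toy vocabularies standing in for UMLS-derived concept lists.
DRUGS = {"warfarin", "ibuprofen"}
EFFECTS = {"bleeding", "nausea"}
KNOWLEDGE_BASE = {("warfarin", "bleeding")}   # known drug -> adverse-event links

def extract_ades(sentence):
    """Recognize drug and effect concepts, then keep only KB-supported pairs."""
    tokens = {t.strip(".,").lower() for t in sentence.split()}
    drugs = tokens & DRUGS
    effects = tokens & EFFECTS
    return {(d, e) for d in drugs for e in effects if (d, e) in KNOWLEDGE_BASE}

ades = extract_ades("Warfarin therapy was stopped after severe bleeding.")
```

Co-occurring concepts without a knowledge-base link are discarded, which is what lifts precision over the co-occurrence baseline.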
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 74
    Publication date: 2014-04-27
    Description: Background: Complex designs are common in (observational) clinical studies. Sequencing data for such studies are produced more and more often, implying challenges for the analysis, such as an excess of zeros, the presence of random effects and multi-parameter inference. Moreover, when sample sizes are small, Bayesian inference is likely to be too liberal when an inappropriate prior is applied, or to lack power when information is not carefully borrowed across features. Results: We show on microRNA sequencing data from a clinical cancer study how our software ShrinkBayes tackles the aforementioned challenges. In addition, we illustrate its comparatively good performance on multi-parameter inference for groups using a data-based simulation. Finally, in the small sample size setting, we demonstrate its high power and improved FDR estimation by use of Gaussian mixture priors that include a point mass. Conclusion: ShrinkBayes is a versatile software package for the analysis of count-based sequencing data, which is particularly useful for studies with small sample sizes or complex designs.
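The "borrowing information across features" idea can be illustrated with a much simpler empirical-Bayes shrinkage sketch. This is not the ShrinkBayes model (which uses full priors, including Gaussian mixtures with a point mass); it only shows the direction of the effect: noisy per-feature estimates are pulled toward the grand mean, more strongly when the feature has few observations:

```python
def shrink(feature_counts, prior_strength=5.0):
    """Pull per-feature means toward the grand mean (toy empirical Bayes)."""
    all_values = [v for counts in feature_counts for v in counts]
    grand_mean = sum(all_values) / len(all_values)
    shrunk = []
    for counts in feature_counts:
        n = len(counts)
        mean = sum(counts) / n
        weight = n / (n + prior_strength)    # fewer observations -> more shrinkage
        shrunk.append(weight * mean + (1 - weight) * grand_mean)
    return shrunk

# One sparse, low-count feature and one well-measured, high-count feature.
estimates = shrink([[0, 0, 1], [40, 40, 40, 42, 38]])
```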
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 75
    Publication date: 2014-04-27
    Description: A common class of biomedical analysis is to explore expression data from high-throughput experiments for the purpose of uncovering functional relationships that can lead to a hypothesis about mechanisms of a disease. We call this analysis expression-driven, -omics hypothesizing. In it, scientists use interactive data visualizations and read deeply in the research literature. Little is known, however, about the actual flow of reasoning and behaviors (sensemaking) that scientists enact in this analysis, end-to-end. Understanding this flow is important because if bioinformatics tools are to be truly useful they must support it. Sensemaking models of visual analytics in other domains have been developed and used to inform the design of useful and usable tools. We believe they would be helpful in bioinformatics. To characterize the sensemaking involved in expression-driven, -omics hypothesizing, we conducted an in-depth observational study of one scientist as she engaged in this analysis over six months. From the findings, we abstracted a preliminary sensemaking model. Here we describe its stages and suggest guidelines for developing visualization tools that we derived from this case. A single case cannot be generalized. But we offer our findings, sensemaking model and case-based tool guidelines as a first step toward increasing interest and further research in the bioinformatics field on scientists' analytical workflows and their implications for tool design.
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 76
    Publication date: 2014-04-28
    Description: Background: Periodic proteins, characterized by the presence of multiple repeats of short motifs, form an interesting and seldom-studied group. Due to the often extreme divergence in sequence, detection and analysis of such motifs is performed more reliably on the structural level. Yet few algorithms have been developed for the detection and analysis of structures of periodic proteins. Results: ConSole recognizes modularity in protein contact maps, allowing for precise identification of repeats in solenoid protein structures, an important subgroup of periodic proteins. Tests on benchmarks show that ConSole has higher recognition accuracy compared to Raphael, the only other publicly available solenoid structure detection tool. As a next step of ConSole analysis, we show how the detection of solenoid repeats in structures can be used to improve sequence recognition of these motifs and to detect subtle irregularities of repeat lengths in three solenoid protein families. Conclusions: The ConSole algorithm provides a fast and accurate tool to recognize solenoid protein structures as a whole and to identify individual solenoid repeat units from a structure. ConSole is available as a web-based, interactive server and for download at http://console.sanfordburnham.org.
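The structural signal ConSole exploits can be illustrated with a toy contact map: in a solenoid with repeat period p, residue i tends to contact residue i + p, so the off-diagonal of the contact map at offset p is densely populated. This sketch scores each offset and picks the densest one; the contact map and scoring rule are invented, not ConSole's actual algorithm:

```python
def best_period(contacts, n, min_offset=3):
    """Score each diagonal offset k by how many contacts (i, i+k) exist."""
    scores = {}
    for k in range(min_offset, n // 2):
        scores[k] = sum(1 for i in range(n - k) if (i, i + k) in contacts)
    return max(scores, key=scores.get)

# Synthetic 30-residue solenoid with an ideal period-5 contact stripe.
n, period = 30, 5
contacts = {(i, i + period) for i in range(n - period)}
detected = best_period(contacts, n)
```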
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 77
    Publication date: 2014-04-30
    Description: Background: The health sector is faced with constant changes as new approaches to tackle illnesses are unveiled through research. Information and communication technology has greatly transformed healthcare practice the world over. Nursing is continually exposed to a variety of changes. Variables including age, educational level, years worked in nursing, and computer knowledge and experience have been found to influence the attitudes of nurses towards computerisation. The purpose of the study was to determine the attitudes of nurses towards the use of computers and the factors that influence these attitudes. Methods: This cross-sectional descriptive study was conducted among staff nurses working at one public hospital (Kenyatta National Hospital, KNH) and one private hospital (Aga Khan University Hospital, AKUH). A convenience sample of 200 nurses filled in the questionnaires. Data were collected using the modified Nurses' Attitudes Towards Computerisation (NATC) questionnaire. Results: Nurses had a favorable attitude towards computerisation. Non-users had a significantly higher attitude score compared to users (p = 0.0274). Statistically significant associations were observed between age (p = 0.039), level of education (p = 0.025), duration of exposure to computers (p = 0.025) and attitudes towards computerisation. Conclusion: Generally, nurses have positive attitudes towards computerisation. This information is important for the planning and implementation of computerisation in the hospital, as suggested in other studies.
    Online ISSN: 1472-6947
    Subject: Computer Science, Medicine
    Published by BioMed Central
  • 78
    Publication date: 2014-04-27
    Description: In this paper we describe the Semantic Quran dataset, a multilingual RDF representation of translations of the Quran. The dataset was created by integrating data from two different semi-structured sources and aligned to an ontology designed to represent multilingual data from sources with a hierarchical structure. The resulting RDF data encompasses 43 different languages, which belong to the most under-represented languages in the Linked Data Cloud, including Arabic, Amharic and Amazigh. We designed the dataset to be easily usable in natural-language processing applications with the goal of facilitating the development of knowledge extraction tools for these languages. In particular, the Semantic Quran is compatible with the Natural-Language Interchange Format and contains explicit morpho-syntactic information on the utilized terms. We present the ontology devised for structuring the data. We also provide the transformation rules implemented in our extraction framework. Finally, we detail the link creation process as well as possible usage scenarios for the Semantic Quran dataset.
    Content type: Journal Article
    DOI: 10.3233/SW-140137
    Authors: Mohamed Ahmed Sherif and Axel-Cyrille Ngonga Ngomo, Universität Leipzig, Institut für Informatik, AKSW, Postfach 100920, D-04009 Leipzig, Germany. E-mail: {Sherif,NgongaNgomo}@informatik.uni-leipzig.de
    Journal: Semantic Web
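The core representational device in such a multilingual RDF dataset is the language-tagged literal: one verse resource carries the same property with differently tagged translations. A hedged sketch emitting N-Triples by hand; the URIs and property name are illustrative, not the ontology actually used by Semantic Quran:

```python
def ntriple(subject, predicate, literal, lang):
    """Serialize one triple with a language-tagged literal in N-Triples syntax."""
    return f'<{subject}> <{predicate}> "{literal}"@{lang} .'

# Hypothetical resource and property URIs.
VERSE = "http://example.org/quran/1/1"
TEXT = "http://example.org/ontology/verseText"

triples = [
    ntriple(VERSE, TEXT, "In the name of God...", "en"),
    ntriple(VERSE, TEXT, "Au nom de Dieu...", "fr"),
]
```

A SPARQL query can then select one translation per verse by filtering on the language tag.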
    Print ISSN: 1570-0844
    Online ISSN: 2210-4968
    Subject: Computer Science
    Published by IOS Press
  • 79
    Publication date: 2014-04-27
    Description: The world is losing some of its 7,000 languages. Hypothesizing that language attrition might subside if all languages were intertranslatable, the PanLex project supports panlingual lexical translation by integrating all known lexical translations. Semantic Web technologies can flexibly represent and reason with the content of its database and interlink it with linguistic and other resources and annotations. Conversely, PanLex, with its collection of translation links between more than a billion pairs of lexemes from more than 9,000 language varieties, can improve the coverage of the Linguistic Web of Data. We detail how we transformed the content of the PanLex database to RDF, established conformance with the lemon and GOLD data models, interlinked it with Lexvo and DBpedia, and published it as Linked Data and via SPARQL.
    Content type: Journal Article
    DOI: 10.3233/SW-140138
    Authors: Patrick Westphal and Claus Stadler, University of Leipzig, Germany. E-mail: {pwestphal,cstadler}@informatik.uni-leipzig.de; Jonathan Pool, The Long Now Foundation, San Francisco, USA. E-mail: pool@panlex.org
    Journal: Semantic Web
    Print ISSN: 1570-0844
    Online ISSN: 2210-4968
    Subject: Computer Science
    Published by IOS Press
  • 80
    Publication date: 2014-04-28
    Description: Background: The identification of functionally important residue positions is an important task of computational biology. Methods of correlation analysis allow for the identification of pairs of residue positions whose occupancy is mutually dependent due to constraints imposed by protein structure or function. A common measure assessing these dependencies is the mutual information, which is based on Shannon's information theory and utilizes probabilities only. Consequently, such approaches do not consider the similarity of residue pairs, which may degrade the algorithm's performance. One typical algorithm is H2r, which characterizes each individual residue position k by the conn(k)-value, the number of significantly correlated pairs it belongs to. Results: To improve the specificity of H2r, we developed a revised algorithm, named H2rs, which is based on the von Neumann entropy (vNE). To compute the corresponding mutual information, a matrix A is required, which assesses the similarity of residue pairs. We determined A by deducing substitution frequencies from contacting residue pairs observed in the homologs of 35,809 proteins whose structure is known. In analogy to H2r, the enhanced algorithm computes a normalized conn(k)-value. Within the framework of H2rs, only statistically significant vNE values were considered. To decide on significance, the algorithm calculates a p-value by performing a randomization test for each individual pair of residue positions. The analysis of a large in silico testbed demonstrated that specificity and precision were higher for H2rs than for H2r and two other methods of correlation analysis. The gain in prediction quality is further confirmed by a detailed assessment of five well-studied enzymes. The outcomes of H2rs and of a method that predicts contacting residue positions (PSICOV) overlapped only marginally. H2rs can be downloaded from www-bioinf.uni-regensburg.de. Conclusions: Considering substitution frequencies for residue pairs by means of the von Neumann entropy and a p-value improved the success rate in identifying important residue positions. The integration of proven statistical concepts and normalization allows for an easier comparison of results obtained with different proteins. Comparing the outcomes of the local method H2rs and of the global method PSICOV indicates that such methods supplement each other and have different scopes of application.
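The Shannon mutual information that H2r builds on, and that H2rs extends, can be computed directly from the joint symbol frequencies of two alignment columns. The sketch below shows only the plain-probability baseline; the von Neumann entropy extension, which additionally weights residue similarity via the matrix A, is omitted:

```python
import math

def mutual_information(col_a, col_b):
    """Shannon mutual information (in bits) between two alignment columns."""
    n = len(col_a)
    joint, pa, pb = {}, {}, {}
    for a, b in zip(col_a, col_b):
        joint[(a, b)] = joint.get((a, b), 0) + 1
        pa[a] = pa.get(a, 0) + 1
        pb[b] = pb.get(b, 0) + 1
    mi = 0.0
    for (a, b), c in joint.items():
        pxy = c / n
        mi += pxy * math.log2(pxy / ((pa[a] / n) * (pb[b] / n)))
    return mi

# Perfectly covarying columns carry 1 bit; a constant column carries none.
mi_dependent = mutual_information("AAGG", "LLVV")
mi_flat = mutual_information("AAAA", "LVLV")
```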
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 81
    Publication date: 2014-04-29
    Description: Background: It is important to predict the quality of a protein structural model before its native structure is known. Methods that can predict the absolute local quality of individual residues in a single protein model are rare, yet particularly needed for using, ranking and refining protein models. Results: We developed a machine learning tool (SMOQ) that can predict the distance deviation of each residue in a single protein model. SMOQ uses support vector machines (SVMs) with protein sequence and structural features (i.e. a basic feature set), including amino acid sequence, secondary structures, solvent accessibilities, and residue-residue contacts, to make predictions. We also trained an SVM model with two new additional features (profiles and SOV scores) on 20 CASP8 targets and found that including them only improves the performance when the real deviations between native structure and model are higher than 5 Å. The SMOQ tool finally released uses the basic feature set trained on 85 CASP8 targets. Moreover, SMOQ implements a way to convert predicted local quality scores into a global quality score. SMOQ was tested on the 84 CASP9 single-domain targets. The average difference between the residue-specific distance deviation predicted by our method and the actual distance deviation on the test data is 2.637 Å. The global quality prediction accuracy of the tool is comparable to other good tools on the same benchmark. Conclusion: SMOQ is a useful tool for protein single-model quality assessment. Its source code and executable are available at http://sysbio.rnet.missouri.edu/multicom_toolbox/.
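One common way to fold per-residue distance deviations into a single global score is a Levitt-Gerstein-style S-score: each residue contributes 1/(1 + (d/d0)^2), and the contributions are averaged. Whether SMOQ uses exactly this transform is an assumption here, and the d0 constant below is illustrative:

```python
def global_quality(deviations, d0=3.8):
    """Average S-score over a chain: residues with small predicted
    deviation d contribute close to 1, badly modeled residues close to 0."""
    return sum(1.0 / (1.0 + (d / d0) ** 2) for d in deviations) / len(deviations)

perfect = global_quality([0.0] * 100)   # no deviation anywhere -> score 1.0
poor = global_quality([15.0] * 100)     # uniformly bad model -> score near 0
```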
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 82
    Publication date: 2014-03-04
    Description: Background: Whole-genome sequencing represents a powerful experimental tool for pathogen research. We present methods for the analysis of small eukaryotic genomes, including a streamlined system (called Platypus) for finding single nucleotide and copy number variants as well as recombination events. Results: We have validated our pipeline using four sets of Plasmodium falciparum drug-resistance data containing 26 clones from 3D7 and Dd2 background strains, identifying an average of 11 single nucleotide variants per clone. We also identify 8 copy number variants that contribute to resistance, and report for the first time that all analyzed amplification events are in tandem. Conclusions: The Platypus pipeline provides malaria researchers with a powerful tool to analyze short-read sequencing data. It provides an accurate way to detect SNVs using known software packages, and a novel methodology for the detection of CNVs, though it does not currently support detection of small indels. We have validated that the pipeline detects known SNVs in a variety of samples while filtering out spurious data. We bundle the methods into a freely available package.
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 83
    Publication date: 2014-04-30
    Description: Background: RNA-binding proteins interact with specific RNA molecules to regulate important cellular processes. It is therefore necessary to identify the RNA interaction partners in order to understand the precise functions of such proteins. Protein-RNA interactions are typically characterized using in vivo and in vitro experiments, but these may not detect all binding partners. Therefore, computational methods that capture the protein-dependent nature of such binding interactions could help to predict potential binding partners in silico. Results: We have developed three methods to predict whether an RNA can interact with a particular RNA-binding protein using support vector machines and different features based on the sequence (the Oli method), the motif score (the OliMo method) and the secondary structure (the OliMoSS method). We applied these approaches to different experimentally derived datasets and compared the predictions with RNAcontext and RPISeq. Oli outperformed OliMoSS and RPISeq, confirming our protein-specific predictions and suggesting that tetranucleotide frequencies are appropriate discriminative features. Oli and RNAcontext were the most competitive methods in terms of the area under the curve. A precision-recall curve analysis achieved higher precision values for Oli. On a second experimental dataset including real negative binding information, Oli outperformed RNAcontext with a precision of 0.73 vs. 0.59. Conclusions: Our experiments showed that features based on primary sequence information are sufficiently discriminating to predict specific RNA-protein interactions. Sequence motifs and secondary structure information were not necessary to improve these predictions. Finally, we confirmed that protein-specific experimental data concerning RNA-protein interactions are valuable sources of information that can be used for the efficient training of models for in silico predictions. The scripts are available upon request to the corresponding author.
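The tetranucleotide frequencies that the Oli method uses as discriminative features form a 256-dimensional vector over the RNA alphabet. A minimal sketch of the featurization step (the SVM training on top of it is omitted):

```python
from itertools import product

# All 4^4 = 256 possible tetranucleotides over the RNA alphabet.
KMERS = ["".join(p) for p in product("ACGU", repeat=4)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def tetranucleotide_features(seq):
    """Sliding-window tetranucleotide counts, normalised to frequencies."""
    counts = [0] * len(KMERS)
    windows = len(seq) - 3
    for i in range(windows):
        kmer = seq[i:i + 4]
        if kmer in INDEX:                 # skip windows with ambiguous bases
            counts[INDEX[kmer]] += 1
    return [c / windows for c in counts]

features = tetranucleotide_features("ACGUACGUACGU")
```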
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 84
    Publication date: 2014-04-30
    Description: Background: Tandem mass spectrometry-based database searching is currently the main method for protein identification in shotgun proteomics. The explosive growth of protein and peptide databases, which is a result of genome translations, enzymatic digestions, and post-translational modifications (PTMs), is making computational efficiency in database searching a serious challenge. Profile analysis shows that most search engines spend 50%-90% of their total time in the scoring module, and that the spectrum dot product (SDP) based scoring module is the most widely used. As general-purpose, high-performance parallel hardware, graphics processing units (GPUs) are promising platforms for speeding up database searches in the protein identification process. Results: We designed and implemented a parallel SDP-based scoring module on GPUs that exploits the efficient use of GPU registers, constant memory and shared memory. Compared with the CPU-based version, we achieved a 30 to 60 times speedup using a single GPU. We also implemented our algorithm on a GPU cluster and achieved a favorable speedup there as well. Conclusions: Our GPU-based SDP algorithm can significantly improve the speed of the scoring module in mass spectrometry-based protein identification. The algorithm can be easily implemented in many database search engines such as X!Tandem, SEQUEST, and pFind. A software tool implementing this algorithm is available at http://www.comp.hkbu.edu.hk/~youli/ProteinByGPU.html
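The kernel being parallelised is simple: bin the peaks of the experimental and theoretical spectra by m/z, then take the dot product of the two binned intensity vectors. A CPU reference sketch with an invented, simplified binning scheme:

```python
def binned(peaks, bin_width=1.0):
    """Accumulate (m/z, intensity) peaks into integer m/z bins."""
    spectrum = {}
    for mz, intensity in peaks:
        b = int(mz / bin_width)
        spectrum[b] = spectrum.get(b, 0.0) + intensity
    return spectrum

def sdp(spec_a, spec_b):
    """Spectrum dot product over shared bins."""
    if len(spec_a) > len(spec_b):            # iterate over the smaller spectrum
        spec_a, spec_b = spec_b, spec_a
    return sum(v * spec_b.get(k, 0.0) for k, v in spec_a.items())

experimental = binned([(114.09, 2.0), (245.1, 1.0)])
theoretical = binned([(114.2, 1.0), (500.0, 3.0)])
score = sdp(experimental, theoretical)
```

On a GPU, this inner loop is what gets mapped onto threads, with the theoretical spectra staged in constant and shared memory.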
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 85
    Publication date: 2014-04-30
    Description: Background: RNA-seq and its variant differential RNA-seq (dRNA-seq) are today routine methods for transcriptome analysis in bacteria. While expression profiling and transcriptional start site prediction are standard tasks today, the problem of identifying transcriptional units in a genome-wide fashion is still not solved for prokaryotic systems. Results: We present RNASEG, an algorithm for the prediction of transcriptional units based on dRNA-seq data. A key feature of the algorithm is that, based on the data, it distinguishes between transcribed and untranscribed genomic segments. Furthermore, the program provides many different predictions in a single run, which can be used to infer the significance of transcriptional units in a consensus procedure. We show the performance of our method on a well-studied dRNA-seq data set for Helicobacter pylori. Conclusions: With our algorithm it is possible to identify operons and 5'- and 3'-UTRs in an automated fashion. This alleviates the need for labour-intensive manual inspection and enables large-scale studies in the area of comparative transcriptomics.
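The segmentation task itself can be shown with a toy version: classify each genomic position as transcribed or silent from a coverage vector, then merge runs of equal state into segments. RNASEG's actual model is more elaborate; the fixed coverage threshold below is a stand-in:

```python
def segments(coverage, threshold=5):
    """Split a coverage vector into maximal runs of transcribed/silent state."""
    result, start, state = [], 0, coverage[0] >= threshold
    for i, c in enumerate(coverage[1:], start=1):
        now = c >= threshold
        if now != state:                     # state change closes a segment
            result.append((start, i - 1, "transcribed" if state else "silent"))
            start, state = i, now
    result.append((start, len(coverage) - 1, "transcribed" if state else "silent"))
    return result

cov = [0, 0, 12, 15, 14, 9, 0, 0, 0, 20, 22]
units = segments(cov)
```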
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 86
    Publication date: 2014-03-20
    Description: Background: CA_C2195 from Clostridium acetobutylicum is a protein of unknown function. Sequence analysis predicted that part of the protein contained a metallopeptidase-related domain. There are over 200 homologs of similar size in large sequence databases such as UniProt, with pairwise sequence identities in the range of ~40-60%. CA_C2195 was chosen for crystal structure determination for structure-based function annotation of novel protein sequence space. Results: The structure confirmed that CA_C2195 contains an N-terminal metallopeptidase-like domain. The structure revealed two extra domains: an alpha+beta domain inserted in the metallopeptidase-like domain and a C-terminal circularly permuted winged-helix-turn-helix domain. Conclusions: Based on our sequence and structural analyses using the crystal structure of CA_C2195, we provide a view into the possible functions of the protein. From contextual information from gene-neighborhood analysis, we propose that rather than being a peptidase, CA_C2195 and its homologs might play a role in the biosynthesis of a modified cell-surface carbohydrate in conjunction with several sugar-modification enzymes. These results provide the groundwork for the experimental verification of the function.
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 87
    Publication date: 2014-03-20
    Description: Background: Recent efforts in HIV-1 vaccine design have focused on immunogens that evoke potent neutralizing antibody responses to a broad spectrum of viruses circulating worldwide. However, the development of effective vaccines will depend on the identification and characterization of the neutralizing antibodies and their epitopes. We developed bioinformatics methods to predict epitope networks and antigenic determinants using structural information, as well as corresponding genotypes and phenotypes generated by a highly sensitive and reproducible neutralization assay. 282 clonal envelope sequences from a multiclade panel of HIV-1 viruses were tested in viral neutralization assays with an array of broadly neutralizing monoclonal antibodies (mAbs: b12, PG9, PG16, PGT121-128, PGT130-131, PGT135-137, PGT141-145, and PGV04). We correlated IC50 titers with the envelope sequences, and used this information to predict antibody epitope networks. Structural patches were defined as amino acid groups based on solvent accessibility, radius, atomic depth, and interaction networks within 3D envelope models. We applied a boosted algorithm consisting of multiple machine-learning and statistical models to evaluate these patches as possible antibody epitope regions, evidenced by strong correlations with the neutralization response for each antibody. Results: We identified patch clusters with significant correlation to IC50 titers as sites that impact neutralization sensitivity and therefore are potentially part of the antibody binding sites. Predicted epitope networks were mostly located within the variable loops of the envelope glycoprotein (gp120), particularly in V1/V2. Site-directed mutagenesis experiments involving residues identified as epitope networks across multiple mAbs confirmed the association of these residues with loss or gain of neutralization sensitivity. Conclusions: Computational methods were implemented to rapidly survey protein structures and predict epitope networks associated with response to individual monoclonal antibodies, which resulted in the identification and deeper understanding of immunological hotspots targeted by broadly neutralizing HIV-1 antibodies.
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
  • 88
    Publication date: 2014-01-25
    Description: Background: The introduction of next-generation sequencing (NGS) technology has made it possible to detect genomic alterations within tumor cells on a large scale. However, most applications of NGS show the genetic content of mixtures of cells. Recently developed single cell sequencing technology can identify variation within a single cell. Characterization of multiple samples from a tumor using single cell sequencing can potentially provide information on the evolutionary history of that tumor. This may facilitate understanding of how key mutations accumulate and evolve in lineages to form a heterogeneous tumor. Results: We provide a computational method to infer an evolutionary mutation tree based on single cell sequencing data. Our approach differs from traditional phylogenetic tree approaches in that our mutation tree directly describes temporal order relationships among mutation sites. Our method also accommodates sequencing errors. Furthermore, we provide a method for estimating the proportion of time from the earliest mutation event of the sample to the most recent common ancestor of the sample of cells. Finally, we discuss current limitations on modeling with single cell sequencing data and possible improvements under those limitations. Conclusions: Inferring the temporal ordering of mutational sites using current single cell sequencing data is a challenge. Our proposed method may help elucidate relationships among key mutations and their role in tumor progression.
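The intuition behind ordering mutations temporally can be sketched under an infinite-sites assumption: on a single lineage, a mutation that arose earlier is carried by at least as many cells as one that arose later, so sorting mutations by the number of cells carrying them suggests their order. The genotype matrix below is invented and error-free; the paper's method additionally models sequencing error:

```python
def mutation_order(genotypes):
    """genotypes: dict mutation -> set of cell ids carrying it.
    Returns mutations sorted from (inferred) earliest to latest."""
    return sorted(genotypes, key=lambda m: len(genotypes[m]), reverse=True)

# Nested cell sets, consistent with a single evolving lineage.
cells = {
    "M1": {"c1", "c2", "c3", "c4"},   # earliest: present in all sampled cells
    "M2": {"c2", "c3", "c4"},
    "M3": {"c3", "c4"},
}
order = mutation_order(cells)
```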
    Online ISSN: 1471-2105
    Subject: Biology, Computer Science
    Published by BioMed Central
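The abstract above describes ordering mutation sites in time from single-cell data. As a toy illustration only (not the paper's method, which also handles sequencing error), the sketch below orders sites under the infinite-sites assumption: a mutation that arose earlier is carried by at least as many cells as any later mutation on the same lineage, so sorting sites by cell frequency recovers a temporal order. All names and the data matrix are invented.

```python
# Toy temporal ordering of mutation sites from a binary cell-by-site
# matrix, under the infinite-sites assumption (each mutation occurs
# once and is never lost). Sequencing error is ignored here.

def order_mutations(matrix, sites):
    """matrix: list of per-cell 0/1 vectors; sites: site labels.
    Returns sites ordered earliest-first by how many cells carry them."""
    freq = [sum(row[j] for row in matrix) for j in range(len(sites))]
    return [s for _, s in sorted(zip(freq, sites), reverse=True)]

cells = [  # rows: cells; columns: mutation sites A, B, C
    [1, 1, 1],
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 0],
]
print(order_mutations(cells, ["A", "B", "C"]))  # ['A', 'B', 'C']
```

Site A is carried by all four cells, B by three, C by one, so the inferred order is A before B before C.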
  • 89
    Publication date: 2014-01-25
    Description: In this paper, we present a methodology for an introduction to scientific computing based on model-centered learning. We propose multiphase queueing systems as a basis for learning objects. We use Python and parallel programming to implement the models, and present the computer code and results of stochastic simulations. Content type: Journal Article. DOI: 10.3233/SPR-140377. Authors: Vladimiras Dolgopolovas and Valentina Dagienė (Informatics Methodology Department) and Saulius Minkevčius and Leonidas Sakalauskas (Operational Research Sector, System Analysis Department), Institute of Mathematics and Informatics of Vilnius University, Vilnius, Lithuania. Journal: Scientific Programming. Online ISSN: 1875-919X.
    Print ISSN: 1058-9244
    Subject: Computer Science
    Published by IOS Press
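The abstract above proposes multiphase queueing systems, implemented in Python, as learning objects for scientific computing. As a minimal sketch in that spirit (not the authors' code; the two-phase tandem model and all parameter names are assumptions), this simulates customers passing through two exponential single-server phases in FIFO order:

```python
# Minimal stochastic simulation of a two-phase (tandem) queueing system:
# Poisson arrivals, exponential service at each single-server FIFO phase.
import random

def simulate_tandem(arrival_rate, service_rates, n_customers, seed=0):
    """Returns the mean time a customer spends in the whole system."""
    rng = random.Random(seed)
    t = 0.0
    arrivals = []
    for _ in range(n_customers):
        t += rng.expovariate(arrival_rate)  # Poisson interarrival times
        arrivals.append(t)
    total_sojourn = 0.0
    phase_free = [0.0] * len(service_rates)  # when each server frees up
    for a in arrivals:
        t = a
        for k, mu in enumerate(service_rates):
            start = max(t, phase_free[k])    # wait if the server is busy
            t = start + rng.expovariate(mu)  # exponential service time
            phase_free[k] = t
        total_sojourn += t - a
    return total_sojourn / n_customers

mean_w = simulate_tandem(arrival_rate=0.5, service_rates=[1.0, 1.5],
                         n_customers=10000)
print(f"mean time in system: {mean_w:.2f}")
```

For these rates, queueing theory (two M/M/1 stations in series) predicts a mean time in system of 1/(1.0-0.5) + 1/(1.5-0.5) = 3.0, which the simulated estimate should approach.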
  • 90
    Publication date: 2014-01-26
    Description: Background: Networks are commonly used to represent and analyze large and complex systems of interacting elements. In systems biology, human disease networks show interactions between disorders sharing a common genetic background. We built a pathway-based human phenotype network (PHPN) of over 800 physical attributes, diseases, and behavioral traits, based on about 2,300 genes and 1,200 biological pathways. Using GWAS phenotype-to-gene associations and pathway data from Reactome, we connect human traits based on common patterns of human biological pathways, detecting more pleiotropic effects and expanding previous studies from a gene-centric approach to one of shared cell processes. Results: The resulting network has a heavily right-skewed degree distribution, placing it in the scale-free region of the spectrum of network topologies. We extract the multi-scale information backbone of the PHPN based on the local densities of the network, discarding weak connections. Using a standard community detection algorithm, we construct phenotype modules of similar traits without applying expert biological knowledge. These modules can be assimilated to disease classes; however, we are able to classify phenotypes according to shared biology rather than arbitrary disease classes. We present examples of expected clinical connections identified by the PHPN as proof of principle. Conclusions: We unveil previously uncharacterized connections between phenotype modules and discuss potential mechanistic connections that are obvious only in retrospect. The PHPN shows tremendous potential to become a useful tool both in unveiling the common biology of diseases and in the elaboration of diagnoses and treatments.
    Online ISSN: 1756-0381
    Subject: Biology, Computer Science
    Published by BioMed Central
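The core construction in the abstract above connects two phenotypes when they map onto shared biological pathways. A minimal sketch of that edge-building step, on an invented toy data set (the trait and pathway names are hypothetical, not from the paper):

```python
# Connect phenotypes that share biological pathways; the edge weight is
# the number of pathways in common. Toy data for illustration only.
from itertools import combinations

phenotype_pathways = {
    "trait_A": {"p1", "p2", "p3"},
    "trait_B": {"p2", "p3"},
    "trait_C": {"p4"},
}

edges = {}
for u, v in combinations(sorted(phenotype_pathways), 2):
    shared = phenotype_pathways[u] & phenotype_pathways[v]
    if shared:
        edges[(u, v)] = len(shared)

print(edges)  # {('trait_A', 'trait_B'): 2}
```

On the real data, the degree distribution of the resulting graph is what the authors report as heavily right-skewed.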
  • 91
    Publication date: 2014-01-26
    Description: The paper presents a formal description of a subset of the Alvis language, designed for the modelling and formal verification of concurrent systems. Alvis combines the possibilities of formal model verification with the flexibility and simplicity of practical programming languages. Alvis provides graphical modelling of interconnections among agents and a high-level programming language used for the description of agent behaviour. Its semantics depend on the so-called system layer. The most universal system layer, α0, described in the paper, makes Alvis similar to other formal languages such as Petri nets, process algebras, timed automata, etc. Content type: Journal Article. Pages: 161-176. DOI: 10.3233/FI-2014-967. Authors: Marcin Szpyrka, Piotr Matyasik, Rafał Mrówka, and Leszek Kotulski, AGH University of Science and Technology, Department of Applied Computer Science, Al. Mickiewicza 30, 30-059 Krakow, Poland. {mszpyrka,ptm,Rafal.Mrowka,kotulski}@agh.edu.pl. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
  • 92
    Publication date: 2014-01-26
    Description: In this essay, we analyze various, often semantically identified, notions of separating things. In doing so, we contrast the set-theoretical approach, based on the notion of an element/point, with the mereological approach, based on the notion of a part and hence pointless. We address the time aspect of the notion of a boundary and related notions, as well as approximate notions defined in the realm of rough (approximate) mereology. Content type: Journal Article. Pages: 149-159. DOI: 10.3233/FI-2014-966. Authors: Lech Polkowski, Polish–Japanese Institute of Information Technology, Koszykowa 86, 02008 Warszawa, Poland, polkow@pjwstk.edu.pl; Maria Semeniuk-Polkowska, Chair of Formal Linguistics, Warsaw University, Dobra 55, 00312 Warszawa, Poland, m.polkowska@uw.edu.pl. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
  • 93
    Publication date: 2014-01-26
    Description: We discuss an approach for dealing with uncertainty in complex systems. The approach is based on interactive computations over complex objects called here complex granules (c-granules, for short). Any c-granule consists of a physical part and a mental part linked in a special way. We begin with the rough set approach and then move toward interactive computations on c-granules. From our considerations it follows that the fundamental issues of intelligent systems based on interactive computations are related to risk management in such systems. Our approach is a step toward the realization of the Wisdom Technology (WisTech) program and was developed over years of work on different real-life projects. Content type: Journal Article. Pages: 69-84. DOI: 10.3233/FI-2014-961. Authors: Andrzej Jankowski, Institute of Computer Science, Warsaw University of Technology, Nowowiejska 15/19, 00-665 Warsaw, Poland, a.jankowski@ii.pw.edu.pl; Andrzej Skowron, Institute of Mathematics, The University of Warsaw, Banacha 2, 02-097 Warsaw, Poland, skowron@mimuw.edu.pl; Roman W. Swiniarski, Department of Computer Science, San Diego State University, 5500 Campanile Drive, San Diego, CA 92182, USA, and Institute of Computer Science, Polish Academy of Sciences, Jana Kazimierza 5, 01-248 Warsaw, Poland, rswiniarski@mail.sdsu.edu. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
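The abstract above says the approach begins with rough sets. As background only (this is the classical construction the paper departs from, not the c-granule machinery itself; the data are invented), the sketch below computes the lower and upper approximations of a concept from an indiscernibility partition:

```python
# Classical rough set approximations: a block of the indiscernibility
# partition belongs to the lower approximation if it lies entirely
# inside the target concept, and to the upper approximation if it
# merely intersects it. Toy universe and partition for illustration.

partition = [{1, 2}, {3, 4}, {5, 6}]  # indiscernibility classes
target = {1, 2, 3}                    # concept to approximate

lower, upper = set(), set()
for block in partition:
    if block <= target:   # certainly inside the concept
        lower |= block
    if block & target:    # possibly inside the concept
        upper |= block

boundary = upper - lower  # the region of uncertainty
print(lower, upper, boundary)  # {1, 2} {1, 2, 3, 4} {3, 4}
```

The nonempty boundary region {3, 4} is exactly what makes the concept "rough": its members cannot be classified with certainty from the available attributes.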
  • 94
    Publication date: 2014-01-26
    Description: Networks of Timed Automata (NTA) and Time Petri Nets (TPNs) are well-established formalisms used to model, analyze, and control industrial real-time systems. The underlying theories are usually developed in different scientific communities, and both formalisms have distinct strong points: for instance, conciseness for TPNs and a more flexible notion of urgency for NTA. The objective of the paper is to introduce a new model allowing the joint use of both TPNs and NTA for the modeling of timed systems. We call it the Clock Transition System (CTS). This new model incorporates the advantages of the structure of Petri nets while introducing the concept of clocks explicitly. Transitions in the network can be guarded by an expression on the clocks and reset a subset of them, as in timed automata. Urgency is introduced by a separate description of invariants. We show that CTSs can express TPNs (even when unbounded) and NTA. For those two classical models, we identify subclasses of CTSs equivalent by isomorphism of their operational semantics and provide (syntactic) translations. The classical state-space computation developed for NTA and then adapted to TPNs can easily be defined for general CTSs. With these merits, the CTS model seems a good candidate to serve as an intermediate theoretical and practical model to factor out upcoming developments in the TPN and NTA scientific communities. Content type: Journal Article. Pages: 85-100. DOI: 10.3233/FI-2014-962. Authors: Claude Jard, Université de Nantes, LINA, Nantes, France, Claude.Jard@univ-nantes.fr; Didier Lime, École Centrale de Nantes, IRCCyN, Nantes, France, Didier.Lime@irccyn.ec-nantes.fr; Olivier H. Roux, École Centrale de Nantes, IRCCyN, Nantes, France, Olivier-H.Roux@irccyn.ec-nantes.fr. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
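The abstract above describes transitions guarded by clock expressions, clock resets, and urgency via invariants. The sketch below mirrors that description only loosely; it is a hypothetical toy encoding (all class and method names are invented, and it covers none of the paper's formal semantics or translations):

```python
# Toy illustration of the Clock Transition System ingredients: clocks
# advance together as time elapses (only while the invariant holds, which
# models urgency), and a transition may fire when its guard is satisfied,
# resetting some clocks to zero.
from dataclasses import dataclass, field

@dataclass
class Transition:
    guard: callable   # clocks -> bool: may the transition fire now?
    resets: set       # names of clocks set back to 0 on firing

@dataclass
class CTS:
    clocks: dict = field(default_factory=dict)

    def elapse(self, delta, invariant):
        """Let time pass, but only if the invariant stays satisfied."""
        advanced = {c: v + delta for c, v in self.clocks.items()}
        if not invariant(advanced):
            raise ValueError("invariant violated: time cannot elapse")
        self.clocks = advanced

    def fire(self, t):
        if not t.guard(self.clocks):
            raise ValueError("guard not satisfied")
        for c in t.resets:
            self.clocks[c] = 0.0

sys = CTS(clocks={"x": 0.0})
sys.elapse(2.0, invariant=lambda cl: cl["x"] <= 5)
sys.fire(Transition(guard=lambda cl: cl["x"] >= 1, resets={"x"}))
print(sys.clocks)  # {'x': 0.0}
```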
  • 95
    Publication date: 2014-01-26
    Description: This paper outlines the primary steps in investigating whether artificial agents can be considered true substitutes for humans. Based on a Socially Augmented Microworld (SAM), human tracking behavior was analyzed using time series. SAM involves a team of navigators jointly steering a driving object along different virtual tracks containing obstacles and forks. Speed and deviations from the track are logged, producing high-resolution time series of individual (training) and cooperative tracking behavior. In the current study, 52 time series of individual tracking behavior on training tracks were clustered according to different similarity measures. The resulting clusters were used to predict cooperative tracking behavior in fork situations. Results showed that prediction was accurate for tracking behavior shown at the first fork and moderately accurate at the third fork of the cooperative track: navigators switched from their trained tracking style to a different one and then back to their trained behavior. This matches navigator types identified earlier by visual examination. Our findings on navigator types will serve as a basis for the development of artificial agents, which can later be compared to the behavior of human navigators. Content type: Journal Article. Pages: 133-147. DOI: 10.3233/FI-2014-965. Authors: Vladimir Kurbalija and Mirjana Ivanović, Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Trg D. Obradovica 4, 21000 Novi Sad, Serbia, kurba@dmi.uns.ac.rs, mira@dmi.uns.ac.rs; Charlotte von Bernstorff and Jens Nachtwei, Institute of Psychology, Humboldt University Berlin, Rudower Chaussee 18, 12489 Berlin, Germany, charlotte.bernstorff@hu-berlin.de, jens.nachtwei@hu-berlin.de; Hans-Dieter Burkhard, Institute of Informatics, Humboldt University Berlin, Rudower Chaussee 25, 12489 Berlin, Germany, hdb@informatik.hu-berlin.de. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
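The pipeline in the abstract above clusters tracking time series by similarity and uses the clusters to predict behavior. As a loose sketch of that idea (Euclidean distance stands in for the paper's similarity measures, and the tiny deviation series and style labels are invented), nearest-neighbor matching assigns a new series to a known tracking style:

```python
# Assign a new navigator's deviation-from-track time series to the most
# similar known series, and predict its tracking style from that match.
import math

def dist(a, b):
    """Plain Euclidean distance between two equal-length series."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

training = {  # navigator id -> deviation-from-track time series
    "n1": [0.1, 0.2, 0.1, 0.0],
    "n2": [0.9, 1.1, 1.0, 0.8],
}
styles = {"n1": "smooth", "n2": "erratic"}

new_series = [0.15, 0.25, 0.05, 0.1]
nearest = min(training, key=lambda n: dist(training[n], new_series))
print(styles[nearest])  # smooth
```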
  • 96
    Publication date: 2014-01-26
    Description: We present structural properties of languages constructed with catenation and shuffle, comprising iteration lemmata and closure properties of the language classes, as well as decidability results that follow. Content type: Journal Article. Pages: 117-132. DOI: 10.3233/FI-2014-964. Authors: Manfred Kudlek, University of Hamburg, Germany; Nils Erik Flick, University of Oldenburg, Germany, flick@informatik.uni-oldenburg.de. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
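As background for the shuffle operation the abstract above studies alongside catenation: the shuffle of two words is the set of all interleavings that preserve the letter order of each word. A small recursive definition (illustrative only, not from the paper):

```python
# Shuffle of two words: every interleaving that keeps the relative
# order of letters within each word.

def shuffle(u, v):
    if not u:
        return {v}
    if not v:
        return {u}
    # Either the next letter comes from u or from v.
    return ({u[0] + w for w in shuffle(u[1:], v)} |
            {v[0] + w for w in shuffle(u, v[1:])})

print(sorted(shuffle("ab", "c")))  # ['abc', 'acb', 'cab']
```

Catenation, by contrast, would yield only the single word "abc", which is why combining the two operations produces the richer language classes examined in the paper.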
  • 97
    Publication date: 2014-01-26
    Description: The Unified State Model (USM) is a single data model that can convey objects of major programming languages and databases. USM exploits and emphasizes common properties of their data models and is equipped with mappings from these data models onto it. With USM at hand, we have faced the natural next research question: whether the numerous query languages for the data subsumed by USM can be cleanly mapped onto a common language. We have designed and propose such a language, called the Unified Query Language (UQL). UQL is intended to be a minimalistic and elegant query language that allows expressing queries of the languages of the data models covered by USM. In this paper we define UQL and its concise set of operators. We then give a gentle introduction to UQL's features by showing examples of SQL and ODMG OQL queries and their mapping onto UQL. We conclude by presenting the mapping of the theoretical foundations of these two major query languages onto UQL: the multiset relational algebra and the object query algebra. This is an important step towards the establishment of a fully-fledged common query language for USM and its subsumed data models. Content type: Journal Article. Pages: 177-192. DOI: 10.3233/FI-2014-968. Authors: Piotr Wiśniewski, Faculty of Mathematics and Computer Science, Nicolaus Copernicus University, Toruń, Poland, pikonrad@mat.umk.pl; Krzysztof Stencel, Institute of Informatics, University of Warsaw, Warsaw, Poland, stencel@mimuw.edu.pl. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
  • 98
    Publication date: 2014-01-26
    Description: In this paper we study the complexity of HORNETS, an algebraic extension of object nets. We define a restricted class, safe elementary HORNETS, to guarantee finite state spaces. It turns out that the reachability problem for this class requires exponential space, a major increase compared to safe elementary object nets, which require polynomial space. Content type: Journal Article. Pages: 101-115. DOI: 10.3233/FI-2014-963. Authors: Michael Köhler-Bußmeier, University of Hamburg, Department for Informatics, Vogt-Kölln-Straße 30, D-22527 Hamburg, Germany, koehler@informatik.uni-hamburg.de. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
  • 99
    Publication date: 2014-01-26
    Description: Computational steering is the interactive remote control of a long-running application; the user can employ it, e.g., to adjust simulation parameters on the fly. Simulation of large-scale biochemical networks is often computationally expensive, particularly stochastic and hybrid simulation. Such extremely time-consuming computations necessitate an interactive mechanism that permits users to try different paths and ask "what-if" questions while the simulation is in progress. Furthermore, with the progress of computational modelling and the simulation of biochemical networks, there is a need to manage multi-scale models, which may contain species or reactions at different scales. In this context, Petri nets are of special importance, since they provide an intuitive visual representation of reaction networks. In this paper, we introduce a framework, and its implementation, for combining Petri nets and computational steering for the representation and interactive simulation of biochemical networks. The main merits of the developed framework are: intuitive representation of biochemical networks by means of Petri nets, distributed collaborative and interactive simulation, and tight coupling of simulation and visualisation. Content type: Journal Article. Pages: 49-67. DOI: 10.3233/FI-2014-960. Authors: Mostafa Herajy, Department of Mathematics and Computer Science, Faculty of Science, Port Said University, 42521 Port Said, Egypt, mherajy@sci.psu.edu.eg; Monika Heiner, Computer Science Institute, Brandenburg University of Technology, D-03013 Cottbus, Germany, monika.heiner@informatik.tu-cottbus.de. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
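The abstract above concerns steering stochastic simulations of biochemical Petri nets. To illustrate the kind of simulation being steered (this is a generic Gillespie-style stochastic Petri net simulator, not the authors' framework; the one-reaction net A → B is invented), a minimal sketch:

```python
# Minimal Gillespie-style simulation of a stochastic Petri net:
# repeatedly pick an exponentially distributed waiting time from the
# total propensity, choose an enabled transition proportionally to its
# propensity, and fire it by moving tokens.
import random

def simulate(marking, transitions, t_end, seed=0):
    """transitions: list of (pre, post, rate_constant) triples, where
    pre/post map place -> token count consumed/produced."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = []
        for pre, post, k in transitions:
            a = k
            for place, need in pre.items():
                a *= marking[place] if marking[place] >= need else 0
            props.append(a)
        total = sum(props)
        if total == 0:
            break  # dead marking: no transition can fire
        t += rng.expovariate(total)
        r = rng.random() * total
        for (pre, post, k), a in zip(transitions, props):
            r -= a
            if r <= 0:
                for p, n in pre.items():
                    marking[p] -= n
                for p, n in post.items():
                    marking[p] = marking.get(p, 0) + n
                break
    return marking

m = simulate({"A": 50, "B": 0}, [({"A": 1}, {"B": 1}, 1.0)], t_end=1e9)
print(m)  # {'A': 0, 'B': 50}
```

In a steering setting, the rate constants inside `transitions` are exactly the parameters a user would adjust while such a run is in progress.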
  • 100
    Publication date: 2014-01-26
    Description: In the paper, we attempt to identify the crucial determinants of an economy's innovativeness and the correlations between those determinants. We base our research on the Innovativeness Union Scoreboard (IUS) dataset. To solve the problem, we propose a Double Self-Organizing Feature Map (SOM) approach. In the first step, countries, described by determinants of economic innovativeness, are clustered using SOMs according to five-year time series for each determinant separately. In the second step, the results of the first step are clustered again using a SOM to obtain the final correlations, represented in the form of a minimal spanning tree. We propose some modifications of the SOM clustering process to improve classification results and the efficiency of the learning process. Content type: Journal Article. Pages: 37-48. DOI: 10.3233/FI-2014-959. Authors: Marta Czyżewska, Jarosław Szkoła, and Krzysztof Pancerz, University of Information Technology and Management, Sucharskiego Str. 2, 35-225 Rzeszów, Poland, mczyzewska@wsiz.rzeszow.pl, jszkola@wsiz.rzeszow.pl, kpancerz@wsiz.rzeszow.pl. Journal: Fundamenta Informaticae, Volume 129, Number 1-2 / 2014.
    Print ISSN: 0169-2968
    Online ISSN: 1875-8681
    Subject: Computer Science
    Published by IOS Press
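The final output described in the abstract above is a minimal spanning tree over the clustered determinants. As background for that last step only (the SOM stages are omitted, and the 4×4 distance matrix is invented), here is a sketch of Prim's algorithm:

```python
# Prim's algorithm: grow a minimum spanning tree from node 0 by always
# adding the cheapest edge that connects the tree to a new node.

def prim_mst(dist):
    n = len(dist)
    in_tree = {0}
    edges = []
    while len(in_tree) < n:
        u, v = min(((i, j) for i in in_tree for j in range(n)
                    if j not in in_tree),
                   key=lambda e: dist[e[0]][e[1]])
        edges.append((u, v))
        in_tree.add(v)
    return edges

dist = [  # symmetric pairwise distances between four items
    [0, 2, 9, 8],
    [2, 0, 4, 7],
    [9, 4, 0, 1],
    [8, 7, 1, 0],
]
print(prim_mst(dist))  # [(0, 1), (1, 2), (2, 3)]
```

In the paper's setting, the nodes would be the SOM-clustered determinants and the distances would come from the second clustering step.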