ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Collection: Articles  (18,657)
  • Years: 2010-2014  (18,657)
  • Year: 2012  (18,657)
  • Topic: Computer Science  (18,657)
  • 1
    Publication Date: 2012-01-01
    Electronic ISSN: 2192-1962
    Topics: Computer Science
    Published by Springer
  • 2
    In: Computing
    Publication Date: 2012-03-10
    Description: The deployment of sensors without enough coverage can result in unreliable outputs in wireless sensor networks (WSNs). Thus, sensing coverage is one of the most important quality-of-service factors in WSNs. A useful metric for quantifying coverage reliability is the coverage rate, which is the area covered by sensor nodes in a region of interest. The network sink can be informed about the locations of all nodes and calculate the coverage rate centrally. However, this approach creates a huge load on the network nodes, which have to send their location information to the sink. Thus, a distributed approach is required to calculate the coverage rate. This paper is among the very first to provide a localized approach to calculating the coverage rate. We provide two coverage rate calculation (CRC) protocols, namely distributed exact coverage rate calculation (DECRC) and distributed probabilistic coverage rate calculation (DPCRC). DECRC calculates the coverage rate precisely using the idealized disk graph model. Precise calculation of the coverage rate is a unique property of DECRC compared to similar works that have used the disk graph model. In contrast, DPCRC uses a more realistic model, the probabilistic coverage model, to determine an approximate coverage rate. DPCRC is in fact an extended version of DECRC that uses a set of localized techniques to make it a low-cost protocol. Simulation results show significant overall performance improvement of the CRC protocols compared to related works. Content Type Journal Article Pages 1-24 DOI 10.1007/s00607-012-0192-1 Authors Saeed Sedighian Kashi, Iran University of Science and Technology, Tehran, Iran Mohsen Sharifi, Iran University of Science and Technology, Tehran, Iran Journal Computing Online ISSN 1436-5057 Print ISSN 0010-485X
    Print ISSN: 0010-485X
    Electronic ISSN: 1436-5057
    Topics: Computer Science
    Published by Springer
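    A minimal illustrative sketch (not part of the catalogue record) of the coverage-rate metric described in the abstract above: a centralized Monte Carlo estimate under the idealized disk graph model. The node positions, sensing radius and region size are hypothetical, and this is not the authors' localized DECRC/DPCRC protocols.

        # Monte Carlo estimate of the fraction of a rectangular region covered
        # by at least one sensing disk (idealized disk graph model).
        import random

        def coverage_rate(nodes, radius, width, height, samples=100_000):
            r2 = radius * radius
            covered = 0
            for _ in range(samples):
                x, y = random.uniform(0, width), random.uniform(0, height)
                if any((x - nx) ** 2 + (y - ny) ** 2 <= r2 for nx, ny in nodes):
                    covered += 1
            return covered / samples

        # Example: 50 randomly placed sensors with a 10 m range in a 100 m x 100 m field.
        sensors = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(50)]
        print(f"estimated coverage rate: {coverage_rate(sensors, 10, 100, 100):.3f}")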
  • 3
    Publication Date: 2012-03-10
    Description:    Nanoinformatics has recently emerged to address the need of computing applications at the nano level. In this regard, the authors have participated in various initiatives to identify its concepts, foundations and challenges. While nanomaterials open up the possibility for developing new devices in many industrial and scientific areas, they also offer breakthrough perspectives for the prevention, diagnosis and treatment of diseases. In this paper, we analyze the different aspects of nanoinformatics and suggest five research topics to help catalyze new research and development in the area, particularly focused on nanomedicine. We also encompass the use of informatics to further the biological and clinical applications of basic research in nanoscience and nanotechnology, and the related concept of an extended “nanotype” to coalesce information related to nanoparticles. We suggest how nanoinformatics could accelerate developments in nanomedicine, similarly to what happened with the Human Genome and other -omics projects, on issues like exchanging modeling and simulation methods and tools, linking toxicity information to clinical and personal databases or developing new approaches for scientific ontologies, among many others. Content Type Journal Article Pages 1-19 DOI 10.1007/s00607-012-0191-2 Authors Victor Maojo, Biomedical Informatics Group, Departamento de Inteligencia Artificial, Faculdad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain Martin Fritts, SAIC-Frederick Inc., National Cancer Institute at Frederick, Frederick, MD, USA Fernando Martin-Sanchez, Health and Biomedical Informatics Research Laboratory Medical School-IBES, The University of Melbourne, Parkville, VIC, Australia Diana De la Iglesia, Biomedical Informatics Group, Departamento de Inteligencia Artificial, Faculdad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain Raul E. Cachau, Advanced Biomedical Computing Center, National Cancer Institute, SAIC-Frederick Inc., Frederick, MD, USA Miguel Garcia-Remesal, Biomedical Informatics Group, Departamento de Inteligencia Artificial, Faculdad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain Jose Crespo, Biomedical Informatics Group, Departamento de Inteligencia Artificial, Faculdad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain Joyce A. Mitchell, Biomedical Informatics Department, University of Utah, Salt Lake City, Utah, USA Alberto Anguita, Biomedical Informatics Group, Departamento de Inteligencia Artificial, Faculdad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain Nathan Baker, Pacific Northwest National Laboratory, Richland, WA, USA Jose Maria Barreiro, Biomedical Informatics Group, Departamento de Inteligencia Artificial, Faculdad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain Sonia E. Benitez, Department of Medical Informatics, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina Guillermo De la Calle, Biomedical Informatics Group, Departamento de Inteligencia Artificial, Faculdad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain Julio C. 
Facelli, Biomedical Informatics Department, University of Utah, Salt Lake City, Utah, USA Peter Ghazal, Division of Pathway Medicine and Centre for Systems Biology Edinburgh, University of Edinburgh, Edinburgh, UK Antoine Geissbuhler, Division of Medical Informatics, University Hospitals of Geneva, Geneva, Switzerland Fernando Gonzalez-Nilo, Center for Bioinformatics and Molecular Simulation, Universidad de Talca, Talca, Chile Norbert Graf, Department of Pediatric Oncology and Hematology, University of the Saarland, Homburg, Germany Pierre Grangeat, Laboratoire d’Electronique et de Technologie de l’Information (LETI), MINATEC Campus, Commissariat à l’Energie Atomique et aux Energies Alternatives (CEA), Grenoble, France Isabel Hermosilla, Department of Medical Bioinformatics, Institute of Health “Carlos III”, Madrid, Spain Rada Hussein, Information Technology Institute, Ministry of Communications and Information Technology, Cairo, Egypt Josipa Kern, Department of Medical Statistics, Epidemiology and Medical Informatics, Andrija Stampar School of Public Health, Zagreb University Medical School, Zagreb, Croatia Sabine Koch, Health Informatics Centre, Department of Learning, Informatics, Management and Ethics, Karolinska Institutet, Stockholm, Sweden Yannick Legre, Health Grid Organization, Clermont-Ferrand, France Victoria Lopez-Alonso, Department of Medical Bioinformatics, Institute of Health “Carlos III”, Madrid, Spain Guillermo Lopez-Campos, Department of Medical Bioinformatics, Institute of Health “Carlos III”, Madrid, Spain Luciano Milanesi, Institute of Biomedical Technology, National Research Council, Milan, Italy Vassilis Moustakis, Technical University of Crete, Chania, Crete, Greece Cristian Munteanu, Department of Information and Communication Technologies, University of A Coruña, A Coruña, Spain Paula Otero, Department of Medical Informatics, Hospital Italiano de Buenos Aires, Buenos Aires, Argentina Alejandro Pazos, Department of Information and Communication Technologies, University of A Coruña, A Coruña, Spain David Perez-Rey, Biomedical Informatics Group, Departamento de Inteligencia Artificial, Faculdad de Informática, Universidad Politécnica de Madrid, Boadilla del Monte, 28660 Madrid, Spain George Potamias, Institute of Computer Science, FORTH, Heraklion, Crete, Greece Ferran Sanz, Biomedical Informatics Research Programme (GRIB), IMIM, Universitat Pompeu Fabra, Barcelona, Spain Casimir Kulikowski, Department of Computer Science, Rutgers, The State University of New Jersey, Newark, USA Journal Computing Online ISSN 1436-5057 Print ISSN 0010-485X
    Print ISSN: 0010-485X
    Electronic ISSN: 1436-5057
    Topics: Computer Science
    Published by Springer
  • 4
    Publication Date: 2012-03-10
    Description: Establishing and successfully implementing new numerical computational techniques with a simulation tool is one of the most important and difficult tasks in carrying out real-time scientific astronomical and other sophisticated problems. The main focus of this paper is the introduction of a newly proposed simple embedded RKAHeM(4,4) technique to determine the distribution of thermodynamic variables inside protoplanets during the pre-collapse stage, formed by gravitational instability, for protoplanetary masses between 0.3 and 10 Jupiter masses. Convection is a significant concern for the transfer of heat inside the protoplanet, and the graphical solution demonstrates better performance of the newly proposed RKAHeM(4,4) algorithm for any length of time in comparison with the results of the classical Runge–Kutta method. Furthermore, the analytic expressions for the local truncation error, the global truncation error and error estimates are derived for better understanding. A viable quantitative analysis has been carried out to clearly visualize the quality and robustness of the proposed RKAHeM(4,4) algorithm. Content Type Journal Article Category Research Article Pages 1-9 DOI 10.1007/s12145-011-0093-4 Authors Sukumar Senthilkumar, School of Mathematical Sciences, Universiti Sains Malaysia, Pulau Pinang, 11800 Penang, Malaysia Gour Chandra Paul, School of Mathematical Sciences, Universiti Sains Malaysia, Pulau Pinang, 11800 Penang, Malaysia Journal Earth Science Informatics Online ISSN 1865-0481 Print ISSN 1865-0473
    Print ISSN: 1865-0473
    Electronic ISSN: 1865-0481
    Topics: Geosciences , Computer Science
    Published by Springer
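    A minimal illustrative sketch (not part of the catalogue record) of the classical fourth-order Runge–Kutta step that the abstract above uses as its baseline for comparison; the proposed embedded RKAHeM(4,4) scheme itself is not reproduced here, and the test equation is a made-up example.

        def rk4_step(f, t, y, h):
            """One classical RK4 step for dy/dt = f(t, y)."""
            k1 = f(t, y)
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

        # Example: dy/dt = -2y, y(0) = 1, integrated to t = 1; exact value exp(-2) = 0.135335.
        y, t, h = 1.0, 0.0, 0.1
        for _ in range(10):
            y = rk4_step(lambda t, y: -2.0 * y, t, y, h)
            t += h
        print(round(y, 6))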
  • 5
    Publication Date: 2012-03-09
    Description: The construction of a web site is a great challenge that integrates different elements such as the hyperlink structure, colors, pictures, movies and textual contents. In the latter, the correct textual content can be the key to attracting users to visit the site. In fact, many users visit a web site by using a web search engine such as, Google or Yahoo!, and continue exploring the site if it contains the information that they are looking for. In this paper, a methodology to extract the main words in a static web site is proposed. Furthermore, one of the key elements in this methodology is to determine which pages in a web site can further attract the users attention when they are browsing the site. These words are called web site keywords and by using them in the site textual content, significant improvements, from the point of view of the user, can be achieved. A web user's browsing behaviour can be classified in two categories: those of amateurs and experienced. The former is a user with little or no experience in using web-based systems. Their browsing behaviour is normally erratic and it can take them a considerable amount of time to find what they are looking for. The latter is a user with a greater amount of experience with web-based systems whose behaviour is more controlled and purpose driven, and thus takes them less time in determining whether the site contains worthwhile information. What is important, regarding the experienced web users is that there is a correlation between the amount of time spent on a webpage during a session and the extent to which they are interested in the page content. By using this characteristic, a feature vector is created in relation to the time spent on each page during a user's session. The described vectors are the input for two clustering algorithms: SOFM and K-means, which enables the extraction of significant patterns about users with similar or identical browsing behaviour and content preferences. Then, these patterns form the basis in identification of the web site keywords. In order to validate the proposed methodology, web data originated in a complex static web site belonging to a Chilean bank was used. From the clusters identified, a set of web site keywords were identified and their utility was tested on a group of real users, thus illustrating the effectiveness of the proposed methodology. Content Type Journal Article Pages 327-348 DOI 10.3233/IDA-2012-0526 Authors Juan D. Velasquez, Department of Industrial Engineering, University of Chile, Santiago, Chile. E-mail: jvelasqu@dii.uchile.cl Journal Intelligent Data Analysis Online ISSN 1571-4128 Print ISSN 1088-467X Journal Volume Volume 16 Journal Issue Volume 16, Number 2 / 2012
    Print ISSN: 1088-467X
    Electronic ISSN: 1571-4128
    Topics: Computer Science
    Published by IOS Press
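    A minimal illustrative sketch (not part of the catalogue record) of the session representation described in the abstract above: one feature vector per user session holding the time spent on each page, clustered here with K-means (the paper also uses SOFM). The page names, session data and cluster count are made-up values.

        import numpy as np
        from sklearn.cluster import KMeans

        pages = ["home", "loans", "cards", "investments"]   # hypothetical site pages
        sessions = [                                        # seconds spent per page per session
            {"home": 10, "loans": 120, "cards": 5},
            {"home": 8, "loans": 90, "investments": 15},
            {"home": 30, "cards": 140},
            {"home": 25, "cards": 160, "loans": 10},
        ]

        X = np.array([[s.get(p, 0.0) for p in pages] for s in sessions])
        labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

        # Pages with the largest average time within a cluster point to the content
        # (and hence the candidate web site keywords) that the cluster's users prefer.
        for k in range(2):
            centroid = X[labels == k].mean(axis=0)
            print(k, dict(zip(pages, centroid.round(1))))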
  • 6
    Publication Date: 2012-03-09
    Description: This paper presents a new design heuristic for hybrid classifier ensembles in machine learning. The heuristic entails inclusion of both global and local learners in the composition of base classifiers of a hybrid classifier ensemble, while also inducing both heterogeneous and homogenous diversity through the co-existence of global and local learners. Realization of the proposed heuristic is demonstrated within a hybrid ensemble classifier framework. The utility of proposed heuristic for enhancing the hybrid classifier ensemble performance is assessed and evaluated through a simulation study. Weka machine learning tool bench along with 46 datasets from the UCI machine learning repository are used. Simulation results indicate that the proposed heuristic enhances the performance of a hybrid classification ensemble. Content Type Journal Article Pages 233-246 DOI 10.3233/IDA-2012-0521 Authors Dustin Baumgartner, Electrical Engineering and Computer Science Department, University of Toledo, Toledo, OH, USA Gursel Serpen, Electrical Engineering and Computer Science Department, University of Toledo, Toledo, OH, USA Journal Intelligent Data Analysis Online ISSN 1571-4128 Print ISSN 1088-467X Journal Volume Volume 16 Journal Issue Volume 16, Number 2 / 2012
    Print ISSN: 1088-467X
    Electronic ISSN: 1571-4128
    Topics: Computer Science
    Published by IOS Press
  • 7
    Publication Date: 2012-03-09
    Description: Case-Based Reasoning (CBR) solves problems by reusing past problem-solving experiences. How these experiences are applied depends on the type of the problem. The method presented in this paper tries to overcome this difficulty in CBR for optimization problems by using the Bayesian Optimization Algorithm (BOA). BOA evolves a population of candidate solutions through constructing Bayesian networks and sampling them. After solving problems through BOA, Bayesian networks describing the features of the solutions are obtained. In our method, these Bayesian networks are stored in a case-base. For solving a new problem, the Bayesian networks of those problems which are similar to the new problem are retrieved and combined. This compound Bayesian network is used for generating the initial population and constructing the probabilistic models of BOA when solving the new problem. Our method improves CBR in two ways: first, in our method, how the knowledge stored in the case-base is used does not depend on the problem itself and is therefore universal; second, the method stores probabilistic descriptions of the previous solutions in order to make the stored knowledge more flexible. Experimental results showed that, in addition to the mentioned advantages, our method improved solution quality. Content Type Journal Article Pages 199-210 DOI 10.3233/IDA-2012-0519 Authors Marjan Kaedi, Department of Computer Engineering, University of Isfahan, Isfahan, Iran Nasser Ghasem-Aghaee, Department of Computer Engineering, University of Isfahan, Isfahan, Iran Journal Intelligent Data Analysis Online ISSN 1571-4128 Print ISSN 1088-467X Journal Volume Volume 16 Journal Issue Volume 16, Number 2 / 2012
    Print ISSN: 1088-467X
    Electronic ISSN: 1571-4128
    Topics: Computer Science
    Published by IOS Press
  • 8
    Publication Date: 2012-03-09
    Description: Feature selection is a preprocessing technique in the field of data analysis, which is used to reduce the number of features by removing irrelevant, noisy, and redundant data, thus resulting in acceptable classification accuracy. This process constitutes a commonly encountered problem of global combinatorial optimization. This paper presents a novel optimization algorithm called complementary distribution binary particle swarm optimization (CD-BPSO). CD-BPSO uses a complementary distribution strategy to improve the search capability of binary particle swarm optimization (BPSO) by facilitating global exploration and local exploitation via complementary particles and original particles, respectively. This complementary approach introduces new "complementary particles" into the search space. These new particles are generated by using half of all particles selected at random, and replace the selected particles when the fitness of the global best particle has not improved for a number of consecutive iterations. The K-nearest neighbor (K-NN) method with leave-one-out cross-validation (LOOCV) was used to evaluate the quality of the solutions. The proposed method was applied and compared to ten classification problems taken from the literature. Experimental results indicate that CD-BPSO improves on the BPSO algorithm with a complementary strategy that prevents entrapment in a local optimum. In the feature selection problem, BPSO preserves knowledge of good feature selection combinations in all the particles and thus the swarm can find optimum combinations of solutions by following the best particle, and either obtains higher classification accuracy or uses fewer features than other feature selection methods. Content Type Journal Article Pages 183-198 DOI 10.3233/IDA-2012-0518 Authors Li-Yeh Chuang, Institute of Biotechnology and Chemical Engineering, I-Shou University, Kaohsiung, Taiwan Cheng-Hong Yang, Department of Network Systems, Toko University, Chiayi, Taiwan Sheng-Wei Tsai, Department of Electronic Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung, Taiwan Journal Intelligent Data Analysis Online ISSN 1571-4128 Print ISSN 1088-467X Journal Volume Volume 16 Journal Issue Volume 16, Number 2 / 2012
    Print ISSN: 1088-467X
    Electronic ISSN: 1571-4128
    Topics: Computer Science
    Published by IOS Press
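    A minimal illustrative sketch (not part of the catalogue record) of the fitness evaluation described in the abstract above: a binary feature mask is scored by K-nearest-neighbor accuracy under leave-one-out cross-validation. The dataset and the random mask are placeholders, and the CD-BPSO particle update rules themselves are not reproduced.

        import numpy as np
        from sklearn.datasets import load_wine
        from sklearn.model_selection import LeaveOneOut, cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        X, y = load_wine(return_X_y=True)

        def fitness(mask):
            """LOOCV accuracy of 1-NN restricted to the selected features."""
            if not mask.any():
                return 0.0
            scores = cross_val_score(KNeighborsClassifier(n_neighbors=1),
                                     X[:, mask.astype(bool)], y, cv=LeaveOneOut())
            return scores.mean()

        particle = np.random.default_rng(0).integers(0, 2, size=X.shape[1])  # one random binary particle
        print("selected features:", int(particle.sum()), "fitness:", round(fitness(particle), 3))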
  • 9
    Publication Date: 2012-02-22
    Description: This article focuses on user experiences of reading location-aware news on a mobile platform and aims to explore what experiences this kind of locative journalism generates and how such experiences change the users’ social interaction with news. We produced a specially designed mobile application and news stories tailored to this project, called LocaNews, in order to explore participants’ relation to the content in this journalistic format. The results were generated through a field study and a questionnaire answered by 32 people to find out how they experience the news presented in this format. The participants’ responses are analyzed with respect to their news experiences, the contextualizing of places and their social interaction with the news within this form of journalism. Results showed that local, semi-local and non-local users approach the locative news in different manners, but that the average user found this kind of news more interesting and more informative than ordinary news. The participants also had difficulty identifying this as journalism rather than as an information service.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI Publishing
  • 10
    Publication Date: 2012-02-22
    Description:    The vision of Organic Computing addresses challenges that arise in the design of future information systems that are comprised of numerous, heterogeneous, resource-constrained and error-prone components. The notion organic highlights the idea that, in order to be manageable, such systems should exhibit self-organization, self-adaptation and self-healing characteristics similar to those of biological systems. In recent years, the principles underlying these characteristics are increasingly being investigated from the perspective of complex systems science, particularly using the conceptual framework of statistical physics and statistical mechanics. In this article, we review some of the interesting relations between statistical physics and networked systems and discuss applications in the engineering of organic overlay networks with predictable macroscopic properties. Content Type Journal Article Category HAUPTBEITRAG Pages 1-12 DOI 10.1007/s00287-012-0597-4 Authors Ingo Scholtes, Chair of Systems Design, ETH Zürich, 8032 Zürich, Switzerland Claudio Juan Tessone, Chair of Systems Design, ETH Zürich, 8032 Zürich, Switzerland Journal Informatik-Spektrum Online ISSN 1432-122X Print ISSN 0170-6012
    Print ISSN: 0170-6012
    Electronic ISSN: 1432-122X
    Topics: Computer Science
    Published by Springer
  • 11
    Publication Date: 2012-02-22
    Description: Future systems for monitoring large areas will be developed on the basis of distributed intelligent camera systems. Typical tasks of such systems are, for example, tracking and counting moving objects and analyzing their movement behavior. Each of these smart cameras is an autonomous node equipped with a pan/tilt/zoom (PTZ) actuator, processing resources and a communication interface. This article gives an overview of the research field of smart camera systems and presents examples of distributed control algorithms that enable system-wide self-organization. The term self-organization here comprises an integrated approach to self-configuration, self-optimization (smart cameras configure and optimize their fields of view) and self-healing (smart cameras take over the tasks of failed nodes). Smart camera systems are modeled as distributed systems based on ad hoc networks. This architecture makes it possible to avoid the scalability and fault-tolerance drawbacks of previous centralized approaches. Content Type Journal Article Category HAUPTBEITRAG Pages 1-9 DOI 10.1007/s00287-012-0594-7 Authors Jörg Hähner, Fakultät für Angewandte Informatik, Universität Augsburg, Eichleitnerstraße 30, 86159 Augsburg, Germany Uwe Jänen, Fakultät für Elektrotechnik und Informatik, Leibniz Universität Hannover, Appelstraße 4, 30167 Hannover, Germany Carsten Grenz, Fakultät für Elektrotechnik und Informatik, Leibniz Universität Hannover, Appelstraße 4, 30167 Hannover, Germany Martin Hoffmann, Volavis GmbH – IT & Vision Systems, Schuckenteichweg 31, 33818 Leopoldshöhe, Germany Journal Informatik-Spektrum Online ISSN 1432-122X Print ISSN 0170-6012
    Print ISSN: 0170-6012
    Electronic ISSN: 1432-122X
    Topics: Computer Science
    Published by Springer
  • 12
    Publication Date: 2012-02-22
    Description: In the energy system, balancing out differences between generation and consumption is an essential task. This so-called control reserve is mainly provided by large power plants, which adjust their output accordingly. In addition to large power plants, medium-sized plants and larger switchable loads are also used. This article presents a self-organizing concept through which smaller consumers such as washing machines, freezers or private combined heat and power units can likewise be integrated into the system. In contrast to large power plants, however, the capacity of these devices is not exclusively available for providing control reserve, since the owners' requirements regarding their primary function must be satisfied first; only the additionally available degrees of freedom can be used. A hierarchical observer/controller architecture allows the devices to be joined into a device pool via a P2P network. All devices can then communicate with one another in order to meet the owners' requirements and additionally provide further services for the energy system. This enables the devices to react at short notice and to negotiate new energy schedules locally, which can significantly reduce the demand for control reserve. Content Type Journal Article Category HAUPTBEITRAG Pages 1-10 DOI 10.1007/s00287-012-0595-6 Authors Andreas Kamper, Cirquent GmbH, Ludwig-Erhard-Straße 2, 76275 Ettlingen, Germany Hartmut Schmeck, Institut AIFB – Geb. 05.20, Karlsruher Institut für Technologie (KIT), Kaiserstr. 89, 76133 Karlsruhe, Germany Journal Informatik-Spektrum Online ISSN 1432-122X Print ISSN 0170-6012
    Print ISSN: 0170-6012
    Electronic ISSN: 1432-122X
    Topics: Computer Science
    Published by Springer
  • 13
    Publication Date: 2012-02-22
    Description: In recent years, automatic face recognition has made such great progress that commercial systems in some cases achieve better recognition rates than human test subjects. Nevertheless, recognition remains difficult in various situations. We present a system that is based on rank-list similarity and, true to the principles of Organic Computing, learns changes of the situation purely from examples. We demonstrate its success for pose and illumination changes on a large database of Chinese faces. The evaluation can be carried out by a fast pulse-based neural network. Content Type Journal Article Category HAUPTBEITRAG Pages 1-7 DOI 10.1007/s00287-012-0598-3 Authors Marco K. Müller, Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany Michael Tremer, Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany Christian Bodenstein, Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany Rolf P. Würtz, Institut für Neuroinformatik, Ruhr-Universität Bochum, 44780 Bochum, Germany Journal Informatik-Spektrum Online ISSN 1432-122X Print ISSN 0170-6012
    Print ISSN: 0170-6012
    Electronic ISSN: 1432-122X
    Topics: Computer Science
    Published by Springer
  • 14
    Publication Date: 2012-02-22
    Description:    Organic computing (OC) recognizes that the behaviors of deployed systems can be much more interesting than those predicted by simulation. By exploiting self-X properties such as self-organization, self-optimization, self-protection, and context-awareness, OC researchers are developing methods for creating robust, trustworthy systems. This paper presents two examples of unanticipated behaviors that we have observed in our OC test bed of robotic toy cars, an unmodeled phenomenon with surprisingly large effects and an agent behavior that was believed (incorrectly) to be so rare that it could be ignored. We discuss the use of computational reflection as a tool for identifying such situations, consider the challenges posed by the large variety of such situations faced by real systems, and list lessons learned about the importance of test beds for advancing OC research. Content Type Journal Article Category HAUPTBEITRAG Pages 1-6 DOI 10.1007/s00287-012-0596-5 Authors Phyllis R. Nelson, Department of Electrical and Computer Engineering, California State Polytechnic University, Pomona, California 91768, USA Minhdao H. Nguyen, Department of Electrical and Computer Engineering, California State Polytechnic University, Pomona, California 91768, USA Chien-Cheng J. Yang, Department of Electrical and Computer Engineering, California State Polytechnic University, Pomona, California 91768, USA Journal Informatik-Spektrum Online ISSN 1432-122X Print ISSN 0170-6012
    Print ISSN: 0170-6012
    Electronic ISSN: 1432-122X
    Topics: Computer Science
    Published by Springer
  • 15
    Publication Date: 2012-03-13
    Description: Readability indices measure how easy or difficult it is to read and comprehend a text. In this paper we look at the relation between readability indices and web documents from two different perspectives. On the one hand we analyse how to reliably measure the readability of web documents by applying content extraction techniques and incorporating a bias correction. On the other hand we investigate how web based corpus statistics can be used to measure readability in a novel and language independent way.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI Publishing
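    A minimal illustrative sketch (not part of the catalogue record) of a classical readability index of the kind the abstract above builds on: the Automated Readability Index, computed from character, word and sentence counts. The web-corpus-based, language-independent measure the authors propose is not reproduced here.

        import re

        def automated_readability_index(text):
            sentences = max(1, len(re.findall(r"[.!?]+", text)))
            words = re.findall(r"[A-Za-z']+", text)
            chars = sum(len(w) for w in words)
            return 4.71 * chars / max(1, len(words)) + 0.5 * len(words) / sentences - 21.43

        sample = ("Readability indices measure how easy or difficult it is to read "
                  "and comprehend a text. Shorter words and sentences score as easier.")
        print(round(automated_readability_index(sample), 1))  # higher values indicate harder text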
  • 16
    Publication Date: 2012-03-14
    Description: With the emergence of Web 2.0, new applications arise and evolve into more interactive forms of collective intelligence. These applications offer to both professionals and citizens an open and expanded access to geographic information. In this paper, we develop the conceptual foundations of a new technology solution called WikiGIS. WikiGIS’s strength lies in its ability to ensure the traceability of changes in spatial-temporal geographic components (geometric location and shape, graphics: iconography and descriptive) generated by users. The final use case highlights to what extent WikiGIS could be a relevant and useful technological innovation in Geocollaboration.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI Publishing
  • 17
    Publication Date: 2012-03-15
    Description: The Bologna Process initiated a radical change within higher education institutions. This change triggered the creation of new administrative procedures in the every day life of European universities. It also gave rise to the emergence of new concepts for the description of curricula. It is critical for the successful continuation of this process to support the publication and exchange of information among universities. With this aim in mind, we created the Bowlogna Ontology to model an academic setting as proposed by the Bologna reform. In this paper, we present our efforts to design this ontology and the entire process that lead to its creation starting from the definition of a linguistic lexicon derived from the Bologna reform and its conversion to a formal ontology. We also describe practical applications of our ontology for end-users at universities (such as a faceted search and browsing system for course information). Content Type Journal Article Pages - DOI 10.3233/SW-2012-0064 Authors Gianluca Demartini, eXascale Infolab, University of Fribourg, Switzerland Iliya Enchev, eXascale Infolab, University of Fribourg, Switzerland Joël Gapany, Faculty of Humanities, University of Fribourg, Switzerland, E-mail: {firstname.lastname}@unifr.ch Philippe Cudré-Mauroux, eXascale Infolab, University of Fribourg, Switzerland Journal Semantic Web Online ISSN 2210-4968 Print ISSN 1570-0844
    Print ISSN: 1570-0844
    Electronic ISSN: 2210-4968
    Topics: Computer Science
    Published by IOS Press
  • 18
    Publication Date: 2012-03-15
    Description: In this paper, we integrate a grid system and a wireless network to present a convenient computational service system, called the Semi-Preemptive Computational Service system (SePCS for short), which provides users with a wireless access environment and through which a user can share his/her resources with others. In the SePCS, each node is dynamically given a score based on its CPU level, available memory size, current length of waiting queue, CPU utilization and bandwidth. With the scores, resource nodes are classified into three levels. User requests based on their time constraints are also classified into three types. Resources of higher levels are allocated to more tightly constrained requests so as to increase the total performance of the system. To achieve this, a resource broker with the Semi-Preemptive Algorithm (SPA) is also proposed. When the resource broker cannot find suitable resources for the requests of higher type, it preempts the resource that is now executing a lower type request so that the request of higher type can be executed immediately. The SePCS can be applied to a Vehicular Ad Hoc Network (VANET), users of which can then exploit the convenient mobile network services and the wireless distributed computing. As a result, the performance of the system is higher than that of the tested schemes.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
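    A minimal illustrative sketch (not part of the catalogue record) of the node-scoring idea described in the abstract above: each resource node is scored from its CPU level, free memory, waiting-queue length, CPU utilization and bandwidth, and then binned into one of three levels. The weights and thresholds are hypothetical and not taken from the paper.

        from dataclasses import dataclass

        @dataclass
        class Node:
            cpu_level: float    # normalized 0..1
            free_mem: float     # normalized 0..1
            queue_len: int      # pending requests in the waiting queue
            utilization: float  # busy fraction 0..1
            bandwidth: float    # normalized 0..1

        def score(n):
            # More CPU, memory and bandwidth raise the score; a long waiting queue
            # and high current utilization lower it. Weights are illustrative only.
            return (0.3 * n.cpu_level + 0.2 * n.free_mem + 0.2 * n.bandwidth
                    - 0.15 * min(n.queue_len, 10) / 10 - 0.15 * n.utilization)

        def level(s):
            return 1 if s >= 0.5 else 2 if s >= 0.3 else 3  # level 1 = best resources

        n = Node(cpu_level=0.9, free_mem=0.7, queue_len=2, utilization=0.4, bandwidth=0.8)
        print(round(score(n), 3), "-> level", level(score(n)))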
  • 19
    Publication Date: 2012-03-16
    Description: A new efficient heuristic algorithm has been developed for the dynamic facility layout problem with budget constraint (DFLPB) using optimization via simulation technique. The heuristic integrates integer programming and discrete event simulation to address DFLPB. In the proposed algorithm, the nonlinear model of the DFLP has been changed to a pure integer programming (PIP) model. Then, the optimal solution of the PIP model has been used in a simulation model that has been designed in a similar manner as the DFLP for determining the probability of assigning a facility to a location. After a sufficient number of runs, the simulation model obtains near optimum solutions. Finally, to test the performance of the algorithm, several test problems have been taken from the literature and solved. The results show that the proposed algorithm is more efficient in terms of speed and accuracy than other heuristic algorithms presented in previous works.
    Print ISSN: 1687-5591
    Electronic ISSN: 1687-5605
    Topics: Computer Science , Technology
    Published by Hindawi
  • 20
    Publication Date: 2012-02-02
    Description: This paper presents a benchmarking, performance analysis and optimization study of the OP2 ‘active’ library, which provides an abstraction framework for the parallel execution of unstructured mesh applications. OP2 aims to decouple the scientific specification of the application from its parallel implementation, and thereby achieve code longevity and near-optimal performance through re-targeting the application to execute on different multi-core/many-core hardware. Runtime performance results are presented for a representative unstructured mesh application on a variety of many-core processor systems, including traditional X86 architectures from Intel (Xeon based on the older Penryn and current Nehalem micro-architectures) and GPU offerings from NVIDIA (GTX260, Tesla C2050). Our analysis demonstrates the contrasting performance between the use of CPU (OpenMP) and GPU (CUDA) parallel implementations for the solution of an industrial-sized unstructured mesh consisting of about 1.5 million edges. Results show the significance of choosing the correct partition and thread-block configuration, the factors limiting the GPU performance and insights into optimizations for improved performance.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
  • 21
    Publication Date: 2012-02-02
    Description: In this paper, we describe the integrated power, area and thermal modeling framework in the structural simulation toolkit (SST) for large-scale high performance computer simulation. It integrates various power and thermal modeling tools and computes run-time energy dissipation for core, network on chip, memory controller and shared cache. It also provides functionality to update the leakage power as temperature changes. We illustrate the utilization of the framework by applying it to explore interconnect options in manycore systems with consideration of temperature variation and leakage feedback. We compare power, energy-delay-area product (EDAP) and energy-delay product (EDP) of four manycore configurations-1 core, 2 cores, 4 cores and 8 cores per cluster. Results from simulation with or without consideration of temperature variation both show that the 4-core per cluster configuration has the best EDAP and EDP. Even so, considering that temperature variation increases total power dissipation, we demonstrate the importance of considering temperature variation in the design flow. With this power, area and thermal modeling capability, the SST can be used for hardware/software co-design of future exascale systems.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
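    A minimal illustrative sketch (not part of the catalogue record) of the figures of merit compared in the abstract above: the energy-delay product (EDP = energy x delay) and the energy-delay-area product (EDAP = energy x delay x area). The per-configuration numbers are made up purely to show the arithmetic and are not the paper's results.

        def edp(energy_j, delay_s):
            return energy_j * delay_s

        def edap(energy_j, delay_s, area_mm2):
            return energy_j * delay_s * area_mm2

        configs = {  # cores per cluster -> (energy [J], delay [s], area [mm^2]); illustrative values
            1: (120.0, 2.0, 400.0),
            2: (110.0, 1.6, 380.0),
            4: (105.0, 1.3, 390.0),
            8: (115.0, 1.2, 430.0),
        }
        for cores, (e, d, a) in configs.items():
            print(f"{cores} cores/cluster: EDP={edp(e, d):.0f}  EDAP={edap(e, d, a):.0f}")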
  • 22
    Publication Date: 2012-02-02
    Description: With the advent of heterogeneous computing systems consisting of multi-core central processing units (CPUs) and many-core graphics processing units (GPUs), robust methods are needed to facilitate fair benchmark comparisons between different systems. In this paper, we present a benchmarking methodology for measuring a number of performance metrics for heterogeneous systems. Methods for comparing performance and energy efficiency are included. Consideration is given to further metrics, such as associated running costs and carbon emissions. We give a case study for these metrics applied to Bristol University Docking engine, a molecular mechanics-based docking application that has been ported to open computing language at the University of Bristol. Results are included for both AMD and NVIDIA GPUs, and for a highly optimized code on the latest x86 CPUs.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
  • 23
    Publication Date: 2012-02-02
    Description: Java embedded systems often include Java middleware classes installed on the client device. For higher performance, they can be compiled into machine code before runtime using an ahead-of-time compiler (AOTC). There are many approaches to AOTC, yet a bytecode-to-C (b-to-C) AOTC which translates the bytecode into the C code and then compiles it using an existing optimizing compiler such as gcc would be the most straightforward one. This paper explores a few important design and optimization issues of a b-to-C AOTC, including the compilation form for the translated C code, the call interfaces among translated and interpreted Java methods, and Java-specific optimizations by the AOTC that can complement the gcc optimizations. We evaluate these issues with our b-to-C AOTC implemented on the MIPS platform for the Sun's CDC VM to understand their performance impact.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
  • 24
    Publication Date: 2012-02-02
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
  • 25
    Publication Date: 2012-02-06
    Description: XML Indexing and Storage (XMIS) techniques are crucial for the functionality and the overall performance of an XML database management system (XDBMS). Because of the complexity of XQuery and performance demands of XML query processing, efficient path processing operators, including those for tree-pattern queries (so-called twigs), are urgently needed, for which tailor-made indexes and their flexible use are indispensable. Although XML indexing and storage are standard problems and, of course, manifold approaches have been proposed in the last decade, adaptive and broad-enough solutions for satisfactory query evaluation support of all path processing operators are missing in the XDBMS context. Therefore, we think that it is worthwhile to take a step back and look at the complete picture to derive a salient and holistic solution. To do so, we first compile an XMIS wish list containing what, in our opinion, are essential functional storage and indexing requirements in a modern XDBMS. With these desiderata in mind, we then develop a new XMIS scheme, which, by reconsidering previous work, can be seen as a practical and general approach to XML storage and indexing. Interestingly, by working on both problems at the same time, we can make the storage and index managers live in a kind of symbiotic partnership, because the document store re-uses ideas originally proposed by the indexing community and vice versa. The XMIS scheme is implemented in XTC, an XDBMS used for empirical tests. Content Type Journal Article Category Regular Paper Pages 1-18 DOI 10.1007/s00450-012-0204-6 Authors Christian Mathis, Dept. of Computer Science, University of Kaiserslautern, P.O. Box 3049, 67663 Kaiserslautern, Germany Theo Härder, Dept. of Computer Science, University of Kaiserslautern, P.O. Box 3049, 67663 Kaiserslautern, Germany Karsten Schmidt, Dept. of Computer Science, University of Kaiserslautern, P.O. Box 3049, 67663 Kaiserslautern, Germany Sebastian Bächle, Dept. of Computer Science, University of Kaiserslautern, P.O. Box 3049, 67663 Kaiserslautern, Germany Journal Computer Science - Research and Development Online ISSN 1865-2042 Print ISSN 1865-2034
    Print ISSN: 1865-2034
    Electronic ISSN: 1865-2042
    Topics: Computer Science
    Published by Springer
  • 26
    Publication Date: 2012-02-09
    Description: In this paper we present a methodology which enables the graphical representation, in a bi-dimensional Euclidean space, of atmospheric pollutants emissions in European countries. This approach relies on the use of Multidimensional Unfolding (MDU), an exploratory multivariate data analysis technique. This technique illustrates both the relationships between the emitted gases and the gases and their geographical origins. The main contribution of this work concerns the evaluation of MDU solutions. We use simulated data to define thresholds for the model fitting measures, allowing the MDU output quality evaluation. The quality assessment of the model adjustment is thus carried out as a step before interpretation of the gas types and geographical origins results. The MDU maps analysis generates useful insights, with an immediate substantive result and enables the formulation of hypotheses for further analysis and modeling. Content Type Journal Article Pages 153-164 DOI 10.3233/IDA-2011-0515 Authors Ana Alexandra A.F. Martins, Department of Mathematics, Instituto Superior de Engenharia de Lisboa, Lisboa, Portugal Margarida G.M.S. Cardoso, Department of Quantitative Methods, Business School, ISCTE – Lisbon University Institute, Lisboa, Portugal Iola M.S. Pinto, Department of Mathematics, Instituto Superior de Engenharia de Lisboa, Lisboa, Portugal Journal Intelligent Data Analysis Online ISSN 1571-4128 Print ISSN 1088-467X Journal Volume Volume 16 Journal Issue Volume 16, Number 1 / 2012
    Print ISSN: 1088-467X
    Electronic ISSN: 1571-4128
    Topics: Computer Science
    Published by IOS Press
  • 27
    Publication Date: 2012-02-11
    Description: A central problem in the area of Process Mining is to obtain a formal model that represents selected behavior of a system. The theory of regions has been applied to address this problem, enabling the derivation of a Petri net whose language includes a set of traces. However, when dealing with real-life systems, the available tool support for performing such a task is unsatisfactory, due to the complex algorithms that are required. In this paper, the theory of regions is revisited to devise a novel technique that explores the space of regions by combining the elements of a region basis. Due to its light space requirements, the approach can represent an important step for bridging the gap between the theory of regions and its industrial application. Experimental results show that there is improvement in orders of magnitude in comparison with state-of-the-art tools for the same task. Content Type Journal Article Pages 343-376 DOI 10.3233/FI-2011-612 Authors Marc Solé, Universitat Politècnica de Catalunya, Barcelona, Spain. msole@ac.upc.edu, jcarmona@lsi.upc.edu Josep Carmona, Universitat Politècnica de Catalunya, Barcelona, Spain. msole@ac.upc.edu, jcarmona@lsi.upc.edu Journal Fundamenta Informaticae Online ISSN 1875-8681 Print ISSN 0169-2968 Journal Volume Volume 113 Journal Issue Volume 113, Number 3-4 / 2011
    Print ISSN: 0169-2968
    Electronic ISSN: 1875-8681
    Topics: Computer Science
    Published by IOS Press
  • 28
    Publication Date: 2012-02-11
    Description: Literature on the stubborn set and similar state space reduction methods presents numerous seemingly ad-hoc conditions for selecting the transitions that are investigated in the current state. There are good reasons to believe that the choice between them has a significant effect on reduction results, but not much has been published on this topic. This article presents theoretical results and examples that aim at shedding light on the issue. Because the topic is extensive, we only consider the detection of deadlocks. We distinguish between different places where choices can be made and investigate their effects. It is usually impractical to aim at choices that are “best” in some sense. However, one non-trivial practical optimality result is proven. Content Type Journal Article Pages 377-397 DOI 10.3233/FI-2011-613 Authors Antti Valmari, Tampere University of Technology, Department of Software Systems, PO Box 553, FI-33101 Tampere, Finland. {antti.valmari,henri.hansen}@tut.fi Henri Hansen, Tampere University of Technology, Department of Software Systems, PO Box 553, FI-33101 Tampere, Finland. {antti.valmari,henri.hansen}@tut.fi Journal Fundamenta Informaticae Online ISSN 1875-8681 Print ISSN 0169-2968 Journal Volume Volume 113 Journal Issue Volume 113, Number 3-4 / 2011
    Print ISSN: 0169-2968
    Electronic ISSN: 1875-8681
    Topics: Computer Science
    Published by IOS Press
  • 29
    Publication Date: 2012-02-11
    Description: Separability in Petri nets means the property for a net k · N with an initial marking k · M to behave in the same way as k parallel instances of the same net N with an initial marking M, thus divided by k. We prove the separability of plain, bounded, reversible and persistent Petri nets, a class of nets that extends the well-known live and bounded marked graphs. We establish first a weak form of separability, already known to hold for marked graphs, in which every firing sequence of k · N is simulated by a firing sequence of k parallel instances of N with an identical firing count. We establish on top of this a strong form of separability, in which every firing sequence of k · N is simulated by an identical firing sequence of k parallel instances of N. Content Type Journal Article Pages 179-203 DOI 10.3233/FI-2011-606 Authors Eike Best, Parallel Systems, Department of Computing Science, Carl von Ossietzky Universität Oldenburg, D-26111 Oldenburg, Germany. eike.best@informatik.uni-oldenburg.de Philippe Darondeau, INRIA, Centre Rennes - Bretagne Atlantique, Campus de Beaulieu, F-35042 Rennes Cedex, France. Philippe.Darondeau@inria.fr Journal Fundamenta Informaticae Online ISSN 1875-8681 Print ISSN 0169-2968 Journal Volume Volume 113 Journal Issue Volume 113, Number 3-4 / 2011
    Print ISSN: 0169-2968
    Electronic ISSN: 1875-8681
    Topics: Computer Science
    Published by IOS Press
  • 30
    Publication Date: 2012-02-11
    Description: The combined trace (i.e., comtrace) notion was introduced by Janicki and Koutny in 1995 as a generalization of the Mazurkiewicz trace notion. Comtraces are congruence classes of step sequences, where the congruence relation is defined from two relations simultaneity and serializability on events. They also showed that comtraces correspond to some class of labeled stratified order structures, but left open the question of what class of labeled stratified orders represents comtraces. In this work, we proposed a class of labeled stratified order structures that captures exactly the comtrace notion. Our main technical contributions are representation theorems showing that comtrace quotient monoid, combined dependency graph (Kleijn and Koutny 2008) and our labeled stratified order structure characterization are three different and yet equivalent ways to represent comtraces. This paper is a revised and expanded version of Lê (in Proceedings of PETRI NETS 2010, LNCS 6128, pp. 104-124). Content Type Journal Article Pages 265-293 DOI 10.3233/FI-2011-609 Authors Dai Tri Man Lê, Department of Computer Science, University of Toronto, Toronto, ON, M5S 3G4 Canada. ledt@cs.toronto.edu Journal Fundamenta Informaticae Online ISSN 1875-8681 Print ISSN 0169-2968 Journal Volume Volume 113 Journal Issue Volume 113, Number 3-4 / 2011
    Print ISSN: 0169-2968
    Electronic ISSN: 1875-8681
    Topics: Computer Science
    Published by IOS Press
  • 31
    Publication Date: 2012-02-11
    Description: In this paper, we introduce a possibilistic argumentation-based decision making framework which is able to capture uncertain information and exceptions/defaults. In particular, we define the concept of a possibilistic decision making framework which is based on a possibilistic default theory, a set of decisions and a set of prioritized goals. This set of goals captures user preferences related to the achievement of a particular state in a decision making problem. By considering the inference of the possibilistic well-founded semantics, the concept of argument with respect to a decision is defined. This argument captures the feasibility of reaching a goal by applying a decision in a given context. The inference in the argumentation decision making framework is based on basic argumentation semantics. Since some basic argumentation semantics can infer more than one possible scenario of a possibilistic decision making problem, we define some criteria for selecting potential solutions of the problem. Content Type Journal Article Pages 41-61 DOI 10.3233/FI-2011-598 Authors Juan Carlos Nieves, Universitat Politècnica de Catalunya, Dept. Llenguatges i Sistemes Informàtics, Spain. jcnieves@lsi.upc.edu Roberto Confalonieri, Universitat Politècnica de Catalunya, Dept. Llenguatges i Sistemes Informàtics, Spain. confalonieri@lsi.upc.edu Journal Fundamenta Informaticae Online ISSN 1875-8681 Print ISSN 0169-2968 Journal Volume Volume 113 Journal Issue Volume 113, Number 1 / 2011
    Print ISSN: 0169-2968
    Electronic ISSN: 1875-8681
    Topics: Computer Science
    Published by IOS Press
  • 32
    Publication Date: 2012-02-13
    Description:    This paper describes an on-board vision sensor system that is developed specifically for small unmanned vehicle applications. For small vehicles, vision sensors have many advantages, including size, weight, and power consumption, over other sensors such as radar, sonar, and laser range finder, etc. A vision sensor is also uniquely suited for tasks such as target tracking and recognition that require visual information processing. However, it is difficult to meet the computing needs of real-time vision processing on a small robot. In this paper, we present the development of a field programmable gate array-based vision sensor and use a small ground vehicle to demonstrate that this vision sensor is able to detect and track features on a user-selected target from frame to frame and steer the small autonomous vehicle towards it. The sensor system utilizes hardware implementations of the rank transform for filtering, a Harris corner detector for feature detection, and a correlation algorithm for feature matching and tracking. With additional capabilities supported in software, the operational system communicates wirelessly with a base station, receiving commands, providing visual feedback to the user and allowing user input such as specifying targets to track. Since this vision sensor system uses reconfigurable hardware, other vision algorithms such as stereo vision and motion analysis can be implemented to reconfigure the system for other real-time vision applications. Content Type Journal Article Category Original Paper Pages 1-13 DOI 10.1007/s00138-012-0413-9 Authors Beau J. Tippetts, Electrical and Computer Engineering Department, Brigham Young University, Provo, UT 84602, USA Dah-Jye Lee, Electrical and Computer Engineering Department, Brigham Young University, Provo, UT 84602, USA James K. Archibald, Electrical and Computer Engineering Department, Brigham Young University, Provo, UT 84602, USA Journal Machine Vision and Applications Online ISSN 1432-1769 Print ISSN 0932-8092
    Print ISSN: 0932-8092
    Electronic ISSN: 1432-1769
    Topics: Computer Science
    Published by Springer
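    A minimal illustrative sketch (not part of the catalogue record) of the feature-detection stage named in the abstract above, a Harris corner detector, run here in software with OpenCV on a synthetic test image; the paper implements this stage, together with rank-transform filtering and correlation-based matching, in FPGA hardware.

        import cv2
        import numpy as np

        # Synthetic 200x200 grayscale image containing a bright square, which has four strong corners.
        img = np.zeros((200, 200), dtype=np.uint8)
        img[60:140, 60:140] = 255

        response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
        corners = np.argwhere(response > 0.01 * response.max())
        print(len(corners), "corner pixels detected, centred near", corners.mean(axis=0).round(1))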
  • 33
    Publication Date: 2012-02-16
    Description: We previously identified the appearance of negative Magnus lift on a sphere rotating about an axis perpendicular to an incoming flow at a critical Reynolds number using large-eddy simulation and obtained the statistically time-averaged lift and pressure coefficients around the sphere. We have now numerically investigated the unsteady characteristics of the boundary layer around a rotating sphere at three Reynolds numbers (1.0 × 10^4, 2.0 × 10^5, and 1.14 × 10^6). At a Reynolds number in the subcritical or supercritical region, the direction of the lift force followed the Magnus effect independent of the rotational speed. In contrast, at the critical Reynolds number when a particular rotational speed was imposed, negative lift was observed and a boundary-layer transition occurred only on one side of the sphere, as indicated by the visualization of the vortical structures around the sphere. A change in these structures and a shift of the separation points along with a change in the Reynolds number or rotational speed of the sphere were investigated in the context of boundary-layer transition by using visualization around the sphere. Content Type Journal Article Category Regular Paper Pages 1-8 DOI 10.1007/s12650-012-0125-2 Authors Masaya Muto, High Efficiency Power Generation Sector, Energy Engineering Research Laboratory, Central Research Institute of Electric Power Industry, 2-6-1 Nagasaka, Yokosuka, Kanagawa 240-0196, Japan Makoto Tsubokura, Division of Mechanical and Space Engineering, Faculty of Engineering, Hokkaido University, Kita 13, Nishi 8, Kita-ku, Sapporo, Hokkaido 060-8628, Japan Nobuyuki Oshima, Division of Mechanical and Space Engineering, Faculty of Engineering, Hokkaido University, Kita 13, Nishi 8, Kita-ku, Sapporo, Hokkaido 060-8628, Japan Journal Journal of Visualization Online ISSN 1875-8975 Print ISSN 1343-8875
    Print ISSN: 1343-8875
    Electronic ISSN: 1875-8975
    Topics: Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 34
    Publication Date: 2012-02-17
    Description: The Semantic Web, especially in the light of the current focus on its nature as a Web of Data, is a data-centric system, and arguably the largest such system in existence. Data is being created, published, exported, imported, used, transformed and re-used, by different parties and for different purposes. Together, these actions form a lifecycle of data on the Semantic Web. Understanding this lifecycle will help to better understand the nature of data on the SW, to explain paradigm shifts, to compare the functionality of different platforms, to aid the integration of previously disparate implementation efforts or to position various actors on the SW and relate them to each other. However, while conceptualisations of many aspects of the SW exist, no exhaustive data lifecycle has been proposed. This paper proposes a data lifecycle model for the Semantic Web by first looking outward, and performing an extensive survey of lifecycle models in other data-centric domains, such as digital libraries, multimedia, eLearning, knowledge and Web content management or ontology development. For each domain, an extensive list of models is taken from the literature, and then described and analysed in terms of its different phases, actor roles and other characteristics. By contrasting and comparing the existing models, a meta vocabulary of lifecycle models for data-centric systems — the Abstract Data Lifecycle Model, or ADLM — is developed. In particular, a common set of lifecycle phases, lifecycle features and lifecycle roles is established, as well as additional actor features and generic features of data and metadata. This vocabulary now provides a tool to describe each individual model, relate them to each other, determine similarities and overlaps and eventually establish a new such model for the Semantic Web. Content Type Journal Article Pages - DOI 10.3233/SW-2012-0060 Authors Knud Möller, DERI, National University of Ireland, Galway, Ireland, E-mail: knud.moeller@deri.org Journal Semantic Web Online ISSN 2210-4968 Print ISSN 1570-0844
    Print ISSN: 1570-0844
    Electronic ISSN: 2210-4968
    Topics: Computer Science
    Published by IOS Press
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 35
    facet.materialart.
    Unknown
    IOS Press
    Publication Date: 2012-02-17
    Description: We present BibBase, a system for publishing and managing bibliographic data available in BibTeX files. BibBase uses a powerful yet lightweight approach to transform BibTeX files into rich Linked Data as well as custom HTML code and an RSS feed that can readily be integrated within a user's website, while the data can instantly be queried online on the system's SPARQL endpoint. In this paper, we present an overview of several features of our system. We outline several challenges involved in on-the-fly transformation of highly heterogeneous BibTeX files into high-quality Linked Data, and present our solution to these challenges. Content Type Journal Article Pages - DOI 10.3233/SW-2012-0062 Authors Reynold S. Xin, Department of EECS, University of California, Berkeley, California, USA E-mail: rxin@cs.berkeley.edu Oktie Hassanzadeh, Department of Computer Science, University of Toronto, 10 King's College Rd., Toronto, Ontario, M5S 3G4, Canada E-mail: {oktie,shirin,miller}@cs.toronto.edu Christian Fritz, Palo Alto Research Center, 3333 Coyote Hill Road, Palo Alto, California, USA E-mail: cfritz@parc.com Shirin Sohrabi, Department of Computer Science, University of Toronto, 10 King's College Rd., Toronto, Ontario, M5S 3G4, Canada E-mail: {oktie,shirin,miller}@cs.toronto.edu Renée J. Miller, Department of Computer Science, University of Toronto, 10 King's College Rd., Toronto, Ontario, M5S 3G4, Canada E-mail: {oktie,shirin,miller}@cs.toronto.edu Journal Semantic Web Online ISSN 2210-4968 Print ISSN 1570-0844
    Print ISSN: 1570-0844
    Electronic ISSN: 2210-4968
    Topics: Computer Science
    Published by IOS Press
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 36
    Publication Date: 2012-02-17
    Description: Motivation: High-throughput sequencing (HTS) technologies have made low-cost sequencing of large numbers of samples commonplace. An explosion in the type, not just number, of sequencing experiments has also taken place including genome re-sequencing, population-scale variation detection, whole transcriptome sequencing and genome-wide analysis of protein-bound nucleic acids. Results: We present Artemis as a tool for integrated visualization and computational analysis of different types of HTS datasets in the context of a reference genome and its corresponding annotation. Availability: Artemis is freely available (under a GPL licence) for download (for MacOSX, UNIX and Windows) at the Wellcome Trust Sanger Institute websites: http://www.sanger.ac.uk/resources/software/artemis/ . Contact: artemis@sanger.ac.uk ; tjc@sanger.ac.uk
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 37
    Publication Date: 2012-02-17
    Description: : MicroRNAs (miRNAs) are small regulatory ribonucleic acid molecules that act via mRNA degradation or translational repression. Although many miRNAs are ubiquitously expressed, a small subset has differential expression patterns that may give rise to tissue-specific complexes. Motivation: This work studies gene targeting patterns amongst miRNAs with differential expression profiles, and links this to control and regulation of protein complexes. Results: We find that, when a pair of miRNAs are not expressed in the same tissues, there is a higher tendency for them to target the direct partners of the same hub proteins. At the same time, they also avoid targeting the same set of hub-spokes. Moreover, the complexes corresponding to these hub-spokes tend to be specific and nonoverlapping. This suggests that the effect of miRNAs on the formation of complexes is specific. Contact: wongls@comp.nus.edu.sg Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 38
    Publication Date: 2012-02-17
    Description: Motivation: Small interfering RNAs (siRNAs) are produced from much longer sequences of double-stranded RNA precursors through cleavage by Dicer or a Dicer-like protein. These small RNAs play a key role in genetic and epigenetic regulation; however, a full understanding of the mechanisms by which they operate depends on the characterization of the precursors from which they are derived. Results: High-throughput sequencing of small RNA populations allows the locations of the double-stranded RNA precursors to be inferred. We have developed methods to analyse small RNA sequencing data from multiple biological sources, taking into account replicate information, to identify robust sets of siRNA precursors. Our methods show good performance on both a set of small RNA sequencing data in Arabidopsis thaliana and simulated datasets. Availability: Our methods are available as the Bioconductor ( www.bioconductor.org ) package segmentSeq (version 1.5.6 and above). Contact: tjh48@cam.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 39
    Publication Date: 2012-02-17
    Description: Motivation: Intrinsically disordered regions are key for the function of numerous proteins, and the scant available experimental annotations suggest the existence of different disorder flavors. While efficient predictions are required to annotate entire genomes, most existing methods require sequence profiles for disorder prediction, making them cumbersome for high-throughput applications. Results: In this work, we present an ensemble of protein disorder predictors called ESpritz. These are based on bidirectional recursive neural networks and trained on three different flavors of disorder, including a novel NMR flexibility predictor. ESpritz can produce fast and accurate sequence-only predictions, annotating entire genomes in the order of hours on a single processor core. Alternatively, a slower but slightly more accurate ESpritz variant using sequence profiles can be used for applications requiring maximum performance. Two levels of prediction confidence allow users either to maximize reasonable disorder detection or to limit expected false positives to 5%. ESpritz performs consistently well on the recent CASP9 data, reaching an S_w measure of 54.82 and an area under the receiver operating characteristic curve of 0.856. The fast predictor is four orders of magnitude faster and remains better than most publicly available CASP9 methods, making it ideal for genomic scale predictions. Conclusions: ESpritz predicts three flavors of disorder at two distinct false positive rates, either with a fast or slower and slightly more accurate approach. Given its state-of-the-art performance, it can be especially useful for high-throughput applications. Availability: Both a web server for high-throughput analysis and a Linux executable version of ESpritz are available from: http://protein.bio.unipd.it/espritz/ Contact: silvio.tosatto@unipd.it Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 40
    Publication Date: 2012-02-17
    Description: Motivation: Clustering protein structures is an important task in structural bioinformatics. De novo structure prediction, for example, often involves a clustering step for finding the best prediction. Other applications include assigning proteins to fold families and analyzing molecular dynamics trajectories. Results: We present Pleiades, a novel approach to clustering protein structures with a rigorous mathematical underpinning. The method approximates clustering based on the root mean square deviation by first mapping structures to Gauss integral vectors—which were introduced by Røgen and co-workers—and subsequently performing K-means clustering. Conclusions: Compared to current methods, Pleiades dramatically improves on the time needed to perform clustering, and can cluster a significantly larger number of structures, while providing state-of-the-art results. The number of low energy structures generated in a typical folding study, which is in the order of 50 000 structures, can be clustered within seconds to minutes. Contact: thamelry@binf.ku.dk ; harder@binf.ku.dk Supplementary Information: Supplementary data are available at Bioinformatics online.
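    As a toy illustration of the clustering step described above (plain K-means applied to precomputed per-structure descriptor vectors; the Gauss integral descriptors themselves and Pleiades' optimizations are not reproduced here), one might write:

        import numpy as np

        def kmeans(vectors, k, n_iter=100, seed=0):
            """Lloyd's K-means on an (n_structures, n_features) array of precomputed
            descriptors; returns one cluster label per structure."""
            rng = np.random.default_rng(seed)
            centres = vectors[rng.choice(len(vectors), size=k, replace=False)].copy()
            labels = np.zeros(len(vectors), dtype=int)
            for _ in range(n_iter):
                dists = np.linalg.norm(vectors[:, None, :] - centres[None, :, :], axis=2)
                labels = dists.argmin(axis=1)
                for j in range(k):
                    if np.any(labels == j):            # leave empty clusters unchanged
                        centres[j] = vectors[labels == j].mean(axis=0)
            return labels

        # e.g. labels = kmeans(np.random.rand(500, 31), k=10)  # toy 31-dimensional descriptors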
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 41
    Publication Date: 2012-02-17
    Description: Motivation: Transmembrane β barrel proteins (TMBs) are found in the outer membrane of Gram-negative bacteria, chloroplasts and mitochondria. They play a major role in the translocation machinery, pore formation, membrane anchoring and ion exchange. TMBs are also promising targets for antimicrobial drugs and vaccines. Given the difficulty in membrane protein structure determination, computational methods to identify TMBs and predict the topology of TMBs are important. Results: Here, we present BOCTOPUS, an improved method for the topology prediction of TMBs that employs a combination of support vector machines (SVMs) and Hidden Markov Models (HMMs). The SVMs and HMMs account for local and global residue preferences, respectively. Based on a 10-fold cross-validation test, BOCTOPUS performs better than all existing methods, reaching a Q3 accuracy of 87%. Further, BOCTOPUS predicted the correct number of strands for 83% of the proteins in the dataset. BOCTOPUS may also help in the reliable identification of TMBs when used as an additional filter alongside methods specialized in this task. Availability: BOCTOPUS is freely available as a web server at: http://boctopus.cbr.su.se/ . The datasets used for training and evaluations are also available from this site. Contact: arne@bioinfo.se Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 42
    Publication Date: 2012-02-17
    Description: Motivation: High-dimensional data such as microarrays have created new challenges to traditional statistical methods. One such example is class prediction with high-dimension, low-sample size data. Due to the small sample size, the sample mean estimates are usually unreliable. As a consequence, the performance of the class prediction methods using the sample mean may also be unsatisfactory. To obtain more accurate parameter estimates, statistical methods such as regularization through shrinkage are often desirable. Results: In this article, we investigate the family of shrinkage estimators for the mean value under the quadratic loss function. The optimal shrinkage parameter is proposed under the scenario when the sample size is fixed and the dimension is large. We then construct a shrinkage-based diagonal discriminant rule by replacing the sample mean by the proposed shrinkage mean. Finally, we demonstrate via simulation studies and real data analysis that the proposed shrinkage-based rule outperforms its original competitor in a wide range of settings. Contact: tongt@hkbu.edu.hk
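    To make the construction concrete, a minimal sketch of a shrinkage-based diagonal discriminant rule is shown below; the shrinkage factor alpha is a fixed illustrative value, not the optimal parameter derived in the article:

        import numpy as np

        def shrunken_mean(x, alpha=0.5, target=0.0):
            """Shrink the per-feature sample mean of x (samples x features) toward
            `target` by a factor alpha in [0, 1]."""
            return (1.0 - alpha) * x.mean(axis=0) + alpha * target

        def classify_diagonal(x_new, class_means, pooled_var):
            """Diagonal discriminant rule: assign x_new to the class whose (shrunken)
            mean minimizes the variance-weighted squared distance."""
            scores = [np.sum((x_new - m) ** 2 / pooled_var) for m in class_means]
            return int(np.argmin(scores))

        # Toy usage: two classes, 5 samples each, 200 features.
        rng = np.random.default_rng(0)
        a, b = rng.normal(0.0, 1.0, (5, 200)), rng.normal(0.5, 1.0, (5, 200))
        means = [shrunken_mean(a), shrunken_mean(b)]
        pooled = np.concatenate([a, b]).var(axis=0, ddof=1)
        print(classify_diagonal(rng.normal(0.5, 1.0, 200), means, pooled))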
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 43
    Publication Date: 2012-02-17
    Description: Motivation: The advent of high-throughput sequencing technologies is revolutionizing our ability to discover and genotype DNA copy number variants (CNVs). Read count-based approaches are able to detect CNV regions with an unprecedented resolution. Although this computational strategy has only recently been introduced in the literature, much work has already been done on the preparation, normalization and analysis of this kind of data. Results: Here we address the many aspects involved in the detection of CNVs using the read count approach. We first study the characteristics and systematic biases of read count distributions, focusing on the normalization methods designed for removing these biases. Subsequently, we compare the algorithms designed to detect the boundaries of CNVs and we investigate the ability of read count data to predict the exact DNA copy number. Finally, we review the tools publicly available for analysing read count data. To better understand the state of the art of read count approaches, we compare the performance of the three most widely used sequencing technologies (Illumina Genome Analyzer, Roche 454 and Life Technologies SOLiD) in all the analyses that we perform. Contact: albertomagi@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 44
    Publication Date: 2012-02-17
    Description: Motivation: We investigate and quantify the generalizability of the white blood cell (WBC) transcriptome to the general, multiorgan transcriptome. We use data from the NCBI's Gene Expression Omnibus (GEO) public repository to define two datasets for comparison, WBC and OO (Other Organ) sets. Results: Comprehensive pair-wise correlation and expression level profiles are calculated for both datasets (with sizes of 81 and 1463, respectively). We have used mapping and ranking across the Gene Ontology (GO) categories to quantify similarity between the two sets. GO mappings of the most correlated and highly expressed genes from the two datasets tightly match, with the notable exceptions of components of the ribosome, cell adhesion and immune response. That is, 10 877 or 48.8% of all measured genes do not change >10% of rank range between WBC and OO; only 878 (3.9%) change rank >50%. Two trans-tissue gene lists are defined, the most changing and the least changing genes in expression rank. We also provide a general, quantitative measure of the probability of expression rank and correlation profile in the OO system given the expression rank and correlation profile in the WBC dataset. Contact: vvaltchinov@partners.org Supplementary information: Supplementary data are available at Bioinformatics online.
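    The rank-shift statistic used above (the fraction of genes whose expression rank moves by more than a given share of the rank range between the two datasets) can be sketched as follows; names and defaults are illustrative, not the authors' code:

        import numpy as np

        def rank_change_fraction(mean_expr_a, mean_expr_b, threshold=0.10):
            """Fraction of genes whose expression rank changes by more than `threshold`
            of the rank range between two datasets (per-gene mean expression arrays,
            same gene order in both)."""
            n = len(mean_expr_a)
            rank_a = np.argsort(np.argsort(mean_expr_a))   # 0 = lowest expression
            rank_b = np.argsort(np.argsort(mean_expr_b))
            shift = np.abs(rank_a - rank_b) / (n - 1)      # normalised rank shift
            return float(np.mean(shift > threshold))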
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 45
    Publication Date: 2012-02-17
    Description: Motivation: The understanding of the molecular sources for diseases like cancer can be significantly improved by computational models. Recently, Boolean networks have become very popular for modeling signaling and regulatory networks. However, such models rely on a set of Boolean functions that are in general not known. Unfortunately, while detailed information on the molecular interactions becomes available in large scale through electronic databases, the information on the Boolean functions does not become available simultaneously and has to be included manually into the models, if at all known. Results: We propose a new Boolean approach which can directly utilize the mechanistic network information available through modern databases. The Boolean function is implicitly defined by the reaction mechanisms. Special care has been taken for the treatment of kinetic features like inhibition. The method has been applied to a signaling model combining the Wnt and MAPK pathway. Availability: A sample C++ implementation of the proposed method is available for Linux and compatible systems through http://code.google.com/p/libscopes/wiki/Paper2011 Contact: handorf@physik.hu-berlin.de Supplementary information: Supplementary data are available at Bioinformatics online.
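    One simplified reading of "the Boolean function is implicitly defined by the reaction mechanisms" is sketched below: a species is switched on when at least one reaction producing it has all required inputs on and none of its inhibitors on. The reactions and the activation-only update rule are toy assumptions for illustration, not the authors' full treatment of kinetic features:

        # Toy reaction list: (products, required inputs, inhibitors).
        REACTIONS = [
            ({"RAF_active"}, {"RAS_active", "RAF"}, set()),
            ({"MEK_active"}, {"RAF_active", "MEK"}, set()),
            ({"ERK_active"}, {"MEK_active", "ERK"}, {"DUSP"}),
        ]

        def boolean_step(state, reactions):
            """One synchronous update: a product becomes True if some producing
            reaction has all inputs True and no inhibitor True (no decay)."""
            nxt = dict(state)
            for products, inputs, inhibitors in reactions:
                fires = all(state.get(x, False) for x in inputs) and \
                        not any(state.get(x, False) for x in inhibitors)
                if fires:
                    for p in products:
                        nxt[p] = True
            return nxt

        state = {"RAS_active": True, "RAF": True, "MEK": True, "ERK": True, "DUSP": False}
        for _ in range(3):                      # iterate until the toy cascade reaches ERK
            state = boolean_step(state, REACTIONS)
        print(state["ERK_active"])              # True once the signal has propagated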
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 46
    Publication Date: 2012-02-17
    Description: Motivation: Multiple sequence alignment (MSA) is a core method in bioinformatics. The accuracy of such alignments may influence the success of downstream analyses such as phylogenetic inference, protein structure prediction, and functional prediction. The importance of MSA has led to the proliferation of MSA methods, with different objective functions and heuristics to search for the optimal MSA. Different methods of inferring MSAs produce different results in all but the most trivial cases. By measuring the differences between inferred alignments, we may be able to develop an understanding of how these differences (i) relate to the objective functions and heuristics used in MSA methods, and (ii) affect downstream analyses. Results: We introduce four metrics to compare MSAs, which take into account the position in a sequence where a gap occurs or the location on a phylogenetic tree where an insertion or deletion (indel) event occurs. We use both real and synthetic data to explore the information given by these metrics and demonstrate how the different metrics in combination can yield more information about MSA methods and the differences between them. Availability: MetAl is a free software implementation of these metrics in Haskell. Source and binaries for Windows, Linux and Mac OS X are available from http://kumiho.smith.man.ac.uk/whelan/software/metal/ . Contact: simon.whelan@manchester.ac.uk
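    For intuition about how two alignments of the same sequences can be compared at all, a simple pair-based distance (the normalised symmetric difference of the residue pairs each MSA aligns) is sketched below. This is a generic illustration only, not one of the four MetAl metrics, which additionally use gap positions and tree locations:

        def residue_columns(seq):
            """Alignment column of each residue in one gapped sequence."""
            return [col for col, ch in enumerate(seq) if ch != '-']

        def aligned_pairs(msa):
            """Set of homologous residue pairs implied by an MSA
            (msa: dict name -> gapped sequence, all of equal length)."""
            names = sorted(msa)
            col_of = {n: residue_columns(msa[n]) for n in names}
            pairs = set()
            for i, a in enumerate(names):
                for b in names[i + 1:]:
                    col_to_res_b = {c: r for r, c in enumerate(col_of[b])}
                    for ra, ca in enumerate(col_of[a]):
                        rb = col_to_res_b.get(ca)
                        if rb is not None:
                            pairs.add((a, ra, b, rb))
            return pairs

        def pair_distance(msa1, msa2):
            """Normalised symmetric difference of the two pair sets."""
            p1, p2 = aligned_pairs(msa1), aligned_pairs(msa2)
            return len(p1 ^ p2) / max(len(p1 | p2), 1)

        print(pair_distance({"s1": "AC-GT", "s2": "ACAGT"},
                            {"s1": "ACG-T", "s2": "ACAGT"}))   # 0.333...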
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 47
    Publication Date: 2012-02-17
    Description: Motivation: Peptide detection is a crucial step in mass spectrometry (MS) based proteomics. Most existing algorithms are based upon greedy isotope template matching and thus may be prone to error propagation and ineffective at detecting overlapping peptides. In addition, existing algorithms usually work on different charge states separately, ignoring useful information that can be drawn from other charge states, which may lead to poor detection of low abundance peptides. Results: BPDA2d models spectra as a mixture of candidate peptide signals and systematically evaluates all possible combinations of possible peptide candidates to interpret the given spectra. For each candidate, BPDA2d takes into account its elution profile, charge state distribution and isotope pattern, and it combines all evidence to infer the candidate's signal and existence probability. By piecing all evidence together—especially by deriving information across charge states—low abundance peptides can be better identified and peptide detection rates can be improved. Instead of local template matching, BPDA2d performs global optimization for all candidates and systematically optimizes their signals. Since BPDA2d looks for the optimal among all possible interpretations of the given spectra, it is capable of handling complex spectra where features overlap. BPDA2d estimates the posterior existence probability of detected peptides, which can be directly used for probability-based evaluation in subsequent processing steps. Our experiments indicate that BPDA2d outperforms state-of-the-art detection methods on both simulated data and real liquid chromatography–mass spectrometry data, in terms of sensitivity and detection accuracy. Availability: The BPDA2d software package is available at http://gsp.tamu.edu/Publications/supplementary/sun11a/ Contact: Michelle.Zhang@utsa.edu ; edward@ece.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 48
    Publication Date: 2012-02-17
    Description: Motivation: The continued progress in developing technological platforms, availability of many published experimental datasets, as well as different statistical methods to analyze those data have allowed approaching the same research question using various methods simultaneously. To get the best out of all these alternatives, we need to integrate their results in an unbiased manner. Prioritized gene lists are a common result presentation method in genomic data analysis applications. Thus, the rank aggregation methods can become a useful and general solution for the integration task. Results: Standard rank aggregation methods are often ill-suited for biological settings where the gene lists are inherently noisy. As a remedy, we propose a novel robust rank aggregation (RRA) method. Our method detects genes that are ranked consistently better than expected under null hypothesis of uncorrelated inputs and assigns a significance score for each gene. The underlying probabilistic model makes the algorithm parameter free and robust to outliers, noise and errors. Significance scores also provide a rigorous way to keep only the statistically relevant genes in the final list. These properties make our approach robust and compelling for many settings. Availability: All the methods are implemented as a GNU R package RobustRankAggreg, freely available at the Comprehensive R Archive Network http://cran.r-project.org/ . Contact: vilo@ut.ee Supplementary information: Supplementary data are available at Bioinformatics online.
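    In the spirit of the order-statistics reasoning described above (an illustration only, not a re-implementation of the RobustRankAggreg package), a per-gene score can be obtained by asking how surprisingly small the k-th smallest normalised rank is under the null of uncorrelated input lists:

        import numpy as np
        from scipy.stats import beta

        def rho_score(normalised_ranks):
            """normalised_ranks: the gene's rank in each input list divided by that
            list's length. Under the null these behave like Uniform(0, 1) draws; the
            score is the smallest order-statistic p-value, Bonferroni-corrected over
            the m possible choices of k."""
            r = np.sort(np.asarray(normalised_ranks, dtype=float))
            m = len(r)
            # P(k-th smallest of m uniforms <= r[k-1]) is the Beta(k, m - k + 1) CDF
            p_values = [beta.cdf(r[k - 1], k, m - k + 1) for k in range(1, m + 1)]
            return min(1.0, m * min(p_values))

        # A gene ranked near the top in 4 of 5 lists gets a small score:
        print(rho_score([0.01, 0.02, 0.05, 0.10, 0.80]))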
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 49
    Publication Date: 2012-02-17
    Description: : CLARE is a computational method designed to reveal sequence encryption of tissue-specific regulatory elements. Starting with a set of regulatory elements known to be active in a particular tissue/process, it learns the sequence code of the input set and builds a predictive model from features specific to those elements. The resulting model can then be applied to user-supplied genomic regions to identify novel candidate regulatory elements. CLARE's model also provides a detailed analysis of transcription factors that most likely bind to the elements, making it an invaluable tool for understanding mechanisms of tissue-specific gene regulation. Availability: CLARE is freely accessible at http://clare.dcode.org/ . Contact: taherl@ncbi.nlm.nih.gov ; ovcharen@nih.gov Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 50
    Publication Date: 2012-02-17
    Description: Motivation: We present a pipeline for the pre-processing, quality assessment, read distribution and methylation estimation for methylated DNA immunoprecipitation (MeDIP)-sequence datasets. This is the first MeDIP-seq-specific analytic pipeline that starts at the output of the sequencers. This pipeline will reduce the data analysis load on staff and allows the easy and straightforward analysis of sequencing data for DNA methylation. The pipeline integrates customized scripting and several existing tools, which can deal with both paired and single end data. Availability: The package and extensive documentation, and comparison to public data is available at http://life.tongji.edu.cn/meqa/ Contact: jhuang@cephb.fr
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 51
    Publication Date: 2012-02-17
    Description: Motivation: A plethora of bioinformatics analysis has led to the discovery of numerous gene sets, which can be interpreted as discrete measurements emitted from latent signaling pathways. Their potential to infer signaling pathway structures, however, has not been sufficiently exploited. Existing methods accommodating discrete data do not explicitly consider signal cascading mechanisms that characterize a signaling pathway. Novel computational methods are thus needed to fully utilize gene sets and broaden the scope from focusing only on pairwise interactions to the more general cascading events in the inference of signaling pathway structures. Results: We propose a gene set based simulated annealing (SA) algorithm for the reconstruction of signaling pathway structures. A signaling pathway structure is a directed graph containing up to a few hundred nodes and many overlapping signal cascades, where each cascade represents a chain of molecular interactions from the cell surface to the nucleus. Gene sets in our context refer to discrete sets of genes participating in signal cascades, the basic building blocks of a signaling pathway, with no prior information about gene orderings in the cascades. From a compendium of gene sets related to a pathway, SA aims to search for signal cascades that characterize the optimal signaling pathway structure. In the search process, the extent of overlap among signal cascades is used to measure the optimality of a structure. Throughout, we treat gene sets as random samples from a first-order Markov chain model. We evaluated the performance of SA in three case studies. In the first study conducted on 83 KEGG pathways, SA demonstrated a significantly better performance than Bayesian network methods. Since both SA and Bayesian network methods accommodate discrete data, use a ‘search and score’ network learning strategy and output a directed network, they can be compared in terms of performance and computational time. In the second study, we compared SA and Bayesian network methods using four benchmark datasets from DREAM. In our final study, we showcased two context-specific signaling pathways activated in breast cancer. Availability: Source code is available from http://dl.dropbox.com/u/16000775/sa_sc.zip Contact: dzhu@wayne.edu Supplementary information: Supplementary data are available at Bioinformatics online.
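    The 'search and score' strategy mentioned above can be illustrated with a generic simulated annealing skeleton. The toy state, proposal and score below (recovering a hidden ordering of eight labels) stand in for the authors' cascade structures and overlap-based score, which are not reproduced here:

        import math
        import random

        def anneal(initial, propose, score, n_iter=20000, t0=1.0, cooling=0.9995, seed=0):
            """Generic simulated annealing: maximise `score` over states."""
            rng = random.Random(seed)
            state, s = initial, score(initial)
            best, best_s = state, s
            t = t0
            for _ in range(n_iter):
                cand = propose(state, rng)
                cs = score(cand)
                if cs >= s or rng.random() < math.exp((cs - s) / t):
                    state, s = cand, cs
                    if s > best_s:
                        best, best_s = state, s
                t *= cooling
            return best, best_s

        # Toy problem: order eight 'genes' to agree with a hidden reference order.
        reference = list("ABCDEFGH")
        def toy_score(order):
            return sum(1 for i in range(len(order)) for j in range(i + 1, len(order))
                       if reference.index(order[i]) < reference.index(order[j]))
        def toy_propose(order, rng):
            i, j = rng.sample(range(len(order)), 2)
            new = list(order)
            new[i], new[j] = new[j], new[i]
            return new

        print(anneal(list("HGFEDCBA"), toy_propose, toy_score))   # best possible score is 28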
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 52
    Publication Date: 2012-02-17
    Description: : We provide a Bioconductor package with quality assessment, processing and visualization tools for high-throughput sequencing data, with emphasis on ChIP-seq and RNA-seq studies. It includes detection of outliers and biases, inefficient immuno-precipitation and overamplification artifacts, de novo identification of read-rich genomic regions and visualization of the location and coverage of genomic region lists. Availability: www.bioconductor.org Contact: david.rossell@irbbarcelona.org Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 53
    Publication Date: 2012-02-17
    Description: Motivation: We study a stochastic method for approximating the set of local minima in partial RNA folding landscapes associated with a bounded-distance neighbourhood of folding conformations. The conformations are limited to RNA secondary structures without pseudoknots. The method aims at exploring partial energy landscapes induced by folding simulations and their underlying neighbourhood relations. It combines an approximation of the number of local optima devised by Garnier and Kallel (2002) with a run-time estimation for identifying sets of local optima established by Reeves and Eremeev (2004). Results: The method is tested on nine sequences of length between 50 nt and 400 nt, which allows us to compare the results with data generated by RNAsubopt and subsequent barrier tree calculations. On the nine sequences, the method captures on average 92% of local minima with settings designed for a target of 95%. The run-time of the heuristic can be estimated by O(n² · D · ν · ln ν), where n is the sequence length, ν is the number of local minima in the partial landscape under consideration and D is the maximum number of steepest descent steps in attraction basins associated with that landscape. Contact: a.albrecht@qub.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 54
    Publication Date: 2012-02-17
    Description: Motivation: RNA-seq is a powerful technology for the study of transcriptome profiles that uses deep-sequencing technologies. Moreover, it may be used for cellular phenotyping and help establish the etiology of diseases characterized by abnormal splicing patterns. In RNA-Seq, the exact nature of splicing events is buried in the reads that span exon–exon boundaries. The accurate and efficient mapping of these reads to the reference genome is a major challenge. Results: We developed PASSion, a pattern growth algorithm-based pipeline for splice site detection in paired-end RNA-Seq reads. Comparing the performance of PASSion to three existing RNA-Seq analysis pipelines, TopHat, MapSplice and HMMSplicer, revealed that PASSion is competitive with these packages. Moreover, the performance of PASSion is not affected by read length and coverage. It performs better than the other three approaches when detecting junctions in highly abundant transcripts. PASSion has the ability to detect junctions that do not have known splicing motifs, which cannot be found by the other tools. In the two public RNA-Seq datasets, PASSion predicted ~137 000 and ~173 000 splicing events, respectively, of which on average 82% are known junctions annotated in the Ensembl transcript database and 18% are novel. In addition, our package can discover differential and shared splicing patterns among multiple samples. Availability: The code and utilities can be freely downloaded from https://trac.nbic.nl/passion and ftp://ftp.sanger.ac.uk/pub/zn1/passion Contact: y.zhang@lumc.nl ; k.ye@lumc.nl Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 55
    Publication Date: 2012-02-17
    Description: Motivation: The completion of 168 genome sequences from a single population of Drosophila melanogaster provides a global view of genomic variation and an understanding of the evolutionary forces shaping the patterns of DNA polymorphism and divergence along the genome. Results: We present the ‘Population Drosophila Browser’ (PopDrowser), a new genome browser specially designed for the automatic analysis and representation of genetic variation across the D. melanogaster genome sequence. PopDrowser allows estimating and visualizing the values of a number of DNA polymorphism and divergence summary statistics, linkage disequilibrium parameters and several neutrality tests. PopDrowser also allows performing custom analyses on-the-fly using user-selected parameters. Availability: PopDrowser is freely available from http://PopDrowser.uab.cat . Contact: miquel.ramia@uab.cat
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 56
    Publication Date: 2012-02-17
    Description: Motivation: Probabilistic approaches for inferring transcription factor binding sites (TFBSs) and regulatory motifs from DNA sequences have been developed for over two decades. Previous work has shown that prediction accuracy can be significantly improved by incorporating features such as the competition of multiple transcription factors (TFs) for binding to nearby sites, the tendency of TFBSs for co-regulated TFs to cluster and form cis-regulatory modules and explicit evolutionary modeling of conservation of TFBSs across orthologous sequences. However, currently available tools only incorporate some of these features, and significant methodological hurdles hampered their synthesis into a single consistent probabilistic framework. Results: We present MotEvo, an integrated suite of Bayesian probabilistic methods for the prediction of TFBSs and inference of regulatory motifs from multiple alignments of phylogenetically related DNA sequences, which incorporates all features just mentioned. In addition, MotEvo incorporates a novel model for detecting unknown functional elements that are under evolutionary constraint, and a new robust model for treating gain and loss of TFBSs along a phylogeny. Rigorous benchmarking tests on ChIP-seq datasets show that MotEvo's novel features significantly improve the accuracy of TFBS prediction, motif inference and enhancer prediction. Availability: Source code, a user manual and files with several example applications are available at www.swissregulon.unibas.ch . Contact: erik.vannimwegen@unibas.ch Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 57
    Publication Date: 2012-02-17
    Description: : We present LaTcOm, a new web tool, which offers several alternative methods for ‘rare codon cluster’ (RCC) identification from a single and simple graphical user interface. In the current version, three RCC detection schemes are implemented: the recently described %MinMax algorithm and a simplified sliding window approach, along with a novel modification of a linear-time algorithm for the detection of maximally scoring subsequences tailored to the RCC detection problem. Among a number of user tunable parameters, several codon-based scales relevant for RCC detection are available, including tRNA abundance values from Escherichia coli and several codon usage tables from a selection of genomes. Furthermore, useful scale transformations may be performed upon user request (e.g. linear, sigmoid). Users may choose to visualize RCC positions within the submitted sequences either with graphical representations or in textual form for further processing. Availability: LaTcOm is freely available online at the URL http://troodos.biol.ucy.ac.cy/latcom.html . Contact: vprobon@ucy.ac.cy Supplementary information: Supplementary data are available at Bioinformatics online.
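    The 'simplified sliding window approach' named above can be sketched as follows; the codon weights are made-up placeholders, not the E. coli tRNA abundance values or codon usage tables offered by LaTcOm:

        # Illustrative codon weights (lower = rarer); a real scale would cover all 61 sense codons.
        CODON_WEIGHT = {"AAA": 0.9, "AAG": 0.3, "CTG": 1.0, "CTA": 0.1, "CGG": 0.15, "CGT": 0.8}

        def rare_codon_clusters(cds, window=5, threshold=0.4, default=0.5):
            """Report windows (in codon coordinates) whose mean codon weight falls
            below `threshold` in a coding sequence `cds`."""
            codons = [cds[i:i + 3] for i in range(0, len(cds) - len(cds) % 3, 3)]
            weights = [CODON_WEIGHT.get(c, default) for c in codons]
            hits = []
            for start in range(len(weights) - window + 1):
                mean_w = sum(weights[start:start + window]) / window
                if mean_w < threshold:
                    hits.append((start, start + window, round(mean_w, 3)))
            return hits

        print(rare_codon_clusters("AAACTACGGCTACGGCTAAAGAAA"))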
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 58
    Publication Date: 2012-02-17
    Description: : Many existing databases annotate experimentally characterized single nucleotide polymorphisms (SNPs). Each non-synonymous SNP (nsSNP) changes one amino acid in the gene product (single amino acid substitution; SAAS). This change can either affect protein function or be neutral in that respect. Most polymorphisms lack experimental annotation of their functional impact. Here, we introduce SNPdbe—SNP database of effects, with predictions of computationally annotated functional impacts of SNPs. Database entries represent nsSNPs in dbSNP and the 1000 Genomes collection, as well as variants from UniProt and PMD. SAASs come from >2600 organisms, with ‘human’ being the most prevalent. The impact of each SAAS on protein function is predicted using the SNAP and SIFT algorithms and augmented with experimentally derived function/structure information and disease associations from PMD, OMIM and UniProt. SNPdbe is consistently updated and easily augmented with new sources of information. The database is available as a MySQL dump and via a web front end that allows searches with any combination of organism names, sequences and mutation IDs. Availability: http://www.rostlab.org/services/snpdbe Contact: schaefer@rostlab.org ; snpdbe@rostlab.org
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 59
    Publication Date: 2012-02-17
    Description: : We have implemented in a single package all the features required for extracting, visualizing and manipulating fully conserved positions as well as those with a family-dependent conservation pattern in multiple sequence alignments. The program allows users, among other things, to run different methods for extracting these positions, combine the results and visualize them in protein 3D structures and sequence spaces. Availability and implementation: JDet is a multiplatform application written in Java. It is freely available, including the source code, at http://csbg.cnb.csic.es/JDet . The package includes two of our recently developed programs for detecting functional positions in protein alignments (Xdet and S3Det), and support for other methods can be added as plug-ins. A help file and a guided tutorial for JDet are also available. Contact: pazos@cnb.csic.es
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 60
    Publication Date: 2012-02-17
    Description: : VarSifter is a graphical software tool for desktop computers that allows investigators of varying computational skills to easily and quickly sort, filter, and sift through sequence variation data. A variety of filters and a custom query framework allow filtering based on any combination of sample and annotation information. By simplifying visualization and analyses of exome-scale sequence variation data, this program will help bring the power and promise of massively-parallel DNA sequencing to a broader group of researchers. Availability and Implementation: VarSifter is written in Java, and is freely available in source and binary versions, along with a User Guide, at http://research.nhgri.nih.gov/software/VarSifter/ . Contact: mullikin@mail.nih.gov Supplementary Information: Additional figures and methods available online at the journal's website.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 61
    facet.materialart.
    Unknown
    Springer
    Publication Date: 2012-02-18
    Description: Untrolle in Meetings und im Leben Content Type Journal Article Category DUECK-beta-INSIDE Pages 1-5 DOI 10.1007/s00287-012-0602-y Authors Gunter Dueck, Gaiberger Straße 29, 69151 Neckargemünd, Deutschland Journal Informatik-Spektrum Online ISSN 1432-122X Print ISSN 0170-6012
    Print ISSN: 0170-6012
    Electronic ISSN: 1432-122X
    Topics: Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 62
    Publication Date: 2012-02-18
    Description:    As many sensor networks are currently being deployed for environmental monitoring, there is a growing need to develop systems and applications for managing, processing and retrieving massive amounts of data generated from those networks. In this research, a query answering system with pattern mining techniques is investigated specifically for marine sensor data. We consider three applications of pattern mining: similar pattern search, predictive query and query by clustering. In pattern mining for query answering, we adopt the dynamic time warping (DTW) method for similarity measurement. We also propose the use of a query relaxation approach that recommends that users change the parameters of a given query to obtain an answer. Finally, we show implementation results of pattern query answering in a marine sensor network deployed in the South East of Tasmania, Australia. The pattern query answering system aids in accessing and discovering knowledge from sensor data for decision-making purposes. Content Type Journal Article Category Research Article Pages 1-17 DOI 10.1007/s12145-012-0095-x Authors Md. Sumon Shahriar, Tasmanian ICT Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Castray Esplanade, Hobart, 7001 Australia Paulo de Souza, Tasmanian ICT Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Castray Esplanade, Hobart, 7001 Australia Greg Timms, Tasmanian ICT Centre, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Castray Esplanade, Hobart, 7001 Australia Journal Earth Science Informatics Online ISSN 1865-0481 Print ISSN 1865-0473
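    A minimal version of the dynamic time warping (DTW) similarity measure adopted above, together with a naive similar-pattern search over stored series, is sketched below (illustrative only; a deployed system would add warping-window constraints and indexing):

        import numpy as np

        def dtw_distance(a, b):
            """Classic O(len(a)*len(b)) dynamic time warping distance between
            two 1-D series."""
            n, m = len(a), len(b)
            d = np.full((n + 1, m + 1), np.inf)
            d[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = abs(a[i - 1] - b[j - 1])
                    d[i, j] = cost + min(d[i - 1, j], d[i, j - 1], d[i - 1, j - 1])
            return d[n, m]

        def similar_patterns(query, archive, k=3):
            """Indices of the k archived series closest to the query under DTW."""
            order = sorted(range(len(archive)), key=lambda i: dtw_distance(query, archive[i]))
            return order[:k]

        # e.g. similar_patterns([1, 2, 3, 2], [[1, 2, 2, 3, 2], [5, 5, 5], [0, 1, 3, 2]])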
    Print ISSN: 1865-0473
    Electronic ISSN: 1865-0481
    Topics: Geosciences , Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 63
    facet.materialart.
    Unknown
    Springer
    Publication Date: 2012-12-26
    Electronic ISSN: 1862-2607
    Topics: Computer Science , Law
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 64
    facet.materialart.
    Unknown
    Springer
    Publication Date: 2012-12-26
    Electronic ISSN: 1862-2607
    Topics: Computer Science , Law
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 65
    facet.materialart.
    Unknown
    Springer
    Publication Date: 2012-12-26
    Description: IT security techniques have been standardized in the international standardization organizations ISO and IEC for a good 20 years under the project leadership of DIN, the German Institute for Standardization (Deutsches Institut für Normung e. V.). Within DIN, the Standards Committee on Information Technology and Applications (NIA) provides both the chair and the secretariat of the international standardization committee ISO/IEC JTC 1/SC 27 "IT Security Techniques". Together, these currently steer several dozen standardization projects, and many important IT security standards have been developed so far (see DuD 1/2011). In a technology world that must take IT security aspects into account in a multitude of processes and products, however, developing standards is only half the battle. Owing to technology convergence, the coordination and selection of suitable standards across industries and technology fields is coming to the fore. The smart grid or the concept of mobile banking, for example, will only find acceptance among customers if, in addition to classical IT security, information security and data protection issues are built into the system architecture from the outset. This article describes the coordination activities that DIN has therefore been pursuing for some time in the cross-cutting field of IT security standardization.
    Electronic ISSN: 1862-2607
    Topics: Computer Science , Law
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 66
    Publication Date: 2012-12-28
    Description: Femtocell networks offer a series of advantages with respect to conventional cellular networks. However, a potential massive deployment of femto-access points (FAPs) poses a big challenge in terms of interference management, which requires proper radio resource allocation techniques. In this article, we propose alternative optimal power/bit allocation strategies over a time-frequency frame based on a statistical modeling of the interference activity. Given the lack of knowledge of the interference activity, we assume a Bayesian approach that provides the optimal allocation, conditioned on periodic spectrum sensing, and estimation of the interference activity statistical parameters. We consider first a single FAP accessing the radio channel in the presence of a dynamical interference environment. Then, we extend the formulation to a multi-FAP scenario, where nearby FAPs react to the strategies of the other FAPs, still within a dynamical interference scenario. The multi-user case is first approached using a strategic non-cooperative game formulation. Then, we propose a coordination game based on the introduction of a pricing mechanism that exploits the backhaul link to enable the exchange of parameters (prices) among FAPs.
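    As a deliberately simplified, single-FAP stand-in for the allocation problem described above (not the article's Bayesian or game-theoretic formulation), one can weight a classic water-filling split of the power budget by a sensed estimate of the probability that each time-frequency slot is free of interference:

        import numpy as np

        def expected_rate_waterfilling(gain, noise, p_free, p_total, iters=60):
            """Split p_total across subchannels to maximise
            sum_i p_free[i] * log2(1 + P[i] * gain[i] / noise[i]),
            a weighted water-filling solved by bisecting on the Lagrange multiplier.
            `p_free` is the estimated probability each slot is interference-free."""
            gain, noise, p_free = map(np.asarray, (gain, noise, p_free))

            def alloc(lmbda):
                return np.maximum(p_free / lmbda - noise / gain, 0.0)

            lo, hi = 1e-9, 1e9                   # bracket for the multiplier
            for _ in range(iters):
                mid = np.sqrt(lo * hi)
                if alloc(mid).sum() > p_total:   # too much power used: raise the price
                    lo = mid
                else:
                    hi = mid
            return alloc(hi)

        # e.g. expected_rate_waterfilling([1.0, 0.5, 2.0], [0.1, 0.1, 0.1], [0.9, 0.4, 0.7], 1.0)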
    Print ISSN: 1687-1472
    Electronic ISSN: 1687-1499
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 67
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Casey S. Greene, Olga G. Troyanskaya Modern experimental strategies often generate genome-scale measurements of human tissues or cell lines in various physiological states. Investigators often use these datasets individually to help elucidate molecular mechanisms of human diseases. Here we discuss approaches that effectively weight and integrate hundreds of heterogeneous datasets into gene-gene networks that focus on a specific process or disease. Diverse and systematic genome-scale measurements provide such approaches with both a great deal of power and a number of challenges. We discuss some such challenges as well as methods to address them. We also raise important considerations for the assessment and evaluation of such approaches. When carefully applied, these integrative data-driven methods can make novel high-quality predictions that can transform our understanding of the molecular basis of human disease.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 68
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Dong-Yeon Cho, Yoo-Ah Kim, Teresa M. Przytycka Complex diseases are caused by a combination of genetic and environmental factors. Uncovering the molecular pathways through which genetic factors affect a phenotype is always difficult, but in the case of complex diseases this is further complicated since genetic factors in affected individuals might be different. In recent years, systems biology approaches and, more specifically, network based approaches emerged as powerful tools for studying complex diseases. These approaches are often built on the knowledge of physical or functional interactions between molecules which are usually represented as an interaction network. An interaction network not only reports the binary relationships between individual nodes but also encodes hidden higher level organization of cellular communication. Computational biologists were challenged with the task of uncovering this organization and utilizing it for the understanding of disease complexity, which prompted rich and diverse algorithmic approaches to be proposed. We start this chapter with a description of the general characteristics of complex diseases followed by a brief introduction to physical and functional networks. Next we will show how these networks are used to leverage genotype, gene expression, and other types of data to identify dysregulated pathways, infer the relationships between genotype and phenotype, and explain disease heterogeneity. We group the methods by common underlying principles and first provide a high level description of the principles followed by more specific examples. We hope that this chapter will give readers an appreciation for the wealth of algorithmic techniques that have been developed for the purpose of studying complex diseases as well as insight into their strengths and limitations.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Mileidy W. Gonzalez, Maricel G. Kann Proteins do not function in isolation; it is their interactions with one another and also with other molecules (e.g. DNA, RNA) that mediate metabolic and signaling pathways, cellular processes, and organismal systems. Due to their central role in biological function, protein interactions also control the mechanisms leading to healthy and diseased states in organisms. Diseases are often caused by mutations affecting the binding interface or leading to biochemically dysfunctional allosteric changes in proteins. Therefore, protein interaction networks can elucidate the molecular basis of disease, which in turn can inform methods for prevention, diagnosis, and treatment. In this chapter, we will describe the computational approaches to predict and map networks of protein interactions and briefly review the experimental methods to detect protein interactions. We will describe the application of protein interaction networks as a translational approach to the study of human disease and evaluate the challenges faced by these approaches.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Benjamin J. Raphael Differences between individual human genomes, or between human and cancer genomes, range in scale from single nucleotide variants (SNVs) through intermediate and large-scale duplications, deletions, and rearrangements of genomic segments. The latter class, called structural variants (SVs), has received considerable attention in the past several years as a previously underappreciated source of variation in human genomes. Much of this recent attention is the result of the availability of higher-resolution technologies for measuring these variants, including both microarray-based techniques, and more recently, high-throughput DNA sequencing. We describe the genomic technologies and computational techniques currently used to measure SVs, focusing on applications in human and cancer genomics.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 71
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by David S. Wishart “Big” molecules such as proteins and genes still continue to capture the imagination of most biologists, biochemists and bioinformaticians. “Small” molecules, on the other hand, are the molecules that most biologists, biochemists and bioinformaticians prefer to ignore. However, it is becoming increasingly apparent that small molecules such as amino acids, lipids and sugars play a far more important role in all aspects of disease etiology and disease treatment than we realized. This particular chapter focuses on an emerging field of bioinformatics called “chemical bioinformatics” – a discipline that has evolved to help address the blended chemical and molecular biological needs of toxicogenomics, pharmacogenomics, metabolomics and systems biology. In the following pages we will cover several topics related to chemical bioinformatics. First, a brief overview of some of the most important or useful chemical bioinformatic resources will be given. Second, a more detailed overview will be given on those particular resources that allow researchers to connect small molecules to diseases. This section will focus on describing a number of recently developed databases or knowledgebases that explicitly relate small molecules – either as the treatment, symptom or cause – to disease. Finally a short discussion will be provided on newly emerging software tools that exploit these databases as a means to discover new biomarkers or even new treatments for disease.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Joshua C. Denny Abstract: The combination of improved genomic analysis methods, decreasing genotyping costs, and increasing computing resources has led to an explosion of clinical genomic knowledge in the last decade. Similarly, healthcare systems are increasingly adopting robust electronic health record (EHR) systems that not only can improve health care, but also contain a vast repository of disease and treatment data that could be mined for genomic research. Indeed, institutions are creating EHR-linked DNA biobanks to enable genomic and pharmacogenomic research, using EHR data for phenotypic information. However, EHRs are designed primarily for clinical care, not research, so reuse of clinical EHR data for research purposes can be challenging. Difficulties in use of EHR data include: data availability, missing data, incorrect data, and vast quantities of unstructured narrative text data. Structured information includes billing codes, most laboratory reports, and other variables such as physiologic measurements and demographic information. Significant information, however, remains locked within EHR narrative text documents, including clinical notes and certain categories of test results, such as pathology and radiology reports. For relatively rare observations, combinations of simple free-text searches and billing codes may prove adequate when followed by manual chart review. However, to extract the large cohorts necessary for genome-wide association studies, natural language processing methods to process narrative text data may be needed. Combinations of structured and unstructured textual data can be mined to generate high-validity collections of cases and controls for a given condition. Once high-quality cases and controls are identified, EHR-derived cases can be used for genomic discovery and validation. Since EHR data includes a broad sampling of clinically-relevant phenotypic information, it may enable multiple genomic investigations upon a single set of genotyped individuals. This chapter reviews several examples of phenotype extraction and their application to genetic research, demonstrating a viable future for genomic discovery using EHR-linked data.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 73
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Konrad J. Karczewski, Roxana Daneshjou, Russ B. Altman There is great variation in drug-response phenotypes, and a “one size fits all” paradigm for drug delivery is flawed. Pharmacogenomics is the study of how human genetic information impacts drug response, and it aims to improve efficacy and reduce side effects. In this article, we provide an overview of pharmacogenetics, including pharmacokinetics (PK), pharmacodynamics (PD), gene and pathway interactions, and off-target effects. We describe methods for discovering genetic factors in drug response, including genome-wide association studies (GWAS), expression analysis, and other methods such as chemoinformatics and natural language processing (NLP). We cover the practical applications of pharmacogenomics both in the pharmaceutical industry and in a clinical setting. In drug discovery, pharmacogenomics can be used to aid lead identification, anticipate adverse events, and assist in drug repurposing efforts. Moreover, pharmacogenomic discoveries show promise as important elements of physician decision support. Finally, we consider the ethical, regulatory, and reimbursement challenges that remain for the clinical implementation of pharmacogenomics.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 74
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Yang Ding, Sarah S. Sharpe, Andrew Masse, Daniel I. Goldman The sandfish lizard ( Scincus scincus ) swims within granular media (sand) using axial body undulations to propel itself without the use of limbs. In previous work we predicted average swimming speed by developing a numerical simulation that incorporated experimentally measured biological kinematics into a multibody sandfish model. The model was coupled to an experimentally validated soft sphere discrete element method simulation of the granular medium. In this paper, we use the simulation to study the detailed mechanics of undulatory swimming in a “granular frictional fluid” and compare the predictions to our previously developed resistive force theory (RFT) which models sand-swimming using empirically determined granular drag laws. The simulation reveals that the forward speed of the center of mass (CoM) oscillates about its average speed in antiphase with head drag. The coupling between overall body motion and body deformation results in a non-trivial pattern in the magnitude of lateral displacement of the segments along the body. The actuator torque and segment power are maximal near the center of the body and decrease to zero toward the head and the tail. Approximately 30% of the net swimming power is dissipated in head drag. The power consumption is proportional to the frequency in the biologically relevant range, which confirms that frictional forces dominate during sand-swimming by the sandfish. Comparison of the segmental forces measured in simulation with the force on a laterally oscillating rod reveals that a granular hysteresis effect causes the overestimation of the body thrust forces in the RFT. Our models provide detailed testable predictions for biological locomotion in a granular environment.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by William R. Holmes, Leah Edelstein-Keshet Eukaryotic cell motility involves complex interactions of signalling molecules, cytoskeleton, cell membrane, and mechanics interacting in space and time. Collectively, these components are used by the cell to interpret and respond to external stimuli, leading to polarization, protrusion, adhesion formation, and myosin-facilitated retraction. When these processes are choreographed correctly, shape change and motility result. A wealth of experimental data have identified numerous molecular constituents involved in these processes, but the complexity of their interactions and spatial organization make this a challenging problem to understand. This has motivated theoretical and computational approaches with simplified caricatures of cell structure and behaviour, each aiming to gain better understanding of certain kinds of cells and/or repertoire of behaviour. Reaction–diffusion (RD) equations as well as equations of viscoelastic flows have been used to describe the motility machinery. In this review, we describe some of the recent computational models for cell motility, concentrating on simulations of cell shape changes (mainly in two but also three dimensions). The problem is challenging not only due to the difficulty of abstracting and simplifying biological complexity but also because computing RD or fluid flow equations in deforming regions, known as a “free-boundary” problem, is itself an extremely difficult problem in applied mathematics. Here we describe the distinct approaches, comparing their strengths and weaknesses, and the kinds of biological questions that they have been able to address.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 76
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Maria Victoria Schneider, Rafael C. Jimenez This article aims to introduce the nature of data integration to life scientists. Generally, the subject of data integration is not discussed outside the field of computational science and is covered in little detail, or neglected entirely, in the training of trainees. End users (hereby defined as wet-lab trainees, clinicians, lab researchers) will mostly interact with bioinformatics resources and tools through web interfaces that mask the user from the data integration processes. However, the lack of formal training or acquaintance with even simple database concepts and terminology often presents a real obstacle to the full comprehension of the resources and tools the end users wish to access. Understanding how data integration works is fundamental to empowering trainees to see the limitations as well as the possibilities when exploring, retrieving, and analysing biological data from databases. Here we introduce a game-based learning activity for training/teaching the topic of data integration that trainers/educators can adopt and adapt for their classroom. In particular we provide an example using DAS (Distributed Annotation Systems) as a method for data integration.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 77
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Xochitl C. Morgan, Curtis Huttenhower Humans are essentially sterile during gestation, but during and after birth, every body surface, including the skin, mouth, and gut, becomes host to an enormous variety of microbes, bacterial, archaeal, fungal, and viral. Under normal circumstances, these microbes help us to digest our food and to maintain our immune systems, but dysfunction of the human microbiota has been linked to conditions ranging from inflammatory bowel disease to antibiotic-resistant infections. Modern high-throughput sequencing and bioinformatic tools provide a powerful means of understanding the contribution of the human microbiome to health and its potential as a target for therapeutic interventions. This chapter will first discuss the historical origins of microbiome studies and methods for determining the ecological diversity of a microbial community. Next, it will introduce shotgun sequencing technologies such as metagenomics and metatranscriptomics, the computational challenges and methods associated with these data, and how they enable microbiome analysis. Finally, it will conclude with examples of the functional genomics of the human microbiome and its influences upon health and disease.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
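    The entry above refers to measuring the ecological diversity of a microbial community. As a hedged illustration of that step (not code from the chapter itself), the sketch below computes the Shannon diversity of each sample and the Bray–Curtis dissimilarity between samples from a small, invented OTU count table.

```python
import numpy as np

def shannon_diversity(counts):
    """Shannon index H = -sum(p_i * ln p_i) over nonzero relative abundances."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two count vectors."""
    return float(np.abs(x - y).sum() / (x + y).sum())

# Toy OTU count table: rows are samples, columns are taxa (all values invented).
otu = np.array([
    [120, 30,  0, 5],   # sample A
    [ 40, 60, 10, 2],   # sample B
])

for name, row in zip("AB", otu):
    print(f"sample {name}: Shannon H = {shannon_diversity(row):.3f}")
print(f"Bray-Curtis(A, B) = {bray_curtis(otu[0], otu[1]):.3f}")
```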
  • 78
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Ranjan V. Mannige, Charles L. Brooks, Eugene I. Shakhnovich Despite progress in ancestral protein sequence reconstruction, much remains to be unraveled about the nature of the putative last common ancestral proteome that served as the prototype of all extant lifeforms. Here, we present data that indicate a steady decline (oil escape) in proteome hydrophobicity over species evolvedness (node number) evident in 272 diverse proteomes, which indicates a highly hydrophobic (oily) last common ancestor (LCA). This trend, obtained from simple considerations (free from sequence reconstruction methods), was corroborated by regression studies within homologous and orthologous protein clusters as well as phylogenetic estimates of the ancestral oil content. While indicating an inherent irreversibility in molecular evolution, oil escape also serves as a rare and universal reaction-coordinate for evolution (reinforcing Darwin's principle of Common Descent), and may prove important in matters such as (i) explaining the emergence of intrinsically disordered proteins, (ii) developing composition- and speciation-based “global” molecular clocks, and (iii) improving the statistical methods for ancestral sequence reconstruction.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 79
    Publication Date: 2012-12-28
    Description: by Oliver Ratmann, Gé Donker, Adam Meijer, Christophe Fraser, Katia Koelle A key priority in infectious disease research is to understand the ecological and evolutionary drivers of viral diseases from data on disease incidence as well as viral genetic and antigenic variation. We propose using a simulation-based, Bayesian method known as Approximate Bayesian Computation (ABC) to fit and assess phylodynamic models that simulate pathogen evolution and ecology against summaries of these data. We illustrate the versatility of the method by analyzing two spatial models describing the phylodynamics of interpandemic human influenza virus subtype A(H3N2). The first model captures antigenic drift phenomenologically with continuously waning immunity, and the second epochal evolution model describes the replacement of major, relatively long-lived antigenic clusters. Combining features of long-term surveillance data from the Netherlands with features of influenza A (H3N2) hemagglutinin gene sequences sampled in northern Europe, key phylodynamic parameters can be estimated with ABC. Goodness-of-fit analyses reveal that the irregularity in interannual incidence and H3N2's ladder-like hemagglutinin phylogeny are quantitatively only reproduced under the epochal evolution model within a spatial context. However, the concomitant incidence dynamics result in a very large reproductive number and are not consistent with empirical estimates of H3N2's population level attack rate. These results demonstrate that the interactions between the evolutionary and ecological processes impose multiple quantitative constraints on the phylodynamic trajectories of influenza A(H3N2), so that sequence and surveillance data can be used synergistically. ABC, one of several data synthesis approaches, can easily interface a broad class of phylodynamic models with various types of data but requires careful calibration of the summaries and tolerance parameters.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
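    The entry above fits phylodynamic models with Approximate Bayesian Computation: simulate under candidate parameters, summarize the output, and keep parameters whose summaries fall within a tolerance of the observed summaries. The following sketch shows that generic rejection-ABC loop only; the "epidemic simulator" is a deliberately simple, invented stand-in (noisy geometric growth), not the spatial phylodynamic models of the paper, and the prior, summaries, and tolerance are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_incidence(r, n_weeks=52):
    """Toy stand-in for an epidemic simulator: noisy geometric growth or decline."""
    x, series = 100.0, []
    for _ in range(n_weeks):
        x = max(x * r * rng.lognormal(sigma=0.1), 1.0)
        series.append(x)
    return np.array(series)

def summaries(series):
    """Summary statistics compared between simulated and observed data."""
    return np.array([series.mean(), series.std(), series[-1]])

s_obs = summaries(simulate_incidence(r=1.01))   # pretend these are the observed data

accepted = []
for _ in range(20000):
    r = rng.uniform(0.9, 1.1)                   # draw a candidate parameter from the prior
    s_sim = summaries(simulate_incidence(r))
    # accept the draw if the simulated summaries are close enough to the observed ones
    if np.linalg.norm((s_sim - s_obs) / s_obs) < 0.1:
        accepted.append(r)

if accepted:
    print(f"accepted {len(accepted)} draws; approximate posterior mean r = {np.mean(accepted):.3f}")
```

    As the abstract notes, the behaviour of such a sampler depends strongly on how the summaries and the tolerance are calibrated.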
  • 80
    Publication Date: 2012-12-28
    Description: by Jens Vindahl Kringelum, Claus Lundegaard, Ole Lund, Morten Nielsen The interaction between antibodies and antigens is one of the most important immune system mechanisms for clearing infectious organisms from the host. Antibodies bind to antigens at sites referred to as B-cell epitopes. Identification of the exact location of B-cell epitopes is essential in several biomedical applications such as rational vaccine design, development of disease diagnostics and immunotherapeutics. However, experimental mapping of epitopes is resource intensive, making in silico methods an appealing complementary approach. To date, the reported performance of methods for in silico mapping of B-cell epitopes has been moderate. Several issues regarding the evaluation data sets may however have led to the performance values being underestimated: rarely have all potential epitopes been mapped on an antigen, and antibodies are generally raised against the antigen in a given biological context, not against the antigen monomer. Improper handling of these aspects leads to many artificial false positive predictions and hence to misleadingly low performance values. To demonstrate the impact of proper benchmark definitions, we here present an updated version of the DiscoTope method incorporating a novel spatial neighborhood definition and half-sphere exposure as surface measure. Compared to other state-of-the-art prediction methods, DiscoTope-2.0 displayed improved performance both in cross-validation and in independent evaluations. Using DiscoTope-2.0, we assessed the impact on performance when using proper benchmark definitions. For 13 proteins in the training data set where sufficient biological information was available to make a proper benchmark redefinition, the average AUC performance was improved from 0.791 to 0.824. Similarly, the average AUC performance on an independent evaluation data set improved from 0.712 to 0.727. Our results thus demonstrate that given proper benchmark definitions, B-cell epitope prediction methods achieve highly significant predictive performances, suggesting these tools to be a powerful asset in rational epitope discovery. The updated version of DiscoTope is available at www.cbs.dtu.dk/services/DiscoTope-2.0.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 81
    Publication Date: 2012-12-28
    Description: by Federica Chiappori, Ivan Merelli, Giorgio Colombo, Luciano Milanesi, Giulia Morra Investigating ligand-regulated allosteric coupling between protein domains is fundamental to understand cell-life regulation. The Hsp70 family of chaperones represents an example of proteins in which ATP binding and hydrolysis at the Nucleotide Binding Domain (NBD) modulate substrate recognition at the Substrate Binding Domain (SBD). Herein, a comparative analysis of an allosteric (Hsp70-DnaK) and a non-allosteric structural homolog (Hsp110-Sse1) of the Hsp70 family is carried out through molecular dynamics simulations, starting from different conformations and ligand-states. Analysis of ligand-dependent modulation of internal fluctuations and local deformation patterns highlights the structural and dynamical changes occurring at residue level upon ATP-ADP exchange, which are connected to the conformational transition between closed and open structures. By identifying the dynamically responsive protein regions and specific cross-domain hydrogen-bonding patterns that differentiate Hsp70 from Hsp110 as a function of the nucleotide, we propose a molecular mechanism for the allosteric signal propagation of the ATP-encoded conformational signal.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Cameron W. Harvey, Huijing Du, Zhiliang Xu, Dale Kaiser, Igor Aranson, Mark Alber The formation of spore-filled fruiting bodies by myxobacteria is a fascinating case of multicellular self-organization by bacteria. The organization of Myxococcus xanthus into fruiting bodies has long been studied not only as an important example of collective motion of bacteria, but also as a simplified model for developmental morphogenesis. Sporulation within the nascent fruiting body requires signaling between moving cells so that the rod-shaped, self-propelled cells differentiate into spores at the appropriate time. Probing the three-dimensional structure of myxobacteria fruiting bodies has previously presented a challenge due to limitations of different imaging methods. A new technique using Infrared Optical Coherence Tomography (OCT) revealed previously unknown details of the internal structure of M. xanthus fruiting bodies, consisting of interconnected pockets of relatively high and low spore density. To make sense of the experimentally observed structure, modeling and computer simulations were used to test a hypothesized mechanism that could produce high-density pockets of spores. The mechanism consists of self-propelled cells aligning with each other and signaling by end-to-end contact to coordinate the process of differentiation, resulting in a pattern of clusters observed in the experiment. The integration of novel OCT experimental techniques with computational simulations can provide new insight into the mechanisms that can give rise to the pattern formation seen in other biological systems such as dictyostelids, social amoeba known to form multicellular aggregates observed as slugs under starvation conditions.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 83
    facet.materialart.
    Unknown
    Public Library of Science (PLoS)
    Publication Date: 2012-12-28
    Description: by Mark J. Bradshaw, Man C. Cheung, Daniel J. Ehrlich, Michael L. Smith The structural proteins of the extracellular matrix (ECM) form fibers with finely tuned mechanical properties matched to the time scales of cell traction forces. Several proteins such as fibronectin (Fn) and fibrin undergo molecular conformational changes that extend the proteins and are believed to be a major contributor to the extensibility of bulk fibers. The dynamics of these conformational changes have been thoroughly explored since the advent of single molecule force spectroscopy and molecular dynamics simulations but remarkably, these data have not been rigorously applied to the understanding of the time dependent mechanics of bulk ECM fibers. Using measurements of protein density within fibers, we have examined the influence of dynamic molecular conformational changes and the intermolecular arrangement of Fn within fibers on the bulk mechanical properties of Fn fibers. Fibers were simulated as molecular strands with architectures that promote either equal or disparate molecular loading under conditions of constant extension rate. Measurements of protein concentration within micron scale fibers using deep ultraviolet transmission microscopy allowed the simulations to be scaled appropriately for comparison to in vitro measurements of fiber mechanics as well as providing estimates of fiber porosity and water content, suggesting Fn fibers are approximately 75% solute. Comparing the properties predicted by single molecule measurements to in vitro measurements of Fn fibers showed that domain unfolding is sufficient to predict the high extensibility and nonlinear stiffness of Fn fibers with surprising accuracy, with disparately loaded fibers providing the best fit to experiment. This work shows the promise of this microstructural modeling approach for understanding Fn fiber properties, which is generally applicable to other ECM fibers, and could be further expanded to tissue scale by incorporating these simulated fibers into three dimensional network models.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Publication Date: 2012-12-29
    Description: Background: RNA interference (RNAi) has become an increasingly important and effective genetic tool to study the function of target genes by suppressing specific genes of interest. This systems approach helps identify signaling pathways and cellular phase types by tracking intensity and/or morphological changes of cells. The traditional RNAi screening scheme, in which one siRNA is designed to knock down one specific mRNA target, needs a large library of siRNAs and turns out to be time-consuming and expensive. Results: In this paper, we propose a conceptual model, called compressed sensing RNAi (csRNAi), which employs unique combinations of groups of small interfering RNAs (siRNAs) to knock down a much larger set of genes. This strategy is based on the fact that one gene can be partially bound by several siRNAs and, conversely, one siRNA can bind to a few genes with distinct binding affinities. This model constructs a many-to-many correspondence between siRNAs and their targets, with far fewer siRNAs than mRNA targets compared with the conventional scheme. Mathematically this problem involves an underdetermined system of equations (linear or nonlinear), which is ill-posed in general. However, the recently developed compressed sensing (CS) theory can solve this problem. We present a mathematical model to describe the csRNAi system based on both CS theory and biological concerns. To build this model, we first search nucleotide motifs in a target gene set. Then we propose a machine learning based method to find effective siRNAs using novel features, such as image and speech features, to describe an siRNA sequence. Numerical simulations show that we can reduce the siRNA library to one third of that in the conventional scheme. In addition, the features used to describe siRNAs outperform the existing ones substantially. Conclusions: This csRNAi system is very promising for saving both time and cost in large-scale RNAi screening experiments, which may benefit biological research with respect to cellular processes and pathways.
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
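    The compressed-sensing idea in the entry above is to recover a sparse vector of gene effects from fewer combined siRNA measurements than genes. The sketch below illustrates that generic recovery step with orthogonal matching pursuit on a random sensing matrix; it is not the authors' csRNAi model, and the dimensions, sparsity, and "binding matrix" are invented for illustration.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    x_hat = np.zeros(A.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # column most correlated with the residual
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x_hat[:] = 0.0
        x_hat[support] = coef
        residual = y - A @ x_hat
    return x_hat

rng = np.random.default_rng(1)
n_genes, n_sirnas, sparsity = 300, 100, 5        # far fewer "siRNA measurements" than genes
A = rng.standard_normal((n_sirnas, n_genes)) / np.sqrt(n_sirnas)  # toy stand-in for binding affinities
x_true = np.zeros(n_genes)
x_true[rng.choice(n_genes, sparsity, replace=False)] = rng.standard_normal(sparsity)
y = A @ x_true

x_rec = omp(A, y, sparsity)
print("support recovered:", sorted(np.flatnonzero(x_rec)) == sorted(np.flatnonzero(x_true)))
```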
  • 85
    Publication Date: 2012-12-29
    Description: Background: Copy number variations (CNVs) are genomic structural variants that are found in healthy populations and have been observed to be associated with disease susceptibility. Existing methods for CNV detection are often performed on a sample-by-sample basis, which is not ideal for large datasets where common CNVs must be estimated by comparing the frequency of CNVs in the individual samples. Here we describe a simple and novel approach to locate genome-wide CNVs common to a specific population, using human ancestry as the phenotype. Results: We utilized our previously published Genome Alteration Detection Analysis (GADA) algorithm to identify common ancestry CNVs (caCNVs) and built a caCNV model to predict population structure. We identified a 73 caCNV signature using a training set of 225 healthy individuals from European, Asian, and African ancestry. The signature was validated on an independent test set of 300 individuals with similar ancestral background. The error rate in predicting ancestry in this test set was 2% using the 73 caCNV signature. Among the caCNVs identified, several were previously confirmed experimentally to vary by ancestry. Our signature also contains a caCNV region with a single microRNA (MIR270), which represents the first reported variation of microRNA by ancestry. Conclusions: We developed a new methodology to identify common CNVs and demonstrated its performance by building a caCNV signature to predict human ancestry with high accuracy. The utility of our approach could be extended to large case--control studies to identify CNV signatures for other phenotypes such as disease susceptibility and drug response.
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
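    The final step in the entry above is predicting ancestry from a copy-number signature. As a loose, hedged illustration of that classification step only (not the GADA pipeline or the 73-caCNV signature of the paper), the sketch below trains a nearest-centroid classifier on simulated copy-number calls; all populations, counts, and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_population(center, n, noise=0.3):
    """Simulate copy-number calls (around 2) for n individuals near a population centroid."""
    return center + rng.normal(scale=noise, size=(n, center.size))

n_cnv = 73                                    # illustrative signature size only
centers = {pop: 2 + rng.choice([-1, 0, 1], size=n_cnv, p=[0.15, 0.7, 0.15])
           for pop in ("EUR", "ASN", "AFR")}

X_train = np.vstack([make_population(c, 75) for c in centers.values()])
y_train = np.repeat(list(centers), 75)
centroids = {pop: X_train[y_train == pop].mean(axis=0) for pop in centers}

def predict(x):
    """Assign a sample to the population with the nearest centroid."""
    return min(centroids, key=lambda pop: np.linalg.norm(x - centroids[pop]))

X_test = np.vstack([make_population(c, 100) for c in centers.values()])
y_test = np.repeat(list(centers), 100)
acc = np.mean([predict(x) == y for x, y in zip(X_test, y_test)])
print(f"toy ancestry prediction accuracy: {acc:.2%}")
```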
  • 86
    Publication Date: 2012-12-31
    Description: In this paper, we investigate ontology-based data access (OBDA) to build information systems whose purposes are (i) gathering data from a network of intermodal terminals, and (ii) computing performance indicators of the network. This application domain is characterized by large amounts of data and relatively simple data models, making it a natural challenge for logic-based knowledge representation and reasoning techniques. Considering relational database (RDB) technology as a yardstick, we show that careful engineering of OBDA can achieve RDB-like scalability even in demanding applications. To the best of our knowledge, this is the first study evaluating the potential of OBDA in a typical business-size application.
    Print ISSN: 1387-3326
    Electronic ISSN: 1572-9419
    Topics: Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
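    OBDA, as described in the entry above, answers ontology-level queries by rewriting them into SQL over existing relational sources via declared mappings. The toy sketch below illustrates only that rewriting idea with Python's built-in sqlite3 and a hand-written mapping table; the ontology terms, schema, and data are invented, and real OBDA systems (with mapping languages and query reformulation) are far richer.

```python
import sqlite3

# Invented mapping from ontology-level terms to SQL over the relational source.
MAPPINGS = {
    "Terminal":           "SELECT id, name FROM terminals",
    "hasThroughput(TEU)": "SELECT terminal_id, teu FROM monthly_throughput",
}

def answer(concept, conn):
    """Rewrite an ontology-level concept into SQL via the mapping and evaluate it."""
    return conn.execute(MAPPINGS[concept]).fetchall()

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE terminals (id INTEGER, name TEXT);
    CREATE TABLE monthly_throughput (terminal_id INTEGER, teu INTEGER);
    INSERT INTO terminals VALUES (1, 'Terminal A'), (2, 'Terminal B');
    INSERT INTO monthly_throughput VALUES (1, 1200), (2, 800);
""")

print(answer("Terminal", conn))
print(answer("hasThroughput(TEU)", conn))
```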
  • 87
    facet.materialart.
    Unknown
    Springer
    Publication Date: 2012-12-31
    Description: We give subquadratic algorithms that, given two necklaces each with n beads at arbitrary positions, compute the optimal rotation of the necklaces to best align the beads. Here alignment is measured according to the ℓp norm of the vector of distances between pairs of beads from opposite necklaces in the best perfect matching. We show surprisingly different results for p = 1, p even, and p = ∞. For p even, we reduce the problem to standard convolution, while for p = ∞ and p = 1, we reduce the problem to (min,+) convolution and (median,+) convolution. Then we solve the latter two convolution problems in subquadratic time, which are interesting results in their own right. These results shed some light on the classic sorting X+Y problem, because the convolutions can be viewed as computing order statistics on the antidiagonals of the X+Y matrix. All of our algorithms run in o(n²) time, whereas the obvious algorithms for these problems run in Θ(n²) time.
    Print ISSN: 0178-4617
    Electronic ISSN: 1432-0541
    Topics: Computer Science , Mathematics
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
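    The entry above reduces necklace alignment to standard, (min,+), and (median,+) convolutions. As a baseline illustration of the objects involved (not the paper's subquadratic algorithms), the sketch below shows the obvious quadratic-time (min,+) convolution alongside an FFT-based standard convolution of the kind used in the even-p reduction.

```python
import numpy as np

def minplus_convolution(a, b):
    """Quadratic-time (min,+) convolution: c[k] = min over i+j=k of a[i] + b[j]."""
    n, m = len(a), len(b)
    c = np.full(n + m - 1, np.inf)
    for i in range(n):
        for j in range(m):
            c[i + j] = min(c[i + j], a[i] + b[j])
    return c

def standard_convolution(a, b):
    """Standard (+,*) convolution computed via the FFT."""
    size = len(a) + len(b) - 1
    n_fft = 1 << (size - 1).bit_length()
    return np.fft.irfft(np.fft.rfft(a, n_fft) * np.fft.rfft(b, n_fft), n_fft)[:size]

a = np.array([3.0, 1.0, 4.0])
b = np.array([2.0, 0.0, 5.0])
print(minplus_convolution(a, b))             # [5. 3. 1. 4. 9.]
print(np.round(standard_convolution(a, b), 6))
```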
  • 88
    Publication Date: 2012-12-19
    Description: Background: For selection and evaluation of potential biomarkers, inclusion of already published information is of utmost importance. In spite of significant advancements in text- and data-mining techniques, the vast knowledge space of biomarkers in biomedical text has remained unexplored. Existing named entity recognition approaches are not sufficiently selective for the retrieval of biomarker information from the literature. The purpose of this study was to identify textual features that enhance the effectiveness of biomarker information retrieval for different indication areas and diverse end user perspectives. Methods: A biomarker terminology was created and further organized into six concept classes. Performance of this terminology was optimized towards balanced selectivity and specificity. The information retrieval performance using the biomarker terminology was evaluated based on various combinations of the terminology's six classes. Further validation of these results was performed on two independent corpora representing two different neurodegenerative diseases. Results: The current state of the biomarker terminology contains 119 entity classes supported by 1890 different synonyms. The result of information retrieval shows improved retrieval rate of informative abstracts, which is achieved by including clinical management terms and evidence of gene/protein alterations (e.g. gene/protein expression status or certain polymorphisms) in combination with disease and gene name recognition. When additional filtering through other classes (e.g. diagnostic or prognostic methods) is applied, the typical high number of unspecific search results is significantly reduced. The evaluation results suggest that this approach enables the automated identification of biomarker information in the literature. A demo version of the search engine SCAIView, including the biomarker retrieval, is made available to the public through http://www.scaiview.com/scaiview-academia.html. Conclusions: The approach presented in this paper demonstrates that using a dedicated biomarker terminology for automated analysis of the scientific literature may be helpful as an aid to finding biomarker information in text. Successful extraction of candidate biomarker information from published resources can be considered as the first step towards developing novel hypotheses. These hypotheses will be valuable for the early decision-making in the drug discovery and development process.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
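    The retrieval strategy in the entry above combines hits from several terminology classes (disease terms, gene names, clinical management terms) to keep only informative abstracts. The sketch below is a toy illustration of such class-combination filtering; the mini-terminologies and example abstracts are invented and bear no relation to the real SCAIView terminology.

```python
import re

# Invented mini-terminology grouped into classes (the real terminology is far larger).
TERMINOLOGY = {
    "disease":  {"alzheimer", "parkinson"},
    "gene":     {"mapt", "snca", "apoe"},
    "clinical": {"prognosis", "diagnosis", "treatment response"},
}

def classes_hit(abstract):
    """Return the set of terminology classes with at least one match in the abstract."""
    text = abstract.lower()
    return {cls for cls, terms in TERMINOLOGY.items()
            if any(re.search(r"\b" + re.escape(t) + r"\b", text) for t in terms)}

def is_informative(abstract, required=("disease", "gene", "clinical")):
    """Keep an abstract only if all required classes co-occur."""
    return set(required) <= classes_hit(abstract)

abstracts = [
    "APOE genotype influences treatment response in Alzheimer disease.",
    "A survey of imaging methods in neurodegeneration.",
]
for a in abstracts:
    print(is_informative(a), "-", a)
```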
  • 89
    facet.materialart.
    Unknown
    Springer
    Publication Date: 2012-12-19
    Description: I want to probe the role of the market in allocating resources in this very preliminary essay. One does not have to study deeply to see that the failure of markets for various kinds of derivative securities to perform properly is an essential element of the current financial crisis. Actually, financial crises are not a new phenomenon. The history of capitalism has been marked by repeated collapses of the financial system, situations in which the “markets” for loans disappear for extensive periods of time. The 18th century saw some bubbles, but these might not be considered quite modern. From 1819 on, however, there has been a succession of failures of banks and other financial institutions. These have typically been unpredicted and did not correspond in time to any particular exogenous event (e.g., wars). Economists from John Stuart Mill (1848) on did recognize the phenomenon. But the discussion was, and is, not at all integrated with the general exposition of classical economics. No one could be a more vigorous advocate of unrestrained markets than Milton Friedman; yet, to my reading, the account that he and Anna Schwartz gave of monetary developments in the United States, and particularly with regard to the Great Depression, emphasizes not prices, not even interest rates, but the supply of money and, by inference, of liquidity (Friedman and Schwartz (1963)). I start with the neoclassical general equilibrium framework, to which I have given a good deal of attention and effort. I seek to identify a possible point at which it fails to supply a coherent theory of securities markets and so might possibly lead to some understanding of the repeated crises of the financial system underlying the development of capitalism.
    Print ISSN: 1387-3326
    Electronic ISSN: 1572-9419
    Topics: Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 90
    Publication Date: 2012-12-20
    Description: Accuracy of momentum transport calculations in gyrokinetic simulations is studied using the full-f gyrokinetic Eulerian code GT5D. Toroidal angular momentum conservation is examined both in the axisymmetric limit without turbulent fluctuations and in turbulent tokamaks. As shown by Scott and Smirnov (2010 Phys. Plasmas 17 112302), the toroidal angular momentum is conserved when the simulation is based on modern gyrokinetic theory with an energetic consistency. The convergence of turbulent heat and momentum fluxes is examined by implementing higher-order drift and polarization terms. The results support the correctness of the turbulent momentum transport computed using conventional first-order gyrokinetics.
    Print ISSN: 1749-4699
    Electronic ISSN: 1749-4680
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 91
    Publication Date: 2012-12-20
    Description: A high-performance shooting algorithm is developed to compute time-periodic solutions of the free-surface Euler equations with spectral accuracy in double and quadruple precision. The method is used to study resonance and its effect on standing water waves. We identify new nucleation mechanisms in which isolated large-amplitude solutions, and closed loops of such solutions, suddenly exist for depths below a critical threshold. We also study degenerate and secondary bifurcations related to Wilton's ripples in the traveling case, and explore the breakdown of self-similarity at the crests of extreme standing waves. In shallow water, we find that standing waves take the form of counter-propagating solitary waves that repeatedly collide quasi-elastically. In deep water with surface tension, we find that standing waves resemble counter-propagating depression waves. We also discuss the existence and non-uniqueness of solutions, and smooth versus erratic dependence of Fourier modes on wa...
    Print ISSN: 1749-4699
    Electronic ISSN: 1749-4680
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
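    The entry above computes time-periodic solutions with a high-performance shooting algorithm. As a much simpler, hedged illustration of the shooting idea only (not the free-surface Euler solver of the paper), the sketch below finds a periodic orbit of a periodically forced, damped oscillator by solving flow over one period minus the initial state equals zero with a root finder; the model and parameters are invented.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import fsolve

OMEGA = 1.3                       # forcing frequency (illustrative)
T = 2 * np.pi / OMEGA             # period of the sought time-periodic solution

def rhs(t, y):
    """Damped, periodically forced oscillator: a stand-in for the real PDE dynamics."""
    x, v = y
    return [v, -0.2 * v - x + np.cos(OMEGA * t)]

def flow_map(y0):
    """Integrate one forcing period starting from y0."""
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

def shooting_residual(y0):
    """Time-periodic solutions satisfy flow_map(y0) - y0 = 0."""
    return flow_map(y0) - y0

y_periodic = fsolve(shooting_residual, np.zeros(2), xtol=1e-12)
print("periodic initial condition:", y_periodic)
print("residual norm:", np.linalg.norm(shooting_residual(y_periodic)))
```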
  • 92
    Publication Date: 2012-12-20
    Description: The neutral point of view (NPOV) cornerstone of Wikipedia (WP) is challenged for next generation knowledge bases. A case is presented for content neutrality as a new, every point of view (EPOV) guiding principle. The architectural implications of content neutrality are discussed and translated into novel concepts of Wiki architectures. Guidelines for implementing this architecture are presented. Although NPOV is criticized, the contribution avoids ideological controversy and focuses on the benefits of the novel approach.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 93
    facet.materialart.
    Unknown
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2012-12-20
    Description: A central biological process in all living organisms is gene translation. Developing a deeper understanding of this complex process may have ramifications to almost every biomedical discipline. Reuveni et al. recently proposed a new computational model of gene translation called the Ribosome Flow Model (RFM). In this paper, we consider a particular case of this model, called the Homogeneous Ribosome Flow Model (HRFM). From a biological viewpoint, this corresponds to the case where the transition rates of all the coding sequence codons are identical. This regime has been suggested recently based on experiments in mouse embryonic cells. We consider the steady-state distribution of the HRFM. We provide formulas that relate the different parameters of the model in steady state. We prove the following properties: 1) the ribosomal density profile is monotonically decreasing along the coding sequence; 2) the ribosomal density at each codon monotonically increases with the initiation rate; and 3) for a constant initiation rate, the translation rate monotonically decreases with the length of the coding sequence. In addition, we analyze the translation rate of the HRFM at the limit of very high and very low initiation rate, and provide explicit formulas for the translation rate in these two cases. We discuss the relationship between these theoretical results and biological findings on the translation process.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
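    The entry above analyzes the steady state of the Homogeneous Ribosome Flow Model. The sketch below integrates the RFM equations as they are commonly written in the literature (this form is an assumption, not taken from the paper): site densities x_i evolve with inflow λ_{i-1} x_{i-1}(1 - x_i) and outflow λ_i x_i(1 - x_{i+1}), with an initiation term λ0(1 - x_1) at the first site and free exit at the last, and all elongation rates equal in the homogeneous case; the rate constants chosen here are illustrative.

```python
import numpy as np

def hrfm_steady_state(n, lam_init, lam_c, t_end=2000.0, dt=0.01):
    """Forward-Euler integration of the (homogeneous) Ribosome Flow Model to steady state.

    Equations as commonly written for the RFM (an assumption, hedged):
      dx_1/dt = lam_init*(1 - x_1)      - lam_c*x_1*(1 - x_2)
      dx_i/dt = lam_c*x_{i-1}*(1 - x_i) - lam_c*x_i*(1 - x_{i+1}),  1 < i < n
      dx_n/dt = lam_c*x_{n-1}*(1 - x_n) - lam_c*x_n
    Steady-state translation rate: R = lam_c * x_n.
    """
    x = np.zeros(n)
    for _ in range(int(t_end / dt)):
        inflow = np.empty(n)
        outflow = np.empty(n)
        inflow[0] = lam_init * (1 - x[0])
        inflow[1:] = lam_c * x[:-1] * (1 - x[1:])
        outflow[:-1] = lam_c * x[:-1] * (1 - x[1:])
        outflow[-1] = lam_c * x[-1]
        x += dt * (inflow - outflow)
    return x, lam_c * x[-1]

densities, rate = hrfm_steady_state(n=20, lam_init=0.8, lam_c=1.0)
print("density profile decreases along the sequence:", bool(np.all(np.diff(densities) < 0)))
print(f"steady-state translation rate R = {rate:.4f}")
```

    Rerunning the sketch with larger lam_init or longer n lets one probe numerically the monotonicity properties stated in the abstract.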
  • 94
    facet.materialart.
    Unknown
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2012-12-20
    Description: Ranking genes in functional networks according to a specific biological function is a challenging task raising relevant performance and computational complexity problems. To cope with both these problems we developed a transductive gene ranking method based on kernelized score functions able to fully exploit the topology and the graph structure of biomolecular networks and to capture significant functional relationships between genes. We run the method on a network constructed by integrating multiple biomolecular data sources in the yeast model organism, achieving significantly better results than the compared state-of-the-art network-based algorithms for gene function prediction, and with relevant savings in computational time. The proposed approach is general and fast enough to be in perspective applied to other relevant node ranking problems in large and complex biological networks.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
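    The entry above ranks genes by kernelized score functions over a functional network. As a hedged sketch of that general idea (not the authors' exact kernels or score functions), the code below builds a diffusion-style graph kernel from the Laplacian of a tiny invented network and scores each gene by its average kernel similarity to the genes already annotated with the function.

```python
import numpy as np
from scipy.linalg import expm

# Toy undirected functional network (adjacency matrix); genes 0-2 form one module, 3-5 another.
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Diffusion kernel K = expm(-beta * L) on the graph Laplacian (an illustrative kernel choice).
L = np.diag(A.sum(axis=1)) - A
K = expm(-1.0 * L)

positives = [0, 1]                      # genes already known to have the function
scores = K[:, positives].mean(axis=1)   # average kernel similarity to the positive genes

ranking = np.argsort(-scores)
print("ranking (best first):", ranking.tolist())
```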
  • 95
    facet.materialart.
    Unknown
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2012-12-20
    Description: An important objective of modeling biological phenomena is to develop therapeutic intervention strategies to move an undesirable state of a diseased network toward a more desirable one. Such transitions can be achieved by the use of drugs to act on some genes/metabolites that affect the undesirable behavior. Due to the fact that biological phenomena are complex processes with nonlinear dynamics that are impossible to perfectly represent with a mathematical model, the need for model-free nonlinear intervention strategies that are capable of guiding the target variables to their desired values often arises. In many applications, fuzzy systems have been found to be very useful for parameter estimation, model development and control design of nonlinear processes. In this paper, a model-free fuzzy intervention strategy (that does not require a mathematical model of the biological phenomenon) is proposed to guide the target variables of biological systems to their desired values. The proposed fuzzy intervention strategy is applied to three different biological models: a glycolytic-glycogenolytic pathway model, a purine metabolism pathway model, and a generic pathway model. The simulation results for all models demonstrate the effectiveness of the proposed scheme.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 96
    facet.materialart.
    Unknown
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2012-12-20
    Description: Advertisement: This publication offers open access options for authors. IEEE Open Access Publishing.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 97
    facet.materialart.
    Unknown
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2012-12-20
    Description: Advertisement: The ACM Special Interest Group on Bioinformatics, Computational Biology, and Biomedical Informatics (SIGBioinformatics) bridges computer science, mathematics, statistics with biology and biomedicine. The mission of the ACM SIGBioinformatics is to improve our ability to develop advanced research, training, and outreach in Bioinformatics, Computational Biology, and Biomedical Informatics by stimulating interactions among researchers, educators and practitioners from related multi-disciplinary fields. The Association for Computing Machinery (ACM) is an educational and scientific computing society working to advance computing as a science and a profession. Benefits include subscriptions to Communications of the ACM, MemberNet, TechNews and CareerNews, full access to online courses and books, discounts on conferences and the option to subscribe to the ACM Digital Library.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 98
    Publication Date: 2012-12-21
    Description: Flickermeter is the common name for a system that measures the severity of flicker caused by voltage fluctuations. The output of the flickermeter is the value of the short-term flicker severity indicator. This paper presents the results of numerical simulations that reconstruct the processing chain of the flickermeter in the frequency domain. With the use of standard test signals, the characteristics of the flickermeter were determined for the case of amplitude modulation of the input signal, frequency modulation of the input signal, and for an input signal with an interharmonic component. For this simulation study, the elements of the standard IEC flickermeter signal chain, as well as a test signal source and tools for acquisition, archiving, and presentation of the obtained results, were modeled. The results are presented in a set of charts, and specific fragments of the charts are pointed out and commented on. Some examples of the influence of the input signal's bandwidth limitation on the flickermeter measurement result are presented for the cases of AM and FM modulation. In addition, diagrams that enable the evaluation of the flickermeter's linearity are also presented.
    Print ISSN: 1687-5591
    Electronic ISSN: 1687-5605
    Topics: Computer Science , Technology
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 99
    Publication Date: 2012-12-21
    Description: Motivation: Given the current costs of next-generation sequencing, large studies carry out low-coverage sequencing followed by application of methods that leverage linkage disequilibrium to infer genotypes. We propose a novel method that assumes study samples are sequenced at low coverage and genotyped on a genome-wide microarray, as in the 1000 Genomes Project (1KGP). We assume polymorphic sites have been detected from the sequencing data and that genotype likelihoods are available at these sites. We also assume that the microarray genotypes have been phased to construct a haplotype scaffold. We then phase each polymorphic site using an MCMC algorithm that iteratively updates the unobserved alleles based on the genotype likelihoods at that site and local haplotype information. We use a multivariate normal model to capture both allele frequency and linkage disequilibrium information around each site. When sequencing data are available from trios, Mendelian transmission constraints are easily accommodated into the updates. The method is highly parallelizable, as it analyses one position at a time. Results: We illustrate the performance of the method compared with other methods using data from Phase 1 of the 1KGP in terms of genotype accuracy, phasing accuracy and downstream imputation performance. We show that the haplotype panel we infer in African samples, which was based on a trio-phased scaffold, increases downstream imputation accuracy for rare variants (R² increases by >0.05 for minor allele frequency <1%), and this will translate into a boost in power to detect associations. These results highlight the value of incorporating microarray genotypes when calling variants from next-generation sequence data. Availability: The method (called MVNcall) is implemented in a C++ program and is available from http://www.stats.ox.ac.uk/~marchini/#software . Contact: marchini@stats.ox.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 100
    Publication Date: 2012-12-21
    Description: Motivation: Accurate alignment of high-throughput RNA-seq data is a challenging and yet unsolved problem because of the non-contiguous transcript structure, relatively short read lengths and constantly increasing throughput of the sequencing technologies. Currently available RNA-seq aligners suffer from high mapping error rates, low mapping speed, read length limitation and mapping biases. Results: To align our large (>80 billion reads) ENCODE Transcriptome RNA-seq dataset, we developed the Spliced Transcripts Alignment to a Reference (STAR) software based on a previously undescribed RNA-seq alignment algorithm that uses sequential maximum mappable seed search in uncompressed suffix arrays followed by seed clustering and stitching procedure. STAR outperforms other aligners by a factor of >50 in mapping speed, aligning to the human genome 550 million 2 × 76 bp paired-end reads per hour on a modest 12-core server, while at the same time improving alignment sensitivity and precision. In addition to unbiased de novo detection of canonical junctions, STAR can discover non-canonical splices and chimeric (fusion) transcripts, and is also capable of mapping full-length RNA sequences. Using Roche 454 sequencing of reverse transcription polymerase chain reaction amplicons, we experimentally validated 1960 novel intergenic splice junctions with an 80–90% success rate, corroborating the high precision of the STAR mapping strategy. Availability and implementation: STAR is implemented as a standalone C++ code. STAR is free open source software distributed under GPLv3 license and can be downloaded from http://code.google.com/p/rna-star/ . Contact: dobin@cshl.edu .
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
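    The core seed-search step described in the entry above finds, for each read position, the maximal mappable prefix in the genome using an uncompressed suffix array. The sketch below is a naive, hedged illustration of that idea on a tiny invented "genome" (quadratic construction, Python); STAR itself is a highly optimized C++ implementation and works quite differently in detail.

```python
from bisect import bisect_left

def build_suffix_array(text):
    """Naive suffix array: starting indices of all suffixes in lexicographic order."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def maximal_mappable_prefix(read, text, sa):
    """Length of the longest prefix of `read` occurring in `text`, via binary search on the SA."""
    suffixes = [text[i:] for i in sa]          # materializing suffixes is fine for a toy genome
    best = 0
    for length in range(1, len(read) + 1):
        prefix = read[:length]
        pos = bisect_left(suffixes, prefix)
        if pos < len(suffixes) and suffixes[pos].startswith(prefix):
            best = length
        else:
            break
    return best

genome = "ACGTACGGACGTT"
sa = build_suffix_array(genome)
read = "ACGTAAGG"
mmp = maximal_mappable_prefix(read, genome, sa)
print(f"maximal mappable prefix length: {mmp} ({read[:mmp]!r})")
```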