ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
Collection
  • Articles  (9,667)
Publisher
  • Oxford University Press  (8,687)
  • De Gruyter  (980)
  • American Physical Society (APS)
Years
  • 2010-2014  (9,165)
  • 1960-1964  (420)
  • 1955-1959  (82)
Topics
  • Computer Science  (8,687)
  • Architecture, Civil Engineering, Surveying  (980)
  • 1
    Publication Date: 2014-12-31
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 2
    Publication Date: 2014-08-31
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 3
    Publication Date: 2014-04-30
    Description: The notion of resilience is being utilised by an increasing number of authors keen to understand the dynamics of local and regional economies and particularly how they deal with economic shocks and recessionary crises. Within the burgeoning literature however, fairly limited attention has been paid to date to developing a robust conceptual understanding of what role policy-makers, particularly at sub-national level, might play in building economic resilience in regions. The purpose of this paper is to address this gap. Drawing on resilience literatures which employ a complex adaptive systems perspective, the paper seeks to develop a conceptual framework within which policy action, particularly at the subnational level, can be theorised and understood. It identifies three critical dimensions which frame the role and scope for policy intervention in the management of regional economic resilience: the modes and structures of governance, the types of policy interventions which help build resilience, and the horizons or timings for appropriate intervention. The paper concludes by considering what this framing means for the nature and scope of subnational policy intervention for regional economic resilience.
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 4
    Publication Date: 2014-06-30
    Description: In urban research and urban planning, "density" serves as one of the most important measures for describing the built fabric of a city. Compared with other measures, for example "mixed use", "polycentricity" or "compactness", density appears at first glance to be objectively determinable and easy to understand. On closer examination, however, it is a concept that largely eludes simple empirical measurement. To date there is no internationally accepted measure of density, and density figures for different countries, regions and cities are generally not comparable, or only to a limited extent. This paper demonstrates the analytical possibilities of new geodata applications for an objective and transparent determination of the built and use-related density of urban settlement areas. A case study of the city of Cologne demonstrates, for selected topics, the potential applications of small-scale density data.
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 5
    Publication Date: 2014-06-30
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 6
    Publication Date: 2014-10-31
    Description: In view of rising student numbers and increasing housing shortages, discussions about the student housing segment are once again attracting attention in politics, the media, the housing industry and academia. Reliable and comprehensive information about these markets and their development is an important prerequisite for decisions in social policy and the housing sector. At the same time, it is apparent that research and practice have only limited knowledge of the current development and structure of this segment and of its effects on local housing markets. This paper addresses that research gap. For the student housing segment, it first identifies the factors behind students' rising housing demand and reviews studies of student housing with regard to their relevance for housing-market policy. It then characterizes the development and structure of the supply and demand sides in their respective particularities and derives coupling effects with the local housing market. Against the background of the current debate on housing shortages, the consequences of market conditions in tenants' and landlords' markets for students are worked out, and market-specific approaches for increasing market transparency are presented.
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 7
    Publication Date: 2014-04-30
    Description: This article aims to identify the resilience of the Pomorskie Region in Poland. The analysed region belongs to a group of European regions where the negative impact of the economic crisis has not been distinctively visible. In a unitary state like Poland the resilience of the region to the crisis results to a high degree from the external macroeconomic factors of the country. However some regional features play an important role in strengthening resilience. We analysed socio-economic structures in the context of their significance for the region's resilience. Walker's and Salt's (Resilience Thinking. Sustaining ecosystems and people in a changing world, 2006) resilience attributes were used as a template for the findings. Although it is difficult to determine the level of a region's resilience in a situation of ongoing and growing recession, some conclusions may be derived from the analysis. One of the most important factors of the Pomorskie Region's resilience is the diversity of economic and social structures. The community and its social capital is undoubtedly one of the strongest attributes of the regional resilience. The Pomorskie Region's richness in ecosystem services also strengthens its resilience significantly. Overlaps in governance are mainly connected with the development of institutional embeddedness. However the current lack of governance on the level of the most dynamically developing metropolitan areas is an impediment to more dynamic development. The post-totalitarian tradition of centralised governance continues to negatively influence the tight feedbacks attribute of resilience.
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 8
    Publication Date: 2014-04-30
    Description: This introductory paper looks at recent debates on the concept of regional economic resilience and focuses on three elements of debate: (1) indicator systems and consideration of the normative content of the concept; (2) the evolutionary dimension of the concept and its inter-disciplinary linkages; (3) the policy dimension and challenges for the development of policy recommendations. The author discusses the state of the debate and presents some directions for future research priorities.
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 9
    Publication Date: 2014-02-28
    Description: The significance of large cities and metropolises is connected with the structural shift towards the service and knowledge economy and with worldwide economic interdependencies, which give rise to strategic places, or nodes, of control and steering functions. High-ranking functions that set them apart from other areas are concentrated at these nodes. The subject of this paper is a theoretical systematization and operationalization of such higher-ranking metropolitan functions of cities, based on international city concepts and on the typology of metropolitan functions: decision-making and control functions, innovation and competition functions, gateway functions and symbolic functions. For this purpose, a database of 48 indicators relating to localized functional attributes is compiled in order to capture the degree of metropolitan character of all 439 district-free cities/districts and functional-analytical metropolitan regions. The paper examines which cities have metropolitan functions and how these developed between 1995 and 2010 with regard to processes of spatial concentration and functional specialization. It is assumed that, owing to agglomeration advantages (e.g. spatial proximity or development head starts), the "large" metropolitan regions emerge as the winners of metropolization processes.
    Print ISSN: 0034-0111
    Electronic ISSN: 1869-4179
    Topics: Architecture, Civil Engineering, Surveying , Geography
    Published by De Gruyter
  • 10
    Publication Date: 2014-03-01
    Description: The paper presents numerical simulation of hysteretic live load effect in a soil-steel bridge. The effect was originally identified experimentally by Machelski [1], [2]. The truck was crossing the bridge one way and the other in the full-scale test performed. At the same time, displacements and stress in the shell were measured. The major conclusion from the research was that the measured quantities formed hysteretic loops. A numerical simulation of that effect is addressed in the present work. The analysis was performed using Flac finite difference code. The methodology of solving the mechanical problems implemented in Flac enables us to solve the problem concerning a sequence of load and non-linear mechanical behaviour of the structure. The numerical model incorporates linear elastic constitutive relations for the soil backfill, for the steel shell and the sheet piles, being a flexible substructure for the shell. Contact zone between the shell and the soil backfill is assumed to reflect elastic-plastic constitutive model. Maximum shear stress in contact zone is limited by the Coulomb condition. The plastic flow rule is described by dilation angle ψ = 0. The obtained results of numerical analysis are in fair agreement with the experimental evidence. The primary finding from the performed simulation is that the slip in the interface can be considered an explanation of the hysteresis occurrence in the charts of displacement and stress in the shell.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 11
    Publication Date: 2014-03-01
    Description: Liquefaction has always been intensely studied in parts of the world where earthquakes occur. However, the seismic activity is not the only possible cause of this phenomenon. It may in fact be triggered by some human activities, such as constructing and mining or by rail and road transport. In the paper a road embankment built across a shallow water reservoir is analyzed in terms of susceptibility to liquefaction. Two types of dynamic loadings are considered: first corresponding to an operation of a vibratory roller and second to an earthquake. In order to evaluate a susceptibility of soil to liquefaction, a factor of safety against triggering of liquefaction is used (FSTriggering). It is defined as a ratio of vertical effective stresses to the shear stresses both varying with time. For the structure considered both stresses are obtained using finite element method program, here Plaxis 2D. The plastic behavior of the cohesionless soils is modeled by means of Hardening Soil (HS) constitutive relationship, implemented in Plaxis software. As the stress tensor varies with time during dynamic excitation, the FSTriggering has to be calculated for some particular moment of time when liquefaction is most likely to occur. For the purposes of this paper it is named a critical time and established for reference point at which the pore pressures were traced in time. As a result a factor of safety distribution throughout embankment is generated. For the modeled structure, cyclic point loads (i.e., vibrating roller) present higher risk than earthquake of magnitude 5.4. Explanation why considered structure is less susceptible to earthquake than typical dam could lay in stabilizing and damping influence of water, acting here on both sides of the slope. Analogical procedure is applied to assess liquefaction susceptibility of the road embankment considered but under earthquake excitation. Only the higher water table is considered as it is the most unfavorable. Additionally the modified factor of safety is introduced, where the dynamic shear stress component is obtained at a time step when its magnitude is the highest - not necessarily at the same time step when the pore pressure reaches its peak (i.e., critical time). This procedure provides a greater margin of safety as the computed factors of safety are smaller. Method introduced in the paper presents a clear and easy way to locate liquefied zones and estimate liquefaction susceptibility of the subsoil - not only in the road embankment.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
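Note on the preceding record (item 11): the factor of safety against triggering of liquefaction used there is the time-varying ratio of vertical effective stress to shear stress, read off at the critical time when liquefaction is most likely (in the abstract, the moment the traced pore pressure peaks). In the usual notation, with symbols chosen here for illustration rather than taken from the paper,

    FS_{triggering}(t) = \frac{\sigma'_v(t)}{\tau(t)}, evaluated at t = t_{critical}

The modified factor mentioned at the end of the abstract instead takes \tau at the time step where the dynamic shear stress is largest, which yields smaller and therefore more conservative factors of safety.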
  • 12
    Publication Date: 2014-03-01
    Description: This paper is a presentation of landslide monitoring, early warning and remediation methods recommended for the Polish Carpathians. Instrumentation included standard and automatic on-line measurements with the real-time transfer of data to an Internet web server. The research was funded through EU Innovative Economy Programme and also by the SOPO Landslide Counteraction Project. The landslides investigated were characterized by relatively low rates of the displacements. These ranged from a few millimetres to several centimetres per year. Colluviums of clayey flysch deposits were of a soil-rock type with a very high plasticity and moisture content. The instrumentation consisted of 23 standard inclinometers set to depths of 5-21 m. The starting point of monitoring measurements was in January 2006. These were performed every 1-2 months over the period of 8 years. The measurements taken detected displacements from several millimetres to 40 cm set at a depth of 1-17 m. The modern, on-line monitoring and early warning system was installed in May 2010. The system is the first of its kind in Poland and only one of several such real-time systems in the world. The installation was working with the Local Road Authority in Gorlice. It contained three automatic field stations for investigation of landslide parameters to depths of 12-16 m and weather station. In-place tilt transducers and innovative 3D continuous inclinometer systems with sensors located every 0.5 m were used. It has the possibility of measuring a much greater range of movements compared to standard systems. The conventional and real-time data obtained provided a better recognition of the triggering parameters and the control of geohazard stabilizations. The monitoring methods chosen supplemented by numerical modelling could lead to more reliable forecasting of such landslides and could thus provide better control and landslide remediation possibilities also to stabilization works which prevent landslides.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 13
    Publication Date: 2014-03-01
    Description: The paper presents results of laboratory investigation and analysis of crack initiation threshold identification of dolomite samples. First, selected methods for determining crack initiation thresholds are briefly described with special attention paid to four methods: crack volume strain method [14], change in Poisson’s ratio [8], lateral strain response method [16], and dilatancy method [4]. The investigation performed on dolomite samples shows that for the uniaxial and conventional triaxial compression tests, the above mentioned methods give quite similar values, except for the crack volume strain method. Crack initiation threshold determined by this method has a distinctively lower value than that obtained by the other methods. The aim of the present paper was to review and assess these methods for identifying crack initiation threshold based on laboratory tests of dolomite samples.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 14
    Publication Date: 2014-03-01
    Description: DC resistivity methods, soundings and Electrical Resistivity Tomography, were applied to study shallow geology in the place of planned construction of an experimental flood bank. The geoelectrical surveys provided quantitative information about the spatial presence of the various geoelectrical/geological layers: alluvial soils, sands, gravels and clays. ERT allowed maps to be constructed showing subsurface structure. A combination of geoelectrical and geological information resulted in a much better identification of the geological structure.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 15
    Publication Date: 2014-03-01
    Description: Crushed or dimensional rocks have been used as natural construction material, decoration stone or as material for artistic sculptures. Old historical towns, not only in Slovakia, have had centuries of experience with the use of stone for construction purposes. Whole buildings were made from dimensional stone, like sandstone, limestone or rhyolite. Pavements were made especially from basalt, andesite, rhyolite or granite. The most common modern construction material, concrete, also includes large amounts of crushed rock, especially limestone, dolostone and andesite. However, rock, like any other material, starts to deteriorate when exposed to exogenous processes. Mechanical weathering in particular can be very intensive if rock with unsuitable properties is used. For a long time it was believed that repeated freezing and thawing, in relation to high absorption, is the main reason for rock deterioration. In Slovakia, for many years, high water absorption was set as an exclusion criterion for the use of rocks and stones in the building industry. Only after 1989 was absorption accepted as a merely informational rock property and not an exclusion criterion. The reason for the change was not an understanding of the relationship between porosity and rock deterioration, but rather the generally good experience with some highly porous rocks used in constructions exposed to severe weather conditions, proving a lack of relationship between the freeze-thaw resistance of rocks and water absorption. Results of recent worldwide research suggest that the key to the resistance of rocks against deterioration lies not in the absorption but in the structure of rock pores in relation to the thermodynamic properties of pore water and the tensile strength of rocks and rock minerals. This article also presents some results of research on rock deterioration and pore structure performed on 88 rock samples. The results divide the rocks tested into two groups: group N, in which the pore water does not freeze even when the temperature decreases to -20 °C, and group F, in which the pore water freezes. It has been found that the rocks from group N contain a critical portion of adsorbed water in their pores which prevents freezing of the pore water. The presence of adsorbed water enables thermodynamic processes related to osmosis which are dominantly responsible for the deterioration of rocks from group N. A high correlation (R = 0.81) between the content of adsorbed water and freeze-thaw loss was proved and can be used as a durability estimator for rocks from group N. The deterioration of rocks from group F is caused not only by osmosis, but also by some other processes and influences, such as hydraulic pressure, permeability, grain size, rock and mineral tensile strength, degree of saturation, etc., and the deterioration cannot yet be predicted without the freeze-thaw test. Since the content of absorbed water and the ratio between adsorbed and bulk water (of which the absorbed water consists) are controlled by the porosity and pore structure, it can be concluded that the deterioration of some rocks is strongly related to rock pore structure.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 16
    Publication Date: 2014-03-01
    Description: The needle probe test, as a thermal conductivity measurement method, has become very popular in recent years. In the present study, the efficiency of this methodology, for the case of composite materials, is investigated based on the numerical simulations. The material under study is a two-phase composite with periodic microstructure of “matrix-inclusion” type. Two-scale analysis, incorporating micromechanics approach, is performed. First, the effective thermal conductivity of the composite considered is found by the solution of the appropriate boundary value problem stated for the single unit cell. Next, numerical simulations of the needle probe test are carried out. In this case, two different locations of the measuring sensor are considered. It is shown that the “equivalent” conductivity, derived from the probe test, is strongly affected by the location of the sensor. Moreover, comparing the results obtained for different scales, one can notice that the “equivalent” conductivity cannot be interpreted as the effective one for the composites considered. Hence, a crude approximation of the effective property is proposed based on the volume fractions of constituents and the equivalent conductivities derived from different sensor locations.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 17
    Publication Date: 2014-03-01
    Description: The paper is focused on changes of geomechanical properties with regard to loss of long-term resistance in studies on the influence of salt spray on the structure of selected rocks from Poland. The investigation has shown that the analyzed rock material shows variable susceptibility to this corrosive factor. The most susceptible to ageing by salt mist were Śmiłów sandstones, whose progressive deterioration was observed in subsequent cycles. Analysis of resistance parameters has shown decreased resistance to uniaxial compression exceeding 30% also in the case of the Józefów limestones. Limestones from Raciszyn have revealed high resistance to ageing by salt mist.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 18
    Publication Date: 2014-03-01
    Description: A literature review concerning methane emissions in Poland showed that the National Greenhouse Inventory 2007 [13] was published in 2009. It was prepared firstly to meet Poland's obligations resulting from point 3.1 of Decision No. 280/2004/EC of the European Parliament and of the Council of 11 February 2004, concerning a mechanism for monitoring Community greenhouse gas emissions and for implementing the Kyoto Protocol, and secondly for the United Nations Framework Convention on Climate Change (UNFCCC) and the Kyoto Protocol. The National Greenhouse Inventory states that there are no detailed data concerning methane emissions from collieries in the Polish mining industry. For this reason, methane emission in the methane coal mines of Górnośląskie Zagłębie Węglowe - GZW (Upper Silesian Coal Basin - USCB) in Poland was meticulously studied and evaluated. The methodology applied for estimating methane emission from the GZW coal mining system covers the four basic sources of its emission, including methane emission during the mining and post-mining processes. This approach follows the IPCC guidelines of 2006 [10]. The updated methods (IPCC 2006) for estimating the methane emissions of hard coal mines (active and abandoned ones) in Poland assume that the methane emission factor (EF) is calculated from the coal output of the mines and actual values of absolute methane content. The result of verifying the method of estimating methane emission during the mining process for Polish coal mines is an equation for the methane emission factor EF.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
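Note on the preceding record (item 18): the abstract states that the methane emission factor EF is calculated from the coal output of the mines and measured values of absolute methane content. A generic formulation consistent with that description, with notation chosen here for illustration (the paper derives its own equation), is

    EF = \frac{Q_{CH_4}}{P}, \qquad E_{CH_4} = EF \cdot P \cdot \rho_{CH_4}

where Q_CH4 is the measured absolute methane emission of a mine (m^3 per year), P is its coal output (tonnes per year), and \rho_CH4 (about 0.67 kg/m^3 at ambient conditions) converts the emitted gas volume to mass.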
  • 19
    Publication Date: 2014-03-01
    Description: The article presents an application of X-ray microtomography for identification of the carbonation zone in concrete material. A concrete specimen subjected earlier to harsh environmental conditions is investigated. The material is firstly checked with the use of chemical corrosion indicators and then is subjected to microstructural analysis performed with the use of X-ray microtomography. Two different settings of scanning parameters are applied implying the image resolutions of approximately 14 μm per 1 pixel and about 7 μm per 1 pixel, respectively. The results obtained are then compared and analyzed. The depth of the carbonation zone is evaluated based on the attenuation curve. The paper highlights also the significance of the corrosion phenomenon in concrete structures. Details of the deterioration mechanisms in concrete are shortly presented.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 20
    Publication Date: 2014-03-01
    Description: Polish standards concerning field investigation with the use of a Weight Sounding Test (WST) probe give interpretation of results for non-cohesive soils only. The lack of such interpretation for cohesive soils excludes this testing equipment from use. This paper presents the results of geotechnical site investigation and laboratory tests performed for Miocene clays in Carpathian Foredeep in the Cracow area. Based on the analysis of the results a correlation was determined between the characteristic values for the WST probe (number of half-turns NWST) and the selected properties of Miocene clays. The article is an attempt to create a complete interpretation of test results obtained for cohesive soil with WST equipment.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 21
    Publication Date: 2014-03-01
    Description: During the construction of an airport terminal it was found that as a result of the hydrostatic pressure of underground water the foundation plate of the building had dangerously shifted in the direction opposite to that of the gravitational forces. The only effective measure was to introduce a drainage system on the site. The complex geology of the area indicated that two independent drainage systems, i.e., a horizontal system in the Quaternary beds and a vertical system in the Tertiary water-bearing levels, were necessary. This paper presents numerical FEM calculations of the two drainage systems being part of the airport terminal drainage design. The computer simulation which was carried out took into consideration the actual effect of the drainage systems and their impact on the depression cone being formed in the two aquifers.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 22
    Publication Date: 2014-03-01
    Description: Most underground hydrocarbon storage facilities are located in depleted natural gas reservoirs. A seismic survey is the most economical source of detailed subsurface information. The inversion of a seismic section to obtain a pseudoacoustic impedance section makes it possible to extract detailed subsurface information. The seismic wavelet parameters and noise influence the resolution: low signal parameters, especially a long signal duration time, and the presence of noise decrease the pseudoimpedance resolution. Deriving an approximation of the distribution of acoustic pseudoimpedance from measured or modelled seismic data leads to visualisations and images useful for identifying stratum homogeneity. In this paper, the resolution of the geologic section image is improved by applying the minimum entropy deconvolution method before inversion. The author proposes context-based and adaptive image transformations and edge detection methods as a way to increase the effectiveness of correct interpretation of simulated images. In the paper, edge detection algorithms using the Sobel, Prewitt, Roberts and Canny operators as well as the Laplacian of Gaussian method are emphasised. Wiener filtering of the image transformation improves the interpretation of the rock section structure, relating the pseudoimpedance matrix to the proper acoustic pseudoimpedance value corresponding to the selected geologic stratum. The goal of the study is to develop applications of image transformation tools to inhomogeneity detection in salt deposits.
    Print ISSN: 0137-6365
    Electronic ISSN: 2083-831X
    Topics: Architecture, Civil Engineering, Surveying , Geosciences
    Published by De Gruyter
  • 23
    Publication Date: 2014-11-07
    Description: Motivation: Mapping of high-throughput sequencing data and other bulk sequence comparison applications have motivated a search for high-efficiency sequence alignment algorithms. The bit-parallel approach represents individual cells in an alignment scoring matrix as bits in computer words and emulates the calculation of scores by a series of logic operations composed of AND, OR, XOR, complement, shift and addition. Bit-parallelism has been successfully applied to the longest common subsequence (LCS) and edit-distance problems, producing fast algorithms in practice. Results: We have developed BitPAl, a bit-parallel algorithm for general, integer-scoring global alignment. Integer-scoring schemes assign integer weights for match, mismatch and insertion/deletion. The BitPAl method uses structural properties in the relationship between adjacent scores in the scoring matrix to construct classes of efficient algorithms, each designed for a particular set of weights. In timed tests, we show that BitPAl runs 7–25 times faster than a standard iterative algorithm. Availability and implementation: Source code is freely available for download at http://lobstah.bu.edu/BitPAl/BitPAl.html . BitPAl is implemented in C and runs on all major operating systems. Contact : jloving@bu.edu or yhernand@bu.edu or gbenson@bu.edu Supplementary information : Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
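To illustrate the bit-parallel technique referenced in the preceding record (item 23), the sketch below implements the classic bit-vector computation of unit-cost global edit distance (in the spirit of Myers, 1999): one column of the dynamic-programming matrix is encoded as +1/-1 delta bit-vectors in machine words and updated with AND, OR, XOR, complement, shift and addition, the operation set named in the abstract. This is an illustrative Python sketch, not BitPAl itself; BitPAl generalizes the scheme to arbitrary integer match/mismatch/indel weights and is implemented in C.

    def bit_parallel_edit_distance(pattern, text):
        """Unit-cost global edit distance via bit-vectors (Myers-style sketch)."""
        m = len(pattern)
        if m == 0:
            return len(text)
        mask = (1 << m) - 1           # keep only m bits per vector
        high = 1 << (m - 1)           # bit of the last pattern position
        peq = {}                      # peq[c]: bit i set iff pattern[i] == c
        for i, c in enumerate(pattern):
            peq[c] = peq.get(c, 0) | (1 << i)
        pv, mv, score = mask, 0, m    # vertical +1/-1 deltas, current distance
        for c in text:
            eq = peq.get(c, 0)
            xv = eq | mv
            xh = (((eq & pv) + pv) ^ pv) | eq
            ph = mv | (~(xh | pv) & mask)   # horizontal +1 deltas
            mh = pv & xh                    # horizontal -1 deltas
            if ph & high:
                score += 1
            elif mh & high:
                score -= 1
            ph = ((ph << 1) | 1) & mask     # low bit set: boundary row D[0][j] = j
            mh = (mh << 1) & mask
            pv = (mh | (~(xv | ph) & mask)) & mask
            mv = ph & xv
        return score

For example, bit_parallel_edit_distance("kitten", "sitting") returns 3.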
  • 24
    Publication Date: 2014-11-07
    Description: Next-generation sequencing (NGS) has a large potential in HIV diagnostics, and genotypic prediction models have been developed and successfully tested in the recent years. However, albeit being highly accurate, these computational models lack computational efficiency to reach their full potential. In this study, we demonstrate the use of graphics processing units (GPUs) in combination with a computational prediction model for HIV tropism. Our new model named gCUP, parallelized and optimized for GPU, is highly accurate and can classify >175 000 sequences per second on an NVIDIA GeForce GTX 460. The computational efficiency of our new model is the next step to enable NGS technologies to reach clinical significance in HIV diagnostics. Moreover, our approach is not limited to HIV tropism prediction, but can also be easily adapted to other settings, e.g. drug resistance prediction. Availability and implementation: The source code can be downloaded at http://www.heiderlab.de Contact: d.heider@wz-straubing.de
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 25
    Publication Date: 2014-11-07
    Description: : We present a new method to incrementally construct the FM-index for both short and long sequence reads, up to the size of a genome. It is the first algorithm that can build the index while implicitly sorting the sequences in the reverse (complement) lexicographical order without a separate sorting step. The implementation is among the fastest for indexing short reads and the only one that practically works for reads of averaged kilobases in length. Availability and implementation: https://github.com/lh3/ropebwt2 Contact: hengli@broadinstitute.org
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 26
    Publication Date: 2014-11-07
    Description: : AliView is an alignment viewer and editor designed to meet the requirements of next-generation sequencing era phylogenetic datasets. AliView handles alignments of unlimited size in the formats most commonly used, i.e. FASTA, Phylip, Nexus, Clustal and MSF. The intuitive graphical interface makes it easy to inspect, sort, delete, merge and realign sequences as part of the manual filtering process of large datasets. AliView also works as an easy-to-use alignment editor for small as well as large datasets. Availability and implementation: AliView is released as open-source software under the GNU General Public License, version 3.0 (GPLv3), and is available at GitHub ( www.github.com/AliView ). The program is cross-platform and extensively tested on Linux, Mac OS X and Windows systems. Downloads and help are available at http://ormbunkar.se/aliview Contact: anders.larsson@ebc.uu.se Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 27
    Publication Date: 2014-11-07
    Description: Motivation: The ability to accurately read the order of nucleotides in DNA and RNA is fundamental for modern biology. Errors in next-generation sequencing can lead to many artifacts, from erroneous genome assemblies to mistaken inferences about RNA editing. Uneven coverage in datasets also contributes to false corrections. Result: We introduce Trowel, a massively parallelized and highly efficient error correction module for Illumina read data. Trowel both corrects erroneous base calls and boosts base qualities based on the k-mer spectrum. With high-quality k-mers and relevant base information, Trowel achieves high accuracy for different short read sequencing applications. The latency in the data path has been significantly reduced because of efficient data access and data structures. In performance evaluations, Trowel was highly competitive with other tools regardless of coverage, genome size, read length and fragment size. Availability and implementation: Trowel is written in C++ and is provided under the General Public License v3.0 (GPLv3). It is available at http://trowel-ec.sourceforge.net . Contact: euncheon.lim@tue.mpg.de or weigel@tue.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
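As background to the preceding record (item 27): k-mer-spectrum error correction exploits the fact that k-mers containing a sequencing error are rare in the read set, while error-free k-mers from adequately covered positions occur many times. The Python sketch below shows the idea in its simplest form (count all k-mers, then try single-base substitutions that turn a rare k-mer into a frequent one). It is illustrative only and is not Trowel's actual algorithm, which in addition boosts base qualities and is massively parallelized.

    from collections import Counter

    def kmer_spectrum(reads, k):
        """Count every k-mer occurring in the read set."""
        counts = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1
        return counts

    def correct_read(read, counts, k, solid=3):
        """Substitute a base when that turns a weak k-mer (count < solid) into a solid one."""
        bases = list(read)
        for i in range(len(bases) - k + 1):
            if counts["".join(bases[i:i + k])] >= solid:
                continue                      # this k-mer is already trusted
            for j in range(i, i + k):         # try each position in the weak k-mer
                original = bases[j]
                for alt in "ACGT":
                    if alt == original:
                        continue
                    bases[j] = alt
                    if counts["".join(bases[i:i + k])] >= solid:
                        break                 # keep this correction
                else:
                    bases[j] = original       # no solid alternative here; revert
                    continue
                break                         # corrected; move to the next window
        return "".join(bases)

For example, with reads = ["ACGTACGT", "ACGTACGT", "ACGAACGT"] and k = 4, correct_read("ACGAACGT", kmer_spectrum(reads, 4), 4, solid=2) restores "ACGTACGT".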
  • 28
    Publication Date: 2014-11-07
    Description: The application of protein–protein docking in large-scale interactome analysis is a major challenge in structural bioinformatics and requires huge computing resources. In this work, we present MEGADOCK 4.0, an FFT-based docking software that makes extensive use of recent heterogeneous supercomputers and shows powerful, scalable performance of >97% strong scaling. Availability and Implementation: MEGADOCK 4.0 is written in C++ with OpenMPI and NVIDIA CUDA 5.0 (or later) and is freely available to all academic and non-profit users at: http://www.bi.cs.titech.ac.jp/megadock . Contact: akiyama@cs.titech.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 29
    Publication Date: 2014-11-07
    Description: Motivation: The identification of active transcriptional regulatory elements is crucial to understand regulatory networks driving cellular processes such as cell development and the onset of diseases. It has recently been shown that chromatin structure information, such as DNase I hypersensitivity (DHS) or histone modifications, significantly improves cell-specific predictions of transcription factor binding sites. However, no method has so far successfully combined both DHS and histone modification data to perform active binding site prediction. Results: We propose here a method based on hidden Markov models to integrate DHS and histone modifications occupancy for the detection of open chromatin regions and active binding sites. We have created a framework that includes treatment of genomic signals, model training and genome-wide application. In a comparative analysis, our method obtained a good trade-off between sensitivity versus specificity and superior area under the curve statistics than competing methods. Moreover, our technique does not require further training or sequence information to generate binding location predictions. Therefore, the method can be easily applied on new cell types and allow flexible downstream analysis such as de novo motif finding. Availability and implementation: Our framework is available as part of the Regulatory Genomics Toolbox. The software information and all benchmarking data are available at http://costalab.org/wp/dh-hmm . Contact: ivan.costa@rwth-aachen.de or eduardo.gusmao@rwth-aachen.de Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 30
    Publication Date: 2014-11-07
    Description: Motivation: A proper target or marker is essential in any diagnosis (e.g. an infection or cancer). An ideal diagnostic target should be both conserved in and unique to the pathogen. Currently, these targets can only be identified manually, which is time-consuming and usually error-prone. Because of the increasingly frequent occurrences of emerging epidemics and multidrug-resistant ‘superbugs’, a rapid diagnostic target identification process is needed. Results: A new method that can identify uniquely conserved regions (UCRs) as candidate diagnostic targets for a selected group of organisms solely from their genomic sequences has been developed and successfully tested. Using a sequence-indexing algorithm to identify UCRs and a k-mer integer-mapping model for computational efficiency, this method has successfully identified UCRs within the bacteria domain for 15 test groups, including pathogenic, probiotic, commensal and extremophilic bacterial species or strains. Based on the identified UCRs, new diagnostic primer sets were designed, and their specificity and efficiency were tested by polymerase chain reaction amplifications from both pure isolates and samples containing mixed cultures. Availability and implementation: The UCRs identified for the 15 bacterial species are now freely available at http://ucr.synblex.com . The source code of the programs used in this study is accessible at http://ucr.synblex.com/bacterialIdSourceCode.d.zip Contact: yazhousun@synblex.com Supplementary Information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 31
    Publication Date: 2014-11-07
    Description: Motivation: A popular method for classification of protein domain movements apportions them into two main types: those with a ‘hinge’ mechanism and those with a ‘shear’ mechanism. The intuitive assignment of domain movements to these classes has limited the number of domain movements that can be classified in this way. Furthermore, whether intended or not, the term ‘shear’ is often interpreted to mean a relative translation of the domains. Results: Numbers of occurrences of four different types of residue contact changes between domains were optimally combined by logistic regression using the training set of domain movements intuitively classified as hinge and shear to produce a predictor for hinge and shear. This predictor was applied to give a 10-fold increase in the number of examples over the number previously available with a high degree of precision. It is shown that overall a relative translation of domains is rare, and that there is no difference between hinge and shear mechanisms in this respect. However, the shear set contains significantly more examples of domains having a relative twisting movement than the hinge set. The angle of rotation is also shown to be a good discriminator between the two mechanisms. Availability and implementation: Results are free to browse at http://www.cmp.uea.ac.uk/dyndom/interface/ . Contact: sjh@cmp.uea.ac.uk . Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
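The classification step described in the preceding record (item 31), combining counts of four types of inter-domain residue-contact changes by logistic regression to separate hinge from shear movements, can be sketched as below. The feature columns and all numbers are made-up placeholders; the published predictor is trained on the curated set of intuitively classified domain movements mentioned in the abstract.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Rows: domain movements. Columns: counts of four (hypothetical) categories of
    # residue-contact changes between the two domains observed during the movement.
    X_train = np.array([
        [12,  3,  5, 40],
        [30, 25, 18, 10],
        [ 8,  2,  4, 35],
        [28, 22, 15,  8],
    ])
    y_train = np.array([0, 1, 0, 1])              # 0 = hinge, 1 = shear (toy labels)

    clf = LogisticRegression().fit(X_train, y_train)

    new_movement = np.array([[25, 20, 14, 12]])   # contact-change counts for an unseen movement
    print("predicted class:", clf.predict(new_movement)[0])
    print("P(shear):", clf.predict_proba(new_movement)[0, 1])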
  • 32
    Publication Date: 2014-11-07
    Description: Motivation: Recent studies on human disease have revealed that aberrant interaction between proteins probably underlies a substantial number of human genetic diseases. This suggests a need to investigate disease inheritance mode using interaction, and based on which to refresh our conceptual understanding of a series of properties regarding inheritance mode of human disease. Results: We observed a strong correlation between the number of protein interactions and the likelihood of a gene causing any dominant diseases or multiple dominant diseases, whereas no correlation was observed between protein interaction and the likelihood of a gene causing recessive diseases. We found that dominant diseases are more likely to be associated with disruption of important interactions. These suggest inheritance mode should be understood using protein interaction. We therefore reviewed the previous studies and refined an interaction model of inheritance mode, and then confirmed that this model is largely reasonable using new evidences. With these findings, we found that the inheritance mode of human genetic diseases can be predicted using protein interaction. By integrating the systems biology perspectives with the classical disease genetics paradigm, our study provides some new insights into genotype–phenotype correlations. Contact: haodapeng@ems.hrbmu.edu.cn or biofomeng@hotmail.com Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 33
    Publication Date: 2014-11-07
    Description: : Recently, several high profile studies collected cell viability data from panels of cancer cell lines treated with many drugs applied at different concentrations. Such drug sensitivity data for cancer cell lines provide suggestive treatments for different types and subtypes of cancer. Visualization of these datasets can reveal patterns that may not be obvious by examining the data without such efforts. Here we introduce Drug/Cell-line Browser (DCB), an online interactive HTML5 data visualization tool for interacting with three of the recently published datasets of cancer cell lines/drug-viability studies. DCB uses clustering and canvas visualization of the drugs and the cell lines, as well as a bar graph that summarizes drug effectiveness for the tissue of origin or the cancer subtypes for single or multiple drugs. DCB can help in understanding drug response patterns and prioritizing drug/cancer cell line interactions by tissue of origin or cancer subtype. Availability and implementation: DCB is an open source Web-based tool that is freely available at: http://www.maayanlab.net/LINCS/DCB Contact: avi.maayan@mssm.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 34
    Publication Date: 2014-12-04
    Description: Motivation : Structural variation is common in human and cancer genomes. High-throughput DNA sequencing has enabled genome-scale surveys of structural variation. However, the short reads produced by these technologies limit the study of complex variants, particularly those involving repetitive regions. Recent ‘third-generation’ sequencing technologies provide single-molecule templates and longer sequencing reads, but at the cost of higher per-nucleotide error rates. Results : We present MultiBreak-SV, an algorithm to detect structural variants (SVs) from single molecule sequencing data, paired read sequencing data, or a combination of sequencing data from different platforms. We demonstrate that combining low-coverage third-generation data from Pacific Biosciences (PacBio) with high-coverage paired read data is advantageous on simulated chromosomes. We apply MultiBreak-SV to PacBio data from four human fosmids and show that it detects known SVs with high sensitivity and specificity. Finally, we perform a whole-genome analysis on PacBio data from a complete hydatidiform mole cell line and predict 1002 high-probability SVs, over half of which are confirmed by an Illumina-based assembly. Availability and implementation : MultiBreak-SV is available at http://compbio.cs.brown.edu/software/ . Contact : annaritz@vt.edu or braphael@cs.brown.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 35
    Publication Date: 2014-12-04
    Description: Motivation: Insertions play an important role in genome evolution. However, such variants are difficult to detect from short-read sequencing data, especially when they exceed the paired-end insert size. Many approaches have been proposed to call short insertion variants based on paired-end mapping. However, there remains a lack of practical methods to detect and assemble long variants. Results: We propose here an original method, called MindTheGap, for the integrated detection and assembly of insertion variants from re-sequencing data. Importantly, it is designed to call insertions of any size, whether they are novel or duplicated, homozygous or heterozygous in the donor genome. MindTheGap uses an efficient k-mer-based method to detect insertion sites in a reference genome, and subsequently assemble them from the donor reads. MindTheGap showed high recall and precision on simulated datasets of various genome complexities. When applied to real Caenorhabditis elegans and human NA12878 datasets, MindTheGap detected and correctly assembled insertions >1 kb, using at most 14 GB of memory. Availability and implementation: http://mindthegap.genouest.org Contact: guillaume.rizk@inria.fr or claire.lemaitre@inria.fr
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 36
    Publication Date: 2014-12-04
    Description: Motivation: Most tumor samples are a heterogeneous mixture of cells, including admixture by normal (non-cancerous) cells and subpopulations of cancerous cells with different complements of somatic aberrations. This intra-tumor heterogeneity complicates the analysis of somatic aberrations in DNA sequencing data from tumor samples. Results: We describe an algorithm called THetA2 that infers the composition of a tumor sample—including not only tumor purity but also the number and content of tumor subpopulations—directly from both whole-genome (WGS) and whole-exome (WXS) high-throughput DNA sequencing data. This algorithm builds on our earlier Tumor Heterogeneity Analysis (THetA) algorithm in several important directions. These include improved ability to analyze highly rearranged genomes using a variety of data types: both WGS sequencing (including low ~7 x coverage) and WXS sequencing. We apply our improved THetA2 algorithm to WGS (including low-pass) and WXS sequence data from 18 samples from The Cancer Genome Atlas (TCGA). We find that the improved algorithm is substantially faster and identifies numerous tumor samples containing subclonal populations in the TCGA data, including in one highly rearranged sample for which other tumor purity estimation algorithms were unable to estimate tumor purity. Availability and implementation: An implementation of THetA2 is available at http://compbio.cs.brown.edu/software Contact: layla@cs.brown.edu or braphael@brown.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 37
    Publication Date: 2014-01-16
    Description: The matrix method, due to Bibel and Andrews, is a proof procedure designed for automated theorem-proving. We show that underlying this method is a fully structured combinatorial model of conventional classical proof theory.
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
  • 38
    Publication Date: 2014-01-16
    Description: Proof search in inference systems such as the sequent calculus is a process of discovery. Once a proof is found, there is often information in the proof which is redundant. In this article we show how to detect and eliminate certain kinds of redundant formulae from a given proof, and in particular in a way which does not require further proof search or any rearrangement of the proof found. Our technique involves adding constraints to the inference rules, which are used once the proof is complete to determine redundant formulae and how they may be eliminated. We show how this technique can be applied to propositional linear logic, and prove its correctness for this logic. We also discuss how our approach can be extended to other logics without much change.
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
  • 39
    Publication Date: 2014-01-22
    Description: Good accessibility of publicly funded research data is essential to secure an open scientific system and eventually becomes mandatory [Wellcome Trust will Penalise Scientists Who Don’t Embrace Open Access . The Guardian 2012]. By the use of high-throughput methods in many research areas from physics to systems biology, large data collections are increasingly important as raw material for research. Here, we present strategies worked out by international and national institutions targeting open access to publicly funded research data via incentives or obligations to share data. Funding organizations such as the British Wellcome Trust therefore have developed data sharing policies and request commitment to data management and sharing in grant applications. Increased citation rates are a profound argument for sharing publication data. Pre-publication sharing might be rewarded by a data citation credit system via digital object identifiers (DOIs) which have initially been in use for data objects. Besides policies and incentives, good practice in data management is indispensable. However, appropriate systems for data management of large-scale projects for example in systems biology are hard to find. Here, we give an overview of a selection of open-source data management systems proved to be employed successfully in large-scale projects.
    Print ISSN: 1467-5463
    Electronic ISSN: 1477-4054
    Topics: Biology , Computer Science
  • 40
    Publication Date: 2014-01-22
    Description: Genome-scale metabolic network reconstructions are now routinely used in the study of metabolic pathways, their evolution and design. The development of such reconstructions involves the integration of information on reactions and metabolites from the scientific literature as well as public databases and existing genome-scale metabolic models. The reconciliation of discrepancies between data from these sources generally requires significant manual curation, which constitutes a major obstacle in efforts to develop and apply genome-scale metabolic network reconstructions. In this work, we discuss some of the major difficulties encountered in the mapping and reconciliation of metabolic resources and review three recent initiatives that aim to accelerate this process, namely BKM-react, MetRxn and MNXref (presented in this article). Each of these resources provides a pre-compiled reconciliation of many of the most commonly used metabolic resources. By reducing the time required for manual curation of metabolite and reaction discrepancies, these resources aim to accelerate the development and application of high-quality genome-scale metabolic network reconstructions and models.
    Print ISSN: 1467-5463
    Electronic ISSN: 1477-4054
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 41
    Publication Date: 2014-01-22
    Description: microRNAs (miRNAs) are small endogenous non-coding RNAs that function as the universal specificity factors in post-transcriptional gene silencing. Discovering miRNAs, identifying their targets and further inferring miRNA functions have been a critical strategy for understanding normal biological processes of miRNAs and their roles in the development of disease. In this review, we focus on computational methods of inferring miRNA functions, including miRNA functional annotation and inferring miRNA regulatory modules, by integrating heterogeneous data sources. We also briefly introduce the research in miRNA discovery and miRNA-target identification with an emphasis on the challenges to computational biology.
    Print ISSN: 1467-5463
    Electronic ISSN: 1477-4054
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 42
    Publication Date: 2014-01-22
    Description: Supermatrix and supertree analyses are frequently used to more accurately recover vertical evolutionary history but debate still exists over which method provides greater reliability. Traditional methods that resolve relationships among organisms from single genes are often unreliable because of the frequent lack of strong phylogenetic signal and the presence of systematic artifacts. Methods developed to reconstruct organismal history from multiple genes can be divided into supermatrix and supertree approaches. A supermatrix analysis consists of the concatenation of multiple genes into a single, possibly partitioned alignment, from which phylogenies are reconstructed using a variety of approaches. Supertrees build consensus trees from the topological information contained within individual gene trees. Both methods are now widely used and have been demonstrated to solve previously ambiguous or unresolved phylogenies with high statistical support. However, the amount of misleading signal needed to induce erroneous phylogenies for both strategies is still unknown. Using genome simulations, we test the accuracy of supertree and supermatrix approaches in recovering the true organismal phylogeny under increased amounts of horizontally transferred genes and changes in substitution rates. Our results show that overall, supermatrix approaches are preferable when a low amount of gene transfer is suspected to be present in the dataset, while supertrees have greater reliability in the presence of a moderate amount of misleading gene transfers. In the face of very high or very low substitution rates without horizontal gene transfers, supermatrix approaches outperform supertrees as individual gene trees remain unresolved and additional sequences contribute to a congruent phylogenetic signal.
    Print ISSN: 1467-5463
    Electronic ISSN: 1477-4054
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 43
    Publication Date: 2014-01-16
    Description: Motivation: We have recently characterized an instance of alternative splicing that differs from the canonical gene transcript by deletion of a length of sequence not divisible by three, but where translation can be rescued by an alternative start codon. This results in a predicted protein in which the amino terminus differs markedly in sequence from the known protein product(s), as it is translated from an alternative reading frame. Automated pipelines have annotated thousands of splice variants but have overlooked these protein isoforms, leading to them being underrepresented in current databases. Results: Here we describe 1849 human and 733 mouse transcripts that can be translated from an alternate ATG. Of these, >80% have not been annotated previously. Those conserved between human and mouse genomes (and hence under likely evolutionary selection) are identified. We provide mass spectrometry evidence for translation of selected transcripts. Of the described splice variants, only one has previously been studied in detail and converted the encoded protein from an activator of cell function to a suppressor, demonstrating that these splice variants can result in profound functional change. We investigate the potential functional effects of this splicing using a variety of bioinformatic tools. The 2582 variants we describe are involved in a wide variety of biological processes, and therefore open many new avenues of research. Contact: aude.fahrer@anu.edu.au Supplementary Information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
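The record above (entry 43) describes splice variants whose reading frame is rescued by a downstream alternative start codon. The following is a minimal, hypothetical sketch (not the authors' pipeline) of how such a rescue could be screened for: given a variant transcript, it looks for downstream ATG codons that open a reading frame free of premature stop codons. The sequence, length threshold and helper names are illustrative assumptions.

```python
# Illustrative sketch only: screen a transcript for downstream ATG codons that
# open a reading frame without a premature stop (a possible "rescued" ORF).
STOP_CODONS = {"TAA", "TAG", "TGA"}

def codons(seq, start):
    """Yield (position, codon) for successive codons of seq beginning at `start`."""
    for i in range(start, len(seq) - 2, 3):
        yield i, seq[i:i + 3]

def rescued_orfs(transcript, min_codons=30):
    """Return (start, stop_or_end, n_codons) for every ATG whose frame reaches
    at least `min_codons` codons before hitting a stop codon."""
    hits = []
    for atg in (i for i in range(len(transcript) - 2) if transcript[i:i + 3] == "ATG"):
        n = 0
        end = None
        for pos, codon in codons(transcript, atg):
            if codon in STOP_CODONS:
                end = pos
                break
            n += 1
        if n >= min_codons:
            hits.append((atg, end if end is not None else len(transcript), n))
    return hits

if __name__ == "__main__":
    # Toy variant transcript (hypothetical): a deletion upstream shifted the
    # canonical frame, but a downstream ATG may still open a usable ORF.
    variant = "GCCAGA" + "ATG" + "GCTGCTGCA" * 40 + "TAA" + "GGGCCC"
    for start, end, n in rescued_orfs(variant):
        print(f"candidate ORF: start={start}, end={end}, codons={n}")
```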
  • 44
    Publication Date: 2014-01-16
    Description: Motivation : High-throughput sequencing technologies enable the genome-wide analysis of the impact of genetic variation on molecular phenotypes at unprecedented resolution. However, although powerful, these technologies can also introduce unexpected artifacts. Results : We investigated the impact of library amplification bias on the identification of allele-specific (AS) molecular events from high-throughput sequencing data derived from chromatin immunoprecipitation assays (ChIP-seq). Putative AS DNA binding activity for RNA polymerase II was determined using ChIP-seq data derived from lymphoblastoid cell lines of two parent–daughter trios. We found that, at high-sequencing depth, many significant AS binding sites suffered from an amplification bias, as evidenced by a larger number of clonal reads representing one of the two alleles. To alleviate this bias, we devised an amplification bias detection strategy, which filters out sites with low read complexity and sites featuring a significant excess of clonal reads. This method will be useful for AS analyses involving ChIP-seq and other functional sequencing assays. Availability : The R package absfilter for library clonality simulations and detection of amplification-biased sites is available from http://updepla1srv1.epfl.ch/waszaks/absfilter Contact : sebastian.waszak@epfl.ch or bart.deplancke@epfl.ch Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
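Entry 44 above filters allele-specific sites whose reads are dominated by PCR clones. A rough, hypothetical re-implementation of the general idea (not the absfilter package itself) is sketched below: for each site it measures read-start complexity per allele and flags sites where one allele's reads collapse onto too few distinct start positions. The threshold and data layout are assumptions for illustration.

```python
# Illustrative sketch: flag putative allele-specific sites whose supporting
# reads look amplification-biased (many clonal reads sharing a start position).
from collections import Counter

def clonality(read_starts):
    """Fraction of reads that are clonal copies, i.e. 1 - distinct/total."""
    if not read_starts:
        return 0.0
    return 1.0 - len(set(read_starts)) / len(read_starts)

def flag_biased_sites(sites, max_clonality=0.5, min_reads=10):
    """`sites` maps site_id -> {allele: [read start positions]}.
    A site is flagged if any allele with enough reads has low start-position
    complexity (clonality above `max_clonality`)."""
    flagged = {}
    for site, alleles in sites.items():
        worst = max((clonality(starts) for starts in alleles.values()
                     if len(starts) >= min_reads), default=0.0)
        flagged[site] = worst > max_clonality
    return flagged

if __name__ == "__main__":
    toy_sites = {
        "chr1:1000": {"A": [5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6],   # clonal pile-up
                      "G": [1, 2, 3, 4, 7, 8, 9, 11, 12, 13]},
        "chr1:2000": {"A": list(range(0, 24, 2)),                   # diverse starts
                      "G": list(range(1, 25, 2))},
    }
    print(flag_biased_sites(toy_sites))
```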
  • 45
    Publication Date: 2014-01-16
    Description: Motivation: Recently, investigators have proposed state-of-the-art identity-by-descent (IBD) mapping methods to detect IBD segments between purportedly unrelated individuals. The IBD information can then be used for association testing in genetic association studies. One approach for this IBD association testing strategy is to test for excessive IBD between pairs of cases (‘pairwise method’). However, this approach is inefficient because it requires a large number of permutations. Moreover, a limited number of permutations defines a lower bound for P-values, which makes fine-mapping of associated regions difficult because, in practice, a much larger genomic region is implicated than the region that is actually associated. Results: In this article, we introduce a new pairwise method, ‘Fast-Pairwise’. Fast-Pairwise uses importance sampling to improve efficiency and enable approximation of extremely small P-values. Fast-Pairwise takes only days to complete a genome-wide scan. In the application to the WTCCC type 1 diabetes data, Fast-Pairwise successfully fine-maps a human leukocyte antigen gene known to cause the disease. Availability: Fast-Pairwise is publicly available at: http://genetics.cs.ucla.edu/graphibd . Contact: eeskin@cs.ucla.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 46
    Publication Date: 2014-01-16
    Description: Motivation: Measurements are commonly taken from two phenotypes to build a classifier, where the number of data points from each class is predetermined, not random. In this ‘separate sampling’ scenario, the data cannot be used to estimate the class prior probabilities. Moreover, predetermined class sizes can severely degrade classifier performance, even for large samples. Results: We employ simulations using both synthetic and real data to show the detrimental effect of separate sampling on a variety of classification rules. We establish propositions related to the effect on the expected classifier error owing to a sampling ratio different from the population class ratio. From these we derive a sample-based minimax sampling ratio and provide an algorithm for approximating it from the data. We also extend to arbitrary distributions the classical population-based Anderson linear discriminant analysis minimax sampling ratio derived from the discriminant form of the Bayes classifier. Availability: All the codes for synthetic data and real data examples are written in MATLAB. A function called mmratio, whose output is an approximation of the minimax sampling ratio of a given dataset, is also written in MATLAB. All the codes are available at: http://gsp.tamu.edu/Publications/supplementary/shahrokh13b . Contact: edward@ece.tamu.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
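Entry 46 concerns choosing a sampling ratio when class sizes are fixed by design. As a loose illustration (not the authors' mmratio function), the sketch below grid-searches the class-0 sampling fraction for a two-Gaussian problem, trains a simple linear discriminant at each fraction, and keeps the fraction minimizing the larger of the two class-conditional error rates, one way of reading a minimax criterion. The distributions, sample sizes and grid are assumptions.

```python
# Illustrative sketch: pick a sampling ratio that minimizes the maximum of the
# two class-conditional error rates for a simple two-Gaussian LDA-style problem.
import numpy as np

rng = np.random.default_rng(0)
MU0, MU1, SIGMA, N_TRAIN, N_TEST = -1.0, 1.0, 1.5, 200, 20000

def lda_threshold(x0, x1):
    """Midpoint rule for equal-variance 1-D linear discriminant, equal priors."""
    return 0.5 * (x0.mean() + x1.mean())

def conditional_errors(ratio):
    """Train with n0 = ratio * N_TRAIN points from class 0 and estimate the
    error rate on each class separately from large held-out samples."""
    n0 = max(2, int(round(ratio * N_TRAIN)))
    n1 = max(2, N_TRAIN - n0)
    x0 = rng.normal(MU0, SIGMA, n0)
    x1 = rng.normal(MU1, SIGMA, n1)
    t = lda_threshold(x0, x1)
    e0 = np.mean(rng.normal(MU0, SIGMA, N_TEST) > t)   # class 0 misclassified
    e1 = np.mean(rng.normal(MU1, SIGMA, N_TEST) <= t)  # class 1 misclassified
    return e0, e1

if __name__ == "__main__":
    grid = np.linspace(0.1, 0.9, 17)
    worst = [max(conditional_errors(r)) for r in grid]
    best = grid[int(np.argmin(worst))]
    print(f"approximate minimax sampling ratio (class 0 fraction): {best:.2f}")
```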
  • 47
    Publication Date: 2014-01-16
    Description: Motivation:  Expression vectors used in different biotechnology applications are designed with domain-specific rules. For instance, promoters, origins of replication or homologous recombination sites are host-specific. Similarly, chromosomal integration or viral delivery of an expression cassette imposes specific structural constraints. As de novo gene synthesis and synthetic biology methods permeate many biotechnology specialties, the design of application-specific expression vectors becomes the new norm. In this context, it is desirable to formalize vector design strategies applicable in different domains. Results:  Using the design of constructs to express genes in the chloroplast of Chlamydomonas reinhardtii as an example, we show that a vector design strategy can be formalized as a domain-specific language. We have developed a graphical editor of context-free grammars usable by biologists without prior exposure to language theory. This environment makes it possible for biologists to iteratively improve their design strategies throughout the course of a project. It is also possible to ensure that vectors designed with early iterations of the language are consistent with the latest iteration of the language. Availability and implementation:  The context-free grammar editor is part of the GenoCAD application. A public instance of GenoCAD is available at http://www.genocad.org . GenoCAD source code is available from SourceForge and licensed under the Apache v2.0 open source license. Contact:   peccoud@vt.edu Supplementary Information:   Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
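Entry 47 formalizes vector design strategies as a domain-specific, context-free grammar. Below is a toy sketch of the idea (independent of GenoCAD itself): a few production rules describe an expression cassette, and a recursive recognizer checks whether a given sequence of part categories can be derived from the grammar. The grammar symbols and parts are invented for illustration.

```python
# Illustrative sketch: a toy context-free grammar for an expression cassette
# and a naive recognizer that checks whether a list of part categories can be
# derived from the start symbol.
GRAMMAR = {
    "Cassette": [["Promoter", "CDS", "Terminator"],
                 ["Promoter", "CDS", "Tag", "Terminator"]],
    "Promoter": [["constitutive_promoter"], ["inducible_promoter"]],
    "CDS":      [["gene_of_interest"]],
    "Tag":      [["affinity_tag"]],
    "Terminator": [["terminator"]],
}

def derives(symbols, tokens):
    """True if the sequence of grammar `symbols` can produce exactly `tokens`."""
    if not symbols:
        return not tokens
    head, rest = symbols[0], symbols[1:]
    if head not in GRAMMAR:                      # terminal part category
        return bool(tokens) and tokens[0] == head and derives(rest, tokens[1:])
    return any(derives(list(prod) + rest, tokens) for prod in GRAMMAR[head])

if __name__ == "__main__":
    design_ok  = ["inducible_promoter", "gene_of_interest", "affinity_tag", "terminator"]
    design_bad = ["gene_of_interest", "inducible_promoter", "terminator"]
    print(derives(["Cassette"], design_ok))    # True
    print(derives(["Cassette"], design_bad))   # False
```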
  • 48
    Publication Date: 2014-01-16
    Description: Motivation: Homology search methods are dominated by the central paradigm that sequence similarity is a proxy for common ancestry and, by extension, functional similarity. For determining sequence similarity in proteins, most widely used methods use models of sequence evolution and compare amino-acid strings in search for conserved linear stretches. Probabilistic models or sequence profiles capture the position-specific variation in an alignment of homologous sequences and can identify conserved motifs or domains. While profile-based search methods are generally more accurate than simple sequence comparison methods, they tend to be computationally more demanding. In recent years, several methods have emerged that perform protein similarity searches based on domain composition. However, few methods have considered the linear arrangements of domains when conducting similarity searches, despite strong evidence that domain order can harbour considerable functional and evolutionary signal. Results: Here, we introduce an alignment scheme that uses a classical dynamic programming approach to the global alignment of domains. We illustrate that representing proteins as strings of domains (domain arrangements) and comparing these strings globally allows for a both fast and sensitive homology search. Further, we demonstrate that the presented methods complement existing methods by finding similar proteins missed by popular amino-acid–based comparison methods. Availability: An implementation of the presented algorithms, a web-based interface as well as a command-line program for batch searching against the UniProt database can be found at http://rads.uni-muenster.de . Furthermore, we provide a JAVA API for programmatic access to domain-string–based search methods. Contact: terrapon.nicolas@gmail.com or ebb@uni-muenster.de Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
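Entry 48 aligns proteins represented as strings of domains with classical dynamic programming. The following sketch (not the authors' implementation) is a plain Needleman–Wunsch global alignment over domain identifiers with a simple match/mismatch/gap scheme; the scoring values and example arrangements are assumptions.

```python
# Illustrative sketch: global (Needleman-Wunsch) alignment of two proteins
# represented as ordered strings of domain identifiers.
def align_domains(a, b, match=2, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + sub,   # (mis)match
                              score[i - 1][j] + gap,       # gap in b
                              score[i][j - 1] + gap)       # gap in a
    return score[n][m]

if __name__ == "__main__":
    # Hypothetical Pfam-style domain arrangements of two proteins.
    protein_a = ["PF00069", "PF07714", "PF00017"]
    protein_b = ["PF00069", "PF00017"]
    print("alignment score:", align_domains(protein_a, protein_b))
```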
  • 49
    Publication Date: 2014-01-16
    Description: Motivation: DNA enrichment followed by sequencing is a versatile tool in molecular biology, with a wide variety of applications including genome-wide analysis of epigenetic marks and mechanisms. A common requirement of these diverse applications is a comparison of read coverage between experimental conditions. The amount of samples generated for such comparisons ranges from few replicates to hundreds of samples per condition for epigenome-wide association studies. Consequently, there is an urgent need for software that allows for fast and simple processing and comparison of sequencing data derived from enriched DNA. Results: Here, we present a major update of the R/Bioconductor package MEDIPS, which allows for an arbitrary number of replicates per group and integrates sophisticated statistical methods for the detection of differential coverage between experimental conditions. Our approach can be applied to a diversity of quantitative sequencing data. In addition, our update adds novel functionality to MEDIPS, including correlation analysis between samples, and takes advantage of Bioconductor’s annotation databases to facilitate annotation of specific genomic regions. Availability and implementation: The latest version of MEDIPS is available as version 1.12.0 and part of Bioconductor 2.13. The package comes with a manual containing detailed description of its functionality and is available at http://www.bioconductor.org . Contact: lienhard@molgen.mpg.de Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 50
    Publication Date: 2014-01-16
    Description: Motivation:  Most methods for estimating differential expression from RNA-seq are based on statistics that compare normalized read counts between treatment classes. Unfortunately, reads are in general too short to be mapped unambiguously to features of interest, such as genes, isoforms or haplotype-specific isoforms. There are methods for estimating expression levels that account for this source of ambiguity. However, the uncertainty is not generally accounted for in downstream analysis of gene expression experiments. Moreover, at the individual transcript level, it can sometimes be too large to allow useful comparisons between treatment groups. Results:  In this article we make two proposals that improve the power, specificity and versatility of expression analysis using RNA-seq data. First, we present a Bayesian method for model selection that accounts for read mapping ambiguities using random effects. This polytomous model selection approach can be used to identify many interesting patterns of gene expression and is not confined to detecting differential expression between two groups. For illustration, we use our method to detect imprinting, different types of regulatory divergence in cis and in trans and differential isoform usage, but many other applications are possible. Second, we present a novel collapsing algorithm for grouping transcripts into inferential units that exploits the posterior correlation between transcript expression levels. The aggregate expression levels of these units can be estimated with useful levels of uncertainty. Our algorithm can improve the precision of expression estimates when uncertainty is large with only a small reduction in biological resolution. Availability and implementation:  We have implemented our software in the mmdiff and mmcollapse multithreaded C++ programs as part of the open-source MMSEQ package, available on https://github.com/eturro/mmseq . Contact:   et341@cam.ac.uk Supplementary information:   Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
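Entry 50 collapses transcripts into inferential units using the posterior correlation between their expression estimates. As a rough sketch of that idea (not the mmcollapse program), the code below greedily merges the pair of units whose expression levels are most strongly negatively correlated across posterior samples, stopping when no correlation falls below a threshold; the threshold and toy posterior draws are assumptions.

```python
# Illustrative sketch: greedily merge transcripts whose posterior expression
# samples are strongly anti-correlated, so the merged unit is better estimated.
import numpy as np

def collapse(samples, names, corr_threshold=-0.6):
    """`samples` is an (n_units, n_posterior_draws) array; units are summed when
    merged. Returns the list of collapsed unit names and the final array."""
    samples = samples.copy()
    names = [[n] for n in names]
    while samples.shape[0] > 1:
        corr = np.corrcoef(samples)
        np.fill_diagonal(corr, 1.0)                 # ignore self-correlation
        i, j = np.unravel_index(np.argmin(corr), corr.shape)
        if corr[i, j] > corr_threshold:
            break
        samples[i] += samples[j]                    # aggregate expression
        names[i] += names[j]
        samples = np.delete(samples, j, axis=0)
        del names[j]
    return names, samples

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    total = rng.gamma(50, 1, size=500)              # well-estimated total
    split = rng.beta(2, 2, size=500)                # ambiguous split between t1/t2
    t1, t2 = total * split, total * (1 - split)
    t3 = rng.gamma(30, 1, size=500)                 # independent transcript
    groups, _ = collapse(np.vstack([t1, t2, t3]), ["t1", "t2", "t3"])
    print(groups)   # expect t1 and t2 collapsed into one unit
```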
  • 51
    Publication Date: 2014-01-16
    Description: Motivation:  Nucleotide sequence data are being produced at an ever increasing rate. Clustering such sequences by similarity is often an essential first step in their analysis—intended to reduce redundancy, define gene families or suggest taxonomic units. Exact clustering algorithms, such as hierarchical clustering, scale relatively poorly in terms of run time and memory usage, yet they are desirable because heuristic shortcuts taken during clustering might have unintended consequences in later analysis steps. Results:  Here we present HPC-CLUST, a highly optimized software pipeline that can cluster large numbers of pre-aligned DNA sequences by running on distributed computing hardware. It allocates both memory and computing resources efficiently, and can process more than a million sequences in a few hours on a small cluster. Availability and implementation:  Source code and binaries are freely available at http://meringlab.org/software/hpc-clust/ ; the pipeline is implemented in C++ and uses the Message Passing Interface (MPI) standard for distributed computing. Contact:  mering@imls.uzh.ch Supplementary Information:  Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
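Entry 51 clusters pre-aligned nucleotide sequences hierarchically. A single-machine toy sketch of the same basic computation (ignoring the MPI distribution that HPC-CLUST actually provides) is given below: pairwise distances are the fraction of mismatching alignment columns, followed by standard average-linkage clustering with SciPy. The example sequences and cut-off are assumptions.

```python
# Illustrative sketch: cluster pre-aligned sequences by pairwise mismatch
# distance using average-linkage hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def mismatch_distance(a, b):
    """Fraction of alignment columns where the two aligned sequences differ."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def cluster_aligned(seqs, cutoff=0.2):
    n = len(seqs)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = mismatch_distance(seqs[i], seqs[j])
    tree = linkage(squareform(dist), method="average")
    return fcluster(tree, t=cutoff, criterion="distance")

if __name__ == "__main__":
    aligned = ["ACGTACGTAC",
               "ACGTACGTAA",   # one mismatch to the first sequence
               "TTGTACGGAC",   # more distant
               "TTGTACGGAT"]
    print(cluster_aligned(aligned))   # e.g. [1 1 2 2]
```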
  • 52
    Publication Date: 2014-01-16
    Description: : High-throughput technologies have led to an explosion of genomic data available for automated analysis. The consequent possibility to simultaneously sample multiple layers of variation along the gene expression flow requires computational methods integrating raw information from different ‘-omics’. It has been recently demonstrated that translational control is a widespread phenomenon, with profound and still underestimated regulation capabilities. Although detecting changes in the levels of total messenger RNAs (mRNAs; the transcriptome), of polysomally loaded mRNAs (the translatome) and of proteins (the proteome) is experimentally feasible in a high-throughput way, the integration of these levels is still far from being robustly approached. Here we introduce tRanslatome, a new R/Bioconductor package, which is a complete platform for the simultaneous pairwise analysis of transcriptome, translatome and proteome data. The package includes most of the available statistical methods developed for the analysis of high-throughput data, allowing the parallel comparison of differentially expressed genes and the corresponding differentially enriched biological themes. Notably, it also enables the prediction of translational regulatory elements on mRNA sequences. The utility of this tool is demonstrated with two case studies. Availability and implementation: tRanslatome is available in Bioconductor. Contact : t.tebaldi@unitn.it Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 53
    Publication Date: 2014-01-16
    Description: : DoMosaics is an application that unifies protein domain annotation, domain arrangement analysis and visualization in a single tool. It simplifies the analysis of protein families by consolidating disjunct procedures based on often inconvenient command-line applications and complex analysis tools. It provides a simple user interface with access to domain annotation services such as InterProScan or a local HMMER installation, and can be used to compare, analyze and visualize the evolution of domain architectures. Availability and implementation: DoMosaics is licensed under the Apache License, Version 2.0, and binaries can be freely obtained from www.domosaics.net . Contact: radmoore@uni-muenster.de or e.bornberg@uni-muenster.de
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 54
    Publication Date: 2014-01-16
    Description: Motivation: A common problem in understanding a biochemical system is to infer its correct structure or topology. This topology consists of all relevant state variables—usually molecules and their interactions. Here we present a method called topological augmentation to infer this structure in a statistically rigorous and systematic way from prior knowledge and experimental data. Results: Topological augmentation starts from a simple model that is unable to explain the experimental data and augments its topology by adding new terms that capture the experimental behavior. This process is guided by representing the uncertainty in the model topology through stochastic differential equations whose trajectories contain information about missing model parts. We first apply this semiautomatic procedure to a pharmacokinetic model. This example illustrates that a global sampling of the parameter space is critical for inferring a correct model structure. We also use our method to improve our understanding of glutamine transport in yeast. This analysis shows that transport dynamics is determined by glutamine permeases with two different kinds of kinetics. Topological augmentation can not only be applied to biochemical systems, but also to any system that can be described by ordinary differential equations. Availability and implementation: Matlab code and examples are available at: http://www.csb.ethz.ch/tools/index . Contact: mikael.sunnaker@bsse.ethz.ch ; andreas.wagner@ieu.uzh.ch Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 55
    Publication Date: 2014-01-16
    Description: Assembling and/or producing integrated knowledge of sequence features continues to be an onerous and redundant task despite a large number of existing resources. We have developed SeqDepot—a novel database that focuses solely on two primary goals: (i) assimilating known primary sequences with predicted feature data and (ii) providing the most simple and straightforward means to procure and readily use this information. Access to >28.5 million sequences and 300 million features is provided through a well-documented and flexible RESTful interface that supports fetching specific data subsets, bulk queries, visualization and searching by MD5 digests or external database identifiers. We have also developed an HTML5/JavaScript web application exemplifying how to interact with SeqDepot and Perl/Python scripts for use with local processing pipelines. Availability: Freely available on the web at http://seqdepot.net/ . REST access via http://seqdepot.net/api/v1 . Database files and scripts may be downloaded from http://seqdepot.net/download . Contact: ulrich.luke+sci@gmail.com
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 56
    Published by Oxford University Press
    Publication Date: 2014-01-16
    Description: Motivation: Microarray data analysis is often applied to characterize disease populations by identifying individual genes linked to the disease. In recent years, efforts have shifted to focus on sets of genes known to perform related biological functions (i.e. in the same pathways). Evaluating gene sets reduces the need to correct for false positives in multiple hypothesis testing. However, pathways are often large, and genes in the same pathway that do not contribute to the disease can cause a method to miss the pathway. In addition, large pathways may not give much insight to the cause of the disease. Moreover, when such a method is applied independently to two datasets of the same disease phenotypes, the two resulting lists of significant pathways often have low agreement. Results: We present a powerful method, PFSNet, that identifies smaller parts of pathways (which we call subnetworks), and show that significant subnetworks (and the genes therein) discovered by PFSNet are up to 51% (64%) more consistent across independent datasets of the same disease phenotypes, even for datasets based on different platforms, than previously published methods. We further show that those methods which initially declared some large pathways to be insignificant would declare subnetworks detected by PFSNet in those large pathways to be significant, if they were given those subnetworks as input instead of the entire large pathways. Availability: http://compbio.ddns.comp.nus.edu.sg:8080/pfsnet/ Contact: kevinl@comp.nus.edu.sg Supplementary Information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 57
    Publication Date: 2014-01-16
    Description: :  Pathway Commons is a resource permitting simultaneous queries of multiple pathway databases. However, there is no standard mechanism for using these data (stored in BioPAX format) to annotate and build quantitative mathematical models. Therefore, we developed a new module within the virtual cell modeling and simulation software. It provides pathway data retrieval and visualization and enables automatic creation of executable network models directly from qualitative connections between pathway nodes. Availability and implementation:  Available at Virtual Cell ( http://vcell.org/ ). Application runs on all major platforms and does not require registration for use on the user’s computer. Tutorials and video are available at user guide page. Contact:   vcell_support@uchc.edu
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 58
    Publication Date: 2014-01-16
    Description: : myChEMBL is a completely open platform, which combines public domain bioactivity data with open source database and cheminformatics technologies. myChEMBL consists of a Linux (Ubuntu) Virtual Machine featuring a PostgreSQL schema with the latest version of the ChEMBL database, as well as the latest RDKit cheminformatics libraries. In addition, a self-contained web interface is available, which can be modified and improved according to user specifications. Availability and implementation: The VM is available at: ftp://ftp.ebi.ac.uk/pub/databases/chembl/VM/myChEMBL/current . The web interface and web services code is available at: https://github.com/rochoa85/myChEMBL . Contact: jpo@ebi.ac.uk
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 59
    Publication Date: 2014-01-16
    Description: Motivation: The identification of cell cycle-regulated genes through the cyclicity of messenger RNAs in genome-wide studies is a difficult task due to the presence of internal and external noise in microarray data. Moreover, the analysis is also complicated by the loss of synchrony occurring in cell cycle experiments, which often results in additional background noise. Results: To overcome these problems, here we propose the LEON (LEarning and OptimizatioN) algorithm, able to characterize the ‘cyclicity degree’ of a gene expression time profile using a two-step cascade procedure. The first step identifies a potentially cyclic behavior by means of a Support Vector Machine trained with a reliable set of positive and negative examples. The second step selects those genes having peak timing consistency along two cell cycles by means of a non-linear optimization technique using radial basis functions. To prove the effectiveness of our combined approach, we use recently published human fibroblasts cell cycle data and, performing in vivo experiments, we demonstrate that our computational strategy is able not only to confirm well-known cell cycle-regulated genes, but also to predict not yet identified ones. Availability and implementation: All scripts for implementation can be obtained on request. Contact: lorenzo.farina@uniroma1.it or gurtner@ifo.it Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 60
    Publication Date: 2014-01-16
    Description: Motivation: RNA-seq technology has been widely adopted as an attractive alternative to microarray-based methods to study global gene expression. However, robust statistical tools to analyze these complex datasets are still lacking. By grouping genes with similar expression profiles across treatments, cluster analysis provides insight into gene functions and networks, and hence is an important technique for RNA-seq data analysis. Results: In this manuscript, we derive clustering algorithms based on appropriate probability models for RNA-seq data. An expectation-maximization algorithm and another two stochastic versions of expectation-maximization algorithms are described. In addition, a strategy for initialization based on likelihood is proposed to improve the clustering algorithms. Moreover, we present a model-based hybrid-hierarchical clustering method to generate a tree structure that allows visualization of relationships among clusters as well as flexibility of choosing the number of clusters. Results from both simulation studies and analysis of a maize RNA-seq dataset show that our proposed methods provide better clustering results than alternative methods such as the K-means algorithm and hierarchical clustering methods that are not based on probability models. Availability and implementation: An R package, MBCluster.Seq, has been developed to implement our proposed algorithms. This R package provides fast computation and is publicly available at http://www.r-project.org . Contact: sy@swufe.edu.cn ; pliu@iastate.edu Supplementary Information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
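Entry 60 derives EM-based clustering for RNA-seq counts. The sketch below is a bare-bones EM for a Poisson mixture over gene count profiles (a much-simplified relative of the models behind MBCluster.Seq, which is an R package); the number of clusters, simulated counts, initialization and convergence rule are assumptions.

```python
# Illustrative sketch: EM for clustering gene expression count profiles with a
# mixture of independent Poisson distributions (one mean vector per cluster).
import numpy as np
from scipy.special import gammaln, logsumexp

def poisson_mixture_em(counts, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = counts.shape
    # Farthest-point initialization of the k cluster mean profiles.
    centers = [counts[rng.integers(n)]]
    for _ in range(1, k):
        dist = np.min([np.abs(counts - c).sum(axis=1) for c in centers], axis=0)
        centers.append(counts[int(np.argmax(dist))])
    means = np.array(centers, dtype=float) + 0.5
    weights = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: log responsibilities under independent Poisson likelihoods.
        loglik = (counts[:, None, :] * np.log(means[None]) - means[None]
                  - gammaln(counts[:, None, :] + 1)).sum(axis=2)
        log_r = np.log(weights)[None] + loglik
        log_r -= logsumexp(log_r, axis=1, keepdims=True)
        r = np.exp(log_r)
        # M-step: update mixing weights and cluster mean profiles.
        weights = r.mean(axis=0)
        means = (r.T @ counts + 0.5) / (r.sum(axis=0)[:, None] + 1.0)
    return r.argmax(axis=1), means

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    up = rng.poisson([5, 5, 50, 50], size=(100, 4))      # induced genes
    down = rng.poisson([50, 50, 5, 5], size=(100, 4))    # repressed genes
    labels, _ = poisson_mixture_em(np.vstack([up, down]), k=2)
    print(np.bincount(labels))   # expect two clusters of ~100 genes each
```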
  • 61
    Publication Date: 2014-01-16
    Description: Motivation: Modern biomedical and epidemiological studies often measure hundreds or thousands of biomarkers, such as gene expression or metabolite levels. Although there is an extensive statistical literature on adjusting for ‘multiple comparisons’ when testing whether these biomarkers are directly associated with a disease, testing whether they are biological mediators between a known risk factor and a disease requires a more complex null hypothesis, thus offering additional methodological challenges. Results: We propose a permutation approach that tests multiple putative mediators and controls the family-wise error rate. We demonstrate that, unlike when testing direct associations, replacing the Bonferroni correction with a permutation approach that focuses on the maximum of the test statistics can significantly improve the power to detect mediators even when all biomarkers are independent. Through simulations, we show the power of our method is 2–5x larger than the power achieved by Bonferroni correction. Finally, we apply our permutation test to a case-control study of dietary risk factors and colorectal adenoma to show that, of 149 test metabolites, docosahexaenoate is a possible mediator between fish consumption and decreased colorectal adenoma risk. Availability and implementation: R-package included in online Supplementary Material. Contact: joshua.sampson@nih.gov Supplementary information: Supplementary materials are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
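Entry 61 controls the family-wise error rate by permuting and taking the maximum test statistic over all putative mediators. Setting aside the mediation-specific null hypothesis, the sketch below shows only the generic "maxT" permutation machinery for a battery of two-sample statistics; the group labels, effect sizes and statistic are placeholder assumptions, not the authors' procedure.

```python
# Illustrative sketch: family-wise error control by permuting group labels and
# comparing each observed statistic with the permutation distribution of the
# maximum statistic across all biomarkers (the "maxT" idea).
import numpy as np

def max_t_pvalues(x, groups, n_perm=2000, seed=0):
    """x: (n_samples, n_biomarkers); groups: 0/1 labels. Returns adjusted p-values."""
    rng = np.random.default_rng(seed)

    def stats(labels):
        a, b = x[labels == 0], x[labels == 1]
        se = np.sqrt(a.var(axis=0, ddof=1) / len(a) + b.var(axis=0, ddof=1) / len(b))
        return np.abs(a.mean(axis=0) - b.mean(axis=0)) / se

    observed = stats(groups)
    max_null = np.empty(n_perm)
    for p in range(n_perm):
        max_null[p] = stats(rng.permutation(groups)).max()
    # Adjusted p-value: how often the permutation maximum exceeds each observed statistic.
    return (1 + (max_null[None, :] >= observed[:, None]).sum(axis=1)) / (n_perm + 1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    groups = np.repeat([0, 1], 50)
    x = rng.normal(size=(100, 20))
    x[groups == 1, 0] += 1.5          # only biomarker 0 truly differs
    p_adj = max_t_pvalues(x, groups)
    print(np.where(p_adj < 0.05)[0])  # expect [0]
```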
  • 62
    Publication Date: 2014-01-16
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 63
    Publication Date: 2014-01-16
    Description: Segerberg's Dynamic Deontic Logic is a dynamic logic where among the set of all possible histories those fulfilling the norms are distinguished. An extension of this logic to obligations (respectively permissions and prohibitions) to do an action before a given deadline or during a given time interval is defined. These temporal constraints are defined by events which may have several occurrences (like the obligation to update a given file before midnight). Violations of these kinds of norms are defined in this logical framework.
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 64
    Publication Date: 2014-01-16
    Description: In this article we show how to model a range of notions in the context of delegation and revocation applied to security scenarios. We demonstrate how a range of delegation–revocation models and policies may be represented in pictorial form and formally represented in terms of reactive Kripke models and a first-order policy specification language. We translate first-order representations of our reactive Kripke models into an equivalent Answer Set Programming form that enables users to apply flexibly well-defined definitions of predicates to represent their requirements in terms of delegation–revocation policy specification.
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 65
    Publication Date: 2014-01-16
    Description: Error-driven ranking algorithms (EDRAs) perform a sequence of slight re-rankings of the constraint set triggered by mistakes on the incoming stream of data. The sequence of rankings entertained by the algorithm (and in particular the final ranking entertained at convergence) depends not only on the grammar the algorithm is trained on, but also on the specific way data are sampled from that grammar and fed to the algorithm. The robust analysis of EDRAs pinpoints properties of the predicted sequence of rankings that are robust, namely those that depend only on the target grammar, not on the way the data are sampled from it. This article reviews in detail Tesar and Smolensky's (1998, Linguist. Inq., 29, 229–268) robust analysis of EDRAs that perform constraint demotion only, but no constraint promotion. This article then develops a new tool for the robust analysis of EDRAs that perform both constraint demotion and promotion. The latter tool is applied to the robust analysis of the EDRA model of the child's early acquisition of phonotactics, through a detailed discussion of restrictiveness on three case studies from Prince and Tesar (2004, Constraints in Phonological Acquisition, 245–291), which crucially require EDRAs that perform both demotion and promotion.
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 66
    Publication Date: 2014-01-16
    Description: Modern software systems usually deal with several sorts (types) of data elements simultaneously. Some of these sorts, like integers, booleans, and so on, can be seen as having an immediate, direct nature and therefore are called visible , and they are contrasted with the others, like types of objects (in object-oriented (OO) sense), which are called hidden sorts. A language used to specify such software system has to be heterogeneous. In addition, to reason about such computations, we have to consider k -tuples of formulas (for instance, pairs in equational reasoning). Consequently, a consequence relation used to specify and verify the properties of those systems must relate sorted sets of k -formulas with individual k -formulas. Logics usually employed in this process are called hidden k-logics and are very general in nature: they comprise several classes of logical systems, including the 2-dimensional hidden and standard equational logics, and Boolean logic. In this article, we propose a generalization of the notion of deduction-detachment system for hidden k -logics. We introduce a syntactic notion of translation, which will be used to define an equivalence relation between hidden k -logics. We show that this notion of equivalence preserves some logical properties, namely the deduction-detachment theorem (DDT) and the Craig interpolation property. We also show that if a specifiable hidden k -logic admits the DDT then it admits a presentation whose only inference rules are the generalized modus ponens rules with respect to the deduction-detachment system.
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 67
    Publication Date: 2014-01-16
    Description: Motivation: For samples of unrelated individuals, we propose a general analysis framework in which hundred thousands of genetic loci can be tested simultaneously for association with complex phenotypes. The approach is built on spatial-clustering methodology, assuming that genetic loci that are associated with the target phenotype cluster in certain genomic regions. In contrast to standard methodology for multilocus analysis, which has focused on the dimension reduction of the data, our multilocus association-clustering test profits from the availability of large numbers of genetic loci by detecting clusters of loci that are associated with the phenotype. Results: The approach is computationally fast and powerful, enabling the simultaneous association testing of large genomic regions. Even the entire genome or certain chromosomes can be tested simultaneously. Using simulation studies, the properties of the approach are evaluated. In an application to a genome-wide association study for chronic obstructive pulmonary disease, we illustrate the practical relevance of the proposed method by simultaneously testing all genotyped loci of the genome-wide association study and by testing each chromosome individually. Our findings suggest that statistical methodology that incorporates spatial-clustering information will be especially useful in whole-genome sequencing studies in which millions or billions of base pairs are recorded and grouped by genomic regions or genes, and are tested jointly for association. Availability and implementation: Implementation of the approach is available upon request. Contact : daq412@mail.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 68
    Published by Oxford University Press
    Publication Date: 2014-01-16
    Description: ECTL is an extension of the computation tree logic (CTL) with two operators GF and FG, where GFφ and FGφ represent ‘there is a path along which φ holds infinitely often’ and ‘along any path, there exists a state after which φ always holds’, respectively. A Hilbert-style axiomatization of ECTL is defined by adding the schemata G(φ → ψ) → (GFφ → GFψ), GFφ ↔ F(X GFφ), G(φ → XFφ) → (φ → GFφ) and FGφ ↔ ¬GF¬φ to the axioms of CTL. We prove its soundness and completeness with respect to arbitrary and finite models, i.e. equivalence of the following three conditions: (i) φ is provable in this axiomatization of ECTL; (ii) φ is valid in any model; (iii) φ is valid in any finite model.
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    Publication Date: 2014-01-16
    Description: Gurevich and Neeman introduced Distributed Knowledge Authorization Language (DKAL). The world of DKAL consists of communicating principals computing their own knowledge in their own states. DKAL is based on a new logic of information, the so-called infon logic , and its efficient subsystem called primal logic . In this article, we simplify Kripkean semantics of primal logic and study various extensions of it in search to balance expressivity and efficiency. On the proof-theoretic side we develop cut-free Gentzen-style sequent calculi for the original primal logic and its extensions.
    Print ISSN: 0955-792X
    Electronic ISSN: 1465-363X
    Topics: Computer Science , Mathematics
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    Publication Date: 2014-01-22
    Description: The iteratively reweighted least squares (IRLS) method is largely identical to the maximum likelihood (ML) method in terms of parameter estimation and power of quantitative trait locus (QTL) detection, but IRLS is greatly superior to ML in terms of computing speed and robustness of parameter estimation. In conjunction with priors on the parameters, ML can analyze a multiple-QTL model based on Bayesian theory, whereas under a single-QTL model, IRLS has very limited statistical power to detect multiple QTLs. In this study, we propose the iteratively reweighted least absolute shrinkage and selection operator (IRLASSO) to extend IRLS to simultaneously map multiple QTLs. The LASSO with a coordinate descent step is employed to efficiently estimate the non-zero genetic effect of each locus scanned over the entire genome. Simulations demonstrate that IRLASSO has higher precision of parameter estimation and greater power to detect QTLs than IRLS, and is able to estimate residual variance more accurately than the unweighted LASSO based on LS. In particular, IRLASSO is very fast, usually taking fewer than five iterations to converge. The barley dataset from the North American Barley Genome Mapping Project is reanalyzed by our proposed method.
    Print ISSN: 1467-5463
    Electronic ISSN: 1477-4054
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
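Entry 70 combines iterative reweighting with a coordinate-descent LASSO. As a generic illustration of the coordinate-descent ingredient only (not the IRLASSO procedure itself), the sketch below solves a weighted LASSO by cyclic coordinate descent with soft-thresholding; the weights, penalty and simulated marker data are assumptions.

```python
# Illustrative sketch: weighted LASSO regression solved by cyclic coordinate
# descent with soft-thresholding updates.
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * max(abs(z) - t, 0.0)

def weighted_lasso_cd(X, y, w, lam, n_iter=200):
    """Minimize 0.5 * sum_i w_i (y_i - X_i beta)^2 + lam * ||beta||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y - X @ beta
    col_norm = np.array([(w * X[:, j] ** 2).sum() for j in range(p)])
    for _ in range(n_iter):
        for j in range(p):
            resid += X[:, j] * beta[j]                # remove j's contribution
            rho = (w * X[:, j] * resid).sum()
            beta[j] = soft_threshold(rho, lam) / col_norm[j]
            resid -= X[:, j] * beta[j]                # add updated contribution
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.choice([0.0, 1.0, 2.0], size=(200, 50))   # toy marker genotypes
    true_beta = np.zeros(50)
    true_beta[[3, 17]] = [1.5, -2.0]
    y = X @ true_beta + rng.normal(scale=0.5, size=200)
    w = np.ones(200)                                   # residual-based in IRLS-type schemes
    beta_hat = weighted_lasso_cd(X, y, w, lam=20.0)
    print(np.nonzero(np.abs(beta_hat) > 0.1)[0])       # expect markers 3 and 17
```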
  • 71
    Publication Date: 2014-01-22
    Description: The formation of phenotypic traits, such as biomass production, tumor volume and viral abundance, undergoes a complex process in which interactions between genes and developmental stimuli take place at each level of biological organization from cells to organisms. Traditional studies emphasize the impact of genes by directly linking DNA-based markers with static phenotypic values. Functional mapping, derived to detect genes that control developmental processes using growth equations, has proven powerful for addressing questions about the roles of genes in development. By treating phenotypic formation as a cohesive system using differential equations, a different approach—systems mapping—dissects the system into interconnected elements and then map genes that determine a web of interactions among these elements, facilitating our understanding of the genetic machineries for phenotypic development. Here, we argue that genetic mapping can play a more important role in studying the genotype–phenotype relationship by filling the gaps in the biochemical and regulatory process from DNA to end-point phenotype. We describe a new framework, named network mapping, to study the genetic architecture of complex traits by integrating the regulatory networks that cause a high-order phenotype. Network mapping makes use of a system of differential equations to quantify the rule by which transcriptional, proteomic and metabolomic components interact with each other to organize into a functional whole. The synthesis of functional mapping, systems mapping and network mapping provides a novel avenue to decipher a comprehensive picture of the genetic landscape of complex phenotypes that underlie economically and biomedically important traits.
    Print ISSN: 1467-5463
    Electronic ISSN: 1477-4054
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    Publication Date: 2014-01-22
    Description: Traditional approaches to genetic mapping simply associate the genotypes of a quantitative trait locus (QTL) with the phenotypic variation of a complex trait. A more mechanistic strategy has emerged to dissect the trait phenotype into its structural components and map specific QTLs that control the mechanistic and structural formation of a complex trait. We describe and assess such a strategy, called structural mapping, by integrating the internal structural basis of trait formation into a QTL mapping framework. Electrical impedance spectroscopy (EIS) has been instrumental for describing the structural components of a phenotypic trait and their interactions. By building robust mathematical models on circuit EIS data and embedding these models within a mixture model-based likelihood for QTL mapping, structural mapping implements the EM algorithm to obtain maximum likelihood estimates of QTL genotype-specific EIS parameters. The uniqueness of structural mapping is that it makes it possible to test a number of hypotheses about the pattern of the genetic control of structural components. We validated structural mapping by analyzing EIS data collected for QTL mapping of frost hardiness in a controlled cross of jujube trees. The statistical properties of parameter estimates were examined by simulation studies. Structural mapping can be a powerful alternative for genetic mapping of complex traits by taking into account the biological and physical mechanisms underlying their formation.
    Print ISSN: 1467-5463
    Electronic ISSN: 1477-4054
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 73
    Published by Oxford University Press
    Publication Date: 2014-05-01
    Description: Merged processes (MPs) are a recently proposed condensed representation of a Petri net's behaviour similar to branching processes (unfoldings), which copes well not only with concurrency but also with other sources of state space explosion like sequences of choices. They are by orders of magnitude more compact than traditional unfoldings, and yet can be used for efficient model checking. However, constructing complete MPs is difficult, and the only known algorithm is based on building a (potentially much larger) complete unfolding prefix of a Petri net, whose nodes are then merged. Obviously, this significantly reduces their appeal as a representation that can be used for practical model checking. In this paper, we develop an algorithm that avoids constructing the intermediate unfolding prefix and builds a complete merged process directly from a safe Petri net. In particular, a challenging problem of truncating a merged process is solved.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 74
    Publication Date: 2014-05-01
    Description: We present an algorithm for the correction of an XML document with respect to schema constraints expressed as a document type definition. Given a well-formed XML document t seen as a tree, a schema S and a non-negative threshold th , the algorithm finds every tree t ' valid with respect to S such that the edit distance between t and t ' is no higher than th . The algorithm is based on a recursive exploration of the finite-state automata representing structural constraints imposed by the schema, as well as on the construction of an edit distance matrix storing edit sequences leading to correction trees. We prove the termination, correctness and completeness of the algorithm, as well as its exponential time complexity. We also perform experimental tests on real-life XML data showing the influence of various input parameters on the execution time and on the number of solutions found. The algorithm's implementation demonstrates polynomial rather than exponential behavior. It has been made public under the GNU LGPL v3 license. As we show in our in-depth discussion of the related work, this is the first full-fledged study of the document-to-schema correction problem.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    Publication Date: 2014-05-01
    Description: Software architecture slicing extracts the relevant part of a software architecture to provide a reference or design guide for developing a software architecture. It reduces the complexity of the requirement specifications based on a selected slicing criterion of either a component or a connector, but little effort has been made regarding the relationship between forward slicing and backward slicing analysis at the architectural level. This paper combines architecture description language semantics to build a behavior graph (BG) representing the software architecture, and proposes methods for coarse-grained software architecture slicing, which can reduce the number of components, connectors and constraints of the BG. The method is based on the relationships between the ports of components and the roles of connectors, and makes use of both forward and backward coarse-grained architecture slicing of the BG. To understand the similarities and differences between the forward and backward architecture slicing techniques, experiments were performed. Two results are obtained: first, the average percentage reduction of the backward coarse-grained architecture slice equals that of the forward coarse-grained architecture slice; second, the percentage reduction of the forward coarse-grained architecture slice cluster varies smoothly on average, whereas that of the backward coarse-grained architecture slice cluster changes more quickly and shows more extreme cases.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 76
    Publication Date: 2014-05-01
    Description: There is an increasing demand to efficiently process emerging types of queries, such as progressive queries (PQs), from contemporary database applications including telematics, e-commerce and social media. Unlike conventional queries, a PQ consists of a set of step-queries (SQ). A user formulates a new SQ on the fly based on the result(s) from the previous SQ(s). Existing database management systems were not designed to efficiently process such queries. In this paper, we present a novel technique to efficiently process a special type of PQ, called monotonic linear PQs, based on dynamically materialized views. The key idea is to create a superior relationship graph for SQs from historical PQs that can be used to estimate the benefit of keeping the current SQ result as a materialized view. The materialized views are used to improve the performance of future SQs. A new storage structure for the materialized views set is designed to facilitate efficient search for a usable view to answer a given SQ. Algorithms/strategies to efficiently construct a superior relationship graph, dynamically select materialized views, effectively manage the materialized views set and efficiently search for usable views are discussed. Experiment results demonstrate that our proposed technique is quite promising.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 77
    Publication Date: 2014-05-01
    Description: This article presents AccessedBefore (AccB), an algorithm and its associated minimal hardware support to detect data races, and compares it with two widely known and used tools: Helgrind, the data race detection tool included in the general-purpose memory checking suite Valgrind, and Intel Thread Checker, now shipped as part of Intel Thread Inspector. It provides a performance overhead evaluation using current workloads, along with an analysis of AccB's scalability with the number of threads and workload input set size. It demonstrates that AccB is in the range of 2x to 11x faster than these two tools. Finally, it gives a complete proof that AccB is complete, in that for every static data race present in a program, there exists an instruction interleaving that would expose this data race such that AccB can detect it.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 78
    Publication Date: 2014-05-01
    Description: The problems of query containment, equivalence and minimization are fundamental problems in the context of query processing and optimization. In their classic work published in 1977 [Chandra, A. and Merlin, P. (1977) Optimal Implementation of Conjunctive Queries in Relational Data Bases. Proc. ACM STOC, Boulder, CO, USA, May 4–6, pp. 77–90, ACM, USA], Chandra and Merlin solved the three problems for the language of conjunctive queries (CQ queries) on relational data, under the ‘set-semantics’ assumption for query evaluation. While the results of Chandra and Merlin (1977) have been very influential in database research, it was recognized long ago that set semantics does not correspond to the semantics of the standard commercial query language SQL (Structured Query Language). Alternative semantics, called bag and bag-set semantics, have been studied since 1993; Chaudhuri and Vardi [(1993) Optimization of Real Conjunctive Queries (Extended Abstract). Proc. PODS, Washington, DC, USA, May 25–28, pp. 59–70. ACM Press, USA] outlined necessary and sufficient conditions for the equivalence of CQ queries under these semantics. (The problems of containment of CQ bag and bag-set queries remain open to this day.) More recently, Cohen [(2006) Equivalence of Queries Combining Set and Bag-Set Semantics. Proc. PODS, Chicago, IL, USA, 26–28 June, pp. 70–79. ACM, USA; (2009) Equivalence of queries that are sensitive to multiplicities. VLDB J., 18, 765–785] introduced a formalism for treating (generalizations of) CQ queries evaluated under each of set, bag and bag-set semantics uniformly as special cases of the more general combined semantics. This formalism provides tools for studying broader classes of practical SQL queries, specifically important types of queries that arise in on-line analytical processing. Cohen (2009) provides a sufficient condition for the equivalence of (generalizations of) combined-semantics CQ queries, as well as necessary and sufficient equivalence conditions for several proper sublanguages of that query language. To the best of our knowledge, no results on minimization of CQ queries beyond set-semantics queries have been reported in the literature. Our goal in this paper is to continue the study of equivalence and minimization of CQ queries. We focus on the practically important problem of finding minimized versions of combined-semantics CQ queries. The main contribution of this paper is the extension of the minimization result of Chandra and Merlin (1977) to all combined-semantics CQ queries; we develop this result using our sufficient condition for containment of combined-semantics CQ queries [Chirkova, R. (2012) Combined-semantics equivalence is decidable for a practical class of conjunctive queries. Submitted for publication]. We also present an extension to all combined-semantics CQ queries of the well-known equivalence condition of Chandra and Merlin (1977) for CQ set-semantics queries. Similarly to the condition of Chandra and Merlin (1977), our extension is given in terms of the relationship between the minimized versions of the queries.
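    As a point of reference for the set-semantics case discussed above, the sketch below illustrates the classical Chandra–Merlin containment test: Q1 is contained in Q2 (under set semantics) exactly when there is a containment mapping (homomorphism) from Q2 into Q1. The query encoding and the brute-force search over mappings are my own toy choices; they do not reflect the paper's combined-semantics machinery.

```python
from itertools import product

# Toy check of set-semantics CQ containment via containment mappings.
# A query is (head_vars, body); body is a set of atoms (pred, arg_tuple);
# variables are strings, constants are ints.

def contained_in(q1, q2):
    """True iff q1 is contained in q2, i.e. some homomorphism maps q2 into q1."""
    head1, body1 = q1
    head2, body2 = q2
    vars2 = sorted({a for _, args in body2 for a in args if isinstance(a, str)})
    terms1 = {a for _, args in body1 for a in args} | set(head1)
    for image in product(terms1, repeat=len(vars2)):   # try every variable mapping
        h = dict(zip(vars2, image))
        f = lambda t: h.get(t, t)                      # constants map to themselves
        ok_head = [f(t) for t in head2] == list(head1)                 # heads agree
        ok_body = all((p, tuple(f(t) for t in args)) in body1 for p, args in body2)
        if ok_head and ok_body:
            return True                                # containment mapping found
    return False

# q1(x) :- r(x, y), r(y, x)   versus   q2(x) :- r(x, y)
q1 = (("x",), {("r", ("x", "y")), ("r", ("y", "x"))})
q2 = (("x",), {("r", ("x", "y"))})
print(contained_in(q1, q2))   # True: every answer of q1 is an answer of q2
print(contained_in(q2, q1))   # False
```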
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 79
    Publication Date: 2014-02-26
    Description: Motivation:  The reliable identification of genes is a major challenge in genome research, as further analysis depends on the correctness of this initial step. With high-throughput RNA-Seq data reflecting currently expressed genes, a particularly meaningful source of information has become commonly available for gene finding. However, its practical application in automated gene identification is still not the standard case. A particular challenge in including RNA-Seq data is the difficult handling of ambiguously mapped reads. Results:  We present GIIRA (Gene Identification Incorporating RNA-Seq data and Ambiguous reads), a novel prokaryotic and eukaryotic gene finder that is exclusively based on an RNA-Seq mapping and inherently includes ambiguously mapped reads. GIIRA extracts candidate regions supported by a sufficient number of mappings and reassigns ambiguous reads to their most likely origin using a maximum-flow approach. This avoids the exclusion of genes that are predominantly supported by ambiguous mappings. Evaluation on simulated and real data and comparison with existing methods incorporating RNA-Seq information highlight the accuracy of GIIRA in identifying the expressed genes. Availability and implementation:  GIIRA is implemented in Java and is available from https://sourceforge.net/projects/giira/ . Contact:   renardB@rki.de Supplementary Information: Supplementary data are available at Bioinformatics online.
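    A minimal sketch of the general idea of reassigning ambiguous reads via maximum flow follows. The graph layout, the unit capacities and the names (reassign_ambiguous, region_capacity) are illustrative assumptions of mine, not GIIRA's actual model.

```python
import networkx as nx

# Toy flow network: each ambiguous read can support one of its candidate
# regions; region capacities cap how much support a region may absorb.

def reassign_ambiguous(reads, region_capacity):
    """reads: {read_id: [candidate regions]}; region_capacity: {region: int}."""
    g = nx.DiGraph()
    for read, regions in reads.items():
        g.add_edge("source", read, capacity=1)       # each read is used at most once
        for r in regions:
            g.add_edge(read, r, capacity=1)          # read may support any candidate
    for r, cap in region_capacity.items():
        g.add_edge(r, "sink", capacity=cap)          # bound support per region
    _, flow = nx.maximum_flow(g, "source", "sink")
    assignment = {}
    for read in reads:
        for r, f in flow[read].items():
            if f > 0:
                assignment[read] = r                 # region that received this read
    return assignment

reads = {"read1": ["geneA", "geneB"], "read2": ["geneA"], "read3": ["geneB"]}
print(reassign_ambiguous(reads, {"geneA": 2, "geneB": 2}))
```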
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 80
    Publication Date: 2014-02-26
    Description: Motivation: Statistical validation of protein identifications is an important issue in shotgun proteomics. The false discovery rate (FDR) is a powerful statistical tool for evaluating the protein identification result. Several research efforts have been made for FDR estimation at the protein level. However, there are still certain drawbacks in the existing FDR estimation methods based on the target-decoy strategy. Results: In this article, we propose a decoy-free protein-level FDR estimation method. Under the null hypothesis that each candidate protein matches an identified peptide totally at random, we assign statistical significance to protein identifications in terms of the permutation P-value and use these P-values to calculate the FDR. Our method consists of three key steps: (i) generating random bipartite graphs with the same structure as the original graph; (ii) calculating the protein scores on these random graphs; and (iii) calculating the permutation P-value and final FDR. As it is time-consuming or prohibitive to execute the protein inference algorithms thousands of times in step (ii), we first train a linear regression model using the original bipartite graph and the identification scores provided by the target inference algorithm. Then we use the learned regression model as a substitute for the original protein inference method to predict protein scores on shuffled graphs. We test our method on six publicly available datasets. The results show that our method is comparable with state-of-the-art algorithms in terms of estimation accuracy. Availability: The source code of our algorithm is available at: https://sourceforge.net/projects/plfdr/ Contact: zyhe@dlut.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online.
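    The following is a minimal sketch of the generic permutation-P-value and FDR machinery the abstract refers to, not the paper's bipartite-graph shuffling or its regression shortcut; the function names and the Benjamini–Hochberg-style adjustment are my own choices.

```python
import numpy as np

# Permutation P-values from scores recomputed on shuffled graphs, followed by a
# simple BH-style FDR over those P-values (illustrative only).

def permutation_pvalues(observed, permuted):
    """observed: (n,) scores; permuted: (n_perm, n) scores on shuffled inputs."""
    exceed = (permuted >= observed[None, :]).sum(axis=0)
    return (exceed + 1) / (permuted.shape[0] + 1)        # +1 smoothing

def bh_fdr(pvals):
    order = np.argsort(pvals)
    ranked = pvals[order] * len(pvals) / (np.arange(len(pvals)) + 1)
    adj = np.minimum.accumulate(ranked[::-1])[::-1]      # enforce monotonicity
    out = np.empty_like(adj)
    out[order] = np.minimum(adj, 1.0)                    # back to original order
    return out

rng = np.random.default_rng(0)
obs = np.array([5.0, 1.2, 3.5])                          # observed protein scores
perm = rng.normal(size=(1000, 3))                        # scores on 1000 shuffles
p = permutation_pvalues(obs, perm)
print(p, bh_fdr(p))
```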
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 81
    Publication Date: 2014-02-26
    Description: Motivation:  Atomistic or coarse grained (CG) potentials derived from statistical distributions of internal variables have recently become popular due to the need of simplified interactions for reaching larger scales in simulations or more efficient conformational space sampling. However, the process of parameterization of accurate and predictive statistics-based force fields requires a huge amount of work and is prone to the introduction of bias and errors. Results:  This article introduces SecStAnT, a software for the creation and analysis of protein structural datasets with user-defined primary/secondary structure composition, with a particular focus on the CG representation. In addition, the possibility of managing different resolutions and the primary/secondary structure selectivity allow addressing the mapping-backmapping of atomistic to CG representation and study the secondary to primary structure relations. Sample datasets and distributions are reported, including interpretation of structural features. Availability and implementation:  SecStAnT is available free of charge at secstant.sourceforge.net/. Source code is freely available on request, implemented in Java and supported on Linux, MS Windows and OSX. Contact:  giuseppe.maccari@iit.it Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    Publication Date: 2014-02-28
    Description: Most previous studies need to learn a complex object model for parsing a specific object instance. This paper directly learns the general parsing patterns from the set of parsed objects and formalizes the parsing patterns as a series of parsing templates instead of learning the complex object model. Moreover, a novel hierarchical structure is presented to represent an object by using the parsing templates, which implicitly contains the multi-scale object parts and their relationships. For a single object, the parsing process is equivalent to establishing its hierarchical representation and determining the parsing template for each node. We combine the top-down decomposing scheme and the bottom-up composing scheme to infer the parsing process and formalize the inference as an energy minimization problem. The effect of our method is demonstrated by parsing the human body with aggressive pose variations. Compared with the state-of-the-art methods, the parsing results are more satisfying.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 83
    Publication Date: 2014-02-26
    Description: Motivation:  To reliably assess the effects of unknown chemicals on the development of fluorescently labeled sensory-, moto- and interneuron populations in the spinal cord of zebrafish, automated data analysis is essential. Results: For the evaluation of a high-throughput screen of a large chemical library, we developed a new method for the automated extraction of quantitative information from green fluorescent protein (eGFP) and red fluorescent protein (RFP) labeled spinal cord neurons in double-transgenic zebrafish embryos. The methodology comprises region of interest detection, intensity profiling with reference comparison and neuron distribution histograms. All methods were validated on a manually evaluated pilot study using a Notch inhibitor dose-response experiment. The automated evaluation showed superior performance to manual investigation regarding time consumption, information detail and reproducibility. Availability and implementation: As part of the GNU General Public Licence (GNU-GPL) licensed open-source MATLAB toolbox Gait-CAD, an implementation of the presented methods is publicly available for download at http://sourceforge.net/projects/zebrafishimage/ . Contact: johannes.stegmaier@kit.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Publication Date: 2014-02-26
    Description: Motivation: The comparison of genes and gene products across species depends on high-quality tools to determine the relationships between gene or protein sequences from various species. Although some excellent applications are available and widely used, their performance leaves room for improvement. Results: We developed orthAgogue: a multithreaded C application for high-speed estimation of homology relations in massive datasets, operated via a flexible and easy command-line interface. Availability: The orthAgogue software is distributed under the GNU license. The source code and binaries compiled for Linux are available at https://code.google.com/p/orthagogue/ . Contact: orthagogue-issue-tracker@googlegroups.com
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 85
    Publication Date: 2014-02-26
    Description: A challenge in biodata analysis is to understand the underlying phenomena among many interactions in signaling pathways. Such a study is formulated as pathway enrichment analysis, which identifies relevant pathways functionally enriched in high-throughput data. The question faced here is how to analyze different data types in a unified and integrative way by characterizing pathways that these data simultaneously reveal. To this end, we developed the integrative Pathway Enrichment Analysis Platform, iPEAP , which handles transcriptomics, proteomics, metabolomics and GWAS data under a unified aggregation schema. iPEAP emphasizes the ability to aggregate various pathway enrichment results generated in different high-throughput experiments, as well as the quantitative measurement of different ranking results, thus providing the first benchmark platform for integration, comparison and evaluation of multiple types of data and enrichment methods. Availability and implementation:   iPEAP is freely available at http://www.tongji.edu.cn/~qiliu/ipeap.html . Contact:   qiliu@tongji.edu.cn or zwcao@tongji.edu.cn Supplementary information:   Supplementary data are available at Bioinformatics online.
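    As a rough illustration of what aggregating enrichment rankings from several data types can look like, here is a toy Borda-style mean-rank aggregation; iPEAP's actual aggregation and scoring schemes may well differ, and the pathway names below are made up.

```python
# Toy mean-rank (Borda-style) aggregation of several pathway enrichment rankings.

def aggregate_rankings(rankings):
    """rankings: list of lists of pathway names, each ordered best-first."""
    scores = {}
    for ranking in rankings:
        for rank, pathway in enumerate(ranking):
            scores.setdefault(pathway, []).append(rank)
    worst = max(len(r) for r in rankings)       # penalty for missing pathways
    mean_rank = {p: (sum(r) + worst * (len(rankings) - len(r))) / len(rankings)
                 for p, r in scores.items()}
    return sorted(mean_rank, key=mean_rank.get)

transcriptomics = ["p53 signaling", "Apoptosis", "Cell cycle"]
proteomics      = ["Apoptosis", "Cell cycle", "p53 signaling"]
metabolomics    = ["Apoptosis", "p53 signaling"]
print(aggregate_rankings([transcriptomics, proteomics, metabolomics]))
# ['Apoptosis', 'p53 signaling', 'Cell cycle']
```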
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 86
    Publication Date: 2014-02-26
    Description: The semantic measures library and toolkit are robust, open-source and easy-to-use software solutions dedicated to semantic measures. They can be used for large-scale computations and analyses of semantic similarities between terms/concepts defined in terminologies and ontologies. The comparison of entities (e.g. genes) annotated by concepts is also supported. A large collection of measures is available. Not limited to a specific application context, the library and the toolkit can be used with various controlled vocabularies and ontology specifications (e.g. Open Biomedical Ontology, Resource Description Framework). The project targets both designers and practitioners of semantic measures, providing a Java library as well as a command-line tool that can be used on personal computers or computer clusters. Availability and implementation:  Downloads, documentation, tutorials, evaluation and support are available at http://www.semantic-measures-library.org . Contact:   harispe.sebastien@gmail.com
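    For orientation, a tiny example of one classical ontology-based measure (Lin similarity derived from information content) on a made-up concept DAG follows; this is purely illustrative and is not the library's API, which offers a much larger collection of measures and input formats.

```python
import math

# Toy DAG of concepts with made-up annotation counts.
parents = {"A": set(), "B": {"A"}, "C": {"A"}, "D": {"B", "C"}, "E": {"C"}}
counts  = {"A": 100, "B": 40, "C": 60, "D": 10, "E": 20}

def ancestors(c):
    out = {c}
    for p in parents[c]:
        out |= ancestors(p)
    return out

def ic(c):
    # Information content: -log of the probability that an annotation falls
    # under c, with counts propagating upwards through the hierarchy.
    total = sum(counts.values())
    mass = sum(n for x, n in counts.items() if c in ancestors(x))
    return -math.log(mass / total)

def lin(c1, c2):
    common = ancestors(c1) & ancestors(c2)
    ic_mica = max(ic(a) for a in common)       # most informative common ancestor
    return 2 * ic_mica / (ic(c1) + ic(c2))

print(round(lin("D", "E"), 3))                 # similarity of two sibling-like concepts
```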
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 87
    Publication Date: 2014-02-28
    Description: We propose a transductive Gaussian process (TGP) regression method with regularized Laplacian kernels. Transductive learning exploits not only the labeled data but also the unlabeled test instances for learning. GPs are Bayesian probabilistic regressors that use only labeled data; to make use of unlabeled data in GPs, regularized Laplacian kernels are used. As in supervised GP regression, the proposed method provides not only the predicted target values but also their error bars. It also provides a hyperparameter selection method based on a Bayesian model selection scheme. We applied the proposed TGP method to object pose estimation data sets as well as artificial data sets and compared it with existing methods. Experimental results show that the proposed method has some advantages over the existing methods.
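    A toy numerical sketch of the general construction is shown below, under my own assumptions: an RBF base kernel over labeled and unlabeled inputs is deformed with a graph Laplacian as (K^-1 + gamma*L)^-1, and ordinary GP regression is then run with the deformed kernel. The deformation, hyperparameters and data are illustrative stand-ins, not necessarily the paper's exact formulation.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def laplacian_kernel(X_all, ls=1.0, gamma=1.0):
    K = rbf(X_all, X_all, ls)                    # base kernel on all points
    W = K.copy()                                 # reuse it as graph weights
    L = np.diag(W.sum(1)) - W                    # graph Laplacian
    # Deformed kernel (K^-1 + gamma*L)^-1: a common semi-supervised construction.
    return np.linalg.inv(np.linalg.inv(K + 1e-8 * np.eye(len(K))) + gamma * L)

X_lab = np.array([[0.0], [1.0], [2.0]]); y = np.array([0.0, 1.0, 0.5])
X_unl = np.array([[0.5], [1.5], [3.0]])
X_all = np.vstack([X_lab, X_unl])
K = laplacian_kernel(X_all)
n, noise = len(X_lab), 1e-2
K_ll, K_ul = K[:n, :n], K[n:, :n]
alpha = np.linalg.solve(K_ll + noise * np.eye(n), y)
mean_unl = K_ul @ alpha                                       # predictive means
var_unl = np.diag(K[n:, n:] - K_ul @ np.linalg.solve(K_ll + noise * np.eye(n), K_ul.T))
print(mean_unl, var_unl)                                      # predictions with error bars
```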
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 88
    Publication Date: 2014-02-28
    Description: Owing to the sheer volume of text generated by a microblog site like Twitter, it is often difficult to fully understand what is being said about various topics. This paper presents algorithms for summarizing microblog documents. Initially, we present algorithms that produce single-document summaries but later extend them to produce summaries containing multiple documents. We evaluate the generated summaries by comparing them to both manually produced summaries and, for the multiple-post summaries, to the summarization results of some of the leading traditional summarization systems.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 89
    Publication Date: 2014-02-28
    Description: This paper addresses the development of a new framework to control traffic signal lights for a road network with a recently introduced bus rapid transit (BRT) system. By applying automated goal-directed learning and decision-making, known as reinforcement learning, the best possible traffic signal actions can be sought upon changes of network states as modelled by the signalized cell transmission model (CTM). An extension to a network of cascading interactions with a BRT system has been proposed, with simple uni-directional flows and no turning movements. Motivated by the BRT system in Thailand, the conventional signalized CTM has been generalized to cope with the preplanned space-usage priority of a BRT over other non-priority vehicles. A BRT physical lane separator as well as the locations of BRT stations have been explicitly modelled. A delay function covering both passengers carried on the BRT and on other non-priority vehicles, as well as passengers waiting at stations, has been introduced. Deploying the BRT system, with one lane taken away from general traffic by the lane separator, cannot reduce the total passenger delay in comparison with the same road and traffic conditions before the installation of the BRT system. Moreover, our proposed method outperforms preemptive and differential priority control methods because of its improved awareness of the signal switching cost.
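    For a flavour of the reinforcement-learning component only, here is a minimal tabular Q-learning loop on a made-up two-approach intersection; the state, action and reward definitions are my own stand-ins and do not represent the signalized CTM or the paper's BRT delay function.

```python
import random

# Toy environment: state = (queue_NS, queue_EW), action = which approach gets
# green, reward = negative total queue as a crude delay proxy.
def step(state, action):
    q_ns, q_ew = state
    arrivals = (random.randint(0, 2), random.randint(0, 2))
    if action == 0:
        q_ns = max(0, q_ns - 3)                  # green serves north-south
    else:
        q_ew = max(0, q_ew - 3)                  # green serves east-west
    q_ns = min(9, q_ns + arrivals[0]); q_ew = min(9, q_ew + arrivals[1])
    return (q_ns, q_ew), -(q_ns + q_ew)

Q, alpha, gamma, eps = {}, 0.1, 0.95, 0.1
state = (0, 0)
for _ in range(50000):
    if random.random() < eps:
        action = random.choice([0, 1])           # explore
    else:
        action = max([0, 1], key=lambda a: Q.get((state, a), 0.0))
    nxt, reward = step(state, action)
    best_next = max(Q.get((nxt, a), 0.0) for a in [0, 1])
    Q[(state, action)] = (1 - alpha) * Q.get((state, action), 0.0) + \
                         alpha * (reward + gamma * best_next)
    state = nxt

print(max([0, 1], key=lambda a: Q.get(((5, 1), a), 0.0)))   # greedy action at a sample state
```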
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 90
    Publication Date: 2014-02-28
    Description: A square matrix of distinct numbers in which every row, every column and both diagonals sum to the same total is referred to as a magic square. Constructing a magic square of a given order is considered a difficult computational problem, particularly when additional constraints are imposed. Hyper-heuristics are emerging high-level search methodologies that explore the space of heuristics for solving a given problem. In this study, we present a range of effective selection hyper-heuristics mixing perturbative low-level heuristics for constructing the constrained version of magic squares. The results show that selection hyper-heuristics, even the non-learning ones, deliver an outstanding performance, beating the best-known heuristic solution on average.
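    A toy selection hyper-heuristic in the spirit described above is sketched below: two simple perturbative swap heuristics, random heuristic selection, and a naive move-acceptance rule on the unconstrained problem. The paper's low-level heuristics, selection and acceptance mechanisms, and the additional constraint, are more sophisticated than this.

```python
import random

def violation(sq):
    n = len(sq); magic = n * (n * n + 1) // 2          # magic constant
    lines = [list(row) for row in sq] + [list(col) for col in zip(*sq)]
    lines += [[sq[i][i] for i in range(n)], [sq[i][n - 1 - i] for i in range(n)]]
    return sum(abs(sum(line) - magic) for line in lines)

def swap_random(sq):                                   # low-level heuristic 1
    n = len(sq)
    (a, b), (c, d) = [(random.randrange(n), random.randrange(n)) for _ in range(2)]
    sq[a][b], sq[c][d] = sq[c][d], sq[a][b]

def swap_from_worst_row(sq):                           # low-level heuristic 2
    n = len(sq); magic = n * (n * n + 1) // 2
    r = max(range(n), key=lambda i: abs(sum(sq[i]) - magic))
    r2 = random.choice([i for i in range(n) if i != r])
    j, k = random.randrange(n), random.randrange(n)
    sq[r][j], sq[r2][k] = sq[r2][k], sq[r][j]

def hyper_heuristic(n=3, iters=100000):
    nums = list(range(1, n * n + 1)); random.shuffle(nums)
    sq = [nums[i * n:(i + 1) * n] for i in range(n)]
    heuristics = [swap_random, swap_from_worst_row]
    for _ in range(iters):
        if violation(sq) == 0:
            break
        backup = [row[:] for row in sq]
        random.choice(heuristics)(sq)                  # heuristic selection step
        if violation(sq) > violation(backup) and random.random() > 0.05:
            sq = backup                                # mostly reject worsening moves
    return sq, violation(sq)

print(hyper_heuristic())
```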
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 91
    Oxford University Press
    Publication Date: 2014-02-28
    Description: Probabilistic Logic Programming (PLP) allows one to represent domains containing many entities connected by uncertain relations and has many applications in particular in Machine Learning. PITA is a PLP algorithm for computing the probability of queries, which exploits tabling, answer subsumption and Binary Decision Diagrams (BDDs). PITA does not impose any restriction on the programs. Other algorithms, such as PRISM, reduce computation time by imposing restrictions on the program, namely that subgoals are independent and that clause bodies are mutually exclusive. Another assumption that simplifies inference is that clause bodies are independent. In this paper, we present the algorithms PITA(IND,IND) and PITA(OPT). PITA(IND,IND) assumes that subgoals and clause bodies are independent. PITA(OPT) instead first checks whether these assumptions hold for subprograms and subgoals: if they do, PITA(OPT) uses a simplified calculation, otherwise it resorts to BDDs. Experiments on a number of benchmark datasets show that PITA(IND,IND) is the fastest on datasets respecting the assumptions, while PITA(OPT) is a good option when nothing is known about a dataset.
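    A small numeric illustration of why the independence assumptions simplify inference: with independent subgoals and clause bodies, a conjunction becomes a plain product of probabilities and a disjunction of explanations becomes a noisy-or, so no BDD is needed. The example program and probabilities below are made up; this is not PITA's implementation.

```python
# goal :- a, b.    goal :- c.    with P(a)=0.6, P(b)=0.5, P(c)=0.3,
# assuming all subgoals and clause bodies are mutually independent.

def and_independent(ps):
    out = 1.0
    for p in ps:
        out *= p                     # conjunction of independent subgoals
    return out

def or_independent(ps):
    out = 1.0
    for p in ps:
        out *= (1.0 - p)             # noisy-or over independent clause bodies
    return 1.0 - out

p_body1 = and_independent([0.6, 0.5])    # P(a, b) = 0.30
p_goal = or_independent([p_body1, 0.3])  # 1 - 0.7 * 0.7 = 0.51
print(p_body1, p_goal)
```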
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 92
    Publication Date: 2014-02-28
    Description: This paper discusses a system that extracts and displays temporal and geospatial entities in text. The first task involves identification of all events in a document followed by identification of important events using a classifier. The second task involves identifying named entities associated with the document. In particular, we extract geospatial named entities. We disambiguate the set of geospatial named entities and geocode them to determine the correct coordinates for each place name, often called grounding. We resolve ambiguity based on sentence and article context. Finally, we present a user with the key events and their associated people, places and organizations within a document in terms of a timeline and a map. For purposes of testing, we use Wikipedia articles about historical events, such as those describing wars, battles and invasions. We focus on extracting major events from the articles, although our ideas and tools can be easily used with articles from other sources such as news articles. We use several existing tools such as Evita, Google Maps, publicly available implementations of Support Vector Machines, Hidden Markov Model and Conditional Random Field, and the MIT SIMILE Timeline.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 93
    Publication Date: 2014-03-27
    Description: Server-aided verification (SAV) has potential applicability in lightweight devices for improving signature verification, where the verifier possesses computationally weak hardware. We observe that lightweight devices run all algorithms through hardware implementation with logic circuits. Existing SAV protocols indeed improve computational efficiency for lightweight devices; however, few of them take the hardware cost into consideration. The hardware implementation of SAV protocols could still be costly and expensive for lightweight devices. Currently, the most secure SAV protocols in the literature for pairing-based (G1 × G2 → GT) signatures can securely delegate pairing computations to the server; however, verifiers are still required to perform group operations over two completely different groups G1 and GT, which heavily contribute to the cost of hardware implementation. In this work, we propose several collusion-resistant SAV protocols for pairing-based signatures to improve their applicability for lightweight devices. In our SAV protocols, verifiers are only required to perform group operations in G1. In comparison with existing SAV protocols, our protocols save the unnecessary hardware cost of implementing group operations in GT and are therefore more applicable to lightweight applications.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 94
    Publication Date: 2014-03-27
    Description: Secrecy of decryption keys is an important pre-requisite for security of any encryption scheme and compromised private keys must be immediately replaced. Forward Security (FS), introduced to Public Key Encryption (PKE) by Canetti et al. (Eurocrypt 2003), reduces damage from compromised keys by guaranteeing confidentiality of messages that were encrypted prior to the compromise event. The FS property was also shown to be achievable in (Hierarchical) Identity-Based Encryption (HIBE) by Yao et al. (ACM CCS 2004). Yet, for emerging encryption techniques, offering flexible access control to encrypted data, by means of functional relationships between ciphertexts and decryption keys, FS protection was not known to exist. In this paper, we introduce FS to the powerful setting of Hierarchical Predicate Encryption (HPE), proposed by Okamoto and Takashima (Asiacrypt 2009). Anticipated applications of FS-HPE schemes can be found in searchable encryption and in fully private communication. Considering the dependencies among the concepts, our FS-HPE scheme implies forward-secure flavors of Predicate Encryption and (Hierarchical) Attribute-Based Encryption. Our FS-HPE scheme guarantees FS for plaintexts and for attributes that are hidden in HPE ciphertexts. It further allows delegation of decrypting abilities at any point in time, independent of FS time evolution. It realizes zero-inner-product predicates and is proved adaptively secure under standard assumptions. As the ‘cross-product’ approach taken in FS-HIBE is not directly applicable to the HPE setting, our construction resorts to techniques that are specific to existing HPE schemes and extends them with what can be seen as a reminiscent of binary tree encryption from FS-PKE.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 95
    Publication Date: 2014-03-27
    Description: Distributed denial of service (DDoS) attack is a coordinated attack, generally performed on a massive scale on the availability of services of a target system or network resources. Owing to the continuous evolution of new attacks and ever-increasing number of vulnerable hosts on the Internet, many DDoS attack detection or prevention mechanisms have been proposed. In this paper, we present a comprehensive survey of DDoS attacks, detection methods and tools used in wired networks. The paper also highlights open issues, research challenges and possible solutions in this area.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 96
    Publication Date: 2014-03-27
    Description: A novel adaptive steganographic scheme for spatial images is proposed. A noisy function is used to measure the texture complexity of 2×2 pixel blocks; this measure remains monotonically increasing under ±1 modifications. Therefore, the message is embedded into the noisiest areas and the recipient can identify the embedding region. ‘Double-layered embedding’ is exploited to reduce the number of ±1 modifications, in which fast matrix embedding and wet paper codes are applied to the least significant bit (LSB) plane and the second LSB plane, respectively. Experiments on resisting three steganalyzers show that the proposed method performs better than four typical steganographic schemes. Moreover, compared with the extended highly undetectable steGO with parameter T = 255, the novel method achieves a competitive ability to resist detection and a faster embedding speed.
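    To make the embedding idea concrete, here is a toy sketch that ranks 2×2 blocks by a simple spread measure (my stand-in for the paper's noisy function) and flips LSBs in the noisiest blocks with ±1 changes; it deliberately omits the double-layered matrix-embedding and wet-paper coding described above.

```python
import numpy as np

def block_complexity(img):
    # Standard deviation of each 2x2 block as a crude texture measure.
    h, w = img.shape
    blocks = img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).swapaxes(1, 2)
    return blocks.reshape(h // 2, w // 2, 4).std(axis=-1)

def embed(img, bits):
    img = img.astype(np.int16)
    comp = block_complexity(img)
    order = np.dstack(np.unravel_index(np.argsort(-comp, axis=None), comp.shape))[0]
    k = 0
    for by, bx in order:                      # noisiest blocks first
        for dy in range(2):
            for dx in range(2):
                if k >= len(bits):
                    return img.clip(0, 255).astype(np.uint8)
                y, x = 2 * by + dy, 2 * bx + dx
                if img[y, x] % 2 != bits[k]:  # flip the LSB with a +-1 change
                    img[y, x] += 1 if img[y, x] < 255 else -1
                k += 1
    return img.clip(0, 255).astype(np.uint8)

rng = np.random.default_rng(1)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
stego = embed(cover, [1, 0, 1, 1, 0, 0, 1, 0])
print(np.abs(stego.astype(int) - cover.astype(int)).sum())   # total distortion
```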
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 97
    Publication Date: 2014-03-27
    Description: In this study, we propose an efficient aggregate signcryption scheme to maximize the security of data in a kind of wireless medical network known as the disconnected or unattended wireless sensor network (UWSN), applied in medical systems. These networks address patients who need to be monitored for a long time. The main challenge of these networks, which are usually implanted on the patient's clothing and established in sensitive conditions, is that the server (station) visits the sensors only intermittently. Moreover, the sensors must retain data for long enough to off-load them to the station, as they have limited capacity and batteries. This disconnected nature gives adversaries the power to read and modify target data without being detected, or to disclose private medical data related to a patient. In this paper, we address these security problems and improve the first study of identity-based aggregate signcryption in UWSNs to achieve both key privacy and invisibility. Our improved approach is at the same time efficient in terms of space and communication overhead. Moreover, the proposed scheme allows servers to efficiently verify and unsigncrypt all the related data accumulated by sensors. We further show that the proposed scheme has resistance against reading and modifying attacks. We compare our scheme with the best alternative works in the literature.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 98
    Publication Date: 2014-03-27
    Description: We introduce ZIDS, a client-server solution for private detection of intrusions that is suitable for private detection of zero-day attacks in input data. The system includes an intrusion detection system (IDS) server that has a set of sensitive signatures for zero-day attacks and IDS clients that possess some sensitive data (e.g. files, logs). Using ZIDS, each IDS client learns whether its input data match any of the zero-day signatures, but neither party learns any additional information. In other words, the IDS client learns nothing about the zero-day signatures and the IDS server learns nothing about the input data and the analysis results. To solve this problem, we reduce privacy-preserving intrusion detection to an instance of secure two-party oblivious deterministic finite automata (ODFA) evaluation. Then, motivated by the fact that the DFAs associated with attack signatures are often sparse , we propose a new and efficient ODFA protocol that takes advantage of this sparsity. Our new construction is considerably more efficient than existing solutions and, at the same time, does not leak any sensitive information about the nature of the sparsity in the private DFA. We provide a full implementation of our privacy-preserving system, including optimizations that lead to better memory usage, and evaluate its performance on rule sets from the Snort IDS.
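    As a plain, non-private illustration of the sparsity that the protocol exploits, the sketch below stores a signature DFA with explicit transitions only where they differ from a default fallback to the start state; the oblivious two-party evaluation itself is not shown, and the pattern is a made-up example.

```python
# Sparse DFA for the byte pattern "evil": only non-default transitions are stored.
transitions = {
    0: {ord("e"): 1},
    1: {ord("v"): 2, ord("e"): 1},
    2: {ord("i"): 3, ord("e"): 1},
    3: {ord("l"): 4, ord("e"): 1},
}
accepting = {4}

def matches(data: bytes) -> bool:
    state = 0
    for b in data:
        if state in accepting:
            return True
        state = transitions.get(state, {}).get(b, 0)   # sparse lookup, default to start
    return state in accepting

print(matches(b"some evil payload"), matches(b"benign traffic"))   # True False
```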
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 99
    Publication Date: 2014-03-27
    Description: Anonymous multi-receiver identity-based encryption can protect the receiver identity privacy and message confidentiality. Thus, it can be used in many fields, such as Voice over Internet Protocol and pay-TV systems. In 2012, Chien improved an anonymous multi-receiver identity-based encryption scheme. This paper points out that Chien's scheme does not satisfy the indistinguishability of encryptions under selective multi-identity, chosen ciphertext attacks. The analysis is important for understanding the security risks.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 100
    Publication Date: 2014-03-27
    Description: Conditionally anonymous ring signatures are a variant of ring signatures such that the anonymity is conditional: if a user is the true signer, then he can claim this through a confirmation protocol; if he is not the signer, he can prove this through a disavowal protocol. Hence, this can preserve the anonymity of a signer while reserving the right to trace it when necessary. The security of such a signature also requires that an innocent non-signer will not be framed as a signer. In this paper, we propose a new framework for this type of signature without random oracles. Our construction can be realized under general complexity assumptions and has a simple structure. In contrast, previous works are based on non-standard assumptions or proved secure in the random oracle model.
    Print ISSN: 0010-4620
    Electronic ISSN: 1460-2067
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...