ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Articles  (2,832)
  • 2020-2022
  • 2015-2019  (2,832)
  • 1975-1979
  • 1945-1949
  • 2018  (2,832)
  • PLoS Computational Biology  (598)
  • Algorithms  (308)
  • Bioinformatics  (66)
  • 110151
  • 2184
  • 56466
  • Computer Science  (2,832)
  • 1
    Publication Date: 2018
    Description: Wu et al. (2014) showed that, under the small set expansion hypothesis (SSEH), there is no polynomial-time approximation algorithm with any constant approximation factor for several graph width parameters, including tree-width, path-width, and cut-width. In this paper, we extend this line of research by exploring other graph width parameters: we obtain similar approximation hardness results under the SSEH for rank-width and maximum induced matching-width, while at the same time we show the approximation hardness of carving-width, clique-width, NLC-width, and boolean-width. We also give a simpler proof of the approximation hardness of tree-width, path-width, and cut-width than that of Wu et al.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 2
    Publication Date: 2018
    Description: This paper develops a bias compensation-based parameter and state estimation algorithm for the observability canonical state-space system corrupted by colored noise. The state-space system is transformed into a linear regressive model by eliminating the state variables. Based on the determination of the noise variance and noise model, a bias correction term is added into the least squares estimate, and the system parameters and states are computed interactively. The proposed algorithm can generate the unbiased parameter estimate. Two illustrative examples are given to show the effectiveness of the proposed algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
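
The motivation in entry 2 above is that plain least squares becomes biased when the regression derived from a state-space system is corrupted by colored noise. A short simulation makes the effect visible; the ARX-type toy model, the MA(1) noise filter, and the sample size are illustrative assumptions, not the system studied in the paper, and the bias-compensation step itself is not reproduced:

```python
import numpy as np

# Toy illustration: least squares on an ARX-type regression is biased when the
# disturbance is colored, because the lagged output in the regressor is
# correlated with the noise.  Model and parameters are illustrative only.
rng = np.random.default_rng(2)
a_true, b_true, c = 0.7, 1.0, 0.8            # c shapes the MA(1) colored noise
N = 20_000
u = rng.standard_normal(N)
e = 0.5 * rng.standard_normal(N)
v = e + c * np.concatenate(([0.0], e[:-1]))  # colored noise v(k) = e(k) + c*e(k-1)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k] + v[k]

Phi = np.column_stack([y[:-1], u[1:]])       # regressors [y(k-1), u(k)]
theta_ls = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
print("true (a, b):", (a_true, b_true), " biased LS estimate:", np.round(theta_ls, 3))
```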
  • 3
    Publication Date: 2018
    Description: The Recurrent Neural Network (RNN) exploits dynamically changing temporal information through time cycles, so it is well suited to tasks with time-sequence characteristics. However, as the number of layers increases, the vanishing gradient problem occurs in the RNN. The Grid Long Short-Term Memory (GridLSTM) recurrent neural network can alleviate this problem in two dimensions by taking advantage of the two dimensions calculated in time and depth. In addition, time-sequence tasks depend on information both before and after the current moment. In this paper, we propose a method that takes both context sensitivity and gradient problems into account, namely the Bidirectional Grid Long Short-Term Memory (BiGridLSTM) recurrent neural network. This model not only takes advantage of the grid architecture, but it also captures information around the current moment. Extensive experiments on the LibriSpeech dataset show that BiGridLSTM is superior to other deep LSTM models and unidirectional LSTM models, and, when compared with GridLSTM, it achieves about a 26 percent improvement.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 4
    Publication Date: 2018
    Description: This paper presents a fast algorithm for texture-less object recognition, which is designed to be robust to cluttered backgrounds and small transformations. At its core, the proposed method is a two-stage template-based procedure using an orientation compressing map and discriminative regional weight (OCM-DRW) to effectively detect texture-less objects. In the first stage, the method quantizes and compresses all the orientations in a neighborhood to obtain the orientation compressing map, which is then used to generate a set of possible object locations. To recognize the object in these candidate locations, the second stage computes the similarity of each location with the learned template using the discriminative regional weight, which can effectively distinguish different categories of objects with similar parts. Experiments on publicly available texture-less object datasets indicate that, apart from yielding efficient computational performance, the proposed method also attains remarkable recognition rates, surpassing recent state-of-the-art texture-less object detectors in the presence of high clutter, occlusion, and scale-rotation changes. It improves accuracy and speed by 8% and 370%, respectively, relative to the previous best result on the D-Textureless dataset.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 5
    Publication Date: 2018
    Description: A finite-difference approach with non-uniform meshes is presented for simulating magnetotelluric responses in 2D structures. We derive the calculation formula of this scheme from the boundary value problems for the electric and magnetic fields, and compare finite-difference solutions with finite-element numerical results and with analytical solutions of a 1D model. First, a homogeneous half-space model was tested, and the finite-difference approach provided very good accuracy for 2D magnetotelluric modeling. We then compared the results to the analytical solutions for a two-layered geo-electric model; the relative errors of the apparent resistivity and the impedance phase both increased as the frequency increased. Finally, we compared our finite-difference simulation results for the COMMEMI 2D-0 model with finite-element solutions; the two sets of results are in close agreement. These comparisons confirm the validity and reliability of our finite-difference algorithm. Moreover, a future project will extend the 2D structures to 3D, where non-uniform meshes should perform especially well.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
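
Entry 5 above validates the 2D finite-difference scheme against analytical 1D solutions. As a point of reference, the standard 1D layered-earth magnetotelluric impedance recursion (the kind of analytical solution such comparisons typically use) can be sketched as follows; the two-layer resistivities, thickness, and frequencies below are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Sketch: standard 1D magnetotelluric impedance recursion for a layered half-space.
# Layer resistivities/thicknesses and frequencies are illustrative, not the paper's.
mu0 = 4e-7 * np.pi
rho = np.array([100.0, 10.0])      # ohm*m: top layer and basement half-space
h = np.array([2000.0])             # m: thickness of the top layer

def mt1d(freq, rho, h):
    w = 2.0 * np.pi * freq
    k = np.sqrt(1j * w * mu0 / rho)            # propagation constants per layer
    Z = np.sqrt(1j * w * mu0 * rho[-1])        # impedance of the bottom half-space
    for j in range(len(h) - 1, -1, -1):        # recurse upward through the layers
        Z0 = np.sqrt(1j * w * mu0 * rho[j])
        t = np.tanh(k[j] * h[j])
        Z = Z0 * (Z + Z0 * t) / (Z0 + Z * t)
    rho_a = np.abs(Z) ** 2 / (w * mu0)         # apparent resistivity
    phase = np.degrees(np.angle(Z))            # impedance phase
    return rho_a, phase

for f in (100.0, 1.0, 0.01):
    ra, ph = mt1d(f, rho, h)
    print(f"f = {f:7.2f} Hz  rho_a = {ra:8.2f} ohm*m  phase = {ph:5.1f} deg")
```

At high frequency the apparent resistivity approaches the top-layer value and at low frequency the basement value, which is the behaviour the 2D numerical solutions are checked against.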
  • 6
    Publication Date: 2018
    Description: In this paper, the optimal designs of type-1 and interval type-2 fuzzy systems for the classification of the heart rate level are presented. The contribution of this work is a proposed approach for achieving the optimal design of interval type-2 fuzzy systems for the classification of the heart rate in patients. The fuzzy rule base was designed based on the knowledge of experts. Optimization of the membership functions of the fuzzy systems is done in order to improve the classification rate and provide a more accurate diagnosis, and for this goal the Bird Swarm Algorithm was used. Two different type-1 fuzzy systems are designed and optimized, the first one with trapezoidal membership functions and the second with Gaussian membership functions. Once the best type-1 fuzzy systems have been obtained, these are considered as a basis for designing the interval type-2 fuzzy systems, where the footprint of uncertainty was optimized to find the optimal representation of uncertainty. After performing different tests with patients and comparing the classification rate of each fuzzy system, it is concluded that fuzzy systems with Gaussian membership functions provide a better classification than those designed with trapezoidal membership functions. Additionally, tests were performed with the Crow Search Algorithm to carry out a performance comparison, with Bird Swarm Algorithm being the one with the best results.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 7
    Publication Date: 2018
    Description: Hinge joint damage is a typical form of damage occurring in simply supported slab bridges, and it can adversely affect the overall force distribution of the structure. However, damage identification methods for hinge joint damage are still limited. In this study, a damage identification algorithm for simply supported hinged-slab bridges based on the modified hinge plate method (MHPM) and artificial bee colony (ABC) algorithms was proposed by considering the effect of hinge damage conditions on the lateral load distribution (LLD) of structures. Firstly, the MHPM was proposed and demonstrated; it is based on the traditional hinge plate method and introduces relative displacement as a damage factor to simulate hinge joint damage. The effectiveness of the MHPM was verified through comparison with the finite element method (FEM). Secondly, damage identification was treated as the inverse problem of calculating the LLD under damage conditions of simply supported slab bridges. The ABC algorithm was chosen to solve this problem due to its simple structure, ease of implementation, and robustness. Comparisons of convergence speed and identification accuracy with the genetic algorithm and particle swarm optimization were also conducted. Finally, hinged bridges composed of four and seven slabs were studied as numerical examples to demonstrate the feasibility and correctness of the proposed method. The simulation results revealed that the proposed algorithm could identify the location and degree of damaged joints efficiently and precisely.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 8
    Publication Date: 2018
    Description: In the paper, we tackle the least squares estimators of the Vasicek-type model driven by sub-fractional Brownian motion: dX_t = (μ + θ X_t) dt + dS_t^H, t ≥ 0, with X_0 = 0, where S^H is a sub-fractional Brownian motion whose Hurst index H is greater than 1/2, and μ ∈ ℝ, θ ∈ ℝ_+ are two unknown parameters. Based on the so-called continuous observations, we suggest the least squares estimators of μ and θ and discuss the consistency and asymptotic distributions of the two estimators.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
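
The least squares idea in entry 8 can be illustrated with a discretized stand-in: simulate the stochastic differential equation with an Euler scheme and regress the increments on (dt, X_t dt). Note the assumptions: ordinary Brownian motion (H = 1/2) replaces the sub-fractional driver S^H, and a mean-reverting θ < 0 is used for a stable sample path, unlike the θ ∈ ℝ_+ setting of the paper:

```python
import numpy as np

# Euler simulation of dX_t = (mu + theta*X_t) dt + dW_t with ordinary Brownian
# motion as a stand-in driver, followed by discretized least-squares estimates.
rng = np.random.default_rng(0)
mu_true, theta_true = 1.0, -0.5          # theta < 0 here for a mean-reverting path
T, n = 50.0, 50_000
dt = T / n
X = np.zeros(n + 1)
for k in range(n):
    X[k + 1] = X[k] + (mu_true + theta_true * X[k]) * dt + np.sqrt(dt) * rng.standard_normal()

# Least squares: regress increments dX_k on the columns (dt, X_k*dt)
dX = np.diff(X)
A = np.column_stack([np.full(n, dt), X[:-1] * dt])
mu_hat, theta_hat = np.linalg.lstsq(A, dX, rcond=None)[0]
print(f"mu_hat = {mu_hat:.3f}, theta_hat = {theta_hat:.3f}")
```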
  • 9
    Publication Date: 2018
    Description: Container ships must pass through multiple ports of call during a voyage. Therefore, forecasting container volume information at the port of origin followed by sending such information to subsequent ports is crucial for container terminal management and container stowage personnel. Numerous factors influence container allocation to container ships for a voyage, and the degree of influence varies, engendering a complex nonlinearity. Therefore, this paper proposes a model based on gray relational analysis (GRA) and mixed kernel support vector machine (SVM) for predicting container allocation to a container ship for a voyage. First, in this model, the weights of influencing factors are determined through GRA. Then, the weighted factors serve as the input of the SVM model, and SVM model parameters are optimized through a genetic algorithm. Numerical simulations revealed that the proposed model could effectively predict the number of containers for container ship voyage and that it exhibited strong generalization ability and high accuracy. Accordingly, this model provides a new method for predicting container volume for a voyage.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
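
A minimal sketch of the pipeline described in entry 9: grey relational analysis (GRA) grades used as factor weights in front of a kernel SVM regressor. The synthetic data, the resolution coefficient rho = 0.5, and the fixed SVM parameters are illustrative assumptions; the genetic-algorithm tuning of the SVM parameters from the paper is omitted:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
X = rng.random((80, 5))                                   # 5 hypothetical factors
y = X @ np.array([0.5, 0.1, 0.8, 0.2, 0.4]) + 0.05 * rng.standard_normal(80)

def gra_weights(X, y, rho=0.5):
    """Grey relational grade of each factor series against the target series."""
    def norm(a):
        return (a - a.min(axis=0)) / (a.max(axis=0) - a.min(axis=0) + 1e-12)
    Xn, yn = norm(X), norm(y.reshape(-1, 1)).ravel()
    diff = np.abs(Xn - yn[:, None])                       # absolute difference sequences
    coeff = (diff.min() + rho * diff.max()) / (diff + rho * diff.max())
    grades = coeff.mean(axis=0)                           # grey relational grades
    return grades / grades.sum()                          # normalized factor weights

w = gra_weights(X, y)
model = SVR(kernel="rbf", C=10.0, gamma="scale").fit(X * w, y)   # weighted inputs
print("GRA weights:", np.round(w, 3), " in-sample R^2:", round(model.score(X * w, y), 3))
```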
  • 10
    Publication Date: 2018
    Description: Images may be corrupted by salt and pepper impulse noise during image acquisitions or transmissions. Although promising denoising performances have been recently obtained with sparse representations, how to restore high-quality images remains challenging and open. In this work, image sparsity is enhanced with a fast multiclass dictionary learning, and then both the sparsity regularization and robust data fidelity are formulated as minimizations of L0-L0 norms for salt and pepper impulse noise removal. Additionally, a numerical algorithm of modified alternating direction minimization is derived to solve the proposed denoising model. Experimental results demonstrate that the proposed method outperforms the compared state-of-the-art ones on preserving image details and achieving higher objective evaluation criteria.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 11
    Publication Date: 2018
    Description: In this paper, a novel approach to facial expression recognition based on the discrete separable shearlet transform (DSST) and normalized mutual information feature selection is proposed. The approach can be divided into five steps. First, all test and training images are preprocessed. Second, DSST is applied to the preprocessed facial expression images, and all the transformation coefficients are obtained as the original feature set. Third, an improved normalized mutual information feature selection is proposed to find the optimal feature subset of the original feature set, thus we can retain the key classification information of the original data. Fourth, the feature extraction and selection of the feature space is reduced by employing linear discriminant analysis. Finally, a support vector machine is used to recognize the expressions. In this study, experimental verification was carried out on four open facial expression databases. The results show that this method can not only improve the recognition rate of facial expressions, but also significantly reduce the computational complexity and improve the system efficiency.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 12
    Publication Date: 2018
    Description: Discriminative correlation filters (DCFs) have been shown to perform superiorly in visual object tracking. However, visual tracking is still challenging when the target objects undergo complex scenarios such as occlusion, deformation, scale changes and illumination changes. In this paper, we utilize the hierarchical features of convolutional neural networks (CNNs) and learn a spatial-temporal context correlation filter on convolutional layers. Then, the translation is estimated by fusing the response score of the filters on the three convolutional layers. In terms of scale estimation, we learn a discriminative correlation filter to estimate scale from the best confidence results. Furthermore, we proposed a re-detection activation discrimination method to improve the robustness of visual tracking in the case of tracking failure and an adaptive model update method to reduce tracking drift caused by noisy updates. We evaluate the proposed tracker with DCFs and deep features on OTB benchmark datasets. The tracking results demonstrated that the proposed algorithm is superior to several state-of-the-art DCF methods in terms of accuracy and robustness.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 13
    Publication Date: 2018
    Description: The amount of information currently generated in the world has been increasing exponentially, raising the question of whether all acquired data is relevant for the learning algorithm process. If a subset of the data does not bring enough innovation, data-selection strategies can be employed to reduce the computational complexity cost and, in many cases, improve the estimation accuracy. In this paper, we explore some adaptive filtering algorithms whose characteristic features are their fast convergence and data selection. These algorithms incorporate a prescribed data-selection strategy and are compared in distinct applications environments. The simulation results include both synthetic and real data.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
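
One concrete example of the data-selective adaptive filters surveyed in entry 13 is the set-membership NLMS filter, which updates its coefficients only when the a-priori error exceeds a bound and skips "uninformative" samples. This is a generic sketch, not the authors' implementation; the unknown system, error bound, and noise level are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
w_true = np.array([0.8, -0.4, 0.25, 0.1])          # unknown system to identify
N, gamma = 5000, 0.05                               # samples, error bound
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros_like(w_true)
updates = 0
for k in range(len(w_true), N):
    xk = x[k - len(w_true) + 1:k + 1][::-1]         # regressor (most recent first)
    e = d[k] - w @ xk                               # a-priori error
    if abs(e) > gamma:                              # data-selection test
        mu = 1.0 - gamma / abs(e)                   # SM-NLMS variable step size
        w += mu * e * xk / (xk @ xk + 1e-12)
        updates += 1

print("estimated w:", np.round(w, 3))
print(f"updates used: {updates}/{N - len(w_true)} samples")
```

The update counter shows how many input pairs actually triggered a coefficient update, which is the computational saving these algorithms aim for.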
  • 14
    Publication Date: 2018
    Description: Path planning of unmanned aerial vehicles (UAVs) in threatening and adversarial areas is a constrained nonlinear optimization problem that must take a great number of static and dynamic constraints into account. Quantum-behaved pigeon-inspired optimization (QPIO) has been widely applied to such nonlinear problems. However, conventional QPIO suffers from slow global convergence and entrapment in local optima. In order to solve these problems, an improved QPIO algorithm, adaptive operator QPIO, is proposed in this paper. Firstly, a new initialization process based on the logistic mapping method is introduced to generate the initial pigeon-swarm population. After that, to improve the performance of the map and compass operation, the factor parameter is adaptively updated in each iteration, which balances the ability between global and local search. In the final landmark operation, a gradually decreasing pigeon population-updating strategy is introduced to prevent premature convergence and local optima. Finally, the proposed algorithm is demonstrated on the UAV path planning problem, and the comparison results indicate that its performance is better than that of particle swarm optimization (PSO), pigeon-inspired optimization (PIO), and its variants, in terms of convergence and accuracy.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
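
The chaotic initialization step mentioned in entry 14 (logistic mapping instead of uniform random numbers) is simple to sketch; the swarm size, bounds, and seed below are illustrative assumptions, and the rest of the QPIO optimizer is not reproduced:

```python
import numpy as np

def logistic_map_init(n_birds, dim, lower, upper, r=4.0, seed=0.37):
    """Generate an initial population by iterating the logistic map z <- r*z*(1-z)."""
    z = np.full(dim, seed)
    pop = np.empty((n_birds, dim))
    for i in range(n_birds):
        z = r * z * (1.0 - z)              # chaotic logistic iteration in (0, 1)
        pop[i] = lower + z * (upper - lower)
    return pop

swarm = logistic_map_init(n_birds=30, dim=2,
                          lower=np.array([0.0, 0.0]),
                          upper=np.array([100.0, 100.0]))
print(swarm[:3])
```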
  • 15
    Publication Date: 2018
    Description: Freight transportation is becoming an increasingly critical activity for enterprises in a global world. Moreover, the distribution activities have a non-negligible impact on the environment, as well as on the citizens’ welfare. The classical vehicle routing problem (VRP) aims at designing routes that minimize the cost of serving customers using a given set of capacitated vehicles. Some VRP variants consider traveling times, either in the objective function (e.g., including the goal of minimizing total traveling time or designing balanced routes) or as constraints (e.g., the setting of time windows or a maximum time per route). Typically, the traveling time between two customers or between one customer and the depot is assumed to be both known in advance and static. However, in real life, there are plenty of factors (predictable or not) that may affect these traveling times, e.g., traffic jams, accidents, road works, or even the weather. In this work, we analyze the VRP with dynamic traveling times. Our work assumes not only that these inputs are dynamic in nature, but also that they are a function of the structure of the emerging routing plan. In other words, these traveling times need to be dynamically re-evaluated as the solution is being constructed. In order to solve this dynamic optimization problem, a learnheuristic-based approach is proposed. Our approach integrates statistical learning techniques within a metaheuristic framework. A number of computational experiments are carried out in order to illustrate our approach and discuss its effectiveness.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 16
    Publication Date: 2018
    Description: In this study, fifth-order and sixth-order diagonally implicit Runge–Kutta type (DIRKT) techniques for solving fourth-order ordinary differential equations (ODEs) are derived; they are denoted as DIRKT5 and DIRKT6, respectively. The first method has three and the other has four identical nonzero diagonal elements. A set of test problems is used to validate the methods, and the numerical results show that the proposed methods are more efficient in terms of accuracy and number of function evaluations than existing implicit Runge–Kutta (RK) methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 17
    Publication Date: 2018
    Description: This paper presents a comparison among the bee colony optimization (BCO), differential evolution (DE), and harmony search (HS) algorithms. In addition, for each algorithm, a type-1 fuzzy logic system (T1FLS) for the dynamic modification of the main parameters is presented. The dynamic adjustment of the main parameters of each algorithm through fuzzy systems aims at enhancing the performance of the corresponding algorithms. Each algorithm (modified and original versions) is analyzed and compared based on the optimal design of fuzzy systems for benchmark control problems, especially in fuzzy controller design. Simulation results provide evidence that the FDE algorithm outperforms the FBCO and FHS algorithms in the optimization of fuzzy controllers. It is statistically demonstrated that better errors are obtained when fuzzy systems are used to enhance each proposed algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 18
    Publication Date: 2018
    Description: The combination of adaptive filters is an effective approach to improve filtering performance. In this paper, we investigate the performance of an adaptive combined scheme between two adaptive multiple-input single-output (MISO) filters, which can be easily extended to the case of multiple outputs. In order to generalize the analysis, we consider the multichannel affine projection algorithm (APA) to update the coefficients of the MISO filters, which increases the possibility of exploiting the capabilities of the filtering scheme. Using energy conservation relations, we derive a theoretical behavior of the proposed adaptive combination scheme at steady state. Such analysis entails some further theoretical insights with respect to the single channel combination scheme. Simulation results prove both the validity of the theoretical steady-state analysis and the effectiveness of the proposed combined scheme.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
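
Entry 18 builds on the affine projection algorithm (APA); a single-channel APA update, the building block that the paper extends to multichannel MISO filters and adaptive combinations, can be sketched as follows. Filter length, projection order, step size, and data are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
w_true = np.array([0.5, -0.3, 0.2])
L, P, mu, delta = len(w_true), 3, 0.5, 1e-3          # length, order, step, regularizer
N = 4000
x = rng.standard_normal(N)
d = np.convolve(x, w_true)[:N] + 0.01 * rng.standard_normal(N)

w = np.zeros(L)
for k in range(L + P - 1, N):
    # One column per recent regressor vector (most recent first)
    X = np.column_stack([x[k - p - L + 1:k - p + 1][::-1] for p in range(P)])
    dk = d[k - np.arange(P)]                         # matching desired samples
    e = dk - X.T @ w                                 # a-priori error vector
    w += mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), e)

print("estimated w:", np.round(w, 3))
```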
  • 19
    Publication Date: 2018
    Description: Parallel reservoir simulation is an important approach to solving real-time reservoir management problems. Recently, there has been a trend toward using graphics processing units (GPUs) to parallelize reservoir simulations. Current GPU-aided reservoir simulations focus on the compute unified device architecture (CUDA). Nevertheless, CUDA is not functionally portable across devices and incurs a high amount of code. Meanwhile, domain decomposition is not well exploited for GPU-based reservoir simulations. To address these problems, we propose a parallel method with OpenACC to accelerate serial code and reduce the time and effort required to port an application to the GPU. Furthermore, GPU-aided domain decomposition is developed to improve the efficiency of reservoir simulation. The experimental results indicate that (1) the proposed GPU-aided approach can outperform the CPU-based one by up to about two times, while, with the help of OpenACC, the workload of the ported code was reduced significantly, by about 22 percent of the source code, and (2) the domain decomposition method can further improve execution efficiency by up to 1.7×. The proposed parallel reservoir simulation method is an efficient tool for accelerating reservoir simulation.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 20
    Publication Date: 2018
    Description: The search for efficient methods and procedures to optimize experimental designs is a vital process in field trials that is often challenged by computational bottlenecks. Most existing methods ignore the presence of some form of correlations in the data to simplify the optimization process at the design stage. This study explores several algorithms for improving field experimental designs using a linear mixed models statistical framework adjusting for both spatial and genetic correlations based on A- and D-optimality criteria. Relative design efficiencies are estimated for an array of algorithms including pairwise swap, genetic neighborhood, and simulated annealing and evaluated with varying levels of heritabilities, spatial and genetic correlations. Initial randomized complete block designs were generated using a stochastic procedure and can also be imported directly from other design software. Results showed that at a spatial correlation of 0.6 and a heritability of 0.3, under the A-optimality criterion, both simulated annealing and simple pairwise algorithms achieved the highest design efficiencies of 7.4 % among genetically unrelated individuals, implying a reduction in average variance of the random treatment effects by 7.4 % when the algorithm was iterated 5000 times. In contrast, results under D-optimality criterion indicated that simulated annealing had the lowest design efficiency. The simple pairwise algorithm consistently maintained highest design efficiencies in all evaluated conditions. Design efficiencies for experiments with full-sib families decreased with increasing heritability. The number of successful swaps appeared to decrease with increasing heritability and were highest for both simulated annealing and simple pairwise algorithms, and lowest for genetic neighborhood algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 21
    Publication Date: 2018
    Description: The system identification problem becomes more challenging when the parameter space increases. Recently, several works have focused on the identification of bilinear forms, which are related to the impulse responses of a spatiotemporal model, in the context of a multiple-input/single-output system. In this framework, the problem was addressed in terms of the Wiener filter and different basic adaptive algorithms. This paper studies two types of algorithms tailored for the identification of such bilinear forms, i.e., the Kalman filter (along with its simplified version) and an optimized least-mean-square (LMS) algorithm. Also, a comparison between them is performed, which shows interesting similarities. In addition to the mathematical derivation of the algorithms, we also provide extensive experimental results, which support the theoretical findings and indicate the good performance of the proposed solutions.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 22
    Publication Date: 2018
    Description: The Hammer prototype is a query engine for corpora of Open Data that provides users with the concept of blind querying. Since data sets published on Open Data portals are heterogeneous, users wishing to find out interesting data sets are blind: queries cannot be fully specified, as in the case of databases. Consequently, the query engine is responsible for rewriting and adapting the blind query to the actual data sets, by exploiting lexical and semantic similarity. The effectiveness of this approach was discussed in our previous works. In this paper, we report our experience in developing the query engine. In fact, in the very first version of the prototype, we realized that the implementation of the retrieval technique was too slow, even though corpora contained only a few thousands of data sets. We decided to adopt the Map-Reduce paradigm, in order to parallelize the query engine and improve performances. We passed through several versions of the query engine, either based on the Hadoop framework or on the Spark framework. Hadoop and Spark are two very popular frameworks for writing and executing parallel algorithms based on the Map-Reduce paradigm. In this paper, we present our study about the impact of adopting the Map-Reduce approach and its two most famous frameworks to parallelize the Hammer query engine; we discuss various implementations of the query engine, either obtained without significantly rewriting the algorithm or obtained by completely rewriting the algorithm by exploiting high level abstractions provided by Spark. The experimental campaign we performed shows the benefits provided by each studied solution, with the perspective of moving toward Big Data in the future. The lessons we learned are collected and synthesized into behavioral guidelines for developers approaching the problem of parallelizing algorithms by means of Map-Reduce frameworks.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 23
    Publication Date: 2018
    Description: The flexible job shop scheduling problem (FJSSP) and the multi-row workshop layout problem (MRWLP) are two major focuses in sustainable manufacturing processes. There is a close interaction between them, since the FJSSP provides the material handling information that guides the optimization of the MRWLP, and the layout scheme affects the scheduling scheme through the transportation time of jobs. However, in traditional methods they are regarded as separate tasks performed sequentially, which ignores this interaction. Therefore, developing effective methods to deal with the multi-objective energy-aware integration of the FJSSP and MRWLP (MEIFM) problem in a sustainable manufacturing system is becoming more and more important. Based on the interaction between the FJSSP and the MRWLP, the MEIFM problem can be formulated as a multi-objective bi-level programming (MOBLP) model. The upper-level model for the FJSSP is employed to minimize the makespan and total energy consumption, while the lower-level model for the MRWLP is used to minimize the material handling quantity. Because the MEIFM problem is formulated as a mixed-integer non-linear programming model, it is difficult to solve using traditional methods. Thus, this paper proposes an improved multi-objective hierarchical genetic algorithm (IMHGA) to solve this model. Finally, the effectiveness of the method is verified through comparative experiments.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 24
    Publication Date: 2018
    Description: When financial market conditions change, traders adopt different strategies. The traders’ collective behaviour may cause significant changes in the statistical properties of price movements. When this happens, the market is said to have gone through “regime changes”. The purpose of this paper is to characterise what is a “normal market regime” as well as what is an “abnormal market regime”, under observations in Directional Changes (DC). Our study starts with historical data from 10 financial markets. For each market, we focus on a period of time in which significant events could have triggered regime changes. The observations of regime changes in these markets are then positioned in a designed two-dimensional indicator space based on DC. Our results suggest that the normal regimes from different markets share similar statistical characteristics. In other words, with our observations, it is possible to distinguish normal regimes from abnormal regimes. This is significant, because, for the first time, we can tell whether a market is in a normal regime by observing the DC indicators in the market. This opens the door for future work to be able to dynamically monitor the market for regime change.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
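
The Directional Changes (DC) framework used in entry 24 summarizes a price series by threshold-confirmed trend reversals rather than by fixed time intervals. A minimal event detector is sketched below; the synthetic price path, the 1% threshold, and the initial-trend convention are illustrative assumptions, and the paper's regime indicators are built on top of summaries of such events:

```python
import numpy as np

def dc_events(prices, theta=0.01):
    """Return (index, direction) pairs at which a directional change is confirmed."""
    events = []
    ext = prices[0]          # running extreme: max in an uptrend, min in a downtrend
    trend = "up"             # assumed initial trend (a common DC convention)
    for i, p in enumerate(prices):
        if trend == "up":
            if p > ext:
                ext = p
            elif p <= ext * (1 - theta):        # drop of theta from the max
                events.append((i, "down"))
                trend, ext = "down", p
        else:
            if p < ext:
                ext = p
            elif p >= ext * (1 + theta):        # rise of theta from the min
                events.append((i, "up"))
                trend, ext = "up", p
    return events

rng = np.random.default_rng(5)
prices = 100.0 * np.exp(np.cumsum(0.002 * rng.standard_normal(2000)))
ev = dc_events(prices, theta=0.01)
print(f"{len(ev)} DC events, first few: {ev[:5]}")
```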
  • 25
    Publication Date: 2018
    Description: The identification of risks associated with collision for vessels is an important element in maritime safety and management. A vessel collision avoidance system is a topic that has been deeply studied, and it is a specialization in navigation technology. The automatic identification system (AIS) has been used to support navigation, route estimation, collision prediction, and abnormal traffic detection. This article examined the main elements of ship collision, developed a mathematical model for the risk assessment, and simulated a collision assessment based on AIS information, thereby providing meaningful recommendations for crew training and a warning system, in conjunction with the AIS on board.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 26
    Publication Date: 2018
    Description: In order to improve the dynamic quality of traditional sliding mode control for an active suspension system, an optimal sliding mode control (OSMC) based on a genetic algorithm (GA) is proposed. First, the overall structure and control principle of the active suspension system are introduced. Second, the mathematical model of the quarter car active suspension system is established. Third, a sliding mode control (SMC) controller is designed to manipulate the active force to control the active suspension system. Fourth, GA is applied to optimize the weight coefficients of an SMC switching function and the parameters of the control law. Finally, the simulation model is built based on MATLAB/Simulink (version 2014a), and the simulations are performed and analyzed with the proposed control strategy to identify its performance. The simulation results show that the OSMC controller tuned using a GA has better control performance than the traditional SMC controller.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
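
Entry 26 optimizes a sliding mode controller with a genetic algorithm. The basic sliding-mode mechanism it tunes can be sketched on a much simpler plant; this is a unit-mass second-order system rather than the quarter-car suspension model, the gains c and K are hand-picked rather than GA-optimized, and a smoothed sign function is used to limit chattering:

```python
import numpy as np

c, K = 2.0, 3.0                        # sliding-surface slope and switching gain
dt, T = 1e-3, 5.0
x, v = 1.0, 0.0                        # initial displacement and velocity
log = []
for k in range(int(T / dt)):
    t = k * dt
    d = 0.5 * np.sin(2 * np.pi * t)    # bounded disturbance, |d| <= 0.5 < K
    s = c * x + v                      # sliding surface s = c*x + x_dot
    u = -c * v - K * np.tanh(s / 0.01) # smoothed sign(s) reduces chattering
    a = u + d                          # unit-mass dynamics: x_ddot = u + d
    v += a * dt
    x += v * dt
    log.append(x)

print("final |x|:", abs(log[-1]))      # displacement driven close to zero
```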
  • 27
    Publication Date: 2018
    Description: In the context of this research work, we studied the problem of privacy preservation in spatiotemporal databases. In particular, we investigated the k-anonymity of mobile users based on real trajectory data. The k-anonymity set consists of the k nearest neighbors. We constructed a motion vector of the form (x, y, g, v), where x and y are the spatial coordinates, g is the angle direction, and v is the velocity of the mobile user, and studied the problem in four-dimensional space. We followed two approaches. The former applies only the k-Nearest Neighbor (k-NN) algorithm on the whole dataset, while the latter combines trajectory clustering, based on K-means, with k-NN; that is, it applies k-NN inside a cluster of mobile users with a similar motion pattern (g, v). We defined a metric, called vulnerability, that measures the rate at which the k-NNs vary. This metric ranges from 1/k (high robustness) to 1 (low robustness) and represents the probability of the real identity of a mobile user being discovered by a potential attacker. The aim of this work was to show that, with high probability, this rate tends to a number very close to 1/k under the clustering method, which means that k-anonymity is highly preserved. Through experiments on real spatial datasets, we evaluated the anonymity robustness, the so-called vulnerability, of the proposed method.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
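
The clustering-then-k-NN construction of the anonymity set described in entry 27 can be sketched in a few lines: cluster users by motion pattern (g, v) with K-means, then take the k nearest neighbours of a target user within its cluster. The synthetic motion vectors, the number of clusters, and k are illustrative assumptions, and the vulnerability statistic itself (how often the k-NN set changes over time) is not computed here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(6)
n, k = 500, 10
xy = rng.uniform(0, 1000, size=(n, 2))                 # positions x, y
gv = np.column_stack([rng.uniform(0, 360, n),          # heading g (degrees)
                      rng.uniform(0, 30, n)])          # speed v
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(gv)

target = 0
members = np.where(labels == labels[target])[0]        # users with similar motion
nn = NearestNeighbors(n_neighbors=k + 1).fit(xy[members])
_, idx = nn.kneighbors(xy[[target]])
anonymity_set = members[idx[0][1:]]                    # drop the target itself
print("k-anonymity set for user 0:", anonymity_set)
```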
  • 28
    Public Library of Science (PLoS)
    Publication Date: 2018-12-14
    Description: by Itamar Daniel Landau, Haim Sompolinsky We present a simple model for coherent, spatially correlated chaos in a recurrent neural network. Networks of randomly connected neurons exhibit chaotic fluctuations and have been studied as a model for capturing the temporal variability of cortical activity. The dynamics generated by such networks, however, are spatially uncorrelated and do not generate coherent fluctuations, which are commonly observed across spatial scales of the neocortex. In our model we introduce a structured component of connectivity, in addition to random connections, which effectively embeds a feedforward structure via unidirectional coupling between a pair of orthogonal modes. Local fluctuations driven by the random connectivity are summed by an output mode and drive coherent activity along an input mode. The orthogonality between input and output mode preserves chaotic fluctuations by preventing feedback loops. In the regime of weak structured connectivity we apply a perturbative approach to solve the dynamic mean-field equations, showing that in this regime coherent fluctuations are driven passively by the chaos of local residual fluctuations. When we introduce a row balance constraint on the random connectivity, stronger structured connectivity puts the network in a distinct dynamical regime of self-tuned coherent chaos. In this regime the coherent component of the dynamics self-adjusts intermittently to yield periods of slow, highly coherent chaos. The dynamics display longer time-scales and switching-like activity. We show how in this regime the dynamics depend qualitatively on the particular realization of the connectivity matrix: a complex leading eigenvalue can yield coherent oscillatory chaos while a real leading eigenvalue can yield chaos with broken symmetry. The level of coherence grows with increasing strength of structured connectivity until the dynamics are almost entirely constrained to a single spatial mode. We examine the effects of network-size scaling and show that these results are not finite-size effects. Finally, we show that in the regime of weak structured connectivity, coherent chaos emerges also for a generalized structured connectivity with multiple input-output modes.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 29
    Publication Date: 2018-03-06
    Description: Motivation The traditional view of cancer evolution states that a cancer genome accumulates a sequential ordering of mutations over a long period of time. However, in recent years it has been suggested that a cancer genome may instead undergo a one-time catastrophic event, such as chromothripsis , where a large number of mutations instead occur simultaneously . A number of potential signatures of chromothripsis have been proposed. In this work, we provide a rigorous formulation and analysis of the ‘ability to walk the derivative chromosome’ signature originally proposed by Korbel and Campbell. In particular, we show that this signature, as originally envisioned, may not always be present in a chromothripsis genome and we provide a precise quantification of under what circumstances it would be present. We also propose a variation on this signature, the H/T alternating fraction , which allows us to overcome some of the limitations of the original signature. Results We apply our measure to both simulated data and a previously analyzed real cancer dataset and find that the H/T alternating fraction may provide useful signal for distinguishing genomes having acquired mutations simultaneously from those acquired in a sequential fashion. Availability and implementation An implementation of the H/T alternating fraction is available at https://bitbucket.org/oesperlab/ht-altfrac . Contact loesper@carleton.edu Supplementary information Supplementary dataSupplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 30
    Publication Date: 2018-03-06
    Description: Motivation The identification of microRNA (miRNA) target sites is important. In the past decade, dozens of computational methods have been developed to predict miRNA target sites. Despite their existence, rarely does a method consider the well-known competition and cooperation among miRNAs when attempts to discover target sites. To fill this gap, we developed a new approach called CCmiR, which takes the cooperation and competition of multiple miRNAs into account in a statistical model to predict their target sites. Results Tested on four different datasets, CCmiR predicted miRNA target sites with a high recall and a reasonable precision, and identified known and new cooperative and competitive miRNAs supported by literature. Compared with three state-of-the-art computational methods, CCmiR had a higher recall and a higher precision. Availability and implementation CCmiR is freely available at http://hulab.ucf.edu/research/projects/miRNA/CCmiR . Contact xiaoman@mail.ucf.edu or haihu@cs.ucf.edu Supplementary information Supplementary dataSupplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 31
    Publication Date: 2018-03-06
    Description: Motivation Eukaryotic chromosomes adapt a complex and highly dynamic three-dimensional (3D) structure, which profoundly affects different cellular functions and outcomes including changes in epigenetic landscape and in gene expression. Making the scenario even more complex, cancer cells harbor chromosomal abnormalities [e.g. copy number variations (CNVs) and translocations] altering their genomes both at the sequence level and at the level of 3D organization. High-throughput chromosome conformation capture techniques (e.g. Hi-C), which are originally developed for decoding the 3D structure of the chromatin, provide a great opportunity to simultaneously identify the locations of genomic rearrangements and to investigate the 3D genome organization in cancer cells. Even though Hi-C data has been used for validating known rearrangements, computational methods that can distinguish rearrangement signals from the inherent biases of Hi-C data and from the actual 3D conformation of chromatin, and can precisely detect rearrangement locations de novo have been missing. Results In this work, we characterize how intra and inter-chromosomal Hi-C contacts are distributed for normal and rearranged chromosomes to devise a new set of algorithms (i) to identify genomic segments that correspond to CNV regions such as amplifications and deletions ( HiCnv ), (ii) to call inter-chromosomal translocations and their boundaries ( HiCtrans ) from Hi-C experiments and (iii) to simulate Hi-C data from genomes with desired rearrangements and abnormalities ( AveSim ) in order to select optimal parameters for and to benchmark the accuracy of our methods. Our results on 10 different cancer cell lines with Hi-C data show that we identify a total number of 105 amplifications and 45 deletions together with 90 translocations, whereas we identify virtually no such events for two karyotypically normal cell lines. Our CNV predictions correlate very well with whole genome sequencing data among chromosomes with CNV events for a breast cancer cell line ( r  = 0.89) and capture most of the CNVs we simulate using Avesim. For HiCtrans predictions, we report evidence from the literature for 30 out of 90 translocations for eight of our cancer cell lines. Furthermore, we show that our tools identify and correctly classify relatively understudied rearrangements such as double minutes and homogeneously staining regions. Considering the inherent limitations of existing techniques for karyotyping (i.e. missing balanced rearrangements and those near repetitive regions), the accurate identification of CNVs and translocations in a cost-effective and high-throughput setting is still a challenge. Our results show that the set of tools we develop effectively utilize moderately sequenced Hi-C libraries (100–300 million reads) to identify known and de novo chromosomal rearrangements/abnormalities in well-established cancer cell lines. With the decrease in required number of cells and the increase in attainable resolution, we believe that our framework will pave the way towards comprehensive mapping of genomic rearrangements in primary cells from cancer patients using Hi-C. Availability and implementation CNV calling: https://github.com/ay-lab/HiCnv , Translocation calling: https://github.com/ay-lab/HiCtrans and Hi-C simulation: https://github.com/ay-lab/AveSim . Contact ferhatay@lji.org Supplementary information Supplementary dataSupplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 32
    Publication Date: 2018-03-06
    Description: Summary Cancer hallmarks, a concept that seeks to explain the complexity of cancer initiation and development, provide a new perspective of studying cancer signaling which could lead to a greater understanding of this complex disease. However, to the best of our knowledge, there is currently a lack of tools that support such hallmark-based study of the cancer signaling network, thereby impeding the gain of knowledge in this area. We present TROVE, an user-friendly software that facilitates hallmark annotation, visualization and analysis in cancer signaling networks. In particular, TROVE facilitates hallmark analysis specific to particular cancer types. Availability and implementation Available under the Eclipse Public License from: https://sites.google.com/site/cosbyntu/softwares/trove and https://github.com/trove2017/Trove . Contact hechua@ntu.edu.sg or assourav@ntu.edu.sg
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 33
    Publication Date: 2018-03-06
    Description: Summary High-throughput screening of the host transcriptional response to various viral infections provides a wealth of data, but utilization of microarray and next generation sequencing (NGS) data for analysis can be difficult. The Host Transcriptional Response Data Base (HoTResDB) allows visitors to access already processed microarray and NGS data from non-human primate models of viral hemorrhagic fever to better understand the host transcriptional response. Availability HoTResDB is freely available at http://hotresdb.bu.edu Contact jhconnor@bu.edu
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 34
    Publication Date: 2018-03-06
    Description: Motivation Structural variation, including large deletions, duplications, inversions, translocations and other rearrangements, is common in human and cancer genomes. A number of methods have been developed to identify structural variants from Illumina short-read sequencing data. However, reliable identification of structural variants remains challenging because many variants have breakpoints in repetitive regions of the genome and thus are difficult to identify with short reads. The recently developed linked-read sequencing technology from 10X Genomics combines a novel barcoding strategy with Illumina sequencing. This technology labels all reads that originate from a small number (∼5 to 10) DNA molecules ∼50 Kbp in length with the same molecular barcode. These barcoded reads contain long-range sequence information that is advantageous for identification of structural variants. Results We present Novel Adjacency Identification with Barcoded Reads (NAIBR), an algorithm to identify structural variants in linked-read sequencing data. NAIBR predicts novel adjacencies in an individual genome resulting from structural variants using a probabilistic model that combines multiple signals in barcoded reads. We show that NAIBR outperforms several existing methods for structural variant identification—including two recent methods that also analyze linked-reads—on simulated sequencing data and 10X whole-genome sequencing data from the NA12878 human genome and the HCC1954 breast cancer cell line. Several of the novel somatic structural variants identified in HCC1954 overlap known cancer genes. Availability and implementation Software is available at compbio.cs.brown.edu/software . Contact braphael@princeton.edu Supplementary information Supplementary dataSupplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 35
    Publication Date: 2018-03-06
    Description: Motivation Cancers arise as the result of somatically acquired changes in the DNA of cancer cells. However, in addition to the mutations that confer a growth advantage, cancer genomes accumulate a large number of somatic mutations resulting from normal DNA damage and repair processes as well as carcinogenic exposures or cancer related aberrations of DNA maintenance machinery. These mutagenic processes often produce characteristic mutational patterns called mutational signatures. The decomposition of a cancer genome’s mutation catalog into mutations consistent with such signatures can provide valuable information about cancer etiology. However, the results from different decomposition methods are not always consistent. Hence, one needs to be able to not only decompose a patient’s mutational profile into signatures but also establish the accuracy of such decomposition. Results We proposed two complementary ways of measuring confidence and stability of decomposition results and applied them to analyze mutational signatures in breast cancer genomes. We identified both very stable and highly unstable signatures, as well as signatures that previously have not been associated with breast cancer. We also provided additional support for the novel signatures. Our results emphasize the importance of assessing the confidence and stability of inferred signature contributions. Availability and implementation All tools developed in this paper have been implemented in an R package, called SignatureEstimation, which is available from https://www.ncbi.nlm.nih.gov/CBBresearch/Przytycka/index.cgi\#signatureestimation . Contact wojtowda@ncbi.nlm.nih.gov or przytyck@ncbi.nlm.nih.gov Supplementary information Supplementary dataSupplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 36
    Publication Date: 2018-03-06
    Description: Motivation Brain imaging genetics, which studies the linkage between genetic variations and structural or functional measures of the human brain, has become increasingly important in recent years. Discovering the bi-multivariate relationship between genetic markers such as single-nucleotide polymorphisms (SNPs) and neuroimaging quantitative traits (QTs) is one major task in imaging genetics. Sparse Canonical Correlation Analysis (SCCA) has been a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ 1 -norm or its variants to induce sparsity. The ℓ 0 -norm penalty is a perfect sparsity-inducing tool which, however, is an NP-hard problem. Results In this paper, we propose the truncated ℓ 1 -norm penalized SCCA to improve the performance and effectiveness of the ℓ 1 -norm based SCCA methods. Besides, we propose an efficient optimization algorithms to solve this novel SCCA problem. The proposed method is an adaptive shrinkage method via tuning τ . It can avoid the time intensive parameter tuning if given a reasonable small τ . Furthermore, we extend it to the truncated group-lasso (TGL), and propose TGL-SCCA model to improve the group-lasso-based SCCA methods. The experimental results, compared with four benchmark methods, show that our SCCA methods identify better or similar correlation coefficients, and better canonical loading profiles than the competing methods. This demonstrates the effectiveness and efficiency of our methods in discovering interesting imaging genetic associations. Availability and implementation The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/tlpscca/ . Contact dulei@nwpu.edu.cn or shenli@iu.edu Supplementary information Supplementary dataSupplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 37
    Publication Date: 2018-03-06
    Description: Motivation Modelling with multiple servers that use different algorithms for docking results in more reliable predictions of interaction sites. However, the scoring and comparison of all models by an expert is time-consuming and is not feasible for large volumes of data generated by such modelling. Results Quality ASsessment of DOcking Models (QASDOM) Server is a simple and efficient tool for real-time simultaneous analysis, scoring and ranking of data sets of receptor–ligand complexes built by a range of docking techniques. This meta-server is designed to analyse large data sets of docking models and rank them by scoring criteria developed in this study. It produces two types of output showing the likelihood of specific residues and clusters of residues to be involved in receptor–ligand interactions and the ranking of models. The server also allows visualizing residues that form interaction sites in the receptor and ligand sequence and displays 3D model structures of the receptor–ligand complexes. Availability http://qasdom.eimb.ru . Contact alexei.adzhubei@eimb.ru. Supplementary information Supplementary dataSupplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 38
    Publication Date: 2018-03-06
    Description: Motivation Protein–protein interactions are vital for protein function with the average protein having between three and ten interacting partners. Knowledge of precise protein–protein interfaces comes from crystal structures deposited in the Protein Data Bank (PDB), but only 50% of structures in the PDB are complexes. There is therefore a need to predict protein–protein interfaces in silico and various methods for this purpose. Here we explore the use of a predictor based on structural features and which exploits random forest machine learning, comparing its performance with a number of popular established methods. Results On an independent test set of obligate and transient complexes, our IntPred predictor performs well (MCC = 0.370, ACC = 0.811, SPEC = 0.916, SENS = 0.411) and compares favourably with other methods. Overall, IntPred ranks second of six methods tested with SPPIDER having slightly better overall performance (MCC = 0.410, ACC = 0.759, SPEC = 0.783, SENS = 0.676), but considerably worse specificity than IntPred. As with SPPIDER, using an independent test set of obligate complexes enhanced performance (MCC = 0.381) while performance is somewhat reduced on a dataset of transient complexes (MCC = 0.303). The trade-off between sensitivity and specificity compared with SPPIDER suggests that the choice of the appropriate tool is application-dependent. Availability and implementation IntPred is implemented in Perl and may be downloaded for local use or run via a web server at www.bioinf.org.uk/intpred/ . Contact andrew@bioinf.org.uk or andrew.martin@ucl.ac.uk Supplementary information Supplementary dataSupplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 39
    Publication Date: 2018-03-06
    Description: Motivation Protein function is directly related to amino acid residue composition and the dynamics of these residues. Centrality analyses based on residue interaction networks permit the identification of key residues in a protein that are important for its fold or function. Such central residues and their environment constitute suitable targets for mutagenesis experiments. Predicted flexibility and changes in flexibility upon mutation provide valuable additional information for the design of such experiments. Results We combined centrality analyses with DynaMine flexibility predictions in a Cytoscape app called RINspector. The app performs centrality analyses and directly visualizes the results on a graph of predicted residue flexibility. In addition, the effect of mutations on local flexibility can be calculated. Availability and implementation The app is publicly available in the Cytoscape app store. Contact guillaume.brysbaert@univ-lille1.fr Supplementary information Supplementary data are available at Bioinformatics online.
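    The centrality step can be illustrated with a short sketch (a toy residue interaction network and standard graph measures, not RINspector's own implementation):

        import networkx as nx

        # Toy residue interaction network: nodes are residues, edges are contacts
        rin = nx.Graph([("ALA12", "LEU45"), ("LEU45", "ARG72"),
                        ("ARG72", "ASP90"), ("ALA12", "ASP90"), ("LEU45", "ASP90")])

        betweenness = nx.betweenness_centrality(rin)
        closeness = nx.closeness_centrality(rin)

        # Residues with the highest centrality are natural candidates for mutagenesis experiments
        for residue in sorted(rin, key=betweenness.get, reverse=True):
            print(residue, round(betweenness[residue], 3), round(closeness[residue], 3))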
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 40
    Publication Date: 2018-03-06
    Description: Summary Fully exploiting the wealth of data in current bacterial population genomics datasets requires synthesizing and integrating different types of analysis across millions of base pairs in hundreds or thousands of isolates. Current approaches often use static representations of phylogenetic, epidemiological, statistical and evolutionary analysis results that are difficult to relate to one another. Phandango is an interactive application that runs in a web browser and allows fast exploration of large-scale population genomics datasets, combining the output from multiple genomic analysis methods in an intuitive and interactive manner. Availability and implementation Phandango is a web application freely available for use at www.phandango.net and includes a diverse collection of datasets as examples. Source code together with a detailed wiki page is available on GitHub at https://github.com/jameshadfield/phandango . Contact jh22@sanger.ac.uk or sh16@sanger.ac.uk
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 41
    Publication Date: 2018-03-06
    Description: Summary MetExploreViz is an open source web component that can be easily embedded in any web site. It provides features dedicated to the visualization of metabolic networks and pathways and thus offers a flexible solution to analyse omics data in a biochemical context. Availability and implementation Documentation and link to GIT code repository (GPL 3.0 license) are available at this URL: http://metexplore.toulouse.inra.fr/metexploreViz/doc/ Contact contact-metexplore@inra.fr
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 42
    Publication Date: 2018-03-06
    Description: Motivation Single cell transcriptional profiling opens up a new avenue in studying the functional role of cell-to-cell variability in physiological processes. The analysis of single cell expression profiles creates new challenges due to the distributive nature of the data and the stochastic dynamics of the gene transcription process. The reconstruction of gene regulatory networks (GRNs) using single cell transcriptional profiles is particularly challenging, especially when directed gene-gene relationships are desired. Results We developed SINCERITIES (SINgle CEll Regularized Inference using TIme-stamped Expression profileS) for the inference of GRNs from single cell transcriptional profiles. We focused on time-stamped cross-sectional expression data, commonly generated from transcriptional profiling of single cells collected at multiple time points after cell stimulation. SINCERITIES recovers directed regulatory relationships among genes by employing regularized linear regression (ridge regression), using temporal changes in the distributions of gene expressions. Meanwhile, the modes of the gene regulations (activation and repression) come from partial correlation analyses between pairs of genes. We demonstrated the efficacy of SINCERITIES in inferring GRNs using in silico time-stamped single cell expression data and single cell transcriptional profiles of THP-1 monocytic human leukemia cells. The case studies showed that SINCERITIES could provide accurate GRN predictions, significantly better than other GRN inference algorithms such as TSNI, GENIE3 and JUMP3. Moreover, SINCERITIES has a low computational complexity and is amenable to problems of extremely large dimensionality. Finally, an application of SINCERITIES to single cell expression data of T2EC chicken erythrocytes pointed to BATF as a candidate novel regulator of erythroid development. Availability and implementation MATLAB and R versions of SINCERITIES are freely available from the following websites: http://www.cabsel.ethz.ch/tools/sincerities.html and https://github.com/CABSEL/SINCERITIES . The single cell THP-1 and T2EC transcriptional profiles are available from the original publications ( Kouno et al. , 2013 ; Richard et al. , 2016 ). The in silico single cell data are available on the SINCERITIES websites. Contact rudi.gunawan@chem.ethz.ch Supplementary information Supplementary data are available at Bioinformatics online.
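    A rough sketch of the regression idea (hypothetical input arrays, scikit-learn and SciPy in place of the released MATLAB/R code; the sign assignment via partial correlation is omitted):

        import numpy as np
        from scipy.stats import ks_2samp
        from sklearn.linear_model import Ridge

        # expr is a list of (cells x genes) expression matrices, one per time point (hypothetical input)
        def distribution_shifts(expr):
            # Kolmogorov-Smirnov distance of each gene's distribution between consecutive time points
            n_genes = expr[0].shape[1]
            return np.array([[ks_2samp(expr[t][:, g], expr[t + 1][:, g]).statistic
                              for g in range(n_genes)]
                             for t in range(len(expr) - 1)])

        def infer_edges(shifts, alpha=1.0):
            # regress each gene's shift on the preceding shifts of all genes (ridge regression)
            n_genes = shifts.shape[1]
            weights = np.zeros((n_genes, n_genes))
            for g in range(n_genes):
                weights[:, g] = Ridge(alpha=alpha).fit(shifts[:-1], shifts[1:, g]).coef_
            return weights     # column g holds putative regulators of gene g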
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 43
    Publication Date: 2018-03-06
    Description: Summary Addressing deleterious effects of noncoding mutations is an essential step towards the identification of disease-causal mutations of gene regulatory elements. Several methods for quantifying the deleteriousness of noncoding mutations using artificial intelligence, deep learning and other approaches have been recently proposed. Although the majority of the proposed methods have demonstrated excellent accuracy on different test sets, there is rarely a consensus. In addition, the advanced statistical and artificial learning approaches used by these methods make it difficult to port these methods outside of the labs that developed them. To address these challenges and to transform the methodological advances in predicting deleterious noncoding mutations into a practical resource available for the broader functional genomics and population genetics communities, we developed SNPDelScore, which uses a panel of proposed methods for quantifying deleterious effects of noncoding mutations to precompute and compare the deleteriousness scores of all common SNPs in the human genome in 44 cell lines. The panel of deleteriousness scores of a SNP computed using different methods is supplemented by functional information from the GWAS Catalog, libraries of transcription factor-binding sites, and genic characteristics of mutations. SNPDelScore comes with a genome browser capable of displaying and comparing large sets of SNPs in a genomic locus and rapidly identifying consensus SNPs with the highest deleteriousness scores, making those prime candidates for phenotype-causal polymorphisms. Availability and implementation https://www.ncbi.nlm.nih.gov/research/snpdelscore/ Contact ovcharen@nih.gov Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 44
    Publication Date: 2018-03-06
    Description: Motivation The selection of a single nucleotide polymorphism (SNP) using bibliographic methods can be a very time-consuming task. Moreover, a SNP selected in this way may not be easily visualized in its genomic context by a standard user hoping to correlate it with other valuable information. Here we propose a web form built on top of Circos that can assist SNP-centered screening, based on their location in the genome and the regulatory modules they can disrupt. Its use may allow researchers to prioritize SNPs in genotyping and disease studies. Results SiNoPsis is bundled as a web portal. It focuses on the different structures involved in the genomic expression of a gene, especially those found in the core promoter upstream region. These structures include transcription factor binding sites (for promoter and enhancer signals), histones and promoter flanking regions. Additionally, the tool provides eQTL and linkage disequilibrium (LD) properties for a given SNP query, yielding further clues about other indirectly associated SNPs. Possible disruptions of the aforementioned structures affecting gene transcription are reported using multiple resource databases. SiNoPsis has a simple user-friendly interface, which allows single queries by gene symbol, genomic coordinates, Ensembl gene identifiers, RefSeq transcript identifiers and SNPs. It is the only portal providing useful SNP selection based on regulatory modules and LD with functional variants in both textual and graphic modes (by properly defining the arguments and parameters needed to run Circos). Availability and implementation SiNoPsis is freely available at https://compgen.bio.ub.edu/SiNoPsis/ Contact danielboloc@gmail.com Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 45
    Publication Date: 2018-03-06
    Description: Motivation Inter-residue contacts in proteins have been widely acknowledged to be valuable for protein 3D structure prediction. Accurate prediction of long-range transmembrane inter-helix residue contacts can significantly improve the quality of simulated membrane protein models. Results In this paper, we present an updated MemBrain predictor, which aims to predict transmembrane protein residue contacts. Our new model benefits from an efficient learning algorithm that can mine latent structural features that exist in the original feature space. The new MemBrain is a two-stage inter-helix contact predictor. The first stage takes sequence-based features as inputs and outputs coarse contact probabilities for each residue pair, which are further fed into a convolutional neural network together with predictions from three direct-coupling analysis approaches in the second stage. Experimental results on the training dataset show that our method achieves an average accuracy of 81.6% for the top L/5 predictions using a strict sequence-based jackknife cross-validation. Evaluated on the test dataset, MemBrain can achieve 79.4% prediction accuracy. Moreover, for the top L/5 predicted long-range loop contacts, the prediction performance can reach an accuracy of 56.4%. These results demonstrate that the new MemBrain is promising for transmembrane protein contact map prediction. Availability and implementation http://www.csbio.sjtu.edu.cn/bioinf/MemBrain/ Contact hbshen@sjtu.edu.cn Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 46
    Publication Date: 2018-03-06
    Description: Motivation We recently published MS2LDA, a method for the decomposition of sets of molecular fragment data derived from large metabolomics experiments. To make the method more widely available to the community, here we present ms2lda.org, a web application that allows users to upload their data, run MS2LDA analyses and explore the results through interactive visualizations. Results Ms2lda.org takes tandem mass spectrometry data in many standard formats and allows the user to infer the sets of fragment and neutral loss features that co-occur (Mass2Motifs). As an alternative workflow, the user can also decompose a data set onto predefined Mass2Motifs. This is accomplished through the web interface or programmatically from our web service. Availability and implementation The website can be found at http://ms2lda.org , while the source code is available at https://github.com/sdrogers/ms2ldaviz under the MIT license. Contact simon.rogers@glasgow.ac.uk Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 47
    Publication Date: 2018-03-06
    Description: Motivation Linkage and quantitative trait loci (QTL) maps are critical tools for the study of the genetic basis of complex traits. With the advances in sequencing technology over the past decade, linkage map densities have been increasing dramatically, while the visualization tools have not kept pace. LinkageMapView is a free add-on package written in R that produces high-resolution, publication-ready visualizations of linkage and QTL maps. While there is software available to generate linkage map graphics, no existing tool is at once freely available, open source, able to produce publication-quality figures and able to run on all platforms. LinkageMapView can be integrated into map building pipelines as it seamlessly incorporates output from R/qtl and also accepts simple text or comma delimited files. There are numerous options within the package to build highly customizable maps, allow for linkage group comparisons, and annotate QTL regions. Availability and implementation https://cran.r-project.org/web/packages/LinkageMapView/ Contact louellet@uncc.edu
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 48
    Publication Date: 2018-03-06
    Description: Motivation Rapid and low-cost sequencing of genomes has enabled widespread use of genomic data in research studies and personalized customer applications, where genomic data is shared in public databases. Although the identities of the participants are anonymized in these databases, sensitive information about individuals can still be inferred. One such piece of information is kinship. Results We define two routes through which kinship privacy can leak and propose a technique to protect kinship privacy against these risks while maximizing the utility of shared data. The method involves systematic identification of minimal portions of genomic data to mask as new participants are added to the database. Choosing the proper positions to hide is cast as an optimization problem in which the number of positions to mask is minimized subject to privacy constraints that ensure the familial relationships are not revealed. We evaluate the proposed technique on real genomic data. Results indicate that concurrent sharing of data pertaining to a parent and an offspring results in high risks to kinship privacy, whereas sharing data from more distant relatives together is often safer. We also show that the arrival order of family members has a high impact on the level of privacy risk and on the utility of sharing data. Availability and implementation https://github.com/tastanlab/Kinship-Privacy Contact erman@cs.bilkent.edu.tr or oznur.tastan@cs.bilkent.edu.tr Supplementary information Supplementary data are available at Bioinformatics online.
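    The masking idea can be pictured with a small greedy sketch (everything below is hypothetical: the kinship statistic, threshold and data layout are placeholders, not the released Kinship-Privacy implementation, which solves an explicit optimization problem):

        def greedy_mask(new_genotype, shared_relative, kin_score, threshold):
            # kin_score(a, b, masked) is a user-supplied kinship statistic that ignores masked positions
            masked = set()
            while kin_score(new_genotype, shared_relative, masked) > threshold:
                candidates = [p for p in range(len(new_genotype)) if p not in masked]
                # hide the position whose removal lowers the kinship statistic the most
                best = min(candidates,
                           key=lambda p: kin_score(new_genotype, shared_relative, masked | {p}))
                masked.add(best)
            return masked  # positions to withhold before the new genome is shared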
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 49
    Publication Date: 2018-03-06
    Description: Summary Precision oncology is an approach that accounts for individual differences to guide cancer management. Omics signatures have been shown to predict clinical traits for cancer patients. However, the vast amount of omics information poses an informatics challenge in systematically identifying patterns associated with health outcomes, and no general purpose data mining tool exists for physicians, medical researchers and citizen scientists without significant training in programming and bioinformatics. To bridge this gap, we built the Omics AnalySIs System for PRecision Oncology (OASISPRO), a web-based system to mine the quantitative omics information from The Cancer Genome Atlas (TCGA). This system effectively visualizes patients’ clinical profiles, executes machine-learning algorithms of choice on the omics data and evaluates the prediction performance using held-out test sets. With this tool, we successfully identified genes strongly associated with tumor stage, and accurately predicted patients’ survival outcomes in many cancer types, including adrenocortical carcinoma. By identifying the links between omics and clinical phenotypes, this system will facilitate omics studies on precision cancer medicine and contribute to establishing personalized cancer treatment plans. Availability and implementation This web-based tool is available at http://tinyurl.com/oasispro ; source codes are available at http://tinyurl.com/oasisproSourceCode . Contact khyu@stanford.edu or mpsnyder@stanford.edu Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 50
    Publication Date: 2018-03-06
    Description: Summary Analysis of RNA sequencing (RNA-Seq) data has highlighted the fact that most genes undergo alternative splicing (AS) and that these patterns are tightly regulated. Many of these events are complex, resulting in numerous possible isoforms that quickly become difficult to visualize, interpret and experimentally validate. To address these challenges we developed MAJIQ-SPEL, a web-tool that takes as input local splicing variations (LSVs) quantified from RNA-Seq data and provides users with visualization and quantification of gene isoforms associated with those LSVs. Importantly, MAJIQ-SPEL is able to handle both classical (binary) and complex, non-binary, splicing variations. Using a matching primer design algorithm it also suggests to users possible primers for experimental validation by RT-PCR and displays those, along with the matching protein domains affected by the LSV, on the UCSC Genome Browser for further downstream analysis. Availability and implementation Program and code will be available at http://majiq.biociphers.org/majiq-spel . Contact yosephb@upenn.edu Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 51
    Publication Date: 2018-03-06
    Description: Motivation An accurate molecular structure of the protein dimer representing the elementary building block of intermediate filaments (IFs) is essential for understanding the filament assembly, rationalizing their mechanical properties and explaining the effect of disease-related IF mutations. The dimer contains a ∼300-residue long α-helical coiled coil which cannot be assessed by either direct experimental structure determination or modelling using standard approaches. At the same time, coiled coils are well-represented in structural databases. Results Here we present CCFold, a generally applicable threading-based algorithm which produces coiled-coil models from protein sequence only. The algorithm is based on a statistical analysis of experimentally determined structures and can handle any hydrophobic repeat patterns in addition to the most common heptads. We demonstrate that CCFold outperforms general-purpose computational folding in terms of accuracy, while being faster by orders of magnitude. By combining the CCFold algorithm and Rosetta folding we generate representative dimer models for all IF protein classes. Availability and implementation The source code is freely available at https://github.com/biocryst/IF ; a web server to run the program is at http://pharm.kuleuven.be/Biocrystallography/cc . Contact sergei.strelkov@kuleuven.be Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 52
    Publication Date: 2018-03-06
    Description: Motivation Cells process information, in part, through transcription factor (TF) networks, which control the rates at which individual genes produce their products. A TF network map is a graph that indicates which TFs bind and directly regulate each gene. Previous work has described network mapping algorithms that rely exclusively on gene expression data and ‘integrative’ algorithms that exploit a wide range of data sources including chromatin immunoprecipitation sequencing (ChIP-seq) of many TFs, genome-wide chromatin marks, and binding specificities for many TFs determined in vitro . However, such resources are available only for a few major model systems and cannot be easily replicated for new organisms or cell types. Results We present NetProphet 2.0, a ‘data light’ algorithm for TF network mapping, and show that it is more accurate at identifying direct targets of TFs than other, similarly data light algorithms. In particular, it improves on the accuracy of NetProphet 1.0, which used only gene expression data, by exploiting three principles. First, combining multiple approaches to network mapping from expression data can improve accuracy relative to the constituent approaches. Second, TFs with similar DNA binding domains bind similar sets of target genes. Third, even a noisy, preliminary network map can be used to infer DNA binding specificities from promoter sequences and these inferred specificities can be used to further improve the accuracy of the network map. Availability and implementation Source code and comprehensive documentation are freely available at https://github.com/yiming-kang/NetProphet_2.0 . Contact brent@wustl.edu Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 53
    Publication Date: 2018-03-06
    Description: Motivation To increase detection power, researchers use gene-level analysis methods to aggregate weak marker signals. Because gene expression controls biological processes, researchers have proposed aggregating signals over expression quantitative trait loci (eQTLs). Most gene-level eQTL methods make statistical inferences based on (i) summary statistics from genome-wide association studies (GWAS) and (ii) linkage disequilibrium patterns from a relevant reference panel. While most such tools assume homogeneous cohorts, our Gene-level Joint Analysis of functional SNPs in Cosmopolitan Cohorts (JEPEGMIX) method accommodates cosmopolitan cohorts by using heterogeneous panels. However, JEPEGMIX relies on brain eQTLs from older gene expression studies and does not adjust for background enrichment in GWAS signals. Results We propose JEPEGMIX2, an extension of JEPEGMIX. Compared to JEPEGMIX, it uses (i) cis-eQTL SNPs from the latest expression studies and (ii) brain-specific (sub)tissues and tissues other than brain. JEPEGMIX2 also (i) avoids accumulating averagely enriched polygenic information by adjusting for background enrichment and (ii) to avoid an increase in false-positive rates for studies with numerous highly enriched (above the background) genes, outputs gene q-values based on a Holm adjustment of P-values. Availability and implementation https://github.com/Chatzinakos/JEPEGMIX2 . Contact chris.chatzinakos@vcuhealth.org Supplementary information Supplementary data are available at Bioinformatics online.
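    For readers unfamiliar with the Holm step-down procedure mentioned above, a minimal sketch (hypothetical P-values, using statsmodels rather than the JEPEGMIX2 code):

        from statsmodels.stats.multitest import multipletests

        # Hypothetical gene-level P-values from a gene-level eQTL test
        pvalues = [1e-6, 0.004, 0.03, 0.2, 0.7]

        # Holm adjustment controls the family-wise error rate without assuming independence
        reject, adjusted, _, _ = multipletests(pvalues, alpha=0.05, method="holm")
        print(list(zip(pvalues, adjusted.round(4), reject)))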
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 54
    Publication Date: 2018-03-06
    Description: Motivation Identification of disease-associated miRNAs (disease miRNAs) is critical for understanding disease etiology and pathogenesis. Since miRNAs exert their functions by regulating the expression of their target mRNAs, several methods based on the target genes were proposed to predict disease miRNA candidates. They achieved only limited success as they all suffered from the high false-positive rate of target prediction results. Alternatively, other prediction methods were based on the observation that miRNAs with similar functions tend to be associated with similar diseases and vice versa. These methods exploited the information about miRNAs and diseases, including the functional similarities between miRNAs, the similarities between diseases, and the associations between miRNAs and diseases. However, completely integrating these multiple kinds of information while accounting for the biological characteristics of disease miRNAs remains a challenging problem. Results We constructed a bilayer network to represent the complex relationships among miRNAs, among diseases and between miRNAs and diseases. We propose a non-negative matrix factorization-based method to rank, and thereby predict, the disease miRNA candidates. The method integrates the miRNA functional similarity, the disease similarity and the miRNA-disease associations seamlessly, which exploits the complex relationships within the bilayer network and the consensus relationship between multiple kinds of information. Considering the correlation between the candidates related to various diseases, it predicts their respective candidates for all the diseases simultaneously. In addition, the sparseness characteristic of disease miRNAs was introduced to generate a more reliable prediction model that excludes noisy candidates. The results on 15 common diseases showed a superior performance of the new method for not only well-characterized diseases but also new ones. A detailed case study on breast neoplasms, colorectal neoplasms, lung neoplasms and 32 other diseases demonstrated the ability of the method for discovering potential disease miRNAs. Availability and implementation The web service for the new method and the list of predicted candidates for all the diseases are available at http://www.bioinfolab.top . Contact xuanping@hlju.edu.cn or zhang@hlju.edu.cn or lijzh@hit.edu.cn Supplementary information Supplementary data are available at Bioinformatics online.
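    A bare-bones sketch of the factorization step (plain scikit-learn NMF on a random toy matrix; the paper's model additionally couples the factors to miRNA and disease similarities and adds a sparsity term, which are omitted here):

        import numpy as np
        from sklearn.decomposition import NMF

        # Toy binary miRNA x disease association matrix (1 = known association)
        A = np.random.default_rng(0).integers(0, 2, size=(50, 15)).astype(float)

        model = NMF(n_components=5, init="nndsvda", max_iter=500, random_state=0)
        W = model.fit_transform(A)        # miRNA factors
        H = model.components_             # disease factors
        scores = W @ H                    # reconstruction scores for every miRNA-disease pair
        top_for_disease0 = np.argsort(-scores[:, 0])[:10]   # highest-scoring candidate miRNAs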
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 55
    Publication Date: 2018-03-06
    Description: Summary The Quest for Orthologs (QfO) is an open collaboration framework for experts in comparative phylogenomics and related research areas who have an interest in highly accurate orthology predictions and their applications. We here report highlights and discussion points from the QfO meeting 2015 held in Barcelona. Achievements in recent years have established a basis to support developments for improved orthology prediction and to explore new approaches. Central to the QfO effort is proper benchmarking of methods and services, as well as design of standardized datasets and standardized formats to allow sharing and comparison of results. Simultaneously, analysis pipelines have been improved, evaluated and adapted to handle large datasets. All this would not have occurred without the long-term collaboration of Consortium members. Meeting regularly to review and coordinate complementary activities from a broad spectrum of innovative researchers clearly benefits the community. Highlights of the meeting include addressing sources of and legitimacy of disagreements between orthology calls, the context dependency of orthology definitions, special challenges encountered when analyzing very anciently rooted orthologies, orthology in the light of whole-genome duplications, and the concept of orthologous versus paralogous relationships at different levels, including domain-level orthology. Furthermore, particular needs for different applications (e.g. plant genomics, ancient gene families and others) and the infrastructure for making orthology inferences available (e.g. interfaces with model organism databases) were discussed, with several ongoing efforts that are expected to be reported on during the upcoming 2017 QfO meeting. Contact selewis@lbl.gov or c.dessimoz@ucl.ac.uk
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 56
    Publication Date: 2018-03-06
    Description: Motivation In recent years, the massively parallel cDNA sequencing (RNA-Seq) technologies have become a powerful tool providing high resolution measurement of expression and high sensitivity in detecting low abundance transcripts. However, RNA-Seq data analysis requires a huge amount of computational effort. The most fundamental and critical step is to align each sequence fragment against the reference genome. Various de novo spliced RNA aligners have been developed in recent years. Though these aligners can handle spliced alignment and detect splice junctions, some challenges still remain to be solved. With the advances in sequencing technologies and the ongoing collection of sequencing data in the ENCODE project, more efficient alignment algorithms are highly demanded. Most read mappers follow the conventional seed-and-extend strategy to deal with inexact matches for sequence alignment. However, the extension is much more time consuming than the seeding step. Results We propose a novel RNA-Seq de novo mapping algorithm, called DART, which adopts a partitioning strategy to avoid the extension step. The experimental results on synthetic datasets and real NGS datasets showed that DART is a highly efficient aligner that yields the highest or comparable sensitivity and accuracy compared to most state-of-the-art aligners and, more importantly, spends the least amount of time among the selected aligners. Availability and implementation https://github.com/hsinnan75/DART Contact hsu@iis.sinica.edu.tw Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 57
    Publication Date: 2018-12-14
    Description: by Enrico Lavezzo, Michele Berselli, Ilaria Frasson, Rosalba Perrone, Giorgio Palù, Alessandra R. Brazzale, Sara N. Richter, Stefano Toppo G-quadruplexes are non-canonical nucleic-acid structures that control transcription, replication, and recombination in organisms. G-quadruplexes are present in eukaryotes, prokaryotes, and viruses. In the latter, mounting evidence indicates their key biological activity. Since data on viruses are scattered, we here present a comprehensive analysis of potential quadruplex-forming sequences (PQS) in the genome of all known viruses that can infect humans. We show that the occurrence and location of PQSs are features characteristic of each virus class and family. Our statistical analysis shows that their presence within the viral genome is orderly arranged, as indicated by the possibility of correctly assigning up to two-thirds of viruses to their exact class based on the PQS classification. For each virus we provide: i) the list of all PQS present in the genome (positive and negative strands), ii) their position in the viral genome, iii) the degree of conservation among strains of each PQS in its genome context, iv) the statistical significance of PQS abundance. This information is accessible from a database to allow the easy navigation of the results: http://www.medcomp.medicina.unipd.it/main_site/doku.php?id=g4virus. The availability of these data will greatly expedite research on G-quadruplexes in viruses, with the possibility of accelerating the discovery of therapeutic opportunities for numerous, and some fearsome, human diseases.
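    The PQS motif itself is easy to scan for; a minimal sketch using one common definition (four runs of at least three guanines separated by loops of one to seven nucleotides; the paper also scans the negative strand and applies its own island/loop rules and conservation statistics):

        import re

        # One common PQS definition on the positive strand: four G-runs of length >= 3
        # separated by loops of 1-7 nucleotides
        PQS = re.compile(r"(G{3,}[ACGT]{1,7}){3}G{3,}")

        genome = "ATGGGATGGGTTGGGAAAGGGCTTT"   # toy sequence, not a real viral genome
        for match in PQS.finditer(genome):
            print(match.start(), match.end(), match.group())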
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 58
    Publication Date: 2018-03-06
    Description: Summary Gene expression alterations and potentially underlying gene copy number mutations can be measured routinely in the wet lab, but it is still extremely challenging to quantify impacts of altered genes on clinically relevant characteristics to predict putative driver genes. We developed the R package regNet that utilizes gene expression and copy number data to learn regulatory networks for the quantification of potential impacts of individual gene expression alterations on user-defined target genes via network propagation. We demonstrate the value of regNet by identifying putative major regulators that distinguish pilocytic from diffuse astrocytomas and by predicting putative impacts of glioblastoma-specific gene copy number alterations on cell cycle pathway genes and patient survival. Availability and implementation regNet is available for download at https://github.com/seifemi/regNet under GNU GPL-3 . Contact michael.seifert@tu-dresden.de Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 59
    Publication Date: 2018-03-06
    Description: Motivation MicroRNAs (miRNAs) play crucial roles in post-transcriptional regulations and various cellular processes. The identification of disease-related miRNAs provides great insights into the underlying pathogenesis of diseases at a system level. However, most existing computational approaches are biased towards known miRNA-disease associations, which is inappropriate for those new diseases or miRNAs without any known association information. Results In this study, we propose a new method with graph regularized non-negative matrix factorization in heterogeneous omics data, called GRNMF, to discover potential associations between miRNAs and diseases, especially for new diseases and miRNAs or those diseases and miRNAs with sparse known associations. First, we integrate the disease semantic information and miRNA functional information to estimate disease similarity and miRNA similarity, respectively. Considering that there is no available interaction observed for new diseases or miRNAs, a preprocessing step is developed to construct the interaction score profiles that will assist in prediction. Next, a graph regularized non-negative matrix factorization framework is utilized to simultaneously identify potential associations for all diseases. The results indicated that our proposed method can effectively prioritize disease-associated miRNAs with higher accuracy compared with other recent approaches. Moreover, case studies also demonstrated the effectiveness of GRNMF to infer unknown miRNA-disease associations for those novel diseases and miRNAs. Availability and implementation The code of GRNMF is freely available at https://github.com/XIAO-HN/GRNMF/ . Contact luojiawei@hnu.edu.cn Supplementary information Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 60
    Publication Date: 2018-03-13
    Description: by Hao Ge, Pingping Wu, Hong Qian, Sunney Xiaoliang Xie Within an isogenic population, even in the same extracellular environment, individual cells can exhibit various phenotypic states. The exact role of stochastic gene-state switching regulating the transition among these phenotypic states in a single cell is not fully understood, especially in the presence of positive feedback. Recent high-precision single-cell measurements showed that, at least in bacteria, switching in gene states is slow relative to the typical rates of active transcription and translation. Hence using the lac operon as an archetype, in such a region of operon-state switching, we present a fluctuating-rate model for this classical gene regulation module, incorporating the more realistic operon-state switching mechanism that was recently elucidated. We found that the positive feedback mechanism induces bistability (referred to as deterministic bistability), and that the parameter range for its occurrence is significantly broadened by stochastic operon-state switching. We further show that in the absence of positive feedback, operon-state switching must be extremely slow to trigger bistability by itself. However, in the presence of positive feedback, which stabilizes the induced state, the relatively slow operon-state switching kinetics within the physiological region are sufficient to stabilize the uninduced state, together generating a broadened parameter region of bistability (referred to as stochastic bistability). We illustrate the opposite phenotype-transition rate dependence upon the operon-state switching rates in the two types of bistability, with the aid of a recently proposed rate formula for fluctuating-rate models. The rate formula also predicts a maximal transition rate in the intermediate region of operon-state switching, which is validated by numerical simulations in our model. Overall, our findings suggest a biological function of transcriptional “variations” among genetically identical cells, for the emergence of bistability and transition between phenotypic states.
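    The deterministic backbone of such a positive-feedback module can be illustrated with a generic toy model (parameter values are arbitrary and are not taken from the paper's lac operon model); scanning for sign changes of dp/dt exposes the two stable states and the unstable state between them:

        import numpy as np

        # Generic positive-feedback toy model: basal leak + Hill-type autoactivation - linear removal
        def dpdt(p, basal=0.5, vmax=10.0, K=4.0, n=4, gamma=1.0):
            return basal + vmax * p**n / (K**n + p**n) - gamma * p

        grid = np.linspace(0.0, 15.0, 100000)
        vals = dpdt(grid)
        # sign changes of dp/dt mark the fixed points: low stable, intermediate unstable, high stable
        fixed_points = grid[:-1][np.sign(vals[:-1]) != np.sign(vals[1:])]
        print(fixed_points)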
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 61
    Publication Date: 2018-03-13
    Description: by Michael Pablo, Samuel A. Ramirez, Timothy C. Elston Polarity establishment, the spontaneous generation of asymmetric molecular distributions, is a crucial component of many cellular functions. Saccharomyces cerevisiae (yeast) undergoes directed growth during budding and mating, and is an ideal model organism for studying polarization. In yeast and many other cell types, the Rho GTPase Cdc42 is the key molecular player in polarity establishment. During yeast polarization, multiple patches of Cdc42 initially form, then resolve into a single front. Because polarization relies on strong positive feedback, it is likely that the amplification of molecular-level fluctuations underlies the generation of multiple nascent patches. In the absence of spatial cues, these fluctuations may be key to driving polarization. Here we used particle-based simulations to investigate the role of stochastic effects in a Turing-type model of yeast polarity establishment. In the model, reactions take place either between two molecules on the membrane, or between a cytosolic and a membrane-bound molecule. Thus, we developed a computational platform that explicitly simulates molecules at and near the cell membrane, and implicitly handles molecules away from the membrane. To evaluate stochastic effects, we compared particle simulations to deterministic reaction-diffusion equation simulations. Defining macroscopic rate constants that are consistent with the microscopic parameters for this system is challenging, because diffusion occurs in two dimensions and particles exchange between the membrane and cytoplasm. We address this problem by empirically estimating macroscopic rate constants from appropriately designed particle-based simulations. Ultimately, we find that stochastic fluctuations speed polarity establishment and permit polarization in parameter regions predicted to be Turing stable. These effects can operate at Cdc42 abundances expected of yeast cells, and promote polarization on timescales consistent with experimental results. To our knowledge, our work represents the first particle-based simulations of a model for yeast polarization that is based on a Turing mechanism.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 62
    Publication Date: 2018-03-13
    Description: by Britta U. Westner, Sarang S. Dalal, Simon Hanslmayr, Tobias Staudigl Single-trial analyses have the potential to uncover meaningful brain dynamics that are obscured when averaging across trials. However, low signal-to-noise ratio (SNR) can impede the use of single-trial analyses and decoding methods. In this study, we investigate the applicability of a single-trial approach to decode stimulus modality from magnetoencephalographic (MEG) high frequency activity. In order to classify the auditory versus visual presentation of words, we combine beamformer source reconstruction with the random forest classification method. To enable group level inference, the classification is embedded in an across-subjects framework. We show that single-trial gamma SNR allows for good classification performance (accuracy across subjects: 66.44%). This implies that the characteristics of high frequency activity have a high consistency across trials and subjects. The random forest classifier assigned informational value to activity in both auditory and visual cortex with high spatial specificity. Across time, gamma power was most informative during stimulus presentation. Among all frequency bands, the 75 Hz–95 Hz band was the most informative frequency band in visual as well as in auditory areas. Especially in visual areas, a broad range of gamma frequencies (55 Hz–125 Hz) contributed to the successful classification. Thus, we demonstrate the feasibility of single-trial approaches for decoding the stimulus modality across subjects from high frequency activity and describe the discriminative gamma activity in time, frequency, and space.
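    The classification step can be sketched in a few lines (random toy features standing in for source-reconstructed gamma power; scikit-learn instead of the authors' pipeline, and without the across-subjects embedding):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        # Toy stand-in for single-trial source-space gamma power: trials x voxels, labels 0/1
        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 300))          # 200 trials, 300 source locations
        y = rng.integers(0, 2, size=200)         # 0 = auditory word, 1 = visual word

        clf = RandomForestClassifier(n_estimators=500, random_state=1)
        print(cross_val_score(clf, X, y, cv=5).mean())   # chance level here, since features are random

        # Feature importances indicate which source locations carry discriminative gamma activity
        clf.fit(X, y)
        top_voxels = np.argsort(clf.feature_importances_)[::-1][:10]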
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 63
    Publication Date: 2018-07-27
    Description: by Edvin Fuglebakk, Nathalie Reuter Peripheral membrane proteins bind to biological membranes with remarkable spatial and temporal specificity. They do this without compromising the solubility of the protein, and their binding sites are not easily distinguished. Prototypical peripheral membrane binding sites display a combination of patches of basic and hydrophobic amino acids that are also frequently present on other protein surfaces. The purpose of this contribution is to identify simple but essential components for membrane binding, through structural criteria that distinguish exposed hydrophobes at membrane binding sites from those that are frequently found on any protein surface. We formulate the concepts of protruding hydrophobes and co-insertability and have analysed more than 300 families of proteins that are classified as peripheral membrane binders. We find that this structural motif strongly discriminates the surfaces of membrane-binding and non-binding proteins. Our model constitutes a novel formulation of a structural pattern for membrane recognition and emphasizes the importance of subtle structural properties of hydrophobic membrane binding sites.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 64
    Publication Date: 2018-07-28
    Description: Algorithms, Vol. 11, Pages 112: A Novel Parallel Auto-Encoder Framework for Multi-Scale Data in Civil Structural Health Monitoring Algorithms doi: 10.3390/a11080112 Authors: Ruhua Wang Ling Li Jun Li In this paper, damage detection/identification for a seven-storey steel structure is investigated using vibration signals and deep learning techniques. Vibration characteristics, such as natural frequencies and mode shapes, are captured and utilized as input for a deep learning network, while the output vector represents the structural damage associated with locations. A deep auto-encoder with a sparsity constraint is used for effective feature extraction from the different types of signals, and another deep auto-encoder is used to learn the relationships between the different signals for the final regression. The existing SAF model in a recent research study for the same problem processed all signals in one serial auto-encoder model. That kind of model has the following difficulties: (1) the natural frequencies and mode shapes are in different magnitude scales, and it is not logical to normalize them to the same scale when building the models from training samples; (2) some frequencies and mode shapes may not be related to each other, and it is not fair to use them for dimension reduction together. To tackle the above-mentioned problems for the multi-scale dataset in SHM, a novel parallel auto-encoder framework (Para-AF) is proposed in this paper. It processes the frequency signals and mode shapes separately for feature selection via dimension reduction and then combines these features in relationship learning for regression. Furthermore, we introduce a sparsity constraint in the model reduction stage for performance improvement. Two experiments are conducted for performance evaluation, and our results show the significant advantages of the proposed model in comparison with the existing approaches.
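    The parallel layout can be sketched as follows (PyTorch, with placeholder layer sizes and without the sparsity penalty or auto-encoder pretraining described in the paper):

        import torch
        import torch.nn as nn

        # Parallel idea only: one encoder per signal type, concatenated codes feed a regression head
        class ParaAF(nn.Module):
            def __init__(self, n_freq=10, n_mode=70, code=8, n_locations=7):
                super().__init__()
                self.enc_freq = nn.Sequential(nn.Linear(n_freq, 16), nn.ReLU(), nn.Linear(16, code))
                self.enc_mode = nn.Sequential(nn.Linear(n_mode, 32), nn.ReLU(), nn.Linear(32, code))
                self.regressor = nn.Sequential(nn.Linear(2 * code, 32), nn.ReLU(),
                                               nn.Linear(32, n_locations), nn.Sigmoid())

            def forward(self, freqs, modes):
                z = torch.cat([self.enc_freq(freqs), self.enc_mode(modes)], dim=1)
                return self.regressor(z)           # per-location damage severity in [0, 1]

        model = ParaAF()
        out = model(torch.randn(4, 10), torch.randn(4, 70))   # a batch of four samples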
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 65
    Publication Date: 2018-07-28
    Description: Algorithms, Vol. 11, Pages 113: Improved Parameter Identification Method for Envelope Current Signals Based on Windowed Interpolation FFT and DE Algorithm Algorithms doi: 10.3390/a11080113 Authors: Xiangfeng Su Huaiqing Zhang Lin Chen Ling Qin Lili Yu Envelope current signals are increasingly common in power systems, and their parameter identification is particularly necessary for accurate measurement of electrical energy. In order to analyze an envelope current signal, the harmonic parameters as well as the envelope parameters need to be calculated. The interpolation fast Fourier transform (FFT) is a widely used approach which can estimate the signal frequency with high precision, but it cannot calculate the envelope parameters of the signal. Therefore, this paper proposes an improved method based on windowed interpolation FFT (WIFFT) and differential evolution (DE). The amplitude and phase parameters obtained through WIFFT and the envelope parameters estimated by envelope analysis are optimized using the DE algorithm, which makes full use of the performance advantages of DE. The simulation results show that the proposed method can improve the accuracy of the harmonic parameters and the envelope parameters significantly. In addition, it has good noise robustness and high precision.
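    A compact sketch of the two-step idea on a synthetic envelope current (NumPy/SciPy; the window, signal model and bounds are illustrative choices, not the exact formulation in the paper):

        import numpy as np
        from scipy.optimize import differential_evolution

        fs, n = 5000.0, 4096
        t = np.arange(n) / fs
        # Synthetic envelope current: a 50.2 Hz carrier whose amplitude oscillates slowly at 1.5 Hz
        signal = (2.0 + 0.4 * np.cos(2 * np.pi * 1.5 * t)) * np.cos(2 * np.pi * 50.2 * t + 0.3)

        # Step 1: windowed FFT (Hann) gives a coarse carrier-frequency estimate
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(n)))
        f_coarse = (np.argmax(spectrum[1:]) + 1) * fs / n     # skip the DC bin

        # Step 2: differential evolution refines carrier and envelope parameters jointly
        def residual(p):
            a0, a1, fe, fc, phi = p
            model = (a0 + a1 * np.cos(2 * np.pi * fe * t)) * np.cos(2 * np.pi * fc * t + phi)
            return np.sum((signal - model) ** 2)

        bounds = [(0, 5), (0, 2), (0.1, 5), (f_coarse - 1, f_coarse + 1), (-np.pi, np.pi)]
        result = differential_evolution(residual, bounds, seed=0, maxiter=300)
        print(result.x)    # estimated [a0, a1, envelope frequency, carrier frequency, phase]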
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 66
    Publication Date: 2018-07-28
    Description: by Luigi Acerbi, Kalpana Dokka, Dora E. Angelaki, Wei Ji Ma The precision of multisensory perception improves when cues arising from the same cause are integrated, such as visual and vestibular heading cues for an observer moving through a stationary environment. In order to determine how the cues should be processed, the brain must infer the causal relationship underlying the multisensory cues. In heading perception, however, it is unclear whether observers follow the Bayesian strategy, a simpler non-Bayesian heuristic, or even perform causal inference at all. We developed an efficient and robust computational framework to perform Bayesian model comparison of causal inference strategies, which incorporates a number of alternative assumptions about the observers. With this framework, we investigated whether human observers’ performance in an explicit cause attribution task and an implicit heading discrimination task can be modeled as a causal inference process. In the explicit causal inference task, all subjects accounted for cue disparity when reporting judgments of common cause, although not necessarily all in a Bayesian fashion. By contrast, but in agreement with previous findings, data from the heading discrimination task alone could not rule out that several of the same observers were adopting a forced-fusion strategy, whereby cues are integrated regardless of disparity. Only when we combined evidence from both tasks were we able to rule out forced fusion in the heading discrimination task. Crucially, findings were robust across a number of variants of models and analyses. Our results demonstrate that our proposed computational framework allows researchers to ask complex questions within a rigorous Bayesian framework that accounts for parameter and model uncertainty.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 67
    Publication Date: 2018-07-28
    Description: by Ida Friis, Ilia A. Solov’yov The non-homologous end joining of a DNA double strand break is initiated by the MRE11-NBS1-RAD50 complex, whose subunits are the first three proteins to arrive at the breakage site, thereby making the recruitment time of MRE11, NBS1 and RAD50 essential for cell survival. In the present investigation, the transport of MRE11 and NBS1 from the cytoplasm to the nucleus, which hosts the damaged DNA strand, is hypothesized to be a passive diffusive process. The feasibility of such a mechanism is addressed through theoretical and computational approaches which permit establishing the characteristic recruitment time of MRE11 and NBS1 by the nucleus. A computational model of a cell is constructed from a set of biological parameters and the kinetic Monte Carlo algorithm is used to simulate the diffusing MRE11 and NBS1 particles as a random walk process. To accurately describe the experimental data, it is discovered that MRE11 and NBS1 should start diffusion from significantly different starting positions, which suggests that diffusion might not be the only transport mechanism of repair protein recruitment to the DNA break.
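    The flavour of such a simulation can be conveyed with a much-reduced random-walk sketch (arbitrary geometry and step size, not the paper's parameterized cell model): particles diffuse inside a spherical cell, reflect at the membrane, are absorbed at the nuclear surface, and the mean first-passage time depends strongly on where they start.

        import numpy as np

        def mean_recruitment_time(start_radius, n_particles=50, cell_r=10.0, nucleus_r=3.0,
                                  step=0.2, dt=1e-3, seed=0):
            rng = np.random.default_rng(seed)
            times = []
            for _ in range(n_particles):
                pos = np.array([start_radius, 0.0, 0.0])
                t = 0.0
                while np.linalg.norm(pos) > nucleus_r:      # absorbed at the nuclear envelope
                    direction = rng.normal(size=3)
                    pos = pos + step * direction / np.linalg.norm(direction)
                    r = np.linalg.norm(pos)
                    if r > cell_r:                           # reflect at the cell membrane
                        pos *= cell_r / r
                    t += dt
                times.append(t)
            return np.mean(times)

        # Different starting positions give markedly different mean recruitment times
        print(mean_recruitment_time(4.0), mean_recruitment_time(9.0))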
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 68
    Publication Date: 2018-07-29
    Description: Algorithms, Vol. 11, Pages 114: Revisiting Chameleon Sequences in the Protein Data Bank Algorithms doi: 10.3390/a11080114 Authors: Mihaly Mezei The steady growth of the Protein Data Bank (PDB) suggests the periodic repetition of searches for sequences that form different secondary structures in different protein structures; these are called chameleon sequences. This paper presents a fast O(n log(n)) algorithm for such searches and presents the results on all protein structures in the PDB. The longest such sequence found consists of 20 residues.
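    The search itself is easy to picture with a small hash-based sketch (toy sequences and structure strings, not PDB data; the paper's algorithm achieves the stated O(n log(n)) bound by a different, sorting-based approach):

        from collections import defaultdict

        # Toy (sequence, per-residue secondary structure) pairs standing in for PDB chains
        chains = [
            ("AKVLETAQGH", "HHHHHHHHHH"),
            ("GGAKVLETAQ", "EEEEEEEEEE"),
        ]

        def chameleons(chains, k=6):
            seen = defaultdict(set)
            for seq, ss in chains:
                for i in range(len(seq) - k + 1):
                    seen[seq[i:i + k]].add(ss[i:i + k])
            # fragments observed with more than one secondary structure are chameleon candidates
            return {frag: structs for frag, structs in seen.items() if len(structs) > 1}

        print(chameleons(chains))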
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    Publication Date: 2018-07-29
    Description: Algorithms, Vol. 11, Pages 115: Color-Based Image Retrieval Using Proximity Space Theory Algorithms doi: 10.3390/a11080115 Authors: Jing Wang Lidong Wang Xiaodong Liu Yan Ren Ye Yuan The goal of object retrieval is to rank a set of images by their similarity to a query image. Nowadays, content-based image retrieval is a hot research topic, and color features play an important role in this procedure. However, it is important to establish a measure of image similarity in advance. The contributions of this paper are as follows. Firstly, proximity space theory is utilized to retrieve the images in the database that are relevant to the query image, and the color histogram of an image is used to obtain its top-ranked colors, which can be regarded as the object set. Secondly, the similarity is calculated based on an improved dominance granule structure similarity method. Thus, we propose a color-based image retrieval method using proximity space theory. To test the feasibility of this method, we conducted experiments on the COIL-20 and Corel-1000 image databases. Experimental results demonstrate the effectiveness of the proposed framework and its applications.
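    The colour step can be sketched as follows (NumPy only; the Jaccard overlap at the end is a simple stand-in for the paper's dominance-granule similarity built on proximity space theory):

        import numpy as np

        def top_colors(image, bins=8, keep=10):
            # quantize RGB into bins**3 colours and return the indices of the most frequent ones
            q = (image // (256 // bins)).reshape(-1, 3)
            idx = q[:, 0] * bins * bins + q[:, 1] * bins + q[:, 2]
            hist = np.bincount(idx, minlength=bins ** 3)
            return set(np.argsort(hist)[::-1][:keep])

        def similarity(img_a, img_b):
            a, b = top_colors(img_a), top_colors(img_b)
            return len(a & b) / len(a | b)    # Jaccard overlap of the dominant-colour sets

        rng = np.random.default_rng(0)
        query = rng.integers(0, 256, size=(64, 64, 3))
        other = rng.integers(0, 256, size=(64, 64, 3))
        print(similarity(query, other))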
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    Publication Date: 2018-07-26
    Description: Algorithms, Vol. 11, Pages 111: A Weighted Histogram-Based Tone Mapping Algorithm for CT Images Algorithms doi: 10.3390/a11080111 Authors: David Völgyes Anne Catrine Trægde Martinsen Arne Stray-Pedersen Dag Waaler Marius Pedersen Computed Tomography (CT) images have a high dynamic range, which makes visualization challenging. Histogram equalization methods either use spatially invariant weights or a limited kernel size, owing to the complexity of pairwise contribution calculation. We present a weighted histogram equalization-based tone mapping algorithm which utilizes the Fast Fourier Transform for distance-dependent contribution calculation and uses distance-based weights. The weights follow a power law without a distance-based cut-off. The resulting images have good local contrast without noticeable artefacts. The results are compared to eight popular tone mapping operators.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 71
    Publication Date: 2018-07-25
    Description: by Magdalena San Roman, Andreas Wagner Microorganisms modify their environment by excreting by-products of metabolism, which can create new ecological niches that can help microbial populations diversify. A striking example comes from experimental evolution of genetically identical Escherichia coli populations that are grown in a homogeneous environment with the single carbon source glucose. In such experiments, stable communities of genetically diverse cross-feeding E . coli cells readily emerge. Some cells that consume the primary carbon source glucose excrete a secondary carbon source, such as acetate, that sustains other community members. Few such cross-feeding polymorphisms are known experimentally, because they are difficult to screen for. We studied the potential of bacterial metabolism to create new ecological niches based on cross-feeding. To do so, we used genome scale models of the metabolism of E . coli and metabolisms of similar complexity, to identify unique pairs of primary and secondary carbon sources in these metabolisms. We then combined dynamic flux balance analysis with analytical calculations to identify which pair of carbon sources can sustain a polymorphic cross-feeding community. We identified almost 10,000 such pairs of carbon sources, each of them corresponding to a unique ecological niche. Bacterial metabolism shows an immense potential for the construction of new ecological niches through cross feeding.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    Publication Date: 2018-07-25
    Description: by Chia-Chin Wu, Hannah C. Beird, Jianhua Zhang, P. Andrew Futreal Numerous gene fusions have been uncovered across multiple cancer types. Although the ability to target several of these fusions has led to the development of some successful anti-cancer drugs, most of them are not druggable. Understanding the molecular pathways of a fusion is important in determining its function in oncogenesis and in developing therapeutic strategies for patients harboring the fusion. However, the molecular pathways have been elucidated for only a few fusions, in part because of the labor-intensive nature of the required functional assays. Therefore, we developed a domain-based network approach to infer the pathways of a fusion. Molecular interactions of a fusion are first predicted by using its protein domain composition, and its associated pathways are then inferred from these molecular interactions. We demonstrated the capabilities of this approach by primarily applying it to the well-studied BCR-ABL1 fusion. The approach was also applied to two undruggable fusions in sarcoma, EWS-FLI1 and FUS-DDIT3. We successfully identified known genes and pathways associated with these fusions and satisfactorily validated these predictions using several benchmark sets. The predictions for EWS-FLI1 and FUS-DDIT3 also correlate with the results of high-throughput drug screening. To the best of our knowledge, this is the first approach for inferring the pathways of fusions.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 73
    Published by Public Library of Science (PLoS)
    Publication Date: 2018-08-04
    Description: by Andrew G. McDonald, Keith F. Tipton, Gavin P. Davey Glycosyltransferases are a class of enzymes that catalyse the posttranslational modification of proteins to produce a large number of glycoconjugate acceptors from a limited number of nucleotide-sugar donors. The products of one glycosyltransferase can be the substrates of several other enzymes, causing a combinatorial explosion in the number of possible glycan products. The kinetic behaviour of systems where multiple acceptor substrates compete for a single enzyme is presented, and the case in which high concentrations of an acceptor substrate are inhibitory as a result of abortive complex formation is shown to result in non-Michaelian kinetics that can lead to bistability in an open system. A kinetic mechanism is proposed that is consistent with the available experimental evidence and provides a possible explanation for conflicting observations on the β-1,4-galactosyltransferases. Abrupt switching between steady states in networks of glycosyltransferase-catalysed reactions may account for the observed changes in glycosyl-epitopes in cancer cells. (A textbook substrate-inhibition rate law is sketched after this record.)
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
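    For context, inhibition of an enzyme by high concentrations of its own acceptor substrate through abortive complex formation is often summarised by the textbook substrate-inhibition rate law below; this is only a generic single-substrate form, not the multi-acceptor mechanism proposed in the paper.

    v = \frac{V_{\max}\,[S]}{K_m + [S] + [S]^2/K_i}

    Because the rate v falls off again at large [S], this kind of rate law is one simple source of the non-Michaelian behaviour the abstract refers to.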
  • 74
    Publication Date: 2018-08-07
    Description: Algorithms, Vol. 11, Pages 119: An Opportunistic Network Routing Algorithm Based on Cosine Similarity of Data Packets between Nodes Algorithms doi: 10.3390/a11080119 Authors: Yucheng Lin Zhigang Chen Jia Wu Leilei Wang The mobility of nodes leads to dynamic changes in topology structure, which makes the traditional routing algorithms of a wireless network difficult to apply to the opportunistic network. In view of the problems existing in the process of information forwarding, this paper proposes a routing algorithm based on the cosine similarity of data packets between nodes (cosSim). The cosine distance, a measure commonly used to assess the similarity of text data, is used to calculate the cosine similarity of the data packets held by nodes. Each node's set of data packets is expressed as a vector, which facilitates the calculation of the similarity between nodes. Upper and lower thresholds are defined to filter the similarity values between nodes according to certain rules, finally yielding several relatively reliable transmission paths. Simulation experiments show that, compared with traditional opportunistic network routing algorithms such as the Spray and Wait (S&W) algorithm and the Epidemic algorithm, the cosSim algorithm has a better transmission effect: it not only improves the delivery ratio, but also reduces the network transmission delay and the routing overhead. (A minimal packet-vector sketch appears after this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
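    A minimal Python sketch of the packet-vector idea from the record above, assuming a known global list of packet IDs; the encoding, the threshold values and the forwarding rule are illustrative and not taken from the cosSim paper.

    import numpy as np

    def packet_vector(packet_ids, universe):
        # binary vector marking which packets (from a global ID list) a node carries
        return np.array([1.0 if pid in packet_ids else 0.0 for pid in universe])

    def cosine_similarity(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v / denom) if denom else 0.0

    universe = ["p1", "p2", "p3", "p4", "p5"]    # hypothetical packet IDs
    node_a = {"p1", "p2", "p3"}
    node_b = {"p2", "p3", "p5"}
    sim = cosine_similarity(packet_vector(node_a, universe), packet_vector(node_b, universe))
    lower, upper = 0.2, 0.9                      # illustrative thresholds
    forward = lower <= sim <= upper              # forward only within the threshold band
    print(sim, forward)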
  • 75
    Publication Date: 2018-08-07
    Description: Algorithms, Vol. 11, Pages 120: Probabilistic Interval-Valued Hesitant Fuzzy Information Aggregation Operators and Their Application to Multi-Attribute Decision Making Algorithms doi: 10.3390/a11080120 Authors: Wenying Wu Ying Li Zhiwei Ni Feifei Jin Xuhui Zhu Based on probabilistic interval-valued hesitant fuzzy information aggregation operators, this paper investigates a novel multi-attribute group decision making (MAGDM) model to address the serious loss of information in a hesitant fuzzy information environment. Firstly, the definition of the probabilistic interval-valued hesitant fuzzy set is introduced, and then, using the Archimedean norm, some new probabilistic interval-valued hesitant fuzzy operations are defined. Secondly, based on these operations, the generalized probabilistic interval-valued hesitant fuzzy ordered weighted averaging (GPIVHFOWA) operator and the generalized probabilistic interval-valued hesitant fuzzy ordered weighted geometric (GPIVHFOWG) operator are proposed, and their desirable properties are discussed. We further study their common forms and analyze the relationships among the proposed operators. Finally, a new probabilistic interval-valued hesitant fuzzy MAGDM model is constructed, and the feasibility and effectiveness of the proposed model are verified using an example of supplier selection.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 76
    Publication Date: 2018-06-06
    Description: Algorithms, Vol. 11, Pages 80: Scheduling a Single Machine with Primary and Secondary Objectives Algorithms doi: 10.3390/a11060080 Authors: Nodari Vakhania We study a scheduling problem in which jobs with release times and due dates are to be processed on a single machine. With the primary objective to minimize the maximum job lateness, the problem is strongly NP-hard. We describe a general algorithmic scheme to minimize the maximum job lateness, with the secondary objective to minimize the maximum job completion time. The problem of finding the Pareto-optimal set of feasible solutions with these two objective criteria is strongly NP-hard. We give the dominance properties and conditions under which the Pareto-optimal set can be formed in polynomial time. These properties, together with our general framework, provide the theoretical background, so that the basic framework can be expanded to (exponential-time) implicit enumeration algorithms and polynomial-time approximation algorithms (generating the Pareto sub-optimal frontier with a fair balance between the two objectives). Experimental results available in the literature confirm the practical efficiency of the proposed framework. (A small sketch evaluating the two objectives appears after this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
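    To make the two objective criteria concrete, the Python sketch below evaluates the maximum job lateness and the maximum completion time (makespan) of a given processing order on a single machine with release times and due dates; the instance and the orders are illustrative, and this is not the paper's algorithmic scheme.

    def evaluate_schedule(jobs, order):
        # jobs: list of (release, processing, due); order: indices giving the processing sequence
        t, l_max, c_max = 0, float("-inf"), 0
        for j in order:
            r, p, d = jobs[j]
            t = max(t, r) + p          # wait for the release time, then process the job
            l_max = max(l_max, t - d)  # lateness of this job
            c_max = max(c_max, t)
        return l_max, c_max

    jobs = [(0, 3, 4), (1, 2, 5), (2, 1, 4)]   # hypothetical instance
    print(evaluate_schedule(jobs, [0, 1, 2]))
    print(evaluate_schedule(jobs, [0, 2, 1]))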
  • 77
    Publication Date: 2018-06-15
    Description: Algorithms, Vol. 11, Pages 85: ILC with Initial State Learning for Fractional Order Linear Distributed Parameter Systems Algorithms doi: 10.3390/a11060085 Authors: Yong-Hong Lan Zhe-Min Cui This paper presents a second order P-type iterative learning control (ILC) scheme with initial state learning for a class of fractional order linear distributed parameter systems. First, by analyzing the control and learning processes, a discrete system for P-type ILC is established, and the ILC design problem is then converted to a stability problem for such a discrete system. Next, a sufficient condition for the convergence of the control input and the tracking errors is obtained by introducing a new norm and using the generalized Gronwall inequality, which is less conservative than the existing one. Finally, the validity of the proposed method is verified by a numerical example.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 78
    Publication Date: 2018-06-18
    Description: Algorithms, Vol. 11, Pages 86: Performance Optimal PI controller Tuning Based on Integrating Plus Time Delay Models Algorithms doi: 10.3390/a11060086 Authors: Christer Dalen David Di Ruscio A method for tuning PI controller parameters given a prescribed maximum time delay error or a relative time delay error is presented. The method is based on integrator plus time delay models. The integral time constant is linear in the relative time delay error, and the proportional constant is inversely proportional to the relative time delay error. The keystone of the method is the method product parameter, i.e., the product of the PI controller proportional constant, the integral time constant, and the velocity gain of the integrator plus time delay model. The method product parameter is found to be constant for various PI controller tuning methods. Optimal suggestions are given for choosing the method product parameter, i.e., optimal such that the integrated absolute error or, more interestingly, the Pareto performance objective (i.e., integrated absolute error for combined step changes in output and input disturbances) is minimised. Variants of the presented tuning method are demonstrated for tuning PI controllers for motivated (possible) higher-order process model examples, i.e., the presented method is combined with the model reduction step (process–reaction curve) in Ziegler–Nichols.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 79
    Publication Date: 2018-06-19
    Description: by Wei-Ven Tee, Enrico Guarnera, Igor N. Berezovsky The omnipresence of allosteric regulation together with the fundamental role of structural dynamics in this phenomenon have initiated great interest in the detection of regulatory exosites and the design of corresponding effectors. However, despite a general consensus on the key role of dynamics, most of the earlier efforts on the prediction of allosteric sites are heavily crippled by the static nature of the underlying methods, which are either structure-based approaches seeking deep surface pockets typical for “traditional” orthosteric drugs or sequence-based techniques exploiting the conservation of protein sequences. Because of the critical role of global protein dynamics in allosteric signaling, we investigate the hypothesis of reversibility in allosteric communication, according to which allosteric sites can be detected via the perturbation of the functional sites. The reversibility hypothesis is tested here using our structure-based perturbation model of allostery, which allows one to analyze the causality and energetics of allosteric communication. We validate the “reverse perturbation” hypothesis and its predictive power on a set of classical allosteric proteins and then on an independent, extended benchmark set. We also show that, in addition to known allosteric sites, the perturbation of the functional sites unravels rather extended protein regions, which can host latent regulatory exosites. These protein parts that are dynamically coupled with functional sites can also be used for inducing and tuning allosteric communication, and an exhaustive exploration of the per-residue contributions to allosteric effects can eventually lead to the optimal modulation of protein activity. The site-effector interactions necessary for a specific mode and level of allosteric communication can be fine-tuned by adjusting the site’s structure to an available effector molecule and by the design or selection of an appropriate ligand.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 80
    Published by Public Library of Science (PLoS)
    Publication Date: 2018-06-19
    Description: by Nathaniel Mon Père, Tom Lenaerts, Jorge M. Pacheco, David Dingli Paroxysmal nocturnal hemoglobinuria (PNH) is an acquired clonal blood disorder characterized by hemolysis and a high risk of thrombosis, which is due to a deficiency in several cell surface proteins that prevent complement activation. Its origin has been traced to a somatic mutation in the PIG-A gene within hematopoietic stem cells (HSC). However, to date the question of how this mutant clone expands in size to contribute significantly to hematopoiesis remains under debate. One hypothesis posits the existence of a selective advantage of PIG-A mutated cells due to an immune mediated attack on normal HSC, but the evidence supporting this hypothesis is inconclusive. An alternative (and simpler) explanation attributes clonal expansion to neutral drift, in which case selection neither favours nor inhibits expansion of PIG-A mutated HSC. Here we examine the implications of the neutral drift model by numerically evolving a Markov chain for the probabilities of all possible outcomes, and investigate the possible occurrence and evolution, within this framework, of multiple independently arising clones within the HSC pool. Predictions of the model agree well with the known incidence of the disease and average age at diagnosis. Notwithstanding the slight difference in clonal expansion rates between our results and those reported in the literature, our model results lead to a relative stability of clone size when averaging multiple cases, in accord with what has been observed in human trials. The probability of a patient harbouring a second clone in the HSC pool was found to be extremely low (~10^−8). Thus our results suggest that in clinical cases of PNH where two independent clones of mutant cells are observed, only one of those is likely to have originated in the HSC pool. (A toy neutral-drift Markov chain is sketched after this record.)
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
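    As a toy version of the neutral-drift idea, the Python sketch below numerically evolves the probability vector of a neutral Moran process started from a single mutant stem cell; the pool size, the number of steps and the Moran-style transition probabilities are illustrative and far simpler than the paper's Markov chain.

    import numpy as np

    def neutral_moran_matrix(N):
        # mutant count i moves to i-1 or i+1 with equal probability i*(N-i)/N^2
        P = np.zeros((N + 1, N + 1))
        for i in range(N + 1):
            step = i * (N - i) / N ** 2
            P[i, i] = 1.0 - 2.0 * step
            if i < N:
                P[i, i + 1] = step
            if i > 0:
                P[i, i - 1] = step
        return P

    N = 50
    P = neutral_moran_matrix(N)
    dist = np.zeros(N + 1)
    dist[1] = 1.0                  # start from a single mutant cell
    for _ in range(10000):         # evolve the probability vector over time steps
        dist = dist @ P
    print(dist[0], dist[N])        # probabilities of clone loss vs. fixation so far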
  • 81
    Published by Public Library of Science (PLoS)
    Publication Date: 2018-06-27
    Description: by Fahad Alhasoun, Faisal Aleissa, May Alhazzani, Luis G. Moyano, Claudio Pinhanez, Marta C. González This paper presents a data analysis framework to uncover relationships between health conditions, age and sex for a large population of patients. We study a massive heterogeneous sample of 1.7 million patients in Brazil, comprising 47 million health records with detailed medical conditions for visits to medical facilities over a period of 17 months. The findings suggest that medical conditions can be grouped into clusters that share very distinctive densities in the ages of the patients. For each cluster, we further present the ICD-10 chapters within it. Finally, we relate the findings to comorbidity networks, uncovering the relation of the discovered clusters of age densities to the comorbidity network literature.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    Publication Date: 2018-06-28
    Description: by Giri P. Krishnan, Burke Q. Rosen, Jen-Yung Chen, Lyle Muller, Terrence J. Sejnowski, Sydney S. Cash, Eric Halgren, Maxim Bazhenov Sleep spindles are brief oscillatory events during non-rapid eye movement (NREM) sleep. Spindle density and synchronization properties are different in MEG versus EEG recordings in humans and also vary with learning performance, suggesting spindle involvement in memory consolidation. Here, using computational models, we identified network mechanisms that may explain differences in spindle properties across cortical structures. First, we report that differences in spindle occurrence between MEG and EEG data may arise from the contrasting properties of the core and matrix thalamocortical systems. The matrix system, projecting superficially, has wider thalamocortical fanout compared to the core system, which projects to middle layers, and requires the recruitment of a larger population of neurons to initiate a spindle. This property was sufficient to explain lower spindle density and higher spatial synchrony of spindles in the superficial cortical layers, as observed in the EEG signal. In contrast, spindles in the core system occurred more frequently but less synchronously, as observed in the MEG recordings. Furthermore, consistent with human recordings, in the model, spindles occurred independently in the core system but the matrix system spindles commonly co-occurred with core spindles. We also found that the intracortical excitatory connections from layer III/IV to layer V promote spindle propagation from the core to the matrix system, leading to widespread spindle activity. Our study predicts that plasticity of intra- and inter-cortical connectivity can potentially be a mechanism for increased spindle density as has been observed during learning.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 83
    Publication Date: 2018-06-29
    Description: Algorithms, Vol. 11, Pages 94: Tensor Completion Based on Triple Tubal Nuclear Norm Algorithms doi: 10.3390/a11070094 Authors: Dongxu Wei Andong Wang Xiaoqin Feng Boyu Wang Bo Wang Many tasks in computer vision suffer from missing values in tensor data, i.e., multi-way data arrays. The recently proposed tensor tubal nuclear norm (TNN) has shown superiority in imputing missing values in 3D visual data, like color images and videos. However, TNN only exploits tube redundancy (tubes often carry temporal/channel information) in a circulant way while preserving the row and column relationships (rows and columns often carry spatial information). In this paper, a new tensor norm named the triple tubal nuclear norm (TriTNN) is proposed to simultaneously exploit tube, row and column redundancy in a circulant way by using a weighted sum of three TNNs. Thus, more spatial-temporal information can be mined. Further, a TriTNN-based tensor completion model with an ADMM solver is developed. Experiments on color images, videos and LiDAR datasets show the superiority of the proposed TriTNN against state-of-the-art nuclear norm-based tensor norms.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Publication Date: 2018-06-29
    Description: Algorithms, Vol. 11, Pages 93: Layered Graphs: Applications and Algorithms Algorithms doi: 10.3390/a11070093 Authors: Bhadrachalam Chitturi Srijith Balachander Sandeep Satheesh Krithic Puthiyoppil The computation of distances between strings has applications in molecular biology, music theory and pattern recognition. One such measure, called short reversal distance, has applications in evolutionary distance computation. It has been shown that this problem can be reduced to the computation of a maximum independent set on the corresponding graph that is constructed from the given input strings. The constructed graphs primarily fall into a class that we call layered graphs. In a layered graph, each layer refers to a subgraph containing at most some k vertices. The inter-layer edges are restricted to the vertices in adjacent layers. We study the MIS, MVC, MDS, MCV and MCD problems on layered graphs, where MIS computes the maximum independent set; MVC computes the minimum vertex cover; MDS computes the minimum dominating set; MCV computes the minimum connected vertex cover; and MCD computes the minimum connected dominating set. MIS, MVC and MDS are computed in polynomial time if k = Θ(log|V|). MCV and MCD are computed in polynomial time if k = O((log|V|)^α), where α < 1. If k = Θ((log|V|)^(1+ϵ)), for ϵ > 0, then MIS, MVC and MDS are computed in quasi-polynomial time. If k = Θ(log|V|), then MCV and MCD are computed in quasi-polynomial time. (A layer-by-layer dynamic-programming sketch for MIS appears after this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
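    A small sketch of why bounded layer width helps: the Python dynamic program below computes a maximum independent set layer by layer, enumerating the independent subsets of each layer (at most 2^k of them) and keeping the best compatible extension from the previous layer, so the running time stays polynomial when k = O(log|V|). The graph encoding and the tiny example are illustrative, not taken from the paper.

    from itertools import combinations

    def layered_mis(layers, edges):
        # layers: list of vertex lists; edges: set of frozensets, within a layer or between adjacent layers
        def independent_subsets(layer):
            subs = []
            for r in range(len(layer) + 1):
                for comb in combinations(layer, r):
                    if all(frozenset((u, v)) not in edges for u in comb for v in comb if u != v):
                        subs.append(frozenset(comb))
            return subs

        def compatible(a, b):
            return all(frozenset((u, v)) not in edges for u in a for v in b)

        prev = {s: len(s) for s in independent_subsets(layers[0])}
        for layer in layers[1:]:
            cur = {}
            for s in independent_subsets(layer):
                best = max((val for p, val in prev.items() if compatible(p, s)), default=None)
                if best is not None:
                    cur[s] = best + len(s)
            prev = cur
        return max(prev.values())

    layers = [[1, 2], [3, 4]]                                          # a path 1-2-3-4 split into two layers
    edges = {frozenset((1, 2)), frozenset((2, 3)), frozenset((3, 4))}
    print(layered_mis(layers, edges))                                  # -> 2, e.g. {1, 4}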
  • 85
    Publication Date: 2018-11-28
    Description: by Lina Meinecke, Praveer P. Sharma, Huijing Du, Lei Zhang, Qing Nie, Thomas F. Schilling How does pattern formation occur accurately when confronted with tissue growth and stochastic fluctuations (noise) in gene expression? Dorso-ventral (D-V) patterning of the mandibular arch specifies upper versus lower jaw skeletal elements through a combination of Bone morphogenetic protein (Bmp), Endothelin-1 (Edn1), and Notch signaling, and this system is highly robust. We combine NanoString experiments of early D-V gene expression with live imaging of arch development in zebrafish to construct a computational model of the D-V mandibular patterning network. The model recapitulates published genetic perturbations in arch development. Patterning is most sensitive to changes in Bmp signaling, and the temporal order of gene expression modulates the response of the patterning network to noise. Thus, our integrated systems biology approach reveals non-intuitive features of the complex signaling system crucial for craniofacial development, including novel insights into roles of gene expression timing and stochasticity in signaling and gene regulation.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 86
    Published by Public Library of Science (PLoS)
    Publication Date: 2018-12-01
    Description: by Daniel Jimenez-Carretero, Vahid Abrishami, Laura Fernández-de-Manuel, Irene Palacios, Antonio Quílez-Álvarez, Alberto Díez-Sánchez, Miguel A. del Pozo, María C. Montoya Toxicity is an important factor in failed drug development, and its efficient identification and prediction is a major challenge in drug discovery. We have explored the potential of microscopy images of fluorescently labeled nuclei for the prediction of toxicity based on nucleus pattern recognition. Deep learning algorithms obtain abstract representations of images through an automated process, allowing them to efficiently classify complex patterns, and have become the state of the art in machine learning for computer vision. Here, deep convolutional neural networks (CNN) were trained to predict toxicity from images of DAPI-stained cells pre-treated with a set of drugs with differing toxicity mechanisms. Different cropping strategies were used for training CNN models, and the nuclei-cropping-based Tox_CNN model outperformed other models in classifying cells according to health status. Tox_CNN allowed automated extraction of feature maps that clustered compounds according to mechanism of action. Moreover, fully automated region-based CNNs (RCNN) were implemented to detect and classify nuclei, providing per-cell toxicity prediction from raw screening images. We validated both Tox_(R)CNN models for detection of pre-lethal toxicity from nuclei images, which proved to be more sensitive and have broader specificity than established toxicity readouts. These models predicted toxicity of drugs with mechanisms of action other than those they had been trained for and were successfully transferred to other cell assays. The Tox_(R)CNN models thus provide robust, sensitive, and cost-effective tools for in vitro screening of drug-induced toxicity. These models can be adopted for compound prioritization in drug screening campaigns, and could thereby increase the efficiency of drug discovery.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 87
    Publication Date: 2018-12-01
    Description: by Joanne C. Van Slooten, Sara Jahfari, Tomas Knapen, Jan Theeuwes Cognition can reveal itself in the pupil, as latent cognitive processes map onto specific pupil responses. For instance, the pupil dilates when we make decisions, and these pupil size fluctuations reflect decision-making computations during and after a choice. Surprisingly little is known, however, about how pupil responses relate to decisions driven by the learned value of stimuli. This understanding is important, as most real-life decisions are guided by the outcomes of earlier choices. The goal of this study was to investigate which cognitive processes the pupil reflects during value-based decision-making. We used a reinforcement learning task to study pupil responses during value-based decisions and subsequent decision evaluations, employing computational modeling to quantitatively describe the underlying cognitive processes. We found that the pupil closely tracks reinforcement learning processes independently across participants and across trials. Prior to choice, the pupil dilated as a function of trial-by-trial fluctuations in value beliefs about the to-be-chosen option and predicted an individual’s tendency to exploit high value options. After feedback, a biphasic pupil response was observed, the amplitude of which correlated with participants’ learning rates. Furthermore, across trials, early feedback-related dilation scaled with value uncertainty, whereas later constriction scaled with signed reward prediction errors. These findings show that pupil size fluctuations can provide detailed information about the computations underlying value-based decisions and the subsequent updating of value beliefs. As these processes are affected in a host of psychiatric disorders, our results indicate that pupillometry can be used as an accessible tool to non-invasively study the processes underlying ongoing reinforcement learning in the clinic. (A minimal delta-rule value-update sketch appears after this record.)
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
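    For readers unfamiliar with the modelling terms in the record above, the Python sketch below shows the simplest delta-rule value update, where a value belief is revised by a learning rate times the signed reward prediction error on each trial; the learning rate and reward sequence are illustrative, and the paper's computational model is richer than this.

    def value_update_trace(rewards, alpha=0.3):
        # returns the value belief and the signed reward prediction error after every trial
        v, beliefs, rpes = 0.0, [], []
        for r in rewards:
            rpe = r - v            # signed reward prediction error
            v += alpha * rpe       # update the value belief
            beliefs.append(v)
            rpes.append(rpe)
        return beliefs, rpes

    beliefs, rpes = value_update_trace([1, 1, 0, 1, 0, 0, 1])
    print(beliefs)
    print(rpes)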
  • 88
    Publication Date: 2018-12-01
    Description: by Wei Wang, Shanzhuo Zhang, Haibo Ni, Clifford J. Garratt, Mark R. Boyett, Jules C. Hancox, Henggui Zhang Cardiac electrical alternans (CEA), manifested as T-wave alternans in the ECG, is a clinical biomarker for predicting cardiac arrhythmias and sudden death. However, the mechanism underlying the spontaneous transition from CEA to arrhythmias remains incompletely elucidated. In this study, multiscale rabbit ventricular models were used to study the transition and a potential role of I Na in perpetuating such a transition. It was shown that CEA evolved into either concordant or discordant action potential (AP) conduction alternans in a homogeneous one-dimensional tissue model, depending on tissue AP duration and conduction velocity (CV) restitution properties. Discordant alternans was able to cause conduction failure in the model, which was promoted by an impaired sodium channel with either a reduced or increased channel current. In a two-dimensional homogeneous tissue model, a combined effect of rate- and curvature-dependent CV broke up alternating wavefronts at localised points, facilitating a spontaneous transition from CEA to re-entry. Tissue inhomogeneity or anisotropy further promoted break-up of re-entry, leading to multiple wavelets. Similar observations have also been made in human atrial cellular and tissue models. In conclusion, our results identify a mechanism by which CEA spontaneously evolves into re-entry without a requirement for premature ventricular complexes or pre-existing tissue heterogeneities, and demonstrate the important pro-arrhythmic role of impaired sodium channel activity. These findings are model-independent and have potential human relevance.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 89
    Publication Date: 2018-12-05
    Description: by Peter Neri Contrast is the most fundamental property of images. Consequently, any comprehensive model of biological vision must incorporate this attribute and provide a veritable description of its impact on visual perception. Current theoretical and computational models predict that vision should modify its characteristics at low contrast: for example, it should become broader (more lowpass) to protect from noise, as often demonstrated by individual neurons. We find that the opposite is true for human discrimination of elementary image elements: vision becomes sharper, not broader, as contrast approaches threshold levels. Furthermore, it suffers from increased internal variability at low contrast and it transitions from a surprisingly linear regime at high contrast to a markedly nonlinear processing mode in the low-contrast range. These characteristics are hard-wired in that they happen on a single trial without memory or expectation. Overall, the empirical results urge caution when attempting to interpret human vision from the standpoint of optimality and related theoretical constructs. Direct measurements of this phenomenon indicate that the actual constraints derive from intrinsic architectural features, such as the co-existence of complex-cell-like and simple-cell-like components. Small circuits built around these elements can indeed account for the empirical results, but do not appear to operate in a manner that conforms to optimality even approximately. More generally, our results provide a compelling demonstration of how far we still are from securing an adequate computational account of the most basic operations carried out by human vision.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 90
    Publication Date: 2018-12-08
    Description: by Mengrou Shan, David Dai, Arunodai Vudem, Jeffrey D. Varner, Abraham D. Stroock Cancer metabolism has received renewed interest as a potential target for cancer therapy. In this study, we use a multi-scale modeling approach to interrogate the implications of three metabolic scenarios of potential clinical relevance: the Warburg effect, the reverse Warburg effect and glutamine addiction. At the intracellular level, we construct a network of central metabolism and perform flux balance analysis (FBA) to estimate metabolic fluxes; at the cellular level, we exploit this metabolic network to calculate parameters for a coarse-grained description of cellular growth kinetics; and at the multicellular level, we incorporate these kinetic schemes into the cellular automata of an agent-based model (ABM), iDynoMiCS. This ABM evaluates the reaction-diffusion of the metabolites, cellular division and motion over a simulation domain. Our multi-scale simulations suggest that the Warburg effect provides a growth advantage to the tumor cells under resource limitation. However, we identify a non-monotonic dependence of growth rate on the strength of the glycolytic pathway. On the other hand, the reverse Warburg scenario provides an initial growth advantage in tumors that originate deeper in the tissue. The metabolic profile of stromal cells considered in this scenario allows more oxygen to reach the tumor cells in the deeper tissue and thus promotes tumor growth at earlier stages. Lastly, we suggest that glutamine addiction does not confer a selective advantage to tumor growth with glutamine acting as a carbon source in the tricarboxylic acid (TCA) cycle; any advantage of glutamine uptake must come through other pathways not included in our model (e.g., as a nitrogen donor). Our analysis illustrates the importance of accounting explicitly for the spatial and temporal evolution of the tumor microenvironment in the interpretation of metabolic scenarios and hence provides a basis for further studies, including evaluation of specific therapeutic strategies that target metabolism.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 91
    Published by Public Library of Science (PLoS)
    Publication Date: 2018-12-14
    Description: by Aleksandra Tarkowska, Denise Carvalho-Silva, Charles E. Cook, Edd Turner, Robert D. Finn, Andrew D. Yates
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 92
    Publication Date: 2018-08-02
    Description: Algorithms, Vol. 11, Pages 116: A Robust and Energy-Efficient Weighted Clustering Algorithm on Mobile Ad Hoc Sensor Networks † Algorithms doi: 10.3390/a11080116 Authors: Huamei Qi Fengqi Liu Tailong Xiao Jiang Su In an Ad hoc sensor network, nodes have the characteristics of limited battery energy, self-organization and low mobility. Due to the mobility and heterogeneity of the energy consumption in the hierarchical network, the cluster head and topology change dynamically. Therefore, topology control and energy consumption are becoming critical to enhancing the stability and prolonging the lifetime of the network. In order to improve the survivability of the Ad hoc network effectively, this paper proposes a new algorithm named the robust, energy-efficient weighted clustering algorithm (RE2WCA). To homogenize the energy consumption, the proposed clustering algorithm takes the residual energy and group mobility into consideration while restricting the minimum number of iterations. In addition, a distributed fault detection algorithm and a cluster head backup mechanism are presented to achieve periodic and real-time topology maintenance and enhance the robustness of the network. The network is analyzed and simulations are performed to compare the performance of this new clustering algorithm with similar algorithms in terms of cluster characteristics, lifetime, throughput and energy consumption of the network. The results show that the proposed algorithm provides better performance than the others. (A toy cluster-head weighting sketch appears after this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
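    A toy Python illustration of a combined cluster-head score that favours high residual energy and low relative mobility; the weighting coefficients, the normalisation and the node data are illustrative and do not reproduce the RE2WCA weight definition.

    def cluster_head_scores(nodes, w_energy=0.6, w_mobility=0.4):
        # score each node by normalised residual energy and (inverted) normalised mobility
        max_e = max(n["energy"] for n in nodes)
        max_m = max(n["mobility"] for n in nodes) or 1.0
        return {
            n["id"]: w_energy * (n["energy"] / max_e) + w_mobility * (1.0 - n["mobility"] / max_m)
            for n in nodes
        }

    nodes = [
        {"id": "A", "energy": 0.9, "mobility": 0.1},
        {"id": "B", "energy": 0.5, "mobility": 0.4},
        {"id": "C", "energy": 0.7, "mobility": 0.9},
    ]
    scores = cluster_head_scores(nodes)
    head = max(scores, key=scores.get)   # the highest-scoring node becomes the cluster head
    print(scores, head)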
  • 93
    Publication Date: 2018-07-25
    Description: by Michelle R. Greene, Bruce C. Hansen Visual scene category representations emerge very rapidly, yet the computational transformations that enable such invariant categorizations remain elusive. Deep convolutional neural networks (CNNs) perform visual categorization at near human-level accuracy using a feedforward architecture, providing neuroscientists with the opportunity to assess one successful series of representational transformations that enable categorization in silico. The goal of the current study is to assess the extent to which sequential scene category representations built by a CNN map onto those built in the human brain as assessed by high-density, time-resolved event-related potentials (ERPs). We found correspondence both over time and across the scalp: earlier (0–200 ms) ERP activity was best explained by early CNN layers at all electrodes. Although later activity at most electrode sites corresponded to earlier CNN layers, activity in right occipito-temporal electrodes was best explained by the later, fully-connected layers of the CNN around 225 ms post-stimulus, along with similar patterns in frontal electrodes. Taken together, these results suggest that scene category representations emerge through a dynamic interplay between early activity over occipital electrodes and later activity over temporal and frontal electrodes.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 94
    Published by Public Library of Science (PLoS)
    Publication Date: 2018-07-25
    Description: by William J. M. Probert, Chris P. Jewell, Marleen Werkman, Christopher J. Fonnesbeck, Yoshitaka Goto, Michael C. Runge, Satoshi Sekiguchi, Katriona Shea, Matt J. Keeling, Matthew J. Ferrari, Michael J. Tildesley In the event of a new infectious disease outbreak, mathematical and simulation models are commonly used to inform policy by evaluating which control strategies will minimize the impact of the epidemic. In the early stages of such outbreaks, substantial parameter uncertainty may limit the ability of models to provide accurate predictions, and policymakers do not have the luxury of waiting for data to alleviate this state of uncertainty. For policymakers, however, it is the selection of the optimal control intervention in the face of uncertainty, rather than accuracy of model predictions, that is the measure of success that counts. We simulate the process of real-time decision-making by fitting an epidemic model to observed, spatially-explicit, infection data at weekly intervals throughout two historical outbreaks of foot-and-mouth disease, in the UK in 2001 and in Miyazaki, Japan in 2010, and compare forward simulations of the impact of switching to an alternative control intervention at the time point in question. These are compared to policy recommendations generated in hindsight using data from the entire outbreak, thereby comparing the best we could have done at the time with the best we could have done in retrospect. Our results show that the control policy that would have been chosen using all the data is also identified from an early stage in an outbreak using only the available data, despite high variability in projections of epidemic size. Critically, we find that it is an improved understanding of the locations of infected farms, rather than improved estimates of transmission parameters, that drives improved prediction of the relative performance of control interventions. However, the ability to estimate undetected infectious premises is a function of uncertainty in the transmission parameters. Here, we demonstrate the need for both real-time model fitting and generating projections to evaluate alternative control interventions throughout an outbreak. Our results highlight the value of using models at outbreak onset to inform policy and the importance of state-dependent interventions that adapt in response to additional information throughout an outbreak.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 95
    Published by Public Library of Science (PLoS)
    Publication Date: 2018-08-04
    Description: by Margaret J. Tse, Brian K. Chu, Cameron P. Gallivan, Elizabeth L. Read Stochastic simulation has been a powerful tool for studying the dynamics of gene regulatory networks, particularly in terms of understanding how cell-phenotype stability and fate-transitions are impacted by noisy gene expression. However, gene networks often have dynamics characterized by multiple attractors. Stochastic simulation is often inefficient for such systems, because most of the simulation time is spent waiting for rare, barrier-crossing events to occur. We present a rare-event simulation-based method for computing epigenetic landscapes and phenotype-transitions in metastable gene networks. Our computational pipeline was inspired by studies of metastability and barrier-crossing in protein folding, and provides an automated means of computing and visualizing essential stationary and dynamic information that is generally inaccessible to conventional simulation. Applied to a network model of pluripotency in Embryonic Stem Cells, our simulations revealed rare phenotypes and approximately Markovian transitions among phenotype-states, occurring with a broad range of timescales. The relative probabilities of phenotypes and the transition paths linking pluripotency and differentiation are sensitive to global kinetic parameters governing transcription factor-DNA binding kinetics. Our approach significantly expands the capability of stochastic simulation to investigate gene regulatory network dynamics, which may help guide rational cell reprogramming strategies. Our approach is also generalizable to other types of molecular networks and stochastic dynamics frameworks.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 96
    Publication Date: 2018-08-08
    Description: by James T. Yurkovich, Miguel A. Alcantar, Zachary B. Haiman, Bernhard O. Palsson Allosteric regulation has traditionally been described by mathematically-complex allosteric rate laws in the form of ratios of polynomials derived from the application of simplifying kinetic assumptions. Alternatively, an approach that explicitly describes all known ligand-binding events requires no simplifying assumptions while allowing for the computation of enzymatic states. Here, we employ such a modeling approach to examine the “catalytic potential” of an enzyme—an enzyme’s capacity to catalyze a biochemical reaction. The catalytic potential is the fundamental result of multiple ligand-binding events that represents a “tug of war” among the various regulators and substrates within the network. This formalism allows for the assessment of interacting allosteric enzymes and development of a network-level understanding of regulation. We first define the catalytic potential and use it to characterize the response of three key kinases (hexokinase, phosphofructokinase, and pyruvate kinase) in human red blood cell glycolysis to perturbations in ATP utilization. Next, we examine the sensitivity of the catalytic potential by using existing personalized models, finding that the catalytic potential allows for the identification of subtle but important differences in how individuals respond to such perturbations. Finally, we explore how the catalytic potential can help to elucidate how enzymes work in tandem to maintain a homeostatic state. Taken together, this work provides an interpretation and visualization of the dynamic interactions and network-level effects of interacting allosteric enzymes.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 97
    Publication Date: 2018-08-08
    Description: by Cesar A. O. Coelho, Tatiana L. Ferreira, Juliana C. Kramer-Soares, João R. Sato, Maria Gabriela M. Oliveira Hippocampal damage results in profound retrograde, but no anterograde, amnesia in contextual fear conditioning (CFC). Although the content learned in the latter case has been discussed, alternative regions supporting CFC learning were seldom proposed and never empirically addressed. Here, we employed network analysis of pCREB expression quantified from brain slices of rats with dorsal hippocampal lesion (dHPC) after they underwent a CFC session. Using inter-regional correlations of pCREB-positive nuclei between brain regions, we modelled functional networks using different thresholds. The dHPC network showed small-world topology, equivalent to the SHAM (control) network. However, diverging hubs were identified in each network. In a direct comparison, hubs in both networks showed consistently higher centrality values compared to the other network. Further, the distribution of correlation coefficients was different between the groups, with most significantly stronger correlation coefficients belonging to the SHAM network. These results suggest that the dHPC network engaged in CFC learning is partially different and engages alternative hubs. We next tested whether pre-training lesions of dHPC and one of the new dHPC network hubs (perirhinal, Per; or dysgranular retrosplenial, RSC, cortices) would impair CFC. Only dHPC-RSC, but not dHPC-Per, lesions impaired CFC. Interestingly, only the RSC showed a consistently higher centrality in the dHPC network, suggesting that the increased centrality reflects an increased functional dependence on the RSC. Our results provide evidence that, without the hippocampus, the RSC, an anatomically central region in the medial temporal lobe memory system, might support CFC learning and memory.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 98
    Publication Date: 2018-08-07
    Description: by Tianwei Yu Dynamic correlations are pervasive in high-throughput data. Large numbers of gene pairs can change their correlation patterns in response to observed/unobserved changes in physiological states. Finding changes in correlation patterns can reveal important regulatory mechanisms. Currently there is no method that can effectively detect global dynamic correlation patterns in a dataset. Given the challenging nature of the problem, the currently available methods use genes as surrogate measurements of physiological states, which cannot faithfully represent true underlying biological signals. In this study we develop a new method that directly identifies strong latent dynamic correlation signals from the data matrix, named DCA: Dynamic Correlation Analysis. At the center of the method is a new metric for the identification of pairs of variables that are highly likely to be dynamically correlated, without knowing the underlying physiological states that govern the dynamic correlation. We validate the performance of the method with extensive simulations. We applied the method to three real datasets: a single-cell RNA-seq dataset, a bulk RNA-seq dataset, and a microarray gene expression dataset. In all three datasets, the method reveals novel latent factors with clear biological meaning, bringing new insights into the data. (A simple dynamic-correlation score is sketched after this record.)
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
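    As a much simpler relative of the notion of dynamic correlation, the classical liquid-association score in the Python sketch below measures how strongly the correlation between two standardised variables varies with the level of a third, known conditioning variable; the paper's DCA method instead extracts latent conditioning signals directly from the data matrix, which this sketch does not attempt.

    import numpy as np

    def liquid_association(x, y, z):
        # mean triple product of standardised variables; large when the x-y correlation tracks z
        sx = (x - x.mean()) / x.std()
        sy = (y - y.mean()) / y.std()
        sz = (z - z.mean()) / z.std()
        return float(np.mean(sx * sy * sz))

    rng = np.random.default_rng(2)
    z = rng.normal(size=2000)                           # hidden "physiological state"
    x = rng.normal(size=2000)
    y = np.sign(z) * x + 0.5 * rng.normal(size=2000)    # the x-y correlation flips sign with z
    print(liquid_association(x, y, z))                  # clearly positive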
  • 99
    Publication Date: 2018-08-10
    Description: Algorithms, Vol. 11, Pages 121: Stacked-GRU Based Power System Transient Stability Assessment Method Algorithms doi: 10.3390/a11080121 Authors: Feilai Pan Jun Li Bendong Tan Ciling Zeng Xinfan Jiang Li Liu Jun Yang With the interconnection between large power grids, the issue of security and stability has become increasingly prominent. At present, data-driven power system adaptive transient stability assessment methods have achieved excellent performance by balancing speed and accuracy, but their construction is complicated and their parameters are difficult to obtain. This paper proposes a stacked-GRU (Gated Recurrent Unit)-based transient stability intelligent assessment method, which builds a stacked-GRU model based on time-dependent parameter sharing and spatial stacking. Using the time series data recorded after a power system failure, offline training is performed to obtain the optimal parameters of the stacked GRU. In online application, the assessment is made within a confidence framework. The performance of the proposed adaptive transient stability assessment method is investigated on the New England power system. Simulation results show that the proposed model realizes reliable and accurate assessment of transient stability, and it has the advantage of a short assessment time and a less complex model structure, leaving time for emergency control. (A minimal stacked-GRU classifier sketch appears after this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
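    A minimal PyTorch sketch of a stacked-GRU classifier over post-fault time series, with a softmax confidence gate standing in for the confidence framework mentioned in the record above; the layer sizes, number of stacked layers, input dimensions and threshold are all illustrative.

    import torch
    import torch.nn as nn

    class StackedGRUClassifier(nn.Module):
        def __init__(self, n_features, hidden=64, layers=3, n_classes=2):
            super().__init__()
            # several GRU layers stacked on top of each other
            self.gru = nn.GRU(n_features, hidden, num_layers=layers, batch_first=True)
            self.head = nn.Linear(hidden, n_classes)

        def forward(self, x):                  # x: (batch, time, features)
            out, _ = self.gru(x)
            return self.head(out[:, -1, :])    # classify from the last time step

    model = StackedGRUClassifier(n_features=10)
    x = torch.randn(4, 20, 10)                 # 4 samples, 20 post-fault time steps, 10 features
    probs = torch.softmax(model(x), dim=1)
    confident = probs.max(dim=1).values > 0.9  # only accept assessments made with high confidence
    print(probs)
    print(confident)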
  • 100
    Publication Date: 2018-08-11
    Description: by Luyi Tian, Shian Su, Xueyi Dong, Daniela Amann-Zalcenstein, Christine Biben, Azadeh Seidi, Douglas J. Hilton, Shalin H. Naik, Matthew E. Ritchie Single-cell RNA sequencing (scRNA-seq) technology allows researchers to profile the transcriptomes of thousands of cells simultaneously. Protocols that incorporate both designed and random barcodes have greatly increased the throughput of scRNA-seq, but give rise to a more complex data structure. There is a need for new tools that can handle the various barcoding strategies used by different protocols and exploit this information for quality assessment at the sample-level and provide effective visualization of these results in preparation for higher-level analyses. To this end, we developed scPipe , an R/Bioconductor package that integrates barcode demultiplexing, read alignment, UMI-aware gene-level quantification and quality control of raw sequencing data generated by multiple protocols that include CEL-seq, MARS-seq, Chromium 10X, Drop-seq and Smart-seq. scPipe produces a count matrix that is essential for downstream analysis along with an HTML report that summarises data quality. These results can be used as input for downstream analyses including normalization, visualization and statistical testing. scPipe performs this processing in a few simple R commands, promoting reproducible analysis of single-cell data that is compatible with the emerging suite of open-source scRNA-seq analysis tools available in R/Bioconductor and beyond. The scPipe R package is available for download from https://www.bioconductor.org/packages/scPipe.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...