ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Articles  (22,496)
  • 2015-2019  (22,496)
  • 2015  (22,496)
  • Computer Science  (22,496)
  • 1
  • 2
  • 3
  • 4
    Publication Date: 2015-10-30
    Electronic ISSN: 2255-9922
    Topics: Computer Science
  • 5
    Publication Date: 2015-10-30
    Electronic ISSN: 2255-9922
    Topics: Computer Science
  • 6
    Publication Date: 2015-07-30
    Electronic ISSN: 2255-9922
    Topics: Computer Science
  • 7
    Publication Date: 2015-04-29
    Electronic ISSN: 2255-9922
    Topics: Computer Science
  • 8
    Publication Date: 2015-07-30
    Electronic ISSN: 2255-9922
    Topics: Computer Science
  • 9
    Publication Date: 2015-08-01
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 10
    Publication Date: 2015-08-01
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 11
    Publication Date: 2015-08-01
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 12
    Publication Date: 2015-08-01
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 13
    Publication Date: 2015-02-28
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 14
    Publication Date: 2015-06-13
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 15
  • 16
    Publication Date: 2015-08-01
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 17
  • 18
    Publication Date: 2015-04-24
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 19
  • 20
  • 21
    Publication Date: 2015-05-04
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 22
    Publication Date: 2015-08-01
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
  • 23
    Publication Date: 2015-08-08
    Description: As sequencing becomes cheaper and more widely available, there is a greater need to quickly and effectively analyze large-scale genomic data. While the functionality of AVIA v1.0, whose implementation was based on ANNOVAR, was comparable with other annotation web servers, AVIA v2.0 represents an enhanced web-based server that extends genomic annotations to cell-specific transcripts and protein-level functional annotations. With AVIA’s improved interface, users can better visualize their data, perform comprehensive searches and categorize both coding and non-coding variants. Availability and implementation: AVIA is freely available through the web at http://avia.abcc.ncifcrf.gov. Contact: Hue.Vuong@fnlcr.nih.gov. Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
  • 24
    Publication Date: 2015-08-08
    Description: As new methods for multivariate analysis of genome wide association studies become available, it is important to be able to combine results from different cohorts in a meta-analysis. The R package MultiMeta provides an implementation of the inverse-variance-based method for meta-analysis, generalized to an n-dimensional setting. Availability and implementation: The R package MultiMeta can be downloaded from CRAN. Contact: dragana.vuckovic@burlo.trieste.it; vi1@sanger.ac.uk. Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
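The MultiMeta entry above centres on inverse-variance meta-analysis. As a rough sketch of the univariate core of that idea (this is not the MultiMeta package's own code; the function name and the example numbers are illustrative only):

```python
import numpy as np
from scipy import stats

def inverse_variance_meta(betas, ses):
    """Fixed-effect inverse-variance meta-analysis.

    betas, ses: per-cohort effect estimates and their standard errors.
    Returns the pooled estimate, its standard error and a two-sided p-value.
    """
    betas = np.asarray(betas, dtype=float)
    ses = np.asarray(ses, dtype=float)
    w = 1.0 / ses**2                              # precision weights
    beta_pooled = np.sum(w * betas) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    z = beta_pooled / se_pooled
    return beta_pooled, se_pooled, 2.0 * stats.norm.sf(abs(z))

# Example: three cohorts reporting the same effect with different precision.
print(inverse_variance_meta([0.12, 0.08, 0.15], [0.05, 0.04, 0.07]))
```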
  • 25
    Publication Date: 2015-08-08
    Description: Background: Recently, the Bayesian method has become more popular for analyzing high dimensional gene expression data, as it allows us to borrow information across different genes and provides powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. Results: In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387–404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the proposed first method based on the means and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783–802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to those of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. Conclusion: The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
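The entry above declares a gene differentially expressed when the posterior probability of a large standardized difference is high. A minimal sketch of that generic decision rule, assuming posterior draws from some Bayesian model are already available (names and thresholds are illustrative, not the authors' exact criterion):

```python
import numpy as np

def confident_difference_calls(mu1_draws, mu2_draws, sigma_draws,
                               delta=1.0, prob_threshold=0.95):
    """Flag genes whose posterior probability of a standardized mean
    difference larger than `delta` exceeds `prob_threshold`.

    All inputs are posterior samples of shape (n_draws, n_genes).
    """
    standardized = np.abs(mu1_draws - mu2_draws) / sigma_draws
    posterior_prob = (standardized > delta).mean(axis=0)   # one value per gene
    return posterior_prob > prob_threshold, posterior_prob

# Toy usage with fake posterior draws for three genes.
rng = np.random.default_rng(0)
mu1 = rng.normal([0.0, 2.0, 0.1], 0.2, size=(1000, 3))
mu2 = rng.normal([0.0, 0.0, 0.0], 0.2, size=(1000, 3))
sigma = np.full((1000, 3), 0.5)
print(confident_difference_calls(mu1, mu2, sigma))
```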
  • 26
    Publication Date: 2015-08-09
    Description: Background: Plant organ segmentation from 3D point clouds is a relevant task for plant phenotyping and plant growth observation. Automated solutions are required to increase the efficiency of recent high-throughput plant phenotyping pipelines. However, plant geometrical properties vary with time, among observation scales and different plant types. The main objective of the present research is to develop a fully automated, fast and reliable data driven approach for plant organ segmentation. Results: The automated segmentation of plant organs using unsupervised, clustering methods is crucial in cases where the goal is to get fast insights into the data, or where labeled data is unavailable or costly to obtain. For this we propose and compare data driven approaches that are easy to realize and make the use of standard algorithms possible. Since normalized histograms, acquired from 3D point clouds, can be seen as samples from a probability simplex, we propose to map the data from the simplex space into Euclidean space using Aitchison's log-ratio transformation, or into the positive quadrant of the unit sphere using the square root transformation. This, in turn, paves the way to a wide range of commonly used analysis techniques that are based on measuring the similarities between data points using Euclidean distance. We investigate the performance of the resulting approaches in the practical context of grouping 3D point clouds and demonstrate empirically that they lead to clustering results with high accuracy for monocotyledonous and dicotyledonous plant species with diverse shoot architecture. Conclusion: An automated segmentation of 3D point clouds is demonstrated in the present work. Within seconds, first insights into plant data can be derived – even from non-labelled data. This approach is applicable to different plant species with high accuracy. The analysis cascade can be implemented in future high-throughput phenotyping scenarios and will support the evaluation of the performance of different plant genotypes exposed to stress or in different environmental scenarios.
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
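The entry above maps normalized histograms from the probability simplex into Euclidean space (log-ratio or square-root transform) so that standard distance-based clustering applies. A minimal sketch of that pipeline, assuming a centered log-ratio variant of Aitchison's transform and k-means as the downstream clusterer (the paper may use different variants):

```python
import numpy as np
from sklearn.cluster import KMeans

def clr_transform(hist, eps=1e-9):
    """Centered log-ratio map from the probability simplex to Euclidean space."""
    h = hist + eps                                 # avoid log(0)
    h = h / h.sum(axis=1, keepdims=True)
    logh = np.log(h)
    return logh - logh.mean(axis=1, keepdims=True)

def sqrt_transform(hist):
    """Square-root map onto the positive quadrant of the unit sphere."""
    h = hist / hist.sum(axis=1, keepdims=True)
    return np.sqrt(h)

# Toy usage: cluster normalized feature histograms extracted from 3D points.
rng = np.random.default_rng(1)
histograms = rng.dirichlet(alpha=[2, 5, 1, 1], size=200)
features = clr_transform(histograms)               # or sqrt_transform(histograms)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```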
  • 27
    Publication Date: 2015-08-12
    Description: Receptor tyrosine kinases are essential proteins involved in cellular differentiation and proliferation in vivo and are heavily involved in allergic diseases, diabetes, and onset/proliferation of cancerous cells. Identifying the interacting partner of this protein, a growth factor ligand, will provide a deeper understanding of cellular proliferation/differentiation and other cell processes. In this study, we developed a method for predicting tyrosine kinase ligand-receptor pairs from their amino acid sequences. We collected tyrosine kinase ligand-receptor pairs from the Database of Interacting Proteins (DIP) and UniProtKB, filtered them by removing sequence redundancy, and used them as a dataset for machine learning and assessment of predictive performance. Our prediction method is based on support vector machines (SVMs), and we evaluated several input features suitable for tyrosine kinase for machine learning and compared and analyzed the results. Using sequence pattern information and domain information extracted from sequences as input features, we obtained 0.996 of the area under the receiver operating characteristic curve. This accuracy is higher than that obtained from general protein-protein interaction pair predictions.
    Print ISSN: 1687-8027
    Electronic ISSN: 1687-8035
    Topics: Biology , Computer Science
    Published by Hindawi
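The entry above trains a support vector machine on sequence-derived features of ligand-receptor pairs. A minimal scikit-learn sketch of that setup with placeholder features and labels (the actual feature extraction from DIP/UniProtKB sequences is not reproduced here):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# X: one row per candidate ligand-receptor pair, built by combining
# sequence-derived features of both partners; y: 1 = interacting pair.
rng = np.random.default_rng(2)
X = rng.normal(size=(300, 40))            # placeholder feature matrix
y = rng.integers(0, 2, size=300)          # placeholder labels

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("cross-validated AUC:", auc.mean())
```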
  • 28
    Publication Date: 2015-08-12
    Description: We examine a distributed detection problem in a wireless sensor network, where sensor nodes collaborate to detect a Gaussian signal with an unknown change of power, i.e., a scale parameter. Due to power/bandwidth constraints, we consider the case where each sensor quantizes its observation into a binary digit. The binary data are then transmitted through error-prone wireless links to a fusion center, where a generalized likelihood ratio test (GLRT) detector is employed to perform a global decision. We study the design of a binary quantizer based on an asymptotic analysis of the GLRT. Interestingly, the quantization threshold of the quantizer is independent of the unknown scale parameter. Numerical results are included to illustrate the performance of the proposed quantizer and GLRT in binary symmetric channels (BSCs).
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
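The entry above has every sensor transmit a single bit and applies a GLRT at the fusion center. A simplified sketch of that flow, assuming an absolute-value threshold quantizer and ignoring channel errors; this is one straightforward way to write a GLRT on the resulting Bernoulli counts, not necessarily the quantizer or test derived in the paper:

```python
import numpy as np
from scipy import stats

def sensor_bits(x, tau):
    """Each sensor quantizes its observation into one bit: |x| > tau."""
    return (np.abs(x) > tau).astype(int)

def glrt_statistic(bits, p0):
    """GLRT on i.i.d. bits: likelihood at the MLE of the exceedance
    probability versus its value p0 under the noise-only hypothesis."""
    n = bits.size
    p_hat = np.clip(bits.mean(), 1e-12, 1 - 1e-12)
    return n * (p_hat * np.log(p_hat / p0)
                + (1 - p_hat) * np.log((1 - p_hat) / (1 - p0)))

# Toy usage: 200 sensors, nominal noise power sigma0, true power increased.
rng = np.random.default_rng(3)
tau, sigma0 = 1.0, 1.0
p0 = 2 * stats.norm.sf(tau / sigma0)          # P(|x| > tau) under H0
x = rng.normal(0.0, 1.5 * sigma0, size=200)   # the scale parameter changed
print(glrt_statistic(sensor_bits(x, tau), p0))
```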
  • 29
    Publication Date: 2015-08-09
    Description: Genome memory is an important aspect of electronic cells. Here, a novel genome memory structure called partial-DNA cyclic memory is proposed, in which cells only store a portion of the system’s entire DNA. The stored gene number is independent of the scale of embryonic array and of the target circuit, and can be set according to actual demand in the design process. Genes can be transferred in the cell and the embryonics array through intracellular and intercellular gene cyclic and non-cyclic shifts, and based on this process the embryonic array’s functional differentiation and self-repair can be achieved. In particular, lost genes caused by faulty cells can be recovered through gene updating based on the remaining normal neighbor cells during the self-repair process. A reliability model of the proposed memory structure is built considering the gene updating method, and depending on the implementations of the memory, the hardware overhead is modeled. Based on the reliability model and hardware overhead model, we can find that the memory can achieve high reliability with relatively few gene backups and with low hardware overhead. Theoretical analysis and a simulation experiment show that the new genome memory structure not only achieves functional differentiation and self-repair of the embryonics array, but also ensures system reliability while reducing hardware overhead. This has significant value in engineering applications, allowing the proposed genome memory structure to be used to design larger scale self-repair chips.
    Print ISSN: 1389-2576
    Electronic ISSN: 1573-7632
    Topics: Computer Science
    Published by Springer
  • 30
    Publication Date: 2015-08-12
    Description: As internet technologies make their way into developing areas, so too does the possibility of education and training being delivered to the people living in those previously unserved areas. The growing catalogue of free, high quality courseware, when combined with the newly acquired means of delivery, creates the potential for millions of people in the developing world to acquire a good education. Yet a good education obviously requires more than simply delivering information; students must also receive high quality feedback on their assessments. They must be told how their performance compares with the ideal, and be shown how to close the gap between the two. However, delivering high quality feedback is labor-intensive, and therefore expensive, and has long been recognized as a problematic issue by educators. This paper outlines a case study that uses a Learning Management System (LMS) to efficiently deliver detailed feedback that is informed by the principles of best practice. We make the case that the efficiencies of this method allow for large-scale courses with thousands of enrolments that are accessible to developing and developed areas alike. We explore the question; is computer-mediated feedback delivery efficient and effective and might it be applied to large-scale courses at low-cost?
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI Publishing
  • 31
    Publication Date: 2015-08-13
    Description: In the classic k -center problem, we are given a metric graph, and the objective is to select k nodes as centers such that the maximum distance from any vertex to its closest center is minimized. In this paper, we consider two important generalizations of k -center, the matroid center problem and the knapsack center problem. Both problems are motivated by recent content distribution network applications. Our contributions can be summarized as follows: (1) We consider the matroid center problem in which the centers are required to form an independent set of a given matroid. We show this problem is NP-hard even on a line. We present a 3-approximation algorithm for the problem on general metrics. We also consider the outlier version of the problem where a given number of vertices can be excluded as outliers from the solution. We present a 7-approximation for the outlier version. (2) We consider the (multi-)knapsack center problem in which the centers are required to satisfy one (or more) knapsack constraint(s). It is known that the knapsack center problem with a single knapsack constraint admits a 3-approximation. However, when there are at least two knapsack constraints, we show this problem is not approximable at all. To complement the hardness result, we present a polynomial time algorithm that gives a 3-approximate solution such that one knapsack constraint is satisfied and the others may be violated by at most a factor of \(1+\epsilon \) . We also obtain a 3-approximation for the outlier version that may violate the knapsack constraint by \(1+\epsilon \) .
    Print ISSN: 0178-4617
    Electronic ISSN: 1432-0541
    Topics: Computer Science , Mathematics
    Published by Springer
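For context on the entry above: the classic greedy farthest-first traversal already gives a 2-approximation for the plain, unconstrained k-center problem (this is textbook material, not the matroid- or knapsack-constrained algorithms of the paper):

```python
import numpy as np

def greedy_k_center(dist, k):
    """Farthest-first traversal: 2-approximation for metric k-center.

    dist is a symmetric n x n distance matrix; returns the chosen
    centers and the resulting covering radius.
    """
    centers = [0]                               # arbitrary first center
    d_to_centers = dist[0].copy()
    for _ in range(k - 1):
        nxt = int(np.argmax(d_to_centers))      # farthest point so far
        centers.append(nxt)
        d_to_centers = np.minimum(d_to_centers, dist[nxt])
    return centers, d_to_centers.max()

# Toy usage on random points in the plane.
rng = np.random.default_rng(4)
pts = rng.random((50, 2))
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
print(greedy_k_center(dist, k=4))
```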
  • 32
    Publication Date: 2015-08-13
    Description: We design a succinct data structure for representing a poset that, given two elements, can report whether one precedes the other in constant time. This is equivalent to succinctly representing the transitive closure graph of the poset, and we note that the same method can also be used to succinctly represent the transitive reduction graph. For an n element poset, the data structure occupies \(n^2/4 + o(n^2)\) bits in the worst case. Furthermore, a slight extension to this data structure yields a succinct oracle for reachability in arbitrary directed graphs. Thus, using no more than a quarter of the space required to represent an arbitrary directed graph, reachability queries can be supported in constant time. We also consider the operation of listing all the successors or predecessors of a given element, and show how to do this in constant time per element reported using a slightly modified version of our succinct data structure.
    Print ISSN: 0178-4617
    Electronic ISSN: 1432-0541
    Topics: Computer Science , Mathematics
    Published by Springer
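The entry above answers "does u precede v?" in constant time from a precomputed representation of the transitive closure. A naive sketch of the same query interface using a full boolean matrix (so without the n^2/4 + o(n^2) bit space guarantee of the succinct structure):

```python
import numpy as np

def transitive_closure(adj):
    """Warshall-style boolean transitive closure of a DAG / poset relation.

    The returned matrix supports constant-time precedence queries, but it
    is stored as a plain n x n matrix rather than succinctly.
    """
    reach = adj.astype(bool).copy()
    for k in range(reach.shape[0]):
        reach |= np.outer(reach[:, k], reach[k, :])
    return reach

# Toy poset on four elements: 0 < 1 < 3 and 0 < 2.
adj = np.zeros((4, 4), dtype=bool)
adj[0, 1] = adj[1, 3] = adj[0, 2] = True
reach = transitive_closure(adj)
print(reach[0, 3], reach[2, 3])   # True False
```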
  • 33
    Publication Date: 2015-08-13
    Description: More and more hybrid electric vehicles are on the road, since they offer advantages such as energy savings and better active safety performance. Hybrid vehicles have two or more power driving systems and frequently switch operating conditions, so stability control is very important. In this work, a two-stage Kalman algorithm is used to fuse data in hybrid vehicle stability testing. First, the RT3102 navigation system and the Dewetron system are introduced. Second, a data fusion model based on the Kalman filter is proposed. Then, this model is simulated and tested on a sample vehicle, using Carsim and Simulink software to verify the results. The results show the merits of this model.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
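The entry above fuses vehicle test data with a Kalman filter. A minimal sketch of a single-stage linear Kalman filter fusing two noisy sensors of one state (matrices are illustrative; this is not the paper's two-stage algorithm or the RT3102/Dewetron configuration):

```python
import numpy as np

def kalman_step(x, P, z, F, Q, H, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy fusion: two sensors both observing one scalar state (e.g. yaw rate).
F = np.array([[1.0]])
Q = np.array([[0.01]])
H = np.array([[1.0], [1.0]])                    # both sensors see the state
R = np.diag([0.04, 0.09])                       # per-sensor noise variances
x, P = np.array([0.0]), np.array([[1.0]])
for z in ([0.21, 0.18], [0.25, 0.22], [0.24, 0.27]):
    x, P = kalman_step(x, P, np.array(z), F, Q, H, R)
print(x, P)
```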
  • 34
    Publication Date: 2015-08-13
    Description: Background: Host genetic variability has been implicated in chemotherapy-induced peripheral neuropathy (CIPN). A dose-limiting toxicity for chemotherapy agents, CIPN is also a debilitating condition that may progress to chronic neuropathic pain. We utilized a bioinformatics approach, which captures the complexity of intracellular and intercellular interactions, to identify genes for CIPN. Methods: Using genes pooled from the literature as a starting point, we used Ingenuity Pathway Analysis (IPA) to generate gene networks for CIPN. Results: We performed IPA core analysis for genes associated with platinum-, taxane- and platinum-taxane–induced neuropathy. We found that IL6, TNF, CXCL8, IL1B and ERK1/2 were the top genes in terms of the number of connections in platinum-induced neuropathy and TP53, MYC, PARP1, P38 MAPK and TNF for combined taxane-platinum–induced neuropathy. Conclusion: Neurotoxicity is common in cancer patients treated with platinum compounds and anti-microtubule agents and CIPN is one of the debilitating sequela. The bioinformatic approach helped identify genes associated with CIPN in cancer patients.
    Electronic ISSN: 1756-0381
    Topics: Biology , Computer Science
    Published by BioMed Central
  • 35
    Publication Date: 2015-08-13
    Description: Background: Tumorigenesis is an evolutionary process by which tumor cells acquire mutations through successive diversification and differentiation. There is much interest in reconstructing this process of evolution due to its relevance to identifying drivers of mutation and predicting future prognosis and drug response. Efforts are challenged by high tumor heterogeneity, though, both within and among patients. In prior work, we showed that this heterogeneity could be turned into an advantage by computationally reconstructing models of cell populations mixed to different degrees in distinct tumors. Such mixed membership model approaches, however, are still limited in their ability to dissect more than a few well-conserved cell populations across a tumor data set. Results: We present a method to improve on current mixed membership model approaches by better accounting for conserved progression pathways between subsets of cancers, which imply a structure to the data that has not previously been exploited. We extend our prior methods, which use an interpretation of the mixture problem as that of reconstructing simple geometric objects called simplices, to instead search for structured unions of simplices called simplicial complexes that one would expect to emerge from mixture processes describing branches along an evolutionary tree. We further improve on the prior work with a novel objective function to better identify mixtures corresponding to parsimonious evolutionary tree models. We demonstrate that this approach improves on our ability to accurately resolve mixtures on simulated data sets and demonstrate its practical applicability on a large RNASeq tumor data set. Conclusions: Better exploiting the expected geometric structure for mixed membership models produced from common evolutionary trees allows us to quickly and accurately reconstruct models of cell populations sampled from those trees. In the process, we hope to develop a better understanding of tumor evolution as well as other biological problems that involve interpreting genomic data gathered from heterogeneous populations of cells.
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
  • 36
    Publication Date: 2015-08-13
    Description: Background: Understanding the architecture and function of RNA molecules requires methods for comparing and analyzing their tertiary and quaternary structures. While structural superposition of short RNAs is achievable in a reasonable time, large structures represent much bigger challenge. Therefore, we have developed a fast and accurate algorithm for RNA pairwise structure superposition called SETTER and implemented it in the SETTER web server. However, though biological relationships can be inferred by a pairwise structure alignment, key features preserved by evolution can be identified only from a multiple structure alignment. Thus, we extended the SETTER algorithm to the alignment of multiple RNA structures and developed the MultiSETTER algorithm. Results: In this paper, we present the updated version of the SETTER web server that implements a user friendly interface to the MultiSETTER algorithm. The server accepts RNA structures either as the list of PDB IDs or as user-defined PDB files. After the superposition is computed, structures are visualized in 3D and several reports and statistics are generated. Conclusion: To the best of our knowledge, the MultiSETTER web server is the first publicly available tool for a multiple RNA structure alignment. The MultiSETTER server offers the visual inspection of an alignment in 3D space which may reveal structural and functional relationships not captured by other multiple alignment methods based either on a sequence or on secondary structure motifs.
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
  • 37
    Publication Date: 2015-08-13
    Description: Background: Today’s modern research of B and T cell antigen receptors (the immunoglobulins (IG) or antibodies and T cell receptors (TR)) forms the basis for detailed analyses of the human adaptive immune system. For instance, insights in the state of the adaptive immune system provide information that is essentially important in monitoring transplantation processes and the regulation of immune suppressiva. In this context, algorithms and tools are necessary for analyzing the IG and TR diversity on nucleotide as well as on amino acid sequence level, identifying highly proliferated clonotypes, determining the diversity of the cell repertoire found in a sample, comparing different states of the human immune system, and visualizing all relevant information. Results: We here present IMEX, a software framework for the detailed characterization and visualization of the state of human IG and TR repertoires. IMEX offers a broad range of algorithms for statistical analysis of IG and TR data, CDR and V-(D)-J analysis, diversity analysis by calculating the distribution of IG and TR, calculating primer efficiency, and comparing multiple data sets. We use a mathematical model that is able to describe the number of unique clonotypes in a sample taking into account the true number of unique sequences and read errors; we heuristically optimize the parameters of this model. IMEX uses IMGT/HighV-QUEST analysis outputs and includes methods for splitting and merging to enable the submission to this portal and to combine the outputs results, respectively. All calculation results can be visualized and exported. Conclusion: IMEX is an user-friendly and flexible framework for performing clonality experiments based on CDR and V-(D)-J rearranged regions, diversity analysis, primer efficiency, and various different visualization experiments. Using IMEX, various immunological reactions and alterations can be investigated in detail. IMEX is freely available for Windows and Unix platforms at http://bioinformatics.fh-hagenberg.at/immunexplorer/.
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
  • 38
    Publication Date: 2015-08-13
    Description: Background: Bacterial vaginosis (BV) is a disease associated with the vagina microbiome. It is highly prevalent and is characterized by symptoms including odor, discharge and irritation. No single microbe has been found to cause BV. In this paper we use random forests and logistic regression classifiers to model the relationship between the microbial community and BV. We use subsets of the microbial community features in order to determine which features are important to the classification models. Results: We find that models generated using logistic regression and random forests perform nearly identically and identify largely similar important features. Only a few features are necessary to obtain high BV classification accuracy. Additionally, there appears to be substantial redundancy between the microbial community features. Conclusions: These results are in contrast to a previous study in which the important features identified by the classifiers were dissimilar. This difference appears to be the result of using different feature importance measures. It is not clear whether machine learning classifiers are capturing patterns different from simple correlations.
    Electronic ISSN: 1756-0381
    Topics: Biology , Computer Science
    Published by BioMed Central
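The entry above compares random forests and logistic regression on microbial community features and then inspects their feature importances. A minimal scikit-learn sketch of that comparison on placeholder data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X: relative abundances of taxa per sample; y: diagnosis label (placeholders).
rng = np.random.default_rng(5)
X = rng.dirichlet(np.ones(20), size=150)
y = (X[:, 0] + 0.1 * rng.normal(size=150) > 0.08).astype(int)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
lr = LogisticRegression(max_iter=1000)
for name, model in [("random forest", rf), ("logistic regression", lr)]:
    print(name, "accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))

# Two different notions of feature importance used to rank taxa.
rf.fit(X, y)
lr.fit(X, y)
print("RF importance rank:", np.argsort(rf.feature_importances_)[::-1][:5])
print("LR |coef| rank    :", np.argsort(np.abs(lr.coef_[0]))[::-1][:5])
```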
  • 39
    Publication Date: 2015-08-14
    Description: We consider a graph observability problem: how many edge colors are needed for an unlabeled graph so that an agent, walking from node to node, can uniquely determine its location from just the observed color sequence of the walk? Specifically, let G ( n ,  d ) be an edge-colored subgraph of d -dimensional (directed or undirected) lattice of size \(n^d = n \times n \times \cdots \times n\) . We say that G ( n ,  d ) is t -observable if an agent can uniquely determine its current position in the graph from the color sequence of any t -dimensional walk, where the dimension is the number of different directions spanned by the edges of the walk. A walk in an undirected lattice G ( n ,  d ) has dimension between 1 and d , but a directed walk can have dimension between 1 and 2 d because of two different orientations for each axis. We derive bounds on the number of colors needed for t -observability. Our main result is that \(\varTheta (n^{d/t})\) colors are both necessary and sufficient for t -observability of G ( n ,  d ), where d is considered a constant. This shows an interesting dependence of graph observability on the ratio between the dimension of the lattice and that of the walk. In particular, the number of colors for full-dimensional walks is \(\varTheta (n^{1/2})\) in the directed case, and \(\varTheta (n)\) in the undirected case, independent of the lattice dimension. All of our results extend easily to non-square lattices: given a lattice graph of size \(N = n_1 \times n_2 \times \cdots \times n_d\) , the number of colors for t -observability is \(\varTheta (\root t \of {N})\) .
    Print ISSN: 0178-4617
    Electronic ISSN: 1432-0541
    Topics: Computer Science , Mathematics
    Published by Springer
  • 40
    Publication Date: 2015-08-14
    Description: Background: With the introduction and implementation of a variety of government programs and policies to encourage adoption of electronic medical records (EMRs), EMRs are being increasingly adopted in North America. We sought to evaluate the completeness of a variety of EMR fields to determine if family physicians were comprehensively using their EMRs and the suitability of use of the data for secondary purposes in Ontario, Canada. Methods: We examined EMR data from a convenience sample of family physicians distributed throughout Ontario within the Electronic Medical Record Administrative data Linked Database (EMRALD) as extracted in the summer of 2012. We identified all physicians with at least one year of EMR use. Measures were developed and rates of physician documentation of clinical encounters, electronic prescriptions, laboratory tests, blood pressure and weight, referrals, consultation letters, and all fields in the cumulative patient profile were calculated as a function of physician and patient time since starting on the EMR. Results: Of the 167 physicians with at least one year of EMR use, we identified 186,237 patients. Overall, the fields with the highest level of completeness were for visit documentations and prescriptions (〉70 %). Improvements were observed with increasing trends of completeness overtime for almost all EMR fields according to increasing physician time on EMR. Assessment of the influence of patient time on EMR demonstrated an increasing likelihood of the population of EMR fields overtime, with the largest improvements occurring between the first and second years. Conclusions: All of the data fields examined appear to be reasonably complete within the first year of adoption with the biggest increase occurring the first to second year. Using all of the basic functions of the EMR appears to be occurring in the current environment of EMR adoption in Ontario. Thus the data appears to be suitable for secondary use.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 41
    Publication Date: 2015-08-14
    Description: Background: Recognising the limitations of a paper-based approach to documenting vital sign observations and responding to national clinical guidelines, we have explored the use of an electronic solution that could improve the quality and safety of patient care. We have developed a system for recording vital sign observations at the bedside, automatically calculating an Early Warning Score, and saving data such that it is accessible to all relevant clinicians within a hospital trust. We have studied current clinical practice of using paper observation charts, and attempted to streamline the process. We describe our user-focussed design process, and present the key design decisions prior to describing the system in greater detail. Results: The system has been deployed in three pilot clinical areas over a period of 9 months. During this time, vital sign observations were recorded electronically using our system. Analysis of the number of observations recorded (21,316 observations) and the number of active users (111 users) confirmed that the system is being used for routine clinical observations. Feedback from clinical end-users was collected to assess user acceptance of the system. This resulted in a System Usability Scale score of 77.8, indicating high user acceptability. Conclusions: Our system has been successfully piloted, and is in the process of full implementation throughout adult inpatient clinical areas in the Oxford University Hospitals. Whilst our results demonstrate qualitative acceptance of the system, its quantitative effect on clinical care is yet to be evaluated.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 42
    Publication Date: 2015-08-13
    Print ISSN: 0932-383X
    Electronic ISSN: 1613-7620
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
  • 43
    Publication Date: 2015-08-13
    Print ISSN: 0932-383X
    Electronic ISSN: 1613-7620
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
  • 44
    Publication Date: 2015-08-13
    Description: This paper describes the design of multirate filters used for the sample rate reduction of digital signals and their implementation on mixed-signal integrated circuits. Reaching target specifications of power, speed and silicon area as well as realizing multi-mode operation presents challenges. Possible solutions are discussed particularly for those filter stages which operate at the high sample rates and are illustrated in examples.
    Print ISSN: 0932-383X
    Electronic ISSN: 1613-7620
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
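The entry above concerns multirate filters for sample-rate reduction, which are typically realized as cascaded stages of anti-aliasing filtering plus downsampling. A minimal software sketch of such a two-stage decimation (an illustration of the signal-processing idea only, not the mixed-signal circuit implementation discussed in the paper):

```python
import numpy as np
from scipy import signal

fs = 96_000                                   # input sample rate in Hz
t = np.arange(0, 0.05, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t) + 0.1 * np.sin(2 * np.pi * 30_000 * t)

# Two-stage reduction 96 kHz -> 12 kHz (factor 8 split as 4 * 2).
# Each stage applies an anti-aliasing FIR filter before downsampling;
# splitting the overall factor keeps the individual filters short.
stage1 = signal.decimate(x, 4, ftype="fir", zero_phase=True)
stage2 = signal.decimate(stage1, 2, ftype="fir", zero_phase=True)
print(len(x), len(stage1), len(stage2))       # lengths shrink by each factor
```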
  • 45
    Publication Date: 2015-08-13
    Print ISSN: 0932-383X
    Electronic ISSN: 1613-7620
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
  • 46
    Publication Date: 2015-08-13
    Description: A fault-injection tool can be very useful in the context of safety-critical applications, e.g., to test fault-detection and avoidance mechanisms or simply to stress an application and analyze its behavior when faults occur. In this work, a fault-injection tool is presented which can be used to instrument an FPGA design with fault-injection logic at the netlist level during the implementation phase and to inject faults during runtime afterwards. The proposed approach can be smoothly integrated into an industrial FPGA tool flow, supports devices from multiple FPGA vendors and is highly configurable in order to fit the number of available FPGA logic resources. Differences from related approaches, which are applied either at the HDL or netlist level or on the FPGA configuration bitstream, are described. Finally, some results are presented to prove the applicability of the proposed solution.
    Print ISSN: 0932-383X
    Electronic ISSN: 1613-7620
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
  • 47
    Publication Date: 2015-08-15
    Description: This installment of Computer's series highlighting the work published in IEEE Computer Society journals comes from IEEE Transactions on Visualization and Computer Graphics. The Web extra at http://youtu.be/E1PVTitj7h0 is a video demonstration of a novel solution to multivariate data visualization that helps users interactively explore data by combining standard presentations, from detailed views to high-level overviews.
    Print ISSN: 0018-9162
    Electronic ISSN: 1558-0814
    Topics: Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE)
  • 48
    Publication Date: 2015-08-15
    Description: The data rearrangement engine (DRE) performs in-memory data restructuring to accelerate irregular, data-intensive applications. An emulation on a field-programmable gate array shows how the DRE could improve speedup, memory bandwidth, and energy consumption on three representative benchmarks.
    Print ISSN: 0018-9162
    Electronic ISSN: 1558-0814
    Topics: Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE)
  • 49
    Publication Date: 2015-08-15
    Description: Advertisement, IEEE.
    Print ISSN: 0018-9162
    Electronic ISSN: 1558-0814
    Topics: Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE)
  • 50
    Publication Date: 2015-08-15
    Print ISSN: 0932-383X
    Electronic ISSN: 1613-7620
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
  • 51
    Publication Date: 2015-08-15
    Description: Line-of-sight (LOS) communication links have an ever increasing demand for high throughput IP-based data transmission. Such LOS systems are of particular interest for bi-directional communications from an aircraft to a ground station. A key requirement for the operation in various countries is carrier frequency agility because of the regulatory issues. That is why the LOS link developed by JOANNEUM RESEARCH DIGITAL supports adjustable carrier frequencies in the range from 2 GHz to 6 GHz. The used TDD (Time Division Duplex) medium access has the advantage that only one frequency for up- and downlink is required. A further advantage is configurable allocation of the available carrier bandwidth between the downlink to the ground and uplink to the aircraft. The maximum throughput in the downlink is 8 Mbit/s. The system is based on a software defined radio (SDR) approach which leads to high flexibility because of fast development and testing of signal processing algorithms. This fact allows cost effective adaption of the system to different user requirements. The paper is organized in the following way. Section 1 provides a system overview. In Section 2 the modem design parameters are discussed in detail, followed by Section 3 and 4 which present the signal processing algorithms implemented and the RF front end. Finally, Section 5 shows the signal characteristic of a test flight with a DA42 aircraft.
    Print ISSN: 0932-383X
    Electronic ISSN: 1613-7620
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
  • 52
    Publication Date: 2015-08-15
    Description: In existing attribute-based encryption (ABE) schemes, the authority (i.e., the private key generator (PKG)) is able to calculate and issue any user’s private key; the PKG must therefore be completely trusted, which severely limits the applications of ABE. To mitigate this problem, we propose the black-box traceable ciphertext-policy attribute-based encryption (T-CP-ABE) scheme, in which a PKG that re-distributes users’ private keys for malicious uses can be caught and sued. We provide a construction to realize the T-CP-ABE scheme in a black box model. Our scheme is based on the decisional bilinear Diffie-Hellman (DBDH) assumption in the standard model. In our scheme, we employ a pair (ID, S) to identify a user, where ID denotes the identity of a user and S denotes the attribute set associated with her.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 53
    Publication Date: 2015-08-17
    Description: In this paper, a comprehensive comparison analysis in terms of outage probability and average symbol error ratio (SER) is presented for cooperative cognitive multiple-input and multiple-output (CC-MIMO) multiuser systems with amplify-and-forward (AF) protocol. Specially, we consider two scenarios where the CC-MIMO multiuser systems have the perfect and imperfect channel state information (CSI). The CC-MIMO multiuser systems consist of one multi-antenna source, one single-antenna relay, and multiple multi-antenna destinations. At the secondary source and destinations, the maximal ratio transmission (MRT) and maximal ratio combining (MRC) are employed, respectively. For such CC-MIMO multiuser systems, we first obtain the exact closed-form expressions of outage probability under the two cases where the CC-MIMO multiuser systems have the perfect and imperfect CSI. Then, to reduce the implementation complexity, the tight lower bounds of outage probability and average SER are derived. Finally, to obtain insight, by using the high signal-to-noise ratio (SNR) approximation, the asymptotic estimations of outage probability are achieved. The numerical results show that the derivations are agreed with the simulations, which validate our derivations. At the same time, the results show that, for the systems without perfect CSI, the achievable diversity order reduces to one, regardless of the number of antennas at the cognitive source and destinations as well as the number of the cognitive destinations. Nevertheless, these key parameters affect the coding gain of the CC-MIMO multiuser systems. When the systems have the perfect CSI (or without feedback delay), the achievable diversity gain is determined by the minimum between the number of source’s antennas and the product of the number of destinations and the number of destination’s antennas. For the effect of PU’s parameters, our results indicate that primary systems only affect the coding gain but not the diversity gain.
    Print ISSN: 1687-1472
    Electronic ISSN: 1687-1499
    Topics: Electrical Engineering, Measurement and Control Technology , Computer Science
    Published by Springer
  • 54
    Publication Date: 2015-08-04
    Description: The goal of cross-domain matching (CDM) is to find correspondences between two sets of objects in different domains in an unsupervised way. CDM has various interesting applications, including photo album summarization where photos are automatically aligned into a designed frame expressed in the Cartesian coordinate system, and temporal alignment which aligns sequences such as videos that are potentially expressed using different features. In this paper, we propose an information-theoretic CDM framework based on squared-loss mutual information (SMI). The proposed approach can directly handle non-linearly related objects/sequences with different dimensions, with the ability that hyper-parameters can be objectively optimized by cross-validation. We apply the proposed method to several real-world problems including image matching, unpaired voice conversion, photo album summarization, cross-feature video and cross-domain video-to-mocap alignment, and Kinect -based action recognition, and experimentally demonstrate that the proposed method is a promising alternative to state-of-the-art CDM methods.
    Print ISSN: 0162-8828
    Electronic ISSN: 1939-3539
    Topics: Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE)
  • 55
    Publication Date: 2015-08-04
    Description: The skeleton of a 2D shape is an important geometric structure in pattern analysis and computer vision. In this paper we study the skeleton of a 2D shape in a two-manifold \(\mathcal{M}\), based on a geodesic metric. We present a formal definition of the skeleton \(S(\Omega)\) for a shape \(\Omega\) in \(\mathcal{M}\) and show several properties that make \(S(\Omega)\) distinct from its Euclidean counterpart in \(\mathbb{R}^2\). We further prove that for a shape sequence \(\{\Omega_i\}\) that converges to a shape \(\Omega\) in \(\mathcal{M}\), the mapping \(\Omega \rightarrow \overline{S}(\Omega)\) is lower semi-continuous. A direct application of this result is that we can use a set \(P\) of sample points to approximate the boundary of a 2D shape \(\Omega\) in \(\mathcal{M}\), and the Voronoi diagram of \(P\) inside \(\Omega \subset \mathcal{M}\) gives a good approximation to the skeleton \(S(\Omega)\). Examples of skeleton computation in topography and brain morphometry are illustrated.
    Print ISSN: 0162-8828
    Electronic ISSN: 1939-3539
    Topics: Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE)
  • 56
    Publication Date: 2015-08-04
    Description: A widely used approach for locating points on deformable objects in images is to generate feature response images for each point, and then to fit a shape model to these response images. We demonstrate that Random Forest regression-voting can be used to generate high quality response images quickly. Rather than using a generative or a discriminative model to evaluate each pixel, a regressor is used to cast votes for the optimal position of each point. We show that this leads to fast and accurate shape model matching when applied in the Constrained Local Model framework. We evaluate the technique in detail, and compare it with a range of commonly used alternatives across application areas: the annotation of the joints of the hands in radiographs and the detection of feature points in facial images. We show that our approach outperforms alternative techniques, achieving what we believe to be the most accurate results yet published for hand joint annotation and state-of-the-art performance for facial feature point detection.
    Print ISSN: 0162-8828
    Electronic ISSN: 1939-3539
    Topics: Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE)
  • 57
    Publication Date: 2015-08-04
    Description: We present a novel method to recognise planar structures in a single image and estimate their 3D orientation. This is done by exploiting the relationship between image appearance and 3D structure, using machine learning methods with supervised training data. As such, the method does not require specific features or use geometric cues, such as vanishing points. We employ general feature representations based on spatiograms of gradients and colour, coupled with relevance vector machines for classification and regression. We first show that using hand-labelled training data, we are able to classify pre-segmented regions as being planar or not, and estimate their 3D orientation. We then incorporate the method into a segmentation algorithm to detect multiple planar structures from a previously unseen image.
    Print ISSN: 0162-8828
    Electronic ISSN: 1939-3539
    Topics: Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE)
  • 58
    Publication Date: 2015-08-04
    Print ISSN: 0162-8828
    Electronic ISSN: 1939-3539
    Topics: Computer Science
  • 59
    Publication Date: 2015-08-04
    Description: Multiple view segmentation consists in segmenting objects simultaneously in several views. A key issue in that respect and compared to monocular settings is to ensure propagation of segmentation information between views while minimizing complexity and computational cost. In this work, we first investigate the idea that examining measurements at the projections of a sparse set of 3D points is sufficient to achieve this goal. The proposed algorithm softly assigns each of these 3D samples to the scene background if it projects on the background region in at least one view, or to the foreground if it projects on foreground region in all views. Second, we show how other modalities such as depth may be seamlessly integrated in the model and benefit the segmentation. The paper exposes a detailed set of experiments used to validate the algorithm, showing results comparable with the state of art, with reduced computational complexity. We also discuss the use of different modalities for specific situations, such as dealing with a low number of viewpoints or a scene with color ambiguities between foreground and background.
    Print ISSN: 0162-8828
    Electronic ISSN: 1939-3539
    Topics: Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE)
  • 60
    Publication Date: 2015-08-04
    Description: We revisit the matrix problems sparse null space and matrix sparsification , and show that they are equivalent. We then proceed to seek algorithms for these problems: we prove the hardness of approximation of these problems, and also give a powerful tool to extend algorithms and heuristics for sparse approximation theory to these problems.
    Print ISSN: 0178-4617
    Electronic ISSN: 1432-0541
    Topics: Computer Science , Mathematics
    Published by Springer
  • 61
    Publication Date: 2015-08-04
    Description: We introduce Planar Disjoint Paths Completion , a completion counterpart of the Disjoint Paths problem, and study its parameterized complexity. The problem can be stated as follows: given a, not necessarily connected, plane graph G ,  k pairs of terminals, and a face F of G ,  find a minimum-size set of edges, if one exists, to be added inside F so that the embedding remains planar and the pairs become connected by k disjoint paths in the augmented network. Our results are twofold: first, we give an upper bound on the number of necessary additional edges when a solution exists. This bound is a function of k , independent of the size of G . Second, we show that the problem is fixed-parameter tractable, in particular, it can be solved in time \(f(k)\cdot n^{2}\) .
    Print ISSN: 0178-4617
    Electronic ISSN: 1432-0541
    Topics: Computer Science , Mathematics
    Published by Springer
  • 62
    Publication Date: 2015-08-05
    Description: Currently, deep learning has made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical way the brain deals with images and speech. In the field of NLP, topic models are one of the important ways of modeling documents. Topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose Event Model, which is unsupervised and based on the language processing mechanisms of neurolinguistics, to model documents. In Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in the events. Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction represents a document as a low-dimensional vector through a linear model that is completely different from topic models. Event Model achieves state-of-the-art results on document retrieval tasks.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 63
    Publication Date: 2015-08-04
    Description: Text detection in natural scene images is an important prerequisite for many content-based image analysis tasks, while most current research efforts only focus on horizontal or near horizontal scene text. In this paper, first we present a unified distance metric learning framework for adaptive hierarchical clustering, which can simultaneously learn similarity weights (to adaptively combine different feature similarities) and the clustering threshold (to automatically determine the number of clusters). Then, we propose an effective multi-orientation scene text detection system, which constructs text candidates by grouping characters based on this adaptive clustering. Our text candidates construction method consists of several sequential coarse-to-fine grouping steps: morphology-based grouping via single-link clustering, orientation-based grouping via divisive hierarchical clustering, and projection-based grouping also via divisive clustering. The effectiveness of our proposed system is evaluated on several public scene text databases, e.g., ICDAR Robust Reading Competition data sets (2011 and 2013), MSRA-TD500 and NEOCR. Specifically, on the multi-orientation text data set MSRA-TD500, the \(f\)-measure of our system is 71 percent, much better than the state-of-the-art performance. We also construct and release a practical challenging multi-orientation scene text data set (USTB-SV1K), which is available at http://prir.ustb.edu.cn/TexStar/MOMV-text-detection/.
    Print ISSN: 0162-8828
    Electronic ISSN: 1939-3539
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 64
    Publication Date: 2015-08-05
    Description: Digital publishing resources contain a lot of useful and authoritative knowledge. It may be necessary to reorganize the resources by concepts and recommend the related concepts for e-learning. A recommender system is presented in this paper based on the semantic relatedness of concepts, computed from texts in digital publishing resources. Firstly, concepts are extracted from encyclopedias, and the information in digital publishing resources is then reorganized by concepts. Secondly, concept vectors are generated with a skip-gram model, and the semantic relatedness between concepts is measured according to these concept vectors. As a result, related concepts and the associated information can be recommended to users by semantic relatedness for learning or reading (a small illustrative sketch of the relatedness computation follows this record). History data or users’ preference data are not needed for recommendation in a specific domain. The technique may not be language-specific. The method shows potential usability for e-learning in a specific domain.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
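    The relatedness computation described above can be illustrated with a minimal sketch: cosine similarity between concept vectors, standing in for the skip-gram embeddings the paper trains. All concept names and numbers below are illustrative placeholders, not data from the paper.
```python
import numpy as np

# Toy concept vectors such as those a skip-gram model might produce.
# Names and values are illustrative only.
concept_vectors = {
    "neural network": np.array([0.8, 0.1, 0.3]),
    "deep learning":  np.array([0.7, 0.2, 0.4]),
    "printing press": np.array([0.1, 0.9, 0.2]),
}

def relatedness(a, b, vectors=concept_vectors):
    """Cosine similarity between two concept vectors."""
    va, vb = vectors[a], vectors[b]
    return float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))

def recommend(query, k=2, vectors=concept_vectors):
    """Return the k concepts most related to the query concept."""
    others = [c for c in vectors if c != query]
    return sorted(others, key=lambda c: relatedness(query, c), reverse=True)[:k]

print(recommend("deep learning", k=1))  # -> ['neural network']
```
    In a real system the vectors would come from embeddings trained on the publishing corpus, and the top-k related concepts (with their associated resource snippets) would be shown to the learner.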
  • 65
    Publication Date: 2015-08-05
    Description: Data depth is a statistical method which models data distribution in terms of center-outward ranking rather than density or linear ranking. While it has attracted considerable academic interest, its applications are hampered by the lack of a method which is both robust and efficient. This paper introduces Half-Space Mass, a significantly improved version of half-space data depth. Half-Space Mass is, as far as we know, the only data depth method which is both robust and efficient. We also reveal four theoretical properties of Half-Space Mass: (i) its resultant mass distribution is concave regardless of the underlying density distribution, (ii) its maximum point is unique and can be considered as the median, (iii) the median is maximally robust, and (iv) its estimation extends to higher dimensional spaces in which the convex hull of the dataset occupies zero volume. We demonstrate the power of Half-Space Mass through its applications in two tasks. In anomaly detection, being a maximally robust location estimator leads directly to a robust anomaly detector that yields a better detection accuracy than half-space depth; and it runs orders of magnitude faster than \(L_2\) depth, an existing maximally robust location estimator. In clustering, the Half-Space Mass version of K-means overcomes three weaknesses of K-means. (A simplified sketch of the half-space mass idea follows this record.)
    Print ISSN: 0885-6125
    Electronic ISSN: 1573-0565
    Topics: Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
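    A simplified Monte-Carlo sketch of the half-space mass idea: average, over random one-dimensional projections and random split points, the fraction of the data lying on the same side of the split as the query point. The parameter choices, the split-range expansion and the absence of subsampling below are illustrative simplifications, not the authors' exact estimator.
```python
import numpy as np

def half_space_mass(query, data, n_trials=500, lam=0.5, rng=None):
    """Simplified Monte-Carlo estimate of half-space mass for one query point.

    Each trial projects the data onto a random direction, draws a random split
    inside the (slightly expanded) projected range, and records the fraction of
    points falling on the same side of the split as the query point.  The
    estimate is the average of these fractions over all trials.
    """
    rng = np.random.default_rng(rng)
    n, d = data.shape
    masses = np.empty(n_trials)
    for t in range(n_trials):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        proj = data @ u
        q = query @ u
        lo, hi = proj.min(), proj.max()
        span = hi - lo
        split = rng.uniform(lo - lam * span, hi + lam * span)
        frac_left = (proj < split).mean()
        masses[t] = frac_left if q < split else 1.0 - frac_left
    return masses.mean()

# Points near the centre of a cloud receive a higher mass than outlying points.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 2))
print(half_space_mass(np.array([0.0, 0.0]), cloud, rng=1))   # larger value
print(half_space_mass(np.array([5.0, 5.0]), cloud, rng=1))   # smaller value
```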
  • 66
    Publication Date: 2015-08-05
    Description: Source: SoftwareX. Author(s): Gianmarco Alberti. Correspondence Analysis (CA) is an exploratory statistical technique frequently used in many research fields to graphically visualize the structure of contingency tables. Many programs, both commercial and free, perform CA, but none of them as yet provides a visual aid to the interpretation of the results. The ‘CAinterprTools’ package, designed to be used in the free R statistical environment, aims at filling that gap and targets novice-to-intermediate R users. Fifteen commands make it easy to obtain charts that help in (and are relevant to) the interpretation of the CA’s results, freeing the user from the need to inspect and scrutinize tabular CA outputs and to look up values and statistics on which further calculations are necessary. The package also implements tests to assess the significance of the input table’s total inertia and of individual dimensions.
    Electronic ISSN: 2352-7110
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 67
    Publication Date: 2015-08-06
    Description: Background: Biological data mining is a powerful tool that can provide a wealth of information about patterns of genetic and genomic biomarkers of health and disease. A potential disadvantage of data mining is the volume and complexity of the results, which can often be overwhelming. It is our working hypothesis that visualization methods can greatly enhance our ability to make sense of data mining results. More specifically, we propose that 3-D printing has an important role to play as a visualization technology in biological data mining. We provide here a brief review of 3-D printing along with a case study to illustrate how it might be used in a research setting. Results: We present as a case study a genetic interaction network associated with grey matter density, an endophenotype for late-onset Alzheimer’s disease, as a physical model constructed with a 3-D printer. The synergy or interaction effects of multiple genetic variants were represented through a color gradient of the physical connections between nodes. The digital gene-gene interaction network was then 3-D printed to generate a physical network model. Conclusions: The physical 3-D gene-gene interaction network provided an easily manipulated, intuitive and creative way to visualize the synergistic relationships between the genetic variants and grey matter density in patients with late-onset Alzheimer’s disease. We discuss the advantages and disadvantages of this novel method of biological data mining visualization.
    Electronic ISSN: 1756-0381
    Topics: Biology , Computer Science
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 68
    Publication Date: 2015-08-06
    Description: One of the greatest concerns related to the popularity of GPS-enabled devices and applications is the increasing availability of the personal location information generated by them and shared with application and service providers. Moreover, people tend to have regular routines and to be characterized by a set of “significant places”, thus making it possible to identify a user from his/her mobility data. In this paper we present a series of techniques for identifying individuals from their GPS movements. More specifically, we study the uniqueness of GPS information for three popular datasets, and we provide a detailed analysis of the discriminatory power of speed, direction and distance of travel. Most importantly, we present a simple yet effective technique for the identification of users from location information that is not included in the original dataset used for training, thus raising important privacy concerns for the management of location datasets.
    Electronic ISSN: 2193-1127
    Topics: Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    Publication Date: 2015-08-07
    Description: Background: According to the World Health Organization, 130–150 million people globally are chronically infected with hepatitis C virus. The virus is responsible for chronic hepatitis that may ultimately cause liver cirrhosis and death. The disease is progressive; however, antiviral treatment may slow down or stop its development. Therefore, it is important to estimate the severity of liver fibrosis for diagnostic, therapeutic and prognostic purposes. Liver biopsy provides a high-accuracy diagnosis, however it is a painful and invasive procedure. Recently, we have witnessed a surge of non-invasive tests (biological and physical ones) aiming to define the severity of liver fibrosis, but the commonly used FibroTest®, according to independent research, may in some cases have an accuracy lower than 50 %. In this paper a data mining and classification technique is proposed to determine the stage of liver fibrosis using easily accessible laboratory data. Methods: Research was carried out on archival records of routine laboratory blood tests (morphology, coagulation, biochemistry, protein electrophoresis) and histopathology records of liver biopsy as a reference value. As a result, a granular model is proposed that contains a series of intervals representing the influence of separate blood attributes on the liver fibrosis stage. The model determines the final diagnosis for a patient using an aggregation method and a voting procedure. The proposed solution is robust to missing or corrupted data. Results: The results were obtained on data from 290 patients with hepatitis C virus collected over 6 years. The model has been validated using training and test data. The overall accuracy of the solution is equal to 67.9 %. The intermediate liver fibrosis stages are hard to distinguish, partly due to the limited effectiveness of biopsy itself. Additionally, the method was verified against a dataset obtained from 365 patients with liver disease of various etiologies. The model proved to be robust to new data. Notably, the error rate in misclassification of the first stage and the last stage is below 6.5 % for all analyzed datasets. Conclusions: The proposed system supports the physician and defines the stage of liver fibrosis in chronic hepatitis C. The biggest advantage of the solution is its human-centric approach using intervals, which can be verified by a specialist before giving the final decision. Moreover, it is robust to missing data. The system can be used as a powerful support tool for diagnosis in routine care.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Digital circuits are expected to increasingly suffer from more hard faults due to technology scaling. Especially, a single hard fault in ALU (Arithmetic Logic Unit) might lead to a total failure in processors or significantly reduce their performance. To address these increasingly important problems, we propose a novel cost-efficient fault-tolerant mechanism for the ALU, called LIZARD. LIZARD employs two half-word ALUs, instead of a single full-word ALU, to perform computations with concurrent fault detection. When a fault is detected, the two ALUs are partitioned into four quarter-word ALUs. After diagnosing and isolating a faulty quarter-word ALU, LIZARD continues its operation using the remaining ones, which can detect and isolate another fault. Even though LIZARD uses narrow ALUs for computations, it adds negligible performance overhead through exploiting predictability of the results in the arithmetic computations. We also present the architectural modifications when employing LIZARD for scalar as well as superscalar processors. Through comparative evaluation, we demonstrate that LIZARD outperforms other competitive fault-tolerant mechanisms in terms of area, energy consumption, performance and reliability.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 71
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Information searches are the most common application within social networks. Normally, the social network is modeled as a network graph consisting of nodes representing users within the network and edges representing relationships between users (in the rest of the paper, unless otherwise specified, we will use the terms “user” and “node” interchangeably). Choosing appropriate nodes to form an auxiliary structure for supporting effective query message spreading can reduce troublesome repeated queries. To accomplish this, a hybrid search (HS) scheme is proposed. If the query message is received by a node belonging to the auxiliary structure constructed by dynamic weighted distributed label clustering (DW-DLC), it is flooded to all neighbors of the visited node; otherwise, it is forwarded to one neighbor of the visited node. The DW-DLC-based auxiliary structure can accelerate the process of obtaining the required information within the network. The simulation results show that the HS+DW-DLC scheme can reduce the average searching delay time, even in a required-information-scarce social network. In addition, the proposed scheme generates a relatively low amount of repeated messages, reducing repeated queries to social network users.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: This paper presents a derivation of four radix-2 division algorithms by digit recurrence. Each division algorithm selects a quotient digit from the over-redundant digit set {−2, −1, 0, 1, 2}, and the selection of each quotient digit depends only on the two most-significant digits of the partial remainder in a redundant representation. Two algorithms use a two’s complement representation for the partial remainder and carry-save additions, and the other two algorithms use a binary signed-digit representation for the partial remainder and carry-free additions. Three of the algorithms are novel; the fourth has been presented before. Results from the synthesized netlists show that two of our fastest algorithms achieve an improvement of 10 percent in latency per iteration over a standard radix-2 SRT algorithm, at the cost of 36 percent more power and 50 percent more area. (A sketch of the baseline radix-2 digit recurrence follows this record.)
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
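    For orientation, the sketch below implements the conventional radix-2 SRT recurrence with the digit set {−1, 0, 1}, i.e. the baseline the paper compares against, not the over-redundant {−2, …, 2} algorithms derived in the paper. The normalization assumptions and variable names are illustrative.
```python
def srt_radix2(x, d, n_bits=32):
    """Baseline radix-2 SRT division by digit recurrence, digit set {-1, 0, 1}.

    Assumes d is normalized to [0.5, 1) and 0 <= x < d, so the quotient x / d
    lies in [0, 1).  Each iteration selects a quotient digit by comparing the
    shifted partial remainder with +/- 1/2 (hardware would examine only a few
    most-significant digits of a redundant representation) and then updates
    w <- 2*w - q*d.  Returns (quotient, final partial remainder).
    """
    assert 0.5 <= d < 1.0 and 0.0 <= x < d
    w = x
    q_digits = []
    for _ in range(n_bits):
        w2 = 2.0 * w
        if w2 >= 0.5:
            q = 1
        elif w2 < -0.5:
            q = -1
        else:
            q = 0
        w = w2 - q * d
        q_digits.append(q)
    # Convert the signed-digit quotient to an ordinary binary value.
    quotient = sum(q * 2.0 ** -(j + 1) for j, q in enumerate(q_digits))
    return quotient, w

q, w = srt_radix2(0.3, 0.75)
print(q, abs(q - 0.3 / 0.75) < 1e-9)   # quotient close to 0.4
```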
  • 73
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: We present WaFS, a user-level file system, and a related scheduling algorithm for scientific workflow computation in the cloud. WaFS’s primary design goal is to automatically detect and gather the explicit and implicit data dependencies between workflow jobs, rather than high-performance file access. Using WaFS’s data, a workflow scheduler can either make effective cost-performance tradeoffs or improve storage utilization. Proper resource provisioning and storage utilization on pay-as-you-go clouds can be more cost effective than the use of resources in traditional HPC systems. WaFS and the scheduler control the number of concurrent workflow instances at runtime so that the storage is well used, while the total makespan (i.e., turnaround time for a workload) is not severely compromised. We describe the design and implementation of WaFS and the new workflow scheduling algorithm based on our previous work. We present empirical evidence of the acceptable overheads of our prototype WaFS and describe a simulation-based study, using representative workflows, to show the makespan benefits of our WaFS-enabled scheduling algorithm.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 74
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Given a database table with records that can be ranked, an interesting problem is to identify selection conditions for the table, which are qualified by an input record and render its ranking as high as possible among the qualifying tuples. In this paper, we study this standing maximization problem, which finds application in object promotion and characterization. After showing the hardness of the problem, we propose greedy methods, which are experimentally shown to achieve high accuracy compared to exhaustive enumeration, while scaling very well to the problem input size. Our contributions include a linear-time algorithm for determining the optimal selection range for an ordinal attribute and techniques for choosing and prioritizing the most promising selection predicates to apply. Experiments on real datasets confirm the effectiveness and efficiency of our techniques.
    Print ISSN: 1041-4347
    Electronic ISSN: 1558-2191
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Some fairly recent research has focused on providing XACML-based solutions for dynamic privacy policy management. In this regard, a number of works have provided enhancements to the performance of the XACML policy enforcement point (PEP) component, but very few have focused on enhancing the accuracy of that component. This paper improves the accuracy of an XACML PEP by filling some gaps in the existing works, in particular by dynamically incorporating user access context into the privacy policy decision and its enforcement. We provide an XACML-based implementation of a dynamic privacy policy management framework and an evaluation of the applicability of our system in comparison to some of the existing approaches.
    Print ISSN: 1041-4347
    Electronic ISSN: 1558-2191
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 76
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: This paper first introduces pattern aided regression (PXR) models, a new type of regression model designed to represent accurate and interpretable prediction models. This was motivated by two observations: (1) Regression modeling applications often involve complex diverse predictor-response relationships, which occur when the optimal regression models (of a given regression model type) fitting two or more distinct logical groups of data are highly different. (2) State-of-the-art regression methods are often unable to adequately model such relationships. This paper defines PXR models using several patterns and local regression models, which respectively serve as logical and behavioral characterizations of distinct predictor-response relationships. The paper also introduces a contrast pattern aided regression (CPXR) method to build accurate PXR models. In experiments, the PXR models built by CPXR are very accurate in general, often outperforming state-of-the-art regression methods by large margins. Usually using (a) around seven simple patterns and (b) linear local regression models, those PXR models are easy to interpret; in fact, their complexity is only slightly higher than that of (piecewise) linear regression models and is significantly lower than that of traditional ensemble-based regression models. CPXR is especially effective for high-dimensional data. The paper also discusses how to use the CPXR methodology for analyzing prediction models and correcting their prediction errors.
    Print ISSN: 1041-4347
    Electronic ISSN: 1558-2191
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 77
    Publication Date: 2015-08-07
    Description: This paper presents an anomaly detection model that is granular and distributed to accurately and efficiently identify sensed data anomalies within wireless sensor networks. A more decentralised mechanism is introduced with wider use of in-network processing on a hierarchical sensor node topology, resulting in a robust framework for dynamic data domains. This efficiently addresses the big data issue that is encountered in large-scale industrial sensor network applications. Data vectors on each node’s observation domain are first partitioned using an unsupervised approach that adapts to dynamic data streams using cumulative point-wise entropy and average relative density. Second-order statistical analysis applied to the average relative densities and mean entropy values is then used to differentiate anomalies through robust and adaptive thresholds that are responsive to a dynamic environment. Anomaly detection is then performed in a non-parametric and non-probabilistic manner over the different network tiers in the hierarchical topology, offering increased granularity for evaluation. Experiments were performed extensively using both real and artificial data distributions representative of different dynamic and multi-density observation domains. Results demonstrate detection accuracies of more than 94 percent, accompanied by a desirable reduction of more than 85 percent in communication costs when compared to existing centralized methods.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 78
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: We analyze models for predicting the probability of a strikeout for a batter/pitcher matchup in baseball using player descriptors that can be estimated accurately from small samples. We start with the log5 model, which has been used extensively for describing matchups in sports (the formula is sketched after this record). Log5 is a special case of a logit model, and we use constrained logistic regression over nearly one million matchup observations to assess the use of the log5 explanatory variables for this application. We also show that a batter/pitcher ground ball rate interaction variable is significant for the prediction of strikeout probability, and we provide physical justification for the inclusion of this variable in the model. We quantify the differences among the models and show that batters control the majority of the variance in predicted strikeout rate.
    Print ISSN: 1041-4347
    Electronic ISSN: 1558-2191
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
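    The log5 matchup formula referred to above is standard and can be written as a short function; the example rates are illustrative, not taken from the paper's data.
```python
def log5(batter_so_rate, pitcher_so_rate, league_so_rate):
    """Standard log5 estimate of the strikeout probability for a matchup.

    b, p, l are the batter's, pitcher's and league-average strikeout rates.
    """
    b, p, l = batter_so_rate, pitcher_so_rate, league_so_rate
    num = (b * p) / l
    den = num + ((1 - b) * (1 - p)) / (1 - l)
    return num / den

# A high-strikeout batter facing a high-strikeout pitcher in a league that
# strikes out 20 percent of the time (illustrative numbers).
print(round(log5(0.25, 0.30, 0.20), 3))   # ~0.364
```
    In log-odds form this is logit(P) = logit(b) + logit(p) − logit(l), which is exactly why log5 is a special case of a logit model, as the abstract notes.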
  • 79
    Publication Date: 2015-08-07
    Description: This study proposes a quantitative measurement of the split of the second heart sound (S2) based on nonstationary signal decomposition to deal with overlaps and energy modeling of the subcomponents of S2. The second heart sound includes the aortic (A2) and pulmonic (P2) closure sounds. However, split detection is obscured due to A2-P2 overlap and the low energy of P2. To identify such a split, the HVD method is used to decompose S2 into a number of components while preserving the phase information. Further, A2s and P2s are localized using the smoothed pseudo Wigner-Ville distribution followed by the reassignment method. Finally, the split is calculated by taking the differences between the means of the time indices of A2s and P2s. Experiments on a total of 33 clips of S2 signals are performed for evaluation of the method. The mean ± standard deviation of the split is 34.7 ± 4.6 ms. The method measures the split efficiently, even when the A2-P2 overlap is ≤ 20 ms and the normalized peak temporal ratio of P2 to A2 is low (≥ 0.22). The proposed method further demonstrates its robustness in terms of split detectability (SDT), the aptness of split detection through detecting P2s, measuring up to 96 percent. Such findings reveal the effectiveness of the method as competitive against the other baselines, especially for A2-P2 overlaps and low-energy P2.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 80
    Publication Date: 2015-08-07
    Description: Post-acquisition denoising of magnetic resonance (MR) images is an important step to improve any quantitative measurement of the acquired data. In this paper, assuming a Rician noise model, a new filtering method based on the linear minimum mean square error (LMMSE) estimation is introduced, which employs the self-similarity property of the MR data to restore the noise-free signal (the generic LMMSE estimator is written out after this record). This method takes into account the structural characteristics of images and the Bayesian mean square error (Bmse) of the estimator to address the denoising problem. In general, a twofold data processing approach is developed; first, the noisy MR data is processed using a patch-based \(L_2\)-norm similarity measure to provide the primary set of samples required for the estimation process. Afterwards, the Bmse of the estimator is derived as the optimization function to analyze the pre-selected samples and minimize the error between the estimated and the underlying signal. Compared to the LMMSE method and also its recently proposed SNR-adapted realization (SNLMMSE), the optimized way of choosing the samples together with the automatic adjustment of the filtering parameters leads to a more robust estimation performance with our approach. Experimental results show the competitive performance of the proposed method in comparison with related state-of-the-art methods.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
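    For reference, the generic linear minimum mean square error estimator and its Bayesian mean square error, on which such filters build, take the standard form below; the Rician-specific moment expressions used in the paper are not reproduced here.
\[
\hat{x} \;=\; \mathbb{E}[x] + C_{xy}\,C_{yy}^{-1}\bigl(y - \mathbb{E}[y]\bigr),
\qquad
\mathrm{Bmse}(\hat{x}) \;=\; C_{xx} - C_{xy}\,C_{yy}^{-1}\,C_{yx},
\]
    where \(y\) denotes the observed (noisy) samples, \(x\) the underlying signal, and the \(C\) terms the corresponding covariance and cross-covariance matrices.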
  • 81
    Publication Date: 2015-08-07
    Description: Large-scale ad hoc analytics of genomic data is popular using the R programming language, supported by over 700 software packages provided by Bioconductor. More recently, analytical jobs are benefitting from on-demand computing and storage, their scalability and their low maintenance cost, all of which are offered by the cloud. While biologists and bioinformaticists can take an analytical job and execute it on their personal workstations, it remains challenging to seamlessly execute the job on the cloud infrastructure without extensive knowledge of the cloud dashboard. This paper explores how analytical jobs can be executed on the cloud with minimum effort, and how both the resources and the data required by a job can be managed. An open-source, light-weight framework for executing R scripts using Bioconductor packages, referred to as ‘RBioCloud’, is designed and developed. RBioCloud offers a set of simple command-line tools for managing the cloud resources, the data and the execution of the job. Three biological test cases validate the feasibility of RBioCloud. The framework is available from http://www.rbiocloud.com .
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    Publication Date: 2015-08-07
    Description: Of major interest to translational genomics is intervention in gene regulatory networks (GRNs) to affect cell behavior; in particular, to alter pathological phenotypes. Owing to the complexity of GRNs, accurate network inference is practically challenging and GRN models often contain considerable amounts of uncertainty. Considering the cost and time required for conducting biological experiments, it is desirable to have a systematic method for prioritizing potential experiments so that an experiment can be chosen to optimally reduce network uncertainty. Moreover, from a translational perspective it is crucial that GRN uncertainty be quantified and reduced in a manner that pertains to the operational cost that it induces, such as the cost of network intervention. In this work, we utilize the concept of mean objective cost of uncertainty (MOCU) to propose a novel framework for optimal experimental design (the generic form of MOCU is written out after this record). In the proposed framework, potential experiments are prioritized based on the MOCU expected to remain after conducting the experiment. Based on this prioritization, one can select an optimal experiment with the largest potential to reduce the pertinent uncertainty present in the current network model. We demonstrate the effectiveness of the proposed method via extensive simulations based on synthetic and real gene regulatory networks.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
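    In generic notation (which may differ from the paper's), the mean objective cost of uncertainty over an uncertainty class \(\Theta\) of candidate networks is
\[
\mathrm{MOCU}(\Theta) \;=\; \mathbb{E}_{\theta}\!\left[\, C_{\theta}\bigl(\psi^{\mathrm{rob}}_{\Theta}\bigr) - C_{\theta}\bigl(\psi^{\mathrm{opt}}_{\theta}\bigr) \right],
\qquad
\psi^{\mathrm{rob}}_{\Theta} \;=\; \arg\min_{\psi}\; \mathbb{E}_{\theta}\bigl[ C_{\theta}(\psi) \bigr],
\]
    where \(C_{\theta}(\psi)\) is the operational cost of applying intervention \(\psi\) when the true network is \(\theta\), and \(\psi^{\mathrm{opt}}_{\theta}\) is the intervention that would be optimal if \(\theta\) were known. Candidate experiments are then ranked by the MOCU expected to remain after observing their outcomes.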
  • 83
    Publication Date: 2015-08-07
    Description: A novel approach to the Contact Map Overlap (CMO) problem is proposed using the two-dimensional clusters present in the contact maps. Each protein is represented as a set of the non-trivial clusters of contacts extracted from its contact map. The approach involves finding matching regions between the two contact maps using an approximate 2D-pattern matching algorithm and a dynamic programming technique. These matched pairs of small contact maps are submitted in parallel to a fast heuristic CMO algorithm. The approach facilitates parallelization at this level since all the pairs of contact maps can be submitted to the algorithm in parallel. Then, a merge algorithm is used in order to obtain the overall alignment. As a proof of concept, MSVNS, a heuristic CMO algorithm, is used for global as well as local alignment. The divide-and-conquer approach is evaluated on two benchmark data sets, those of Skolnick and Ding et al. It is interesting to note that, along with saving time, better overlap is also obtained for certain protein folds.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Publication Date: 2015-08-07
    Description: Canalizing genes possess broad regulatory power over a wide swath of regulatory processes. On the other hand, it has been hypothesized that the phenomenon of intrinsically multivariate prediction (IMP) is associated with canalization. However, applications have relied on user-selectable thresholds on the IMP score to decide on the presence of IMP. A methodology is developed here that avoids arbitrary thresholds, by providing a statistical test for the IMP score. In addition, the proposed procedure allows the incorporation of prior knowledge if available, which can alleviate the problem of loss of power due to small sample sizes. The issue of multiplicity of tests is addressed by family-wise error rate (FWER) and false discovery rate (FDR) controlling approaches. The proposed methodology is demonstrated by experiments using synthetic and real gene-expression data from studies on melanoma and ionizing radiation (IR) responsive genes. The results with the real data identified DUSP1 and p53, two well-known canalizing genes associated with melanoma and IR response, respectively, as the genes with a clear majority of IMP predictor pairs. This validates the potential of the proposed methodology as a tool for discovery of canalizing genes from binary gene-expression data. The procedure is made available through an R package.
    Print ISSN: 1545-5963
    Electronic ISSN: 1557-9964
    Topics: Biology , Computer Science
    Published by Institute of Electrical and Electronics Engineers (IEEE) on behalf of The IEEE Computational Intelligence Society ; The IEEE Computer Society ; The IEEE Control Systems Society ; The IEEE Engineering in Medicine and Biology Society ; The Association for Computing Machinery.
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 85
    Public Library of Science (PLoS)
    Publication Date: 2015-08-07
    Description: by Eliseo Ferrante, Ali Emre Turgut, Edgar Duéñez-Guzmán, Marco Dorigo, Tom Wenseleers Division of labor is ubiquitous in biological systems, as evidenced by various forms of complex task specialization observed in both animal societies and multicellular organisms. Although clearly adaptive, the way in which division of labor first evolved remains enigmatic, as it requires the simultaneous co-occurrence of several complex traits to achieve the required degree of coordination. Recently, evolutionary swarm robotics has emerged as an excellent test bed to study the evolution of coordinated group-level behavior. Here we use this framework for the first time to study the evolutionary origin of behavioral task specialization among groups of identical robots. The scenario we study involves an advanced form of division of labor, common in insect societies and known as “task partitioning”, whereby two sets of tasks have to be carried out in sequence by different individuals. Our results show that task partitioning is favored whenever the environment has features that, when exploited, reduce switching costs and increase the net efficiency of the group, and that an optimal mix of task specialists is achieved most readily when the behavioral repertoires aimed at carrying out the different subtasks are available as pre-adapted building blocks. Nevertheless, we also show for the first time that self-organized task specialization could be evolved entirely from scratch, starting only from basic, low-level behavioral primitives, using a nature-inspired evolutionary method known as Grammatical Evolution. Remarkably, division of labor was achieved merely by selecting on overall group performance, and without providing any prior information on how the global object retrieval task was best divided into smaller subtasks. We discuss the potential of our method for engineering adaptively behaving robot swarms and interpret our results in relation to the likely path that nature took to evolve complex sociality and task specialization.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 86
    Publication Date: 2015-08-07
    Description: by Patrícia Santos-Oliveira, António Correia, Tiago Rodrigues, Teresa M Ribeiro-Rodrigues, Paulo Matafome, Juan Carlos Rodríguez-Manzaneque, Raquel Seiça, Henrique Girão, Rui D. M. Travasso Sprouting angiogenesis, where new blood vessels grow from pre-existing ones, is a complex process where biochemical and mechanical signals regulate endothelial cell proliferation and movement. Therefore, a mathematical description of sprouting angiogenesis has to take into consideration biological signals as well as relevant physical processes, in particular the mechanical interplay between adjacent endothelial cells and the extracellular microenvironment. In this work, we introduce the first phase-field continuous model of sprouting angiogenesis capable of predicting sprout morphology as a function of the elastic properties of the tissues and the traction forces exerted by the cells. The model is very compact, only consisting of three coupled partial differential equations, and has the clear advantage of a reduced number of parameters. This model allows us to describe sprout growth as a function of the cell-cell adhesion forces and the traction force exerted by the sprout tip cell. In the absence of proliferation, we observe that the sprout either achieves a maximum length or, when the traction and adhesion are very large, it breaks. Endothelial cell proliferation alters significantly sprout morphology, and we explore how different types of endothelial cell proliferation regulation are able to determine the shape of the growing sprout. The largest region in parameter space with well formed long and straight sprouts is obtained always when the proliferation is triggered by endothelial cell strain and its rate grows with angiogenic factor concentration. We conclude that in this scenario the tip cell has the role of creating a tension in the cells that follow its lead. On those first stalk cells, this tension produces strain and/or empty spaces, inevitably triggering cell proliferation. The new cells occupy the space behind the tip, the tension decreases, and the process restarts. Our results highlight the ability of mathematical models to suggest relevant hypotheses with respect to the role of forces in sprouting, hence underlining the necessary collaboration between modelling and molecular biology techniques to improve the current state-of-the-art.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 87
    Publication Date: 2015-08-08
    Description: by Sayed-Rzgar Hosseini, Aditya Barve, Andreas Wagner All biological evolution takes place in a space of possible genotypes and their phenotypes. The structure of this space defines the evolutionary potential and limitations of an evolving system. Metabolism is one of the most ancient and fundamental evolving systems, sustaining life by extracting energy from extracellular nutrients. Here we study metabolism’s potential for innovation by analyzing an exhaustive genotype-phenotype map for a space of \(10^{15}\) metabolisms that encodes all possible subsets of 51 reactions in central carbon metabolism (see the counting note after this record). Using flux balance analysis, we predict the viability of these metabolisms on 10 different carbon sources, which give rise to 1024 potential metabolic phenotypes. Although viable metabolisms with any one phenotype comprise a tiny fraction of genotype space, their absolute numbers exceed \(10^{9}\) for some phenotypes. Metabolisms with any one phenotype typically form a single network of genotypes that extends far or all the way through metabolic genotype space, where any two genotypes can be reached from each other through a series of single reaction changes. The minimal distance of genotype networks associated with different phenotypes is small, such that one can reach metabolisms with novel phenotypes – viable on new carbon sources – through one or few genotypic changes. Exceptions to these principles exist for those metabolisms whose complexity (number of reactions) is close to the minimum needed for viability. Increasing metabolic complexity enhances the potential for both evolutionary conservation and evolutionary innovation.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
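    The sizes quoted in the abstract follow directly from counting subsets (a simple check, not additional data from the paper):
\[
2^{51} \;=\; 2\,251\,799\,813\,685\,248 \;\approx\; 2.3\times 10^{15} \quad\text{genotypes (subsets of 51 reactions)},
\qquad
2^{10} \;=\; 1024 \quad\text{phenotypes (viability patterns over 10 carbon sources)}.
\]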
  • 88
    Publication Date: 2015-08-08
    Description: Motivation: Stem cell differentiation is largely guided by master transcriptional regulators, but it also depends on the expression of other types of genes, such as cell cycle genes, signaling genes, metabolic genes, trafficking genes, etc. Traditional approaches to understanding gene expression patterns across multiple conditions, such as principal components analysis or K-means clustering, can group cell types based on gene expression, but they do so without knowledge of the differentiation hierarchy. Hierarchical clustering can organize cell types into a tree, but in general this tree is different from the differentiation hierarchy itself. Methods: Given the differentiation hierarchy and gene expression data at each node, we construct a weighted Euclidean distance metric such that the minimum spanning tree with respect to that metric is precisely the given differentiation hierarchy. We provide a set of linear constraints that are provably sufficient for the desired construction and a linear programming approach to identify sparse sets of weights, effectively identifying genes that are most relevant for discriminating different parts of the tree (a minimal computational sketch of the metric-to-MST check follows this record). Results: We apply our method to microarray gene expression data describing 38 cell types in the hematopoiesis hierarchy, constructing a weighted Euclidean metric that uses just 175 genes. However, we find that there are many alternative sets of weights that satisfy the linear constraints. Thus, in the style of random-forest training, we also construct metrics based on random subsets of the genes and compare them to the metric of 175 genes. We then report on the selected genes and their biological functions. Our approach offers a new way to identify genes that may have important roles in stem cell differentiation. Contact: tperkins@ohri.ca Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
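    A minimal computational sketch of the metric-to-MST relationship: given expression profiles and a candidate non-negative weight vector, compute weighted Euclidean distances and check which tree the minimum spanning tree recovers. The data, the weights and the use of scipy are illustrative; the paper's linear-programming construction of the weights is not reproduced here.
```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def weighted_distance_matrix(X, w):
    """Pairwise weighted Euclidean distances d_ij = sqrt(sum_k w_k (x_ik - x_jk)^2)."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sqrt((w * diff ** 2).sum(axis=-1))

def mst_edges(D):
    """Edge set of the minimum spanning tree of a dense distance matrix."""
    T = minimum_spanning_tree(D).toarray()
    return {tuple(sorted(map(int, e))) for e in zip(*np.nonzero(T))}

# Illustrative expression profiles for four cell types (rows) over three genes;
# the third gene dominates the unweighted distances.
X = np.array([[0.0, 0.0, 0.0],
              [1.0, 0.1, 10.0],
              [2.0, 0.2, 0.5],
              [3.0, 0.3, 10.5]])

# Equal weights pair cell types by the third gene; down-weighting that gene
# recovers the chain 0-1-2-3 instead.
print(mst_edges(weighted_distance_matrix(X, np.array([1.0, 1.0, 1.0]))))  # {(0,2),(1,3),(1,2)}
print(mst_edges(weighted_distance_matrix(X, np.array([1.0, 1.0, 0.0]))))  # {(0,1),(1,2),(2,3)}
```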
  • 89
    Publication Date: 2015-08-08
    Description: Motivation: Principal component analysis (PCA) is a basic tool often used in bioinformatics for visualization and dimension reduction. However, it is known that PCA may not consistently estimate the true direction of maximal variability in high-dimensional, low sample size settings, which are typical for molecular data. Assuming that the underlying signal is sparse, i.e. that only a fraction of features contribute to a principal component (PC), this estimation consistency can be retained. Most existing sparse PCA methods use L1-penalization, i.e. the lasso , to perform feature selection. But, the lasso is known to lack variable selection consistency in high dimensions and therefore a subsequent interpretation of selected features can give misleading results. Results: We present S4VDPCA, a sparse PCA method that incorporates a subsampling approach, namely stability selection. S4VDPCA can consistently select the truly relevant variables contributing to a sparse PC while also consistently estimate the direction of maximal variability. The performance of the S4VDPCA is assessed in a simulation study and compared to other PCA approaches, as well as to a hypothetical oracle PCA that ‘knows’ the truly relevant features in advance and thus finds optimal, unbiased sparse PCs. S4VDPCA is computationally efficient and performs best in simulations regarding parameter estimation consistency and feature selection consistency. Furthermore, S4VDPCA is applied to a publicly available gene expression data set of medulloblastoma brain tumors. Features contributing to the first two estimated sparse PCs represent genes significantly over-represented in pathways typically deregulated between molecular subgroups of medulloblastoma. Availability and implementation: Software is available at https://github.com/mwsill/s4vdpca . Contact: m.sill@dkfz.de Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 90
    Publication Date: 2015-08-08
    Description: Motivation: Glycans play critical roles in many biological processes, and their structural diversity is key for specific protein-glycan recognition. Comparative structural studies of biological molecules provide useful insight into their biological relationships. However, most computational tools are designed for protein structure, and despite their importance, there is no currently available tool for comparing glycan structures in a sequence order- and size-independent manner. Results: A novel method, GS-align, is developed for glycan structure alignment and similarity measurement. GS-align generates possible alignments between two glycan structures through iterative maximum clique search and fragment superposition. The optimal alignment is then determined by the maximum structural similarity score, GS-score, which is size-independent. Benchmark tests against the Protein Data Bank (PDB) N -linked glycan library and PDB homologous/non-homologous N -glycoprotein sets indicate that GS-align is a robust computational tool to align glycan structures and quantify their structural similarity. GS-align is also applied to template-based glycan structure prediction and monosaccharide substitution matrix generation to illustrate its utility. Availability and implementation: http://www.glycanstructure.org/gsalign . Contact: wonpil@ku.edu Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 91
    Publication Date: 2015-08-08
    Description: Motivation: Impedance-based technologies are advancing methods for measuring proliferation of adherent cell cultures non-invasively and in real time. The analysis of the resulting data has so far been hampered by inappropriate computational methods and the lack of systematic data to evaluate the characteristics of the assay. Results: We used a commercially available system for impedance-based growth measurement (xCELLigence) and compared the reported cell index with data from microscopy. We found that the measured signal correlates linearly with the cell number throughout the time of an experiment with sufficient accuracy in subconfluent cell cultures. The resulting growth curves for various colon cancer cells could be well described with the empirical Richards growth model, which allows for extracting quantitative parameters (such as characteristic cycle times). We found that frequently used readouts like the cell index at a specific time or the area under the growth curve cannot be used to faithfully characterize growth inhibition. We propose to calculate the average growth rate over selected time intervals to accurately estimate time-dependent IC50 values of drugs from growth curves (an illustrative fitting sketch follows this record). Contact: nils.bluethgen@charite.de Supplementary information: Supplementary data are available at Bioinformatics online.
    Print ISSN: 1367-4803
    Electronic ISSN: 1460-2059
    Topics: Biology , Computer Science , Medicine
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
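    A minimal fitting sketch, assuming one common four-parameter Richards parameterization and synthetic cell-index data (the paper's exact parameterization, data and fitting settings may differ), together with the average-growth-rate readout computed as the slope of log(cell index) over a time interval.
```python
import numpy as np
from scipy.optimize import curve_fit

def richards(t, A, k, t0, nu):
    """A common four-parameter Richards growth curve with upper asymptote A."""
    return A * (1.0 + nu * np.exp(-k * (t - t0))) ** (-1.0 / nu)

# Synthetic, noisy 'cell index' measurements (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0, 72, 73)                                   # hours
ci = richards(t, A=10.0, k=0.15, t0=30.0, nu=0.8)
ci_noisy = ci + rng.normal(scale=0.1, size=t.size)

params, _ = curve_fit(richards, t, ci_noisy,
                      p0=[8.0, 0.1, 25.0, 1.0],
                      bounds=([1.0, 0.01, 0.0, 0.05], [100.0, 2.0, 100.0, 10.0]))
print("fitted (A, k, t0, nu):", np.round(params, 3))

def average_growth_rate(t, ci, t_start, t_end):
    """Average exponential growth rate over [t_start, t_end],
    estimated as the slope of log(cell index) versus time."""
    mask = (t >= t_start) & (t <= t_end)
    slope, _ = np.polyfit(t[mask], np.log(ci[mask]), 1)
    return slope

print("avg growth rate 24-48 h:", round(average_growth_rate(t, ci_noisy, 24, 48), 4))
```
    Repeating the average-growth-rate calculation across drug concentrations would then give the dose-response values from which a time-dependent IC50 can be estimated.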
  • 92
    Publication Date: 2015-08-18
    Description: The identification of distinct seismic regions and the extraction of their features could provide valuable information towards a better understanding of the underlying physics, the generation mechanism and the behavior of the seismic phenomenon. This research paper works in that direction and unveils the potential presence of a distinct seismic region located between the Ionian Sea and the Cretan Sea, extending south-east of Peloponnesus, Greece. This observation has emerged from the development and application of an expert-knowledge-based spatio-temporal clustering algorithm to seismic data provided by the Geodynamics Institute of Athens, and is further supported by geological observations, which have unveiled the presence of two parallel groups of underground faults beneath the newly discovered potential distinct seismic region. The overall spatio-temporal clustering results throughout the Greek vicinity are also in accordance with empirical observations reported in the literature and coincide with cartographic groups of underground faults of Greece.
    Print ISSN: 1865-0473
    Electronic ISSN: 1865-0481
    Topics: Geosciences , Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 93
    Publication Date: 2015-08-19
    Description: by Pengxing Cao, Ada W. C. Yan, Jane M. Heffernan, Stephen Petrie, Robert G. Moss, Louise A. Carolan, Teagan A. Guarnaccia, Anne Kelso, Ian G. Barr, Jodie McVernon, Karen L. Laurie, James M. McCaw Influenza is an infectious disease that primarily attacks the respiratory system. Innate immunity provides both a very early defense to influenza virus invasion and an effective control of viral growth. Previous modelling studies of virus–innate immune response interactions have focused on infection with a single virus and, while improving our understanding of viral and immune dynamics, have been unable to effectively evaluate the relative feasibility of different hypothesised mechanisms of antiviral immunity. In recent experiments, we have applied consecutive exposures to different virus strains in a ferret model, and demonstrated that viruses differed in their ability to induce a state of temporary immunity or viral interference capable of modifying the infection kinetics of the subsequent exposure. These results imply that virus-induced early immune responses may be responsible for the observed viral hierarchy. Here we introduce and analyse a family of within-host models of re-infection viral kinetics which allow for different viruses to stimulate the innate immune response to different degrees. The proposed models differ in their hypothesised mechanisms of action of the non-specific innate immune response. We compare these alternative models in terms of their abilities to reproduce the re-exposure data. Our results show that 1) a model with viral control mediated solely by a virus-resistant state, as commonly considered in the literature, is not able to reproduce the observed viral hierarchy; 2) the synchronised and desynchronised behaviour of consecutive virus infections is highly dependent upon the interval between primary virus and challenge virus exposures and is consistent with virus-dependent stimulation of the innate immune response. Our study provides the first mechanistic explanation for the recently observed influenza viral hierarchies and demonstrates the importance of understanding the host response to multi-strain viral infections. Re-exposure experiments provide a new paradigm in which to study the immune response to influenza and its role in viral control.
    Print ISSN: 1553-734X
    Electronic ISSN: 1553-7358
    Topics: Biology , Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 94
    Publication Date: 2015-08-19
    Description: In a virtualized environment, different virtual networks can operate over the same physical infrastructure. Each virtual network has its own protocols and shares the available resources, thus highlighting the need for resource isolation mechanisms. Investigating the isolation mechanisms provided by FlowVisor, we have discovered previously unknown vulnerabilities regarding addressing-space isolation. We show that, in the presence of a malicious controller, FlowVisor’s isolation can be broken, allowing different attacks. This paper addresses these vulnerabilities by proposing an Action Slicing mechanism that allows FlowVisor to limit which actions can be used by each virtual network controller, thus extending the virtual network definition. Our experimental results show that using the proposed Action Slicing mechanism can effectively neutralize the discovered vulnerabilities.
    Print ISSN: 1867-4828
    Electronic ISSN: 1869-0238
    Topics: Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 95
    Publication Date: 2015-08-19
    Description: The uniaxial compressive strength (UCS) of rock is crucial for any type of project constructed in/on a rock mass. The test conducted to measure the UCS of rock is expensive and time consuming, and it imposes restrictions on the samples. For this reason, the UCS of rock may be estimated using simple rock tests such as the point load index ( I s(50) ), Schmidt hammer ( R n ) and p-wave velocity ( V p ) tests. To estimate the UCS of granitic rock as a function of relevant rock properties like R n , p-wave velocity and I s(50) , rock cores were collected from the face of the Pahang–Selangor fresh water tunnel in Malaysia. Afterwards, 124 samples were prepared and tested in accordance with relevant standards and the dataset was obtained. This dataset was then used to estimate the UCS of rock via three nonlinear prediction tools, namely non-linear multiple regression (NLMR), artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS). After running the mentioned models, and considering several performance indices including the coefficient of determination ( R 2 ), variance accounted for (VAF) and root mean squared error (RMSE) together with a simple ranking procedure (these indices are sketched after this record), the models were examined and the best prediction model was selected. The R 2 equal to 0.951 for the testing dataset suggests the superiority of the ANFIS model, while these values are 0.651 and 0.886 for the NLMR and ANN techniques, respectively. The results point out that the ANFIS model can be used for predicting the UCS of rocks with higher capacity in comparison with the others. While the developed model may be useful at a preliminary stage of design, it should be used with caution and only for the specified rock types.
    Print ISSN: 0177-0667
    Electronic ISSN: 1435-5663
    Topics: Computer Science , Mechanical Engineering, Materials Science, Production Engineering, Mining and Metallurgy, Traffic Engineering, Precision Mechanics , Technology
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
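    The three performance indices named above can be computed with the standard definitions below; the VAF convention of 100·(1 − Var(error)/Var(observed)) is the one usual in this literature and is assumed here rather than taken from the paper, and the numbers are illustrative.
```python
import numpy as np

def r2(y, y_hat):
    """Coefficient of determination R^2."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

def vaf(y, y_hat):
    """Variance accounted for, in percent."""
    return 100.0 * (1.0 - np.var(y - y_hat) / np.var(y))

def rmse(y, y_hat):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((y - y_hat) ** 2)))

# Illustrative UCS values (MPa) and model predictions, not data from the paper.
y     = np.array([80.0, 95.0, 110.0, 60.0, 130.0])
y_hat = np.array([85.0, 90.0, 105.0, 65.0, 125.0])
print(r2(y, y_hat), vaf(y, y_hat), rmse(y, y_hat))
```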
  • 96
    Publication Date: 2015-08-20
    Description: Background: Health decision-making requires evidence from high-quality data. As one example, the Discharge Abstract Database (DAD) compiles data from the majority of Canadian hospitals to form one of the most comprehensive and highly regarded administrative health databases available for health research, internationally. However, despite the success of this and other administrative health data resources, little is known about their history or the factors that have led to their success. The purpose of this paper is to provide an historical overview of Canadian administrative health data for health research, to contribute to the institutional memory of this field. Methods: We conducted a qualitative content analysis of approximately 20 key sources to construct an historical narrative of administrative health data in Canada. Specifically, we searched for content related to key events, individuals, challenges, and successes in this field over time. Results: In Canada, administrative health data for health research has developed in tandem with provincial research centres. Interestingly, the lessons learned from this history align with the original recommendations of the 1964 Royal Commission on Health Services: (1) standardization, and (2) centralization of data resources, that is (3) facilitated through governmental financial support. Conclusions: The overview history provided here illustrates the need for longstanding partnerships between government and academia, for classification, terminology and standardization are time-consuming and ever-evolving processes. This paper will be of interest to those who work with administrative health data, and also for countries that are looking to build or improve upon their use of administrative health data for decision-making.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 97
    Publication Date: 2015-08-20
    Description: The paper explores the use of psychometric analysis based on latent trait theory to study the quality of information propagation in online social networks. The collective intelligence of the network's users can be used to determine the credibility of information. We use the latent ability of users to distinguish between true information and misinformation as a measure of social computing in the network. Using the repropagation features available in these networks as an affirmation of the credibility of information, we build a dichotomous item response matrix, which is evaluated using different latent trait models. This enables us to detect the presence of misinformation and also to evaluate users' trust in the sources of information. The trust between users and sources of information is further used to construct a polytomous matrix. These matrices are evaluated using polytomous latent trait models to characterize the types of trust and to isolate possible collusion among users to spread misinformation. We present experimental results of psychometric analysis carried out on data sets obtained from Twitter to support our claims. (See the sketch after this entry.)
    Print ISSN: 0010-485X
    Electronic ISSN: 1436-5057
    Topics: Computer Science
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
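    As a minimal sketch of the latent-trait idea in the abstract above, the snippet below builds a small hypothetical dichotomous repropagation matrix (users × information items) and fits a Rasch model by joint maximum likelihood. The paper's specific latent trait models, the polytomous extension and the Twitter data are not reproduced here.
```python
"""Hedged sketch: a dichotomous item-response matrix (1 = user repropagated
the item) fitted with a simple Rasch model.  The matrix is hypothetical."""
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic function

# Hypothetical repropagation matrix: rows = users, columns = pieces of information.
R = np.array([
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 0],
])
n_users, n_items = R.shape

def neg_log_lik(params):
    """Rasch model: P(R_ij = 1) = logistic(theta_i - beta_j)."""
    theta = params[:n_users]   # user ability (to endorse credible items)
    beta = params[n_users:]    # item "difficulty" (resistance to repropagation)
    p = expit(theta[:, None] - beta[None, :])
    eps = 1e-9
    return -np.sum(R * np.log(p + eps) + (1 - R) * np.log(1 - p + eps))

# Joint maximum-likelihood estimate.  (In practice one also fixes the scale,
# e.g. by centring beta; omitted here for brevity.)
res = minimize(neg_log_lik, np.zeros(n_users + n_items), method="L-BFGS-B")
theta_hat, beta_hat = res.x[:n_users], res.x[n_users:]
print("user abilities:   ", np.round(theta_hat, 2))
print("item difficulties:", np.round(beta_hat, 2))
# Items whose parameters deviate strongly from the rest would be the ones
# inspected as possible misinformation in an analysis of the kind described.
```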
  • 98
    Publication Date: 2015-08-21
    Description: Background: Membrane proteins represent over 25 % of human protein genes and account for more than 60 % of drug targets owing to their accessibility from the extracellular environment. The increasing number of available crystal structures of these proteins in the Protein Data Bank permits an initial estimation of their structural properties. Description: We have developed two web servers, TMalphaDB for α-helix bundles and TMbetaDB for β-barrels, to analyse the growing repertoire of available crystal structures of membrane proteins. TMalphaDB and TMbetaDB permit searching for specific sequence motifs in a non-redundant structure database of transmembrane segments and quantifying structural parameters such as the ϕ and ψ backbone dihedral angles, the χ1 side-chain torsion angle, unit bend and unit twist. Conclusions: The structural information offered by TMalphaDB and TMbetaDB permits quantification of the structural distortions induced by specific sequence motifs and elucidation of their role in the 3D structure. This structural information has direct implications for homology modeling of the growing number of membrane protein sequences lacking an experimental structure. TMalphaDB and TMbetaDB are freely available at http://lmc.uab.cat/TMalphaDB and http://lmc.uab.cat/TMbetaDB. (See the sketch after this entry.)
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
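    One of the structural parameters the abstract above says the servers quantify is the backbone dihedral angle. The snippet below is a generic sketch of computing a dihedral angle from four atomic coordinates with the standard vector formula; the coordinates are made up, and this is not TMalphaDB/TMbetaDB code.
```python
"""Hedged sketch: dihedral angle from four atomic positions (e.g. phi from
C(i-1), N, CA, C).  Coordinates below are hypothetical."""
import numpy as np

def dihedral(p0, p1, p2, p3):
    """Dihedral angle in degrees defined by four points."""
    b0 = p1 - p0
    b1 = p2 - p1
    b2 = p3 - p2
    n1 = np.cross(b0, b1)                      # normal of the first plane
    n2 = np.cross(b1, b2)                      # normal of the second plane
    m1 = np.cross(n1, b1 / np.linalg.norm(b1)) # in-plane reference axis
    x = np.dot(n1, n2)
    y = np.dot(m1, n2)
    return np.degrees(np.arctan2(y, x))

# Hypothetical coordinates (angstroms) of C(i-1), N, CA, C for one residue.
c_prev = np.array([1.0, 0.0, 0.0])
n      = np.array([2.4, 0.3, 0.1])
ca     = np.array([3.2, 1.5, 0.4])
c      = np.array([4.6, 1.4, 0.9])
print(f"phi = {dihedral(c_prev, n, ca, c):.1f} deg")
```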
  • 99
    Publication Date: 2015-08-21
    Description: Background: Scoring DNA sequences against position weight matrices (PWMs) is a widely adopted method to identify putative transcription factor binding sites. While common bioinformatics tools produce scores that can reflect the binding strength between a specific transcription factor and the DNA, these scores are not directly comparable between different transcription factors. Other methods, including p-value based approaches (Touzet H, Varré J-S. Efficient and accurate p-value computation for position weight matrices. Algorithms Mol Biol. 2007;2:15), provide more rigorous ways to identify potential binding sites, but their results are difficult to interpret in terms of binding energy, which is essential for modeling transcription factor binding dynamics and enhancer activities. Results: Here, we provide two different ways to find the scaling parameter λ that allows binding energy to be inferred from a PWM score. The first approach uses a PWM and background genomic sequence as input to estimate λ for a specific transcription factor; we applied it to show that the λ distributions of different transcription factor families correspond to their DNA binding properties. The second method reliably converts λ between different PWMs of the same transcription factor, which allows direct comparison of PWMs generated by different approaches. Conclusion: These two approaches provide computationally efficient ways to scale PWM scores and estimate the strength of transcription factor binding sites in quantitative studies of binding dynamics. Their results are consistent with each other and with previous reports in most cases. (See the sketch after this entry.)
    Electronic ISSN: 1471-2105
    Topics: Biology , Computer Science
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
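    As a minimal sketch of the scoring step the abstract above builds on, the snippet below computes a log-odds PWM score for a short DNA window and rescales it with a parameter λ. The frequency matrix, background, λ value and the simple score/λ conversion are illustrative assumptions; the paper's two λ-estimation procedures are not reproduced here.
```python
"""Hedged sketch: log-odds PWM scoring of a DNA window and a simple rescaling
of the score by an assumed lambda.  All values are hypothetical."""
import numpy as np

BASES = "ACGT"
# Hypothetical position frequency matrix (4 positions x 4 bases); rows sum to 1.
pfm = np.array([
    [0.70, 0.10, 0.10, 0.10],
    [0.10, 0.10, 0.70, 0.10],
    [0.10, 0.70, 0.10, 0.10],
    [0.25, 0.25, 0.25, 0.25],
])
background = np.array([0.25, 0.25, 0.25, 0.25])
pwm = np.log2(pfm / background)  # log-odds PWM

def pwm_score(seq):
    """Sum the log-odds contribution of the observed base at each position."""
    return sum(pwm[i, BASES.index(b)] for i, b in enumerate(seq))

lam = 0.7  # hypothetical scaling parameter for this factor

for window in ("AGCT", "TGCA"):
    s = pwm_score(window)
    # Assumed conversion: an energy-like quantity proportional to score / lambda.
    print(f"{window}: score = {s:.2f} bits, scaled score = {s / lam:.2f}")
```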
  • 100
    Publication Date: 2015-08-21
    Description: Without baseline information recorded prior to damage, damage must be detected from measured data alone. Most non-baseline damage detection methods are sensitive to external noise and have difficulty detecting damage when no sensor is located close to it. The purpose of this study is to propose a non-baseline damage detection method that requires only a few measurements and is less sensitive to noise. A set of pseudo-reference data established at the measurement instant is compared with another set of measurement data taken on the structure after additional damage has been introduced at a known location. The damage is located where the difference between the two response data sets changes abruptly. Compared with the global-deviation method, the proposed method reduces the effect of noise and can detect damage with only a few sensors. The experimental work also investigates the relative sensitivity of accelerometers and strain gages in detecting damage. The proposed method is verified through a numerical application and an experiment. (See the sketch after this entry.)
    Print ISSN: 0177-0667
    Electronic ISSN: 1435-5663
    Topics: Computer Science , Mechanical Engineering, Materials Science, Production Engineering, Mining and Metallurgy, Traffic Engineering, Precision Mechanics , Technology
    Published by Springer
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
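    As a minimal sketch of the comparison step described in the abstract above, the snippet below subtracts a synthetic pseudo-reference response from a response containing additional damage and locates the abrupt change in their difference via its second difference. The signals and the localization rule are illustrative assumptions, not the paper's pseudo-reference construction or detection procedure.
```python
"""Hedged sketch: locate damage where the difference between two response
data sets changes abruptly.  All signals below are synthetic."""
import numpy as np

x = np.linspace(0.0, 1.0, 21)        # normalized sensor positions along the member
pseudo_ref = np.sin(np.pi * x)       # pseudo-reference response (synthetic)
damaged = pseudo_ref.copy()
damaged[8:] += 0.2 * (x[8:] - x[8])  # slope change beyond the damage at x ~ 0.4
damaged += np.random.default_rng(1).normal(0, 0.0005, x.size)  # measurement noise

diff = damaged - pseudo_ref
# An abrupt change in the difference shows up as a spike in its second difference.
curvature = np.abs(np.diff(diff, n=2))
damage_index = np.argmax(curvature) + 1  # +1 maps back to the original grid index
print(f"estimated damage location: x = {x[damage_index]:.2f}")
```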