ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filters

Collection
  • Articles (1,383)

Publisher
  • Molecular Diversity Preservation International (1,383)

Publication period
  • 2020-2022 (496)
  • 2015-2019 (804)
  • 2005-2009 (83)
  • 1975-1979
  • 1945-1949

Journal issue
  • Algorithms. 2008; 1(1): 1-1. Published 2008 Sep 04. doi: 10.3390/a1010001.  (1)
  • Algorithms. 2008; 1(1): 2-29. Published 2008 Sep 12. doi: 10.3390/a1010002.  (1)
  • Algorithms. 2008; 1(1): 30-42. Published 2008 Sep 19. doi: 10.3390/a1010030.  (1)
  • Algorithms. 2008; 1(2): 43-51. Published 2008 Oct 09. doi: 10.3390/a1020043.  (1)
  • Algorithms. 2008; 1(2): 52-68. Published 2008 Oct 30. doi: 10.3390/a1020052.  (1)
  • Algorithms. 2008; 1(2): 69-99. Published 2008 Nov 03. doi: 10.3390/a1020069.  (1)
  • Algorithms. 2008; 1(2): 100-129. Published 2008 Nov 24. doi: 10.3390/a1020100.  (1)
  • Algorithms. 2008; 1(2): 130-152. Published 2008 Dec 03. doi: 10.3390/a1020130.  (1)
  • Algorithms. 2008; 1(2): 153-182. Published 2008 Dec 15. doi: 10.3390/a1020153.  (1)
  • Algorithms. 2008; 1(2): 183-200. Published 2008 Dec 18. doi: 10.3390/a1020183.  (1)
  • Algorithms. 2009; 2(1): 1-18. Published 2009 Jan 13. doi: 10.3390/a2010001.  (1)
  • Algorithms. 2009; 2(1): 19-30. Published 2009 Jan 16. doi: 10.3390/a2010019.  (1)
  • Algorithms. 2009; 2(1): 108-120. Published 2009 Feb 02. doi: 10.3390/a2010108.  (1)
  • Algorithms. 2009; 2(1): 121-157. Published 2009 Feb 03. doi: 10.3390/a2010121.  (1)
  • Algorithms. 2009; 2(1): 158-182. Published 2009 Feb 09. doi: 10.3390/a2010158.  (1)
  • Algorithms. 2009; 2(1): 183-199. Published 2009 Feb 10. doi: 10.3390/a2010183.  (1)
  • Algorithms. 2009; 2(1): 200-214. Published 2009 Feb 10. doi: 10.3390/a2010200.  (1)
  • Algorithms. 2009; 2(1): 215-226. Published 2009 Feb 10. doi: 10.3390/a2010215.  (1)
  • Algorithms. 2009; 2(1): 227-246. Published 2009 Feb 17. doi: 10.3390/a2010227.  (1)
  • Algorithms. 2009; 2(1): 247-258. Published 2009 Feb 17. doi: 10.3390/a2010247.  (1)

Subject
  • Computer Science (1,383)
  • 1
    Publication date: 2020-08-29
    Description: Healthcare facilities are constantly deteriorating due to tight budgets allocated to the upkeep of building assets. This entails the need for improved deterioration modeling of such buildings in order to enforce a predictive maintenance approach that decreases the unexpected occurrence of failures and the corresponding downtime elapsed to repair or replace the faulty asset components. Currently, hospitals utilize subjective deterioration prediction methodologies that mostly rely on age as the sole indicator of degradation to forecast the useful lives of the building components. Thus, this paper aims at formulating a more efficient stochastic deterioration prediction model that integrates the latest observed condition into the forecasting procedure to overcome the subjectivity and uncertainties associated with the currently employed methods. This is achieved by means of developing a hybrid genetic algorithm-based fuzzy Markovian model that simulates the deterioration process given the scarcity of available data demonstrating the condition assessment and evaluation for such critical facilities. A nonhomogeneous transition probability matrix (TPM) based on fuzzy membership functions representing the condition, age and relative deterioration rate of the hospital systems is utilized to address the inherited uncertainties. The TPM is further calibrated by means of a genetic algorithm to circumvent the drawbacks of the expert-based models. A sensitivity analysis was carried out to analyze the possible changes in the output resulting from predefined modifications to the input parameters in order to ensure the robustness of the model. The performance of the developed deterioration prediction model is then validated through a comparison with a state-of-the-art stochastic model on real hospital datasets, and the results obtained from the developed model significantly outperformed the long-established Weibull distribution-based deterioration prediction methodology, with mean absolute errors of 1.405 and 9.852, respectively. Therefore, the developed model is expected to assist decision-makers in creating more efficient maintenance programs as well as more data-driven capital renewal plans. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
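    The following minimal numpy sketch illustrates only the core Markovian mechanism described above: propagating a component's condition distribution through a transition probability matrix (TPM). The three-state TPM and the initial condition are invented for illustration; the paper's fuzzy, GA-calibrated model is far richer.

        import numpy as np

        # Toy 3-state condition scale: 0 = good, 1 = fair, 2 = poor.
        # Hypothetical TPM for one inspection interval; each row sums to 1.
        tpm = np.array([
            [0.85, 0.12, 0.03],
            [0.00, 0.80, 0.20],
            [0.00, 0.00, 1.00],
        ])

        state = np.array([1.0, 0.0, 0.0])   # latest observed condition: "good"

        # Propagate the condition distribution over 10 inspection intervals.
        for interval in range(1, 11):
            state = state @ tpm
            expected = state @ np.array([0.0, 1.0, 2.0])
            print(f"interval {interval:2d}: P = {np.round(state, 3)}, "
                  f"expected condition = {expected:.2f}")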
  • 2
    Publication date: 2020-08-29
    Description: The harmonic closeness centrality measure associates, to each node of a graph, the average of the inverse of its distances from all the other nodes (by assuming that unreachable nodes are at infinite distance). This notion has been adapted to temporal graphs (that is, graphs in which edges can appear and disappear over time), and in this paper we address the question of finding the top-k nodes for this metric. Computing the temporal closeness for one node can be done in O(m) time, where m is the number of temporal edges. Therefore, computing the closeness exactly for all nodes, in order to find the ones with top closeness, would require O(nm) time, where n is the number of nodes. This time complexity is intractable for large temporal graphs. Instead, we show how this measure can be efficiently approximated by using a “backward” temporal breadth-first search algorithm and a classical sampling technique. Our experimental results show that the approximation is excellent for nodes with high closeness, allowing us to detect them in practice in a fraction of the time needed for computing the exact closeness of all nodes. We validate our approach with an extensive set of experiments. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
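    A minimal sketch of the measure itself, for a static directed graph (the paper's backward temporal BFS and sampling machinery are omitted): harmonic closeness as the average of inverse BFS distances, with unreachable nodes contributing 0 (i.e., 1/infinity).

        from collections import deque

        def harmonic_closeness(adj, v):
            """Average of 1/d(v, u) over all other nodes u; unreachable -> 0."""
            dist = {v: 0}
            queue = deque([v])
            while queue:
                x = queue.popleft()
                for y in adj.get(x, ()):
                    if y not in dist:
                        dist[y] = dist[x] + 1
                        queue.append(y)
            n = len(adj)
            return sum(1.0 / d for u, d in dist.items() if u != v) / (n - 1)

        # Toy graph as an adjacency dict; node 4 is unreachable from node 0.
        adj = {0: [1], 1: [2], 2: [0, 3], 3: [], 4: [0]}
        ranking = sorted(((harmonic_closeness(adj, v), v) for v in adj), reverse=True)
        print([(round(c, 3), v) for c, v in ranking])   # top-k = first k entries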
  • 3
    Publication date: 2020-07-16
    Description: High-order convective Cahn–Hilliard type equations describe the faceting of a growing surface, or the dynamics of phase transitions in ternary oil-water-surfactant systems. In this paper, we prove the well-posedness of the classical solutions for the Cauchy problem associated with this equation.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 4
    Publication date: 2020-07-08
    Description: We consider a rather general problem of nonparametric estimation of an uncountable set of probability density functions (p.d.f.’s) of the form f(x; r), where r is a non-random real variable ranging from R_1 to R_2. We put emphasis on the algorithmic aspects of this problem, since they are crucial for exploratory analysis of the big data needed for the estimation. A specialized learning algorithm, based on the 2D FFT, is proposed and tested on observations that allow estimating the p.d.f.’s of jet engine temperatures as a function of rotation speed. We also derive theoretical results concerning the convergence of the estimation procedure, which contain hints on selecting the parameters of the estimation algorithm.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 5
    Publication date: 2020-07-09
    Description: We report the design of a Spiking Neural Network (SNN) edge detector with biologically inspired neurons that has a conceptual similarity with both Hodgkin-Huxley (HH) model neurons and Leaky Integrate-and-Fire (LIF) neurons. The computation of the membrane potential, which is used to determine the occurrence or absence of spike events at each time step, is carried out by using the analytical solution to a simplified version of the HH neuron model. We find that the SNN-based edge detector detects more edge pixels in images than those obtained by a Sobel edge detector. We designed a pipeline for image classification with a low-exposure frame simulation layer, SNN edge detection layers as pre-processing layers and a Convolutional Neural Network (CNN) as a classification module. We tested this pipeline for the task of classification with the Digits dataset, which is available in MATLAB. We find that the SNN-based edge detection layer increases the image classification accuracy at lower exposure times, that is, for 1 < t < T/4, where t is the number of milliseconds in a simulated exposure frame and T is the total exposure time, with reference to a Sobel edge or Canny edge detection layer in the pipeline. These results pave the way for developing novel cognitive neuromorphic computing architectures for millisecond timescale detection and object classification applications using event or spike cameras. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
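    A minimal sketch of the leaky integrate-and-fire (LIF) neuron dynamics the abstract refers to: leaky integration of an input current with threshold-triggered spikes and reset. All constants are illustrative, not the paper's HH-derived values.

        import numpy as np

        def lif_spikes(current, dt=1.0, tau=10.0, v_rest=0.0,
                       v_thresh=1.0, v_reset=0.0):
            """Return spike times for a LIF neuron driven by `current`."""
            v, spikes = v_rest, []
            for t, i_in in enumerate(current):
                v += dt / tau * (-(v - v_rest) + i_in)   # leaky integration
                if v >= v_thresh:                        # threshold crossing
                    spikes.append(t)
                    v = v_reset                          # reset after spike
            return spikes

        rng = np.random.default_rng(0)
        current = 0.8 + 0.6 * rng.random(100)            # noisy input drive
        print("spike times:", lif_spikes(current))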
  • 6
    Publication date: 2020-07-05
    Description: Microscopic crowd simulation can help to enhance the safety of pedestrians in situations that range from museum visits to music festivals. To obtain a useful prediction, the input parameters must be chosen carefully. In many cases, a lack of knowledge or limited measurement accuracy adds uncertainty to the input. In addition, for meaningful parameter studies, we first need to identify the most influential parameters of our parametric computer models. The field of uncertainty quantification offers standardized and fully automatized methods that we believe to be beneficial for pedestrian dynamics. In addition, many methods come at a comparatively low cost, even for computationally expensive problems. This allows for their application to larger scenarios. We aim to identify and adapt fitting methods to microscopic crowd simulation in order to explore their potential in pedestrian dynamics. In this work, we first perform a variance-based sensitivity analysis using Sobol’ indices and then cross-check the results by a derivative-based measure, the activity scores. We apply both methods to a typical scenario in crowd simulation, a bottleneck. Because constrictions can lead to high crowd densities and delays in evacuations, several experiments and simulation studies have been conducted for this setting. We show qualitative agreement between the results of both methods. Additionally, we identify a one-dimensional subspace in the input parameter space and discuss its impact on the simulation. Moreover, we analyze and interpret the sensitivity indices with respect to the bottleneck scenario. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
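    A generic "pick-freeze" Monte Carlo estimator of first-order Sobol' indices, applied to an invented 3-parameter toy function rather than a crowd simulator; this only illustrates variance-based sensitivity analysis, not the paper's study.

        import numpy as np

        def first_order_sobol(model, n, d, rng):
            """Saltelli-style estimator: S_i = E[y_A (y_ABi - y_B)] / Var(y)."""
            a = rng.random((n, d))
            b = rng.random((n, d))
            ya = model(a)
            var = ya.var()
            s = []
            for i in range(d):
                ab = b.copy()
                ab[:, i] = a[:, i]          # "freeze" parameter i from sample A
                s.append(np.mean(ya * (model(ab) - model(b))) / var)
            return np.array(s)

        # Toy model: parameter 2 matters most, parameter 3 barely at all.
        model = lambda x: x[:, 0] + 2.0 * x[:, 1] + 0.1 * x[:, 2] * x[:, 1]
        rng = np.random.default_rng(1)
        print(np.round(first_order_sobol(model, 200_000, 3, rng), 3))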
  • 7
    Publication date: 2020-06-30
    Description: Standard (Lomb-Scargle, likelihood, etc.) procedures for power-spectrum analysis provide convenient estimates of the significance of any peak in a power spectrum, based—typically—on the assumption that the measurements being analyzed have a normal (i.e., Gaussian) distribution. However, the measurement sequence provided by a real experiment or a real observational program may not meet this requirement. The RONO (rank-order normalization) procedure generates a proxy distribution that retains the rank-order of the original measurements but has a strictly normal distribution. The proxy distribution may then be analyzed by standard power-spectrum analysis. We show by an example that the resulting power spectrum may prove to be quite close to the power spectrum obtained from the original data by a standard procedure, even if the distribution of the original measurements is far from normal. Such a comparison would tend to validate the original analysis. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
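    A minimal sketch of rank-order normalization as described above: replace each measurement by the Gaussian quantile of its rank, preserving the rank order while forcing a strictly normal proxy distribution. The plotting-position formula rank/(n+1) is one common convention, assumed here.

        import numpy as np
        from scipy.stats import norm, rankdata

        def rank_order_normalize(x):
            """Map data to a normal proxy that preserves rank order."""
            ranks = rankdata(x)                  # 1..n, ties averaged
            p = ranks / (len(x) + 1.0)           # plotting positions in (0, 1)
            return norm.ppf(p)                   # Gaussian quantiles

        x = np.random.default_rng(2).exponential(size=1000)   # far from normal
        proxy = rank_order_normalize(x)
        print(np.array_equal(np.argsort(x), np.argsort(proxy)))  # True: same order
        print(round(proxy.mean(), 3), round(proxy.std(), 3))     # ~0 and ~1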
  • 8
    Publication date: 2020-06-30
    Description: Demand for very high-speed I/O in processors is strong, and the physical performance of hardware I/O has grown drastically over the past decade. However, recent Big Data applications still demand larger I/O bandwidth and lower latency. Because raw I/O performance is no longer improving as quickly, it is time to consider other ways to increase it. To overcome this challenge, we focus on lossless data compression technology to decrease the amount of data itself in the data communication path. Recent Big Data applications treat data streams that flow continuously and never allow processing to stall, due to the high speed. Therefore, an elegant hardware-based data compression technology is demanded. This paper proposes a novel lossless data compression, called ASE coding. It encodes streaming data by applying the entropy coding approach. ASE coding instantly assigns the fewest bits to the corresponding compressed data according to the number of occupied entries in a look-up table. This paper describes the detailed mechanism of ASE coding. Furthermore, the paper demonstrates through performance evaluations that ASE coding adaptively shrinks streaming data and works with a small amount of hardware resources, without stalling or buffering any part of the data stream. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
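    The toy coder below is NOT the paper's ASE coding; it is a heavily simplified adaptive-table sketch of the general idea that symbols can cost fewer bits while the look-up table has few occupied entries (the index width grows with the table). The flag-bit/literal scheme is invented for illustration.

        import math

        def encode_bit_count(data):
            """Count output bits for a toy adaptive look-up-table coder."""
            table, out_bits = [], 0
            for sym in data:
                if sym in table:
                    # Index width depends on how many entries are occupied.
                    idx_bits = max(1, math.ceil(math.log2(len(table))))
                    out_bits += 1 + idx_bits      # flag bit + table index
                else:
                    table.append(sym)
                    out_bits += 1 + 8             # flag bit + literal byte
            return out_bits

        stream = b"abababababcdcdcdcdcd"
        print(f"{len(stream) * 8} bits raw -> {encode_bit_count(stream)} bits encoded")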
  • 9
    Publication date: 2020-07-01
    Description: Text annotation is the process of identifying the sense of a textual segment within a given context to a corresponding entity on a concept ontology. As the bag-of-words paradigm’s limitations become increasingly discernible in modern applications, several information retrieval and artificial intelligence tasks are shifting to semantic representations for addressing the inherent natural language polysemy and homonymy challenges. With extensive application in a broad range of scientific fields, such as digital marketing, bioinformatics, chemical engineering, neuroscience, and social sciences, community detection has attracted great scientific interest. Focusing on linguistics, by aiming to identify groups of densely interconnected subgroups of semantic ontologies, community detection has proven beneficial in terms of disambiguation improvement and ontology enhancement. In this paper, we introduce a novel distributed supervised knowledge-based methodology employing community detection algorithms for text annotation with Wikipedia entities, establishing the unprecedented concept of community coherence as a metric for local contextual coherence compatibility. Our experimental evaluation revealed that deeper inference of relatedness and local entity community coherence in the Wikipedia graph yields substantial overall improvements, particularly in the accuracy of less common annotations. The proposed methodology is propitious for wider adoption, attaining robust disambiguation performance.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 10
    Publication date: 2020-06-30
    Description: Geomechanical modelling of the processes associated with the exploitation of subsurface resources, such as land subsidence or triggered/induced seismicity, is a common practice of major interest. The prediction reliability depends on different sources of uncertainty, such as the parameterization of the constitutive model characterizing the deep rock behaviour. In this study, we focus on a Sobol’-based sensitivity analysis and uncertainty reduction via assimilation of land deformations. A synthetic test case application on a deep hydrocarbon reservoir is considered, where land settlements are predicted with the aid of a 3-D Finite Element (FE) model. Data assimilation is performed via the Ensemble Smoother (ES) technique and its variation in the form of Multiple Data Assimilation (ES-MDA). However, ES convergence is only guaranteed with a large number of Monte Carlo (MC) simulations, which may be computationally infeasible in large-scale and complex systems. For this reason, a surrogate model based on the generalized Polynomial Chaos Expansion (gPCE) is proposed as an approximation of the forward problem. This approach allows us to efficiently compute the Sobol’ indices for the sensitivity analysis and greatly reduce the computational cost of the original ES and MDA formulations, while also enhancing the accuracy of the overall prediction process. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
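    A textbook Ensemble Smoother analysis step in numpy, shown only to make the generic update concrete; the forward model, dimensions and noise levels are invented, and nothing here reflects the paper's gPCE-accelerated workflow.

        import numpy as np

        def es_update(X, Y, d_obs, R, rng):
            """One ES analysis: X += Cxy (Cyy + R)^-1 (D - Y), D = perturbed data."""
            n = X.shape[1]                                   # ensemble size
            Xa = X - X.mean(1, keepdims=True)
            Ya = Y - Y.mean(1, keepdims=True)
            Cxy = Xa @ Ya.T / (n - 1)                        # cross-covariance
            Cyy = Ya @ Ya.T / (n - 1)                        # data covariance
            D = d_obs[:, None] + rng.multivariate_normal(
                np.zeros(len(d_obs)), R, size=n).T           # perturbed observations
            return X + Cxy @ np.linalg.solve(Cyy + R, D - Y)

        rng = np.random.default_rng(3)
        X = rng.normal(1.0, 0.5, (2, 100))                   # 2 parameters, 100 members
        Y = np.vstack([X.sum(0), X[0] - X[1]])               # toy linear forward model
        X_post = es_update(X, Y, np.array([2.0, 0.0]), 0.01 * np.eye(2), rng)
        print(np.round(X_post.mean(1), 3))                   # posterior mean estimate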
  • 11
    Publication date: 2020-06-30
    Description: Clustering is an unsupervised machine learning technique with many practical applications that has gathered extensive research interest. Aside from deterministic or probabilistic techniques, fuzzy C-means clustering (FCM) is also a common clustering technique. Since the advent of the FCM method, many improvements have been made to increase clustering efficiency. These improvements focus on adjusting the membership representation of elements in the clusters, or on fuzzifying and defuzzifying techniques, as well as the distance function between elements. This study proposes a novel fuzzy clustering algorithm using multiple different fuzzification coefficients depending on the characteristics of each data sample. The proposed fuzzy clustering method has similar calculation steps to FCM with some modifications. The formulas are derived to ensure convergence. The main contribution of this approach is the utilization of multiple fuzzification coefficients as opposed to only one coefficient in the original FCM algorithm. The new algorithm is then evaluated with experiments on several common datasets, and the results show that the proposed algorithm is more efficient compared to the original FCM as well as other clustering methods. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
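    A sketch of one FCM-style iteration in which every sample j carries its own fuzzification coefficient m[j]; this illustrates the structure of the idea, not the authors' derived update rules, and the sample-dependent choice of m below is invented.

        import numpy as np

        def fcm_step(X, centers, m):
            """One iteration of FCM with a per-sample fuzzifier vector m."""
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
            expo = 2.0 / (m - 1.0)                           # per-sample exponent
            ratio = d[:, :, None] / d[:, None, :]            # d_ji / d_jk, (n, c, c)
            u = 1.0 / np.sum(ratio ** expo[:, None, None], axis=2)  # memberships
            w = u ** m[:, None]                              # fuzzified weights
            return u, (w.T @ X) / w.sum(axis=0)[:, None]     # updated centers

        rng = np.random.default_rng(4)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
        m = np.where(np.abs(X[:, 0] - 2.5) < 1.5, 2.5, 1.8)  # sample-dependent m
        centers = X[rng.choice(len(X), 2, replace=False)]
        for _ in range(20):
            u, centers = fcm_step(X, centers, m)
        print(np.round(centers, 2))                          # ~ the two cluster means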
  • 12
    Publication date: 2020-07-03
    Description: Business processes evolve over time to adapt to changing business environments. This requires continuous monitoring of business processes to gain insights into whether they conform to the intended design or deviate from it. The situation when a business process changes while being analysed is denoted as concept drift. Its analysis is concerned with studying how a business process changes, in terms of detecting and localising changes and studying the effects of the latter. Concept drift analysis is crucial to enable early detection and management of changes, that is, whether to promote a change to become part of an improved process, or to reject the change and make decisions to mitigate its effects. Despite its importance, there exists no comprehensive framework for analysing concept drift types, affected process perspectives, and granularity levels of a business process. This article proposes the CONcept Drift Analysis in Process Mining (CONDA-PM) framework, describing the phases and requirements of a concept drift analysis approach. CONDA-PM was derived from a Systematic Literature Review (SLR) of current approaches analysing concept drift. We apply the CONDA-PM framework to current approaches to concept drift analysis and evaluate their maturity. Applying the CONDA-PM framework highlights areas where research is needed to complement existing efforts.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 13
    Publication date: 2020-04-14
    Description: Let P be a set of n points in R^d, let k ≥ 1 be an integer, and let ε ∈ (0, 1) be a constant. An ε-coreset is a subset C ⊆ P with appropriate non-negative weights (scalars) that approximates any given set Q ⊆ R^d of k centers. That is, the sum of squared distances from every point in P to its closest point in Q is the same, up to a factor of 1 ± ε, as the weighted sum from C to the same k centers. If the coreset is small, we can solve problems such as k-means clustering or its variants (e.g., discrete k-means, where the centers are restricted to be in P, or other restricted zones) on the small coreset to get faster provable approximations. Moreover, it is known that such coresets support streaming, dynamic and distributed data using the classic merge-reduce trees. The fact that the coreset is a subset implies that it preserves the sparsity of the data. However, existing coresets of this kind are randomized and their size has at least linear dependency on the dimension d. We suggest the first such coreset of size independent of d. This is also the first deterministic coreset construction whose resulting size is not exponential in d. Extensive experimental results and benchmarks are provided on public datasets, including the first coreset of the English Wikipedia using Amazon’s cloud. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
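    The snippet below only demonstrates the coreset interface the abstract defines (a weighted subset whose cost approximates the full k-means cost for any center set Q); it uses naive uniform sampling as a stand-in, which is emphatically not the paper's deterministic, dimension-independent construction.

        import numpy as np

        def kmeans_cost(P, Q, w=None):
            """(Weighted) sum of squared distances from P to nearest center in Q."""
            w = np.ones(len(P)) if w is None else w
            d2 = ((P[:, None, :] - Q[None, :, :]) ** 2).sum(axis=2).min(axis=1)
            return float(w @ d2)

        rng = np.random.default_rng(5)
        P = rng.normal(size=(20_000, 10))
        idx = rng.choice(len(P), 1_000, replace=False)       # naive uniform sample
        C, w = P[idx], np.full(1_000, len(P) / 1_000)        # weights scale it up

        Q = rng.normal(size=(5, 10))                         # arbitrary k=5 centers
        print(kmeans_cost(P, Q), kmeans_cost(C, Q, w))       # costs are close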
  • 14
    Publication date: 2020-04-23
    Description: This study presents an analysis of RePair, which is a grammar compression algorithm known for its simple scheme, while also being practically effective. First, we show that the main process of RePair, that is, the step-by-step substitution of the most frequent symbol pairs, works within the corresponding most frequent maximal repeats. Then, we reveal the relation between maximal repeats and grammars constructed by RePair. On the basis of this analysis, we further propose a novel variant of RePair, called MR-RePair, which considers the one-time substitution of the most frequent maximal repeats instead of the consecutive substitution of the most frequent pairs. The results of the experiments comparing the size of constructed grammars and execution time of RePair and MR-RePair on several text corpora demonstrate that MR-RePair constructs more compact grammars than RePair does, especially for highly repetitive texts. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
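    A naive rendering of the RePair core loop the analysis refers to: repeatedly replace the most frequent adjacent pair with a fresh nonterminal, recording a grammar rule. Real implementations run in linear time and handle overlapping-pair counting carefully; this sketch does neither.

        from collections import Counter

        def repair(text):
            """Naive RePair: returns the final sequence and the grammar rules."""
            seq, rules, fresh = list(text), {}, 0
            while True:
                # Count adjacent pairs (overlap subtleties like "aaa" ignored).
                pairs = Counter(zip(seq, seq[1:]))
                if not pairs:
                    break
                pair, freq = pairs.most_common(1)[0]
                if freq < 2:
                    break
                nt = f"R{fresh}"
                fresh += 1
                rules[nt] = pair                    # new rule: nt -> pair
                out, i = [], 0
                while i < len(seq):                 # substitute left to right
                    if i + 1 < len(seq) and (seq[i], seq[i + 1]) == pair:
                        out.append(nt)
                        i += 2
                    else:
                        out.append(seq[i])
                        i += 1
                seq = out
            return seq, rules

        seq, rules = repair("abracadabra abracadabra")
        print(seq)
        print(rules)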
  • 15
    Publication date: 2020-04-24
    Description: The process of moving from experimental data to modeling and characterizing the dynamics and interactions in natural processes is a challenging task. This paper proposes an interactive platform for fitting data derived from experiments to mathematical expressions and carrying out spatial visualization. The platform is designed using a component-based software architectural approach, implemented in the R and Java programming languages. It uses experimental data as input for model fitting, then applies the obtained model at the landscape level via spatial temperature grid data to yield regional and continental maps. Different modules and functionalities of the tool are presented with a case study, in which the tool is used to establish a temperature-dependent virulence model and map the potential zone of efficacy of a fungal-based biopesticide. The decision support system (DSS) was developed in generic form, and it can be used by anyone interested in fitting mathematical equations to experimental data collected following the described protocol; depending on the type of investigation, it offers the possibility of projecting the model at the landscape level.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 16
    Publication date: 2020-04-22
    Description: In the era of ubiquitous digitization and the Internet of Things (IoT), information plays a vital role. All types of data are collected, and some of these data are stored as text files. An important aspect—regardless of the type of data—is file storage, especially the amount of disk space that is required. The less space is used to store data sets, the lower the cost of this service. Another important aspect of storing data warehouses in the form of files is the cost of the data transmission needed for file transfer and processing. Moreover, the data that are stored should be minimally protected against access and reading by other entities. The aspects mentioned above are particularly important for large data sets like Big Data. Considering the above criteria, i.e., minimizing storage space, data transfer, and ensuring minimum security, the main goal of this article is to present a new way of storing text files. This article presents a method that converts data from text files like txt, json, html, py to images (image files) in png format. Taking into account such criteria as the output size of the file, the results obtained for the test files confirm that the presented method reduces the need for disk space, as well as hides data in an image file. The described method can be used for texts saved in extended ASCII and UTF-8 coding. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
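    A hedged sketch of the general text-to-PNG idea using Pillow and numpy: one byte per grayscale pixel, zero-padded to a near-square image, with the true byte length kept so the text can be recovered exactly. The padding and length-bookkeeping scheme here is invented, not the paper's exact method.

        import math
        from pathlib import Path

        import numpy as np
        from PIL import Image

        def text_to_png(path_in, path_out):
            """Store a text file as a lossless grayscale PNG; returns byte count."""
            data = Path(path_in).read_bytes()
            side = math.ceil(math.sqrt(len(data)))
            buf = np.zeros(side * side, dtype=np.uint8)      # zero padding
            buf[: len(data)] = np.frombuffer(data, dtype=np.uint8)
            Image.fromarray(buf.reshape(side, side)).save(path_out)
            return len(data)            # must be stored, e.g., in the filename

        def png_to_text(path_in, n_bytes):
            pixels = np.asarray(Image.open(path_in), dtype=np.uint8)
            return pixels.reshape(-1)[:n_bytes].tobytes().decode("utf-8")

        Path("demo.txt").write_text('{"key": "värde"}', encoding="utf-8")
        n = text_to_png("demo.txt", "demo.png")
        print(png_to_text("demo.png", n))                    # round-trips exactly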
  • 17
    Publication date: 2020-04-23
    Description: Big data and artificial intelligence are currently two of the most important and trending pieces for innovation and predictive analytics in healthcare, leading the digital healthcare transformation. The Keralty organization is already working on developing an intelligent big data analytic platform based on machine learning and data integration principles. We discuss how this platform is the new pillar for the organization to improve population health management, value-based care, and new upcoming challenges in healthcare. The benefits of using this new data platform for community and population health include better healthcare outcomes, improvement of clinical operations, reduced costs of care, and generation of accurate medical information. Several machine learning algorithms implemented by the authors can use the large standardized datasets integrated into the platform to improve the effectiveness of public health interventions, improving diagnosis and clinical decision support. The data integrated into the platform come from Electronic Health Records (EHR), Hospital Information Systems (HIS), Radiology Information Systems (RIS), and Laboratory Information Systems (LIS), as well as data generated by public health platforms, mobile data, social media, and clinical web portals. This massive volume of data is integrated using big data techniques for storage, retrieval, processing, and transformation. This paper presents the design of a digital health platform in a healthcare organization in Colombia to integrate operational, clinical, and business data repositories with advanced analytics to improve the decision-making process for population health management.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 18
    Publication date: 2020-04-24
    Description: The non-contact detection of buried ferromagnetic pipelines is a long-standing problem in the field of external pipeline inspection, and the extraction of the magnetic anomaly signal is a prerequisite for accurate detection. Pipeline defects can cause fluctuations in magnetic signals, which are easily submerged in wide-band background noise without external excitation sources. Previously, Variational Mode Decomposition (VMD) was used to separate modal components; however, VMD is based on a narrow-band signal-processing algorithm and the calculation is complex. In this article, a method for extracting the pipeline defect signal based on Variational Specific Mode Extraction (VSME) is employed to extract the signal of a specific central frequency by signal modal decomposition, i.e., the specific mode is the weak magnetic anomaly signal of pipeline defects. VSME is based on the fact that a wide-band signal can be converted into a narrow-band signal by a demodulation method. Furthermore, the problem of wide-band signal decomposition is expressed as an optimal demodulation problem, which can be solved by the alternating direction method of multipliers. The proposed algorithm is verified on artificially synthesized signals, and its performance is better than that of VMD. The results showed that the VSME method can extract the magnetic anomaly signal of pipeline damage from experimental data, with better accuracy.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 19
    Publication date: 2020-04-29
    Description: In person re-identification, extracting image features is an important step when retrieving pedestrian images. Most current methods only extract global features or local features of pedestrian images. Some inconspicuous details are easily ignored when learning image features, which is neither efficient nor robust for scenarios with large differences. In this paper, we propose a Multi-level Feature Fusion model that combines both global features and local features of images through deep learning networks to generate more discriminative pedestrian descriptors. Specifically, we extract local features from different depths of the network with the Part-based Multi-level Net to fuse low-to-high level local features of pedestrian images. Global-Local Branches are used to extract the local features and global features at the highest level. The experiments show that our deep learning model based on multi-level feature fusion works well in person re-identification. The overall results outperform the state of the art by considerable margins on three widely used datasets. For instance, we achieve 96% Rank-1 accuracy on the Market-1501 dataset and 76.1% mAP on the DukeMTMC-reID dataset, outperforming the existing works by a large margin (more than 6%).
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 20
    Publication date: 2020-04-26
    Description: Many modern real-valued optimization tasks use “black-box” (BB) models for evaluating objective functions, and they are high-dimensional and constrained. Using common classifications, we can identify them as constrained large-scale global optimization (cLSGO) tasks. Today, the IEEE Congress on Evolutionary Computation provides a special session and several benchmarks for LSGO. At the same time, cLSGO problems are not well studied yet. The majority of modern optimization techniques demonstrate insufficient performance when confronted with cLSGO tasks. The effectiveness of evolutionary algorithms (EAs) in solving constrained low-dimensional optimization problems has been proven in many scientific papers and studies. Moreover, the cooperative coevolution (CC) framework has been successfully applied to EAs used to solve LSGO problems. In this paper, a new approach for solving cLSGO is proposed. This approach is based on CC and a method that increases the size of groups of variables at the decomposition stage (iCC) when solving cLSGO tasks. A new algorithm is proposed, which combines the success-history-based parameter adaptation for differential evolution (SHADE) optimizer, iCC, and the ε-constrained method (namely ε-iCC-SHADE). We investigated the performance of ε-iCC-SHADE and compared it with the previously proposed ε-CC-SHADE algorithm on scalable problems from the IEEE CEC 2017 Competition on constrained real-parameter optimization.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 21
    Publication date: 2020-04-25
    Description: Tennis is a sport with a very complex technique. Amateur tennis players have trainers and/or coaches, but are not usually accompanied by them to championships. Curiously, in this sport, the result of many matches can be changed by a small hint like ‘hit the ball a little higher when serving’. However, the biomechanics of a tennis stroke are only clear to an expert. We, therefore, developed a prototype of a rule-based expert system (RBES) aimed at an amateur competition player who is not accompanied by his/her coach to a championship and is not serving as usual (the RBES is so far restricted to serving). The player has to answer a set of questions about how he/she is serving that day and his/her usual serving technique, and the RBES obtains a diagnosis using logic inference about the possible reasons (according to the logic rules that have been previously given to the RBES). A certain knowledge of tennis terminology and technique is required from the player, but that is something known at this level. The underlying logic is Boolean and the inference engine is algebraic (it uses Groebner bases).
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 22
    Publication date: 2020-04-26
    Description: The quality assurance of publication data in collaborative knowledge bases and in current research information systems (CRIS) becomes more and more relevant with the use of freely available spatial information in different application scenarios. When integrating these data into CRIS, it is necessary to be able to recognize and assess their quality. Only then is it possible to compile a result from the available data that fulfills its purpose for the user, namely to deliver reliable data and information. This paper discusses the quality problems of source metadata in Wikipedia and CRIS. Based on real data from over 40 million Wikipedia articles in various languages, we performed a preliminary quality analysis of the metadata of scientific publications using a data quality tool. So far, no data quality measurements have been programmed with Python to assess the quality of metadata from scientific publications in Wikipedia and CRIS. With this in mind, we programmed the methods and algorithms as code, but present them in the form of pseudocode in this paper, to measure the quality related to objective data quality dimensions such as completeness, correctness, consistency, and timeliness. This was prepared as a macro service so that users can use the measurement results together with the program code to make a statement about their scientific publication metadata, so that management can rely on high-quality data when making decisions. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
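    The paper presents its measurements as pseudocode; the sketch below is one hedged Python rendering of the simplest of the quality dimensions named above (completeness) over publication metadata records. The field names are invented for the example.

        # Completeness: share of required metadata fields that are present
        # and non-empty across a set of publication records.
        REQUIRED = ("title", "authors", "year", "doi")      # hypothetical schema

        def completeness(records, required=REQUIRED):
            filled = sum(
                1
                for rec in records
                for field in required
                if rec.get(field) not in (None, "", [])
            )
            return filled / (len(records) * len(required))

        records = [
            {"title": "On X", "authors": ["A"], "year": 2019, "doi": "10.1/x"},
            {"title": "On Y", "authors": [], "year": None, "doi": "10.1/y"},
        ]
        print(f"completeness = {completeness(records):.2%}")   # 6 of 8 fields -> 75%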
  • 23
    Publication date: 2020-04-29
    Description: Since its creation by Nawaz, Enscore, and Ham in 1983, NEH remains the best heuristic method to solve flowshop scheduling problems. In the large body of literature dealing with the application of this heuristic, it can be clearly noted that results differ from one paper to another. In this paper, two methods are proposed to improve the original NEH, based on the two points in the method where choices must be made in case of equivalence between two job orders or partial sequences. When an equality occurs in a sorting method, two results are equivalent but can lead to different final results. In order to propose the first improvement to NEH, the factorial basis decomposition method is introduced, which makes a number computationally correspond to a permutation. This method is very helpful for the first improvement, and allows testing of all the sequencing possibilities for problems counting up to 50 jobs. The second improvement is located where NEH keeps the best partial sequence. Similarly, a list of equivalent partial sequences is kept, rather than only one, to give the global method a chance of better performance. The results obtained with the successive use of the two methods of improvement present an average improvement of 19% over the already effective results of the original NEH method. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
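    A minimal sketch of the factorial-base ("factoradic") correspondence the first improvement relies on: every integer 0 <= n < k! maps to exactly one permutation of k items, so permutations can be enumerated by counting. This is the generic construction, not necessarily the authors' exact routine.

        from math import factorial

        def number_to_permutation(n, items):
            """Map an integer 0 <= n < len(items)! to a unique permutation."""
            items, perm = list(items), []
            for k in range(len(items) - 1, -1, -1):
                idx, n = divmod(n, factorial(k))   # factorial-base digit
                perm.append(items.pop(idx))
            return perm

        # All 3! = 6 job orders, indexed 0..5:
        for n in range(6):
            print(n, number_to_permutation(n, ["J1", "J2", "J3"]))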
  • 24
    Publication date: 2020-04-28
    Description: In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks—usually referred to as self-organizing maps (SOMs)—to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, Growing SOMs (GSOMs), Growing Grid Networks (GGNs), the Incremental Grid Growing (IGG) approach, the Growing Neural Gas (GNG) method, as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN, also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs), are discussed. They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness. The performance of particular solutions is illustrated and compared by means of selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMs with T-LSs, outperform alternative approaches in various complex clustering tasks by providing up to a 20% increase in clustering accuracy. The contribution of this work is threefold. First, algorithm-oriented original computer implementations of particular SOM generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our earlier-mentioned original solutions are demonstrated. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
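    A sketch of the conventional SOM training step that the surveyed generalizations start from: find the best-matching unit (BMU) for each sample, then pull the BMU and its grid neighbours toward the sample with a Gaussian neighbourhood kernel. Grid size, learning-rate schedule, and kernel width are invented.

        import numpy as np

        def som_epoch(weights, grid, X, lr=0.3, sigma=1.0):
            """One epoch of conventional SOM training (online updates)."""
            for x in X:
                bmu = np.argmin(((weights - x) ** 2).sum(axis=1))   # best match
                # Gaussian neighbourhood over grid (not weight) distances:
                g = np.exp(-((grid - grid[bmu]) ** 2).sum(axis=1) / (2 * sigma**2))
                weights += lr * g[:, None] * (x - weights)
            return weights

        rng = np.random.default_rng(6)
        rows, cols = 5, 5
        grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)
        weights = rng.random((rows * cols, 2))
        X = rng.random((500, 2))
        for epoch in range(10):
            weights = som_epoch(weights, grid, X, lr=0.3 * 0.8**epoch)
        print(np.round(weights[:3], 2))     # prototypes now span the data square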
  • 25
    Publication date: 2020-04-28
    Description: Civil engineering applications are often characterized by a large uncertainty in the material parameters. Discretization of the underlying equations is typically done by means of the Galerkin Finite Element method. The uncertain material parameter can be expressed as a random field represented by, for example, a Karhunen–Loève expansion. Computation of the stochastic responses, i.e., the expected value and variance of a chosen quantity of interest, remains very costly, even when state-of-the-art Multilevel Monte Carlo (MLMC) is used. A significant cost reduction can be achieved by using a recently developed multilevel method: p-refined Multilevel Quasi-Monte Carlo (p-MLQMC). This method is based on the idea of variance reduction by employing a hierarchical discretization of the problem based on a p-refinement scheme. It is combined with a rank-1 Quasi-Monte Carlo (QMC) lattice rule, which yields faster convergence compared to the use of random Monte Carlo points. In this work, we developed algorithms for the p-MLQMC method for two-dimensional problems. The p-MLQMC method is first benchmarked on an academic beam problem. Finally, we use our algorithm for the assessment of the stability of slopes, a problem that arises in geotechnical engineering and typically suffers from large parameter uncertainty. For both considered problems, we observe a very significant reduction in the amount of computational work with respect to MLMC.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 26
    Publication date: 2020-04-21
    Description: The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems A_k x_k = b_k, k = 1, …, arising in many scientific applications, such as discretization of transient Partial Differential Equations (PDEs), solution of eigenvalue problems, (Inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. In this paper, we will analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We will also review some techniques to efficiently approximate the linearly independent vectors which constitute the low-rank corrections and whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be very much enhanced by the use of low-rank updates.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 27
    Publication date: 2020-04-20
    Description: In this work, we present a novel strategy for the numerical solution of a coupled system of partial differential equations that describe reaction–diffusion processes of a mixture of metals and ligands that can be absorbed by a sensor or a microorganism in an aqueous medium. The novelty introduced in this work consists of adequate database management in conjunction with a direct iterative scheme, which allowed the construction of simple, fast and efficient algorithms. Except in really adverse conditions, the calculation converges and satisfactory solutions were reached. Computing times proved better than those obtained with some commercial programs. Although we concentrate on the solution for a particular system (Diffusive Gradients in Thin Films [DGT] sensors), the proposed algorithm does not require major modifications to consider new theoretical or experimental configurations. Since the quality of numerical simulations of reaction–diffusion problems often faces drawbacks as the values of the reaction rate constants increase, some additional effort has been invested in obtaining proper solutions in those cases.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 28
    Publication date: 2020-04-20
    Description: With the emergent evolution in DNA sequencing technology, a massive amount of genomic data is produced every day, mainly DNA sequences, demanding ever more storage and bandwidth. Unfortunately, managing, analyzing and specifically storing these large amounts of data has become a major scientific challenge for bioinformatics. Therefore, to overcome these challenges, compression has become necessary. In this paper, we describe a new reference-free DNA compressor abbreviated as DNAC-SBE. DNAC-SBE is a lossless hybrid compressor that consists of three phases. First, starting from the most frequent base (B_i), the positions of each B_i are replaced with ones and the positions of other bases that have smaller frequencies than B_i are replaced with zeros. Second, to encode the generated streams, we propose a new single-block encoding scheme (SBE) based on the exploitation of the position of neighboring bits within the block, using two different techniques. Finally, the proposed algorithm dynamically assigns the shorter length code to each block. Results show that DNAC-SBE outperforms state-of-the-art compressors and proves its efficiency in terms of the special conditions imposed on compressed data, storage space and data transfer rate, regardless of the file format or the size of the data. An illustrative code sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
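    A hedged reading of the first phase described above, as a toy: process bases from most to least frequent, and for each base emit a bit-stream with 1 at that base's positions and 0 at the positions of the rarer bases still remaining. The encoding and bookkeeping details are my assumptions, not the paper's specification.

        from collections import Counter

        def base_streams(seq):
            """Toy rendering of the frequency-ordered base-to-bitstream phase."""
            order = [b for b, _ in Counter(seq).most_common()]
            streams, remaining = {}, list(range(len(seq)))
            for base in order[:-1]:              # the last base is implied
                bits = "".join("1" if seq[i] == base else "0" for i in remaining)
                streams[base] = bits
                remaining = [i for i in remaining if seq[i] != base]
            return streams

        for base, bits in base_streams("ACGTAACCGGAATT").items():
            print(base, bits)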
  • 29
    Publication date: 2020-04-13
    Description: In this paper, we consider the effect of stochastic uncertainties on non-linear systems with chaotic behavior. More specifically, we quantify the effect of parametric uncertainties on time-averaged quantities and their sensitivities. Sampling methods for Uncertainty Quantification (UQ), such as Monte Carlo (MC), are very costly, while traditional methods for sensitivity analysis, such as the adjoint, fail in chaotic systems. In this work, we employ the non-intrusive generalized Polynomial Chaos (gPC) for UQ, coupled with the Multiple-Shooting Shadowing (MSS) algorithm for sensitivity analysis of chaotic systems. It is shown that gPC, coupled with MSS, is an appropriate method for conducting UQ in chaotic systems and produces results that match well with those from MC and Finite Differences (FD).
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 30
    Publication date: 2020-04-09
    Description: In this study, a new modification of the meta-heuristic approach called Co-Operation of Biology-Related Algorithms (COBRA) is proposed. Originally, the COBRA approach was based on a fuzzy logic controller and used for solving real-parameter optimization problems. The basic idea consists of the cooperative work of six well-known biology-inspired algorithms, referred to as components. However, it was established that the search efficiency of COBRA depends on its ability to keep the balance between exploitation and exploration when solving optimization problems. The new modification of the COBRA approach is based on a different method for generating potential solutions. This method keeps a historical memory of successful positions found by individuals to lead them in different directions and therefore to improve the exploitation and exploration capabilities. The proposed technique was applied to the COBRA components and to its basic steps. The newly proposed meta-heuristic as well as other modifications of the COBRA approach and components were evaluated on three sets of various benchmark problems. The experimental results obtained by all algorithms with the same computational effort are presented and compared. It was concluded that the proposed modification outperformed the other algorithms used in the comparison. Therefore, its usefulness and workability were demonstrated.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 31
    Publication date: 2020-05-07
    Description: It is well known that ensemble methods often provide enhanced performance in reinforcement learning. In this paper, we explore this concept further by using group-aided training within the distributional reinforcement learning paradigm. Specifically, we propose an extension to categorical reinforcement learning, where distributional learning targets are implicitly based on the total information gathered by an ensemble. We empirically show that this may lead to much more robust initial learning, a stronger individual performance level, and good efficiency on a per-sample basis.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 32
    Publication date: 2020-05-08
    Description: The electricity market is a complex, evolutionary, and dynamic environment. Forecasting electricity prices is an important issue for all electricity market participants. In this study, we shed light on how to improve electricity price forecasting accuracy through the use of a machine learning technique, namely a novel genetic programming approach. Drawing on empirical data from the largest EU energy markets, we propose a forecasting model that considers variables related to weather conditions, oil prices, and CO2 coupons and predicts energy prices 24 hours ahead. We show that the proposed model provides more accurate predictions of future electricity prices than existing prediction methods. Our findings will assist electricity market participants in forecasting future price movements.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 33
    Publication date: 2020-05-06
    Description: Symbiotic organisms search (SOS) is a promising metaheuristic algorithm that has been studied recently by numerous researchers due to its capability to solve various hard and complex optimization problems. SOS is a powerful optimization technique that mimics the typical symbiotic interactions among organisms in an ecosystem. This study presents a new SOS-based hybrid algorithm for solving challenging discrete construction site layout planning (CSLP) problems. The new algorithm, called the hybrid symbiotic organisms search with local operators (HSOS-LO), represents a combination of the canonical SOS and several local search mechanisms aimed at increasing the search capability in discrete solution spaces. In this study, three CSLP problems consisting of single- and multi-floor facility layout problems are tested, and the obtained results are compared with those of other widely used metaheuristic algorithms. The results indicate the robust performance of the HSOS-LO algorithm in handling discrete CSLP problems.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 34
    Publication date: 2020-05-10
    Description: Developing tools for precise quantification of brain metabolites using magnetic resonance spectroscopy (MRS) is an active area of research with broad application in non-invasive neurodegenerative disease studies. The tools are mainly developed based on black-box (data-driven) or basis-set approaches. In this study, we offer a multi-stage framework that integrates data-driven and basis-set methods. We first use truncated Hankel singular value decomposition (HSVD) to decompose free induction decay (FID) signals into single-tone FIDs, as the data-driven stage. Subsequently, the single-tone FIDs are clustered into basis sets using K-means initialized with prior knowledge of the metabolites, as the basis-set stage. The generated basis sets are fitted to the magnetic resonance (MR) spectra using linearly constrained least squares, and then the metabolite concentration is calculated. Prior to our proposed multi-stage approach, a sequence of preprocessing blocks (developed in house) is applied: water peak removal, phase correction, and baseline correction.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 35
    Publication date: 2020-05-10
    Description: Nowadays, cryptocurrency has infiltrated almost all financial transactions; thus, it is generally recognized as an alternative method for paying and exchanging currency. Cryptocurrency trade constitutes a constantly increasing financial market and a promising type of profitable investment; however, it is characterized by high volatility and strong fluctuations of prices over time. Therefore, the development of an intelligent forecasting model is considered essential for portfolio optimization and decision making. The main contribution of this research is the combination of three of the most widely employed ensemble learning strategies, ensemble-averaging, bagging and stacking, with advanced deep learning models for forecasting major cryptocurrency hourly prices. The proposed ensemble models were evaluated utilizing state-of-the-art deep learning models as component learners, which were composed of combinations of long short-term memory (LSTM), bidirectional LSTM and convolutional layers. The ensemble models were evaluated on predicting the cryptocurrency price for the following hour (regression) and on predicting whether the price in the following hour will increase or decrease with respect to the current price (classification). Additionally, the reliability of each forecasting model and the efficiency of its predictions are evaluated by examining the errors for autocorrelation. Our detailed experimental analysis indicates that ensemble learning and deep learning can be efficiently beneficial to each other, for developing strong, stable, and reliable forecasting models.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 36
    Publication date: 2020-05-06
    Description: The expected utility principle is often used to compute the insurance premium through a second-order approximation of the expected value of the utility of losses. We investigate the impact of using a more accurate approximation based on the fourth-order statistics of the expected loss and derive the premium under this expectedly more accurate approximation. The comparison between the two approximation levels shows that the second-order-based premium is always lower (i.e., an underestimate of the correct one) for the commonest loss distributions encountered in insurance. The comparison is also carried out for real cases, considering the loss parameter values estimated in the literature. The increased risk of the insurer is assessed through the Value-at-Risk. A worked numerical sketch follows this entry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
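    A worked numerical sketch under stated assumptions (not the paper's derivation): for exponential utility with risk aversion a, the indifference premium has the cumulant expansion P = k1 + a k2/2 + a^2 k3/6 + a^3 k4/24 + …, so truncating after k2 gives a second-order premium and keeping the skewness (k3) and kurtosis (k4) terms gives a fourth-order one. For a right-skewed loss such as the lognormal assumed below, the second-order premium comes out lower, consistent with the abstract.

        import math

        mu, sigma, a = 0.0, 1.0, 0.1       # illustrative lognormal loss, risk aversion

        # Raw moments of the lognormal: E[X^k] = exp(k*mu + k^2*sigma^2/2).
        m = [math.exp(k * mu + k * k * sigma**2 / 2) for k in range(1, 5)]
        k1 = m[0]
        k2 = m[1] - m[0] ** 2                                        # variance
        k3 = m[2] - 3 * m[0] * m[1] + 2 * m[0] ** 3                  # 3rd cumulant
        k4 = (m[3] - 4 * m[0] * m[2] + 6 * m[0] ** 2 * m[1]
              - 3 * m[0] ** 4) - 3 * k2**2                           # 4th cumulant

        p2 = k1 + a * k2 / 2
        p4 = p2 + a**2 * k3 / 6 + a**3 * k4 / 24
        print(f"2nd-order premium: {p2:.4f}")
        print(f"4th-order premium: {p4:.4f}   (higher for the skewed loss)")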
  • 37
    Publication date: 2020-05-19
    Description: Salient segmentation is a critical step in biomedical image analysis, aiming to cut out regions that are most interesting to humans. Recently, supervised methods have achieved promising results in biomedical areas, but they depend on annotated training data sets, which requires labor and proficiency in related background knowledge. In contrast, unsupervised learning makes data-driven decisions by obtaining insights directly from the data themselves. In this paper, we propose a completely unsupervised self-aware network based on pre-training and attentional backpropagation for biomedical salient segmentation, named PUB-SalNet. Firstly, we aggregate a new biomedical data set from several simulated Cellular Electron Cryo-Tomography (CECT) data sets featuring rich salient objects, different SNR settings, and various resolutions, which is called SalSeg-CECT. Based on the SalSeg-CECT data set, we then pre-train a model specially designed for biomedical tasks as a backbone module to initialize network parameters. Next, we present a U-SalNet network to learn to selectively attend to salient objects. It includes two types of attention modules to facilitate learning saliency through global contrast and local similarity. Lastly, we jointly refine the salient regions together with feature representations from U-SalNet, with the parameters updated by self-aware attentional backpropagation. We apply PUB-SalNet to the analysis of 2D simulated and real images and achieve state-of-the-art performance on simulated biomedical data sets. Furthermore, our proposed PUB-SalNet can be easily extended to 3D images. The experimental results on the 2D and 3D data sets also demonstrate the generalization ability and robustness of our method.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 38
    Publikationsdatum: 2020-05-15
    Beschreibung: In this paper, we study the relationship of parameterized enumeration complexity classes defined by Creignou et al. (MFCS 2013). Specifically, we introduce two hierarchies (IncFPTa and CapIncFPTa) of enumeration complexity classes for incremental fpt-time in terms of exponent slices and show how they interleave. Furthermore, we define several parameterized function classes and, in particular, introduce the parameterized counterpart of the class of nondeterministic multivalued functions with values that are polynomially verifiable and guaranteed to exist, TFNP, known from Megiddo and Papadimitriou (TCS 1991). We show that this class TF(para-NP), the restriction of the function variant of NP to total functions, collapsing to F(FPT), the function variant of FPT, is equivalent to the result that OutputFPT coincides with IncFPT. In addition, these collapses are shown to be equivalent to TFNP = FP, and also equivalent to P equals NP intersected with coNP. Finally, we show that these two collapses are equivalent to the collapse of IncP and OutputP in the classical setting. These results are the first direct connections of collapses in parameterized enumeration complexity to collapses in classical enumeration complexity, parameterized function complexity, classical function complexity, and computational complexity theory.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 39
Publication date: 2020-05-19
Description: In this paper, we explore the effect of using different numbers of convolutional layers, batch normalization and the global average pooling layer in a convolutional neural network (CNN) based gaze tracking system. A novel method is proposed to label the participants' face images with gaze points retrieved from an eye tracker while watching videos, building a training dataset that is closer to human visual behavior. The participants can swing their heads freely; therefore, the most realistic and natural images can be obtained without too many restrictions. The labeled data are classified according to the coordinates of gaze and the area of interest on the screen. Varied network architectures are then applied to estimate and compare the effects, including the number of convolutional layers, batch normalization (BN) and the global average pooling (GAP) layer used instead of the fully connected layer. Three schemes, using the single eye image, the double eye image and the facial image, with data augmentation, are fed into the neural network for training and evaluation. The input image of the eye or face for an eye tracking system is mostly a small image with relatively few features. The results show that BN and GAP are helpful in overcoming the difficulty of training the models and in reducing the number of network parameters. The accuracy is significantly improved when GAP and BN are used at the same time. Overall, the face scheme achieves the highest accuracy of 0.883 when BN and GAP are used together. Additionally, compared to the case of a fully connected layer of size 512, the number of parameters is reduced by less than 50% and the accuracy is improved by about 2%. A comparison of the detection accuracy of our model with the existing methods of George and Routray shows that the proposed method improves prediction accuracy by more than 6%.
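A minimal sketch of the BN + GAP design choice discussed above, using tf.keras; the layer sizes and input shape are illustrative assumptions, not the paper's exact architecture:

```python
from tensorflow.keras import layers, models

def build_gaze_cnn(num_classes, input_shape=(64, 64, 1)):
    """Small CNN sketch with batch normalization (BN) after each
    convolution and global average pooling (GAP) in place of a wide
    fully connected layer."""
    model = models.Sequential([
        layers.Conv2D(32, 3, padding="same", use_bias=False,
                      input_shape=input_shape),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", use_bias=False),
        layers.BatchNormalization(),
        layers.ReLU(),
        layers.GlobalAveragePooling2D(),   # replaces e.g. Dense(512)
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

GAP removes the large weight matrix a Dense(512) head would require, which is where the parameter reduction discussed above comes from.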
Digital ISSN: 1999-4893
Subject: Computer Science
  • 40
Publication date: 2020-05-26
Description: Feature Selection (FS) methods have been studied extensively in the literature and are a crucial component of machine learning techniques. However, unsupervised text feature selection has not been well studied in document clustering problems. Feature selection can be modelled as an optimization problem due to the large number of possible candidate solutions. In this paper, a memetic method that combines Differential Evolution (DE) with Simulated Annealing (SA) for unsupervised FS is proposed. Because only two values are used, indicating the presence or absence of each feature, a binary version of differential evolution is needed; a dichotomous DE is used for this purpose, and the proposed method is named Dichotomous Differential Evolution Simulated Annealing (DDESA). This method uses dichotomous mutation instead of the standard DE mutation, making it more effective for binary problems. The Mean Absolute Distance (MAD) filter is used as the internal evaluation measure for feature subsets. The proposed method was compared with other state-of-the-art methods, including the standard DE combined with SA (named DESA in this paper), using five benchmark datasets. The F-micro and F-macro (F-scores) and the Average Distance of Document to Cluster (ADDC) were utilized as evaluation measures, along with the Reduction Rate (RR). The test results show that the proposed DDESA outperforms the other tested methods in unsupervised text feature selection.
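One plausible reading of a dichotomous (binary) DE mutation, sketched in Python; this is an assumption-laden illustration, not the paper's exact DDESA operator:

```python
import numpy as np

rng = np.random.default_rng(0)

def dichotomous_mutation(pop, i):
    """Binary DE mutation sketch: where two randomly chosen members
    agree, copy their bit; where they differ, pick a random bit.
    pop is a 0/1 matrix [pop_size, n_features]; i is the target index."""
    n, d = pop.shape
    r1, r2 = rng.choice([k for k in range(n) if k != i],
                        size=2, replace=False)
    agree = pop[r1] == pop[r2]
    return np.where(agree, pop[r1], rng.integers(0, 2, size=d))
```

In a full memetic loop, the SA component would then accept or reject the crossed-over trial vector based on the MAD filter score and a cooling temperature.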
Digital ISSN: 1999-4893
Subject: Computer Science
  • 41
Publication date: 2020-04-01
Description: Sentiment analysis usually refers to the analysis of human-generated content via a polarity filter, whereas affective computing deals with the exact emotions conveyed through information. Emotional information most frequently cannot be accurately described by a single emotion class, but multilabel classifiers can categorize human-generated content into multiple emotional classes. Ensemble learning can improve the statistical, computational and representation aspects of such classifiers. We present a baseline stacked ensemble and propose a weighted ensemble. Our proposed weighted ensemble can use multiple classifiers to improve classification results without hyperparameter tuning or data overfitting. We evaluate our ensemble models on two datasets. The first dataset, from SemEval-2018 Task 1, contains almost 7000 tweets labeled with 11 sentiment classes. The second dataset is the Toxic Comment Dataset, with more than 150,000 comments labeled with six different levels of abuse or harassment. Our results suggest that ensemble learning improves classification results by 1.5% to 5.4%.
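A minimal sketch of a weighted probability-averaging ensemble for multilabel output; weighting by validation scores is an illustrative assumption, not the paper's exact scheme:

```python
import numpy as np

def weighted_ensemble(prob_list, weights, threshold=0.5):
    """Combine per-classifier multilabel probabilities (each of shape
    [n_samples, n_labels]) by a weighted average; weights could come
    from, e.g., each classifier's validation F1 score (assumption)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                            # normalize weights
    stacked = np.stack(prob_list)              # [n_clf, n_samples, n_labels]
    avg = np.tensordot(w, stacked, axes=1)     # weighted mean over classifiers
    return (avg >= threshold).astype(int), avg # hard labels and soft scores
```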
Digital ISSN: 1999-4893
Subject: Computer Science
  • 42
Publication date: 2020-04-02
Description: In order to improve the efficiency of transportation networks, it is critical to forecast traffic congestion. Large-scale traffic congestion data have become available and accessible, yet they need to be properly represented in order to avoid overfitting, reduce the requirements on computational resources, and be utilized effectively by various methodologies and models. Inspired by pooling operations in deep learning, we propose a representation framework for traffic congestion data in urban road traffic networks. This framework consists of a grid-based partition of the urban road traffic network and a pooling operation that reduces multiple values to a single aggregated one. We propose using as the pooling operation the maximum value in each grid (MAV). Raw snapshots of traffic congestion maps are transformed and represented as a series of matrices, which are used as inputs to a spatiotemporal congestion prediction network (STCN) to evaluate the effectiveness of the representation when predicting traffic congestion. STCN combines convolutional neural networks (CNNs) and long short-term memory networks (LSTMs) for their spatiotemporal capabilities: CNNs can extract the spatial features and dependencies of traffic congestion between roads, and LSTMs can learn their temporal evolution patterns and correlations. An empirical experiment on an urban road traffic network shows that, when incorporated into our proposed representation framework, MAV outperforms other pooling operations in the effectiveness of the representation of traffic congestion data for traffic congestion prediction, and that the framework is cost-efficient in terms of computational resources.
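The MAV pooling step can be illustrated directly; a minimal NumPy sketch of grid-based max pooling over a congestion map, with an assumed cell size:

```python
import numpy as np

def grid_max_pool(congestion, cell):
    """MAV-style representation sketch: partition a congestion map into
    cell x cell grids and keep the maximum congestion value per grid."""
    h, w = congestion.shape
    h2, w2 = h - h % cell, w - w % cell        # crop to whole cells
    blocks = congestion[:h2, :w2].reshape(h2 // cell, cell,
                                          w2 // cell, cell)
    return blocks.max(axis=(1, 3))

# Usage: grid_max_pool(np.random.rand(100, 120), cell=10) -> shape (10, 12)
```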
Digital ISSN: 1999-4893
Subject: Computer Science
  • 43
Publication date: 2020-04-09
Description: In this paper, the Feasibility Pump is adapted to the problem of sparse representations of signals affected by Gaussian noise. This adaptation is tested and then compared to Orthogonal Matching Pursuit (OMP) and the Fast Iterative Shrinkage-Thresholding Algorithm (FISTA). The Feasibility Pump recovers the true support much better than the other two algorithms and, as the SNR decreases and the support size increases, it has smaller recovery and representation errors than its competitors. It is observed that, in order for the algorithm to be efficient, a regularization parameter and a weight term for the error are needed.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 44
Publication date: 2020-04-07
Description: Data processing agreements in health data management are laid out by organisations in monolithic "Terms and Conditions" documents written in natural legal language. These top-down policies usually protect the interests of the service providers rather than the data owners. They are coarse-grained and do not allow for more than a few opt-in or opt-out options for individuals to express their consent on personal data processing, and these options often do not transfer to software as intended. In this paper, we study the problem of health data sharing and advocate the need for individuals to describe their personal contract of data usage in a formal, machine-processable language. We develop an application for sharing patient genomic information and test results, and use interactions with patients and clinicians to identify the particular features a privacy/policy/consent language should offer in this complicated domain. We present how Semantic Web technologies can play a central role in this approach by providing the formal tools and features required in such a language. We present our ongoing approach to constructing an ontology-based framework and a policy language that allows patients and clinicians to express fine-grained consent, preferences or suggestions on sharing medical information. Our language offers unique features such as multi-party ownership of data and data sharing dependencies. We evaluate the landscape of policy languages from different areas and show how they lack major requirements needed in health data management. In addition to enabling patients, our approach helps organisations increase technological capabilities, abide by legal requirements, and save resources.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 45
Publication date: 2020-04-03
Description: Inspired by biological systems, swarm intelligence algorithms are widely used to solve multimodal optimization problems. In this study, we consider the hybridization of an algorithm based on the collective behavior of fish schools. The algorithm is computationally inexpensive compared to other population-based algorithms. The accuracy of fish school search increases with the predefined iteration count, but this also increases the computation time required to find a suboptimal solution. We propose two hybrid approaches that aim to improve the accuracy of this nature-inspired algorithm by using classical optimization methods, namely gradient descent and Newton's method. The study shows the effectiveness of the proposed hybrid algorithms and the strong advantage of the hybrid algorithm based on fish school search and gradient descent. We provide a solution to the linearly inseparable exclusive disjunction (XOR) problem using the developed algorithm and a perceptron with one hidden layer. To demonstrate the effectiveness of the algorithms, we visualize high-dimensional loss surfaces near global extreme points. In addition, we apply the distributed version of the most effective hybrid algorithm to the hyperparameter optimization problem of a neural network.
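A minimal sketch of the hybridization pattern described above: a generic population move (a stand-in for fish school search, passed in as a callback) alternated with gradient descent refinement of the current best candidate. Function names and step sizes are illustrative assumptions:

```python
import numpy as np

def hybrid_optimize(objective, grad, swarm_step, init_pop,
                    iters=100, lr=1e-2, local_steps=5):
    """Alternate a swarm-style population update (global exploration)
    with a few gradient descent steps on the best member (local
    refinement); the refined point replaces the worst member."""
    pop = np.array(init_pop, dtype=float)
    for _ in range(iters):
        pop = swarm_step(pop, objective)                 # exploration
        scores = np.array([objective(x) for x in pop])
        best = pop[scores.argmin()].copy()
        for _ in range(local_steps):                     # refinement
            best -= lr * grad(best)
        pop[scores.argmax()] = best                      # replace worst
    return pop[np.argmin([objective(x) for x in pop])]
```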
Digital ISSN: 1999-4893
Subject: Computer Science
  • 46
Publication date: 2020-04-06
Description: In this study, a new voting procedure for combining fuzzy logic based classifiers with other classifiers, called confidence-based voting, is proposed. The method combines two classifiers: when the first, a fuzzy classification system, returns a high confidence level, i.e., a large membership value, the fuzzy system is used to perform the classification; otherwise, the second classifier is applied. As a result, most of the samples are classified by the explainable and interpretable fuzzy system, and the second, more accurate but less interpretable classifier is applied only to the most difficult cases. To show the efficiency of the proposed approach, a set of experiments is performed on test datasets, as well as on two problems of estimating a person's emotional state from data collected by non-contact vital-sign sensors, which use the Doppler effect. To validate the accuracies of the proposed approach, statistical tests were used for comparison. The obtained results demonstrate the efficiency of the proposed technique, as it allows for both improving the classification accuracy and explaining the decision-making process.
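The voting rule itself is compact; a minimal sketch, assuming the fuzzy classifier returns both a label and a membership value, and that the threshold is a tuning parameter:

```python
def confidence_voting(fuzzy_clf, backup_clf, x, threshold=0.8):
    """Confidence-based voting sketch: trust the interpretable fuzzy
    classifier when its membership value is high; otherwise defer to a
    (typically more accurate) backup classifier. 'threshold' is an
    assumed tuning parameter, not a value from the paper."""
    label, membership = fuzzy_clf(x)   # fuzzy system returns both
    if membership >= threshold:
        return label                   # explainable decision
    return backup_clf(x)               # difficult case: fall back
```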
Digital ISSN: 1999-4893
Subject: Computer Science
  • 47
Publication date: 2020-05-22
Description: One of the main issues in any engineering design problem is to predict the performance of the component or system as accurately and realistically as possible, taking into account the variability of operating conditions or the uncertainty in input data (boundary conditions or geometry tolerances). In this paper, the propagation of uncertainty in boundary conditions through a numerical model of a supersonic nozzle is investigated. The statistics of the problem response functions are evaluated following a 'Surrogate-Based Uncertainty Quantification' approach, which involves: (a) the generation of a response surface starting from a design of experiments (DoE) in order to approximate the convergent-divergent 'physical' model (expensive to simulate); and (b) the application of a UQ technique based on Latin hypercube sampling (LHS) to the meta-model. Probability density functions are introduced for the inlet boundary conditions in order to quantify their effects on the output nozzle performance. The physical problem considered is very relevant for experimental tests of the UQ approach because of its high non-linearity: a small perturbation of the input data can drive the solution to a completely different output condition. The CFD simulations and the uncertainty quantification were performed by coupling the open source Dakota platform with the ANSYS Fluent® commercial CFD software; the process is automated through scripting. The procedure adopted in this work demonstrates the applicability of advanced simulation techniques (such as UQ analysis) to industrial technical problems. Moreover, the analysis highlights the practical use of uncertainty quantification techniques in predicting the performance of a nozzle design affected by off-design conditions with fluid-dynamic complexity due to strong nonlinearity.
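A minimal NumPy sketch of Latin hypercube sampling, the LHS step mentioned above; the variable bounds in the usage comment are hypothetical:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=np.random.default_rng(0)):
    """Latin hypercube sampling sketch: one sample per stratum in each
    dimension, with the strata randomly paired across dimensions."""
    bounds = np.asarray(bounds, dtype=float)          # shape (dim, 2)
    dim = len(bounds)
    strata = np.tile(np.arange(n_samples), (dim, 1))  # one row per dim
    strata = rng.permuted(strata, axis=1).T           # shuffle pairings
    u = (strata + rng.random((n_samples, dim))) / n_samples
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# Usage: 20 samples over hypothetical inlet pressure/temperature ranges
# X = latin_hypercube(20, [(1e5, 5e5), (280.0, 350.0)])
```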
Digital ISSN: 1999-4893
Subject: Computer Science
  • 48
Publication date: 2020-05-20
Description: It is very often the case that at some moment a time series process abruptly changes its underlying structure; it is therefore very important to accurately detect such change-points. In this problem, called change-point (or break-point) detection, we need a method that divides the original nonstationary time series into piecewise stationary segments. In this paper, we develop a flexible method to estimate the unknown number and the locations of change-points in autoregressive time series. In order to find the optimal value of a performance function based on the Minimum Description Length principle, we develop a Cross-Entropy algorithm for the combinatorial optimization problem. Our numerical experiments show that the proposed approach is very efficient in detecting multiple change-points when the underlying process has moderate to substantial variations in the mean and the autocorrelation coefficient. We also apply the proposed method to real data: the daily AUD/CNY exchange rate series from 2 January 2018 to 24 March 2020.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 49
Publication date: 2020-04-14
Description: In this paper, we consider a class of structured optimization problems whose objective function is the sum of two convex functions, f and h, which are not necessarily differentiable. We focus particularly on the case where the function f is general and its exact first-order information (function value and subgradient) may be difficult to obtain, while the function h is relatively simple. We propose a generalized alternating linearization bundle method for solving this class of problems, which can handle inexact first-order information of on-demand accuracy. The inexact information can be very general, covering various oracles, such as inexact, partially inexact and asymptotically exact ones. At each iteration, the algorithm solves two interrelated subproblems: one finds the proximal point of the polyhedral model of f plus the linearization of h; the other finds the proximal point of the linearization of f plus h. We establish global convergence of the algorithm under different types of inexactness. Finally, preliminary numerical results on a set of two-stage stochastic linear programming problems show that our method is very encouraging.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 50
Publication date: 2020-04-15
Description: Artificial joint surface coating is a hot topic in the interdisciplinary fields of manufacturing, materials and biomedicine. Due to the complex surface characteristics of artificial joints, there are problems with efficiency and precision in automatic cladding path planning for coating fabrication. In this study, a path planning method for a laser cladding robot for artificial joint surfaces is proposed. The key to this method is the topological reconstruction of the artificial joint surface. On the basis of the topological relation, a set of parallel planes is used to intersect the CAD model to generate a set of continuous, directed and equidistant surface transversals on the artificial joint surface. The arch height error method is used to extract robot interpolation points from the surface transversal lines according to machining accuracy requirements. The coordinates and normal vectors of the interpolation points are used to calculate the position and pose of the robot tool center point (TCP). To ensure that the laser beam is always perpendicular to the artificial joint surface, a novel laser cladding set-up with a robot is designed, in which the joint part, clamped by a six-axis robot, moves while the laser head is fixed on the workbench. The proposed methodology is validated with the planned path on the surface of an artificial acetabular cup using both simulation and experimentation with an industrial NACHI robot. The results indicate that the path planning method based on topological reconstruction is feasible and more efficient than the traditional robot teaching method.
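Interpreting the arch height error as the maximum chordal deviation between the sampled curve and the chord connecting consecutive interpolation points (an assumption; the abstract does not spell out the formula), a minimal sketch of extracting interpolation points from a sampled transversal line:

```python
import numpy as np

def arch_height_points(curve, tol):
    """Walk along a sampled curve (array of shape [N, 3]) and emit an
    interpolation point whenever the maximum perpendicular distance of
    the intermediate samples to the current chord exceeds 'tol'."""
    indices = [0]
    start = 0
    for end in range(2, len(curve)):
        chord = curve[end] - curve[start]
        seg = curve[start + 1:end] - curve[start]
        t = (seg @ chord) / (chord @ chord)     # projection parameters
        dist = np.linalg.norm(seg - np.outer(t, chord), axis=1)
        if dist.max() > tol:                    # arch height exceeded:
            indices.append(end - 1)             # fix previous sample
            start = end - 1
    indices.append(len(curve) - 1)
    return np.array(indices)
```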
Digital ISSN: 1999-4893
Subject: Computer Science
  • 51
Publication date: 2020-04-16
Description: This paper develops the generalized causality algorithm and applies it to a multitude of data from the fields of economics and finance. Specifically, our parameter-free algorithm efficiently determines the optimal non-linear mapping and identifies varying lead-lag effects between two given time series. This procedure allows an elastic adjustment of the time axis to find similar but phase-shifted sequences; structural breaks in their relationship are also captured. A large-scale simulation study validates the outperformance in the vast majority of parameter constellations in terms of efficiency, robustness, and feasibility. Finally, the presented methodology is applied to real data from the areas of macroeconomics, finance, and metals. The highest similarity is shown by the pairs of gross domestic product and consumer price index (macroeconomics), the S&P 500 index and the Deutscher Aktienindex (finance), and gold and silver (metals). In addition, the algorithm makes full use of its flexibility and identifies both various structural breaks and regime patterns over time, which are (partly) well documented in the literature.
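As a rough, linear stand-in for the paper's non-linear, elastic lead-lag mapping, a minimal sketch that scans fixed lags for the highest Pearson correlation:

```python
import numpy as np

def best_lag(x, y, max_lag=24):
    """Toy lead-lag detector: shift y against x and pick the lag with
    the highest Pearson correlation. Assumes equally long series; a
    crude proxy for the elastic time-axis adjustment described above."""
    best, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            a, b = x[lag:], y[:len(y) - lag]
        else:
            a, b = x[:lag], y[-lag:]
        if len(a) > 2:
            c = np.corrcoef(a, b)[0, 1]
            if c > best_corr:
                best, best_corr = lag, c
    return best, best_corr
```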
Digital ISSN: 1999-4893
Subject: Computer Science
  • 52
Publication date: 2020-04-16
Description: This paper presents a hybrid grasshopper optimization algorithm using a novel decoder and local search to solve instances of the open vehicle routing problem with capacity and distance constraints. The algorithm's decoder first defines the number of vehicles to be used and then partitions the clients, assigning them to the available routes. The algorithm performs a local search in three neighborhoods after decoding. When a new best solution is found, every route is locally optimized by solving a traveling salesman problem considering the depot and the clients in the route. Three sets containing a total of 30 benchmark problems from the literature were used to test the algorithm. The experiments considered two cases of the problem. In the first, the primary objective is to minimize the total number of vehicles and then the total distance to be traveled. In the second case, the total distance traveled by the vehicles is minimized. The obtained results showed the algorithm's proficient performance. For the first case, the algorithm was able to improve or match the best-known solutions for 21 of the 30 benchmark problems; for the second case, it found or improved the best-known solutions for 18 of the 30 benchmark problems. Finally, a case study from a real-life problem is included.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 53
Publication date: 2020-04-17
Description: Model Predictive Control constitutes an important element of any modern control system, and there is growing interest in this technology. More and more advanced predictive structures have been implemented: the first applications were in chemical engineering, and now Model Predictive Control can be found in almost all kinds of applications, from the process industry to embedded control systems and autonomous objects. Currently, each implementation of a control system requires strict financial justification. Application engineers need tools to measure and quantify the quality of the control and the potential for improvement that may be achieved by retrofitting control systems. Furthermore, a successful implementation of predictive control must conform to prior estimations not only during commissioning, but also during regular daily operations: the system must sustain the quality of control performance. The assessment of Model Predictive Control requires a suitable, often specific, methodology and comparative indicators. These demands establish the rationale of this survey. The paper therefore collects and summarizes control performance assessment methods specifically designed for and utilized in predictive control. These observations present a picture of the assessment technology, and further generalization leads to the formulation of a control assessment procedure to support control application engineers.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 54
Publication date: 2020-05-02
Description: Risk maturity evaluation is an efficient tool that can assist construction organizations in identifying their strengths and weaknesses in risk management processes and in taking the necessary actions to improve these processes. The accuracy of its results relies heavily on the quality of the responses provided by participants specialized in these processes across the organization. Risk maturity models reported in the literature give equal importance to participants' responses during model development, neglecting their level of authority in the organization as well as their level of expertise in risk management processes. Unlike the existing models, this paper presents a new risk maturity model that considers the relative importance of the responses provided by the participants in the model development. It considers their authority in the organization and their level of involvement in the risk management processes when calculating the relative weights associated with the risk maturity attributes. It employs an analytic network process (ANP) to model the interdependencies among the risk maturity attributes and utilizes fuzzy set theory to incorporate the uncertainty associated with the ambiguity of the responses used in the model development. The developed model allows construction organizations to have a more accurate and realistic view of their current performance in risk management processes. The application of the developed model was investigated by measuring the risk maturity level of an industrial partner working on civil infrastructure projects in Canada.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 55
Publication date: 2020-04-15
Description: This study examines the performance of 11 tree taper models in predicting the diameter of bark at any given height, as well as the total stem volume, of eight dominant tree species in the boreal forests of Lithuania. We develop eight new models using stochastic differential equations (SDEs). The symmetrical Vasicek model and the asymmetrical Gompertz model are used to describe tree taper evolution, as well as geometric-type diffusion processes. These models are compared with four traditionally used tree taper models by means of performance statistics and residual analysis. The observed dataset consists of longitudinal measurements of 3703 trees, representing the eight dominant tree species in Lithuania (pine, spruce, oak, ash, birch, black alder, white alder, and aspen). Overall, the best goodness-of-fit statistics for diameter predictions were produced by the SDE taper models. All results were implemented in the Maple computer algebra system using the "Statistics" and "VectorCalculus" packages.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 56
Publication date: 2020-04-30
Description: We consider initial value problems (IVPs) in which we are interested in a quantity of interest (QoI) that is the integral in time of a functional of the solution. For these, we analyze goal-oriented, time-adaptive methods that use only local error estimates. A local error estimate and a timestep controller for the step-wise contributions to the QoI are derived. We prove convergence of the error in the QoI as the tolerance goes to zero, under a controllability assumption. By analyzing global error propagation with respect to the QoI, we can identify possible issues and make performance predictions. Numerical tests verify these results. We compare performance with classical local-error-based time-adaptivity and with a posteriori adaptivity using the dual-weighted residual (DWR) method. For dissipative problems, local-error-based methods show better performance than DWR, and the goal-oriented method shows good results in most examples, with significant speedups in some cases.
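A minimal sketch of a classical local-error step-size controller of the kind such adaptive methods build on; the safety factor and growth limits are conventional, assumed values, and a goal-oriented variant would feed in a step-wise QoI error estimate instead:

```python
def new_stepsize(dt, err, tol, order,
                 fac=0.9, fac_min=0.2, fac_max=5.0):
    """Deadbeat step-size controller sketch: scale dt by
    (tol/err)^(1/(order+1)), with a safety factor and limits on how
    fast the step may shrink or grow between steps."""
    factor = fac * (tol / max(err, 1e-16)) ** (1.0 / (order + 1))
    return dt * min(fac_max, max(fac_min, factor))
```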
Digital ISSN: 1999-4893
Subject: Computer Science
  • 57
Publication date: 2020-05-02
Description: Upgrading ordinary streetlights to smart streetlights that help monitor traffic flow is a low-cost and pragmatic option for cities. Fine-grained classification of vehicles in the sight of smart streetlights is essential for intelligent transportation and smart cities. In order to improve the classification accuracy for distant cars, we propose a reformed YOLOv3 (You Only Look Once, version 3) algorithm to detect various types of automobiles, such as SUVs, sedans, taxis, commercial vehicles, small commercial vehicles, vans, buses, trucks and pickup trucks. Based on the UA-DETRAC-LITE dataset, manually labeled data are added to improve the data balance. First, data optimization for the vehicle targets is performed to improve the generalization ability and the position regression loss function of the model. The experimental results show that, within a range of 67 m and through scale optimization (i.e., by introducing multi-scale training and anchor clustering), the classification accuracies for trucks and pickup trucks are raised by 26.98% and 16.54%, respectively, and the overall accuracy is increased by 8%. Second, label smoothing and mixup optimization is performed to improve the generalization ability of the model; compared with the original YOLO algorithm, the accuracy of the proposed algorithm is improved by 16.01%. By also optimizing the position regression loss function with GIoU (Generalized Intersection over Union), the overall system accuracy reaches 92.7%, an improvement of 21.28% over the original YOLOv3 algorithm.
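Minimal sketches of the two generalization tricks mentioned above, label smoothing and mixup; the hyperparameter values are common defaults, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def smooth_labels(onehot, eps=0.1):
    """Label smoothing sketch: soften one-hot targets toward the
    uniform distribution over the k classes."""
    k = onehot.shape[-1]
    return onehot * (1.0 - eps) + eps / k

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixup sketch: a Beta-distributed convex combination of two
    samples and their (already smoothed or one-hot) labels."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```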
Digital ISSN: 1999-4893
Subject: Computer Science
  • 58
Publication date: 2020-05-21
Description: One of the most challenging aspects of medical modalities such as Computed Tomography (CT), as well as of hybrid techniques such as CT/PET (Computed Tomography/Positron Emission Tomography) and PET/MRI, is finding a balance between examination time, radiation dose, and image quality. The need for a dense sampling grid is associated with two major factors: image resolution enhancement, which strengthens human perception, and the interpretation of image features. All these aspects make unsupervised image processing much easier. The presented algorithm employs super-resolution reconstruction with high-accuracy motion field estimation at its core for Computed Tomography/Positron Emission Tomography (CT/PET) image enhancement. The suggested method starts by processing compressively sensed input signals. This paper shows that it is possible to achieve higher image resolution while keeping the same radiation dose. The purpose of this paper is to propose a highly effective CT/PET image reconstruction strategy allowing for simultaneous resolution enhancement and scanning time minimisation. The algorithm aims to overcome two major obstacles, image resolution limitation and algorithm reconstruction time efficiency, by combining a highly sparse Ridgelet-analysis-based sampling pattern and PET signal sensing with super-resolution (SR) image enhancement. Due to the diverse nature of Computed Tomography, the applied Ridgelet analysis proved efficient in reducing acquisition times while maintaining satisfying scan quality. This paper presents a super-resolution image enhancement algorithm designed to handle compressively sensed raw data from hybrid CT/PET scanners. The presented technique allows for improving image resolution while reducing motion artefacts and keeping scanning times at low levels.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 59
Publication date: 2020-05-18
Description: Sequential pattern mining is a fundamental data mining task with applications in several domains. We study two variants of this task. The first is the extraction of frequent sequential patterns, whose frequency in a dataset of sequential transactions is higher than a user-provided threshold; the second is the mining of true frequent sequential patterns, which appear with probability above a user-defined threshold in transactions drawn from the generative process underlying the data. We present the first sampling-based algorithm to mine, with high confidence, a rigorous approximation of the frequent sequential patterns from massive datasets. We also present the first algorithms to mine approximations of the true frequent sequential patterns with rigorous guarantees on the quality of the output. Our algorithms are based on novel applications of Vapnik-Chervonenkis dimension and Rademacher complexity, advanced tools from statistical learning theory, to sequential pattern mining. Our extensive experimental evaluation shows that our algorithms provide high-quality approximations for both problems we consider.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 60
Publication date: 2020-05-18
Description: Deep learning is now present in a wide range of services and applications, replacing and complementing other machine learning algorithms. Performing training and inference of deep neural networks using the cloud computing model is not viable for applications where low latency is required. Furthermore, the rapid proliferation of the Internet of Things will generate a large volume of data to be processed, which will soon overload the capacity of cloud servers. One solution is to process the data at the edge devices themselves, in order to alleviate cloud server workloads and improve latency. However, edge devices are less powerful than cloud servers, and many are subject to energy constraints. Hence, new resource- and energy-oriented deep learning models are required, as well as new computing platforms. This paper reviews the main research directions for edge computing deep learning algorithms.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 61
Publication date: 2020-05-18
Description: Inference of chemical compounds with desired properties is important for drug design, chemo-informatics, and bioinformatics, and various algorithmic and machine learning techniques have been applied to it. Recently, a novel method was proposed for this inference problem using both artificial neural networks (ANNs) and mixed integer linear programming (MILP). The method consists of a training phase and an inverse prediction phase. In the training phase, an ANN is trained so that its output takes a value nearly equal to a given chemical property for each sample. In the inverse prediction phase, a chemical structure is inferred using MILP and enumeration so that the structure can have a desired output value for the trained ANN. However, the framework has so far been applied only to acyclic and monocyclic chemical compounds. In this paper, we significantly extend the framework and present a new method for the inference problem for rank-2 chemical compounds (chemical graphs with cycle index 2). The results of computational experiments using chemical properties such as the octanol/water partition coefficient, melting point, and boiling point suggest that the proposed method is much more useful than the previous one.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 62
Publication date: 2020-06-12
Description: Research on mobile entities (called agents) in dynamic networks has attracted a lot of attention in recent years. Exploration, which requires an agent to visit all the nodes in the network, is one of the most fundamental problems. While the exploration of dynamic networks with complete information or with no information about network changes has been studied, an agent with partial information about the network changes has not been considered yet, despite its practical importance. In this paper, we consider the exploration of dynamic networks by a single agent with partial information about network changes. To the best of our knowledge, this is the very first work to investigate the exploration problem with such partial information. As a first step in this research direction, we focus on 1-interval connected rings as dynamic networks. We assume that the single agent has partial information called the (H, S) view, by which it always knows whether or not each of the links within H hops is available in each of the next S time steps. In this setting, we show that H + S ≥ n and S ≥ ⌈n/2⌉ (where n is the size of the network) are necessary and sufficient conditions to explore 1-interval connected rings. Moreover, we investigate upper and lower bounds on the exploration time. It is proven that the exploration time is O(n²) for ⌈n/2⌉ ≤ S < 2H′ − 1, O(n²/H + nH) for S ≥ max(⌈n/2⌉, 2H′ − 1), O(n²/H + n log H) for S ≥ n − 1, and Ω(n²/H) for any S, where H′ = min(H, ⌊n/2⌋).
Digital ISSN: 1999-4893
Subject: Computer Science
  • 63
Publication date: 2020-07-17
Description: Soil temperature (ST) plays a key role in the processes and functions of almost all ecosystems and is an essential parameter for various applications, such as agricultural production and geothermal development and utilization. Although numerous machine learning models have been used to predict ST with good results, most studies have focused on daily or monthly ST predictions, while hourly ST predictions are scarce. This paper presents a novel scheme for forecasting hourly ST using weather forecast data. The method treats the hourly ST prediction as the superposition of two parts: the prediction of the daily average ST and the prediction of the ST amplitude (the difference between the hourly ST and the daily average ST). Based on the results of a correlation analysis, we selected nine meteorological parameters combined with two temporal parameters as the input vector for predicting the daily average ST; for predicting the ST amplitude, seven meteorological parameters and one temporal parameter were selected as inputs. Two submodels were constructed using a deep bidirectional long short-term memory network (BiLSTM). For hourly ST prediction at five different soil depths at 30 sites, located in five common climates in the United States, the proposed method performed best at all depths for 30 stations (100%) in terms of the root mean square error (RMSE), for 27 stations (90%) in terms of the mean absolute error (MAE), and for 30 stations (100%) in terms of the coefficient of determination (R²). Moreover, the method displays a stronger ST prediction ability than traditional methods under all climate types involved in the experiment. The hourly ST it produces can be used as a driving parameter for high-resolution biogeochemical, land surface and hydrological models, and the approach can provide ideas for the analysis of other time series data.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 64
Publication date: 2020-06-14
Description: In Model Predictive Control (MPC) algorithms, control signals are generated after solving optimization problems. If the model used for prediction is linear, then the optimization problem is a standard, easy-to-solve quadratic programming problem with linear constraints. However, such an algorithm may offer insufficient performance when applied to a nonlinear control plant. On the other hand, if the model used for prediction is nonlinear, then a non-convex optimization problem must be solved at each algorithm iteration; numerical problems may then occur, and the time needed to calculate the control signals cannot be determined in advance. Therefore, approaches based on linearized models are preferred in practical applications. A fuzzy algorithm with an advanced generation of the prediction is proposed in this article. The prediction is obtained in such a way that the algorithm is formulated as a quadratic optimization problem but offers performance very close to that of the MPC algorithm with nonlinear optimization. The efficiency of the proposed approach is demonstrated in the control system of a nonlinear chemical plant: a CSTR (Continuous Stirred-Tank Reactor) with the van de Vusse reaction.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 65
Publication date: 2020-06-12
Description: One possible approach to solving difficult optimization problems is to apply population-based metaheuristics. Among such metaheuristics there is a special class where the search for the best solution is based on the collective behavior of decentralized, self-organized agents. This study proposes an approach in which a swarm of agents tries to improve solutions from a population of solutions, with the process carried out in parallel threads. The proposed algorithm, based on the mushroom-picking metaphor, was implemented in Scala in an Apache Spark environment. An extended computational experiment shows how introducing a combination of simple optimization agents and increasing the number of threads may improve the results obtained by the model for TSP (traveling salesman) and JSSP (job shop scheduling) problem instances.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 66
Publication date: 2020-06-26
Description: In this paper, we develop a novel computation model for Intuitionistic Fuzzy Values using fuzzy negations and Archimedean copulas. The structure of this computation model is based on extending the existing operations on intuitionistic fuzzy values with certain classes of fuzzy negations. Many properties of the proposed operations are investigated and proved. Additionally, we introduce the concepts of intuitionistic fuzzy Archimedean copula weighted arithmetic and geometric aggregation operators based on fuzzy negations, including a further analysis of their properties. Finally, using a case study from an already published paper, we show that our method has many advantages.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 67
Publication date: 2020-06-28
Description: Bayesian Networks (BNs) are probabilistic models commonly used for diagnosis in numerous domains (medicine, finance, transport, robotics, ...). In the case of autonomous vehicles, they can contribute to elaborating intelligent monitors that take the environmental context into account. In this paper, we show some of the main abilities of BNs that can help in the elaboration of fault detection, isolation and recovery (FDIR) modules. One of the main difficulties with the BN model is generally to elaborate it for the case under study; we therefore propose automatic generation techniques from failure mode and effects analysis (FMEA)-like tables using the design pattern approach. Once defined, these modules have to operate online in autonomous vehicles. In a second part, we propose a design methodology for embedding real-time, non-intrusive implementations of the BN modules on FPGA-SoC support, and we show that the FPGA implementation can offer an interesting speed-up at very limited energy cost. Lastly, we show how these BN modules can be incorporated into the decision-making model for the mission planning of unmanned aerial vehicles (UAVs). We illustrate the integration by means of two models: the Decision Network model, a straightforward extension of the BN model, and the BFM model, an extension of the Markov Decision Process (MDP) decision-making model incorporating a BN. We illustrate the different proposals with realistic examples and show that the hybrid implementation on FPGA-SoC can offer benefits.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 68
Publication date: 2020-06-04
Description: For solving a system of equations that needs few updates, such as a sparse system, the leading dichotomous coordinate descent (DCD) algorithm is better than the cyclic DCD algorithm because of its fast convergence. For sparse systems requiring a large number of updates, the cyclic DCD algorithm converges faster and has a lower error level than the leading DCD algorithm; however, the leading DCD algorithm has a faster convergence speed in the initial updates. In this paper, we propose a combination of leading and cyclic DCD iterations, the leading-cyclic DCD algorithm, to improve the convergence speed of the cyclic DCD algorithm. The proposed algorithm involves two steps. First, by properly selecting the number of updates of the solution vector used in the leading DCD algorithm, a solution is obtained from the leading DCD algorithm. Second, taking the output of the leading DCD algorithm as the initial value, an improved soft output is generated by the cyclic DCD algorithm with a large number of iterations. Numerical results demonstrate that when the solution sparsity γ is in the interval [1/8, 6/8], the proposed leading-cyclic DCD algorithm outperforms both the existing cyclic and leading DCD algorithms for all iterations.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 69
Publication date: 2020-06-18
Description: Often, multiple geophysical measurements are sensitive to the same subsurface parameters. In this case, joint inversions are mostly preferred over two (or more) separate inversions of the geophysical data sets, due to the expected reduction of non-uniqueness in the joint inverse solution. This reduction can be quantified using Bayesian inversions; however, standard Markov chain Monte Carlo (MCMC) approaches are computationally expensive for most geophysical inverse problems. We present the Kalman ensemble generator (KEG) method as an efficient alternative to standard MCMC inversion approaches. As a proof of concept, we provide two synthetic studies of the joint inversion of frequency domain electromagnetic (FDEM) and direct current (DC) resistivity data for a parameter model with vertical variation in electrical conductivity. For both studies, the joint results show a considerable improvement over the separate inversions. This improvement consists of (1) an uncertainty reduction in the posterior probability density function and (2) an ensemble mean that is closer to the synthetic true electrical conductivities. Finally, we apply the KEG joint inversion to FDEM and DC resistivity field data. The joint field data inversions show the same kind of improvement as the synthetic studies.
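A minimal NumPy sketch of an ensemble Kalman update in the spirit of the KEG (a stochastic-EnKF-style variant; the exact KEG update may differ in detail):

```python
import numpy as np

def kalman_ensemble_update(M, D, d_obs, R, rng=np.random.default_rng(0)):
    """Ensemble Kalman update sketch. M: parameter ensemble
    [n_ens, n_par]; D: predicted data per member [n_ens, n_obs];
    d_obs: observed data [n_obs]; R: observation error covariance."""
    Mm, Dm = M - M.mean(axis=0), D - D.mean(axis=0)
    n = M.shape[0] - 1
    C_md = Mm.T @ Dm / n                    # parameter-data covariance
    C_dd = Dm.T @ Dm / n                    # data covariance
    K = C_md @ np.linalg.inv(C_dd + R)      # Kalman gain
    # perturb observations per member (stochastic EnKF variant)
    d_pert = d_obs + rng.multivariate_normal(
        np.zeros(len(d_obs)), R, size=M.shape[0])
    return M + (d_pert - D) @ K.T           # updated ensemble
```

For a joint inversion, D would simply stack the predicted FDEM and DC responses, so both data sets constrain the same parameter ensemble.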
Digital ISSN: 1999-4893
Subject: Computer Science
  • 70
Publication date: 2020-08-27
Description: Delay-differential equations belong to the class of infinite-dimensional dynamical systems. However, it is often observed that solutions are rapidly attracted to smooth manifolds embedded in the finite-dimensional state space, called inertial manifolds. The computation of an inertial manifold yields an ordinary differential equation (ODE) model representing the long-term dynamics of the system. In particular, any attractors must be embedded in the inertial manifold when one exists, thereby reducing the study of these attractors to the ODE context, for which methods of analysis are well developed. This contribution presents a study of a previously developed method for constructing inertial manifolds, based on an expansion of the delayed term in small powers of the delay and subsequent solution of the invariance equation by the Fraser functional iteration method. The combined perturbative-iterative method is applied to several variations of a model for the expression of an inducible enzyme, where the delay represents the time required to transcribe messenger RNA and to translate that RNA into the protein. It is shown that inertial manifolds of different dimensions can be computed, and qualitatively correct inertial manifolds are obtained. Among other things, the dynamics confined to the computed inertial manifolds display Andronov-Hopf bifurcations at parameter values similar to those of the original DDE model.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 71
Publication date: 2020-07-17
Description: Collaborative filtering algorithms formulate personalized recommendations for a user by first analysing already entered ratings to identify other users with tastes similar to the target user's (termed near neighbours), and then using the opinions of the near neighbours to predict which items the target user would like. However, in sparse datasets, too few near neighbours can be identified, resulting in low-accuracy predictions and even a total inability to formulate personalized predictions. This paper addresses the sparsity problem by presenting an algorithm that uses robust predictions, that is, predictions deemed highly likely to be accurate, as derived ratings. The density of sparse datasets thereby increases, and improved rating prediction coverage and accuracy are achieved. The proposed algorithm, termed CFDR, is extensively evaluated using (1) seven widely used collaborative filtering datasets, (2) the two most widely used correlation metrics in collaborative filtering research, namely the Pearson correlation coefficient and cosine similarity, and (3) the two most widely used error metrics in collaborative filtering, namely the mean absolute error and the root mean square error. The evaluation results show that, by successfully increasing the density of the datasets, the capacity of collaborative filtering systems to formulate personalized and accurate recommendations is considerably improved.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 72
Publication date: 2020-07-17
Description: We define the notion of an approximate triangulation for a manifold M embedded in Euclidean space. The basic idea is to build a nested family of simplicial complexes whose vertices lie in M and to use persistent homology to find a complex in the family whose homology agrees with that of M. Our key examples are various Grassmann manifolds G_k(R^n).
Digital ISSN: 1999-4893
Subject: Computer Science
  • 73
Publication date: 2020-07-11
Description: Computer vision is currently one of the most exciting and rapidly evolving fields of science, affecting numerous industries. Research and development breakthroughs, mainly in the field of convolutional neural networks (CNNs), have opened the way to unprecedented sensitivity and precision in object detection and recognition tasks. Nevertheless, findings in recent years on the sensitivity of neural networks to additive noise, lighting conditions, and the completeness of the training dataset indicate that this technology still lacks the robustness needed for the autonomous robotic industry. In an attempt to bring computer vision algorithms closer to the capabilities of a human operator, the mechanisms of the human visual system were analyzed in this work. Recent studies show that the mechanisms behind the recognition process in the human brain include the continuous generation of predictions based on prior knowledge of the world. These predictions enable the rapid generation of contextual hypotheses that bias the outcome of the recognition process. This mechanism is especially advantageous in situations of uncertainty, when the visual input is ambiguous. In addition, the human visual system continuously updates its knowledge about the world based on the gaps between its predictions and the visual feedback. CNNs are feed-forward in nature and lack such top-down contextual attenuation mechanisms. As a result, although they process massive amounts of visual information during their operation, this information is not transformed into knowledge that can be used to generate contextual predictions and improve their performance. In this work, an architecture was designed that aims to integrate the concepts behind the top-down prediction and learning processes of the human visual system with state-of-the-art bottom-up object recognition models, e.g., deep CNNs. The work focuses on two mechanisms of the human visual system: anticipation-driven perception and reinforcement-driven learning. Imitating these top-down mechanisms, together with state-of-the-art bottom-up feed-forward algorithms, resulted in an accurate, robust, and continuously improving target recognition model.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 74
Publication date: 2020-07-15
Description: Modeling spatiotemporal representations is one of the most essential yet challenging issues in video action recognition. Existing methods lack the capacity to accurately model either the correlations between spatial and temporal features or the global temporal dependencies. Inspired by the two-stream network for video action recognition, we propose an encoder-decoder framework named the Two-Stream Bidirectional Long Short-Term Memory (LSTM) Residual Network (TBRNet), which takes advantage of the interaction between spatiotemporal representations and global temporal dependencies. In the encoding phase, the two-stream architecture, based on the proposed Residual Convolutional 3D (Res-C3D) network, extracts features with residual connections inserted between the two pathways, and these features are then fused to become the short-term spatiotemporal features of the encoder. In the decoding phase, those short-term spatiotemporal features are first fed into a temporal attention-based bidirectional LSTM (BiLSTM) network to obtain long-term bidirectional attention-pooling dependencies. Subsequently, those temporal dependencies are integrated with the short-term spatiotemporal features to obtain global spatiotemporal relationships. On two benchmark datasets, UCF101 and HMDB51, we verified the effectiveness of the proposed TBRNet in a series of experiments, and it achieved competitive or even better results compared with existing state-of-the-art approaches.
Digital ISSN: 1999-4893
Subject: Computer Science
  • 75
Publication date: 2020-07-11
Description: For piecewise linear functions f: R^n → R we show how their abs-linear representation can be extended to yield simultaneously their decomposition into a convex part f̌ and a concave part f̂, including a pair of generalized gradients ǧ, ĝ ∈ R^n. The latter satisfy strict chain rules and can be computed in the reverse mode of algorithmic differentiation, at a small multiple of the cost of evaluating f itself. It is shown how f̌ and f̂ can be expressed as a single maximum and a single minimum of affine functions, respectively. The two subgradients ǧ and −ĝ are then used to drive DCA algorithms, where the (convex) inner problem can be solved in finitely many steps, e.g., by a Simplex variant or the true steepest descent method. Using a reflection technique to update the gradients of the concave part, one can ensure finite convergence to a local minimizer of f, provided the Linear Independence Kink Qualification holds. For piecewise smooth objectives the approach can be used as an inner method for successive piecewise linearization.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 76
    Publication date: 2020-07-11
    Description: The technical debt (TD) in a software project refers to the adoption of an inadequate solution from its design to the source code. When developers admit the presence of technical debt in the source code, through comments or commit messages, it is called self-admitted technical debt (SATD). This aspect of TD has been the subject of numerous research studies, which have investigated its distribution, its impact on software quality, and its removal. This work focuses on the relationship between SATD and TD values: the study aims to compare the admitted technical debt with its objective measure by analyzing the trends of TD values during SATD removals. This was done using a dataset of SATD instances and their related removals in four open-source projects, with the SonarQube tool used to measure TD values. The analysis shows that SATD removals correspond to an effective reduction of TD values in only a few cases, while in numerous cases the classes indicated are removed entirely.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 77
    Publication date: 2020-07-07
    Description: Determining the contingency reserve is critical to project risk management. Classic methods of determining the contingency reserve rely significantly on historical data and fail to effectively incorporate certain types of uncertainty such as vagueness, ambiguity, and subjectivity. In this paper, an interval type-2 fuzzy risk analysis model (IT2FRAM) is introduced in order to determine the contingency reserve. In IT2FRAM, the membership functions for the linguistic terms used to describe the probability and impact of risk and opportunity events are developed, optimized, and aggregated using interval type-2 fuzzy sets and the principle of justifiable granularity. IT2FRAM is an extension of a fuzzy arithmetic-based risk analysis method that considers such uncertainties and addresses the limitations of probabilistic and deterministic contingency determination techniques. The contribution of IT2FRAM is that it considers the opinions of several subject matter experts to develop the membership functions of linguistic terms, and the effect of outlier opinions on those membership functions is reduced. IT2FRAM also enables the aggregation of non-linear membership functions into trapezoidal membership functions. A hypothetical case study is presented in order to illustrate the application of IT2FRAM in Fuzzy Risk Analyzer© (FRA©), a risk analysis software.
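    A minimal sketch of the central data structure, an interval type-2 fuzzy set for a linguistic term such as "high impact", modeled as a pair of trapezoidal membership functions bounding the footprint of uncertainty; all parameters below are hypothetical.

        import numpy as np

        def trapmf(x, a, b, c, d):
            """Trapezoidal membership on support [a, d] with plateau [b, c]."""
            return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

        x = np.linspace(0, 10, 101)
        upper = trapmf(x, 2.0, 4.0, 6.0, 8.0)        # upper membership function
        lower = 0.8 * trapmf(x, 2.5, 4.5, 5.5, 7.5)  # lower membership function
        fou_width = upper - lower                    # interval of membership grades
        print(float(fou_width.max()))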
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 78
    Publication date: 2020-07-16
    Description: A Sudoku puzzle often has a regular pattern in the arrangement of initial digits and it is typically made solvable with known solving techniques called strategies. In this paper, we consider the problem of generating such Sudoku instances. We introduce a rigorous framework to discuss solvability for Sudoku instances with respect to strategies. This allows us to handle not only known strategies but also general strategies under a few reasonable assumptions. We propose an exact method for determining Sudoku clues for a given set of clue positions that is solvable with a given set of strategies. This is the first exact method except for a trivial brute-force search. Besides the clue generation, we present an application of our method to the problem of determining the minimum number of strategy-solvable Sudoku clues. We conduct experiments to evaluate our method, varying the position and the number of clues at random. Our method terminates within 1 min for many grids. However, as the number of clues gets closer to 20, the running time rapidly increases and exceeds the time limit set to 600 s. We also evaluate our method for several instances with 17 clue positions taken from known minimum Sudokus to assess its efficiency in deciding unsolvability.
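    As a concrete, minimal example of what "solvable with a given set of strategies" means, the sketch below checks solvability under a single well-known strategy (naked singles); the paper's exact method handles general strategy sets, and this code is only illustrative.

        # An instance counts as solvable by this strategy set iff repeatedly
        # filling cells that have exactly one candidate completes the grid.
        def candidates(grid, r, c):
            used = set(grid[r]) | {grid[i][c] for i in range(9)} | {
                grid[i][j]
                for i in range(r - r % 3, r - r % 3 + 3)
                for j in range(c - c % 3, c - c % 3 + 3)}
            return {d for d in range(1, 10) if d not in used}

        def naked_single_solvable(grid):
            progress = True
            while progress:
                progress = False
                for r in range(9):
                    for c in range(9):
                        if grid[r][c] == 0:
                            cand = candidates(grid, r, c)
                            if not cand:
                                return False    # contradiction reached
                            if len(cand) == 1:
                                grid[r][c] = cand.pop()
                                progress = True
            return all(all(v != 0 for v in row) for row in grid)

        # Toy usage: a valid complete grid with four cells blanked out.
        base = [[(r * 3 + r // 3 + c) % 9 + 1 for c in range(9)] for r in range(9)]
        puzzle = [row[:] for row in base]
        for r, c in [(0, 0), (1, 4), (5, 7), (8, 8)]:
            puzzle[r][c] = 0
        print(naked_single_solvable(puzzle))    # True: naked singles suffice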
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 79
    Publication date: 2020-06-01
    Description: High-dimensional system identification problems can be efficiently addressed based on tensor decompositions and modelling. In this paper, we design a recursive least-squares (RLS) algorithm tailored for the identification of trilinear forms, namely RLS-TF. In our framework, the trilinear form is related to the decomposition of a third-order tensor (of rank one). The proposed RLS-TF algorithm acts on the individual components of the global impulse response, thus being efficient in terms of both performance and complexity. Simulation results indicate that the proposed solution outperforms not only the conventional RLS algorithm (which handles only the global impulse response) but also the previously developed trilinear counterparts based on the least-mean-squares algorithm.
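    In standard tensor notation, a trilinear form of this kind can be sketched as follows (the filter lengths $L_1, L_2, L_3$ are generic symbols, not taken from the paper):
    \[
    y(n) = \sum_{i=1}^{L_1} \sum_{j=1}^{L_2} \sum_{k=1}^{L_3} h_{1,i}\, h_{2,j}\, h_{3,k}\; x_{i,j,k}(n),
    \]
    so the global impulse response is the rank-one third-order tensor $\mathbf{h}_1 \circ \mathbf{h}_2 \circ \mathbf{h}_3$ with $L_1 L_2 L_3$ entries, whereas adapting the three components separately, as RLS-TF does, involves only $L_1 + L_2 + L_3$ coefficients.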
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 80
    Publication date: 2020-05-29
    Description: Image-based instance recognition is a difficult problem, in some cases even for the human eye. While the latest developments in computer vision, mostly driven by deep learning, have shown that high-performance models for classification or categorization can be engineered, the problem of discriminating similar objects with a low number of samples remains challenging. Advances from multi-class classification are applied to object matching problems, as the feature extraction techniques are the same; nature-inspired multi-layered convolutional nets learn the representations, and the output of such a model maps them to a multidimensional encoding space. A metric-based loss brings embeddings of the same instance close to each other. While these solutions achieve high classification performance, their efficiency suffers from the memory cost of the large number of parameters, which grows with the input image size. Upon shrinking the input, the model requires fewer trainable parameters, but performance decreases. This drawback is tackled by using compressed feature extraction, e.g., projections. In this paper, a multi-directional image projection transformation with fixed vector lengths (MDIPFL) is applied for one-shot recognition tasks, trained on Siamese and Triplet architectures. Results show that the MDIPFL-based approach achieves decent performance despite the significantly lower number of parameters.
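    A minimal sketch of the general idea of a fixed-length multi-directional projection descriptor follows; the actual MDIPFL definition is in the paper, and every parameter below is illustrative.

        import numpy as np
        from scipy.ndimage import rotate

        # Project the image along several directions and resample each profile
        # to a fixed length, yielding a compact, size-independent descriptor.
        def mdip(image, n_dirs=8, length=32):
            profiles = []
            for angle in np.linspace(0.0, 180.0, n_dirs, endpoint=False):
                rotated = rotate(image, angle, reshape=False, order=1)
                profile = rotated.sum(axis=0)        # column sums = projection
                grid = np.linspace(0, profile.size - 1, length)
                profiles.append(np.interp(grid, np.arange(profile.size), profile))
            return np.concatenate(profiles)          # (n_dirs * length,) vector

        descriptor = mdip(np.random.rand(64, 64))
        print(descriptor.shape)                      # (256,)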
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 81
    Publication date: 2020-05-29
    Description: The topic of this paper is modeling based on Hamiltonian spin interactions. Preliminary studies on the identification of the quasi-static magnetizing field in a magnetic system were presented. The random-field Ising model was then used to simulate the simplified ferromagnetic structure. The validation of the algorithms and the simulation tests were carried out for 2D and 3D model spaces containing at least $10^6$ unit cells. The research showed that the response of a slowly driven magnetic system did not depend on the external field sweep rate: changes in the spatial magnetization of the lattice were very similar below a certain rate of external field change known as the quasi-static boundary. The observed differences between the magnetization curves obtained under quasi-static conditions stemmed from the random nature of the molecular field and the avalanche-like magnetization process.
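    A minimal zero-temperature sketch of the avalanche dynamics described above (2D for brevity, not the paper's full code; all constants are illustrative): spins flip whenever they oppose their local field, and the external field is ramped quasi-statically.

        import numpy as np

        rng = np.random.default_rng(0)
        L, J, sigma = 64, 1.0, 1.5
        spins = -np.ones((L, L))
        h_rand = rng.normal(0.0, sigma, size=(L, L))  # quenched random fields

        def local_field(H):
            nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
                   + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
            return J * nbr + h_rand + H

        for H in np.linspace(-4.0, 4.0, 400):        # quasi-static field sweep
            while True:                              # relax avalanches at fixed H
                unstable = (spins * local_field(H)) < 0
                if not unstable.any():
                    break
                spins[unstable] *= -1
        print("magnetization:", spins.mean())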
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 82
    Publication date: 2020-07-23
    Description: Road departure crashes tend to be hazardous, especially in rural areas like Wyoming. Traffic barriers can be installed to mitigate the severity of those crashes; however, crashes involving the barriers themselves can still be severe. Besides various driver and environmental characteristics, roadway and barrier geometric characteristics play a critical role in the severity of barrier crashes. The Wyoming Department of Transportation (WYDOT) has initiated a project to identify and optimize the heights of those barriers that are below the design standard, while prioritizing them based on the monetary benefit: given the limited budget, barriers that need immediate attention are optimized first, followed by all other barriers below the design standard. In order to account for both the frequency and the severity of crashes, equivalent property damage only (EPDO) was considered. Data of this type, besides being over-dispersed, exhibit an excess of zeroes; thus, a two-component model was employed to provide a flexible way of addressing this problem. Besides this technique, a one-component hierarchical modeling approach was considered for comparison. This paper presents an empirical cost-benefit analysis based on Bayesian hierarchical machine learning techniques. After identifying the best model in terms of performance, measured by the deviance information criterion (DIC), the results were converted into an equation, and the equation was used in an automated machine learning procedure that generated costs based on the barriers' current conditions and then on the optimized barrier heights. The empirical analysis showed that cost-sensitive modeling and machine learning deployment can serve as an effective way to conduct cost-benefit analysis, by measuring the costs of barrier enhancements and the benefits added over the years, and consequently prioritizing barriers under the limited available budget. A comprehensive discussion of the two-component models, zero-inflated and hurdle, is included in the manuscript.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 83
    Publication date: 2020-07-22
    Description: We suggest a provable and practical approximation algorithm for fitting a set $P$ of $n$ points in $\mathbb{R}^d$ to a sphere. Here, a sphere is represented by its center $x \in \mathbb{R}^d$ and radius $r > 0$. The goal is to minimize the sum of distances $\sum_{p \in P} \big|\, \lVert p - x \rVert - r \,\big|$ to the points, up to a multiplicative factor of $1 \pm \varepsilon$, for a given constant $\varepsilon > 0$, over every such $r$ and $x$. Our main technical result is a data summarization of the input set, called a coreset, that approximates the above sum of distances on the original (big) set $P$ for every sphere. An accurate sphere can then be extracted quickly, via an inefficient exhaustive search, from the small coreset. Most articles focus mainly on sphere identification (e.g., circles in a 2D image) rather than finding the exact match (in the sense of extent measures), and do not provide approximation guarantees. We implement our algorithm and provide extensive experimental results on both synthetic and real-world data. We then integrate our algorithm into a mechanical pressure control system whose main bottleneck is tracking a falling ball. Full open source is also provided.
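    The objective itself is straightforward to evaluate; a minimal sketch with illustrative data:

        import numpy as np

        # Sum of absolute distances from each point to the sphere (x, r).
        def sphere_cost(P, x, r):
            return float(np.abs(np.linalg.norm(P - x, axis=1) - r).sum())

        rng = np.random.default_rng(1)
        P = rng.normal(size=(1000, 3))
        P /= np.linalg.norm(P, axis=1, keepdims=True)  # points on the unit sphere
        P += 0.01 * rng.normal(size=P.shape)           # plus a little noise
        print(sphere_cost(P, np.zeros(3), 1.0))        # small: near-perfect fit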
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 84
    Publication date: 2020-07-22
    Description: Intelligence is the ability to learn from experience and use domain experts' knowledge to adapt to new situations. In this context, an intelligent Recommender System should be able to learn from domain experts' knowledge and experience, as it is vital to know the domain in which the items will be recommended. Traditionally, Recommender Systems have been recognized as playlist generators for video/music services (e.g., Netflix and Spotify), e-commerce product recommenders (e.g., Amazon and eBay), or social content recommenders (e.g., Facebook and Twitter). However, Recommender Systems in modern enterprises are highly data-/knowledge-driven and may rely on users' cognitive aspects such as personality, behavior, and attitude. In this paper, we survey and summarize previously published studies on Recommender Systems to help readers understand our method's contributions to the field in this context. We discuss the current limitations of state-of-the-art approaches in Recommender Systems and the need for our new approach: a vision and a general framework for a new type of data-driven, knowledge-driven, and cognition-driven Recommender Systems, namely, Cognitive Recommender Systems. Cognitive Recommender Systems will be a new type of intelligent Recommender Systems that understand the user's preferences, detect changes in user preferences over time, predict the user's unknown favorites, and explore adaptive mechanisms to enable intelligent actions within compound and changing environments. We present a motivating scenario in banking and argue that existing Recommender Systems: (i) do not use domain experts' knowledge to adapt to new situations; (ii) may not be able to predict the ratings or preferences a customer would give to a product (e.g., loan, deposit, or trust service); and (iii) do not support data capture and analytics around customers' cognitive activities and use it to provide intelligent and time-aware recommendations.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 85
    Publication date: 2020-06-22
    Description: This work considers model reduction techniques that can substantially decrease the computational cost of simulating the parametrized Allen–Cahn equation. We first employ the proper orthogonal decomposition (POD) approach to reduce the number of unknowns in the full-order discretized system. Since POD cannot reduce the computational complexity of the nonlinearity in the Allen–Cahn equation, we also apply the discrete empirical interpolation method (DEIM) to approximate the nonlinear term for a substantial reduction in overall simulation time. However, in general, the POD-DEIM approach is less accurate than the POD approach, since it further approximates the nonlinear term. To increase the accuracy of the POD-DEIM approach, this work introduces an extension of the DEIM approximation based on the concept of Gappy POD (GPOD), which is optimal in the least-squares sense. The POD-GPOD approach is tested and compared with the POD and POD-DEIM approaches on the Allen–Cahn equation for both fixed and varying parameter values. The modified GPOD approximation introduced in this work is demonstrated to improve the accuracy of DEIM without sacrificing too much of the computational speedup; e.g., in one of our numerical tests, the POD-GPOD approach provides an approximate solution to the parametrized Allen–Cahn equation 200 times faster than the full-order system with an average error of order $\mathcal{O}(10^{-4})$. The POD-GPOD approach is therefore shown to be a promising technique that compromises between the accuracy of the POD approach and the efficiency of the POD-DEIM approach.
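    For orientation, the POD-Galerkin/DEIM structure the abstract builds on can be sketched as follows (generic notation, not quoted from the paper). With $V \in \mathbb{R}^{N \times k}$, $k \ll N$, collecting the leading POD modes, the reduced system for $u(t) \approx V a(t)$ reads
    \[
    \dot{a}(t) = V^{\top} A V\, a(t) + V^{\top} f\big(V a(t)\big),
    \]
    and DEIM, or Gappy POD with more sampling rows and a least-squares fit, replaces the still $N$-dimensional nonlinear term by
    \[
    f\big(V a\big) \approx U \big(S^{\top} U\big)^{+} S^{\top} f\big(V a\big),
    \]
    where $U$ spans snapshots of the nonlinearity and $S$ selects a few sampling rows, so only those few entries of $f$ are ever evaluated.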
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 86
    Publication date: 2020-06-23
    Description: The Evasion Problem is the question of whether—given a collection of sensors and a particular movement pattern over time—it is possible to stay undetected within the domain over the same stretch of time. It has been studied using topological techniques since 2006—with sufficient conditions for non-existence of an Evasion Path provided by de Silva and Ghrist; sufficient and necessary conditions with extended sensor capabilities provided by Adams and Carlsson; and sufficient and necessary conditions using sheaf theory by Krishnan and Ghrist. In this paper, we propose three algorithms for the Evasion Problem: one distributed algorithm extension of Adams' approach for evasion path detection, and two different approaches to evasion path enumeration.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 87
    Publication date: 2020-07-20
    Description: The open-domain Frame Problem is the problem of determining what features of an open task environment need to be updated following an action. Here we prove that the open-domain Frame Problem is equivalent to the Halting Problem and is therefore undecidable. We discuss two other open-domain problems closely related to the Frame Problem, the system identification problem and the symbol-grounding problem, and show that they are similarly undecidable. We then reformulate the Frame Problem as a quantum decision problem, and show that it is undecidable by any finite quantum computer.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 88
    Publication date: 2020-06-26
    Description: In this paper, a novel graph model for the Collision-Free Multiple Traveling Salesman Problem (CFMTSP) is proposed. In this problem, a group of vehicles start from different nodes in an undirected graph and must visit each node in the graph, following the well-known Traveling Salesman Problem (TSP) fashion, without any collision. This paper's main objective is to obtain collision-free routes for each vehicle while minimizing the traveling time of the slowest vehicle. The problem can be approached by assigning a speed to each vehicle, and a novel augmented graph model is proposed for this purpose that accommodates not only the node positions and inter-node distances but also the speeds of all the vehicles. The proposed augmented graph can be used to obtain optimal trajectories, i.e., routes and speeds, for all vehicles. An ant colony optimization (ACO) algorithm is used on the proposed augmented graph. Simulations show that the algorithm can satisfy the main objective. Factors affecting mission success, i.e., the inter-vehicle arrival time at a node, the number of vehicles, and the numbers of nodes and edges of the graph, are also discussed.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 89
    Publication date: 2020-06-20
    Description: A local convergence comparison is presented between two ninth-order algorithms for solving nonlinear equations. In earlier studies, derivatives up to the tenth order, which do not appear in the algorithms, were utilized to show convergence. Moreover, no computable error estimates, radii of convergence, or results on the uniqueness of the solution were given. The novelty of our study is that we address all these concerns by using only the first derivative, which actually appears in these algorithms; this is how the applicability of these algorithms is extended. Our technique provides a direct comparison between these algorithms under the same set of convergence criteria and can be used on other algorithms. Numerical experiments are utilized to test the convergence criteria.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 90
    Publication date: 2020-06-09
    Description: This paper considers the general lot sizing and scheduling problem with rich constraints exemplified by means of rework and lifetime constraints for defective items (GLSP-RP), which finds numerous applications in industrial settings, for example, the food processing industry and the pharmaceutical industry. To address this problem, we propose the Late Acceptance Hill-climbing Matheuristic (LAHCM) as a novel solution framework that exploits and integrates the late acceptance hill climbing algorithm and exact approaches for speeding up the solution process in comparison to solving the problem by means of a general solver. The computational results show the benefits of incorporating exact approaches within the LAHCM template, leading to high-quality solutions within short computational times.
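    The late acceptance rule at the core of such a framework is simple to state: a candidate is accepted if it beats the current cost or the cost recorded a fixed number of steps earlier. A minimal generic sketch follows (the integration with exact subproblem solvers in LAHCM is simplified away; all parameters are illustrative).

        import random

        def lahc(initial, neighbor, cost, history_len=50, iters=10_000):
            current, f_curr = initial, cost(initial)
            best, f_best = current, f_curr
            history = [f_curr] * history_len         # circular cost history
            for k in range(iters):
                cand = neighbor(current)
                f_cand = cost(cand)
                v = k % history_len
                if f_cand <= f_curr or f_cand <= history[v]:  # late acceptance
                    current, f_curr = cand, f_cand
                    if f_curr < f_best:
                        best, f_best = current, f_curr
                history[v] = f_curr
            return best, f_best

        # Toy usage: minimize (x - 42)^2 over the integers.
        best, val = lahc(0, lambda x: x + random.choice([-1, 1]),
                         lambda x: (x - 42) ** 2)
        print(best, val)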
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 91
    Publication date: 2020-07-23
    Description: The paper addresses issues associated with implementing GPC controllers in systems with multiple input signals. Depending on the method of identification, the resulting models may be of a high order and, when applied in a control/regulation law, may produce numerical errors due to the limitations of representing values as double-precision floating-point numbers. This phenomenon is to be avoided, because even if the model is correct, the resulting numerical errors will lead to poor control performance. An effective way to identify, and at the same time eliminate, this unfavorable feature is to reduce the model order. A method of model order reduction is presented in this paper that effectively mitigates these issues. The Generalized Predictive Control (GPC) algorithm is presented, followed by a discussion of the conditions that result in high-order models. Examples are included that demonstrate the problem, along with the subsequent results after the reduction. The obtained results and formulated conclusions are valuable for industry practitioners who implement predictive control in industry.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 92
    Publication date: 2020-06-08
    Description: In knowledge-based systems, besides obtaining good output prediction accuracy, it is crucial to understand the subset of input variables that have the most influence on the output, with the goal of gaining deeper insight into the underlying process. These requirements call for logistic model estimation techniques that provide a sparse solution, i.e., where coefficients associated with non-important variables are set to zero. In this work we compare the performance of two methods: the first is based on the well-known Least Absolute Shrinkage and Selection Operator (LASSO), which involves regularization with an $\ell_1$ norm; the second is the Relevance Vector Machine (RVM), which is based on a Bayesian implementation of the linear logistic model. The two methods are extensively compared in this paper, on real and simulated datasets. Results show that, in general, the two approaches are comparable in terms of prediction performance. RVM outperforms the LASSO both in terms of structure recovery (estimation of the correct non-zero model coefficients) and prediction accuracy as the dimensionality of the data increases. However, LASSO shows comparable performance to RVM when the dimensionality of the data is much higher than the number of samples, that is, $p \gg n$.
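    A minimal scikit-learn sketch of the $\ell_1$-regularized (LASSO-type) logistic model discussed above; the data and hyperparameters are illustrative only, and RVM is not shown since it has no standard scikit-learn implementation.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n, p = 100, 200                              # the p >> n regime
        X = rng.normal(size=(n, p))
        beta = np.zeros(p)
        beta[:5] = 2.0                               # only 5 truly relevant inputs
        y = (X @ beta + rng.normal(size=n) > 0).astype(int)

        model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
        model.fit(X, y)
        print("nonzero coefficients:", int(np.count_nonzero(model.coef_)))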
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 93
    Publication date: 2020-06-23
    Description: Predictive models are used in many different fields of science and engineering and are always prone to make faulty predictions. These faulty predictions can be more or less malignant depending on the model application. We describe fibers of failure (FiFa), a method to classify failure modes of predictive processes. Our method uses Mapper, an algorithm from topological data analysis (TDA), to build a graphical model of input data stratified by prediction errors. We demonstrate two ways to use the failure mode groupings: either to produce a correction layer that adjusts predictions by similarity to the failure modes, or to inspect members of the failure modes to illustrate and investigate what characterizes each failure mode. We demonstrate FiFa on two scenarios: a convolutional neural network (CNN) predicting MNIST images with added noise, and an artificial neural network (ANN) predicting the electrical energy consumption of an electric arc furnace (EAF). The correction layer on the CNN model improved its prediction accuracy significantly, while the inspection of failure modes for the EAF model provided guiding insights into the domain-specific reasons behind several high-error regions.
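    A minimal, dependency-free sketch of the Mapper construction underlying FiFa (the authors' actual pipeline is richer): the lens is the prediction error, the cover is a set of overlapping intervals, and points within each preimage are grouped by a crude distance-threshold clustering; nodes sharing points get an edge. All parameters are illustrative.

        import numpy as np

        def mapper_graph(X, lens, n_intervals=6, overlap=0.3, eps=1.0):
            lo, hi = lens.min(), lens.max()
            width = (hi - lo) / n_intervals
            nodes, edges = [], set()
            for i in range(n_intervals):
                a = lo + i * width - overlap * width
                b = lo + (i + 1) * width + overlap * width
                idx = np.flatnonzero((lens >= a) & (lens <= b))
                unassigned = set(idx.tolist())
                while unassigned:                    # greedy eps-ball clustering
                    seed = unassigned.pop()
                    cluster = {seed}
                    for j in list(unassigned):
                        if np.linalg.norm(X[j] - X[seed]) < eps:
                            cluster.add(j)
                            unassigned.discard(j)
                    nodes.append(cluster)
            for u in range(len(nodes)):
                for v in range(u + 1, len(nodes)):
                    if nodes[u] & nodes[v]:          # shared points -> edge
                        edges.add((u, v))
            return nodes, edges

        X = np.random.default_rng(0).normal(size=(200, 2))
        errors = np.abs(X[:, 0])                     # stand-in "prediction error" lens
        nodes, edges = mapper_graph(X, errors)
        print(len(nodes), len(edges))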
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 94
    Publication date: 2020-06-24
    Description: The memory use and network traffic caused by newly sequenced biological data have grown rapidly in recent years. Genomic projects such as HapMap and 1000 Genomes have contributed to the very large rise of databases and network traffic related to genomic data and to the development of new efficient technologies. The large-scale sequencing of DNA samples has brought new attention and produced new research, and thus the interest of the scientific community in genomic data has greatly increased. In a very short time, researchers have developed hardware tools, analysis software, algorithms, private databases, and infrastructures to support the research in genomics. In this paper, we analyze different approaches for compressing digital files generated by Next-Generation Sequencing tools containing nucleotide sequences, and we discuss and evaluate the compression performance of generic compression algorithms by comparing them with Quip, a system designed by Jones et al. specifically for genomic file compression. Moreover, we present a simple but effective technique for the compression of DNA sequences in which we only consider the relevant DNA data, and we experimentally evaluate its performance.
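    As a baseline illustration of DNA-specific coding (a sketch of the generic idea, not the exact technique evaluated in the paper), a four-letter nucleotide alphabet packs into 2 bits per base:

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASE = "ACGT"

        def pack(seq):
            bits = 0
            for ch in seq:
                bits = (bits << 2) | CODE[ch]
            pad = (-len(seq)) % 4                # pad to a whole byte (4 bases/byte)
            bits <<= 2 * pad
            return bits.to_bytes((len(seq) + pad) // 4, "big")

        def unpack(data, n):
            bits = int.from_bytes(data, "big")
            total = 4 * len(data)                # bases stored, including padding
            return "".join(BASE[(bits >> 2 * (total - 1 - i)) & 0b11]
                           for i in range(n))

        seq = "ACGTACGTGGA"
        packed = pack(seq)                       # 11 bases -> 3 bytes vs. 11 bytes
        assert unpack(packed, len(seq)) == seq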
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 95
    Publication date: 2020-06-24
    Description: Wireless sensor networks (WSNs) can provide data acquisition for long-term environment monitoring and are an important part of the Internet of Things (IoT). In the WSN-assisted IoT, energy-efficient routing algorithms are required to maintain a long network lifetime. In this paper, a DS evidence theory-based energy-balanced routing algorithm for network lifetime enhancement (EBRA-NLE) in the WSN-assisted IoT is proposed. From the perspective of energy balance and minimization of routing path energy consumption, three attribute indexes are established to evaluate the forward neighboring nodes. A route selection method based on DS evidence theory is then developed to comprehensively evaluate the nodes and select the optimal next hop. In order to avoid missing the ideal solution because of excessive differences between the index values, the sine function is used to adjust these differences. The simulation results show that the proposed EBRA-NLE has certain advantages in prolonging network lifetime and balancing energy between nodes.
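    The fusion step in DS evidence theory is Dempster's rule of combination; a minimal sketch follows (the attribute indexes and mass assignments below are hypothetical, not the paper's).

        from itertools import product

        # Focal sets are frozensets over the frame of discernment; masses sum to 1.
        def combine(m1, m2):
            fused, conflict = {}, 0.0
            for (a, x), (b, y) in product(m1.items(), m2.items()):
                inter = a & b
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + x * y
                else:
                    conflict += x * y            # mass on empty intersections
            return {k: v / (1.0 - conflict) for k, v in fused.items()}

        good, bad = frozenset({"good"}), frozenset({"bad"})
        either = good | bad
        m_energy = {good: 0.6, either: 0.4}      # evidence from residual energy
        m_dist = {good: 0.5, bad: 0.2, either: 0.3}  # evidence from distance index
        print(combine(m_energy, m_dist))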
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 96
    Publication date: 2020-06-11
    Description: Community detection is one of the most challenging and interesting problems in many research areas. Being able to detect highly linked communities in a network can lead to many benefits, such as understanding relationships between entities or interactions between biological genes, for instance. Two different immunological algorithms have been designed for this problem, called Opt-IA and Hybrid-IA, respectively. The main difference between the two algorithms lies in the search strategy and the related immunological operators developed: the former carries out a random search together with purely stochastic operators; the latter is instead based on a deterministic Local Search that tries to refine and improve the current solutions discovered. The robustness of Opt-IA and Hybrid-IA has been assessed on several real social networks. These same networks have also been used to compare both algorithms with seven other metaheuristics and with the well-known greedy optimization Louvain algorithm. The experimental analysis conducted shows that Opt-IA and Hybrid-IA are reliable optimization methods for community detection, outperforming all the compared algorithms.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 97
    Publication date: 2020-06-11
    Description: During the last decades, in the area of machine learning and data mining, the development of ensemble methods has gained significant attention from the scientific community. Machine learning ensemble methods combine multiple learning algorithms to obtain better predictive performance than could be obtained from any of the constituent learning algorithms alone. Combining multiple learning models has been theoretically and experimentally shown to provide significantly better performance than their single base learners. In the literature, ensemble learning algorithms constitute a dominant and state-of-the-art approach for obtaining maximum performance; thus, they have been applied to a variety of real-world problems, ranging from face and emotion recognition through text classification and medical diagnosis to financial forecasting.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 98
    Publication date: 2020-06-19
    Description: Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area both from the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 99
    Publication date: 2020-09-22
    Description: The autonomic composition of Virtual Networks (VNs) and Service Function Chains (SFCs) based on application requirements is significant for complex environments. In this paper, we use graph transformation in order to compose an Extended Virtual Network (EVN) that is based on different requirements, such as locations, low latency, redundancy, and security functions. The EVN can represent physical environment devices and virtual application and network functions. We build a generic Virtual Network Embedding (VNE) framework for transforming an Application Request (AR) to an EVN. Subsequently, we define a set of transformations that reflect preliminary topological, performance, reliability, and security policies. These transformations update the entities and demands of the VN and add SFCs that include the required Virtual Network Functions (VNFs). Additionally, we propose a greedy proactive heuristic for path-independent embedding of the composed SFCs. This heuristic is appropriate for real complex environments, such as industrial networks. Furthermore, we present an Industrial Internet of Things (IIoT) use case inspired by Industry 4.0 concepts, in which EVNs for remote asset management are deployed over three levels: manufacturing halls, edge computing, and cloud computing. We also implement the developed methods in Alevin and show exemplary mapping results from our use case. Finally, we evaluate the chain embedding heuristic using a random topology that is typical for such a use case, and show that it can improve the admission ratio and resource utilization with minimal overhead.
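    To make the flavor of greedy embedding concrete, here is a minimal sketch of a capacity-driven node-mapping step; it is purely illustrative, the paper's path-independent SFC heuristic is more elaborate, and all names below are hypothetical.

        def greedy_embed(virtual_demands, substrate_capacity):
            mapping, residual = {}, dict(substrate_capacity)
            # Largest demands first, each onto the node with most headroom.
            for v, demand in sorted(virtual_demands.items(), key=lambda kv: -kv[1]):
                host = max(residual, key=residual.get)
                if residual[host] < demand:
                    return None                  # embedding rejected
                mapping[v] = host
                residual[host] -= demand
            return mapping

        print(greedy_embed({"vnf_fw": 4, "vnf_ids": 2, "ctrl": 1},
                           {"edge1": 5, "edge2": 4, "cloud": 10}))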
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 100
    Publication date: 2020-09-23
    Description: Many sharing-economy platforms operate as follows. Owners list the availability of resources, prices, and contract-length limits. Customers propose contract start times and lengths. The owners decide immediately whether to accept or decline each proposal, even if the contract is for a future date. Accepted proposals generate revenue. Declined proposals are lost. At any decision epoch, the owner has no information regarding future proposals. The owner seeks easy-to-implement algorithms that achieve the best competitive ratio (CR). We first derive a lower bound on the CR of any algorithm. We then analyze the CRs of all intuitive "greedy" algorithms. We propose two new algorithms that have significantly better CRs than that of any greedy algorithm for certain parameter-value ranges. The key idea behind these algorithms is that owners may reserve some amount of capacity for late-arriving higher-value proposals in an attempt to improve revenue. Our contribution lies in operationalizing this idea with the help of algorithms that utilize thresholds. Moreover, we show that if non-optimal thresholds are chosen, then those may lead to poor CRs. We provide a rigorous method by which an owner can decide the best approach in their context by analyzing the CRs of greedy algorithms and those proposed by us.
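    The idea of reserving capacity for late higher-value proposals via a threshold rule can be sketched minimally as follows; the threshold values, capacities, and proposal stream below are illustrative, and the paper derives the actual optimal thresholds.

        def threshold_policy(proposals, capacity, reserve, value_cutoff):
            """Accept low-value proposals only while unreserved capacity remains."""
            accepted, used = [], 0
            for value, demand in proposals:          # arrival order
                high = value >= value_cutoff
                free = capacity - used - (0 if high else reserve)
                if demand <= free:
                    accepted.append((value, demand))
                    used += demand
            return accepted

        stream = [(1.0, 2), (1.2, 3), (3.0, 4), (2.8, 2)]
        print(threshold_policy(stream, capacity=8, reserve=4, value_cutoff=2.5))
        # The reserve turns away the early low-value (1.2, 3) proposal so that
        # the later high-value (3.0, 4) and (2.8, 2) proposals both fit.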
    Digital ISSN: 1999-4893
    Subject: Computer Science