ALBERT

All Library Books, Journals and Electronic Records Telegrafenberg

Filter
  • Articles  (404)
  • MDPI  (404)
  • 2015-2019  (404)
  • 1
    Publication Date: 2019
    Description: In this survey paper, we review various concepts of graph density, as well as associated theorems and algorithms. Our goal is motivated by the fact that, in many applications, it is a key algorithmic task to extract a densest subgraph from an input graph, according to some appropriate definition of graph density. While this problem has been the subject of active research for over half a century, with many proposed variants and solutions, new results still continuously emerge in the literature. This shows both the importance and the richness of the subject. We also identify some interesting open problems in the field. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
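As a companion to the survey above, here is a minimal sketch of the classical greedy peeling heuristic for the densest-subgraph problem under the average-degree density measure: repeatedly remove a minimum-degree vertex and remember the intermediate subgraph with the highest density. This is a generic illustration, not an algorithm from the paper; the function name and example graph are made up.

```python
# Greedy peeling for the densest subgraph (average-degree density), illustrative only.
def densest_subgraph(adj):
    """adj: dict mapping vertex -> set of neighbours (undirected graph)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}
    n_edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    best_density, best_vertices = 0.0, set(adj)
    while adj:
        density = n_edges / len(adj)
        if density > best_density:
            best_density, best_vertices = density, set(adj)
        v = min(adj, key=lambda u: len(adj[u]))   # peel a minimum-degree vertex
        n_edges -= len(adj[v])
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
    return best_vertices, best_density

# A 4-clique with one pendant vertex: the clique is the densest part.
graph = {1: {2, 3, 4}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {1, 2, 3, 5}, 5: {4}}
print(densest_subgraph(graph))   # ({1, 2, 3, 4}, 1.5)
```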
  • 2
    Publication Date: 2019
    Description: The skyline query and its variant queries are useful functions in the early stages of a knowledge-discovery process. The skyline query and its variant queries select a set of important objects, which are better than other common objects in the dataset. In order to handle big data, such knowledge-discovery queries must be computed in parallel distributed environments. In this paper, we consider an efficient parallel algorithm for the “K-skyband query” and the “top-k dominating query”, which are popular variants of the skyline query. We propose a method for computing both queries simultaneously in a parallel distributed framework called MapReduce, which is a popular framework for processing “big data” problems. Our extensive evaluation results validate the effectiveness and efficiency of the proposed algorithm on both real and synthetic datasets. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
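For orientation, a minimal single-machine sketch of the dominance test these queries are built on, with a naive skyline and K-skyband computation; this is background for the definitions, not the parallel MapReduce algorithm proposed in the paper, and all names are illustrative.

```python
# Dominance on points minimised in every dimension.
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Points not dominated by any other point."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def k_skyband(points, k):
    """Points dominated by fewer than k other points (k = 1 gives the skyline)."""
    return [p for p in points if sum(dominates(q, p) for q in points if q != p) < k]

data = [(1, 9), (3, 3), (5, 2), (4, 4), (9, 1)]
print(skyline(data))        # [(1, 9), (3, 3), (5, 2), (9, 1)]
print(k_skyband(data, 2))   # adds (4, 4), which is dominated only by (3, 3)
```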
  • 3
    Publication Date: 2019
    Description: A generalization of Ding’s construction is proposed that employs as a defining set the collection of the sth powers (s ≥ 2) of all nonzero elements in GF(p^m), where p ≥ 2 is prime. Some of the resulting codes are optimal or near-optimal and include projective codes over GF(4) that give rise to optimal or near-optimal quantum codes. In addition, the codes yield interesting combinatorial structures, such as strongly regular graphs and block designs.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 4
    Publication Date: 2019
    Description: Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems. For example, in some cases a big graph can be chopped into pieces that fit on one machine to be processed independently before stitching the results together, leading to certain suboptimality from the interaction among different pieces. In other cases, links between different parts may show up in the running time and/or network communications cost, hence the desire to have a small cut size. We study a distributed balanced-partitioning problem where the goal is to partition the vertices of a given graph into k pieces so as to minimize the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks such as MapReduce. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the first embedding, for example, via hierarchical clustering or Hilbert curves. Then we apply four different techniques, including local swaps and minimum cuts on the boundaries of partitions, as well as contraction and dynamic programming. In our empirical study, we compare the above techniques with each other, and also to previous work in distributed graph algorithms, for example, a label-propagation method, FENNEL, and Spinner. We report our results both on a private map graph and several public social networks, and show that our results beat previous distributed algorithms: For instance, compared to the label-propagation algorithm, we report an improvement of 15–25% in the cut value. We also observe that our algorithms admit scalable distributed implementation for any number of partitions. Finally, we explain three applications of this work at Google: (1) Balanced partitioning is used to route multi-term queries to different replicas in the Google Search backend in a way that reduces the cache miss rates by ≈0.5%, which leads to a double-digit gain in throughput of production clusters. (2) Applied to the Google Maps Driving Directions, balanced partitioning minimizes the number of cross-shard queries with the goal of saving in CPU usage. This system achieves load balancing by dividing the world graph into several “shards”. Live experiments demonstrate an ≈40% drop in the number of cross-shard queries when compared to a standard geography-based method. (3) In a job scheduling problem for our data centers, we use balanced partitioning to evenly distribute the work while minimizing the amount of communication across geographically distant servers. In fact, the hierarchical nature of our solution goes well with the layering of data center servers, where certain machines are closer to each other and have faster links to one another.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 5
    Publication Date: 2019
    Description: Analyzing the structure of a social network helps in gaining insights into interactions and relationships among users while revealing the patterns of their online behavior. Network centrality is a metric of the importance of a node in a network, which allows revealing the structural patterns and morphology of networks. We propose a distributed computing approach for the calculation of the network centrality value for each user using the MapReduce approach in the Hadoop platform, which allows faster and more efficient computation as compared to the conventional implementation. A distributed approach is scalable and helps in efficient computations of large-scale datasets, such as social network data. The proposed approach improves the calculation performance of degree centrality by 39.8%, closeness centrality by 40.7% and eigenvalue centrality by 41.1% using a Twitter dataset. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
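A minimal, single-process illustration of degree centrality computed in a map/reduce style over an edge list; this mirrors the shape of the computation only and is not the paper's Hadoop implementation. Function names are illustrative.

```python
from collections import Counter

# Map phase: emit (node, 1) for each endpoint of every undirected edge.
def map_phase(edges):
    for u, v in edges:
        yield u, 1
        yield v, 1

# Reduce phase: sum the counts per node and normalise by (n - 1).
def reduce_phase(pairs, n_nodes):
    counts = Counter()
    for node, one in pairs:
        counts[node] += one
    return {node: deg / (n_nodes - 1) for node, deg in counts.items()}

edges = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")]
print(reduce_phase(map_phase(edges), n_nodes=4))
# {'a': 0.667, 'b': 0.667, 'c': 1.0, 'd': 0.333} (approximately)
```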
  • 6
    Publication Date: 2019
    Description: Deep neural networks are successful learning tools for building nonlinear models. However, a robust deep learning-based classification model needs a large dataset. Indeed, these models are often unstable when they use small datasets. To solve this issue, which is particularly critical in light of the possible clinical applications of these predictive models, researchers have developed approaches such as virtual sample generation. Virtual sample generation significantly improves learning and classification performance when working with small samples. The main objective of this study is to evaluate the ability of the proposed virtual sample generation to overcome the small sample size problem, which is a feature of the automated detection of a neurodevelopmental disorder, namely autism spectrum disorder. Results show that our method enhances diagnostic accuracy from 84% to 95% using virtual samples generated on the basis of five actual clinical samples. The present findings show the feasibility of using the proposed technique to improve classification performance even in cases of clinical samples of limited size. Accounting for concerns in relation to small sample sizes, our technique represents a meaningful step forward in terms of pattern recognition methodology, particularly when it is applied to diagnostic classifications of neurodevelopmental disorders. In addition, the proposed technique has been tested on other available benchmark datasets. The experimental outcomes showed that the accuracy of the classification that used virtual samples was superior to the one that used the original training data without virtual samples.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 7
    Publication Date: 2019
    Description: Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last few decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold. One of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (such as computing the decomposition, preparing it, …). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code. The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle’s celebrated theorem, which states that every MSO-definable problem can be efficiently solved if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model checker for a small fragment of MSO₁ (that is, we do not consider “edge-set-based” problems). This fragment is powerful enough to describe many natural problems, and our model checker turns out to be very competitive against similar state-of-the-art tools.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 8
    Publication Date: 2019
    Description: Let V be a finite set of positive integers with sum equal to a multiple of the integer b. When does V have a partition into b parts so that all parts have equal sums? We develop algorithmic constructions which yield positive, albeit incomplete, answers for the following classes of sets V, where n is a given positive integer: (1) an initial interval {a ∈ ℤ⁺ : a ≤ n}; (2) an initial interval of primes {p ∈ ℙ : p ≤ n}, where ℙ is the set of primes; (3) a divisor set {d ∈ ℤ⁺ : d | n}; (4) an aliquot set {d ∈ ℤ⁺ : d | n, d < n}. Open general questions and conjectures are included for each of these classes. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
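A small backtracking sketch, for illustration only, that checks whether a finite set of positive integers can be split into b parts with equal sums; it is not one of the constructions from the paper, and the helper names are made up.

```python
# Backtracking check: can `values` be partitioned into b parts with equal sums?
def equal_sum_partition(values, b):
    total = sum(values)
    if total % b:
        return None
    target = total // b
    values = sorted(values, reverse=True)    # placing large items first prunes earlier
    parts = [[] for _ in range(b)]
    sums = [0] * b

    def place(i):
        if i == len(values):
            return True
        tried = set()
        for j in range(b):
            if sums[j] + values[i] <= target and sums[j] not in tried:
                tried.add(sums[j])           # skip parts in a symmetric state
                parts[j].append(values[i])
                sums[j] += values[i]
                if place(i + 1):
                    return True
                sums[j] -= values[i]
                parts[j].pop()
        return False

    return parts if place(0) else None

# The initial interval {1, ..., 8} sums to 36 and splits into 3 parts of sum 12.
print(equal_sum_partition(list(range(1, 9)), 3))   # [[8, 4], [7, 5], [6, 3, 2, 1]]
```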
  • 9
    Publication Date: 2019
    Description: The blockchain technique is becoming more and more popular due to its advantages, such as stability and its dispersed nature. This is an idea based on blockchain activity paradigms. Another important field is machine learning, which is increasingly used in practice. Unfortunately, the training or overtraining of artificial neural networks is very time-consuming and requires high computing power. In this paper, we propose using a blockchain technique to train neural networks. This type of activity is important due to the possible search for initial weights in the network, which affect faster training due to gradient decrease. We performed tests with much heavier calculations to indicate that such an action is possible. However, this type of solution can also be used for less demanding calculations, i.e., only a few iterations of training and finding a better configuration of initial weights.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 10
    Publication Date: 2019
    Description: In this study, we address the problem of compaction of Church numerals. Church numerals are unary representations of natural numbers on the scheme of lambda terms. We propose a novel decomposition scheme from a given natural number into an arithmetic expression using tetration, which enables us to obtain a compact representation of lambda terms that leads to the Church numeral of the natural number. For a natural number n, we prove that the size of the lambda term obtained by the proposed method is O((slog₂ n)(log n / log log n)). Moreover, we experimentally confirmed that the proposed method outperforms the binary representation of Church numerals on average when n is less than approximately 10,000. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
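For readers unfamiliar with the encoding, here is a tiny sketch of Church numerals written as Python lambdas, i.e., the unary representation that the paper compacts; it shows the encoding itself, not the proposed tetration-based decomposition.

```python
# A Church numeral n is the function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul  = lambda m: lambda n: lambda f: m(n(f))

def to_int(church):
    """Decode a Church numeral by counting applications of the successor on 0."""
    return church(lambda k: k + 1)(0)

three = succ(succ(succ(zero)))
print(to_int(three))                    # 3
print(to_int(add(three)(succ(zero))))   # 4
print(to_int(mul(three)(three)))        # 9
```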
  • 11
    Publication Date: 2019
    Description: In the vehicle routing problem with simultaneous pickup and delivery (VRPSPD), customers demanding both delivery and pickup operations have to be visited once by a single vehicle. In this work, we propose a fast randomized algorithm using a nearest neighbor strategy to tackle an extension of the VRPSPD in which the fleet of vehicles is heterogeneous. This variant is an NP-hard problem, which in practice makes it impossible to solve to proven optimality for large instances. To evaluate the proposal, we use benchmark instances from the literature and compare our results to those obtained by a state-of-the-art algorithm. Our approach presents very competitive results, not only improving several of the known solutions, but also running in a shorter time. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
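A minimal nearest-neighbour route-construction sketch (capacity-only, single vehicle type), just to illustrate the strategy the randomized algorithm builds on; it is not the paper's heterogeneous-fleet VRPSPD method, and all names and data are illustrative.

```python
import math

# Nearest-neighbour construction of routes under a simple capacity limit.
def nearest_neighbor_routes(depot, customers, demand, capacity):
    """customers: dict id -> (x, y); demand: dict id -> demand; depot: (x, y)."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    unvisited = set(customers)
    routes = []
    while unvisited:
        route, load, pos = [], 0, depot
        while True:
            feasible = [c for c in unvisited if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(pos, customers[c]))
            route.append(nxt)
            load += demand[nxt]
            pos = customers[nxt]
            unvisited.discard(nxt)
        routes.append(route)
    return routes

customers = {1: (0, 5), 2: (1, 5), 3: (9, 0), 4: (9, 1)}
demand = {1: 3, 2: 3, 3: 4, 4: 4}
print(nearest_neighbor_routes((0, 0), customers, demand, capacity=8))
# [[1, 2], [3, 4]]
```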
  • 12
    Publication Date: 2019
    Description: Nowadays, the amount of digitally available information has grown tremendously, with real-world data graphs outreaching the millions or even billions of vertices. Hence, community detection, where groups of vertices are formed according to a well-defined similarity measure, has never been more essential, affecting a vast range of scientific fields such as bio-informatics, sociology, discrete mathematics, nonlinear dynamics, digital marketing, and computer science. Even though an impressive amount of research has already been published to tackle this NP-hard problem, the existing methods and algorithms have largely proven inefficient and severely unscalable. In this regard, the purpose of this manuscript is to combine the network topology properties expressed by the loose similarity and the local edge betweenness, a recently proposed alternative to the Girvan–Newman edge betweenness measure, with the intrinsic user content information, in order to introduce a novel and highly distributed hybrid community detection methodology. The proposed approach has been thoroughly tested on various real social graphs, compared against classic divisive community detection algorithms that serve as baselines, and shown in practice to be exceptionally scalable, highly efficient, and adequately accurate in terms of revealing the underlying network hierarchy.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 13
    Publication Date: 2019
    Description: Herein, robust pole placement controller design for linear uncertain discrete-time dynamic systems is addressed. The adopted approach uses so-called “D-regions” in which the closed-loop system poles are required to lie. The discrete-time pole regions corresponding to the prescribed damping of the resulting closed-loop system are studied. The key issue is to determine an appropriate convex approximation to the originally non-convex discrete-time system pole region, so that numerically efficient robust controller design algorithms based on Linear Matrix Inequalities (LMI) can be used. Several alternatives for relatively simple inner approximations and their corresponding LMI descriptions are presented. The developed LMI region for the prescribed damping can be arbitrarily combined with other LMI pole limitations (e.g., stability degree). Simple algorithms to calculate the matrices for the LMI representation of the proposed convex pole regions are provided in a concise way. The results and their use in a robust controller design are illustrated on a case study of a laboratory magnetic levitation system.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 14
    Publication Date: 2019
    Description: The objective of the cell suppression problem (CSP) is to protect sensitive cell values in tabular data under the presence of linear relations concerning marginal sums. Previous algorithms for solving CSPs ensure that every sensitive cell has enough uncertainty on its values based on the interval width of all possible values. However, we find that every deterministic CSP algorithm is vulnerable to an adversary who possesses the knowledge of that algorithm. We devise a matching attack scheme that narrows down the ranges of sensitive cell values by matching the suppression pattern of an original table with that of each candidate table. Our experiments show that actual ranges of sensitive cell values are significantly narrower than those assumed by the previous CSP algorithms.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 15
    Publication Date: 2018
    Description: Wu et al. (2014) showed that under the small set expansion hypothesis (SSEH) there is no polynomial-time approximation algorithm with any constant approximation factor for several graph width parameters, including tree-width, path-width, and cut-width. In this paper, we extend this line of research by exploring other graph width parameters: We obtain similar approximation hardness results under the SSEH for rank-width and maximum induced matching-width, while at the same time we show the approximation hardness of carving-width, clique-width, NLC-width, and boolean-width. We also give a simpler proof of the approximation hardness of tree-width, path-width, and cut-width than that of Wu et al.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 16
    Publication Date: 2018
    Description: This paper develops a bias compensation-based parameter and state estimation algorithm for the observability canonical state-space system corrupted by colored noise. The state-space system is transformed into a linear regressive model by eliminating the state variables. Based on the determination of the noise variance and noise model, a bias correction term is added into the least squares estimate, and the system parameters and states are computed interactively. The proposed algorithm can generate the unbiased parameter estimate. Two illustrative examples are given to show the effectiveness of the proposed algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 17
    Publication Date: 2018
    Description: The Recurrent Neural Network (RNN) utilizes dynamically changing time information through time cycles, so it is very suitable for tasks with time-sequence characteristics. However, as the number of layers increases, the vanishing gradient problem occurs in the RNN. The Grid Long Short-Term Memory (GridLSTM) recurrent neural network can alleviate this problem in two dimensions by taking advantage of the two dimensions calculated in time and depth. In addition, a time-sequence task depends on information both before and after the current moment. In this paper, we propose a method that takes into account both context sensitivity and gradient problems, namely the Bidirectional Grid Long Short-Term Memory (BiGridLSTM) recurrent neural network. This model not only takes advantage of the grid architecture, but it also captures information around the current moment. A large number of experiments on the LibriSpeech dataset show that BiGridLSTM is superior to other deep LSTM models and unidirectional LSTM models, and, when compared with GridLSTM, it achieves about a 26 percent improvement. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
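As background for the bidirectional idea (not the paper's BiGridLSTM architecture), a minimal Keras sketch of wrapping LSTM layers in a Bidirectional wrapper so the model sees context on both sides of the current frame; the layer sizes, feature dimension, and class count are arbitrary placeholders.

```python
import tensorflow as tf

# A plain bidirectional LSTM over variable-length feature sequences (e.g. audio frames).
num_features, num_classes = 40, 29

model = tf.keras.Sequential([
    tf.keras.Input(shape=(None, num_features)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(128, return_sequences=True)),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```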
  • 18
    Publication Date: 2019
    Description: There are two main challenges in wireless multimedia sensor networks: energy constraints and providing DiffServ. In this paper, a joint flow control, routing, scheduling, and power control scheme based on a Lyapunov optimization framework is proposed to increase network lifetime and scheduling fairness. For an adaptive distribution of transmission opportunities, a differentiated queueing services (DQS) scheme is adopted for maintaining data queues. In the Lyapunov function, different types of queues are normalized for a unified dimension. To prolong network lifetime, control coefficients are designed according to the characteristics of the wireless sensor networks. The power control problem is proved to be a convex optimization problem and two optimal algorithms are discussed. Simulation results show that, compared with existing schemes, the proposed scheme can achieve a better trade-off between QoS performance and network lifetime. The simulation results also show that the scheme utilizing the distributed media access control scheme in scheduling performs best in the transmission of real-time services.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 19
    Publication Date: 2019
    Description: We propose in this paper a two-phase approach that decomposes the process of solving the three-dimensional single Container Loading Problem (CLP) into subsequent tasks: (i) the generation of blocks of boxes and (ii) the loading of blocks into the container. The first phase is deterministic, and it is performed by means of constructive algorithms from the literature. The second phase is non-deterministic, and it is performed with the use of Generate-and-Solve (GS), a problem-independent hybrid optimization framework based on problem instance reduction that combines a metaheuristic with an exact solver. Computational experiments performed on benchmark instances indicate that our approach presents competitive results compared to those found by state-of-the-art algorithms, particularly for problem instances consisting of a few types of boxes. In fact, we present new best solutions for classical instances from groups BR1 and BR2.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 20
    Publication Date: 2019
    Description: In this paper, a novel constraint-following control for uncertain robot manipulators that is inspired by analytical dynamics is developed. The motion can be regarded as external constraints of the system. However, it is not easy to obtain explicit equations for dynamic modeling of constrained systems. For a multibody system subject to motion constraints, it is common practice to introduce Lagrange multipliers, but using these to obtain explicit dynamical equations is a very difficult task. In order to obtain such equations more simply, motion constraints are handled here using the Udwadia-Kalaba equation (UKE). Then, considering that real-life robot manipulators are usually uncertain (but bounded), continuous controllers are used to compensate for the uncertainties. No linearizations/approximations of the robot manipulator systems are made throughout, and the tracking errors are bounded. A redundant manipulator of the SCARA type serves as the example to illustrate the methodology. Numerical results demonstrate the simplicity and ease of implementation of the methodology.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 21
    Publication Date: 2019
    Description: To improve the overall accuracy of tidal forecasting and ameliorate the low accuracy of single harmonic analysis, this paper proposes a combined tidal forecasting model based on harmonic analysis and autoregressive integrated moving average–support vector regression (ARIMA-SVR). In tidal analysis, the resultant tide can be considered as a superposition of the astronomical tide level and the non-astronomical tidal level, which are affected by the tide-generating force and environmental factors, respectively. The tidal data are de-noised via wavelet analysis, and the astronomical tide level is subsequently calculated via harmonic analysis. The residual sequence generated via harmonic analysis is used as the sample dataset of the non-astronomical tidal level, and the tidal height of the system is calculated by the ARIMA-SVR model. Finally, the tidal values are predicted by linearly summing the calculated results of both systems. The simulation results were validated against the measured tidal data at the tidal station of Bay Waveland Yacht Club, USA. By considering the residual non-astronomical tide level effects (which are ignored in traditional harmonic analysis), the combined model improves the accuracy of tidal prediction. Moreover, the combined model is feasible and efficient.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 22
    Publication Date: 2018
    Description: This paper presents a fast algorithm for texture-less object recognition, which is designed to be robust to cluttered backgrounds and small transformations. At its core, the proposed method demonstrates a two-stage template-based procedure using an orientation compressing map and discriminative regional weight (OCM-DRW) to effectively detect texture-less objects. In the first stage, the proposed method quantizes and compresses all the orientations in a neighborhood to obtain the orientation compressing map, which is then used to generate a set of possible object locations. To recognize the object in these possible object locations, the second stage computes the similarity of each possible object location with the learned template by using the discriminative regional weight, which can effectively distinguish different categories of objects with similar parts. Experiments on publicly available texture-less object datasets indicate that, apart from yielding efficient computational performance, the proposed method also attains remarkable recognition rates surpassing recent state-of-the-art texture-less object detectors in the presence of high clutter, occlusion, and scale-rotation changes. It improves the accuracy and speed by 8% and 370% respectively, relative to the previous best result on the D-Textureless dataset.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 23
    Publication Date: 2018
    Description: A finite-difference approach with non-uniform meshes is presented for simulating magnetotelluric responses in 2D structures. We present the calculation formula of this scheme from the boundary value problem of the electric and magnetic fields, and compare finite-difference solutions with finite-element numerical results and analytical solutions of a 1D model. First, a homogeneous half-space model was tested, and the finite-difference approach provides very good accuracy for 2D magnetotelluric modeling. Then we compared the results to the analytical solutions for a two-layered geo-electric model; the relative errors of the apparent resistivity and the impedance phase both increased as the frequency increased. Finally, we compare our finite-difference simulation results for the COMMEMI 2D-0 model with the finite-element solutions. Both results are in close agreement with each other. These comparisons confirm the validity and reliability of our finite-difference algorithm. Moreover, a future project will extend the 2D structures to 3D, where non-uniform meshes should perform especially well.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 24
    Publication Date: 2018
    Description: In this paper, the optimal designs of type-1 and interval type-2 fuzzy systems for the classification of the heart rate level are presented. The contribution of this work is a proposed approach for achieving the optimal design of interval type-2 fuzzy systems for the classification of the heart rate in patients. The fuzzy rule base was designed based on the knowledge of experts. Optimization of the membership functions of the fuzzy systems is done in order to improve the classification rate and provide a more accurate diagnosis, and for this goal the Bird Swarm Algorithm was used. Two different type-1 fuzzy systems are designed and optimized, the first one with trapezoidal membership functions and the second with Gaussian membership functions. Once the best type-1 fuzzy systems have been obtained, these are considered as a basis for designing the interval type-2 fuzzy systems, where the footprint of uncertainty was optimized to find the optimal representation of uncertainty. After performing different tests with patients and comparing the classification rate of each fuzzy system, it is concluded that fuzzy systems with Gaussian membership functions provide a better classification than those designed with trapezoidal membership functions. Additionally, tests were performed with the Crow Search Algorithm to carry out a performance comparison, with Bird Swarm Algorithm being the one with the best results.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 25
    Publication Date: 2018
    Description: Hinge joint damage is a typical form of damage occurring in simply supported slab bridges, which can present adverse effects on the overall force distribution of the structure. However, damage identification methods of hinge joint damage are still limited. In this study, a damage identification algorithm for simply supported hinged-slab bridges based on the modified hinge plate method (MHPM) and artificial bee colony (ABC) algorithms was proposed by considering the effect of hinge damage conditions on the lateral load distribution (LLD) of structures. Firstly, MHPM was proposed and demonstrated, which is based on a traditional hinge plate method by introducing relative displacement as a damage factor to simulate hinge joint damage. The effectiveness of MHPM was verified through comparison with the finite element method (FEM). Secondly, damage identification was treated as the inverse problem of calculating the LLD in damage conditions of simply supported slab bridges. Four ABC algorithms were chosen to solve the problem due to its simple structure, ease of implementation, and robustness. Comparisons of convergence speed and identification accuracy with genetic algorithm and particle swarm optimization were also conducted. Finally, hinged bridges composed of four and seven slabs were studied as numerical examples to account for the feasibility and correctness of the proposed method. The simulation results revealed that the proposed algorithm could identify the location and degree of damaged joints efficiently and precisely.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 26
    Publication Date: 2018
    Description: In the paper, we tackle the least squares estimators of the Vasicek-type model driven by sub-fractional Brownian motion: dX_t = (μ + θX_t)dt + dS_t^H, t ≥ 0, with X_0 = 0, where S^H is a sub-fractional Brownian motion whose Hurst index H is greater than 1/2, and μ ∈ ℝ, θ ∈ ℝ₊ are two unknown parameters. Based on the so-called continuous observations, we suggest the least squares estimators of μ and θ and discuss the consistency and asymptotic distributions of the two estimators. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
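For intuition only, a discrete-time least-squares sketch for a Vasicek-type drift dX_t = (μ + θX_t)dt + noise, estimated from an Euler-discretised path by regressing increments on (1, X_t). This is a standard discretised approximation with ordinary Brownian noise and a mean-reverting θ; it is not the continuous-observation, sub-fractional setting analysed in the paper, and all parameter values are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Euler-discretised path of dX = (mu + theta * X) dt + sigma dW.
mu_true, theta_true, sigma, dt, n = 1.0, -0.5, 0.2, 0.01, 200_000
noise = sigma * np.sqrt(dt) * rng.standard_normal(n)
x = np.zeros(n + 1)
for k in range(n):
    x[k + 1] = x[k] + (mu_true + theta_true * x[k]) * dt + noise[k]

# Least squares: regress increments dX on (1, X) to recover (mu * dt, theta * dt).
dx = np.diff(x)
design = np.column_stack([np.ones(n), x[:-1]])
coef, *_ = np.linalg.lstsq(design, dx, rcond=None)
mu_hat, theta_hat = coef / dt
print(mu_hat, theta_hat)   # close to 1.0 and -0.5 for a long path
```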
  • 27
    Publication Date: 2018
    Description: Container ships must pass through multiple ports of call during a voyage. Therefore, forecasting container volume information at the port of origin followed by sending such information to subsequent ports is crucial for container terminal management and container stowage personnel. Numerous factors influence container allocation to container ships for a voyage, and the degree of influence varies, engendering a complex nonlinearity. Therefore, this paper proposes a model based on gray relational analysis (GRA) and mixed kernel support vector machine (SVM) for predicting container allocation to a container ship for a voyage. First, in this model, the weights of influencing factors are determined through GRA. Then, the weighted factors serve as the input of the SVM model, and SVM model parameters are optimized through a genetic algorithm. Numerical simulations revealed that the proposed model could effectively predict the number of containers for container ship voyage and that it exhibited strong generalization ability and high accuracy. Accordingly, this model provides a new method for predicting container volume for a voyage.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 28
    Publication Date: 2019
    Description: Software defect prediction is an important means to guarantee software quality. Because there are not sufficient historical data within a project to train the classifier, cross-project defect prediction (CPDP) has been recognized as a fundamental approach. However, traditional defect prediction methods use feature attributes to represent samples, which cannot avoid negative transfer and may result in a poorly performing model in CPDP. This paper proposes a multi-source cross-project defect prediction method based on dissimilarity space (DM-CPDP). This method not only retains the original information, but also captures the relationship with other objects, which enhances the discriminant ability of the sample attributes with respect to the class label. The method first uses a density-based clustering method to construct the prototype set from the cluster centers of samples in the target set. Then, the arc-cosine kernel is used to calculate the sample dissimilarities between the prototype set and the source domain or the target set to form the dissimilarity space. In this space, the training set is obtained with the earth mover’s distance (EMD) method. For the unlabeled samples converted from the target set, the k-Nearest Neighbor (KNN) algorithm is used to label those samples. Finally, the model is learned from the training data based on the TrAdaBoost method and used to predict new potential defects. The experimental results show that this approach has better performance than other traditional CPDP methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 29
    Publication Date: 2018
    Description: Images may be corrupted by salt and pepper impulse noise during image acquisitions or transmissions. Although promising denoising performances have been recently obtained with sparse representations, how to restore high-quality images remains challenging and open. In this work, image sparsity is enhanced with a fast multiclass dictionary learning, and then both the sparsity regularization and robust data fidelity are formulated as minimizations of L0-L0 norms for salt and pepper impulse noise removal. Additionally, a numerical algorithm of modified alternating direction minimization is derived to solve the proposed denoising model. Experimental results demonstrate that the proposed method outperforms the compared state-of-the-art ones on preserving image details and achieving higher objective evaluation criteria.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 30
    Publication Date: 2018
    Description: In this paper, a novel approach to facial expression recognition based on the discrete separable shearlet transform (DSST) and normalized mutual information feature selection is proposed. The approach can be divided into five steps. First, all test and training images are preprocessed. Second, DSST is applied to the preprocessed facial expression images, and all the transformation coefficients are obtained as the original feature set. Third, an improved normalized mutual information feature selection is proposed to find the optimal feature subset of the original feature set, thus we can retain the key classification information of the original data. Fourth, the feature extraction and selection of the feature space is reduced by employing linear discriminant analysis. Finally, a support vector machine is used to recognize the expressions. In this study, experimental verification was carried out on four open facial expression databases. The results show that this method can not only improve the recognition rate of facial expressions, but also significantly reduce the computational complexity and improve the system efficiency.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 31
    Publication Date: 2018
    Description: Discriminative correlation filters (DCFs) have been shown to perform superiorly in visual object tracking. However, visual tracking is still challenging when the target objects undergo complex scenarios such as occlusion, deformation, scale changes and illumination changes. In this paper, we utilize the hierarchical features of convolutional neural networks (CNNs) and learn a spatial-temporal context correlation filter on convolutional layers. Then, the translation is estimated by fusing the response scores of the filters on the three convolutional layers. In terms of scale estimation, we learn a discriminative correlation filter to estimate scale from the best confidence results. Furthermore, we propose a re-detection activation discrimination method to improve the robustness of visual tracking in the case of tracking failure and an adaptive model update method to reduce tracking drift caused by noisy updates. We evaluate the proposed tracker with DCFs and deep features on OTB benchmark datasets. The tracking results demonstrate that the proposed algorithm is superior to several state-of-the-art DCF methods in terms of accuracy and robustness.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 32
    Publication Date: 2018
    Description: The amount of information currently generated in the world has been increasing exponentially, raising the question of whether all acquired data is relevant for the learning algorithm process. If a subset of the data does not bring enough innovation, data-selection strategies can be employed to reduce the computational complexity cost and, in many cases, improve the estimation accuracy. In this paper, we explore some adaptive filtering algorithms whose characteristic features are their fast convergence and data selection. These algorithms incorporate a prescribed data-selection strategy and are compared in distinct applications environments. The simulation results include both synthetic and real data.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 33
    Publication Date: 2018
    Description: Path planning of unmanned aerial vehicles (UAVs) in threatening and adversarial areas is a constrained nonlinear optimal problem which takes a great amount of static and dynamic constraints into account. Quantum-behaved pigeon-inspired optimization (QPIO) has been widely applied to such nonlinear problems. However, conventional QPIO suffers from low global convergence speed and local optima. In order to solve the above problems, an improved QPIO algorithm, adaptive operator QPIO, is proposed in this paper. Firstly, a new initialization process based on the logistic mapping method is introduced to generate the initial population of the pigeon swarm. After that, to improve the performance of the map and compass operation, the factor parameter is adaptively updated in each iteration, which balances the ability between global and local search. In the final landmark operation, a gradually decreasing pigeon population-updating strategy is introduced to prevent premature convergence and local optima. Finally, the demonstration of the proposed algorithm on the UAV path planning problem is presented, and the comparison result indicates that the performance of our algorithm is better than that of particle swarm optimization (PSO), pigeon-inspired optimization (PIO), and its variants, in terms of convergence and accuracy. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
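A small sketch of the logistic-map initialisation idea mentioned above: a chaotic logistic-map sequence is used to spread initial candidates over the search space instead of plain uniform sampling. The parameter names, bounds, and seeding scheme are illustrative; this is not the full adaptive-operator QPIO algorithm.

```python
import numpy as np

# Logistic-map chaotic initialisation of a population within box bounds.
def logistic_map_init(pop_size, dim, lower, upper, r=4.0, seed=0.7):
    z = seed + 0.01 * np.arange(dim)       # distinct starting points per dimension, in (0, 1)
    population = np.empty((pop_size, dim))
    for i in range(pop_size):
        z = r * z * (1.0 - z)              # chaotic logistic update stays in (0, 1)
        population[i] = lower + z * (upper - lower)
    return population

pop = logistic_map_init(pop_size=30, dim=3, lower=-10.0, upper=10.0)
print(pop.shape, pop.min(), pop.max())     # (30, 3), all values within [-10, 10]
```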
  • 34
    Publication Date: 2018
    Description: Freight transportation is becoming an increasingly critical activity for enterprises in a global world. Moreover, the distribution activities have a non-negligible impact on the environment, as well as on the citizens’ welfare. The classical vehicle routing problem (VRP) aims at designing routes that minimize the cost of serving customers using a given set of capacitated vehicles. Some VRP variants consider traveling times, either in the objective function (e.g., including the goal of minimizing total traveling time or designing balanced routes) or as constraints (e.g., the setting of time windows or a maximum time per route). Typically, the traveling time between two customers or between one customer and the depot is assumed to be both known in advance and static. However, in real life, there are plenty of factors (predictable or not) that may affect these traveling times, e.g., traffic jams, accidents, road works, or even the weather. In this work, we analyze the VRP with dynamic traveling times. Our work assumes not only that these inputs are dynamic in nature, but also that they are a function of the structure of the emerging routing plan. In other words, these traveling times need to be dynamically re-evaluated as the solution is being constructed. In order to solve this dynamic optimization problem, a learnheuristic-based approach is proposed. Our approach integrates statistical learning techniques within a metaheuristic framework. A number of computational experiments are carried out in order to illustrate our approach and discuss its effectiveness.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 35
    Publication Date: 2019
    Description: Link prediction is the task of predicting whether there is a link between two nodes in a network. Traditional link prediction methods that assume handcrafted features (such as common neighbors) as the link’s formation mechanism are not universal. Other popular methods tend to learn the link’s representation, but they cannot represent the link fully. In this paper, we propose the Edge-Nodes Representation Neural Machine (ENRNM), a novel method which can learn abundant topological features from the network as the link’s representation to promote the formation of the link. The ENRNM learns the link’s formation mechanism by combining the representation of the edge and the representations of the nodes on the two sides of the edge as the link’s full representation. To predict the link’s existence, we train a fully connected neural network which can learn meaningful and abundant patterns. We prove that the features of the edge and the two nodes have the same importance in the link’s formation. Comprehensive experiments are conducted on eight networks, and the experimental results demonstrate that ENRNM not only exceeds plenty of state-of-the-art link prediction methods but also performs very well on diverse networks with different structures and characteristics.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 36
    Publication Date: 2018
    Description: In this study, fifth-order and sixth-order diagonally implicit Runge–Kutta type (DIRKT) techniques for solving fourth-order ordinary differential equations (ODEs) are derived, denoted as DIRKT5 and DIRKT6, respectively. The first method has three and the other has four identical nonzero diagonal elements. A set of test problems is applied to validate the methods, and the numerical results show that the proposed methods are more efficient in terms of accuracy and number of function evaluations than the existing implicit Runge–Kutta (RK) methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 37
    Publication Date: 2018
    Description: This paper presents a comparison among the bee colony optimization (BCO), differential evolution (DE), and harmony search (HS) algorithms. In addition, for each algorithm, a type-1 fuzzy logic system (T1FLS) for the dynamic modification of the main parameters is presented. The dynamic adjustment of the main parameters for each algorithm through the implementation of fuzzy systems aims at enhancing the performance of the corresponding algorithms. Each algorithm (modified and original versions) is analyzed and compared based on the optimal design of fuzzy systems for benchmark control problems, especially in fuzzy controller design. Simulation results provide evidence that the FDE algorithm outperforms the FBCO and FHS algorithms in the optimization of fuzzy controllers. It is statistically demonstrated that better errors are found with the implementation of the fuzzy systems to enhance each proposed algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 38
    Publication Date: 2018
    Description: The combination of adaptive filters is an effective approach to improve filtering performance. In this paper, we investigate the performance of an adaptive combined scheme between two adaptive multiple-input single-output (MISO) filters, which can be easily extended to the case of multiple outputs. In order to generalize the analysis, we consider the multichannel affine projection algorithm (APA) to update the coefficients of the MISO filters, which increases the possibility of exploiting the capabilities of the filtering scheme. Using energy conservation relations, we derive a theoretical behavior of the proposed adaptive combination scheme at steady state. Such analysis entails some further theoretical insights with respect to the single channel combination scheme. Simulation results prove both the validity of the theoretical steady-state analysis and the effectiveness of the proposed combined scheme.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 39
    Publication Date: 2018
    Description: Parallel reservoir simulation is an important approach to solving real-time reservoir management problems. Recently, there has been a new trend of using a graphics processing unit (GPU) to parallelize reservoir simulations. Current GPU-aided reservoir simulations focus on the compute unified device architecture (CUDA). Nevertheless, CUDA is not functionally portable across devices and incurs a high amount of code. Meanwhile, domain decomposition is not well used for GPU-based reservoir simulations. In order to address these problems, we propose a parallel method with OpenACC to accelerate serial code and reduce the time and effort of porting an application to a GPU. Furthermore, GPU-aided domain decomposition is developed to accelerate the efficiency of reservoir simulation. The experimental results indicate that (1) the proposed GPU-aided approach can outperform the CPU-based one by up to about two times, and with the help of OpenACC, the workload of porting the code was reduced significantly, by about 22 percent of the source code; and (2) the domain decomposition method can further improve the execution efficiency by up to 1.7×. The proposed parallel reservoir simulation method is an efficient tool to accelerate reservoir simulation.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 40
    Publication Date: 2018
    Description: The search for efficient methods and procedures to optimize experimental designs is a vital process in field trials that is often challenged by computational bottlenecks. Most existing methods ignore the presence of some form of correlations in the data to simplify the optimization process at the design stage. This study explores several algorithms for improving field experimental designs using a linear mixed models statistical framework adjusting for both spatial and genetic correlations based on A- and D-optimality criteria. Relative design efficiencies are estimated for an array of algorithms including pairwise swap, genetic neighborhood, and simulated annealing and evaluated with varying levels of heritabilities, spatial and genetic correlations. Initial randomized complete block designs were generated using a stochastic procedure and can also be imported directly from other design software. Results showed that at a spatial correlation of 0.6 and a heritability of 0.3, under the A-optimality criterion, both simulated annealing and simple pairwise algorithms achieved the highest design efficiencies of 7.4 % among genetically unrelated individuals, implying a reduction in average variance of the random treatment effects by 7.4 % when the algorithm was iterated 5000 times. In contrast, results under D-optimality criterion indicated that simulated annealing had the lowest design efficiency. The simple pairwise algorithm consistently maintained highest design efficiencies in all evaluated conditions. Design efficiencies for experiments with full-sib families decreased with increasing heritability. The number of successful swaps appeared to decrease with increasing heritability and were highest for both simulated annealing and simple pairwise algorithms, and lowest for genetic neighborhood algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 41
    Publication Date: 2018
    Description: The system identification problem becomes more challenging when the parameter space increases. Recently, several works have focused on the identification of bilinear forms, which are related to the impulse responses of a spatiotemporal model, in the context of a multiple-input/single-output system. In this framework, the problem was addressed in terms of the Wiener filter and different basic adaptive algorithms. This paper studies two types of algorithms tailored for the identification of such bilinear forms, i.e., the Kalman filter (along with its simplified version) and an optimized least-mean-square (LMS) algorithm. Also, a comparison between them is performed, which shows interesting similarities. In addition to the mathematical derivation of the algorithms, we also provide extensive experimental results, which support the theoretical findings and indicate the good performance of the proposed solutions.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 42
    Publication Date: 2018
    Description: The Hammer prototype is a query engine for corpora of Open Data that provides users with the concept of blind querying. Since data sets published on Open Data portals are heterogeneous, users wishing to find interesting data sets are blind: queries cannot be fully specified, as in the case of databases. Consequently, the query engine is responsible for rewriting and adapting the blind query to the actual data sets, by exploiting lexical and semantic similarity. The effectiveness of this approach was discussed in our previous works. In this paper, we report our experience in developing the query engine. In fact, in the very first version of the prototype, we realized that the implementation of the retrieval technique was too slow, even though corpora contained only a few thousand data sets. We decided to adopt the Map-Reduce paradigm in order to parallelize the query engine and improve performance. We passed through several versions of the query engine, either based on the Hadoop framework or on the Spark framework. Hadoop and Spark are two very popular frameworks for writing and executing parallel algorithms based on the Map-Reduce paradigm. In this paper, we present our study about the impact of adopting the Map-Reduce approach and its two most famous frameworks to parallelize the Hammer query engine; we discuss various implementations of the query engine, either obtained without significantly rewriting the algorithm or obtained by completely rewriting the algorithm by exploiting high-level abstractions provided by Spark. The experimental campaign we performed shows the benefits provided by each studied solution, with the perspective of moving toward Big Data in the future. The lessons we learned are collected and synthesized into behavioral guidelines for developers approaching the problem of parallelizing algorithms by means of Map-Reduce frameworks. (See the illustrative sketch after this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
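To make the Map-Reduce discussion concrete, a minimal PySpark sketch of the map/reduce pattern (here, a word count over toy data-set descriptions); this is a generic illustration of the paradigm, not the Hammer query engine's actual rewriting logic, and all names and data are illustrative.

```python
from pyspark import SparkContext

# Classic map/reduce pattern in Spark: map records to (key, 1) pairs, then reduce by key.
sc = SparkContext("local[*]", "mapreduce-sketch")

descriptions = sc.parallelize([
    "open data portal budget 2018",
    "budget and spending open data",
    "city traffic sensors data",
])

word_counts = (descriptions
               .flatMap(lambda line: line.split())     # map: one token per record
               .map(lambda word: (word, 1))            # map: (word, 1) pairs
               .reduceByKey(lambda a, b: a + b))       # reduce: sum counts per word

print(sorted(word_counts.collect()))
sc.stop()
```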
  • 43
    Publication Date: 2018
    Description: The flexible job shop scheduling problem (FJSSP) and the multi-row workshop layout problem (MRWLP) are two major focuses in sustainable manufacturing processes. There is a close interaction between them, since the FJSSP provides the material handling information that guides the optimization of the MRWLP, while the layout scheme affects the performance of the scheduling scheme through the transportation time of jobs. However, in traditional methods they are treated as separate tasks performed sequentially, which ignores this interaction. Therefore, developing effective methods to deal with the multi-objective energy-aware integration of the FJSSP and MRWLP (MEIFM) problem in a sustainable manufacturing system is becoming more and more important. Based on the interaction between the FJSSP and MRWLP, the MEIFM problem can be formulated as a multi-objective bi-level programming (MOBLP) model. The upper-level model for the FJSSP is employed to minimize the makespan and total energy consumption, while the lower-level model for the MRWLP is used to minimize the material handling quantity. Because the MEIFM problem is formulated as a mixed-integer non-linear programming model, it is difficult to solve using traditional methods. Thus, this paper proposes an improved multi-objective hierarchical genetic algorithm (IMHGA) to solve this model. Finally, the effectiveness of the method is verified through comparative experiments.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 44
    Publication Date: 2018
    Description: When financial market conditions change, traders adopt different strategies. The traders’ collective behaviour may cause significant changes in the statistical properties of price movements. When this happens, the market is said to have gone through “regime changes”. The purpose of this paper is to characterise what a “normal market regime” and an “abnormal market regime” are, under observations of Directional Changes (DC). Our study starts with historical data from 10 financial markets. For each market, we focus on a period of time in which significant events could have triggered regime changes. The observations of regime changes in these markets are then positioned in a designed two-dimensional indicator space based on DC. Our results suggest that the normal regimes from different markets share similar statistical characteristics. In other words, with our observations, it is possible to distinguish normal regimes from abnormal regimes. This is significant because, for the first time, we can tell whether a market is in a normal regime by observing the DC indicators in the market. This opens the door for future work to dynamically monitor the market for regime changes.
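    For reference, a directional change event is typically confirmed when the price reverses by a fixed threshold from the last extreme. The sketch below is an illustrative detector with an assumed threshold; it is not the paper's indicator space or data.

```python
# Illustrative sketch (not the paper's code): detecting Directional Change (DC)
# events on a price series with a fixed threshold theta, e.g. theta = 0.01 (1%).
def directional_changes(prices, theta=0.01):
    """Return a list of (index, 'up'|'down') DC confirmation points."""
    events = []
    mode = "up"            # current trend assumption
    extreme = prices[0]    # last extreme price seen in the current trend
    for i, p in enumerate(prices[1:], start=1):
        if mode == "up":
            if p > extreme:
                extreme = p                       # new local maximum
            elif p <= extreme * (1.0 - theta):    # price dropped by theta from the max
                events.append((i, "down"))
                mode, extreme = "down", p
        else:
            if p < extreme:
                extreme = p                       # new local minimum
            elif p >= extreme * (1.0 + theta):    # price rose by theta from the min
                events.append((i, "up"))
                mode, extreme = "up", p
    return events

print(directional_changes([100, 101, 103, 101.9, 100.5, 99, 100.2, 101.5], theta=0.01))
# e.g. [(3, 'down'), (6, 'up')]
```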
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 45
    Publication Date: 2018
    Description: The identification of risks associated with collision for vessels is an important element in maritime safety and management. Vessel collision avoidance systems have been studied in depth as a specialized area of navigation technology. The automatic identification system (AIS) has been used to support navigation, route estimation, collision prediction, and abnormal traffic detection. This article examined the main elements of ship collision, developed a mathematical model for risk assessment, and simulated a collision assessment based on AIS information, thereby providing meaningful recommendations for crew training and for a warning system used in conjunction with the AIS on board.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 46
    Publication Date: 2018
    Description: In order to improve the dynamic quality of traditional sliding mode control for an active suspension system, an optimal sliding mode control (OSMC) based on a genetic algorithm (GA) is proposed. First, the overall structure and control principle of the active suspension system are introduced. Second, the mathematical model of the quarter-car active suspension system is established. Third, a sliding mode control (SMC) controller is designed to manipulate the active force to control the active suspension system. Fourth, the GA is applied to optimize the weight coefficients of the SMC switching function and the parameters of the control law. Finally, the simulation model is built based on MATLAB/Simulink (version 2014a), and simulations are performed and analyzed with the proposed control strategy to evaluate its performance. The simulation results show that the OSMC controller tuned using a GA has better control performance than the traditional SMC controller.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 47
    Publication Date: 2018
    Description: In the context of this research work, we studied the problem of privacy preservation in spatiotemporal databases. In particular, we investigated the k-anonymity of mobile users based on real trajectory data. The k-anonymity set consists of the k nearest neighbors. We constructed a motion vector of the form (x,y,g,v), where x and y are the spatial coordinates, g is the direction angle, and v is the velocity of the mobile user, and studied the problem in four-dimensional space. We followed two approaches. The former applied only the k-Nearest Neighbor (k-NN) algorithm on the whole dataset, while the latter combined trajectory clustering, based on K-means, with k-NN; that is, it applied k-NN inside a cluster of mobile users with a similar motion pattern (g,v). We defined a metric, called vulnerability, that measures the rate at which the k-NNs vary. This metric varies from 1/k (high robustness) to 1 (low robustness) and represents the probability that the real identity of a mobile user is discovered by a potential attacker. The aim of this work was to prove that, with high probability, the above rate tends to a number very close to 1/k with the clustering method, which means that k-anonymity is highly preserved. Through experiments on real spatial datasets, we evaluated the anonymity robustness, the so-called vulnerability, of the proposed method.
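    The following is a hedged sketch of the cluster-then-k-NN idea described above, using scikit-learn; the synthetic motion vectors, the clustering on the (g,v) columns, and the function names are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch, assuming the motion vectors (x, y, g, v) are rows of X:
# cluster users by motion pattern (g, v), then build each user's k-anonymity set
# from its k nearest neighbours inside its own cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def anonymity_sets(X, k=5, n_clusters=3, seed=0):
    labels = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(X[:, 2:4])
    sets = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        nn = NearestNeighbors(n_neighbors=min(k, len(idx))).fit(X[idx])
        _, nbrs = nn.kneighbors(X[idx])
        for row, user in enumerate(idx):
            sets[user] = idx[nbrs[row]]          # k-anonymity set of this user
    return sets

rng = np.random.default_rng(0)
X = rng.random((60, 4))                          # 60 synthetic motion vectors
print(anonymity_sets(X, k=5)[0])                 # the anonymity set of user 0
```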
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 48
    Publication Date: 2019
    Description: The conveyor belt is an indispensable piece of conveying equipment for a mine; belt deviation, caused by sticky material on the rollers and uneven load distribution, is the most common failure during operation. In this paper, a real-time conveyor belt deviation detection algorithm based on a multi-scale feature fusion network is proposed, which mainly includes two parts: the feature extraction module and the deviation detection module. The feature extraction module uses a multi-scale feature fusion network structure to fuse low-level features, with rich position and detail information, and high-level features, with stronger semantic information, to improve network detection performance. Depthwise separable convolutions are used to achieve real-time detection. The deviation detection module identifies and monitors the deviation fault by calculating the offset of the conveyor belt. In particular, a new weighted loss function is designed to optimize the network and to improve the detection effect of the conveyor belt edge. In order to evaluate the effectiveness of the proposed method, the Canny algorithm, FCNs, UNet and Deeplab v3 networks are selected for comparison. The experimental results show that the proposed algorithm achieves 78.92% in terms of pixel accuracy (PA), and reaches 13.4 FPS (Frames per Second) with an error of less than 3.2 mm, which outperforms the other four algorithms.
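    Since depthwise separable convolutions carry the real-time claim, here is a minimal PyTorch sketch of such a block; the channel counts, normalization, and activation are illustrative assumptions rather than the network described in the paper.

```python
# Minimal depthwise separable convolution block: a depthwise 3x3 filter per channel
# followed by a 1x1 pointwise convolution that mixes channels.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # Depthwise: one filter per input channel (groups=in_ch).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 128, 128)                  # a 32-channel feature map
print(DepthwiseSeparableConv(32, 64)(x).shape)    # torch.Size([1, 64, 128, 128])
```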
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 49
    Publication Date: 2019
    Description: To address the one-to-one certification problem of the unsteady-state iris captured at different shooting times, a general multi-algorithm parallel integration model structure is proposed in this paper. The iris in the lightweight constrained state, affected by defocusing, deflection, and illumination, is taken as the research object; existing algorithms are combined effectively into the model structure, and a one-to-one certification algorithm for the lightweight constrained unsteady iris is designed based on multi-algorithm integration and a maximum trusted decision. In this algorithm, a sufficient number of internal iris feature points are extracted from the unsteady texture as effective iris information through an image processing layer composed of various filtering algorithms, thereby eliminating defocus interference. In the feature recognition layer, deflection interference is excluded by improved Gabor plus Hamming and Haar plus BP methods applied to the stable features extracted by the image processing layer, and two certification results are obtained by parallel recognition. The number of correct certifications for each method under a given lighting condition is counted; the method with the highest count is set as the maximum trusted method under that lighting condition, and its results are taken as the final decision, thereby eliminating the effect of illumination. Experiments using the JLU and CASIA iris libraries under the prerequisites of this paper show that the correct recognition rate of the algorithm can reach a high level of 98% or more, indicating that the algorithm can effectively improve the accuracy of the one-to-one certification of the lightweight constrained unsteady iris. Compared with the latest architectures, such as CNNs and deep learning, the proposed algorithm is more suitable for the prerequisites presented in this paper: it has good environmental inclusiveness and improves the effectiveness of existing traditional algorithms through the design of a parallel integration model structure.
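    The “maximum trusted decision” step can be read as a simple counting rule: on calibration data, count correct certifications per method under each lighting condition and trust the best-counting method for that condition. The sketch below is a hypothetical illustration with placeholder method and condition names, not the paper's code.

```python
# Hedged sketch of the "maximum trusted decision" idea: pick, per lighting condition,
# the recognition method with the most correct certifications on calibration data.
from collections import defaultdict

def fit_trusted_methods(calibration):
    """calibration: iterable of (lighting_condition, method_name, was_correct)."""
    counts = defaultdict(int)
    for condition, method, correct in calibration:
        if correct:
            counts[(condition, method)] += 1
    trusted = {}
    for (condition, method), c in counts.items():
        if c > counts.get((condition, trusted.get(condition)), -1):
            trusted[condition] = method       # keep the method with the highest count
    return trusted

calibration = [
    ("dim", "gabor_hamming", True), ("dim", "haar_bp", False),
    ("dim", "gabor_hamming", True), ("bright", "haar_bp", True),
]
print(fit_trusted_methods(calibration))   # {'dim': 'gabor_hamming', 'bright': 'haar_bp'}
```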
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 50
    Publication Date: 2019
    Description: The development of robotic applications for agricultural environments has several problems which are not present in the robotic systems used for indoor environments. Some of these problems can be solved with an efficient navigation system. In this paper, a new system is introduced to improve the navigation tasks for those robots which operate in agricultural environments. Concretely, the paper focuses on the problem related to the autonomous mapping of agricultural parcels (i.e., an orange grove). The map created by the system will be used to help the robots navigate into the parcel to perform maintenance tasks such as weed removal, harvest, or pest inspection. The proposed system connects to a satellite positioning service to obtain the real coordinates where the robotic system is placed. With these coordinates, the parcel information is downloaded from an online map service in order to autonomously obtain a map of the parcel in a readable format for the robot. Finally, path planning is performed by means of Fast Marching techniques using the robot or a team of two robots. This paper introduces the proof-of-concept and describes all the necessary steps and algorithms to obtain the path planning just from the initial coordinates of the robot.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 51
    Publication Date: 2019
    Description: The satellite repeat shift time reflects the status of satellite operation and is also one of the key factors in sidereal-filtering multipath correction. Although some methods have been developed to calculate the repeat shift time, few efforts have been made to analyze and compare the performance of this feature for GPS (Global Positioning System), BDS (BeiDou System), and Galileo in depth. Hence, three methods for calculating the repeat shift time are presented and used to compare and analyze the three global systems in depth: the broadcast ephemeris method (BEM), the correlation coefficient method (CCM), and the aspect repeat time method (ARTM). The experiment results show that the repeat shift time of each satellite is different. Also, the difference between the maximum and minimum varies across systems. The maximum difference is about 25 s for the BDS IGSO (Inclined Geosynchronous Orbit) and the minimum is merely 10 s for the GPS system. Furthermore, for the same satellite, the shift time calculated by the three methods is almost identical, and the maximum difference is only about 7 s between the CCM and the ARTM method for the BDS MEO (Medium Earth Orbit) satellite. Although the repeat shift time is different daily for the same satellite and the same method, the changes are very small. Moreover, in terms of the STD (Standard Deviation) of the BS (between satellites) and MS (mean shift for the same satellite), the GPS system is the best, the performance of the BDS system is medium, and Galileo performs slightly worse than GPS and BDS.
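    The correlation coefficient method (CCM) can be illustrated as a lag search that maximizes the correlation between residual series from consecutive days; the sketch below uses a synthetic signal and an assumed sampling interval, so it only shows the idea, not the authors' processing chain.

```python
# Illustrative CCM-style lag search: find the lag (in samples) at which the day-2
# series correlates best with the day-1 series; the lag would then be added to the
# nominal repeat period. Series and sampling interval are placeholders.
import numpy as np

def best_repeat_shift(day1, day2, max_lag=60):
    """Return (lag, correlation) maximising corr(day1[t], day2[t + lag])."""
    best_lag, best_corr = 0, -np.inf
    n = min(len(day1), len(day2))
    for lag in range(-max_lag, max_lag + 1):
        a = day1[max(0, -lag): n - max(0, lag)]
        b = day2[max(0, lag): n - max(0, -lag)]
        corr = np.corrcoef(a, b)[0, 1]
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 20, 2000)) + 0.1 * rng.standard_normal(2000)
day1, day2 = signal[:1800], signal[10:1810]       # day 2 is shifted by 10 samples
print(best_repeat_shift(day1, day2, max_lag=30))  # expected: lag -10, correlation close to 1
```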
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 52
    Publication Date: 2019
    Description: This paper proposes an adaptive backstepping control algorithm for electric braking systems with electromechanical actuators (EMAs). First, the ideal mathematical model of the EMA is established, and the nonlinear factors, such as the deformation of the reduction gear, are analyzed. Subsequently, the actual mathematical model of the EMA is rebuilt by combining the ideal model and the nonlinear factors. To realize high-performance braking pressure control, the backstepping control method is adopted to address the mismatched uncertainties in the electric braking system, and a radial basis function (RBF) neural network is established to estimate the nonlinear functions in the control system. The experimental results indicate that the proposed braking pressure control strategy can improve the servo performance of the electric braking system. In addition, the hardware-in-the-loop (HIL) experimental results show that the proposed EMA controller can satisfy the requirements of aircraft antilock braking systems.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 53
    Publication Date: 2019
    Description: Objects that possess mass (e.g., automobiles, manufactured items, etc.) translationally accelerate in direct proportion to the force applied, scaled by the object’s mass, in accordance with Newton’s law, while the rotational companion is Euler’s moment equations, relating the angular acceleration of objects that possess mass moments of inertia. Michel Chasles’s theorem allows us to simply invoke Newton’s and Euler’s equations to fully describe the six degrees of freedom of mechanical motion. Many options are available to control the motion of objects by controlling the applied force and moment. A long, distinguished list of references has matured the field of controlling mechanical motion, culminating in the burgeoning field of deterministic artificial intelligence as a natural progression of the laudable goal of adaptive and/or model predictive controllers that can be proven to be optimal subsequent to their development. Deterministic A.I. uses Chasles’s claim to assert Newton’s and Euler’s relations as deterministic self-awareness statements that are optimal with respect to state errors. Predictive controllers (both continuous and sampled-data), derived from the outset to be optimal by first solving an optimization problem with the governing dynamic equations of motion, lead to several controllers (including a controller that twice invokes optimization to formulate robust, predictive control). These controllers are compared to each other with noise and modeling errors, and several figures of merit are used: tracking error and rate error deviations and means, in addition to total mean cost. Robustness is evaluated using Monte Carlo analysis, where plant parameters are randomly assumed to be incorrectly modeled. Six instances of controllers are compared against these methods and interpretations, allowing engineers to select a tailored control for their given circumstances. Novel versions of the ubiquitous classical proportional-derivative (“PD”) controller are developed from the optimization statement at the outset by using a novel re-parameterization of the optimal results from time-to-state parameterization. Furthermore, time-optimal controllers, continuous predictive controllers, and sampled-data predictive controllers, as well as combined feedforward plus feedback controllers and two-degree-of-freedom (2DOF) controllers, are developed. The term “feedforward” is used in this study in the context of deterministic artificial intelligence, where analytic self-awareness statements are strictly determined by the governing physics (of mechanics in this case, e.g., Chasles, Newton, and Euler). When feedforward is combined with feedback per the previously mentioned method (with provenance foremost in optimization), the combination is referred to as “2DOF”, or two degrees of freedom, to indicate the twice invocation of optimization at the genesis of the feedforward and the feedback, respectively. The feedforward plus feedback case is augmented by an online (real-time) comparison to the optimal case. This manuscript compares these many optional control strategies against each other. Nominal plants are used, but the addition of plant noise reveals the robustness of each controller, even without optimally rejecting assumed-Gaussian noise (e.g., via the Kalman filter). In other words, noise terms are intentionally left unaddressed in the problem formulation to evaluate the robustness of the proposed method when real-world noise is added. Lastly, mismodeled plants controlled by each strategy reveal relative performance. Well-anticipated results include the lowest cost, which is achieved by the optimal controller (with very poor robustness), while low mean errors and deviations are achieved by the classical controllers (at the highest cost). Both continuous predictive control and sampled-data predictive control perform well in terms of cost as well as errors and deviations, while the 2DOF controller performance was the best overall.
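    To make the feedforward-plus-feedback idea concrete, here is a generic single-axis sketch in which the feedforward term comes from the modelled inertia and the desired acceleration (J * theta_ddot = u), and a PD term corrects the tracking error; the gains, inertia values, and trajectory are assumptions for illustration, not the paper's optimally re-parameterized controllers.

```python
# Generic "feedforward plus feedback" for a single-axis rotational plant
# J * theta_ddot = u. Feedforward uses the modelled inertia and desired acceleration;
# PD feedback corrects errors caused by the (intentional) inertia mismatch.
import numpy as np

J_true, J_model = 1.2, 1.0          # true vs. (mis)modelled inertia
Kp, Kd, dt = 20.0, 8.0, 0.001
theta, omega = 0.0, 0.0             # plant state

def desired(t):                     # smooth rest-to-rest reference over 1 s
    qd = 0.5 * (1 - np.cos(np.pi * t))
    qd_dot = 0.5 * np.pi * np.sin(np.pi * t)
    qd_ddot = 0.5 * np.pi ** 2 * np.cos(np.pi * t)
    return qd, qd_dot, qd_ddot

for k in range(int(1.0 / dt)):
    t = k * dt
    qd, qd_dot, qd_ddot = desired(t)
    u_ff = J_model * qd_ddot                             # feedforward (self-awareness) term
    u_fb = Kp * (qd - theta) + Kd * (qd_dot - omega)     # PD feedback
    u = u_ff + u_fb
    omega += (u / J_true) * dt                           # integrate the true plant
    theta += omega * dt

print(f"final tracking error: {desired(1.0)[0] - theta:.4f} rad")
```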
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 54
    Publication Date: 2019
    Description: This paper presents a space mission planning tool, which was developed for LEO (Low Earth Orbit) observation satellites. The tool is focused on a two-phase planning strategy with clustering preprocessing and mission planning, where an improved clustering algorithm is applied, and a hybrid algorithm that combines the genetic algorithm with the simulated annealing algorithm (GA–SA) is given and discussed. Experimental simulation studies demonstrate that the GA–SA algorithm with the improved clique partition algorithm based on the graph theory model exhibits a higher fitness value and better optimization performance and reliability than the GA or SA algorithm alone.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 55
    Publication Date: 2019
    Description: Network representation learning is a key research field in network data mining. In this paper, we propose a novel multi-view network representation algorithm (MVNR), which embeds multi-scale relations of network vertices into a low-dimensional representation space. In contrast to existing approaches, MVNR explicitly encodes higher-order information using k-step networks. In addition, we introduce the matrix forest index as a kind of network feature, which can be applied to balance the representation weights of different network views. We also investigate the relationship between MVNR and several well-known methods, including DeepWalk, node2vec, and GraRep. We conduct our experiments on several real-world citation datasets and demonstrate that MVNR outperforms recent approaches based on neural matrix factorization. Specifically, we demonstrate the efficiency of MVNR on network classification, visualization, and link prediction tasks.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 56
    Publication Date: 2019
    Description: In the real world, multi-objective optimization problems (MOPs) and dynamic optimization problems can be seen everywhere. During the last decade, among the various swarm intelligence algorithms for multi-objective optimization, glowworm swarm optimization (GSO) and the bacterial foraging algorithm (BFO) have attracted increasing attention from scholars. Although many scholars have proposed improvement strategies for GSO and BFO to keep a good balance between convergence and diversity, there are still many problems to be solved carefully. In this paper, a new coupling algorithm based on GSO and BFO (MGSOBFO) is proposed for solving dynamic multi-objective optimization problems (dMOPs). MGSOBFO achieves a good balance between exploration and exploitation by dividing the search into two parts: Part I is in charge of exploitation by GSO, and Part II is in charge of exploration by BFO. At the same time, simulated binary crossover (SBX) and polynomial mutation are introduced into MGSOBFO to enhance the convergence and diversity ability of the algorithm. In order to show the excellent performance of the algorithm, we experimentally compare MGSOBFO with three algorithms on benchmark functions. The results suggest that the coupling algorithm performs well and outperforms other algorithms that deal with dMOPs.
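    For reference, the textbook forms of the two variation operators named above are sketched here; the distribution indices, mutation probability, and test vectors are assumptions, and this is not the authors' exact implementation.

```python
# Simulated binary crossover (SBX) and polynomial mutation in their standard forms.
import numpy as np

def sbx(p1, p2, eta=15, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, low, high, eta=20, pm=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    y = x.copy()
    for i in range(len(y)):
        if rng.random() < pm:
            u = rng.random()
            delta = (2 * u) ** (1 / (eta + 1)) - 1 if u < 0.5 else 1 - (2 * (1 - u)) ** (1 / (eta + 1))
            y[i] = np.clip(y[i] + delta * (high - low), low, high)
    return y

p1, p2 = np.array([0.2, 0.8]), np.array([0.6, 0.4])
c1, c2 = sbx(p1, p2)
print(c1, polynomial_mutation(c1, 0.0, 1.0))
```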
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 57
    Publication Date: 2019
    Description: Clustering is an important task in data mining that has become more challenging due to the ever-increasing size of available datasets. To cope with these big data scenarios, a high-performance clustering approach is required. Sparse grid clustering is a density-based clustering method that uses a sparse grid density estimation as its central building block. The underlying density estimation approach enables the detection of clusters with non-convex shapes and without a predetermined number of clusters. In this work, we introduce a new distributed and performance-portable variant of the sparse grid clustering algorithm that is suited for big data settings. Our compute kernels were implemented in OpenCL to enable portability across a wide range of architectures. For distributed environments, we added a manager–worker scheme that was implemented using MPI. In experiments on two supercomputers, Piz Daint and Hazel Hen, with up to 100 million data points in a ten-dimensional dataset, we show the performance and scalability of our approach. The dataset with 100 million data points was clustered in 1198 s using 128 nodes of Piz Daint. This translates to an overall performance of 352 TFLOPS. On the node level, we provide results for two GPUs, Nvidia’s Tesla P100 and the AMD FirePro W8100, and one processor-based platform that uses Intel Xeon E5-2680v3 processors. In these experiments, we achieved between 43% and 66% of the peak performance across all compute kernels and devices, demonstrating the performance portability of our approach.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 58
    Publication Date: 2019
    Description: In order to solve the poor control accuracy of the traditional synchronous control algorithm for a double-cylinder forging hydraulic press, a synchronous control algorithm for the double-cylinder forging hydraulic press based on a fuzzy neural network is proposed. According to the flow equations of the valve and the hydraulic cylinder, the balance equation and the force balance equation of the forging hydraulic cylinder are established using electro-hydraulic servo system theory, and the cylinder-controlled transfer function of the forging hydraulic cylinder is derived. By properly simplifying the transfer function, the mathematical model of the synchronous control of the double-cylinder forging hydraulic press is established. Following the implementation process of traditional fuzzy neural networks, the properties of the compensation operation are introduced. The traditional fuzzy neural network is optimized, and the optimized neural network is used to realize the synchronous control of the double-cylinder forging hydraulic press. The experimental results show that the amplitude curve of the algorithm is very close to the expected amplitude curve, the error amplitude is only 0.3 mm, and the average control time is about 140 s, which fully shows that the algorithm has high accuracy and a good control effect.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 59
    Publication Date: 2019
    Description: This paper proposes a low-carbon location routing problem (LCLRP) model with simultaneous delivery and pickup, time windows, and heterogeneous fleets to reduce logistics costs and carbon emissions and improve customer satisfaction. The correctness of the model is verified on a small example using CPLEX (optimization software for mathematical programming). To solve this problem, a hyper-heuristic algorithm is designed based on a secondary exponential smoothing strategy and an adaptive acceptance mechanism. The algorithm achieves fast convergence and is highly robust. A case study analyzes the impact of depot distribution and cost, heterogeneous fleets (HF), and customer distribution and time windows on logistics costs, carbon emissions, and customer satisfaction. The experimental results show that the proposed model can reduce logistics costs by 1.72%, carbon emissions by 11.23%, and vehicle travel distance by 9.69%, indicating that the proposed model provides useful guidance for reducing logistics costs.
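    One common reading of a “secondary exponential smoothing strategy” is Brown's double exponential smoothing applied to each low-level heuristic's recent rewards; the sketch below illustrates that reading with made-up heuristic names and rewards, and is not the authors' selection rule.

```python
# Brown's double (secondary) exponential smoothing used to score low-level heuristics
# in a hyper-heuristic: the heuristic with the best smoothed trend is applied next.
class DoubleExpSmoother:
    def __init__(self, alpha=0.3):
        self.alpha, self.s1, self.s2 = alpha, None, None

    def update(self, reward):
        if self.s1 is None:
            self.s1 = self.s2 = reward
        else:
            self.s1 = self.alpha * reward + (1 - self.alpha) * self.s1   # first smoothing
            self.s2 = self.alpha * self.s1 + (1 - self.alpha) * self.s2  # second smoothing
        return 2 * self.s1 - self.s2          # Brown's level estimate

scores = {"swap": DoubleExpSmoother(), "two_opt": DoubleExpSmoother(), "relocate": DoubleExpSmoother()}
observed = {"swap": [1.0, 0.2, 0.1], "two_opt": [0.5, 0.9, 1.4], "relocate": [0.0, 0.0, 0.3]}
for name, rewards in observed.items():
    for r in rewards:
        last = scores[name].update(r)
    print(name, round(last, 3))
# The hyper-heuristic would next apply the heuristic with the highest smoothed score.
```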
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 60
    Publication Date: 2019
    Description: Compressed sensing theory has been widely used for solving underdetermined equations in various fields and has made remarkable achievements. The regularized smooth L0 (ReSL0) reconstruction algorithm adds an error regularization term to the smooth L0 (SL0) algorithm, achieving good signal reconstruction in the presence of noise. However, the ReSL0 reconstruction algorithm still has some flaws. It still uses the original optimization method of SL0 and the Gauss approximation function, but this method suffers from a sawtooth effect in the later optimization stage, and its convergence is not ideal. Therefore, we make two adjustments to the ReSL0 reconstruction algorithm: first, we introduce the CIPF function, which has a better approximation effect than the Gauss function; second, we combine the steepest descent method and the Newton method in the optimization step. Then, a novel regularized recovery algorithm named combined regularized smooth L0 (CReSL0) is proposed. Under the same experimental conditions, the CReSL0 algorithm is compared with other popular reconstruction algorithms. Overall, the CReSL0 algorithm achieves excellent reconstruction performance in terms of the peak signal-to-noise ratio (PSNR) and run-time for both one-dimensional Gauss signal and two-dimensional image reconstruction tasks.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 61
    Publication Date: 2019
    Description: As of today, bioinformatics is one of the most exciting fields of scientific research. There is a wide-ranging list of challenging problems to face, e.g., pairwise and multiple alignments, motif detection/discrimination/classification, phylogenetic tree reconstruction, protein secondary and tertiary structure prediction, protein function prediction, DNA microarray analysis, and gene regulation/regulatory networks, to mention just a few, and an army of researchers, coming from several scientific backgrounds, focuses its efforts on developing models to properly address these problems. In this paper, we aim to briefly review some of the many machine learning methods, developed in the last two decades, that are suited to the analysis of gene microarray data, which has a strong impact on molecular biology. In particular, we focus on the wide-ranging list of data clustering and visualization techniques that are able to find homogeneous data groupings and that also provide the possibility of discovering their connections in terms of structure, function and evolution.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 62
    Publication Date: 2019
    Description: A multi-track string is a tuple of strings of the same length. Given a multi-track pattern and a multi-track text, the permuted pattern matching problem is to find the occurrence positions of all permutations of the pattern in the text. In this paper, we propose several algorithms for permuted pattern matching. Our first algorithm, which is based on the Knuth–Morris–Pratt (KMP) algorithm, has a fast theoretical running time, with O(mk) preprocessing time and O(nk log σ) matching time, where n, m, k, and σ denote the length of the text, the length of the pattern, the number of strings in the multi-track, and the alphabet size, respectively, and occ denotes the number of occurrences of the pattern. We then improve the KMP-based algorithm by using an automaton, which gives a better experimental running time. The next proposed algorithms are based on the Boyer–Moore algorithm and the Horspool algorithm; these are the fastest algorithms in our experiments. Furthermore, we propose an extension of the AC-automaton algorithm that can solve dictionary matching on multi-tracks, which is the task of finding multiple multi-track patterns in a multi-track text. Finally, we propose filtering algorithms that can perform permuted pattern matching quickly in practice.
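    As a baseline to fix ideas (not one of the paper's KMP, Boyer–Moore, Horspool, or AC-automaton variants), permuted pattern matching can be checked naively by comparing multisets of track substrings at every position:

```python
# Naive permuted pattern matching on multi-tracks: a pattern occurs at position i if
# some permutation of its tracks equals the text tracks restricted to [i, i + m).
# Comparing the sorted tuples of track substrings is enough for this check.
def permuted_occurrences(text_tracks, pattern_tracks):
    n, m = len(text_tracks[0]), len(pattern_tracks[0])
    key = sorted(pattern_tracks)                          # multiset of pattern tracks
    hits = []
    for i in range(n - m + 1):
        window = sorted(t[i:i + m] for t in text_tracks)  # multiset of text track substrings
        if window == key:
            hits.append(i)
    return hits

text = ["ababb",
        "babba"]
pattern = ["ba",       # the multiset {"ab", "ba"} occurs at positions 0 and 1
           "ab"]
print(permuted_occurrences(text, pattern))   # [0, 1]
```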
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 63
    Publication Date: 2019
    Description: The Ensemble Empirical Mode Decomposition (EEMD) algorithm has been used in bearing fault diagnosis. In order to overcome the blindness in the selection of the white noise amplitude coefficient e in EEMD, an improved artificial bee colony algorithm (IABC) is proposed to obtain it adaptively, providing a new approach to the selection of EEMD parameters. In the improved algorithm, chaos initialization is introduced into the artificial bee colony (ABC) algorithm to ensure the diversity of the population and the ergodicity of the population search process. In addition, the collecting bees are divided into two groups in the improved algorithm: one group collects the optimal information of its region according to the original algorithm, while the other performs Lévy flights around the current global best solution to improve the global search capability. Four standard test functions are used to show the superiority of the proposed method. The application of the IABC and EEMD algorithm in bearing fault diagnosis proves its effectiveness.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 64
    Publication Date: 2019
    Description: Advancing the background-subtraction method in dynamic scenes is an ongoing timely goal for many researchers. Recently, background subtraction methods have been developed with deep convolutional features, which have improved their performance. However, most of these deep methods are supervised, only available for a certain scene, and have high computational cost. In contrast, the traditional background subtraction methods have low computational costs and can be applied to general scenes. Therefore, in this paper, we propose an unsupervised and concise method based on the features learned from a deep convolutional neural network to refine the traditional background subtraction methods. For the proposed method, the low-level features of an input image are extracted from the lower layer of a pretrained convolutional neural network, and the main features are retained to further establish the dynamic background model. The evaluation of the experiments on dynamic scenes demonstrates that the proposed method significantly improves the performance of traditional background subtraction methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 65
    Publication Date: 2019
    Description: The authors wish to make the following corrections to their paper[...]
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 66
    Publication Date: 2019
    Description: We present two modifications of Duval’s algorithm for computing the Lyndon factorization of a string. One of the algorithms has been designed for strings containing runs of the smallest character. It works best for small alphabets and it is able to skip a significant number of characters of the string. Moreover, it can be engineered to have linear time complexity in the worst case. Given a run-length encoded string R of length ρ, the other algorithm computes the Lyndon factorization of R in O(ρ) time and constant space. Experimental results show that the new variants are faster than Duval’s original algorithm in many scenarios.
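    For context, the classical Duval algorithm that both variants modify runs in linear time and constant extra space; a direct transcription is sketched below.

```python
# Duval's classical algorithm for the Lyndon factorization: factors are emitted in
# non-increasing order and each factor is a Lyndon word.
def duval(s):
    """Return the Lyndon factorization of s as a list of factors."""
    n, i, factors = len(s), 0, []
    while i < n:
        j, k = i + 1, i
        while j < n and s[k] <= s[j]:
            k = i if s[k] < s[j] else k + 1
            j += 1
        while i <= k:                     # emit factors of length j - k
            factors.append(s[i:i + j - k])
            i += j - k
    return factors

print(duval("banana"))   # ['b', 'an', 'an', 'a']
print(duval("abcab"))    # ['abc', 'ab']
```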
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 67
    Publication Date: 2019
    Description: We study two problems in computational phylogenetics. The first is tree compatibility. The input is a collection P of phylogenetic trees over different partially-overlapping sets of species. The goal is to find a single phylogenetic tree that displays all the evolutionary relationships implied by P. The second problem is incomplete directed perfect phylogeny (IDPP). The input is a data matrix describing a collection of species by a set of characters, where some of the information is missing. The question is whether there exists a way to fill in the missing information so that the resulting matrix can be explained by a phylogenetic tree satisfying certain conditions. We explain the connection between tree compatibility and IDPP and show that a recent tree compatibility algorithm is effectively a generalization of an earlier IDPP algorithm. Both algorithms rely heavily on maintaining the connected components of a graph under a sequence of edge and vertex deletions, for which they use the dynamic connectivity data structure of Holm et al., known as HDT. We present a computational study of algorithms for tree compatibility and IDPP. We show experimentally that substituting HDT with a much simpler data structure (essentially, a single-level version of HDT) improves the performance of both of these algorithms in practice. We give partial empirical and theoretical justifications for this observation.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 68
    Publication Date: 2019
    Description: This article describes an optimization methodology based on a model of competitiveness between different metaheuristic methods. The main contribution is a strategy to dynamically find the algorithm that obtains the best result, based on the competitiveness of the methods in solving a specific problem using different performance metrics depending on the problem. The algorithms used in the preliminary tests are: the firefly algorithm (FA), which is inspired by blinking fireflies; wind-driven optimization (WDO), which is inspired by the movement of the wind in the atmosphere, and in which the positions and velocities of the wind packages are updated; and finally, drone squadron optimization (DSO), whose inspiration is new and interesting: it is based on artifacts, where drones have a command center that sends information to individual drones and updates their software to optimize the objective function. The proposed model helps discover the best method to solve a specific problem, and also reduces the time spent searching for methods before finding the one that obtains the most satisfactory results. The main idea is that, with this competitiveness approach, methods are tested at the same time until the best one to solve the problem in question is found. As preliminary tests of the model, we used the optimization of benchmark mathematical functions and of the membership functions of a fuzzy controller for an autonomous mobile robot.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    Publication Date: 2019
    Description: In this paper, we introduce a new class of strong negations generated via conic sections. This paper focuses on the fact that simple mathematical and computational processes generate new strong fuzzy negations through purely geometrical concepts such as the ellipse and the hyperbola. Well-known negations like the classical negation, the Sugeno negation, etc., are produced via the suggested conic sections. Strong negations are a structural element in the production of fuzzy implications; thus, we have a machine for producing fuzzy implications, which can be useful in many areas, such as artificial intelligence, neural networks, etc. The appeal of this construction lies in the discrepancy between the low degree of difficulty of the effort and the significance of its results. Innovative results may therefore be derived for use in the literature in this specific field of mathematics. Moreover, these negations are generated in an effortless, concise, and self-evident manner.
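    As a concrete instance, the Sugeno family mentioned above is the standard strong negation N_lambda(x) = (1 - x)/(1 + lambda*x) with lambda > -1, whose graph is a hyperbola segment; the short check below verifies the involution property N(N(x)) = x numerically (the value of lambda is arbitrary).

```python
# Sugeno strong negation and a numerical check of the involution property N(N(x)) = x.
def sugeno_negation(x, lam=2.0):
    return (1.0 - x) / (1.0 + lam * x)

for x in [0.0, 0.25, 0.5, 0.75, 1.0]:
    nx = sugeno_negation(x)
    print(x, round(nx, 4), round(sugeno_negation(nx), 4))   # last column recovers x
```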
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    Publication Date: 2019
    Description: Electrical impedance tomography (EIT) has been a hot topic among researchers for the last 30 years. It is a new imaging method that has evolved over the last few decades. A small amount of current is injected, the electrical properties of the tissues determine the resulting voltages, and measurements of these voltages are taken; a reconstruction algorithm then transforms the voltages into a tomographic image. EIT poses no identified health threats and, compared with magnetic resonance imaging (MRI) and computed tomography (CT) scans, it is also cheaper. In this paper, a comprehensive review of the efforts and advancements undertaken and achieved in recent work to improve this technology is presented, together with the role of artificial intelligence in solving this non-linear, ill-posed problem. In addition, a review of clinical EIT applications is also presented.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 71
    Publication Date: 2019
    Description: The grounding grid of a substation is important for the safety of substation equipment. In particular, to address the difficulty of parameter design in the auxiliary anode system of a grounding grid, an optimization algorithm for the auxiliary anode system based on improved simulated annealing is proposed. The mathematical model of the auxiliary anode system is inferred from the mathematical model of cathodic protection. On that basis, the parameters of the finite element model are optimized with the improved simulated annealing algorithm, thereby constructing an auxiliary anode system of the grounding grid with optimized parameters. The algorithm is then validated through experiments. The precision of the optimized parameters is improved by about 1.55% with respect to the Variable Metric Method and the Genetic Algorithm, so it can provide a basis for parameter design in the auxiliary anode system of a grounding grid.
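    For orientation, a plain simulated annealing loop with geometric cooling looks like the sketch below; the objective is a stand-in sphere function and the schedule parameters are assumptions, not the improved variant or the finite-element model from the paper.

```python
# Generic simulated annealing skeleton: random neighbour moves, Boltzmann acceptance
# of worse solutions, and a geometric cooling schedule.
import math
import random

def simulated_annealing(cost, x0, step=0.5, t0=1.0, t_min=1e-4, alpha=0.95, iters_per_t=50):
    x, fx, t = list(x0), cost(x0), t0
    best_x, best_f = list(x0), cost(x0)
    while t > t_min:
        for _ in range(iters_per_t):
            cand = [xi + random.uniform(-step, step) for xi in x]
            fc = cost(cand)
            # Accept improvements always, worse moves with Boltzmann probability.
            if fc < fx or random.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = list(x), fx
        t *= alpha
    return best_x, best_f

sphere = lambda v: sum(vi * vi for vi in v)
print(simulated_annealing(sphere, [3.0, -2.0]))
```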
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    Publication Date: 2019
    Description: In this paper, we propose a novel spatial image error concealment (EC) method based on a deep neural network. Considering that natural images have local correlation and non-local self-similarity, we use the local information to predict the missing pixels and the non-local information to correct the predictions. The deep neural network we utilize can be divided into two parts: the prediction part and the auto-encoder (AE) part. The first part utilizes the local correlation among pixels to predict the missing ones. The second part extracts image features, which are used to collect similar samples from the whole image. In addition, a novel adaptive scan order based on the joint credibility of the support area and the reconstruction is also proposed to alleviate the error propagation problem. The experimental results show that the proposed method can reconstruct corrupted images effectively and outperforms the compared state-of-the-art methods in terms of objective and perceptual metrics.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 73
    Publication Date: 2019
    Description: During the last few decades, machine learning has constituted a significant tool in extracting useful knowledge from economic data for assisting decision-making. In this work, we evaluate the performance of weight-constrained recurrent neural networks in forecasting economic classification problems. These networks are efficiently trained with a recently-proposed training algorithm, which has two major advantages. Firstly, it exploits the numerical efficiency and very low memory requirements of the limited memory BFGS matrices; secondly, it utilizes a gradient-projection strategy for handling the bounds on the weights. The reported numerical experiments present the classification accuracy of the proposed model, providing empirical evidence that the application of the bounds on the weights of the recurrent neural network provides more stable and reliable learning.
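    SciPy's L-BFGS-B solver combines the same two ingredients highlighted above, limited-memory BFGS matrices and gradient projection onto box constraints; the toy bound-constrained least-squares fit below only illustrates the mechanism and is not the authors' recurrent-network trainer.

```python
# Bound-constrained optimization with limited-memory BFGS plus gradient projection,
# via SciPy's L-BFGS-B, on a small least-squares problem.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])

def loss(w):
    r = A @ w - b
    return 0.5 * r @ r

def grad(w):
    return A.T @ (A @ w - b)

bounds = [(-0.1, 0.1), (-0.1, 0.1)]        # "weight constraints" as a box
res = minimize(loss, x0=np.zeros(2), jac=grad, method="L-BFGS-B", bounds=bounds)
print(res.x, res.fun)                      # solution clipped to the box
```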
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 74
    Publication Date: 2019
    Description: The horizontal bending angle of a scraper conveyor has a great influence on the running resistance, the current consumption, the coal-winning efficiency of the working face, etc. The horizontal bending angle usually lies in the range of approximately 1–3°, but this range does not indicate the optimum bending angle for the coal mining face. To find the optimal horizontal bending angle, an optimization method is proposed. A mathematical model of the running resistance of the scraper conveyor is established based on the direction of the shearer operation. Then, based on the basic bat algorithm (BA), a method is proposed that adjusts the search step size with an inertia weight and expands the flight distance range according to the t-distribution. Finally, an industrial application was conducted at the 21220 Changcun fully mechanized coal mining face, Henan Province. The results show that the current consumption of the scraper conveyor was reduced by 31% when the horizontal bending angle of the S-bending area was 0.66°. Meanwhile, the theoretical current agrees well with the experimental data, with an average absolute error of 3%.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    Publication Date: 2019
    Description: During the last decades, intensive efforts have been devoted to the extraction of useful knowledge from large volumes of medical data employing advanced machine learning and data mining techniques. Advances in digital chest radiography have enabled research and medical centers to accumulate large repositories of classified (labeled) images and mostly of unclassified (unlabeled) images from human experts. Machine learning methods such as semi-supervised learning algorithms have been proposed as a new direction to address the problem of shortage of available labeled data, by exploiting the explicit classification information of labeled data with the information hidden in the unlabeled data. In the present work, we propose a new ensemble semi-supervised learning algorithm for the classification of lung abnormalities from chest X-rays based on a new weighted voting scheme. The proposed algorithm assigns a vector of weights on each component classifier of the ensemble based on its accuracy on each class. Our numerical experiments illustrate the efficiency of the proposed ensemble methodology against other state-of-the-art classification methods.
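    The weighted voting scheme described above can be sketched as follows: each base classifier receives one weight per class from its per-class accuracy on a validation split and votes with the weight of the class it predicts. The base learners and synthetic data are scikit-learn placeholders, not the chest X-ray ensemble.

```python
# Per-class weighted voting ensemble: weights come from each classifier's per-class
# recall on a validation split; the ensemble picks the class with the largest total weight.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_classes=3, n_informative=5, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
classes = np.unique(y)

models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(random_state=0), GaussianNB()]
weights = []
for m in models:
    m.fit(X_tr, y_tr)
    pred = m.predict(X_val)
    w = np.array([np.mean(pred[y_val == c] == c) for c in classes])  # per-class recall
    weights.append(w)

def ensemble_predict(x_batch):
    votes = np.zeros((len(x_batch), len(classes)))
    for m, w in zip(models, weights):
        for i, p in enumerate(m.predict(x_batch)):
            votes[i, p] += w[p]            # vote with the class-specific weight
    return classes[np.argmax(votes, axis=1)]

print(ensemble_predict(X_val[:5]), y_val[:5])
```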
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 76
    Publication Date: 2019
    Description: This paper deals with the modeling and optimization of a bi-level multi-objective production planning problem, where some of the coefficients of the objective functions and some parameters of the constraints are multi-choice. A general transformation technique based on binary variables has been used to transform the multi-choice parameters of the problem into their equivalent deterministic form. Finally, two different types of scalarization technique have been used to achieve the maximum degree of the individual membership goals by minimizing their deviational variables and to obtain the most satisfactory solution of the formulated problem. An illustrative real case study of production planning has been discussed and compared to validate the efficiency and usefulness of the proposed work.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 77
    Publication Date: 2019
    Description: The automatic train operation system is a significant component of intelligent railway transportation. As a fundamental problem, the construction of the train dynamic model has been extensively researched using parametric approaches. Parametric models may perform poorly due to unrealistic assumptions and changing environments. In this paper, a long short-term memory network is carefully developed to build the train dynamic model in a nonparametric way. By optimizing the hyperparameters of the proposed model, more accurate outputs can be obtained with the same inputs as the parametric approaches. The proposed model was compared with two parametric methods using actual data. Experimental results suggest that the model performance is better than that of traditional models due to its strong learning ability. By exploring a detailed feature engineering process, the proposed long short-term memory network based algorithm was extended to predict train speed multiple steps ahead.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 78
    Publication Date: 2019
    Description: With the rapid expansion of applied 3D computational vision, shape descriptors have become increasingly important for a wide variety of applications and objects from molecules to planets. Appropriate shape descriptors are critical for accurate (and efficient) shape retrieval and 3D model classification. Several spectral-based shape descriptors have been introduced by solving various physical equations over a 3D surface model. In this paper, for the first time, we incorporate a specific manifold learning technique, introduced in statistics and machine learning, to develop a global, spectral-based shape descriptor in the computer graphics domain. The proposed descriptor utilizes the Laplacian Eigenmap technique in which the Laplacian eigenvalue problem is discretized using an exponential weighting scheme. As a result, our descriptor eliminates the limitations tied to the existing spectral descriptors, namely dependency on triangular mesh representation and high intra-class quality of 3D models. We also present a straightforward normalization method to obtain a scale-invariant and noise-resistant descriptor. The extensive experiments performed in this study using two standard 3D shape benchmarks—high-resolution TOSCA and McGill datasets—demonstrate that the present contribution provides a highly discriminative and robust shape descriptor under the presence of a high level of noise, random scale variations, and low sampling rate, in addition to the known isometric-invariance property of the Laplace–Beltrami operator. The proposed method significantly outperforms state-of-the-art spectral descriptors in shape retrieval and classification. The proposed descriptor is limited to closed manifolds due to its inherited inability to accurately handle manifolds with boundaries.
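    A minimal reading of the descriptor's core step is a Laplacian Eigenmap with exponential (heat-kernel) edge weights; the sketch below runs it on a random point cloud, so the neighborhood size, kernel width, and the use of points instead of a 3D surface model are all illustrative assumptions.

```python
# Laplacian Eigenmap with heat-kernel weights: build a kNN graph, form the graph
# Laplacian L = D - W, and solve the generalised eigenproblem L v = lambda D v.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.linalg import eigh

def laplacian_eigenmap(X, n_components=2, n_neighbors=8, t=1.0):
    D2 = cdist(X, X, "sqeuclidean")
    W = np.exp(-D2 / t)                                  # exponential (heat-kernel) weights
    mask = np.zeros_like(W, dtype=bool)                  # keep each point's strongest neighbours
    idx = np.argsort(-W, axis=1)[:, 1:n_neighbors + 1]   # column 0 is the point itself
    for i, row in enumerate(idx):
        mask[i, row] = True
    W = np.where(mask | mask.T, W, 0.0)                  # symmetrise the graph
    D = np.diag(W.sum(axis=1))
    L = D - W
    vals, vecs = eigh(L, D)                              # generalised eigenvalue problem
    return vecs[:, 1:n_components + 1]                   # skip the constant eigenvector

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
print(laplacian_eigenmap(X).shape)      # (200, 2)
```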
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 79
    Publication Date: 2019
    Description: Computed tomography (CT) image reconstruction and restoration are very important in medical image processing, and together they constitute an inverse problem. Iterative image reconstruction is a key tool to increase the applicability of CT imaging and reduce the radiation dose. Nevertheless, traditional iterative reconstruction methods are limited by the sampling theorem, and the blurring of projection data propagates unhampered artifacts into the reconstructed image. To overcome these problems, image restoration techniques should be developed to accurately correct a wide variety of image-degrading effects in order to effectively improve image reconstruction. In this paper, a blind image restoration technique is embedded in compressive sensing CT image reconstruction, which can result in a high-quality reconstructed image using fewer projection data. Because a small amount of data can be obtained by radiation in a shorter time, high-quality image reconstruction with less data is equivalent to reducing the radiation dose. Technically, both the blurring process and the sparse representation of the sharp CT image are first modeled as a series of parameters. The sharp CT image is obtained from the estimated sparse representation. Then, the model parameters are estimated by a hierarchical Bayesian maximum a posteriori formulation. Finally, the estimated model parameters are optimized to obtain the final image reconstruction. We demonstrate the effectiveness of the proposed method with simulation experiments in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM).
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 80
    Publication Date: 2019
    Description: This work addresses the physical layer channel code design for an uncoordinated, frame- and slot-asynchronous random access protocol. Starting from the observation that collisions between two users yield very specific interference patterns, we define a surrogate channel model and propose different protograph low-density parity-check code designs. The proposed codes are both tested in a setup where the physical layer is abstracted, as well as on a more realistic channel model, where finite-length physical layer simulations of the entire asynchronous random access scheme, including decoding, are carried out. We find that the abstracted physical layer model overestimates the performance when short blocks are considered. Additionally, the optimized codes show gains in supported channel traffic, a measure of the number of terminals that can be concurrently accommodated on the channel, of around 17% at a packet loss rate of 10^−2 w.r.t. off-the-shelf codes.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 81
    Publication Date: 2019
    Description: Aiming at the problems of low efficiency, heavy weight, and high cost of traditional components, it is necessary to study a design and analysis method for non-standard components. Taking a non-standard turret loading and unloading device as the carrier, the key parts of the non-standard components are extracted for structural design, and a multi-objective mathematical model and a modal theory model are established. The optimization analysis of the key parts is carried out by a genetic algorithm. Finally, the optimization results are compared and simulated with ANSYS Workbench. The results show that, in this case, comparing the design optimized by the genetic algorithm with the other designs, the overall weight difference is 4.1%. The first six modal frequencies in the optimized results are in the range of 68 Hz to 130 Hz, which provides a basis for similar research in the future.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    Publication Date: 2019
    Description: Most architectural design problems are basically real-parameter optimization problems, so any type of evolutionary or swarm algorithm can be used in this field. However, little attention has been paid to using optimization methods within computer-aided design (CAD) programs. In this paper, we present Optimus, which is a new optimization tool for Grasshopper algorithmic modeling in the Rhinoceros CAD software. Optimus implements the self-adaptive differential evolution algorithm with an ensemble of mutation strategies (jEDE). We conducted an experiment using standard test problems from the literature and some of the test problems proposed in IEEE CEC 2005. We report the minimum, maximum, average, standard deviation, and number of function evaluations over five replications for each function. Experimental results on the benchmark suite showed that Optimus (jEDE) outperforms other optimization tools, namely Galapagos (genetic algorithm), SilverEye (particle swarm optimization), and Opossum (RbfOpt), by finding better results for 19 out of 20 problems. For only one function, Galapagos presented a slightly better result than Optimus. Ultimately, we presented an architectural design problem and compared the tools for testing Optimus in the design domain. We report the minimum, maximum, average, and number of function evaluations of one replication for each tool. Galapagos and SilverEye presented infeasible results, whereas Optimus and Opossum found feasible solutions; however, Optimus discovered a much better fitness result than Opossum. As a conclusion, we discuss the advantages and limitations of Optimus in comparison to other tools. The target audience of this paper is frequent users of parametric design modelling, e.g., architects, engineers, and designers. The main contribution of this paper is summarized as follows. Optimus showed that near-optimal solutions of architectural design problems can be improved by testing different types of algorithms with respect to the no-free-lunch theorem. Moreover, Optimus facilitates implementing different types of algorithms due to its modular system.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
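    Note: as a rough illustration of the kind of self-adaptive differential evolution with an ensemble of mutation strategies mentioned in this entry (jEDE), the following minimal Python sketch combines jDE-style F/CR self-adaptation with a small strategy ensemble. It is a generic sketch, not the actual Optimus/jEDE implementation, and the objective, bounds, and parameter values are invented for the example.

```python
import random

def jede_sketch(f, bounds, pop_size=30, generations=200, seed=1):
    """Rough sketch of self-adaptive DE with an ensemble of mutation strategies.
    f: objective to minimize; bounds: list of (low, high) per dimension."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    F = [0.5] * pop_size    # per-individual control parameters (jDE-style)
    CR = [0.9] * pop_size

    for _ in range(generations):
        best = pop[min(range(pop_size), key=lambda i: fit[i])]
        for i in range(pop_size):
            # jDE rule: occasionally resample F and CR for this individual.
            Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            r1, r2, r3 = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Ensemble of mutation strategies: pick one at random.
            strategy = rng.choice(["rand/1", "best/1", "current-to-best/1"])
            if strategy == "rand/1":
                mutant = [pop[r1][d] + Fi * (pop[r2][d] - pop[r3][d]) for d in range(dim)]
            elif strategy == "best/1":
                mutant = [best[d] + Fi * (pop[r1][d] - pop[r2][d]) for d in range(dim)]
            else:  # current-to-best/1
                mutant = [pop[i][d] + Fi * (best[d] - pop[i][d]) + Fi * (pop[r1][d] - pop[r2][d])
                          for d in range(dim)]
            # Binomial crossover with bound handling, then greedy selection.
            jrand = rng.randrange(dim)
            trial = [min(max(mutant[d], bounds[d][0]), bounds[d][1])
                     if (rng.random() < CRi or d == jrand) else pop[i][d]
                     for d in range(dim)]
            ft = f(trial)
            if ft <= fit[i]:
                pop[i], fit[i], F[i], CR[i] = trial, ft, Fi, CRi
    i_best = min(range(pop_size), key=lambda i: fit[i])
    return pop[i_best], fit[i_best]

# Example: minimize the sphere function in 5 dimensions.
x, fx = jede_sketch(lambda v: sum(t * t for t in v), [(-5.0, 5.0)] * 5)
print(x, fx)
```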
  • 83
    Publication Date: 2019
    Description: For many parallel and distributed systems, automatic data redistribution improves data locality and increases system performance for various computational problems and applications. In general, an array can be distributed to multiple processing systems by using regular or irregular distributions. Regular distributions adopt BLOCK, CYCLIC, or BLOCK-CYCLIC patterns to specify the data array decomposition and distribution, whereas irregular distributions specify a different-size data array distribution according to user-defined commands or procedures. In this work, we propose three bipartite graph problems, the “maximum edge coloring problem”, the “maximum degree edge coloring problem”, and the “cost-sharing maximum edge coloring problem”, to formulate these kinds of distribution problems. We then propose an approximation algorithm with a ratio bound of two for the maximum edge coloring problem when the input graph is biplanar. Moreover, we prove that the “cost-sharing maximum edge coloring problem” is NP-complete even when the input graph is biplanar.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Publication Date: 2019
    Description: Augmented reality is increasingly used to enhance user experience in different tasks. This paper describes a model combining augmented reality and artificial intelligence algorithms in a 3D model of an area of the city of Coimbra, based on information extracted from OpenStreetMap. The augmented reality effect is achieved using a video projection over a 3D-printed map. Users can interact with the model using a smartphone or similar device and simulate itineraries, which are optimized using a genetic algorithm and A*. Among other applications, the model can be used by tourists or travelers to simulate trips realistically, as well as for virtual reconstructions of historical places or remote areas. (A generic A* sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
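    Note: the itinerary optimization in this entry relies on A*. The following minimal Python sketch shows a generic A* with a straight-line-distance heuristic on a toy graph; the node names, coordinates, and edge costs are invented and are not taken from the Coimbra model.

```python
import heapq
import math

def a_star(graph, coords, start, goal):
    """A* shortest path. graph maps node -> [(neighbor, edge_cost)];
    coords maps node -> (x, y). The heuristic is straight-line distance."""
    def h(n):
        (x1, y1), (x2, y2) = coords[n], coords[goal]
        return math.hypot(x1 - x2, y1 - y2)

    open_heap = [(h(start), start)]
    g = {start: 0.0}          # best known cost from start to each node
    parent = {start: None}
    closed = set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:      # reconstruct the path by following parents
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return list(reversed(path)), g[goal]
        if node in closed:
            continue
        closed.add(node)
        for nbr, cost in graph.get(node, []):
            tentative = g[node] + cost
            if tentative < g.get(nbr, float("inf")):
                g[nbr] = tentative
                parent[nbr] = node
                heapq.heappush(open_heap, (tentative + h(nbr), nbr))
    return None, float("inf")

# Toy example with invented intersections.
coords = {"A": (0, 0), "B": (1, 0), "C": (1, 1), "D": (2, 1)}
graph = {"A": [("B", 1.0)], "B": [("C", 1.0), ("D", 1.6)], "C": [("D", 1.0)]}
print(a_star(graph, coords, "A", "D"))
```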
  • 85
    Publication Date: 2019
    Description: One of the most important functions of an export credit agency (ECA) is to act as an intermediary between national governments and exporters. These organizations provide financing to reduce the political and commercial risks in international trade. The agents assess buyers based on financial and non-financial indicators to determine whether it is advisable to grant them credit. Because many of these indicators are qualitative and inherently linguistically ambiguous, the agents must make decisions in uncertain environments; therefore, to make the most accurate decision possible, they often utilize fuzzy inference systems. The purpose of this research was to design a credit rating model for such an uncertain environment using a fuzzy inference system (FIS). We selected suitable agency-rating variables from previous studies, screened them via the Delphi method, and finally built a credit rating model from these variables and an FIS with the associated IF-THEN rules, which can be applied in a practical setting. (A minimal fuzzy IF-THEN inference sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
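    Note: the following minimal Python sketch illustrates how IF-THEN rules of a fuzzy inference system can be fired and aggregated. The indicators, membership functions, and rules are invented placeholders, and the aggregation is a zero-order Sugeno simplification rather than the Delphi-screened variables and rules of the paper.

```python
def falling(x, a, b):
    """Left-shoulder membership: 1 below a, 0 above b, linear in between."""
    if x <= a:
        return 1.0
    if x >= b:
        return 0.0
    return (b - x) / (b - a)

def rising(x, a, b):
    """Right-shoulder membership: 0 below a, 1 above b, linear in between."""
    return 1.0 - falling(x, a, b)

# Hypothetical linguistic terms on a 0-10 scale for two invented indicators.
financial = {"weak": lambda x: falling(x, 2, 6), "strong": lambda x: rising(x, 4, 8)}
country_risk = {"low": lambda x: falling(x, 2, 6), "high": lambda x: rising(x, 4, 8)}

# Hypothetical IF-THEN rules mapping antecedents to crisp credit scores.
rules = [
    (("strong", "low"), 9.0),   # IF financial strong AND country risk low THEN score is high
    (("strong", "high"), 6.0),
    (("weak", "low"), 5.0),
    (("weak", "high"), 2.0),
]

def credit_score(financial_value, risk_value):
    """Fire all rules (AND = min) and return the firing-strength-weighted average output."""
    num = den = 0.0
    for (fin_term, risk_term), output in rules:
        strength = min(financial[fin_term](financial_value),
                       country_risk[risk_term](risk_value))
        num += strength * output
        den += strength
    return num / den if den > 0 else None

print(credit_score(8.0, 3.0))   # strong financials, low country risk -> high score
```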
  • 86
    Publication Date: 2019
    Description: Image clustering is the process of mapping an archive image into a cluster such that images in the same cluster share the same kind of information. It is an important field of machine learning and computer vision. While traditional clustering methods, such as k-means or agglomerative clustering, have been widely used for clustering tasks, they struggle with image data because of the lack of a predefined distance metric and the high dimensionality. Recently, deep unsupervised feature learning methods, such as the autoencoder (AE), have been employed for image clustering with great success. However, each model has its own specialty and advantages for image clustering. Hence, we combine three AE-based models, namely the convolutional autoencoder (CAE), the adversarial autoencoder (AAE), and the stacked autoencoder (SAE), into a hybrid autoencoder (BAE) model for image clustering. The MNIST and CIFAR-10 datasets are used to test the proposed models and to compare the results with other methods. The clustering criteria indicate that the proposed models outperform the others in the numerical experiments. (A plain autoencoder-then-cluster sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
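    Note: as a rough illustration of the autoencoder-then-cluster pipeline described in this entry, the following Python sketch trains a plain fully connected autoencoder (PyTorch) and clusters the latent codes with k-means (scikit-learn). It is not the paper's CAE/AAE/SAE hybrid (BAE), and the network sizes and training settings are arbitrary.

```python
import torch
from torch import nn
from sklearn.cluster import KMeans

class AutoEncoder(nn.Module):
    def __init__(self, in_dim=784, latent_dim=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

def cluster_images(x, n_clusters=10, epochs=20, lr=1e-3):
    """x: float tensor of shape (n_samples, in_dim), e.g. flattened images in [0, 1]."""
    model = AutoEncoder(in_dim=x.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):                      # full-batch reconstruction training
        opt.zero_grad()
        recon, _ = model(x)
        loss = loss_fn(recon, x)
        loss.backward()
        opt.step()
    with torch.no_grad():                        # cluster the learned latent codes
        _, z = model(x)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(z.numpy())

# Example with random data standing in for flattened 28x28 images.
labels = cluster_images(torch.rand(256, 784), n_clusters=10)
print(labels[:20])
```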
  • 87
    Publication Date: 2019
    Description: The field of network science is a highly interdisciplinary area; for the empirical analysis of network data, it draws algorithmic methodologies from several research fields. Hence, research procedures and descriptions of the technical results often differ, sometimes widely. In this paper we focus on methodologies for the experimental part of algorithm engineering for network analysis, an important ingredient for a research area with an empirical focus. More precisely, we unify and adapt existing recommendations from different fields and propose universal guidelines, including statistical analyses, for the systematic evaluation of network analysis algorithms. This way, the behavior of newly proposed algorithms can be properly assessed and comparisons to existing solutions become meaningful. Moreover, as the main technical contribution, we provide a highly automated tool to perform and analyze experiments following our guidelines. To illustrate the merits of the tool and our guidelines, we apply them in a case study: we design, perform, visualize and evaluate experiments of a recent algorithm for approximating betweenness centrality, an important problem in network analysis. In summary, both our guidelines and the tool shall modernize and complement previous efforts in experimental algorithmics; they are not only useful for network analysis, but also in related contexts.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 88
    Publication Date: 2019
    Description: A novel approach to defacement detection is proposed in this paper, explicitly addressing the possible presence of a passive adversary. Defacement detection is an important security measure for Web sites and applications, aimed at avoiding unwanted modifications that would result in significant reputational damage. As in many other anomaly detection contexts, the algorithm used to identify possible defacements is obtained via an adversarial machine learning process. We consider an exploratory setting, where the adversary can observe the detector’s alarm-generating behaviour with the purpose of devising and injecting defacements that will pass undetected. It is then necessary to make the learning process unpredictable, so that the adversary will be unable to replicate it and predict the classifier’s behaviour. We achieve this goal by introducing a secret key that the adversary does not know. The key influences the learning process in a number of precisely defined ways, including the subset of examples and features that are actually used, the time of learning and testing, and the learning algorithm’s hyper-parameters. This learning methodology is successfully applied in this context by using the system with both real and artificially modified Web sites. A year-long experiment is also described, concerning the monitoring of the new Web site of a major manufacturing company. (A sketch of key-driven randomization follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
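    Note: the following Python sketch illustrates one way a secret key could drive the randomization of an anomaly detector’s training (which examples and features are used, and the hyper-parameters). The detector (an IsolationForest), the sample/feature fractions, and the hyper-parameter ranges are assumptions for illustration, not the system of the paper.

```python
import hashlib
import numpy as np
from sklearn.ensemble import IsolationForest

def key_to_seed(secret_key: str) -> int:
    """Derive a reproducible integer seed from the secret key."""
    return int.from_bytes(hashlib.sha256(secret_key.encode()).digest()[:4], "big")

def train_keyed_detector(features: np.ndarray, secret_key: str):
    """features: (n_pages, n_features) matrix extracted from legitimate page snapshots."""
    rng = np.random.default_rng(key_to_seed(secret_key))
    n, d = features.shape
    # Key-dependent subset of training examples and of features.
    example_idx = rng.choice(n, size=max(1, int(0.8 * n)), replace=False)
    feature_idx = rng.choice(d, size=max(1, int(0.7 * d)), replace=False)
    # Key-dependent hyper-parameters (ranges are illustrative assumptions).
    model = IsolationForest(n_estimators=int(rng.integers(50, 200)),
                            contamination=float(rng.uniform(0.01, 0.05)),
                            random_state=int(rng.integers(0, 2**31 - 1)))
    model.fit(features[np.ix_(example_idx, feature_idx)])
    return model, feature_idx

def is_defaced(model, feature_idx, page_features: np.ndarray) -> bool:
    """IsolationForest.predict() returns -1 for anomalies, +1 for inliers."""
    return model.predict(page_features[feature_idx].reshape(1, -1))[0] == -1

# Example with synthetic feature vectors.
X = np.random.rand(200, 40)
model, cols = train_keyed_detector(X, secret_key="do-not-share")
print(is_defaced(model, cols, np.random.rand(40)))
```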
  • 89
    Publication Date: 2019
    Description: For a graph G = (V, E), the γ-graph of G, denoted G(γ) = (V(γ), E(γ)), is the graph whose vertex set is the collection of minimum dominating sets, or γ-sets, of G, and two γ-sets are adjacent in G(γ) if they differ by a single vertex and the two differing vertices are adjacent in G. In this paper, we consider γ-graphs of trees. We develop an algorithm for determining the γ-graph of a tree, characterize which trees are γ-graphs of trees, and further comment on the structure of γ-graphs of trees and their connections with Cartesian product graphs, the set of graphs which can be obtained from the Cartesian product of graphs of order at least two. (A brute-force sketch of the γ-graph definition follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
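    Note: the γ-graph definition in this entry can be checked directly on small graphs by brute force, as in the following Python sketch: enumerate the minimum dominating sets and join two of them when they differ in exactly one vertex and the differing vertices are adjacent. This exponential-time check is only for tiny examples and is not the paper’s tree algorithm.

```python
from itertools import combinations

def gamma_graph(adj):
    """adj: dict mapping vertex -> set of neighbours (undirected graph).
    Returns (gamma_sets, edges) describing the gamma-graph G(gamma)."""
    vertices = list(adj)

    def dominates(s):
        return all(v in s or adj[v] & s for v in vertices)

    # Find the domination number by trying growing subset sizes.
    gamma_sets = []
    for k in range(1, len(vertices) + 1):
        gamma_sets = [frozenset(c) for c in combinations(vertices, k) if dominates(set(c))]
        if gamma_sets:
            break
    # Two gamma-sets are adjacent iff they differ by one vertex and those vertices are adjacent.
    edges = []
    for a, b in combinations(gamma_sets, 2):
        diff_a, diff_b = a - b, b - a
        if len(diff_a) == 1 and len(diff_b) == 1:
            u, v = next(iter(diff_a)), next(iter(diff_b))
            if v in adj[u]:
                edges.append((a, b))
    return gamma_sets, edges

# Path on 4 vertices: 1-2-3-4 (its gamma-graph is a 4-cycle).
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
sets, edges = gamma_graph(adj)
print([sorted(s) for s in sets])
print([(sorted(a), sorted(b)) for a, b in edges])
```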
  • 90
    Publication Date: 2019
    Description: Home Healthcare (HHC) is an emerging and fast-expanding service sector that gives rise to challenging vehicle routing and scheduling problems. Each day, HHC structures must schedule the visits of caregivers to patients requiring specific medical and paramedical services at home. These operations can become inefficient if the visits are not planned correctly, hence leading to high logistics costs and/or a deteriorated service level. In this article, this issue is modeled as a vehicle routing problem in which a set of routes has to be built to visit patients asking for one or more specific services within a given time window and during a fixed service time. Each patient has a preference value associated with each available caregiver. The problem addressed in this paper considers two objectives to optimize simultaneously: minimizing the caregivers’ travel costs and maximizing the patients’ preferences. Different methods based on the bi-objective non-dominated sorting algorithm are proposed to solve the vehicle routing problem with time windows, preferences, and timing constraints. Numerical results are presented for instances with up to 73 clients. Metrics such as the distance measure, hyper-volume, and the number of non-dominated solutions in the Pareto front are used to assess the quality of the proposed approaches. (A non-dominated sorting sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
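    Note: the proposed methods build on bi-objective non-dominated sorting. The following Python sketch shows the core non-dominated sorting step on invented (travel cost, negated preference) pairs, independent of the routing details of the paper.

```python
def non_dominated_sort(objectives):
    """objectives: list of tuples to be minimized, e.g. (travel_cost, -total_preference).
    Returns a list of fronts, each a list of indices (front 0 = Pareto front)."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # solutions that i dominates
    domination_count = [0] * n              # number of solutions dominating i

    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                domination_count[i] += 1
        if domination_count[i] == 0:
            fronts[0].append(i)

    k = 0
    while fronts[k]:
        next_front = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    next_front.append(j)
        fronts.append(next_front)
        k += 1
    return fronts[:-1]   # drop the trailing empty front

# Example: (total travel cost, -sum of patient preferences) for five candidate schedules.
solutions = [(120, -35), (100, -30), (150, -40), (100, -35), (130, -28)]
print(non_dominated_sort(solutions))
```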
  • 91
    Publication Date: 2019
    Description: The satellite system is one of the most efficient means of broadcasting due to its wide service coverage and its ability to provide high-data-rate services using high frequency bands. However, satellite systems face a number of problems, such as a long round trip delay (RTD) and the heterogeneity of the channel conditions of the earth stations. Even though adaptive coding and modulation (ACM) is almost mandatory for satellite systems using high frequency bands because of serious rain fading, the long RTD makes it difficult to respond quickly to channel quality information, reducing the efficiency of ACM. The high heterogeneity of earth stations caused by the wide service coverage also makes it difficult to apply a uniform transmission mode, so satellite systems require receiver-dependent transmission modes. A rateless code can be an effective means to compensate for these disadvantages of satellite systems compared to terrestrial wireless systems. This paper presents soft iterative decoding algorithms for the efficient application of rateless codes in satellite systems and demonstrates that rateless codes can be effectively used for hybrid automatic repeat request schemes.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 92
    Publication Date: 2019
    Description: Intel recently introduced the Heterogeneous Architecture Research Platform (HARP). In this platform, the Central Processing Unit and a Field-Programmable Gate Array are connected through a high-bandwidth, low-latency interconnect and both share DRAM memory. For this platform, Open Computing Language (OpenCL), a High-Level Synthesis (HLS) language, is made available. By making use of HLS, a faster design cycle can be achieved compared to programming in a traditional hardware description language; this, however, comes at the cost of having less control over the hardware implementation. We investigate how OpenCL can be applied to implement a real-time guided image filter on the HARP platform. In the first phase, the performance-critical parameters of the OpenCL programming model are determined using several specialized benchmarks. In a second phase, the guided image filter algorithm is implemented using the insights gained in the first phase. Both a floating-point and a fixed-point implementation were developed for this algorithm, based on a sliding-window implementation. This resulted in a maximum floating-point performance of 135 GFLOPS, a maximum fixed-point performance of 430 GOPS, and a throughput of HD color images at 74 frames per second. (A guided-filter reference sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
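    Note: the following NumPy/SciPy sketch is a reference implementation of the grayscale guided image filter using sliding-window box means. It mirrors the arithmetic that the OpenCL kernels perform but is not the HARP/OpenCL code, and the radius, epsilon, and test image are arbitrary.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, radius=4, eps=1e-3):
    """Grayscale guided filter: I is the guide image, p the input to be filtered,
    both float arrays in [0, 1] of the same shape. Box means are sliding-window averages."""
    size = 2 * radius + 1
    mean = lambda x: uniform_filter(x, size=size, mode="reflect")

    mean_I, mean_p = mean(I), mean(p)
    corr_I, corr_Ip = mean(I * I), mean(I * p)
    var_I = corr_I - mean_I * mean_I          # per-window variance of the guide
    cov_Ip = corr_Ip - mean_I * mean_p        # per-window covariance of guide and input

    a = cov_Ip / (var_I + eps)                # linear model q = a * I + b in each window
    b = mean_p - a * mean_I
    return mean(a) * I + mean(b)              # average the models over overlapping windows

# Self-guided smoothing of a noisy synthetic gradient image.
rng = np.random.default_rng(0)
img = np.clip(np.tile(np.linspace(0, 1, 128), (128, 1))
              + 0.1 * rng.standard_normal((128, 128)), 0, 1)
out = guided_filter(img, img, radius=8, eps=0.01)
print(out.shape, float(out.min()), float(out.max()))
```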
  • 93
    Publication Date: 2019
    Description: The differential evolution (DE) algorithm is a popular and efficient evolutionary algorithm that can be used for single-objective real-parameter optimization. Its performance is greatly affected by its parameters. Generally, parameter control strategies involve determining the most suitable parameter value for the current state; there is only little research on parameter combination and parameter distribution, which are also useful for improving algorithm performance. This paper proposes the idea of using parameter region division and parameter strategy combination to flexibly adjust the parameter distribution. Based on this idea, a group-based two-level parameter combination framework is designed to support various modes of parameter combination and to enrich the parameter distribution characteristics. Under this framework, two customized parameter combination strategies are given for a single-operation DE algorithm and a multi-operation DE algorithm. The experiments verify the effectiveness of the two strategies and also illustrate the usefulness of the framework.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 94
    Publication Date: 2019
    Description: Linearly and nonlinearly parameterized approximate dynamic programming approaches for output reference model (ORM) tracking control are proposed. The ORM tracking problem is of significant interest in practice since, with a linear ORM, the closed-loop control system is indirectly feedback-linearized, and value iteration (VI) offers the means to achieve ORM tracking without using the process dynamics. Ranging from linear to nonlinear parameterizations, a successful approximate VI implementation for continuous state-action spaces depends on several key parameters, such as the problem dimension, the exploration of the state-action space, the size of the state-transitions dataset, and a suitable selection of the function approximators. We show that, using the same transitions dataset and a general linear parameterization of the Q-function, high-performance ORM tracking can be achieved with an approximate VI scheme at the same performance level as a neural-network (NN)-based implementation that is more complex and takes significantly longer to learn. However, the latter proves to be more robust to the hyperparameter selection, the dataset size, and the exploration strategy, recommending it as the de facto practical implementation. The case study is aimed at ORM tracking of a real-world nonlinear two-input, two-output aerodynamic process with ten internal states, as a representative high-order system. (A generic linear fitted value-iteration sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
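    Note: the key ingredient in this entry is approximate value iteration over a dataset of state transitions with a linearly parameterized Q-function. The following Python sketch shows that ingredient on an invented scalar tracking task with a discretized action set and polynomial features; the dynamics, reward, and features are assumptions, and this is not the aerodynamic ORM case study.

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = np.linspace(-1.0, 1.0, 5)          # discretized action set
GAMMA = 0.9

def step(x, u):
    """Toy scalar dynamics; reward penalizes distance of the next state from the reference 0."""
    x_next = 0.8 * x + 0.5 * u + 0.01 * rng.standard_normal()
    return x_next, -(x_next ** 2)

def phi(x, u):
    """Linear-in-parameters features of the Q-function."""
    return np.array([1.0, x, u, x * u, x * x, u * u])

# Collect an exploratory transitions dataset (random states and actions).
data = []
for _ in range(3000):
    x = rng.uniform(-2, 2)
    u = rng.choice(ACTIONS)
    x_next, r = step(x, u)
    data.append((x, u, r, x_next))

theta = np.zeros(6)
PHI = np.array([phi(x, u) for x, u, _, _ in data])
for _ in range(50):                          # approximate value iteration sweeps
    targets = np.array([r + GAMMA * max(phi(x_next, a) @ theta for a in ACTIONS)
                        for _, _, r, x_next in data])
    theta, *_ = np.linalg.lstsq(PHI, targets, rcond=None)   # least-squares projection

greedy = lambda x: ACTIONS[int(np.argmax([phi(x, a) @ theta for a in ACTIONS]))]
print("greedy action at x=1.5:", greedy(1.5))
```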
  • 95
    Publication Date: 2019
    Description: This paper investigates a real-world distribution problem arising in the vehicle production industry, particularly in a logistics company, in which cars and vans must be loaded onto auto-carriers and then delivered to dealerships. A solution to the problem involves both the loading and the optimal routing, without violating the capacity and time-window constraints of each auto-carrier. A two-phase heuristic algorithm was implemented to solve the problem: in the first phase the heuristic builds a route with an optimal insertion procedure, and in the second phase it determines a feasible loading. The experimental results show that the proposed algorithm can be used to tackle the transportation problem by minimizing the total traveling distance, the loading/unloading operations, and the transportation costs, facilitating the decision-making process for the logistics company. (A generic cheapest-insertion sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
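    Note: the first phase of the heuristic builds routes by insertion. The following Python sketch shows a generic cheapest-insertion procedure over a toy distance matrix; the feasibility callback merely stands in for the capacity, time-window, and loading checks of the actual algorithm.

```python
def cheapest_insertion(dist, depot, customers, feasible=lambda route: True):
    """Build one route by repeatedly inserting the customer at the position that
    increases the route length the least. dist[i][j] is the travel distance;
    'feasible' stands in for the capacity / time-window / loading checks."""
    route = [depot, depot]                    # start and end at the depot
    remaining = set(customers)
    while remaining:
        best = None                           # (extra_cost, customer, position)
        for c in remaining:
            for pos in range(1, len(route)):
                i, j = route[pos - 1], route[pos]
                extra = dist[i][c] + dist[c][j] - dist[i][j]
                candidate = route[:pos] + [c] + route[pos:]
                if feasible(candidate) and (best is None or extra < best[0]):
                    best = (extra, c, pos)
        if best is None:                      # no feasible insertion left
            break
        _, c, pos = best
        route.insert(pos, c)
        remaining.discard(c)
    return route

# Symmetric toy distance matrix: node 0 is the depot, 1-3 are dealerships.
dist = [[0, 4, 6, 8],
        [4, 0, 3, 5],
        [6, 3, 0, 2],
        [8, 5, 2, 0]]
print(cheapest_insertion(dist, depot=0, customers=[1, 2, 3]))
```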
  • 96
    Publication Date: 2019
    Description: The Internet is becoming a major source of radicalization. The propaganda efforts of new extremist groups include creating new propaganda videos from fragments of old terrorist attack videos. This article presents a web-scraping method for retrieving relevant videos and a pHash-based algorithm that identifies the original content of a video. Automatic novelty verification is now possible, which can potentially reduce and improve journalists’ research work, as well as reduce the spread of fake news. The obtained results have been satisfactory, as all original sources of the new videos were identified correctly. (A simplified perceptual-hash sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
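    Note: the following Python sketch shows a simplified DCT-based perceptual hash of a frame and a Hamming-distance comparison. It is a generic pHash-style variant with arbitrary hash size and matching threshold, not necessarily the exact hashing used in the paper; it assumes Pillow, NumPy, and SciPy are available.

```python
import numpy as np
from PIL import Image
from scipy.fft import dct

def phash(image: Image.Image, hash_size=8, highfreq_factor=4) -> int:
    """Return a 64-bit perceptual hash of the image."""
    size = hash_size * highfreq_factor
    pixels = np.asarray(image.convert("L").resize((size, size)), dtype=np.float64)
    # 2D DCT, then keep only the top-left low-frequency block.
    freq = dct(dct(pixels, axis=0, norm="ortho"), axis=1, norm="ortho")
    low = freq[:hash_size, :hash_size]
    bits = (low > np.median(low)).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def hamming(h1: int, h2: int) -> int:
    return bin(h1 ^ h2).count("1")

def is_reused_frame(frame, archive_hashes, threshold=10) -> bool:
    """A frame is considered reused if its hash is within 'threshold' bits of any archived hash."""
    h = phash(frame)
    return any(hamming(h, old) <= threshold for old in archive_hashes)

# Example with two synthetic frames: the second is a slightly brightened copy of the first.
rng = np.random.default_rng(0)
base = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
frame_a = Image.fromarray(base)
frame_b = Image.fromarray(np.clip(base.astype(int) + 15, 0, 255).astype(np.uint8))
print(is_reused_frame(frame_b, [phash(frame_a)]))
```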
  • 97
    Publication Date: 2019
    Description: The paper deals with the problem of global minimization of a polynomial function expressed through the Frobenius norm of two-dimensional or three-dimensional matrices. An adaptive procedure is proposed which applies a Multistart algorithm according to a heuristic approach. The basic step of the procedure consists of splitting the runs started from different initial points into segments of fixed length and interlacing the processing order of the segments, discarding those which appear less promising. A priority queue is suggested to implement this strategy. Various parameters contribute to the handling of the queue, whose length shrinks during the computation, allowing a considerable saving of computational time with respect to classical procedures. To verify the validity of the approach, extensive experiments have been performed on both nonnegatively constrained and unconstrained problems. (A sketch of the segmented, priority-queue-driven Multistart control structure follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
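    Note: the following Python sketch reproduces the control structure described in this entry: runs from many starting points are split into fixed-length segments, kept in a priority queue, advanced one segment at a time, and the least promising runs are discarded as the queue shrinks. Simple random hill climbing and an invented polynomial objective stand in for the actual local solver and Frobenius-norm objective.

```python
import heapq
import random

def segmented_multistart(f, dim, n_starts=20, segment_len=200, keep_fraction=0.5,
                         rounds=6, step=0.1, seed=0):
    """Interlace local-search runs from many starting points. In each round every
    surviving run advances by one fixed-length segment; the queue is then shrunk,
    discarding the least promising runs."""
    rng = random.Random(seed)
    runs = []                                     # priority queue of (best_value, id, point)
    for i in range(n_starts):
        x = [rng.uniform(-5, 5) for _ in range(dim)]
        heapq.heappush(runs, (f(x), i, x))

    for _ in range(rounds):
        survivors = []
        n_keep = max(1, int(len(runs) * keep_fraction))
        for _ in range(n_keep):                   # advance only the most promising runs
            value, i, x = heapq.heappop(runs)
            for _ in range(segment_len):          # one segment of random hill climbing
                cand = [xi + rng.gauss(0, step) for xi in x]
                fc = f(cand)
                if fc < value:
                    value, x = fc, cand
            survivors.append((value, i, x))
        runs = survivors                          # the rest of the queue is discarded
        heapq.heapify(runs)

    best = min(runs)
    return best[0], best[2]

# Example: a small quartic polynomial objective in 3 variables.
f = lambda v: sum((t * t - 1.0) ** 2 for t in v)
best_value, best_point = segmented_multistart(f, dim=3)
print(best_value, best_point)
```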
  • 98
    Publication Date: 2019
    Description: Blast-resistant buildings are mainly used to protect key instruments, controllers, expensive equipment, and people from explosion waves. Oil and gas industry projects almost always include blast-resistant buildings. For instance, based on a hazard identification (HAZID) and hazard and operability (HAZOP) analysis of a plant, control rooms and substations are sometimes designed to withstand an external free-air explosion that generates blast overpressure. In this regard, a building façade is considered the first barrier of resistance against explosion waves, and it therefore plays an important role in reducing a building's vulnerability and human casualties. If the façade lacks sufficient resistance, explosion waves enter the building and cause irreparable damage. Consequently, it is important to study and evaluate the various materials used in façades against the consequences of an explosion, since the materials used in a building play a key role in its vulnerability. This study compares different types of building façades against explosion waves. A literature review and the fuzzy Delphi method were applied to find the most critical criteria, and then a fuzzy Evaluation based on Distance from Average Solution (EDAS) was applied to assess the various façade materials from the perspective of resiliency. A questionnaire was used to measure the effective indices and to collect experts' opinions. Finally, by applying this methodology to a case study, it was concluded that a stone façade performs much better against explosions. (A crisp EDAS sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
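    Note: the following Python sketch shows the crisp (non-fuzzy) EDAS steps, ranking alternatives by their weighted positive and negative distances from the average solution. The materials, criteria, scores, and weights are invented for illustration and are not the questionnaire data of the study.

```python
import numpy as np

def edas(scores, weights, benefit):
    """Crisp EDAS: scores is an (alternatives x criteria) matrix, weights sum to 1,
    benefit[j] is True for benefit criteria and False for cost criteria.
    Returns the appraisal score of each alternative (higher is better)."""
    X = np.asarray(scores, dtype=float)
    w = np.asarray(weights, dtype=float)
    av = X.mean(axis=0)                                   # average solution per criterion
    pda = np.where(benefit, np.maximum(0, X - av), np.maximum(0, av - X)) / av
    nda = np.where(benefit, np.maximum(0, av - X), np.maximum(0, X - av)) / av
    sp, sn = pda @ w, nda @ w                             # weighted distance sums
    nsp = sp / sp.max()
    nsn = 1 - sn / sn.max()
    return (nsp + nsn) / 2                                # appraisal scores

# Invented example: three facade materials rated on blast resistance, cost, and weight.
materials = ["stone", "brick", "composite panel"]
scores = [[8, 6, 7],            # stone
          [6, 8, 6],            # brick
          [7, 4, 9]]            # composite panel
weights = [0.5, 0.3, 0.2]
benefit = [True, False, True]   # cost is the only cost-type criterion
for name, score in zip(materials, edas(scores, weights, benefit)):
    print(f"{name}: {score:.3f}")
```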
  • 99
    Publication Date: 2019
    Description: Chen and Flum showed that any FPT-approximation of the k-Clique problem is not in para-AC^0 and that the k-DominatingSet (k-DomSet) problem cannot be computed by para-AC^0 circuits. It is natural to ask whether an f(k)-approximation of the k-DomSet problem is in para-AC^0 for some computable function f. Very recently it was proved, by S., Laekhanukit and Manurangsi and by Lin separately, that assuming W[1] ≠ FPT, the k-DomSet problem cannot be f(k)-approximated by FPT algorithms for any computable function f. We observe that the constructions used in Lin's work can be carried out using constant-depth circuits, and thus we prove that para-AC^0 circuits cannot approximate this problem with ratio f(k) for any computable function f. Moreover, under the hypothesis that the 3-CNF-SAT problem cannot be computed by constant-depth circuits of size 2^(εn) for some ε > 0, we show that constant-depth circuits of size n^(o(k)) cannot distinguish graphs whose domination numbers are either ≤ k or > (log n / (3 log log n))^(1/k). However, we find that this hypothesis may be hard to settle, by showing that it implies NP ⊈ NC^1.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 100
    Publication Date: 2019
    Description: Depth-first search (DFS) is a well-known graph traversal algorithm that can be performed in O(n + m) time for a graph with n vertices and m edges. We consider the dynamic DFS problem, that is, maintaining a DFS tree of an undirected graph G while edges and vertices are gradually inserted into or deleted from G. We present an algorithm for this problem which takes worst-case O(√(mn) · polylog(n)) time per update and requires only (3m + o(m)) log n bits of space. This algorithm reduces the space usage of the dynamic DFS algorithm to only 1.5 times the space of the adjacency list of the graph. We also show applications of our dynamic DFS algorithm to dynamic connectivity, biconnectivity, and 2-edge-connectivity problems under vertex insertions and deletions. (A static DFS-tree sketch follows this entry.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
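    Note: as background for the static starting point of this entry (a DFS tree computable in O(n + m) time), the following Python sketch builds a DFS tree iteratively; it covers only the static case, not the space-efficient dynamic algorithm of the paper.

```python
def dfs_tree(adj, root):
    """Iterative depth-first search on an undirected graph given as
    adj: vertex -> list of neighbours. Returns parent pointers of a DFS tree
    rooted at 'root'; runs in O(n + m) time."""
    parent = {root: None}
    # The stack holds (vertex, iterator over its neighbours) so each edge is scanned once.
    stack = [(root, iter(adj[root]))]
    while stack:
        v, it = stack[-1]
        for w in it:
            if w not in parent:
                parent[w] = v
                stack.append((w, iter(adj[w])))
                break
        else:
            stack.pop()
    return parent

# Small example graph.
adj = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(dfs_tree(adj, 1))   # e.g. {1: None, 2: 1, 4: 2, 3: 4}
```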