ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Filter
  • Articles  (2,316)
  • MDPI  (2,316)
  • Computer Science  (2,316)
  • 101
    Publication Date: 2018
    Description: Recurrent neural networks (RNNs) have achieved remarkable success in sequence labeling tasks that require memory. An RNN can remember previous information in a sequence and can thus be used to solve natural language processing (NLP) tasks. Named entity recognition (NER) is a common NLP task and can be treated as a classification problem. We propose a bidirectional long short-term memory (LSTM) model for this entity recognition task on Arabic text. The LSTM network can process a sequence and relate each part of it to the rest, which makes it useful for the NER task. Moreover, we use pre-trained word embeddings to represent the inputs that are fed into the LSTM network. The proposed model is evaluated on a popular dataset called “ANERcorp.” Experimental results show that the model with word embeddings achieves an F-score of approximately 88.01%. (A minimal BiLSTM sketch follows this record.)
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
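    A minimal sketch of the kind of bidirectional LSTM tagger with pre-trained word embeddings described above, written with Keras. The layer sizes, tag count, and the random embedding matrix are illustrative assumptions, not the authors' configuration.

      # Illustrative BiLSTM sequence tagger with frozen pre-trained embeddings (sketch only).
      import numpy as np
      from tensorflow.keras import layers, models, initializers

      vocab_size, embed_dim, n_tags = 20000, 300, 9        # assumed sizes
      pretrained = np.random.rand(vocab_size, embed_dim)   # stand-in for real word vectors

      model = models.Sequential([
          layers.Embedding(vocab_size, embed_dim,
                           embeddings_initializer=initializers.Constant(pretrained),
                           mask_zero=True, trainable=False),
          layers.Bidirectional(layers.LSTM(128, return_sequences=True)),
          layers.TimeDistributed(layers.Dense(n_tags, activation="softmax")),
      ])
      model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")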
  • 102
    Publication Date: 2018
    Description: In this paper, a printed monopole antenna design for WiMAX/WLAN applications in cable-free self-positioning seismograph nodes is proposed. Great improvements were achieved in miniaturizing the antenna and in widening the narrow bandwidth of the high-frequency band. The antenna was fed by a microstrip gradient line and consisted of a triangle, an inverted-F shape, and an M-shaped structure, which was rotated 90° counterclockwise to form a surface-radiating patch. This structure effectively widened the operating bandwidth of the antenna. Excitation led to the generation of two impedance bands of 2.39–2.49 and 4.26–7.99 GHz for a voltage standing wave ratio of less than 2. The two impedance bandwidths were 100 MHz, i.e., 4.08% relative to the center frequency of 2.45 GHz, and 3730 MHz, i.e., 64.31% relative to the center frequency of 5.80 GHz, covering the WiMAX high-frequency band (5.25–5.85 GHz) and the WLAN band (2.4/5.2/5.8). This article describes the design details of the antenna and presents the results of both simulations and experiments that show good agreement. The proposed antenna meets the field-work requirements of cable-less seismograph nodes.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 103
    Publication Date: 2018
    Description: The present contribution focuses on the estimation of the Cartesian kinematic jerk of the hips’ orientation during a full three-dimensional movement, in the context of enabling eHealth applications of advanced mathematical signal analysis. The kinematic jerk index is estimated on the basis of gyroscopic signals acquired offline through a smartphone. A dedicated free mobile application is used to acquire the gyroscopic signals and to transmit them to a personal computer over a wireless network. The personal computer processes the acquired data and returns the kinematic jerk index associated with a motor task. A comparison of the kinematic jerk index values on a number of data sets confirms that such an index can be used to evaluate the fluency of hip orientation during motion. The present research confirms that the proposed gyroscopic data acquisition/processing setup constitutes an inexpensive and portable solution for motion fluency analysis. The proposed data-acquisition and data-processing setup may serve as a supporting eHealth technology in clinical biomechanics as well as in sports science. (A generic jerk-computation sketch follows this record.)
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
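    One generic way to quantify movement smoothness from gyroscopic signals, in the spirit of the jerk index above, is to differentiate the angular-velocity trace twice and integrate the squared jerk magnitude. The sketch below shows only that generic computation; the sampling rate and the absence of normalization are assumptions, not the paper's exact index.

      import numpy as np

      def jerk_index(omega, fs=100.0):
          """Generic smoothness metric from a gyroscope angular-velocity trace.
          omega: array of shape (N, 3) in rad/s; fs: sampling rate in Hz (assumed)."""
          dt = 1.0 / fs
          alpha = np.gradient(omega, dt, axis=0)   # angular acceleration
          jerk = np.gradient(alpha, dt, axis=0)    # angular jerk
          return np.trapz(np.sum(jerk ** 2, axis=1), dx=dt)  # integrated squared jerk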
  • 104
    Publication Date: 2018
    Description: This paper presents a fast algorithm for texture-less object recognition, which is designed to be robust to cluttered backgrounds and small transformations. At its core, the proposed method is a two-stage template-based procedure using an orientation compressing map and discriminative regional weight (OCM-DRW) to effectively detect texture-less objects. In the first stage, the method quantizes and compresses all the orientations in a neighborhood to obtain the orientation compressing map, which is then used to generate a set of possible object locations. To recognize the object in these candidate locations, the second stage computes the similarity of each possible object location with the learned template by using the discriminative regional weight, which can effectively distinguish different categories of objects with similar parts. Experiments on publicly available texture-less object datasets indicate that, apart from yielding efficient computational performance, the proposed method also attains remarkable recognition rates, surpassing recent state-of-the-art texture-less object detectors in the presence of high clutter, occlusion, and scale-rotation changes. It improves accuracy and speed by 8% and 370%, respectively, relative to the previous best result on the D-Textureless dataset.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 105
    Publication Date: 2018
    Description: A finite-difference approach with non-uniform meshes is presented for simulating magnetotelluric responses in 2D structures. We derive the calculation formulas of this scheme from the boundary value problems for the electric and magnetic fields, and compare finite-difference solutions with finite-element numerical results and with analytical solutions of a 1D model. First, a homogeneous half-space model was tested, showing that the finite-difference approach provides very good accuracy for 2D magnetotelluric modeling. Then the solutions were compared with the analytical solutions for a two-layered geo-electric model; the relative errors of both the apparent resistivity and the impedance phase increased as the frequency increased. Finally, we compared our finite-difference results for the COMMEMI 2D-0 model with finite-element solutions, and the two are in close agreement. These comparisons confirm the validity and reliability of our finite-difference algorithm. A future project will extend the 2D structures to 3D, where non-uniform meshes should perform especially well. (The standard apparent-resistivity relations compared here are recalled after this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
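    For reference, the apparent resistivity and impedance phase compared above are obtained from the surface impedance in the usual magnetotelluric way; these are the textbook definitions, not formulas specific to this paper.

      \[
        Z_{xy} = \frac{E_x}{H_y}, \qquad
        \rho_a = \frac{1}{\omega \mu_0}\,\lvert Z_{xy} \rvert^{2}, \qquad
        \varphi = \arg Z_{xy},
      \]
      where \(\omega\) is the angular frequency and \(\mu_0\) the magnetic permeability of free space.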
  • 106
    Publication Date: 2018
    Description: In this paper, the optimal designs of type-1 and interval type-2 fuzzy systems for the classification of the heart rate level are presented. The contribution of this work is a proposed approach for achieving the optimal design of interval type-2 fuzzy systems for the classification of the heart rate in patients. The fuzzy rule base was designed based on the knowledge of experts. Optimization of the membership functions of the fuzzy systems is done in order to improve the classification rate and provide a more accurate diagnosis, and for this goal the Bird Swarm Algorithm was used. Two different type-1 fuzzy systems are designed and optimized, the first one with trapezoidal membership functions and the second with Gaussian membership functions. Once the best type-1 fuzzy systems have been obtained, these are considered as a basis for designing the interval type-2 fuzzy systems, where the footprint of uncertainty was optimized to find the optimal representation of uncertainty. After performing different tests with patients and comparing the classification rate of each fuzzy system, it is concluded that fuzzy systems with Gaussian membership functions provide a better classification than those designed with trapezoidal membership functions. Additionally, tests were performed with the Crow Search Algorithm to carry out a performance comparison, with Bird Swarm Algorithm being the one with the best results.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 107
    Publication Date: 2018
    Description: An important aim of next-generation (5G) and beyond mobile communication systems is to provide thousand-fold capacity growth and to support high-speed data transmission of up to several megabits per second. However, the research community and industry face a dilemma between power consumption and hardware design when trying to satisfy the increasing communication requirements. In order to reduce system cost, power consumption, and implementation complexity, a novel scheme for symbol timing and frequency offset estimation with low-resolution analog-to-digital converters (ADCs), based on an orthogonal frequency division multiplexing ultra-wideband (OFDM-UWB) system, is proposed in this paper. In our work, we first verified that the autocorrelation of the pseudo-noise (PN) sequences is not affected by low-resolution quantization. With the help of this property, timing synchronization can be implemented robustly against the influence of low-resolution quantization. Then, the transmitted signal structure and the low-resolution quantization scheme used under the synchronization scheme were designed. Finally, a frequency offset estimation model with one-bit timing synchronization was established. Theoretical analysis and simulation results corroborate that the performance of the proposed scheme not only approaches that of the full-resolution synchronization scheme but also comes with lower power consumption and computational complexity.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 108
    Publication Date: 2018
    Description: With the emergence of autonomous vehicles and the internet of vehicles (IoV), future roads of smart cities will carry a mix of autonomous and automated vehicles alongside regular vehicles that require human operators. To ensure the safety of road commuters in such a network, it is imperative to enhance the performance of Advanced Driver Assistance Systems (ADAS). Real-time driving risk prediction is a fundamental part of an ADAS. Many driving risk prediction systems have been proposed; however, most of them are based only on the vehicle's velocity, whereas in most accident scenarios other factors are also involved, such as weather conditions or driver fatigue. In this paper, we propose an accident prediction system for vehicular ad hoc networks (VANETs) in urban environments, in which we consider the crash risk as a latent variable that can be observed through multiple observations such as velocity, weather conditions, risk location, nearby vehicle density, and driver fatigue. A Hidden Markov Model (HMM) is used to model the correlation between these observations and the latent variable. Simulation results show that the proposed system has better performance in terms of sensitivity and precision compared to state-of-the-art single-factor schemes.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 109
    Publication Date: 2018
    Description: Conventional control charts are among the most important techniques in statistical process control and are used to assess whether processes are in or out of control. As traditional control charts deal with crisp data, they are not suitable for studying unclear, vague, and fuzzy data. In many real-world applications, however, the data to be used in a control charting method are not crisp, since they are approximated due to environmental uncertainties and systematic ambiguities involved in the systems under investigation. In these situations, fuzzy numbers and linguistic variables are used to capture such uncertainties. That is why the use of a fuzzy control chart, in which fuzzy data are used, is justified. As an exponentially weighted moving average (EWMA) scheme is usually used to detect small shifts, in this paper a fuzzy EWMA (F-EWMA) control chart is proposed to detect small shifts in the process mean when fuzzy data are available. The application of the newly developed fuzzy control chart is illustrated using real-life data. (The crisp EWMA recursion that the chart generalizes is recalled after this record.)
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
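    As background, the crisp EWMA chart referred to above uses the standard recursion and control limits below, where λ is the smoothing constant and L the limit width; the fuzzy chart replaces the crisp observations with fuzzy numbers. These are the textbook expressions, not the paper's fuzzy generalization itself.

      \[
        z_i = \lambda x_i + (1-\lambda)\,z_{i-1}, \qquad z_0 = \mu_0,
      \]
      \[
        \mathrm{UCL}_i,\ \mathrm{LCL}_i \;=\; \mu_0 \pm L\,\sigma
        \sqrt{\frac{\lambda}{2-\lambda}\Bigl[1-(1-\lambda)^{2i}\Bigr]}.
      \]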
  • 110
    Publication Date: 2018
    Description: Robot technology at small- and medium-sized enterprises has become a crucial part of current business operations. Beginning with the manufacturing industry, more industries than ever before have recently begun making use of robot technology to increase operational efficiency and productivity. However, prior studies regarding innovation related to intelligent robot use have been limited to developing strategies for describing robot technologies in general. Therefore, we developed a research model for investigating process innovation as it relates to intelligent robots. Based on the literature, two variables of technology benefits (direct usefulness and indirect usefulness) and two constructs of environmental pressure (industry and government) were incorporated into the research model as key determinants of a firm's process innovation. Furthermore, organizational supports were added as moderating variables on the relationship between technology benefits and process innovation. We collected 257 responses from respondents in managerial positions at various firms in order to test the proposed hypotheses using structural equation modeling in AMOS 22.0. The results revealed that all variables, as well as the moderator, have a significant impact on process innovation. The findings of this study provide theoretical and practical implications for process innovation based on intelligent robot technology.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 111
    Publication Date: 2018
    Description: The research presents the methodology of improving the accuracy in sentiment classification in the light of modelling the latent semantic relations (LSR). The objective of this methodology is to find ways of eliminating the limitations of the discriminant and probabilistic methods for LSR revealing and customizing the sentiment classification process (SCP) to the more accurate recognition of text tonality. This objective was achieved by providing the possibility of the joint usage of the following methods: (1) retrieval and recognition of the hierarchical semantic structure of the text and (2) development of the hierarchical contextually-oriented sentiment dictionary in order to perform the context-sensitive SCP. The main scientific contribution of this research is the set of the following approaches: at the phase of LSR revealing (1) combination of the discriminant and probabilistic models while applying the rules of adjustments to obtain the final joint result; at all SCP phases (2) considering document as a complex structure of topically completed textual components (paragraphs) and (3) taking into account the features of persuasive documents’ type. The experimental results have demonstrated the enhancement of the SCP accuracy, namely significant increase of average values of recall and precision indicators and guarantee of sufficient accuracy level.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 112
    Publication Date: 2018
    Description: Hinge joint damage is a typical form of damage occurring in simply supported slab bridges, which can present adverse effects on the overall force distribution of the structure. However, damage identification methods of hinge joint damage are still limited. In this study, a damage identification algorithm for simply supported hinged-slab bridges based on the modified hinge plate method (MHPM) and artificial bee colony (ABC) algorithms was proposed by considering the effect of hinge damage conditions on the lateral load distribution (LLD) of structures. Firstly, MHPM was proposed and demonstrated, which is based on a traditional hinge plate method by introducing relative displacement as a damage factor to simulate hinge joint damage. The effectiveness of MHPM was verified through comparison with the finite element method (FEM). Secondly, damage identification was treated as the inverse problem of calculating the LLD in damage conditions of simply supported slab bridges. Four ABC algorithms were chosen to solve the problem due to its simple structure, ease of implementation, and robustness. Comparisons of convergence speed and identification accuracy with genetic algorithm and particle swarm optimization were also conducted. Finally, hinged bridges composed of four and seven slabs were studied as numerical examples to account for the feasibility and correctness of the proposed method. The simulation results revealed that the proposed algorithm could identify the location and degree of damaged joints efficiently and precisely.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 113
    Publication Date: 2018
    Description: In the paper, we tackle the least squares estimators of the Vasicek-type model driven by sub-fractional Brownian motion: dX_t = (μ + θX_t) dt + dS_t^H, t ≥ 0, with X_0 = 0, where S^H is a sub-fractional Brownian motion whose Hurst index H is greater than 1/2, and μ ∈ ℝ and θ ∈ ℝ₊ are two unknown parameters. Based on the so-called continuous observations, we suggest least squares estimators of μ and θ and discuss the consistency and asymptotic distributions of the two estimators. (The least-squares criterion underlying such estimators is recalled after this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
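    As general background (the paper's precise estimators and their asymptotics are not reproduced here), the continuous-observation least-squares approach for a model of this form minimizes a quadratic criterion in (μ, θ), whose formal normal equations reduce to a 2×2 linear system:

      \[
        (\hat\mu_T, \hat\theta_T) \;=\; \arg\min_{\mu,\theta}
        \int_0^T \bigl(\dot X_t - \mu - \theta X_t\bigr)^2\,dt,
      \]
      \[
        \begin{pmatrix} T & \int_0^T X_t\,dt \\[4pt] \int_0^T X_t\,dt & \int_0^T X_t^2\,dt \end{pmatrix}
        \begin{pmatrix} \hat\mu_T \\[4pt] \hat\theta_T \end{pmatrix}
        =
        \begin{pmatrix} X_T \\[4pt] \int_0^T X_t\,dX_t \end{pmatrix},
      \]
      where the stochastic integral has to be interpreted appropriately for the sub-fractional driving noise.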
  • 114
    Publication Date: 2018
    Description: Social customer relationship management (SCRM) is a new philosophy influencing the relationship between customer and organization where the customer gets the opportunity to control the relationship through social media. This paper aims to identify (a) the current level of SCRM and (b) the influence of basic organizational characteristics on the SCRM level. The data were gathered through a questionnaire distributed to 362 organizations headquartered in the Czech Republic. The questionnaire comprised 54 questions focusing on the significance of marketing and CRM practices, establishing a relationship with the customer, online communities, the use of social media in marketing, and acquiring and managing information. Scalable questions with a typical five-level Likert scale were applied in the questionnaire. The results show that larger firms more often set up their own online communities and manage them strategically; moreover, they are able to manage information better. Contrariwise, small-sized organizations use social networks as a way to establish communication with the customer more than large-sized entities. The use of social media for marketing purposes is significantly higher in organizations oriented to consumer markets than in those oriented to business markets.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 115
    Publication Date: 2018
    Description: OpenCourseWare (OCW) systems have been established to provide open educational resources that are accessible by anyone, including learners with special accessibility needs and preferences. We need a formal and interoperable way to describe these preferences in order to use them in OCW systems and retrieve relevant educational resources. This formal representation should use standard accessibility definitions for OCW that can be reused by other OCW systems to represent accessibility concepts. In this article, we present an ontology to represent the accessibility needs of learners with respect to the IMS AfA specifications. The ontology definitions, together with rule-based queries, are used to retrieve relevant educational resources. Related to this, we developed a user interface component that enables users to create accessibility profiles representing their individual needs and preferences based on our ontology. We evaluated the approach with five example profiles.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 116
    Publication Date: 2018
    Description: To address the continuous growth in high-speed ubiquitous access required by residential users and enterprises, telecommunication operators must upgrade their networks to higher data rates. For optical fiber access networks that directly connect end users to the metro/regional network, capacity upgrades must be done in a cost- and energy-efficient manner. 40 Gb/s is a possible lane rate for next-generation passive optical networks (NG-PONs). Ideally, existing 10G PON components could be reused to support a 40 Gb/s lane-rate NG-PON transceiver, which requires an efficient modulation format and digital signal processing (DSP) to alleviate the bandwidth limitation and fiber dispersion. The major contribution of this work is to offer insightful performance comparisons of 40 Gb/s lane-rate electrical three-level duobinary, optical duobinary, and four-level pulse amplitude modulation (PAM-4), incorporating low-complexity DSP, including linear and nonlinear Volterra equalization as well as maximum likelihood sequence estimation. A detailed analysis and comparison of the complexity of the various DSP algorithms is performed, and transceiver bandwidth optimization is also undertaken. The results show that the choice of the proper modulation format and DSP configuration depends on the transmission distances of interest.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 117
    Publication Date: 2018
    Description: Container ships must pass through multiple ports of call during a voyage. Therefore, forecasting container volume information at the port of origin and sending such information to subsequent ports is crucial for container terminal management and container stowage personnel. Numerous factors influence container allocation to container ships for a voyage, and the degree of influence varies, engendering a complex nonlinearity. Therefore, this paper proposes a model based on gray relational analysis (GRA) and a mixed-kernel support vector machine (SVM) for predicting container allocation to a container ship for a voyage. First, in this model, the weights of influencing factors are determined through GRA. Then, the weighted factors serve as the input of the SVM model, and the SVM model parameters are optimized through a genetic algorithm. Numerical simulations revealed that the proposed model could effectively predict the number of containers for a container ship voyage and that it exhibited strong generalization ability and high accuracy. Accordingly, this model provides a new method for predicting container volume for a voyage. (A rough GRA-plus-SVM sketch follows this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
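    A rough sketch of the pipeline described above: grey relational grades are computed against the target series and used to weight the input features of a support-vector regressor. The distinguishing coefficient ρ = 0.5, the RBF kernel, and the use of scikit-learn are assumptions; the paper's mixed kernel and genetic-algorithm tuning are omitted.

      import numpy as np
      from sklearn.svm import SVR

      def grey_relational_grade(X, y, rho=0.5):
          """Grey relational grade of each feature column of X with respect to target y."""
          Xn = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)   # normalize to [0, 1]
          yn = (y - y.min()) / (np.ptp(y) + 1e-12)
          delta = np.abs(Xn - yn[:, None])
          coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
          return coeff.mean(axis=0)                                 # one grade per feature

      X, y = np.random.rand(200, 6), np.random.rand(200)            # placeholder data
      w = grey_relational_grade(X, y)                                # factor weights
      model = SVR(kernel="rbf", C=10.0).fit(X * w, y)                # weighted inputs
      prediction = model.predict(X[:5] * w)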
  • 118
    Publication Date: 2019
    Description: The rapid development of distributed technology has made it possible to store and query massive trajectory data. As a result, a variety of schemes for big trajectory data management have been proposed. However, the factor of data transmission is not considered in most of these, resulting in a certain impact on query efficiency. In view of that, we present THBase, a coprocessor-based scheme for big trajectory data management in HBase. THBase introduces a segment-based data model and a moving-object-based partition model to solve massive trajectory data storage, and exploits a hybrid local secondary index structure based on Observer coprocessor to accelerate spatiotemporal queries. Furthermore, it adopts certain maintenance strategies to ensure the colocation of relevant data. Based on these, THBase designs node-locality-based parallel query algorithms by Endpoint coprocessor to reduce the overhead caused by data transmission, thus ensuring efficient query performance. Experiments on datasets of ship trajectory show that our schemes can significantly outperform other schemes.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 119
    Publication Date: 2019
    Description: Software defect prediction is an important means of guaranteeing software quality. Because there is often not enough historical data within a project to train a classifier, cross-project defect prediction (CPDP) has been recognized as a fundamental approach. However, traditional defect prediction methods use feature attributes to represent samples, which cannot avoid negative transfer and may result in a poorly performing model in CPDP. This paper proposes a multi-source cross-project defect prediction method based on dissimilarity space (DM-CPDP). This method not only retains the original information but also captures each sample's relationship with other objects, so it can enhance the discriminant ability of the sample attributes with respect to the class label. The method first uses density-based clustering to construct the prototype set from the cluster centers of the samples in the target set. Then, the arc-cosine kernel is used to calculate the sample dissimilarities between the prototype set and the source domain or the target set to form the dissimilarity space. In this space, the training set is obtained with the earth mover's distance (EMD) method. For the unlabeled samples converted from the target set, the k-Nearest Neighbor (KNN) algorithm is used to label those samples. Finally, the model is learned from the training data with the TrAdaBoost method and used to predict new potential defects. The experimental results show that this approach performs better than other traditional CPDP methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 120
    Publication Date: 2018
    Description: In recent years, when solving multi-objective optimization problems (MOPs), especially discrete path optimization problems, multi-objective ant colony optimization algorithms (MOACOs), among other meta-heuristic algorithms, have frequently been used and improved, and they have become a hot research topic. This article starts from the basic process of ant colony algorithms for solving MOPs in order to illustrate the differences between the individual steps. Secondly, we provide a relatively complete classification of the algorithms from different aspects, in order to reflect the characteristics of the different algorithms more clearly. After that, considering the classification result, we carry out a comparison of some typical algorithms from the different categories on TSP (traveling salesman problem) instances of different sizes and analyze the results from the perspective of solution quality and convergence rate. Finally, we give some guidance on selecting among these MOACOs for a given problem and outline some directions for future research.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 121
    Publication Date: 2018
    Description: Conventional methods for measuring cardiovascular parameters use skin-contact techniques requiring a measuring device to be worn by the user. To avoid the discomfort of contact devices, camera-based techniques using photoplethysmography have recently been introduced. Nevertheless, these solutions are typically expensive and difficult to use daily at home. In this work, we propose an innovative solution for monitoring cardiovascular parameters that is low cost and can be easily integrated within any common home environment. The proposed system is a contact-less device composed of a see-through mirror equipped with a camera that detects the person's face and processes video frames using photoplethysmography in order to estimate the heart rate, the breathing rate, and the blood oxygen saturation. In addition, the color of the lips is automatically detected via clustering-based color quantization. The estimated parameters are used to predict a risk of cardiovascular disease by means of fuzzy inference rules integrated into the mirror-based monitoring system. Comparing our system to a contact device in measuring vital parameters on still or slightly moving subjects, we achieve measurement errors that are within acceptable margins according to the literature. Moreover, in most cases, the response of the fuzzy rule-based system is comparable with that of the clinician in assessing the risk level of cardiovascular disease. (A toy heart-rate-from-video sketch follows this record.)
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
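    A toy illustration of the camera-based photoplethysmography idea: average the green channel over the detected face region in each frame, then read the pulse rate from the dominant spectral peak of that trace. The frame rate, the plausible pulse band, and the use of the green channel are generic assumptions, not details of the mirror system described above.

      import numpy as np

      def heart_rate_bpm(green_means, fps=30.0):
          """Estimate pulse rate from the per-frame mean green intensity of the face ROI."""
          x = np.asarray(green_means, dtype=float)
          x = x - x.mean()                                   # remove the DC component
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
          power = np.abs(np.fft.rfft(x)) ** 2
          band = (freqs >= 0.7) & (freqs <= 4.0)             # ~42-240 bpm plausible pulse band
          return 60.0 * freqs[band][np.argmax(power[band])]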
  • 122
    Publication Date: 2018
    Description: Images may be corrupted by salt and pepper impulse noise during image acquisitions or transmissions. Although promising denoising performances have been recently obtained with sparse representations, how to restore high-quality images remains challenging and open. In this work, image sparsity is enhanced with a fast multiclass dictionary learning, and then both the sparsity regularization and robust data fidelity are formulated as minimizations of L0-L0 norms for salt and pepper impulse noise removal. Additionally, a numerical algorithm of modified alternating direction minimization is derived to solve the proposed denoising model. Experimental results demonstrate that the proposed method outperforms the compared state-of-the-art ones on preserving image details and achieving higher objective evaluation criteria.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 123
    Publication Date: 2018
    Description: This paper presents a rehabilitation system based on a customizable exergame protocol to prevent falls in the elderly population. The system is based on depth sensors and exergames. The experiments carried out with several seniors, in a day care center, make it possible to evaluate the usability and the efficiency of the system. The outcomes highlight the user-friendliness, the very good usability of the developed system and the significant enhancement of the elderly in maintaining a physical activity. The performance of the postural response is improved by an average of 80%.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 124
    Publication Date: 2018
    Description: In the present paper, a smart planar electrically steerable passive array radiator (ESPAR) antenna was developed and tested at the frequency of 1.33 GHz with the main goal to control the main radiation lobe direction, ensuring precise communication between the antenna that is implemented in an unmanned aerial vehicle (UAV) and the base station. A control system was also developed and integrated into the communication system: an antenna coupled to the control system. The control system consists of an Arduino, a digital potentiometer, and an improved algorithm that allows defining the radiation-lobe direction as a function of the UAV flight needs. The ESPAR antenna was tested in an anechoic chamber with the control system coupled to it so that all previously established requirements were validated.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 125
    Publication Date: 2018
    Description: To address the issue of cloud mixed workloads scheduling which might lead to system load imbalance and efficiency degradation in cloud computing, a novel cloud task staggering peak scheduling policy based on the task types and the resource load status is proposed. First, based on different task characteristics, the task sequences submitted by the user are divided into queues of different types by the fuzzy clustering algorithm. Second, the Performance Counters (PMC) mechanism is introduced to dynamically monitor the load status of resource nodes and respectively sort the resources by the metrics of Central Processing Unit (CPU), memory, and input/output (I/O) load size, so as to reduce the candidate resources. Finally, the task sequences of specific type are scheduled for the corresponding light loaded resources, and the resources usage peak is staggered to achieve load balancing. The experimental results show that the proposed policy can balance loads and improve the system efficiency effectively and reduce the resource usage cost when the system is in the presence of mixed workloads.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 126
    Publication Date: 2018
    Description: With the advancement and diversification of information retrieval technology, such technology has been widely applied in recent years in personalized information recommender systems (RSs) and e-commerce RSs in addition to data-mining applications, especially with respect to mobile phone purchases. By integrating the weights of fuzzy ordered weighted averaging (OWA) and gray relational analysis, this research calculated the recommended F1 indices of three weight calculation methods to be 20.5%, 14.36%, and 16.43% after an examination by 30 experimenters. According to the operational results attained by the 30 experimenters, the recommended products obtained by the fuzzy OWA and gray relational analysis calculation method covered the products recommended by the other two weight calculation methods with a higher recommendation effect.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 127
    Publication Date: 2018
    Description: Polarization in online social networks has gathered a significant amount of attention in the research community and in the public sphere due to stark disagreements among millions of participants on topics surrounding politics, climate, the economy, and other areas where agreement is required. This work investigates in greater depth a type of model that can produce ideological segregation as a result of polarization, depending on the strength of homophily and the ability of users to access like-minded individuals. Whether increased access can induce larger amounts of societal separation is important to investigate, and this work sheds further light on the phenomenon. Central to the hypothesis of homophilic alignment in friendship generation is the notion of a discussion group or community. These are modeled, and an investigation into their effect on the dynamics of polarization is presented. The social implications are that the initial phases of an ideological exchange can result in increased polarization, although a consensus is expected in the long run, and that the separation between groups is amplified when groups are constructed with homophilic ideological preferences.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 128
    Publication Date: 2018
    Description: In recent years, there has been a rapid evolution of wireless technologies that has led to the challenge of high demand for spectral resources. To overcome this challenge, good spectrum management is required, which calls for more efficient use of the spectrum. In this paper, we present a general system which makes a tradeoff between spectral efficiency (SE) and energy efficiency (EE) in cellular cognitive radio networks (CCRN), with their respective limits. We analyze the system taking into account the different types of power used in the CCRN, namely the spectrum detection power (Zs) and the relay power (Zr). An optimal transmission power allocation policy, formulated as a function of the sub-channel activity index (SAI), is posed as an optimization problem that maximizes spectrum utilization and minimizes the energy consumption of the secondary system's base station, subject to the constraints of the primary user system. We also evaluate the collaborative activity index of the sub-channel, which describes the activity of the primary users in the CCRN. The theoretical analyses and simulation results demonstrate that the relationship between SE and EE in the CCRN is not contradictory and that an optimal tradeoff between SE and EE can be achieved. Compared with a cognitive cellular network in which the secondary base stations (SBSs) adopt an equal power allocation strategy for sub-channels, the results of our proposed scheme indicate a significant improvement. Therefore, the model proposed in this paper offers a better tradeoff between SE and EE.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 129
    Publication Date: 2019
    Description: In this paper, we present an analysis of the mining process of two popular assets, Bitcoin and gold. The analysis highlights that Bitcoin, or more specifically its underlying technology, is a “safe haven” that allows modern environmental challenges to be faced better than with gold. Our analysis emphasizes that crypto-currency systems have a social and economic impact much smaller than that of traditional financial systems. We present an analysis of the several stages needed to produce an ounce of gold and an artificial agent-based market model simulating the Bitcoin mining process and allowing the quantification of Bitcoin mining costs. In this market model, miners validate the Bitcoin transactions using proof of work as the consensus mechanism, receive a reward in Bitcoins, sell a fraction of them to cover their expenses, and stay competitive in the market by buying and divesting hardware units and by adjusting their expenses, turning their machines off and on according to the signals provided by a technical analysis indicator, the so-called relative strength index. (A plain RSI implementation follows this record.)
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
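    The relative strength index mentioned at the end of the abstract is a standard technical indicator; a plain implementation of its usual 14-period form is sketched below. The period and Wilder smoothing are the conventional defaults, not necessarily the values used in the paper's market model.

      import numpy as np

      def rsi(prices, period=14):
          """Classic relative strength index over a 1-D price series."""
          deltas = np.diff(np.asarray(prices, dtype=float))
          gains = np.where(deltas > 0, deltas, 0.0)
          losses = np.where(deltas < 0, -deltas, 0.0)
          avg_gain, avg_loss = gains[:period].mean(), losses[:period].mean()
          out = []
          for g, l in zip(gains[period:], losses[period:]):
              avg_gain = (avg_gain * (period - 1) + g) / period   # Wilder smoothing
              avg_loss = (avg_loss * (period - 1) + l) / period
              rs = avg_gain / avg_loss if avg_loss > 0 else np.inf
              out.append(100.0 - 100.0 / (1.0 + rs))
          return np.array(out)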
  • 130
    Publication Date: 2019
    Description: In recent years, almost all of the top-performing object detection networks have used CNN (convolutional neural network) features, and state-of-the-art object detection depends on them. In this work, we add feature fusion to the object detection network to obtain a better CNN feature, one that combines deep but semantically strong features with shallow but high-resolution features, thus improving performance on small objects. An attention mechanism is also applied to our object detection network, AF R-CNN (attention mechanism and convolution feature fusion based object detection), to enhance the impact of significant features and weaken background interference. AF R-CNN is a single end-to-end network. We choose the pre-trained VGG-16 network to extract CNN features. Our detection network is trained on the PASCAL VOC 2007 and 2012 datasets. Empirical evaluation on the PASCAL VOC 2007 dataset demonstrates the effectiveness of our approach: AF R-CNN achieves an object detection accuracy of 75.9% on PASCAL VOC 2007, six points higher than Faster R-CNN.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 131
    Publication Date: 2018
    Description: In this paper, a novel approach to facial expression recognition based on the discrete separable shearlet transform (DSST) and normalized mutual information feature selection is proposed. The approach can be divided into five steps. First, all test and training images are preprocessed. Second, the DSST is applied to the preprocessed facial expression images, and all the transformation coefficients are obtained as the original feature set. Third, an improved normalized mutual information feature selection is proposed to find the optimal feature subset of the original feature set, so that the key classification information of the original data is retained. Fourth, the dimensionality of the selected feature space is further reduced by employing linear discriminant analysis. Finally, a support vector machine is used to recognize the expressions. Experimental verification was carried out on four open facial expression databases. The results show that this method can not only improve the recognition rate of facial expressions, but also significantly reduce the computational complexity and improve the system efficiency.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 132
    Publication Date: 2018
    Description: Augmented reality (AR) combines information from databases with information derived directly through the human senses or from different sensors. AR overlays, organically, in real time, and dynamically, computer-generated virtual images and other information on a real environment that the observer can see; the virtual information changes with the movement of the observer, as if it truly existed in the real world. The convergence of geographic information systems (GIS), web mapping, mobile technology, and AR implies the emergence of location-based systems (LBS) and, in turn, the diminishing use of traditional analog maps by smartphone users. A review of current achievements in this field, set against the wider trends in the use of AR, also shows that encoding certain data and metadata on an image or a map (in both digital and analogue form) permits, inter alia, the inclusion of a paper map or analogue image in the chain of digital device use. Some solutions, remarks, and comments concerning the functioning of the digitally augmented (printed) map and its digital counterparts within the information society are presented.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 133
    Publication Date: 2018
    Description: Convolutional Neural Networks (CNNs) are brain-inspired computational models designed to recognize patterns. Recent advances demonstrate that CNNs are able to achieve, and often exceed, human capabilities in many application domains. With several million parameters, even the simplest CNN has a large model size. This characteristic is a serious concern for deployment on resource-constrained embedded systems, where compression stages are needed to meet stringent hardware constraints. In this paper, we introduce a novel accuracy-driven compressive training algorithm. It consists of a two-stage flow: first, layers are sorted by means of heuristic rules according to their significance; second, a modified stochastic gradient descent optimization is applied to the less significant layers such that their representation is collapsed into a constrained subspace. Experimental results demonstrate that our approach achieves remarkable compression rates with low accuracy loss (<1%).
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 134
    Publication Date: 2018
    Description: We propose a new framework to improve the engagement of medical practitioners in a planned e-training platform for the successful identification and effective management of presumptive cases of tuberculosis (TB) in Pakistan. Our work is aligned with the World Health Organization’s TB-DOTS (Directly Observed Treatment Short-course) strategy for promoting the effective management of tuberculosis. We start by presenting previous work done at Mercy Corps Pakistan for training medical practitioners, then present the results of a recent survey we administered to a random sample of medical practitioners in Pakistan to learn about their requirements and readiness to embrace a new e-training platform and methodology. Informed by feedback from the survey, we formulated a detailed requirement analysis of the five key learning areas (or phases) that we believe are fundamental to the success of a TB e-training platform. Moreover, survey results revealed that an on-spot, on-demand, and competency-based learning tool can potentially improve the engagement of medical practitioners in the process. Building on the insights gained from the survey, we finally describe our initial UX (user experience) prototypes for phase 1, which corresponds to the identification of presumptive tuberculosis cases.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 135
    Publication Date: 2018
    Description: Discriminative correlation filters (DCFs) have been shown to perform superiorly in visual object tracking. However, visual tracking is still challenging when the target objects undergo complex scenarios such as occlusion, deformation, scale changes and illumination changes. In this paper, we utilize the hierarchical features of convolutional neural networks (CNNs) and learn a spatial-temporal context correlation filter on convolutional layers. Then, the translation is estimated by fusing the response score of the filters on the three convolutional layers. In terms of scale estimation, we learn a discriminative correlation filter to estimate scale from the best confidence results. Furthermore, we proposed a re-detection activation discrimination method to improve the robustness of visual tracking in the case of tracking failure and an adaptive model update method to reduce tracking drift caused by noisy updates. We evaluate the proposed tracker with DCFs and deep features on OTB benchmark datasets. The tracking results demonstrated that the proposed algorithm is superior to several state-of-the-art DCF methods in terms of accuracy and robustness.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 136
    Publication Date: 2018
    Description: The traveling-salesman problem can be regarded as an NP-hard problem. To better approximate the optimal solution, many heuristic algorithms, such as simulated annealing, ant-colony optimization, tabu search, and genetic algorithms, have been used. However, these algorithms either fall easily into local optima or show slow or poor convergence. This paper proposes a new algorithm based on simulated annealing and gene-expression programming to better solve the problem. In the algorithm, we use simulated annealing to increase the diversity of the Gene Expression Programming (GEP) population and to improve the ability of global search. Comparative experiments on six benchmark instances show that the proposed algorithm outperforms other well-known heuristic algorithms in terms of the best solution, the worst solution, the running time of the algorithm, the difference rate between the best solution and the known optimal solution, and the convergence speed. (A minimal simulated-annealing sketch follows this record.)
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
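    The simulated-annealing ingredient referred to above follows the usual Metropolis acceptance rule; the sketch below applies it to a plain 2-opt neighborhood on a TSP tour. The cooling schedule and move type are generic choices, not the paper's GEP hybrid.

      import math, random

      def tour_length(tour, dist):
          return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

      def anneal(dist, t0=100.0, cooling=0.995, steps=20000):
          n = len(dist)
          tour = list(range(n)); random.shuffle(tour)
          cur_len = tour_length(tour, dist)
          best, best_len, t = tour[:], cur_len, t0
          for _ in range(steps):
              i, j = sorted(random.sample(range(n), 2))
              cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]       # 2-opt reversal
              delta = tour_length(cand, dist) - cur_len
              if delta < 0 or random.random() < math.exp(-delta / t):    # Metropolis rule
                  tour, cur_len = cand, cur_len + delta
                  if cur_len < best_len:
                      best, best_len = tour[:], cur_len
              t *= cooling                                               # geometric cooling
          return best, best_len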
  • 137
    Publication Date: 2018
    Description: Information management is a common paradigm in modern decision-making. A wide range of decision-making techniques have been proposed in the literature to model complex business processes. In this Special Issue, 16 selected and peer-reviewed original research articles contribute to business information management in various current real-world problems by proposing crisp or uncertain multiple-criteria decision-making (MCDM) models and techniques, mostly including multi-attribute decision-making (MADM) approaches in addition to a single paper proposing an interactive multi-objective decision-making (MODM) approach. The papers are mainly concentrated in three application areas: supplier selection and rational order allocation, the evaluation and selection of goods or facilities, and personnel selection/partner selection. A number of new approaches are proposed that are expected to attract great interest from the research community.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 138
    Publication Date: 2018
    Description: The amount of information currently generated in the world has been increasing exponentially, raising the question of whether all acquired data is relevant for the learning algorithm process. If a subset of the data does not bring enough innovation, data-selection strategies can be employed to reduce the computational complexity cost and, in many cases, improve the estimation accuracy. In this paper, we explore some adaptive filtering algorithms whose characteristic features are their fast convergence and data selection. These algorithms incorporate a prescribed data-selection strategy and are compared in distinct applications environments. The simulation results include both synthetic and real data.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 139
    Publication Date: 2018
    Description: Path planning of unmanned aerial vehicles (UAVs) in threatening and adversarial areas is a constrained nonlinear optimization problem that must take a great number of static and dynamic constraints into account. Quantum-behaved pigeon-inspired optimization (QPIO) has been widely applied to such nonlinear problems. However, conventional QPIO suffers from slow global convergence and entrapment in local optima. In order to solve these problems, an improved QPIO algorithm, adaptive operator QPIO, is proposed in this paper. Firstly, a new initialization process based on the logistic mapping method is introduced to generate the initial population of the pigeon swarm. After that, to improve the performance of the map and compass operation, the factor parameter is adaptively updated in each iteration, which balances the ability for global and local search. In the final landmark operation, a gradually decreasing pigeon population-updating strategy is introduced to prevent premature convergence and local optima. Finally, a demonstration of the proposed algorithm on the UAV path planning problem is presented, and the comparison results indicate that the performance of our algorithm is better than that of particle swarm optimization (PSO), pigeon-inspired optimization (PIO), and its variants, in terms of convergence and accuracy. (A generic logistic-map initialization sketch follows this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
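    The logistic-map initialization mentioned above is a common chaotic-sequence trick for spreading an initial population over the search space; a generic version is sketched below. The control parameter r = 4 and the seeding and bound handling are standard choices, not necessarily the paper's.

      import numpy as np

      def logistic_map_init(pop_size, dim, lower, upper, r=4.0):
          """Chaotic initialization: iterate x_{k+1} = r * x_k * (1 - x_k) per dimension."""
          lower, upper = np.asarray(lower, float), np.asarray(upper, float)
          rng = np.random.default_rng(42)
          x = rng.uniform(0.05, 0.95, dim)          # seeds away from the map's fixed points
          pop = np.empty((pop_size, dim))
          for i in range(pop_size):
              x = r * x * (1.0 - x)                 # logistic map step, stays in (0, 1)
              pop[i] = lower + x * (upper - lower)  # scale to the search bounds
          return pop

      swarm = logistic_map_init(30, 3, [-10, -10, -10], [10, 10, 10])   # e.g. 30 pigeons in 3-D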
  • 140
    Publication Date: 2018
    Description: With the explosive growth of product reviews, review rating prediction has become an important research topic which has a wide range of applications. The existing review rating prediction methods use a unified model to perform rating prediction on reviews published by different users, ignoring the differences of users within these reviews. Constructing a separate personalized model for each user to capture the user’s personalized sentiment expression is an effective attempt to improve the performance of the review rating prediction. The user-personalized sentiment information can be obtained not only by the review text but also by the user-item rating matrix. Therefore, we propose a user-personalized review rating prediction method by integrating the review text and user-item rating matrix information. In our approach, each user has a personalized review rating prediction model, which is decomposed into two components, one part is based on review text and the other is based on user-item rating matrix. Through extensive experiments on Yelp and Douban datasets, we validate that our methods can significantly outperform the state-of-the-art methods.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 141
    Publication Date: 2018
    Description: Freight transportation is becoming an increasingly critical activity for enterprises in a global world. Moreover, the distribution activities have a non-negligible impact on the environment, as well as on the citizens’ welfare. The classical vehicle routing problem (VRP) aims at designing routes that minimize the cost of serving customers using a given set of capacitated vehicles. Some VRP variants consider traveling times, either in the objective function (e.g., including the goal of minimizing total traveling time or designing balanced routes) or as constraints (e.g., the setting of time windows or a maximum time per route). Typically, the traveling time between two customers or between one customer and the depot is assumed to be both known in advance and static. However, in real life, there are plenty of factors (predictable or not) that may affect these traveling times, e.g., traffic jams, accidents, road works, or even the weather. In this work, we analyze the VRP with dynamic traveling times. Our work assumes not only that these inputs are dynamic in nature, but also that they are a function of the structure of the emerging routing plan. In other words, these traveling times need to be dynamically re-evaluated as the solution is being constructed. In order to solve this dynamic optimization problem, a learnheuristic-based approach is proposed. Our approach integrates statistical learning techniques within a metaheuristic framework. A number of computational experiments are carried out in order to illustrate our approach and discuss its effectiveness.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 142
    Publication Date: 2018
    Description: Business innovation is a process that requires creativity, and benefits from extensive collaboration. Currently, computational support in creativity processes is low, but modern techniques would allow these processes to be sped up. In this context, we provide such a computational support with software for business innovation design that uses computational creativity techniques. Furthermore, the software enables a gamified process to increase user engagement and collaboration, which mimics evolutionary methods, relying on a voting mechanism. The software includes a business innovation ontology representing the domain knowledge that is used to generate and select a set of diverse preliminary representations of business ideas. Indeed, the most promising for novelty and potential impact are identified to ignite a business innovation game where team members collaborate to elaborate new innovation ideas based on those inputs until convergence to a shortlist of business model proposals. The main features of the approach are illustrated by means of a running example concerning innovative services for smart cities.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 143
    Publication Date: 2019
    Description: Link prediction is the task of predicting whether there is a link between two nodes in a network. Traditional link prediction methods that assume handcrafted features (such as common neighbors) as the link’s formation mechanism are not universal. Other popular methods tend to learn the link’s representation, but they cannot represent the link fully. In this paper, we propose the Edge-Nodes Representation Neural Machine (ENRNM), a novel method which learns abundant topological features from the network as the link’s representation to promote the formation of the link. The ENRNM learns the link’s formation mechanism by combining the representation of the edge with the representations of the nodes on its two sides to form the link’s full representation. To predict the link’s existence, we train a fully connected neural network which can learn meaningful and abundant patterns. We find that the features of the edge and of the two nodes are equally important for link formation. Comprehensive experiments are conducted on eight networks; the results demonstrate that ENRNM not only exceeds many state-of-the-art link prediction methods but also performs very well on diverse networks with different structures and characteristics. A minimal sketch of the general idea follows this entry.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
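    A minimal sketch of the general idea described above, not the ENRNM architecture itself: an edge representation is concatenated with the representations of its two endpoint nodes and fed to a fully connected classifier. The embeddings here are random stand-ins for whatever topological features are learned from the network.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_nodes, node_dim, edge_dim = 50, 16, 8

# Stand-in embeddings; in the paper these are learned topological features.
node_emb = rng.normal(size=(n_nodes, node_dim))

def edge_emb(u, v):
    return rng.normal(size=edge_dim)  # hypothetical edge representation

def link_representation(u, v):
    # Full link representation = edge features + features of both endpoints.
    return np.concatenate([edge_emb(u, v), node_emb[u], node_emb[v]])

# Toy training data: positive (existing) and negative (absent) links.
pos = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]
neg = [(i, (i + 7) % n_nodes) for i in range(n_nodes)]
X = np.array([link_representation(u, v) for u, v in pos + neg])
y = np.array([1] * len(pos) + [0] * len(neg))

# Fully connected neural network predicting link existence.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X, y)
print("train accuracy:", clf.score(X, y))
```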
  • 144
    Publication Date: 2018
    Description: In this study, fifth-order and sixth-order diagonally implicit Runge–Kutta type (DIRKT) techniques for solving fourth-order ordinary differential equations (ODEs) are derived; they are denoted DIRKT5 and DIRKT6, respectively. The first method has three and the other has four identical nonzero diagonal elements. A set of test problems is used to validate the methods, and the numerical results show that the proposed methods are more efficient, in terms of accuracy and number of function evaluations, than existing implicit Runge–Kutta (RK) methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 145
    Publication Date: 2018
    Description: A triple band-notched ultrawideband (UWB) antenna is presented to avoid interference from services operating within the UWB band, such as WLAN, WiMAX and X-band satellite systems. An arc H-shaped slot on the radiating patch creates the low-frequency notched band, while the other two notched bands are formed by cutting narrow slots in the ground plane. The presented antenna can operate efficiently over the ultrawide band and suppress interference from three different kinds of narrowband communication systems. The simulation and measurement results show that the antenna has an excellent band-notched function in the reject bands and an almost omnidirectional radiation pattern in the passband.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 146
    Publication Date: 2018
    Description: This paper presents a comparison among the bee colony optimization (BCO), differential evolution (DE), and harmony search (HS) algorithms. In addition, for each algorithm, a type-1 fuzzy logic system (T1FLS) for the dynamic modification of its main parameters is presented. The dynamic adjustment of the main parameters of each algorithm through fuzzy systems aims at enhancing the performance of the corresponding algorithm. Each algorithm (modified and original versions) is analyzed and compared based on the optimal design of fuzzy systems for benchmark control problems, especially fuzzy controller design. Simulation results provide evidence that the FDE algorithm outperforms the FBCO and FHS algorithms in the optimization of fuzzy controllers. It is statistically demonstrated that better (lower) errors are obtained when the fuzzy systems are used to enhance each proposed algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 147
    Publication Date: 2018
    Description: E-commerce is increasingly becoming the main channel for selling goods to the mass market. This has led to a growing interest in algorithms and techniques able to predict products’ future prices, since they make it possible to build smart systems that improve quality of life by suggesting more affordable goods and services. The joint use of time series, reputation, and sentiment analysis clearly represents one important approach to this research issue. In this paper we present Price Probe, a suite of software tools developed to forecast product prices. Its primary aim is to predict the future price trend of products by generating a customized forecast through the autoregressive integrated moving average (ARIMA) model. We evaluated the effectiveness of the proposed approach on one of the biggest e-commerce infrastructures in the world: Amazon. We used specific APIs and dedicated crawlers to extract and collect information about products and their prices over time; moreover, we extracted information from social media and Google Trends, which we used as exogenous features for the ARIMA model. We fine-tuned ARIMA’s parameters, tried different combinations of the exogenous features, and observed through experimental analysis that the presence of Google Trends information significantly improved the predictions. A minimal sketch of an ARIMA model with exogenous regressors follows this entry.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
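    A minimal sketch, on synthetic data, of fitting an ARIMA-type model with an exogenous regressor (e.g., a Google Trends series) using statsmodels. The order (1, 1, 1) and the variable names are illustrative, not the parameters estimated in the paper.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic stand-ins for a product price series and a Google Trends series.
rng = np.random.default_rng(1)
n = 200
trends = pd.Series(50 + 10 * np.sin(np.arange(n) / 10) + rng.normal(0, 1, n))
price = pd.Series(100 + 0.2 * trends + rng.normal(0, 1, n).cumsum())

# ARIMA(1,1,1) with the exogenous regressor; the order is chosen for illustration only.
model = SARIMAX(price, exog=trends, order=(1, 1, 1))
fit = model.fit(disp=False)

# Forecast 10 steps ahead, supplying (here: constant) future exogenous values.
future_trends = np.full((10, 1), trends.iloc[-1])
forecast = fit.forecast(steps=10, exog=future_trends)
print(forecast.round(2))
```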
  • 148
    Publication Date: 2018
    Description: In this paper, we present a review of recent developments in artificial intelligence (AI) towards the possibility of an artificial intelligence equal to that of human intelligence. AI technology has always shown a stepwise increase in its capacity and complexity. The last step took place several years ago with the increased progress in deep neural network technology. Each such step goes hand in hand with our understanding of ourselves and our understanding of human cognition. Indeed, AI has always been about the question of understanding human nature. AI percolates into our lives, changing our environment. We believe that the next few steps in AI technology, and in our understanding of human behavior, will bring about much more powerful machines that are flexible enough to resemble human behavior. In this context, two research fields stand out: Artificial Social Intelligence (ASI) and Artificial General Intelligence (AGI). The authors also allude to one of the main challenges for AI, embodied cognition, and explain how it can be viewed as an opportunity for further progress in AI research.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 149
    Publication Date: 2018
    Description: The fifth generation (5G) of wireless communication systems is considered the key technology for enabling a wide range of application scenarios and the effective spread of the smart city concept. Vertical business use cases, specifically designed for the future 5G city, will have a strong economic and social impact. For this reason, ongoing 5G field trials have to test the newly deployed technologies as well as the capability of 5G to create a new digital economy. This paper describes the 5G field trial environment that was launched in Italy at the end of 2017. The aim is to evaluate the capability of the 5G network to support innovative services, with reference to suitably designed key performance indicators, and to evaluate the opportunities offered by these services. Vertical business use cases designed for the future 5G city are under implementation and will be evaluated. In particular, the paper provides a detailed description of the deployment of an actual, complete, integrated 5G network. It shows that 5G is an effective enabling technology for a wide range of vertical businesses and use cases, since its flexibility allows it to satisfy the completely different performance requirements of real services. Some preliminary results, obtained during the first phase, are presented for a smart mobility scenario.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 150
    Publication Date: 2018
    Description: LoRa (along with its upper-layer definition, LoRaWAN) is one of the most promising Low Power Wide Area Network (LPWAN) technologies for implementing Internet of Things (IoT)-based applications. Although it is a popular technology, several works in the literature have revealed vulnerabilities and risks regarding the security of LoRaWAN v1.0 (the official first specification draft). The LoRa Alliance has built upon these findings and introduced several improvements in the security and architecture of LoRa. These efforts resulted in LoRaWAN v1.1, released on 11 October 2017. This work aims at reviewing and clarifying the security aspects of LoRaWAN v1.1. Following ETSI guidelines, we provide a comprehensive security risk analysis of the protocol and discuss several remedies to the security risks described. A threat catalog is presented, along with discussions and analysis of the scale, impact, and likelihood of each threat. To the best of the authors’ knowledge, this work is one of the first of its kind, providing a detailed security risk analysis of the latest version of LoRaWAN. Our analysis highlights important practical threats, such as end-device physical capture, rogue gateways, and self-replay, which require particular attention from developers and organizations implementing LoRa networks.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 151
    Publication Date: 2018
    Description: The combination of adaptive filters is an effective approach to improve filtering performance. In this paper, we investigate the performance of an adaptive combined scheme between two adaptive multiple-input single-output (MISO) filters, which can be easily extended to the case of multiple outputs. In order to generalize the analysis, we consider the multichannel affine projection algorithm (APA) to update the coefficients of the MISO filters, which increases the possibility of exploiting the capabilities of the filtering scheme. Using energy conservation relations, we derive the theoretical steady-state behavior of the proposed adaptive combination scheme. Such analysis entails some further theoretical insights with respect to the single-channel combination scheme. Simulation results prove both the validity of the theoretical steady-state analysis and the effectiveness of the proposed combined scheme. A minimal sketch of a combination of adaptive filters follows this entry.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
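    A minimal sketch of the combination idea on a single-channel system identification task: two NLMS filters with different step sizes are mixed through an adaptively learned convex weight. This is the standard convex-combination scheme, not the multichannel APA combination analyzed in the paper; the system, signals, and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L = 5000, 16
w_true = rng.normal(size=L)                       # unknown system to identify
x = rng.normal(size=N)                            # input signal
d = np.convolve(x, w_true, mode="full")[:N] + 0.01 * rng.normal(size=N)

w1, w2 = np.zeros(L), np.zeros(L)                 # fast and slow component filters
mu1, mu2, mu_a, eps = 0.5, 0.05, 100.0, 1e-6
a = 0.0                                           # mixing parameter (through a sigmoid)

for n in range(L, N):
    u = x[n - L + 1:n + 1][::-1]                  # regressor, most recent sample first
    y1, y2 = w1 @ u, w2 @ u
    lam = 1.0 / (1.0 + np.exp(-a))                # combination weight in (0, 1)
    y = lam * y1 + (1.0 - lam) * y2
    e, e1, e2 = d[n] - y, d[n] - y1, d[n] - y2
    # NLMS updates of the two component filters.
    w1 += mu1 * e1 * u / (u @ u + eps)
    w2 += mu2 * e2 * u / (u @ u + eps)
    # Stochastic-gradient update of the mixing parameter.
    a = np.clip(a + mu_a * e * (y1 - y2) * lam * (1.0 - lam), -4.0, 4.0)

w_comb = lam * w1 + (1.0 - lam) * w2
print("misalignment (dB):",
      round(10 * np.log10(np.sum((w_true - w_comb) ** 2) / np.sum(w_true ** 2)), 1))
```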
  • 152
    Publication Date: 2018
    Description: Artificial intelligence (AI) and machine learning promise to make major changes to the relationship of people and organizations with technology and information. However, as with any form of information processing, they are subject to the limitations of information linked to the way in which information evolves in information ecosystems. These limitations are caused by the combinatorial challenges associated with information processing, and by the tradeoffs driven by selection pressures. Analysis of the limitations explains some current difficulties with AI and machine learning and identifies the principles required to resolve the limitations when implementing AI and machine learning in organizations. Applying the same type of analysis to artificial general intelligence (AGI) highlights some key theoretical difficulties and gives some indications about the challenges of resolving them.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 153
    Publication Date: 2018
    Description: A Smart Home is characterized by the presence of a huge number of small, low-power devices, along with more classical devices. According to the Internet of Things (IoT) paradigm, all of them are expected to be always connected to the Internet in order to provide enhanced services. In this scenario, an attacker can undermine both the network security and the user’s security/privacy. Traditional security measures are not sufficient, because they are too difficult to set up and are either too weak to effectively protect the user or too limiting for the effectiveness of the new services. The paper suggests dynamically adapting the security level of the smart home network according to the user-perceived risk level, an approach we have called network sentiment analysis. The security level is not fixed by a central system (usually the Internet Service Provider) but can be changed with the user’s cooperation. The security of the smart home network is improved by distributed firewalls and Intrusion Detection Systems on both the smart home side and the Internet Service Provider side. These two parts must cooperate and integrate their actions to react dynamically to new and ongoing threats. Moreover, the detected level of network sentiment can be propagated to nearby home networks (e.g., the smart home networks of the apartments inside a building) to increase or decrease their level of security, thus creating a true in-line Intrusion Prevention System (IPS). The paper also presents a test bed for the Smart Home to detect and counteract different attacks against the IoT sensors, Wi-Fi and Ethernet connections.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 154
    Publication Date: 2018
    Description: Parallel reservoir simulation is an important approach to solving real-time reservoir management problems. Recently, there has been a trend of using a graphics processing unit (GPU) to parallelize reservoir simulations. Current GPU-aided reservoir simulations focus on the compute unified device architecture (CUDA). Nevertheless, CUDA is not functionally portable across devices and requires a large amount of code. Meanwhile, domain decomposition is not well exploited for GPU-based reservoir simulations. In order to address these problems, we propose a parallel method with OpenACC to accelerate the serial code and reduce the time and effort needed to port an application to the GPU. Furthermore, GPU-aided domain decomposition is developed to increase the efficiency of reservoir simulation. The experimental results indicate that (1) the proposed GPU-aided approach can outperform the CPU-based one by up to about two times, while, with the help of OpenACC, the workload of porting the code was reduced significantly, by about 22 percent of the source code; and (2) the domain decomposition method can further improve the execution efficiency by up to 1.7×. The proposed parallel reservoir simulation method is an efficient tool to accelerate reservoir simulation.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 155
    Publication Date: 2018
    Description: The search for efficient methods and procedures to optimize experimental designs is a vital process in field trials that is often challenged by computational bottlenecks. Most existing methods ignore the presence of some form of correlations in the data to simplify the optimization process at the design stage. This study explores several algorithms for improving field experimental designs using a linear mixed models statistical framework adjusting for both spatial and genetic correlations based on A- and D-optimality criteria. Relative design efficiencies are estimated for an array of algorithms including pairwise swap, genetic neighborhood, and simulated annealing and evaluated with varying levels of heritabilities, spatial and genetic correlations. Initial randomized complete block designs were generated using a stochastic procedure and can also be imported directly from other design software. Results showed that at a spatial correlation of 0.6 and a heritability of 0.3, under the A-optimality criterion, both simulated annealing and simple pairwise algorithms achieved the highest design efficiencies of 7.4 % among genetically unrelated individuals, implying a reduction in average variance of the random treatment effects by 7.4 % when the algorithm was iterated 5000 times. In contrast, results under D-optimality criterion indicated that simulated annealing had the lowest design efficiency. The simple pairwise algorithm consistently maintained highest design efficiencies in all evaluated conditions. Design efficiencies for experiments with full-sib families decreased with increasing heritability. The number of successful swaps appeared to decrease with increasing heritability and were highest for both simulated annealing and simple pairwise algorithms, and lowest for genetic neighborhood algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 156
    Publication Date: 2018
    Description: Multi-valued neutrosophic sets (MVNSs) consider the truth-membership, indeterminacy-membership, and falsity-membership simultaneously, which can more accurately express the preference information of decision-makers. In this paper, the normalized multi-valued neutrosophic distance measure is developed firstly and the corresponding properties are investigated as well. Secondly, the normalized multi-valued neutrosophic distance difference is defined and the corresponding partial ordering relation is discussed. Thirdly, based on the developed distances and comparison method, an extended multi-valued neutrosophic QUALItative FLEXible multiple criteria (QUALIFLEX) method is proposed to handle MCDM problems where the weights of criteria are completely unknown. Finally, an example for selection of medical diagnostic plan is provided to demonstrate the proposed method, together with sensitivity analysis and comparison analysis.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 157
    Publication Date: 2018
    Description: The system identification problem becomes more challenging when the parameter space increases. Recently, several works have focused on the identification of bilinear forms, which are related to the impulse responses of a spatiotemporal model, in the context of a multiple-input/single-output system. In this framework, the problem was addressed in terms of the Wiener filter and different basic adaptive algorithms. This paper studies two types of algorithms tailored for the identification of such bilinear forms, i.e., the Kalman filter (along with its simplified version) and an optimized least-mean-square (LMS) algorithm. Also, a comparison between them is performed, which shows interesting similarities. In addition to the mathematical derivation of the algorithms, we also provide extensive experimental results, which support the theoretical findings and indicate the good performance of the proposed solutions.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 158
    Publication Date: 2018
    Description: The Hammer prototype is a query engine for corpora of Open Data that provides users with the concept of blind querying. Since data sets published on Open Data portals are heterogeneous, users wishing to find interesting data sets are blind: queries cannot be fully specified, as in the case of databases. Consequently, the query engine is responsible for rewriting and adapting the blind query to the actual data sets, by exploiting lexical and semantic similarity. The effectiveness of this approach was discussed in our previous works. In this paper, we report our experience in developing the query engine. In fact, in the very first version of the prototype, we realized that the implementation of the retrieval technique was too slow, even though the corpora contained only a few thousand data sets. We decided to adopt the Map-Reduce paradigm in order to parallelize the query engine and improve performance. We went through several versions of the query engine, based either on the Hadoop framework or on the Spark framework, two very popular frameworks for writing and executing parallel algorithms based on the Map-Reduce paradigm. In this paper, we present our study of the impact of adopting the Map-Reduce approach and its two most famous frameworks to parallelize the Hammer query engine; we discuss various implementations of the query engine, obtained either without significantly rewriting the algorithm or by completely rewriting the algorithm to exploit the high-level abstractions provided by Spark. The experimental campaign we performed shows the benefits provided by each studied solution, with the perspective of moving toward Big Data in the future. The lessons we learned are collected and synthesized into behavioral guidelines for developers approaching the problem of parallelizing algorithms by means of Map-Reduce frameworks. A minimal Spark-based sketch follows this entry.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
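    A minimal sketch, assuming a local Spark installation, of the kind of Map-Reduce-style parallelization discussed above: each data set in a corpus is scored against a blind query in the map phase and the matches are collected on the driver. The similarity function and field names are illustrative placeholders, not the Hammer engine's actual lexical/semantic matching.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("blind-query-sketch").getOrCreate()
sc = spark.sparkContext

# Toy corpus of Open Data set descriptions (placeholders).
corpus = [
    ("ds1", "air quality measurements city sensors"),
    ("ds2", "public transport timetable bus routes"),
    ("ds3", "city air pollution daily report"),
]
query_terms = {"air", "quality", "city"}

def score(record):
    # Map phase: naive lexical overlap as a stand-in for the real
    # lexical/semantic similarity used by the query engine.
    ds_id, text = record
    overlap = len(query_terms & set(text.split()))
    return (ds_id, overlap / len(query_terms))

results = (sc.parallelize(corpus)
             .map(score)                          # parallel scoring
             .filter(lambda kv: kv[1] > 0)        # drop non-matching data sets
             .sortBy(lambda kv: -kv[1])           # best matches first
             .collect())                          # gather results on the driver
print(results)
spark.stop()
```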
  • 159
    Publication Date: 2018
    Description: The flexible job shop scheduling problem (FJSSP) and the multi-row workshop layout problem (MRWLP) are two major focuses in sustainable manufacturing processes. There is a close interaction between them, since the FJSSP provides the material handling information that guides the optimization of the MRWLP, while the layout scheme affects the quality of the scheduling scheme through the transportation time of jobs. However, in traditional methods they are regarded as separate tasks performed sequentially, which ignores this interaction. Therefore, developing effective methods to deal with the multi-objective energy-aware integration of the FJSSP and MRWLP (MEIFM) problem in a sustainable manufacturing system is becoming more and more important. Based on the interaction between the FJSSP and the MRWLP, the MEIFM problem can be formulated as a multi-objective bi-level programming (MOBLP) model. The upper-level model, for the FJSSP, is employed to minimize the makespan and total energy consumption, while the lower-level model, for the MRWLP, is used to minimize the material handling quantity. Because the MEIFM problem is a mixed-integer non-linear programming model, it is difficult to solve with traditional methods. Thus, this paper proposes an improved multi-objective hierarchical genetic algorithm (IMHGA) to solve this model. Finally, the effectiveness of the method is verified through comparative experiments.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 160
    Publication Date: 2018
    Description: Optical flow is defined as the motion field of pixels between two consecutive images. Traditionally, in order to estimate the pixel motion field (or optical flow), an energy model is proposed. This energy model is composed of (i) a data term and (ii) a regularization term: the data term measures the optical flow error and the regularization term imposes spatial smoothness; the classical energy is sketched after this entry. Traditional variational models use a linearization in the data term. This linearized version of the data term fails when the displacement of the object is larger than its own size. Recently, the precision of optical flow methods has been increased through the use of additional information, obtained from correspondences computed between the two images by methods such as SIFT, deep-matching, and exhaustive search. This work presents an empirical study that evaluates different strategies for locating exhaustive correspondences to improve flow estimation. We considered different locations for matching: random locations, uniform locations, and locations of maximum gradient magnitude. Additionally, we tested the combination of large and medium gradients with uniform locations. We evaluated our methodology on the MPI-Sintel database, which represents the state of the art in evaluation databases. Our results on MPI-Sintel show that our proposal outperforms classical methods such as Horn–Schunck, TV-L1, and LDOF, and that our method performs similarly to MDP-Flow.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
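    For reference, the classical variational energy alluded to above (data term plus regularization), in its linearized Horn–Schunck form, can be written as:

```latex
E(u,v) \;=\; \int_{\Omega} \bigl( I_x u + I_y v + I_t \bigr)^2
\;+\; \alpha \bigl( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \bigr)\, d\mathbf{x}
```

Here (u, v) is the flow field, I_x, I_y, I_t are the image derivatives, and α weights the smoothness term; the linearization inside the data term is precisely what fails for displacements larger than the object, motivating the additional correspondences studied in the entry above.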
  • 161
    Publication Date: 2018
    Description: When financial market conditions change, traders adopt different strategies. The traders’ collective behaviour may cause significant changes in the statistical properties of price movements. When this happens, the market is said to have gone through “regime changes”. The purpose of this paper is to characterise what is a “normal market regime” as well as what is an “abnormal market regime”, under observations in Directional Changes (DC). Our study starts with historical data from 10 financial markets. For each market, we focus on a period of time in which significant events could have triggered regime changes. The observations of regime changes in these markets are then positioned in a designed two-dimensional indicator space based on DC. Our results suggest that the normal regimes from different markets share similar statistical characteristics. In other words, with our observations, it is possible to distinguish normal regimes from abnormal regimes. This is significant because, for the first time, we can tell whether a market is in a normal regime by observing the DC indicators in the market. This opens the door for future work to dynamically monitor the market for regime changes. A minimal sketch of directional-change detection follows this entry.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
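    A minimal sketch of how Directional Change (DC) events are typically detected for a given threshold θ: a change is confirmed once the price has moved by at least θ (relative to the last extreme) against the current trend. The paper's two-dimensional indicator space is built on summaries of such events, which are not reproduced here; the initial-trend assumption and the toy series are illustrative.

```python
def directional_changes(prices, theta=0.01):
    """Return (index, direction) pairs where a directional change of at least
    `theta` (relative move from the last extreme price) is confirmed.
    The initial trend is assumed to be 'up', a common simplification."""
    events = []
    trend = "up"
    ext = prices[0]                 # running extreme: max in uptrend, min in downtrend
    for i, p in enumerate(prices[1:], start=1):
        if trend == "up":
            if p > ext:
                ext = p
            elif (ext - p) / ext >= theta:      # downturn confirmed
                events.append((i, "down"))
                trend, ext = "down", p
        else:
            if p < ext:
                ext = p
            elif (p - ext) / ext >= theta:      # upturn confirmed
                events.append((i, "up"))
                trend, ext = "up", p
    return events

# Toy usage on a small synthetic price series with a 2% threshold.
series = [100, 101, 103, 102, 99, 98, 100, 102, 104, 101]
print(directional_changes(series, theta=0.02))
```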
  • 162
    Publication Date: 2018
    Description: Diabetes has become a serious health concern. The use and popularization of blood glucose measurement devices have led to a tremendous improvement in the health of diabetics. Tracking and maintaining traceability between glucose measurements, insulin doses and carbohydrate intake can provide useful information to physicians, health professionals, and patients. This paper presents an information system, called GLUMIS (GLUcose Management Information System), aimed at supporting diabetes management activities. It is made up of two modules, one for glucose prediction and one for data visualization, together with a reasoner to aid users in their treatment. Through integration with glucose measurement devices, it is possible to collect historical data on the treatment. In addition, the integration with a tool called the REALI System allows GLUMIS to also process data on insulin doses and eating habits. Quantitative and qualitative data were collected through an experimental case study involving 10 participants, which demonstrated that the GLUMIS system is feasible. The system was able to discover rules for predicting future values of blood glucose by processing the past history of measurements. It then presented reports that can help diabetics choose the amount of insulin they should take and the amount of carbohydrate they should consume during the day. Rules found using one patient’s measurements were analyzed by a specialist, who found three of them to be useful for improving the patient’s treatment. One such rule was “if glucose before breakfast ∈ [47, 89], then glucose at afternoon break ∈ [160, 306]”. The experimental study and the other verifications associated with the algorithm had a double objective. First, a questionnaire showed that participants viewed the visualizations as easy, or very easy, to understand. Second, it showed that the innovative algorithm applied in the GLUMIS system allows the decision maker to have much more precision and less loss of information than algorithms that require the data to be discretized.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 163
    Publication Date: 2018
    Description: The identification of risks associated with collision for vessels is an important element in maritime safety and management. A vessel collision avoidance system is a topic that has been deeply studied, and it is a specialization in navigation technology. The automatic identification system (AIS) has been used to support navigation, route estimation, collision prediction, and abnormal traffic detection. This article examined the main elements of ship collision, developed a mathematical model for the risk assessment, and simulated a collision assessment based on AIS information, thereby providing meaningful recommendations for crew training and a warning system, in conjunction with the AIS on board.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 164
    Publication Date: 2018
    Description: In order to improve the dynamic quality of traditional sliding mode control for an active suspension system, an optimal sliding mode control (OSMC) based on a genetic algorithm (GA) is proposed. First, the overall structure and control principle of the active suspension system are introduced. Second, the mathematical model of the quarter car active suspension system is established. Third, a sliding mode control (SMC) controller is designed to manipulate the active force to control the active suspension system. Fourth, GA is applied to optimize the weight coefficients of an SMC switching function and the parameters of the control law. Finally, the simulation model is built based on MATLAB/Simulink (version 2014a), and the simulations are performed and analyzed with the proposed control strategy to identify its performance. The simulation results show that the OSMC controller tuned using a GA has better control performance than the traditional SMC controller.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 165
    Publication Date: 2018
    Description: In the context of this research work, we studied the problem of privacy preservation on spatiotemporal databases. In particular, we investigated the k-anonymity of mobile users based on real trajectory data. The k-anonymity set consists of the k nearest neighbors. We constructed a motion vector of the form (x,y,g,v), where x and y are the spatial coordinates, g is the angle direction, and v is the velocity of mobile users, and studied the problem in four-dimensional space. We followed two approaches. The former applied only the k-Nearest Neighbor (k-NN) algorithm on the whole dataset, while the latter combined trajectory clustering, based on K-means, with k-NN; that is, it applied k-NN inside a cluster of mobile users with a similar motion pattern (g,v). We defined a metric, called vulnerability, that measures the rate at which the k-NNs vary. This metric varies from 1/k (high robustness) to 1 (low robustness) and represents the probability of the real identity of a mobile user being discovered by a potential attacker. The aim of this work was to prove that, with high probability, the above rate tends to a number very close to 1/k in the clustering method, which means that the k-anonymity is highly preserved. Through experiments on real spatial datasets, we evaluated the anonymity robustness, the so-called vulnerability, of the proposed method. A minimal sketch of the clustering-plus-k-NN approach follows this entry.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
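    A minimal sketch of the clustering-plus-k-NN approach described above, using scikit-learn on synthetic motion vectors (x, y, g, v): users are first clustered by their (g, v) motion pattern and the k nearest neighbours are then searched only inside the query user's cluster. The cluster count, ranges, and data are illustrative, and the vulnerability metric itself is not reproduced.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
# Synthetic motion vectors (x, y, g, v): position, angle direction, velocity.
M = np.column_stack([
    rng.uniform(0, 1000, 500),           # x
    rng.uniform(0, 1000, 500),           # y
    rng.uniform(0, 2 * np.pi, 500),      # g (direction)
    rng.uniform(0, 30, 500),             # v (velocity)
])

# Cluster users by motion pattern (g, v).
km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(M[:, 2:4])

def k_anonymity_set(query_idx, k=5):
    """k nearest neighbours of a user, searched only inside its motion cluster."""
    members = np.where(km.labels_ == km.labels_[query_idx])[0]
    nn = NearestNeighbors(n_neighbors=min(k + 1, len(members))).fit(M[members])
    _, idx = nn.kneighbors(M[query_idx].reshape(1, -1))
    neighbours = members[idx[0]]
    return neighbours[neighbours != query_idx][:k]

print(k_anonymity_set(0, k=5))
```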
  • 166
    Publication Date: 2018
    Description: To obtain target webpages from a large set of webpages, we propose a Method for Filtering Pages by Similarity Degree based on Dynamic Programming (MFPSDDP). The method relies on one of three proposed “same” relationships between two nodes, so we first give the definitions of these three relationships. The main innovation of MFPSDDP is that it does not need to know the structures of webpages in advance. First, we present the design, which uses a queue and double threads. Then, a dynamic programming algorithm for calculating the length of the longest common subsequence and a formula for calculating similarity are proposed; a minimal sketch of this computation follows this entry. Further, for obtaining detailed-information webpages from 200,000 webpages downloaded from the well-known website “www.jd.com”, we choose the Completely Same Relationship (CSR) and set the similarity threshold to 0.2. The Recall Ratio (RR) of MFPSDDP ranks in the middle of the four filtering methods compared. When the number of webpages filtered is nearly 200,000, the Precision Ratio (PR) of MFPSDDP is the highest of the four filtering methods compared, reaching 85.1%; this is 13.3 percentage points higher than the PR of a Method for Filtering Pages by Containing Strings (MFPCS).
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
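    A minimal sketch of the dynamic-programming computation of the longest common subsequence (LCS) length, together with one plausible similarity normalization; the paper's own similarity formula is not given in the entry, so the normalization and the toy tag sequences below are only illustrative, while the 0.2 threshold is the one reported above.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence via dynamic programming."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def similarity(a, b):
    """One plausible normalization: LCS length over the longer sequence."""
    return lcs_length(a, b) / max(len(a), len(b)) if a and b else 0.0

# Toy usage on two simplified tag sequences extracted from webpages.
p1 = ["html", "body", "div", "span", "img", "div"]
p2 = ["html", "body", "div", "img", "div", "p"]
sim = similarity(p1, p2)
print(round(sim, 3), "above threshold" if sim >= 0.2 else "below threshold")
```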
  • 167
    Publication Date: 2018
    Description: Based on the advantages of a non-subsampled shearlet transform (NSST) in image processing and the characteristics of remote sensing imagery, NSST was applied to enhance blurred images. In the NSST transform domain, directional information measurement can highlight textural features of an image edge and reduce image noise. Therefore, NSST was applied to the detailed enhancement of high-frequency sub-band coefficients. Based on the characteristics of a low-frequency image, the retinex method was used to enhance low-frequency images. Then, an NSST inverse transformation was performed on the enhanced low- and high-frequency coefficients to obtain an enhanced image. Computer simulation experiments showed that when compared with a traditional image enhancement strategy, the method proposed in this paper can enrich the details of the image and enhance the visual effect of the image. Compared with other algorithms listed in this paper, the brightness, contrast, edge strength, and information entropy of the enhanced image by this method are improved. In addition, in the experiment of noisy images, various objective evaluation indices show that the method in this paper enhances the image with the least noise information, which further indicates that the method can suppress noise while improving the image quality, and has a certain level of effectiveness and practicability.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 168
    Publication Date: 2019
    Description: Aiming at granting wide access to their contents, online information providers often choose not to have registered users, and therefore must give up personalization. In this paper, we focus on the case of non-personalized news recommender systems, and explore persuasive techniques that can, nonetheless, be used to enhance recommendation presentation, with the aim of capturing the user’s interest on suggested items leveraging the way news is perceived. We present the results of two evaluations “in the wild”, carried out in the context of a real online magazine and based on data from 16,134 and 20,933 user sessions, respectively, where we empirically assessed the effectiveness of persuasion strategies which exploit logical fallacies and other techniques. Logical fallacies are inferential schemes known since antiquity that, even if formally invalid, appear as plausible and are therefore psychologically persuasive. In particular, our evaluations allowed us to compare three persuasive scenarios based on the Argumentum Ad Populum fallacy, on a modified version of the Argumentum ad Populum fallacy (Group-Ad Populum), and on no fallacy (neutral condition), respectively. Moreover, we studied the effects of the Accent Fallacy (in its visual variant), and of positive vs. negative Framing.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 169
    Publication Date: 2019
    Description: Recently, with the development of big data and 5G networks, the number of intelligent mobile devices has increased dramatically, therefore the data that needs to be transmitted and processed in the networks has grown exponentially. It is difficult for the end-to-end communication mechanism proposed by traditional routing algorithms to implement the massive data transmission between mobile devices. Consequently, opportunistic social networks propose that the effective data transmission process could be implemented by selecting appropriate relay nodes. At present, most existing routing algorithms find suitable next-hop nodes by comparing the similarity degree between nodes. However, when evaluating the similarity between two mobile nodes, these routing algorithms either consider the mobility similarity between nodes, or only consider the social similarity between nodes. To improve the data dissemination environment, this paper proposes an effective data transmission strategy (MSSN) utilizing mobile and social similarities in opportunistic social networks. In our proposed strategy, we first calculate the mobile similarity between neighbor nodes and destination, set a mobile similarity threshold, and compute the social similarity between the nodes whose mobile similarity is greater than the threshold. The nodes with high mobile similarity degree to the destination node are the reliable relay nodes. After simulation experiments and comparison with other existing opportunistic social networks algorithms, the results show that the delivery ratio in the proposed algorithm is 0.80 on average, the average end-to-end delay is 23.1% lower than the FCNS algorithm (A fuzzy routing-forwarding algorithm exploiting comprehensive node similarity in opportunistic social networks), and the overhead on average is 14.9% lower than the Effective Information Transmission Based on Socialization Nodes (EIMST) algorithm.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 170
    Publication Date: 2019
    Description: Communication languages convey information through the use of a set of symbols or units. Typically, this unit is the word. When developing language technologies, as words in a language do not have the same prior probability, there may not be sufficient training data for each word to model. Furthermore, the training data may not cover all possible words in the language. Due to these data sparsity and word coverage issues, language technologies employ modeling of subword units or subunits, which are based on prior linguistic knowledge. For instance, the development of speech technologies such as automatic speech recognition systems presumes that there exists a phonetic dictionary or at least a writing system for the target language. Such knowledge is not available for all languages in the world. In that direction, this article develops a hidden Markov model-based abstract methodology to extract subword units given only pairwise comparisons between utterances (or realizations of words in the mode of communication), i.e., whether two utterances correspond to the same word or not. We validate the proposed methodology through investigations on spoken language and sign language. In the case of spoken language, we demonstrate that the proposed methodology can lead to the discovery of a phone set and the development of a phonetic dictionary. In the case of sign language, we demonstrate how hand movement information can be effectively modeled for sign language processing and synthesized back to gain insight about the derived subunits.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 171
    Publication Date: 2019
    Description: Though the self-portrait has been hailed as the defining artistic genre of modernity, there is not yet a good account of what the self-portrait actually is. This paper provides such an account through the lens of document theory and the philosophy of information. In this paper, the self-portrait is conceptualized as a kind of document, more specifically a kind of self-document, to gain insight into the phenomenon. A self-portrait is shown to be a construction, and not just a representation, of oneself. Creating a self-portrait then is a matter of bringing oneself forth over time—constructing oneself, rather than simply depicting oneself. This account provides grounds to consider whether or how the selfie truly is a form of self-portrait, as is often asserted. In the end, it seems that while both are technologies for self-construction, the self-portrait has the capacity for deep self-construction, whereas the selfie is limited to fewer aspects of the self. This prospect leads into an ethical discussion of the changing concept of identity in the digital age.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 172
    Publication Date: 2019
    Description: The conveyor belt is an indispensable piece of conveying equipment for a mine, and its deviation, caused by sticky material on the rollers and uneven load distribution, is the most common failure during operation. In this paper, a real-time conveyor belt deviation detection algorithm based on a multi-scale feature fusion network is proposed, which mainly includes two parts: the feature extraction module and the deviation detection module. The feature extraction module uses a multi-scale feature fusion network structure to fuse low-level features, with rich position and detail information, and high-level features, with stronger semantic information, to improve network detection performance. Depthwise separable convolutions are used to achieve real-time detection. The deviation detection module identifies and monitors the deviation fault by calculating the offset of the conveyor belt. In particular, a new weighted loss function is designed to optimize the network and to improve the detection of the conveyor belt edge. In order to evaluate the effectiveness of the proposed method, the Canny algorithm, FCNs, UNet and Deeplab v3 networks are selected for comparison. The experimental results show that the proposed algorithm achieves 78.92% pixel accuracy (PA) and reaches 13.4 FPS (frames per second) with an error of less than 3.2 mm, outperforming the other four algorithms.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 173
    Publication Date: 2019
    Description: Dependability assessment is one of the most important activities for the analysis of complex systems. Classical analysis techniques of safety, risk, and dependability, like Fault Tree Analysis or Reliability Block Diagrams, are easy to implement, but they estimate inaccurate dependability results due to their simplified hypotheses that assume the components’ malfunctions to be independent from each other and from the system working conditions. Recent contributions within the umbrella of Dynamic Probabilistic Risk Assessment have shown the potential to improve the accuracy of classical dependability analysis methods. Among them, Stochastic Hybrid Fault Tree Automaton (SHyFTA) is a promising methodology because it can combine a Dynamic Fault Tree model with the physics-based deterministic model of a system process, and it can generate dependability metrics along with performance indicators of the physical variables. This paper presents the Stochastic Hybrid Fault Tree Object Oriented (SHyFTOO), a Matlab® software library for the modelling and the resolution of a SHyFTA model. One of the novel features discussed in this contribution is the ease of coupling with a Matlab® Simulink model that facilitates the design of complex system dynamics. To demonstrate the utilization of this software library and the augmented capability of generating further dependability indicators, three different case studies are discussed and solved with a thorough description for the implementation of the corresponding SHyFTA models.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 174
    Publication Date: 2019
    Description: Research and development (R&D) is always oriented towards new discoveries, based on original terms or hypotheses, and its concluding outcomes are often uncertain. The present work focuses on the degree of uncertainty of R&D activities. In fact, uncertainty makes it difficult to quantify the time and resources needed to achieve a final outcome, create a work plan and budget, and finalize the resulting “innovative” products or services that could be transferred or exchanged in a specific market. The present work attempts to indicate the degree of uncertainty of the research activities developed by a set of firms. The method used aims to quantify the five criteria defined by the Frascati Manual. Through the creation of an uncertainty cloud, a cone of uncertainty was defined following an approach based on project management. The evaluation grid was characterized by the decomposition of the different variables into quartiles, which allowed for the detection of the evolution of the project and each of its components. A secondary aim was to observe the degree of development of these industries towards an Industry 4.0 framework.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 175
    Publication Date: 2019
    Description: Twisted Edwards curves have been at the center of attention since their introduction by Bernstein et al. in 2007. The curve ED25519, used for the Edwards-curve Digital Signature Algorithm (EdDSA), provides faster digital signatures than existing schemes without sacrificing security. CURVE25519 is a Montgomery curve that is closely related to ED25519. It provides simple, constant-time, and fast point multiplication, which is used by the key exchange protocol X25519. Software implementations of EdDSA and X25519 are used in many web-based PC and mobile applications. In this paper, we introduce a low-power, low-area FPGA implementation of the ED25519 and CURVE25519 scalar multiplication that is particularly relevant for Internet of Things (IoT) applications. The efficiency of the arithmetic modulo the prime number 2^255 − 19, in particular the modular reduction and modular multiplication, is key to the efficiency of both EdDSA and X25519. To reduce the complexity of the hardware implementation, we propose a high-radix interleaved modular multiplication algorithm. One benefit of this architecture is that it avoids the use of large-integer multipliers relying on FPGA DSP modules. A minimal sketch of interleaved modular multiplication follows this entry.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
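    A minimal Python sketch of interleaved modular multiplication modulo p = 2^255 − 19, processing one radix-2^k digit of the multiplier per iteration with the reduction folded into the loop. The hardware-oriented, high-radix version in the paper avoids the full modular division used here (typically replacing it with estimated quotient digits or conditional subtractions), which is not reproduced.

```python
import secrets

P = 2**255 - 19  # the ED25519 / CURVE25519 field prime

def interleaved_modmul(a, b, p=P, k=4):
    """Interleaved modular multiplication: shift-accumulate one k-bit digit of
    the multiplier per iteration and reduce inside the loop to keep the
    accumulator small."""
    a, b = a % p, b % p
    digits = []
    while b:
        digits.append(b & ((1 << k) - 1))
        b >>= k
    acc = 0
    for d in reversed(digits):           # most significant digit first
        acc = (acc << k) + a * d         # shift accumulator, add partial product
        acc %= p                         # interleaved reduction
    return acc

# Sanity check against Python's built-in big-integer arithmetic.
x, y = secrets.randbelow(P), secrets.randbelow(P)
assert interleaved_modmul(x, y) == (x * y) % P
print("ok")
```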
  • 176
    Publication Date: 2019
    Description: Aimed at the one-to-one certification problem of unsteady state iris at different shooting times, a multi-algorithm parallel integration general model structure is proposed in this paper. The iris in the lightweight constrained state affected by defocusing, deflection, and illumination is taken as the research object, the existing algorithms are combined into the model structure effectively, and a one-to-one certification algorithm for lightweight constrained state unsteady iris was designed based on multi-algorithm integration and maximum trusted decision. In this algorithm, a sufficient number of iris internal feature points from the unstable state texture were extracted as effective iris information through the image processing layer composed of various filtering processing algorithms, thereby eliminating defocused interference. In the feature recognition layer, iris deflection interference was excluded by the improved methods of Gabor and Hamming and Haar and BP for the stable features extracted by the image processing layer, and two certification results were obtained by means of parallel recognition. The correct number of certifications for an algorithm under a certain lighting condition were counted. The method with the most correct number was set as the maximum trusted method under this lighting condition, and the results of the maximum trusted method were taken as the final decision, thereby eliminating the effect of illumination. Experiments using the JLU and CASIA iris libraries under the prerequisites in this paper show that the correct recognition rate of the algorithm can reach a high level of 98% or more, indicating that the algorithm can effectively improve the accuracy of the one-to-one certification of lightweight constrained state unsteady iris. Compared with the latest architecture algorithms, such as CNN and deep learning, the proposed algorithm is more suitable for the prerequisites presented in this paper, which has good environmental inclusiveness and can better improve existing traditional algorithms’ effectiveness through the design of a parallel integration model structure.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 177
    Publication Date: 2019
    Description: Assembly is a very important manufacturing process in the age of Industry 4.0. Aimed at the problems of part identification and assembly inspection in industrial production, this paper proposes a method of assembly inspection based on machine vision and a deep neural network. First, an image acquisition platform is built to collect part and assembly images. We use the Mask R-CNN model to identify and segment the shape in each part image and to obtain the part category and position coordinates in the image. Then, according to the image segmentation results, the area, perimeter, circularity, and Hu invariant moments of the contour are extracted to form the feature vector; a minimal sketch of this stage follows this entry. Finally, an SVM classification model is constructed to identify the assembly defects, with a classification accuracy rate of over 86.5%. The accuracy of the method is verified on an experimental platform. The results show that the method effectively identifies missing and misaligned parts in the assembly and has good robustness.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
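    A minimal sketch of the second stage described above: geometric features (area, perimeter, circularity, Hu moments) are computed from a segmented contour with OpenCV and classified with an SVM. The disk masks and labels below are placeholders standing in for the paper's Mask R-CNN segmentation outputs and real defect annotations.

```python
import cv2
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def contour_features(mask):
    """Area, perimeter, circularity and Hu moments of the largest contour."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    c = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)
    circularity = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
    hu = cv2.HuMoments(cv2.moments(c)).flatten()
    return np.concatenate([[area, perimeter, circularity], hu])

# Placeholder masks standing in for Mask R-CNN segmentation results.
def disk_mask(r, size=64):
    m = np.zeros((size, size), np.uint8)
    cv2.circle(m, (size // 2, size // 2), r, 1, -1)
    return m

X = np.array([contour_features(disk_mask(r)) for r in (8, 10, 12, 20, 22, 24)])
y = np.array([0, 0, 0, 1, 1, 1])          # toy labels: 0 = correct part, 1 = defect

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale")).fit(X, y)
print(clf.predict([contour_features(disk_mask(11)), contour_features(disk_mask(23))]))
```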
  • 178
    Publication Date: 2019
    Description: The development of robotic applications for agricultural environments has several problems which are not present in the robotic systems used for indoor environments. Some of these problems can be solved with an efficient navigation system. In this paper, a new system is introduced to improve the navigation tasks for those robots which operate in agricultural environments. Concretely, the paper focuses on the problem related to the autonomous mapping of agricultural parcels (i.e., an orange grove). The map created by the system will be used to help the robots navigate into the parcel to perform maintenance tasks such as weed removal, harvest, or pest inspection. The proposed system connects to a satellite positioning service to obtain the real coordinates where the robotic system is placed. With these coordinates, the parcel information is downloaded from an online map service in order to autonomously obtain a map of the parcel in a readable format for the robot. Finally, path planning is performed by means of Fast Marching techniques using the robot or a team of two robots. This paper introduces the proof-of-concept and describes all the necessary steps and algorithms to obtain the path planning just from the initial coordinates of the robot.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 179
    Publication Date: 2019
    Description: The characteristic of the satellite repeat shift time can reflect the status of the satellite operation, and it is also one of the key factors in sidereal filtering multipath correction. Although some methods have been developed to calculate the repeat shift time, few efforts have been made to analyze and compare the performance of this feature for GPS (Global Positioning System), BDS (BeiDou System), and Galileo in depth. Hence, three methods for calculating the repeat shift time, namely the broadcast ephemeris method (BEM), the correlation coefficient method (CCM), and the aspect repeat time method (ARTM), are presented and used to compare and analyze the three global systems in depth. The experimental results show that the repeat shift time of each satellite is different. Also, the difference between the maximum and minimum varies across systems: the maximum difference is about 25 s for the BDS IGSO (Inclined Geosynchronous Orbit) satellites and the minimum is merely 10 s for the GPS system. Furthermore, for the same satellite, the shift time calculated by the three methods is almost identical, and the maximum difference is only about 7 s, between the CCM and the ARTM method for the BDS MEO (Medium Earth Orbit) satellite. Although the repeat shift time differs from day to day for the same satellite and the same method, the changes are very small. Moreover, in terms of the STD (Standard Deviation) of the BS (between satellites) and MS (mean shift for the same satellite), the GPS system performs best, the BDS system is intermediate, and Galileo performs slightly worse than GPS and BDS.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 180
    Publication Date: 2019
    Description: The authors wish to make the following corrections to this paper [...]
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 181
    Publication Date: 2019
    Description: In the education process, students face problems with understanding due to the complexity and the necessity of abstract thinking and concepts. More and more educational centres around the world have started to introduce powerful new technology-based tools that help meet the needs of the diverse student population. Over the last several years, virtual reality (VR) has moved from being the purview of gaming to professional development. It plays an important role in the teaching process, providing an interesting and engaging way of acquiring information. What follows is an overview of the big trends, opportunities and concerns associated with VR in education. We present new opportunities in VR and put together the most interesting recent virtual reality applications used in education, in relation to several education areas such as general, engineering, and health-related education. Additionally, this survey contributes by presenting methods for creating scenarios and different approaches for testing and validation. Lastly, we conclude and discuss future directions of VR and its potential to improve the learning experience.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 182
    Publication Date: 2019
    Description: The big data collected from the various sensors installed on board for monitoring the status of ship devices are critical for improving the efficiency and safety of ship operations and reducing the cost of operation and maintenance. However, how to utilize these data is a key issue. The temperature change of the ship propulsion devices can often reflect whether the devices are faulty or not. Therefore, this paper aims to forecast the temperature of the ship propulsion devices by data-driven methods, so that potential faults can be further identified automatically. The proposed forecasting process is composed of preprocessing, feature selection, and prediction, and includes an autoregressive distributed lag (ARDL) time series model, a stepwise regression (SR) model, a neural network (NN) model, and a deep neural network (DNN) model. Finally, the proposed forecasting process is applied to a naval ship, and the results show that the ARDL model has higher accuracy than the other three models.
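    The sketch below illustrates the ARDL idea under assumed data shapes (it is not the paper's pipeline): the device temperature is regressed on its own lags and on lagged exogenous sensor readings.

```python
# ARDL-style one-step-ahead forecast sketch with assumed array shapes.
import numpy as np
from sklearn.linear_model import LinearRegression

def ardl_forecast(temp, exog, p=3, q=3):
    """temp: 1-D array of temperatures; exog: 2-D array (time x sensors)."""
    lag = max(p, q)
    X, y = [], []
    for t in range(lag, len(temp)):
        row = list(temp[t - p:t])                     # autoregressive lags
        row += list(exog[t - q:t].ravel())            # distributed exogenous lags
        X.append(row)
        y.append(temp[t])
    model = LinearRegression().fit(np.array(X), np.array(y))
    last = list(temp[-p:]) + list(exog[-q:].ravel())  # features for the next step
    return model.predict(np.array([last]))[0]
```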
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 183
    Publication Date: 2019
    Description: This paper proposes an adaptive backstepping control algorithm for electric braking systems with electromechanical actuators (EMAs). First, the ideal mathematical model of the EMA is established, and nonlinear factors, such as the deformation of the reduction gear, are analyzed. Subsequently, the actual mathematical model of the EMA is rebuilt by combining the ideal model with the nonlinear factors. To realize high-performance braking pressure control, the backstepping control method is adopted to address the mismatched uncertainties in the electric braking system, and a radial basis function (RBF) neural network is established to estimate the nonlinear functions in the control system. The experimental results indicate that the proposed braking pressure control strategy can improve the servo performance of the electric braking system. In addition, the hardware-in-the-loop (HIL) experimental results show that the proposed EMA controller can satisfy the requirements of aircraft antilock braking systems.
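    The following is a simplified, illustrative sketch of an RBF network approximating an unknown nonlinearity online; the centers, width, and adaptation gain are arbitrary assumptions rather than the paper's design.

```python
# Illustrative RBF approximator with a simple error-driven adaptation law.
import numpy as np

class RBFApproximator:
    def __init__(self, centers, width=1.0, gamma=0.05):
        # Store centers as an (n_basis x state_dim) matrix.
        self.c = np.asarray(centers, dtype=float).reshape(len(centers), -1)
        self.width = width
        self.gamma = gamma                           # adaptation gain (assumed)
        self.w = np.zeros(len(self.c))               # adjustable weights

    def phi(self, x):
        # Gaussian basis functions evaluated at the current state x.
        d = np.linalg.norm(np.atleast_2d(x) - self.c, axis=1)
        return np.exp(-(d ** 2) / (2.0 * self.width ** 2))

    def estimate(self, x):
        # Current estimate of the unknown nonlinearity at x.
        return float(self.w @ self.phi(x))

    def adapt(self, x, error, dt):
        # Gradient-type weight update driven by the tracking error.
        self.w += self.gamma * error * self.phi(x) * dt
```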
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 184
    Publication Date: 2019
    Description: Agile methodologies are now considered on a large scale by software developers for web-oriented projects. However, their benefits and limitations go beyond the conveniences that project managers weigh when choosing them. Selecting a methodology involves more than the associated processes or some documentation. The main concerns are therefore the approach with which the methodology is identified, the needs of the company, the size and qualities of the project, and especially the agile development characteristics the methodologies possess. However, there are several difficulties in selecting the most appropriate methodology because of the features they have in common: Will it be suitable for my project? What challenges will be presented in the process? Will my team understand each stage? Will I be able to deliver software that satisfies the client? Project managers raise these questions, which seem manageable but have significant consequences. This paper presents a systematic literature review based on the analysis of the approaches of six web development methodologies. The aim of the study is to analyze the approaches presented by relevant methodologies, identifying their common agile characteristics and contrasting their benefits and limitations during a project. As a result, we could itemize five common features that are present within the processes: (1) flexibility, (2) constant communication within the workgroup, (3) use of UML, (4) inclusion of the end user, and (5) some documentation.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 185
    Publication Date: 2019
    Description: After more than a decade, the supply-driven approach to publishing public (open) data has resulted in an ever-growing number of data silos. Hundreds of thousands of datasets have been catalogued and can be accessed at data portals at different administrative levels. However, usually, users do not think in terms of datasets when they search for information. Instead, they are interested in information that is most likely scattered across several datasets. In the world of proprietary in-company data, organizations invest heavily in connecting data in knowledge graphs and/or store data in data lakes with the intention of having an integrated view of the data for analysis. With the rise of machine learning, it is a common belief that governments can improve their services, for example, by allowing citizens to get answers related to government information from virtual assistants like Alexa or Siri. To provide high-quality answers, these systems need to be fed with knowledge graphs. In this paper, we share our experience of constructing and using the first open government knowledge graph in the Netherlands. Based on the developed demonstrators, we elaborate on the value of having such a graph and demonstrate its use in the context of improved data browsing, multicriteria analysis for urban planning, and the development of location-aware chat bots.
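    As a toy illustration of the underlying idea (with a hypothetical vocabulary, not the Dutch government graph), a knowledge graph can be built and queried with rdflib as follows.

```python
# Build a tiny knowledge graph and answer a question with SPARQL (hypothetical
# namespace and properties, for illustration only).
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/gov/")   # hypothetical namespace

g = Graph()
g.add((EX.Dataset1, RDF.type, EX.Dataset))
g.add((EX.Dataset1, RDFS.label, Literal("Air quality measurements")))
g.add((EX.Dataset1, EX.coversMunicipality, EX.Amsterdam))
g.add((EX.Amsterdam, RDFS.label, Literal("Amsterdam")))

# Question a chatbot might ask: which datasets cover a given municipality?
q = """
SELECT ?label WHERE {
    ?d a ex:Dataset ;
       rdfs:label ?label ;
       ex:coversMunicipality ex:Amsterdam .
}
"""
for row in g.query(q, initNs={"ex": EX, "rdfs": RDFS}):
    print(row.label)
```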
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 186
    Publication Date: 2019
    Description: Computer science is a predominantly male field of study. Women face barriers while trying to enter the field of computer science, and those barriers persist when women move into the professional area of computer science. Despite decades of social struggle for gender equity in Science, Technology, Engineering, and Mathematics (STEM) education and in computer science in general, few women participate in computer science, and some of the reasons include gender bias and a lack of support for women when choosing a computer science career. Open source software development has been increasingly used by companies seeking the competitive advantages gained by team diversity. This diversification of the characteristics of team members includes, for example, the age of the participants, their level of experience, education and knowledge in the area, and their gender. In open source software projects, women are underrepresented and a series of biases affect their participation. This paper conducts a systematic literature review with the objective of finding factors that could help increase women’s interest in contributing to open source communities and software development projects. The main contributions of this paper are: (i) the identification of factors that cause women’s lack of interest (engagement), (ii) possible solutions to increase the engagement of this group, and (iii) an outline of the profile of professional women who participate in open source software projects and software development projects. The main findings of this research reveal that women are underrepresented in software development projects and in open source software projects: they represent less than 10% of the total number of developers, and the main causes of this underrepresentation may be associated with their workplace conditions, which reflect male gender bias.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 187
    Publication Date: 2019
    Description: Objects that possess mass (e.g., automobiles, manufactured items, etc.) translationally accelerate in direct proportion to the force applied, scaled by the object’s mass, in accordance with Newton’s law, while the rotational companion is Euler’s moment equations relating the angular acceleration of objects that possess mass moments of inertia. Michel Chasles’s theorem allows us to simply invoke Newton’s and Euler’s equations to fully describe the six degrees of freedom of mechanical motion. Many options are available to control the motion of objects by controlling the applied force and moment. A long, distinguished list of references has matured the field of controlling mechanical motion, which culminates in the burgeoning field of deterministic artificial intelligence as a natural progression of the laudable goal of adaptive and/or model predictive controllers that can be proven to be optimal subsequent to their development. Deterministic A.I. uses Chasles’s claim to assert Newton’s and Euler’s relations as deterministic self-awareness statements that are optimal with respect to state errors. Predictive controllers (both continuous and sampled-data) derived from the outset to be optimal, by first solving an optimization problem with the governing dynamic equations of motion, lead to several controllers (including a controller that twice invokes optimization to formulate robust, predictive control). These controllers are compared to each other with noise and modeling errors, and several figures of merit are used: tracking error and rate error deviations and means, in addition to total mean cost. Robustness is evaluated using Monte Carlo analysis, where plant parameters are randomly assumed to be incorrectly modeled. Six instances of controllers are compared against these methods and interpretations, which allows engineers to select a control tailored to their given circumstances. Novel versions of the ubiquitous classical proportional-derivative ("PD") controller are developed from the optimization statement at the outset by using a novel re-parameterization of the optimal results from time-to-state parameterization. Furthermore, time-optimal controllers, continuous predictive controllers, sampled-data predictive controllers, combined feedforward plus feedback controllers, and two-degree-of-freedom (2DOF) controllers are also developed and compared. The term "feedforward" is used in this study in the context of deterministic artificial intelligence, where analytic self-awareness statements are strictly determined by the governing physics (of mechanics in this case, e.g., Chasles, Newton, and Euler). When feedforward is combined with feedback by the previously mentioned method (provenance foremost in optimization), the combination is referred to as "2DOF", or two degrees of freedom, to indicate the twofold invocation of optimization at the genesis of the feedforward and the feedback, respectively. The feedforward plus feedback case is augmented by an online (real-time) comparison to the optimal case. This manuscript compares these many optional control strategies against each other. Nominal plants are used, but the addition of plant noise reveals the robustness of each controller, even without optimally rejecting assumed-Gaussian noise (e.g., via the Kalman filter). In other words, noise terms are intentionally left unaddressed in the problem formulation to evaluate the robustness of the proposed methods when real-world noise is added. Lastly, mismodeled plants controlled by each strategy reveal relative performance. Well-anticipated results include the lowest cost, which is achieved by the optimal controller (with very poor robustness), while low mean errors and deviations are achieved by the classical controllers (at the highest cost). Both continuous predictive control and sampled-data predictive control perform well in terms of both cost and errors and deviations, while the 2DOF controller's performance was the best overall.
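    A single-axis analogue of the feedforward-plus-feedback idea is sketched below; the inertia, gains, and reference trajectory are arbitrary assumptions, and the code does not reproduce the paper's controllers. The feedforward term asserts Euler's single-axis relation, while a classical PD term corrects residual tracking error.

```python
# Single-axis rigid-body tracking with physics-based feedforward plus PD feedback.
import numpy as np

J = 2.0                      # assumed moment of inertia
Kp, Kd = 25.0, 10.0          # assumed PD gains
dt, T = 0.001, 5.0

def desired(t):              # smooth rest-to-rest reference and its derivatives
    s = np.clip(t / T, 0.0, 1.0)
    q = 10*s**3 - 15*s**4 + 6*s**5
    qd = (30*s**2 - 60*s**3 + 30*s**4) / T
    qdd = (60*s - 180*s**2 + 120*s**3) / T**2
    return q, qd, qdd

theta, omega = 0.0, 0.0
for k in range(int(T / dt)):
    q_ref, w_ref, a_ref = desired(k * dt)
    u_ff = J * a_ref                                     # feedforward from Euler's relation
    u_fb = Kp * (q_ref - theta) + Kd * (w_ref - omega)   # PD feedback
    u = u_ff + u_fb
    alpha = u / J                                        # plant: single-axis Euler law
    omega += alpha * dt
    theta += omega * dt
print("final tracking error:", desired(T)[0] - theta)
```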
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 188
    Publication Date: 2019
    Description: This paper presents a space mission planning tool, which was developed for LEO (Low Earth Orbit) observation satellites. The tool is focused on a two-phase planning strategy with clustering preprocessing and mission planning, where an improved clustering algorithm is applied, and a hybrid algorithm that combines the genetic algorithm with the simulated annealing algorithm (GA–SA) is given and discussed. Experimental simulation studies demonstrate that the GA–SA algorithm with the improved clique partition algorithm based on the graph theory model exhibits higher fitness value and better optimization performance and reliability than the GA or SA algorithms alone.
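    A bare-bones sketch of the GA–SA hybrid pattern follows (the encoding, fitness, and neighbor functions are left to the caller, and all parameters are assumptions): a genetic algorithm whose offspring are refined by a short simulated-annealing pass before selection.

```python
# Generic GA-SA hybrid skeleton: GA with single-point crossover, plus a short
# simulated-annealing refinement of every child (list-encoded solutions assumed).
import math
import random

def ga_sa(fitness, random_solution, neighbor, pop_size=30, gens=100,
          sa_steps=20, t0=1.0, alpha=0.9):
    pop = [random_solution() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)          # keep the fittest half
        parents = pop[:pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = crossover(a, b)
            child = anneal(child, fitness, neighbor, sa_steps, t0, alpha)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

def crossover(a, b):
    cut = random.randint(1, len(a) - 1)              # single-point crossover
    return a[:cut] + b[cut:]

def anneal(x, fitness, neighbor, steps, t, alpha):
    best = x
    for _ in range(steps):
        y = neighbor(x)
        d = fitness(y) - fitness(x)
        if d >= 0 or random.random() < math.exp(d / max(t, 1e-9)):
            x = y
            if fitness(x) > fitness(best):
                best = x
        t *= alpha                                   # cool the temperature
    return best
```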
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 189
    Publication Date: 2019
    Description: The ILAHS (inhomogeneous linear algebraic hybrid system) is a kind of classic hybrid system. For the purpose of optimizing the design of an ILAHS, one important strategy is to introduce equivalence to reduce the states. Recent advances in hybrid systems indicate that approximate trace equivalence can further simplify the design of an ILAHS. To address this issue, the paper first introduces the trajectory metric d_trj for measuring the deviation between the behaviors of two hybrid systems. Given a deviation ε ≥ 0, the original ILAHS H_1 can be transformed to the approximate ILAHS H_2; then, in trace equivalence semantics, H_2 is further reduced to H_3 with the same functions, and hence H_1 is ε-approximate trace equivalent to H_3. In particular, ε = 0 corresponds to traditional trace equivalence. We implement an approach based on RealRootClassification to determine the approximation between the ILAHSs. The paper also shows that the existing approaches are only special cases of our method. Finally, we illustrate the effectiveness and practicality of our method on an example.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 190
    Publication Date: 2019
    Description: Conversational agents are reshaping our communication environment and have the potential to inform and persuade in new and effective ways. In this paper, we present the underlying technologies and the theoretical background behind a health-care platform dedicated to supporting medical staff and individuals with movement disabilities and to providing advanced monitoring functionalities in hospital and home surroundings. The framework implements an intelligent combination of two research areas: (1) sensor- and camera-based monitoring to collect, analyse, and interpret people’s behaviour and (2) natural machine–human interaction through a virtual assistant benefiting ailing patients. In addition, the framework serves as an important assistant to caregivers and clinical experts to obtain information about the patients in an intuitive manner. The proposed approach capitalises on the latest breakthroughs in computer vision, sensor management, speech recognition, natural language processing, knowledge representation, dialogue management, semantic reasoning, and speech synthesis, combining medical expertise and patient history.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 191
    Publication Date: 2019
    Description: Image classification is one of the most important tasks in the digital era. In terms of cultural heritage, it is important to develop classification methods that obtain good accuracy, but also are less computationally intensive, as image classification usually uses very large sets of data. This study aims to train and test four classification algorithms: (i) the multilayer perceptron, (ii) averaged one dependence estimators, (iii) forest by penalizing attributes, and (iv) the k-nearest neighbor rough sets and analogy based reasoning, and compares these with the results obtained from the Convolutional Neural Network (CNN). Three types of features were extracted from the images: (i) the edge histogram, (ii) the color layout, and (iii) the JPEG coefficients. The algorithms were tested before and after applying the attribute selection, and the results indicated that the best classification performance was obtained for the multilayer perceptron in both cases.
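    The sketch below shows the general experimental pattern under assumed feature files (not the paper's data): a multilayer perceptron trained on pre-extracted image descriptors such as concatenated edge-histogram, color-layout, and JPEG-coefficient features.

```python
# Train an MLP on pre-extracted image descriptors (hypothetical feature files).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

X = np.load("features.npy")     # hypothetical (n_images x n_features) matrix
y = np.load("labels.npy")       # hypothetical class labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```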
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 192
    Publication Date: 2019
    Description: In the future, automated cars may feature external human–machine interfaces (eHMIs) to communicate relevant information to other road users. However, it is currently unknown where on the car the eHMI should be placed. In this study, 61 participants each viewed 36 animations of cars with eHMIs on either the roof, windscreen, grill, above the wheels, or a projection on the road. The eHMI showed ‘Waiting’ combined with a walking symbol 1.2 s before the car started to slow down, or ‘Driving’ while the car continued driving. Participants had to press and hold the spacebar when they felt it safe to cross. Results showed that, averaged over the period when the car approached and slowed down, the roof, windscreen, and grill eHMIs yielded the best performance (i.e., the highest spacebar press time). The projection and wheels eHMIs scored relatively poorly, yet still better than no eHMI. The wheels eHMI received a relatively high percentage of spacebar presses when the car appeared from a corner, a situation in which the roof, windscreen, and grill eHMIs were out of view. Eye-tracking analyses showed that the projection yielded dispersed eye movements, as participants scanned back and forth between the projection and the car. It is concluded that eHMIs should be presented on multiple sides of the car. A projection on the road is visually effortful for pedestrians, as it causes them to divide their attention between the projection and the car itself.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 193
    Publication Date: 2019
    Description: Metagenomics studies, as well as genomics studies of polyploid species such as wheat, deal with the analysis of high-variation data. Such data contain sequences from similar, but distinct, genetic chains. This fact presents an obstacle to analysis and research. In particular, the detection of instrumentation errors during the digitization of the sequences may be hindered, as such errors can be indistinguishable from the real biological variation inside the digital data. This can prevent the determination of the correct sequences, while at the same time making variant studies significantly more difficult. This paper details a collection of ML-based models used to distinguish a real variant from an erroneous one. The focus is on using these models directly, but experiments are also done in combination with other predictors that isolate a pool of error candidates.
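    As a hedged illustration (the feature set and file names are hypothetical), such a model can be a standard classifier over per-candidate summary features, e.g., coverage, base quality, and strand balance.

```python
# Classify candidate variants as real variation vs. instrumentation error
# from per-candidate summary features (hypothetical inputs).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X = np.load("variant_features.npy")   # hypothetical (n_candidates x n_features)
y = np.load("variant_labels.npy")     # 1 = real variant, 0 = sequencing error

clf = RandomForestClassifier(n_estimators=300, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```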
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 194
    Publication Date: 2019
    Description: Network Function Virtualization (NFV) has revolutionized the way network services are offered to end users. Individual network functions are decoupled from expensive and dedicated middleboxes and are now provided as software-based virtualized entities called Virtualized Network Functions (VNFs). NFV is often complemented with the Cloud Computing paradigm to provide networking functions to enterprise customers and end users remote from their premises. NFV along with Cloud Computing has also started to be seen in Internet of Things (IoT) platforms as a means to provide networking functions to IoT traffic. The intermix of IoT, NFV, and Cloud technologies, however, is still in its infancy, creating a rich and open future research area. To this end, in this paper, we propose a novel approach to facilitate the placement and deployment of service-chained VNFs in a network cloud infrastructure that can be extended using the Mobile Edge Computing (MEC) infrastructure to accommodate mission-critical and delay-sensitive traffic. Our aim is to minimize the end-to-end communication delay while keeping the overall deployment cost to a minimum. Results reveal that the proposed approach can significantly reduce the delay experienced, while satisfying the Service Providers’ goal of low deployment costs.
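    A toy greedy placement sketch is shown below; the topology, delays, and costs are invented, and the heuristic only illustrates the delay-versus-deployment-cost trade-off, not the paper's optimization method.

```python
# Greedy service-chain placement: each VNF goes to the host that minimizes
# added propagation delay plus weighted deployment cost (invented numbers).
def place_chain(chain, nodes, delay, cost, prev="ingress", weight=0.1):
    """chain: list of VNF names; nodes: candidate hosts; delay[a][b]: delay
    between hosts; cost[n]: deployment cost at host n."""
    placement, total_delay, total_cost = {}, 0.0, 0.0
    for vnf in chain:
        best = min(nodes, key=lambda n: delay[prev][n] + weight * cost[n])
        placement[vnf] = best
        total_delay += delay[prev][best]
        total_cost += cost[best]
        prev = best
    return placement, total_delay, total_cost

# Example: one MEC node close to the ingress, one cheaper but distant cloud node.
delay = {"ingress": {"mec1": 1.0, "cloud1": 8.0},
         "mec1": {"mec1": 0.1, "cloud1": 8.0},
         "cloud1": {"mec1": 8.0, "cloud1": 0.1}}
cost = {"mec1": 5.0, "cloud1": 1.0}
print(place_chain(["firewall", "nat", "dpi"], ["mec1", "cloud1"], delay, cost))
```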
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 195
    Publication Date: 2019
    Description: The current paper addresses relevant network security vulnerabilities introduced by network devices within the emerging paradigm of the Internet of Things (IoT), as well as the urgent need to mitigate the negative effects of some types of Distributed Denial of Service (DDoS) attacks that try to exploit those security weaknesses. We design and implement a Software-Defined Intrusion Detection System (IDS) that reactively impairs the attacks at their origin, ensuring the “normal operation” of the network infrastructure. Our proposal includes an IDS that automatically detects several DDoS attacks and, once an attack is detected, notifies a Software-Defined Networking (SDN) controller. The proposal also pushes appropriate traffic forwarding decisions from the SDN controller to the network devices. The evaluation results suggest that our proposal timely detects several types of cyber-attacks based on DDoS, mitigates their negative impact on network performance, and ensures the correct delivery of normal traffic. Our work sheds light on the relevance of programming over an abstracted view of the network infrastructure to timely detect a botnet exploitation, mitigate malicious traffic at its source, and protect benign traffic.
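    The detection idea can be illustrated as follows (a simplified sketch; the controller URL and payload are hypothetical placeholders, not a specific controller's API): count packets per source over a sliding window and ask the SDN controller to install a drop rule when a source exceeds a threshold.

```python
# Threshold-based flood detection sketch that notifies an SDN controller
# through a hypothetical REST endpoint.
import time
from collections import defaultdict
import requests

WINDOW, THRESHOLD = 5.0, 5000          # seconds, packets per window (assumed)
CONTROLLER = "http://controller.example:8080/block"   # hypothetical endpoint

counts = defaultdict(int)
window_start = time.time()

def on_packet(src_ip):
    """Call for every observed packet; flags heavy hitters once per window."""
    global window_start
    counts[src_ip] += 1
    if time.time() - window_start >= WINDOW:
        for ip, n in counts.items():
            if n > THRESHOLD:
                # Ask the controller to push a drop rule towards the source.
                requests.post(CONTROLLER, json={"src": ip, "action": "drop"})
        counts.clear()
        window_start = time.time()
```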
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 196
    Publication Date: 2019
    Description: Network representation learning is a key research field in network data mining. In this paper, we propose a novel multi-view network representation algorithm (MVNR), which embeds multi-scale relations of network vertices into a low-dimensional representation space. In contrast to existing approaches, MVNR explicitly encodes higher-order information using k-step networks. In addition, we introduce the matrix forest index as a kind of network feature, which can be applied to balance the representation weights of the different network views. We also investigate the relationship between MVNR and several existing approaches, including DeepWalk, node2vec, and GraRep. We conduct experiments on several real-world citation datasets and demonstrate that MVNR outperforms recent approaches based on neural matrix factorization. Specifically, we demonstrate the efficiency of MVNR on network classification, visualization, and link prediction tasks.
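    A rough sketch of the k-step view idea is given below (it is not the authors' MVNR code, and uniform view weights stand in for the matrix forest index): build normalized k-step transition matrices as separate views, combine them, and factorize the result to obtain low-dimensional vertex embeddings.

```python
# Multi-view embedding sketch: weighted sum of k-step transition matrices,
# factorized with truncated SVD.
import numpy as np
from scipy.sparse.linalg import svds

def multi_view_embedding(adj, k=3, dim=64, weights=None):
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1, keepdims=True)
    P = adj / np.maximum(deg, 1e-12)                  # 1-step transition matrix
    views, Pk = [], np.eye(len(adj))
    for _ in range(k):
        Pk = Pk @ P                                   # k-step view
        views.append(Pk)
    w = weights if weights is not None else [1.0 / k] * k
    M = sum(wi * Vi for wi, Vi in zip(w, views))      # weighted combination of views
    U, S, _ = svds(M, k=min(dim, len(adj) - 1))       # truncated factorization
    return U * np.sqrt(S)                             # vertex embedding matrix
```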
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 197
    Publication Date: 2019
    Description: Network Function Virtualization is a new technology allowing for elastic cloud and bandwidth resource allocation. The technology requires an orchestrator whose role is service and resource orchestration. It receives service requests, each one characterized by a Service Function Chain (SFC), which is a set of service functions to be executed in a given order, and it implements an algorithm for deciding both where to allocate the cloud and bandwidth resources and how to route the SFCs. In a traditional orchestration algorithm, the orchestrator has detailed knowledge of the cloud and network infrastructures, which can lead to high computational complexity of the SFC Routing and Cloud and Bandwidth resource Allocation (SRCBA) algorithm. In this paper, we propose and evaluate the effectiveness of a scalable orchestration architecture inherited from the one proposed within the European Telecommunications Standards Institute (ETSI) and based on the functional separation of an NFV orchestrator into a Resource Orchestrator (RO) and a Network Service Orchestrator (NSO). Each cloud domain is equipped with an RO whose task is to provide a simple and abstract representation of the cloud infrastructure. These representations are notified to the NSO, which can then apply a simplified and less complex SRCBA algorithm. In addition, we show how segment routing technology can help simplify the SFC routing by means of an effective addressing of the service functions. The scalable orchestration solution has been investigated and compared to that of a traditional orchestrator in several network scenarios, varying the number of cloud domains. We have verified that the execution time of the SRCBA algorithm can be drastically reduced without degrading the performance in terms of cloud and bandwidth resource costs.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 198
    Publication Date: 2019
    Description: In the real world, optimization problems in multi-objective optimization (MOP) and dynamic optimization can be seen everywhere. During the last decade, among the various swarm intelligence algorithms for multi-objective optimization problems, glowworm swarm optimization (GSO) and the bacterial foraging algorithm (BFO) have attracted increasing attention from scholars. Although many scholars have proposed improvement strategies for GSO and BFO to keep a good balance between convergence and diversity, there are still many problems to be solved carefully. In this paper, a new coupling algorithm based on GSO and BFO (MGSOBFO) is proposed for solving dynamic multi-objective optimization problems (dMOPs). MGSOBFO achieves a good balance between exploration and exploitation by being divided into two parts: Part I is in charge of exploitation by GSO, and Part II is in charge of exploration by BFO. At the same time, simulated binary crossover (SBX) and polynomial mutation are introduced into MGSOBFO to enhance the convergence and diversity ability of the algorithm. In order to show the excellent performance of the algorithm, we experimentally compare MGSOBFO with three algorithms on benchmark functions. The results suggest that such a coupling algorithm has good performance and outperforms other algorithms that deal with dMOPs.
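    The two variation operators introduced into MGSOBFO can be sketched in their usual real-coded forms as follows; the distribution indices and mutation probability are assumed values.

```python
# Standard real-coded SBX crossover and polynomial mutation (assumed parameters).
import numpy as np

def sbx(p1, p2, eta=15.0):
    u = np.random.rand(len(p1))
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta + 1.0)))
    c1 = 0.5 * ((1.0 + beta) * p1 + (1.0 - beta) * p2)
    c2 = 0.5 * ((1.0 - beta) * p1 + (1.0 + beta) * p2)
    return c1, c2

def polynomial_mutation(x, lower, upper, eta=20.0, pm=0.1):
    x = x.copy()
    for i in range(len(x)):
        if np.random.rand() < pm:
            u = np.random.rand()
            if u < 0.5:
                delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
            else:
                delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
            x[i] += delta * (upper[i] - lower[i])
    return np.clip(x, lower, upper)
```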
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 199
    Publication Date: 2019
    Description: In the last decade, there has been a growing scientific interest in the analysis of DNA microarray datasets, which have been widely used in basic and translational cancer research. The application fields include both the identification of oncological subjects, separating them from the healthy ones, and the classification of different types of cancer. Since DNA microarray experiments typically generate a very large number of features for a limited number of patients, the classification task is very complex and typically requires the application of a feature-selection process to reduce the complexity of the feature space and to identify a subset of distinctive features. In this framework, there are no standard state-of-the-art results generally accepted by the scientific community and, therefore, it is difficult to decide which approach to use for obtaining satisfactory results in the general case. Based on these considerations, the aim of the present work is to provide a large experimental comparison for evaluating the effect of the feature-selection process applied to different classification schemes. For comparison purposes, we considered both ranking-based feature-selection techniques and state-of-the-art feature-selection methods. The experiments provide a broad overview of the results obtainable on standard microarray datasets with different characteristics in terms of both the number of features and the number of patients.
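    A compact sketch of the experimental pattern follows (dataset paths are placeholders): a ranking-based feature selector applied to a high-dimensional microarray matrix, followed by a cross-validated classifier comparison.

```python
# Ranking-based feature selection plus classifier comparison on microarray data
# (hypothetical input files).
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier

X = np.load("microarray_X.npy")   # hypothetical (n_patients x n_genes)
y = np.load("microarray_y.npy")   # hypothetical class labels

for name, clf in [("linear SVM", LinearSVC(max_iter=5000)),
                  ("random forest", RandomForestClassifier(n_estimators=200))]:
    pipe = make_pipeline(SelectKBest(f_classif, k=100), clf)
    score = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name}: CV accuracy with 100 ranked features = {score:.3f}")
```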
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 200
    Publication Date: 2019
    Description: In recent years, due to the unnecessary wastage of electrical energy in residential buildings, the need for energy optimization and user comfort has gained vital importance. In the literature, various techniques have been proposed to address the energy optimization problem. The goal of each technique is to maintain a balance between user comfort and energy requirements, such that the user can achieve the desired comfort level with the minimum amount of energy consumption. Researchers have addressed the issue with the help of different optimization algorithms and variations in the parameters to reduce energy consumption. To the best of our knowledge, this problem is not yet solved due to its challenging nature. The gaps in the literature are due to advancements in technology, the drawbacks of optimization algorithms, and the introduction of new optimization algorithms. Further, many newly proposed optimization algorithms have produced better accuracy on benchmark instances but have not yet been applied to the optimization of energy consumption in smart homes. In this paper, we have carried out a detailed literature review of the techniques used for the optimization of energy consumption and scheduling in smart homes. A detailed discussion has been carried out on different factors contributing towards thermal comfort, visual comfort, and air quality comfort. We have also reviewed the fog and edge computing techniques used in smart homes.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...