ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Export
Filter
Collection
  • Articles  (2,457)
Publisher
  • Molecular Diversity Preservation International  (1,482)
  • MDPI  (975)
  • Institute of Electrical and Electronics Engineers (IEEE)
Years
  • 2020-2022
  • 2015-2019  (2,393)
  • 2010-2014  (64)
  • 1990-1994
  • 1945-1949
Year
  • 2019  (2,393)
  • 2010  (64)
Topic
  • Computer Science  (2,457)
Journal
  • 1
    Publication Date: 2019
    Description: Over the years, the cellular mobile network has evolved from a wireless plain telephone system to a very complex system providing telephone service, Internet connectivity and many interworking capabilities with other networks. Its air interface performance has increased drastically over time, leading to high throughput and low latency. Changes to the core network, however, have been slow and incremental, with increased complexity worsened by the necessity of backwards-compatibility with older-generation systems such as the Global System for Mobile communication (GSM). In this paper, a new virtualized Peer-to-Peer (P2P) core network architecture is presented. The key idea of our approach is that each user is assigned a private virtualized copy of the whole core network. This enables a higher degree of security and novel services that are not possible in today’s architecture. We describe the new architecture, focusing on its main elements, IP addressing, message flows, mobility management, and scalability. Furthermore, we will show some significant advantages this new architecture introduces. Finally, we investigate the performance of our architecture by analyzing voice-call traffic available in a database of a large U.S. cellular network provider.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 2
    Publication Date: 2019
    Description: The ongoing digital transformation has the potential to revolutionize nearly all industrial manufacturing processes. However, its concrete requirements and implications are still not sufficiently investigated. In order to establish a common understanding, a multitude of initiatives have published guidelines, reference frameworks and specifications, all intending to promote their particular interpretation of the Industrial Internet of Things (IIoT). As a result of the inconsistent use of terminology, heterogeneous structures and proposed processes, an opaque landscape has been created. The consequence is that both new users and experienced experts can hardly manage to get an overview of the amount of information and publications, and make decisions on what is best to use and to adopt. This work contributes to the state of the art by providing a structured analysis of existing reference frameworks, their classifications and the concerns they target. We supply alignments of shared concepts, identify gaps and give a structured mapping of regarded concerns at each part of the respective reference architectures. Furthermore, the linking of relevant industry standards and technologies to the architectures allows a more effective search for specifications and guidelines and supports the direct technology adoption.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 3
    Publication Date: 2019
    Description: Service recommendation is one of the important means of service selection. Aiming at the problems of ignoring the influence of typical data sources such as service information and interaction logs on the similarity calculation of user preferences and insufficient consideration of dynamic trust relationship in traditional trust-based Web service recommendation methods, a novel approach for Web service recommendation based on advanced trust relationships is presented. After considering the influence of indirect trust paths, the improved calculation about indirect trust degree is proposed. By quantifying the popularity of service, the method of calculating user preference similarity is investigated. Furthermore, the dynamic adjustment mechanism of trust is designed by differentiating the effect of each service recommendation. Integrating these efforts, a service recommendation mechanism is introduced, in which a new service recommendation algorithm is described. Experimental results show that, compared with existing methods, the proposed approach not only has higher accuracy of service recommendation, but also can resist attacks from malicious users more effectively.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
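The indirect-trust idea in the abstract above can be illustrated with a toy calculation. This is a hypothetical sketch, not the paper's actual formulas: trust along one path is taken as the product of the direct trust degrees on each hop, and several indirect paths are aggregated by simple averaging.

```python
# Hypothetical sketch of indirect trust propagation; the paper's own
# calculation is more refined (it weights paths and adjusts dynamically).

def path_trust(path, trust):
    """Trust along one path = product of direct trust on each hop."""
    t = 1.0
    for a, b in zip(path, path[1:]):
        t *= trust[(a, b)]
    return t

def indirect_trust(paths, trust):
    """Aggregate several indirect paths by simple averaging."""
    values = [path_trust(p, trust) for p in paths]
    return sum(values) / len(values)

# Invented direct-trust degrees between users u, a, b and provider v:
trust = {("u", "a"): 0.9, ("a", "v"): 0.8, ("u", "b"): 0.6, ("b", "v"): 0.5}
paths = [["u", "a", "v"], ["u", "b", "v"]]
print(round(indirect_trust(paths, trust), 3))  # 0.51
```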
  • 4
    Publication Date: 2019
    Description: We explore the class of positive integers n that admit idempotent factorizations n = p̄q̄ such that λ(n) ∣ (p̄ − 1)(q̄ − 1), where λ is the Carmichael lambda function. Idempotent factorizations with p̄ and q̄ prime have received the most attention due to their cryptographic advantages, but there are an infinite number of n with idempotent factorizations containing composite p̄ and/or q̄. Idempotent factorizations are exactly those p̄ and q̄ that generate correctly functioning keys in the Rivest–Shamir–Adleman (RSA) 2-prime protocol with n as the modulus. While the resulting p̄ and q̄ have no cryptographic utility and therefore should never be employed in that capacity, idempotent factorizations warrant study in their own right as they live at the intersection of multiple hard problems in computer science and number theory. We present some analytical results here. We also demonstrate the existence of maximally idempotent integers, those n for which all bipartite factorizations are idempotent. We show how to construct them, and present preliminary results on their distribution.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
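The defining condition λ(n) ∣ (p̄ − 1)(q̄ − 1) is easy to test for small numbers. A minimal sketch with a naive (exhaustive) Carmichael lambda; the composite example 105 = 15 · 7 is our own illustration, not taken from the paper:

```python
from math import gcd

def carmichael(n):
    """Naive Carmichael lambda: least m >= 1 with a^m = 1 (mod n)
    for every a coprime to n. Fine for small n only."""
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    m = 1
    while any(pow(a, m, n) != 1 for a in units):
        m += 1
    return m

def is_idempotent(p, q):
    """Does lambda(p*q) divide (p-1)*(q-1)?"""
    return (p - 1) * (q - 1) % carmichael(p * q) == 0

print(is_idempotent(5, 7))    # True  (two primes: standard RSA setup)
print(is_idempotent(15, 7))   # True  (composite factor 15, n = 105)
print(is_idempotent(21, 5))   # False (same n = 105, different split)
```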
  • 5
    Publication Date: 2019
    Description: Google’s Material Design, created in 2014, led to the extended application of floating action buttons (FAB) in user interfaces of web pages and mobile applications. A FAB’s role is to trigger an action either on the current screen, or to perform an action that opens a new screen. A few specialists in user experience (UX) and user interface (UI) design are sceptical regarding the usability of FAB in the interfaces of both web pages and mobile applications. They claim that the use of FAB easily distracts users, that it interferes with using other important functions of the applications, and that it is unusable in applications designed for iOS systems. The aim of this paper is to investigate experimentally the quality of experience (QoE) of a static and animated FAB and compare it to the toolbar alternative. The experimental results of different testing methods rejected the hypothesis that the usage and animation of this UI element have a positive influence on application usability. However, its static and animated utilization enhanced the ratings of hedonic and aesthetic features of the user experience, justifying the usage of this type of button.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 6
    Publication Date: 2019
    Description: Recommender systems are nowadays an indispensable part of most personalized systems implementing information access and content delivery, supporting a great variety of user activities [...]
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 7
    Publication Date: 2019
    Description: Finite element data form an important basis for engineers to undertake analysis and research. In most cases, it is difficult to generate the internal sections of finite element data and professional operations are required. To display the internal data of entities, a method for generating the arbitrary sections of finite element data based on radial basis function (RBF) interpolation is proposed in this paper. The RBF interpolation function is used to realize arbitrary surface cutting of the entity, and the section can be generated by the triangulation of discrete tangent points. Experimental studies have proved that the method is very convenient for allowing users to obtain visualization results for an arbitrary section through simple and intuitive interactions.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
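The core of RBF interpolation, independent of the paper's section-cutting application, fits in a few lines: choose a kernel φ, solve the linear system A w = f with A[i][j] = φ(|xᵢ − xⱼ|), and evaluate the weighted kernel sum. A 1-D sketch with a Gaussian kernel and a hand-rolled solver (sample data invented):

```python
from math import exp

def phi(r, eps=1.0):
    """Gaussian radial basis function."""
    return exp(-(eps * r) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [a - f * m for a, m in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def rbf_fit(xs, fs):
    """Fit weights so the interpolant passes through all (x, f) samples."""
    A = [[phi(abs(xi - xj)) for xj in xs] for xi in xs]
    w = solve(A, fs)
    return lambda x: sum(wi * phi(abs(x - xi)) for wi, xi in zip(w, xs))

xs, fs = [0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0]   # samples of x^2
s = rbf_fit(xs, fs)
print(round(s(1.0), 6))   # interpolates the data exactly: 1.0
```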
  • 8
    Publication Date: 2019
    Description: The number of documents published on the Web in languages other than English grows every year. As a consequence, the need to extract useful information from different languages increases, highlighting the importance of research into Open Information Extraction (OIE) techniques. Different OIE methods have dealt with features from a unique language; however, few approaches tackle multilingual aspects. In those approaches, multilingualism is restricted to processing text in different languages, rather than exploring cross-linguistic resources, which results in low precision due to the use of general rules. Multilingual methods have been applied to numerous problems in Natural Language Processing, achieving satisfactory results and demonstrating that knowledge acquisition for a language can be transferred to other languages to improve the quality of the facts extracted. We argue that a multilingual approach can enhance OIE methods as it is ideal to evaluate and compare OIE systems, and therefore can be applied to the collected facts. In this work, we discuss how transferring knowledge between languages can increase acquisition in multilingual approaches. We provide a roadmap of the Multilingual Open IE area concerning state of the art studies. Additionally, we evaluate the transfer of knowledge to improve the quality of the facts extracted in each language. Moreover, we discuss the importance of a parallel corpus to evaluate and compare multilingual systems.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 9
    Publication Date: 2019
    Description: This paper aims to explore the current status, research trends and hotspots related to the field of infrared detection technology through bibliometric analysis and visualization techniques based on the Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI) articles published between 1990 and 2018 using the VOSviewer and Citespace software tools. Based on our analysis, we first present the spatiotemporal distribution of the literature related to infrared detection technology, including annual publications, origin country/region, main research organization, and source publications. Then, we report the main subject categories involved in infrared detection technology. Furthermore, we adopt literature cocitation, author cocitation, keyword co-occurrence and timeline visualization analyses to visually explore the research fronts and trends, and present the evolution of infrared detection technology research. The results show that China, the USA and Italy are the three most active countries in infrared detection technology research and that the Centre National de la Recherche Scientifique has the largest number of publications among related organizations. The most prominent research hotspots in the past five years are vibration thermal imaging, pulse thermal imaging, photonic crystals, skin temperature, remote sensing technology, and detection of delamination defects in concrete. The trend of future research on infrared detection technology is from qualitative to quantitative research development, engineering application research and infrared detection technology combined with other detection techniques. The proposed approach based on the scientific knowledge graph analysis can be used to establish reference information and a research basis for application and development of methods in the domain of infrared detection technology studies.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 10
    Publication Date: 2019
    Description: The literature on big data analytics and firm performance is still fragmented and lacking in attempts to integrate the current studies’ results. This study aims to provide a systematic review of contributions related to big data analytics and firm performance. The authors assess papers listed in the Web of Science index. This study identifies the factors that may influence the adoption of big data analytics in various parts of an organization and categorizes the diverse types of performance that big data analytics can address. Directions for future research are developed from the results. This systematic review proposes to create avenues for both conceptual and empirical research streams by emphasizing the importance of big data analytics in improving firm performance. In addition, this review offers both scholars and practitioners an increased understanding of the link between big data analytics and firm performance.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 11
    Publication Date: 2019
    Description: Service Level Agreements are employed to set availability commitments in cloud services. When a violation occurs as in an outage, cloud providers may be called to compensate customers for the losses incurred. Such compensation may be so large as to erode cloud providers’ profit margins. Insurance may be used to protect cloud providers against such a danger. In this paper, closed formulas are provided through the expected utility paradigm to set the insurance premium under different outage models and QoS metrics (no. of outages, no. of long outages, and unavailability). When the cloud service is paid through a fixed fee, we also provide the maximum unit compensation that a cloud provider can offer so as to meet constraints on its profit loss. The unit compensation is shown to vary approximately as the inverse square of the service fee.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
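The paper's closed formulas are not reproduced in this record; as a stand-in, the standard exponential-utility premium principle from the same expected-utility paradigm can be computed directly: P = (1/a) · ln E[exp(a·L)], where L is the random compensation paid out and a > 0 the insurer's risk aversion. All numbers below are invented:

```python
from math import exp, log

def exponential_premium(losses, probs, a):
    """Indifference premium under exponential utility with risk aversion a."""
    mgf = sum(p * exp(a * l) for l, p in zip(losses, probs))
    return log(mgf) / a

# Invented compensation distribution: 0 with prob 0.90, 10 with prob 0.09,
# 100 (a long outage) with prob 0.01.
losses, probs = [0.0, 10.0, 100.0], [0.90, 0.09, 0.01]
mean_loss = sum(l * p for l, p in zip(losses, probs))
premium = exponential_premium(losses, probs, a=0.05)

# The premium exceeds the expected loss (a positive risk loading),
# driven mostly by the rare but large compensation:
print(mean_loss, round(premium, 2))
```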
  • 12
    Publication Date: 2019
    Description: Term translation quality in machine translation (MT), which is usually measured by domain experts, is a time-consuming and expensive task. In fact, this is unimaginable in an industrial setting where customised MT systems often need to be updated for many reasons (e.g., availability of new training data, leading MT techniques). To the best of our knowledge, as of yet, there is no publicly-available solution to evaluate terminology translation in MT automatically. Hence, there is a genuine need to have a faster and less-expensive solution to this problem, which could help end-users to identify term translation problems in MT instantly. This study presents a faster and less expensive strategy for evaluating terminology translation in MT. High correlations of our evaluation results with human judgements demonstrate the effectiveness of the proposed solution. The paper also introduces a classification framework, TermCat, that can automatically classify term translation-related errors and expose specific problems in relation to terminology translation in MT. We carried out our experiments with a low resource language pair, English–Hindi, and found that our classifier, whose accuracy varies across the translation directions, error classes, the morphological nature of the languages, and MT models, generally performs competently in the terminology translation classification task.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
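TermCat itself is not described in detail here; a crude stand-in for automatic terminology evaluation is exact-match recall of reference target terms in the MT output (all strings below are invented):

```python
# Hypothetical sketch, not the paper's TermCat classifier: count how many
# expected target-language terms actually appear in the MT output.

def term_recall(mt_output, term_pairs):
    """term_pairs: list of (source_term, expected_target_term)."""
    hits = [tgt for _, tgt in term_pairs if tgt.lower() in mt_output.lower()]
    return len(hits) / len(term_pairs), hits

mt = "The neural network was trained on a parallel corpus."
terms = [("réseau de neurones", "neural network"),
         ("corpus parallèle", "parallel corpus"),
         ("données d'entraînement", "training data")]
recall, found = term_recall(mt, terms)
print(round(recall, 2), found)  # 0.67 ['neural network', 'parallel corpus']
```

Exact matching ignores inflection, which is exactly why morphology affects the accuracy figures the abstract mentions.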
  • 13
    Publication Date: 2019
    Description: Radar signal processing mainly focuses on target detection, classification, estimation, filtering, and so on. Compressed sensing radar (CSR) technology can potentially provide additional tools to simultaneously reduce computational complexity and effectively solve inference problems. CSR allows direct compressive signal processing without the need to reconstruct the signal. This study aimed to solve the problem of CSR detection without signal recovery by optimizing the transmit waveform. Therefore, a waveform optimization method was introduced to improve the output signal-to-interference-plus-noise ratio (SINR) in the case where the target signal is corrupted by colored interference and noise having known statistical characteristics. Two different target models are discussed: deterministic and random. In the case of a deterministic target, the optimum transmit waveform is derived by maximizing the SINR and a suboptimum solution is also presented. In the case of random target, an iterative waveform optimization method is proposed to maximize the output SINR. This approach ensures that SINR performance is improved in each iteration step. The performance of these methods is illustrated by computer simulation.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 14
    Publication Date: 2019
    Description: In semi-autonomous robot conferencing, not only the operator controls the robot, but the robot itself also moves autonomously. Thus, it can modify the operator’s movement (e.g., adding social behaviors). However, the sense of agency, that is, the degree of feeling that the movement of the robot is the operator’s own movement, would decrease if the operator is conscious of the discrepancy between the teleoperation and autonomous behavior. In this study, we developed an interface to control the robot head by using an eye tracker. When the robot autonomously moves its eye-gaze position, the interface guides the operator’s eye movement towards this autonomous movement. The experiment showed that our interface can maintain the sense of agency, because it provided the illusion that the autonomous behavior of a robot is directed by the operator’s eye movement. This study reports the conditions of how to provide this illusion in semi-autonomous robot conferencing.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 15
    Publication Date: 2019
    Description: Appropriate business process management (BPM) within an organization can help attain organizational goals. Effectively managing the lifecycle of these processes is particularly important for organizational effectiveness, improving performance and building competitiveness across the company. This paper presents process discovery and how it can be used in a broader framework supporting self-organization in BPM. Process discovery is intrinsically associated with the process lifecycle. We have made a pre-evaluation of the usefulness of our approach using a generated log file. We also compared visualizations of the outcomes of our approach across different cases and showed performance characteristics of the cash loan sales process.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
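The simplest process-discovery artifact, a directly-follows graph mined from an event log, can be sketched in a few lines (the log below is invented, not the paper's cash loan data):

```python
from collections import Counter

def directly_follows(log):
    """log: {case_id: [activity, ...]} -> Counter of (a, b) transitions."""
    dfg = Counter()
    for trace in log.values():
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg

log = {
    "c1": ["apply", "check", "approve", "pay"],
    "c2": ["apply", "check", "reject"],
    "c3": ["apply", "check", "approve", "pay"],
}
dfg = directly_follows(log)
print(dfg[("apply", "check")], dfg[("approve", "pay")])  # 3 2
```

Edge frequencies like these are the raw material that discovery algorithms turn into process models.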
  • 16
    Publication Date: 2019
    Description: Correlations between observed data are at the heart of all empirical research that strives for establishing lawful regularities. However, there are numerous ways to assess these correlations, and there are numerous ways to make sense of them. This essay presents a bird’s eye perspective on different interpretive schemes to understand correlations. It is designed as a comparative survey of the basic concepts. Many important details to back it up can be found in the relevant technical literature. Correlations can (1) extend over time (diachronic correlations) or they can (2) relate data in an atemporal way (synchronic correlations). Within class (1), the standard interpretive accounts are based on causal models or on predictive models that are not necessarily causal. Examples within class (2) are (mainly unsupervised) data mining approaches, relations between domains (multiscale systems), nonlocal quantum correlations, and eventually correlations between the mental and the physical.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
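The two classes distinguished above can be made concrete on toy data: a synchronic correlation between two series (Pearson r) and a diachronic one within a single series (lag-1 autocorrelation).

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def lag1_autocorr(x):
    """Diachronic: correlate the series with itself shifted by one step."""
    return pearson(x[:-1], x[1:])

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.0, 9.9]       # roughly 2*x (invented data)
print(round(pearson(x, y), 3))       # synchronic: close to 1.0
print(round(lag1_autocorr(x), 3))    # diachronic: 1.0 for a linear trend
```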
  • 17
    Publication Date: 2019
    Description: The capacity of private information retrieval (PIR) from databases coded using maximum distance separable (MDS) codes was previously characterized by Banawan and Ulukus, where it was assumed that the messages are encoded and stored separably in the databases. This assumption was also usually made in other related works in the literature, and this capacity is usually referred to as the MDS-PIR capacity colloquially. In this work, we considered the question of whether and when this capacity barrier can be broken through joint encoding and storing of the messages. Our main results are two classes of novel code constructions, which allow joint encoding, as well as the corresponding PIR protocols, which indeed outperformed the separate MDS-coded systems. Moreover, we show that a simple but novel expansion technique allows us to generalize these two classes of codes, covering a wider range of the cases where this capacity barrier can be broken.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 18
    Publication Date: 2019
    Description: Collaborative filtering based recommender systems have proven to be extremely successful in settings where user preference data on items is abundant. However, collaborative filtering algorithms are hindered by their weakness against the item cold-start problem and general lack of interpretability. Ontology-based recommender systems exploit hierarchical organizations of users and items to enhance browsing, recommendation, and profile construction. While ontology-based approaches address the shortcomings of their collaborative filtering counterparts, ontological organizations of items can be difficult to obtain for items that mostly belong to the same category (e.g., television series episodes). In this paper, we present an ontology-based recommender system that integrates the knowledge represented in a large ontology of literary themes to produce fiction content recommendations. The main novelty of this work is an ontology-based method for computing similarities between items and its integration with the classical Item-KNN (K-nearest neighbors) algorithm. As a study case, we evaluated the proposed method against other approaches by performing the classical rating prediction task on a collection of Star Trek television series episodes in an item cold-start scenario. This transverse evaluation provides insights into the utility of different information resources and methods for the initial stages of recommender system development. We found our proposed method to be a convenient alternative to collaborative filtering approaches for collections of mostly similar items, particularly when other content-based approaches are not applicable or otherwise unavailable. Aside from the new methods, this paper contributes a testbed for future research and an online framework to collaboratively extend the ontology of literary themes to cover other narrative content.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
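The integration described above can be sketched generically: an item-to-item similarity (here plain Jaccard over theme sets, standing in for the paper's ontology-based measure) plugged into Item-KNN rating prediction. Episode names, themes, and ratings below are invented:

```python
def jaccard(a, b):
    """Similarity of two theme sets."""
    return len(a & b) / len(a | b)

def predict(item, rated, themes, k=2):
    """Item-KNN: similarity-weighted average of the user's k most
    similar rated items. rated: {item: rating} for one user."""
    sims = sorted(((jaccard(themes[item], themes[j]), r)
                   for j, r in rated.items()), reverse=True)[:k]
    return sum(s * r for s, r in sims) / sum(s for s, _ in sims)

themes = {
    "ep1": {"time travel", "first contact"},
    "ep2": {"time travel", "mutiny"},
    "ep3": {"first contact", "diplomacy"},
    "ep4": {"mutiny", "diplomacy"},
}
user_ratings = {"ep2": 5.0, "ep3": 3.0, "ep4": 1.0}
print(round(predict("ep1", user_ratings, themes), 2))  # 4.0
```

Because the similarity comes from item content (themes), the prediction works even when "ep1" has no ratings at all, which is the item cold-start scenario the paper targets.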
  • 19
    Publication Date: 2019
    Description: The advent of utility computing has revolutionized almost every sector of traditional software development. Especially commercial cloud computing services, pioneered by the likes of Amazon, Google and Microsoft, have provided an unprecedented opportunity for the fast and sustainable development of complex distributed systems. Nevertheless, existing models and tools aim primarily for systems where resource usage—by humans and bots alike—is logically and physically quite disperse resulting in a low likelihood of conflicting resource access. However, a number of resource-intensive applications, such as Massively Multiplayer Online Games (MMOGs) and large-scale simulations introduce a requirement for a very large common state with many actors accessing it simultaneously and thus a high likelihood of conflicting resource access. This paper presents a systematic mapping study of the state-of-the-art in software technology aiming explicitly to support the development of MMOGs, a class of large-scale, resource-intensive software systems. By examining the main focus of a diverse set of related publications, we identify a list of criteria that are important for MMOG development. Then, we categorize the selected studies based on the inferred criteria in order to compare their approach, unveil the challenges faced in each of them and reveal research trends that might be present. Finally we attempt to identify research directions which appear promising for enabling the use of standardized technology for this class of systems.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 20
    Publication Date: 2019
    Description: In this survey paper, we review various concepts of graph density, as well as associated theorems and algorithms. Our goal is motivated by the fact that, in many applications, it is a key algorithmic task to extract a densest subgraph from an input graph, according to some appropriate definition of graph density. While this problem has been the subject of active research for over half of a century, with many proposed variants and solutions, new results still continuously emerge in the literature. This shows both the importance and the richness of the subject. We also identify some interesting open problems in the field.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
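A classical algorithm in this area is Charikar's greedy 2-approximation for the density |E|/|V|: repeatedly remove a minimum-degree vertex and keep the best intermediate subgraph. A compact sketch on an invented graph:

```python
def densest_subgraph(edges):
    """Greedy peeling for density |E|/|V| (Charikar's 2-approximation)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    m = len(edges)
    best, best_density = set(adj), m / len(adj)
    while len(adj) > 1:
        u = min(adj, key=lambda x: len(adj[x]))   # minimum-degree vertex
        m -= len(adj[u])                          # its edges disappear
        for w in adj.pop(u):
            adj[w].discard(u)
        d = m / len(adj)
        if d > best_density:
            best, best_density = set(adj), d
    return best, best_density

# A 4-clique with a pendant path attached:
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5), (5, 6)]
print(densest_subgraph(edges))  # ({1, 2, 3, 4}, 1.5)
```

The greedy pass finds the clique exactly here; in general it is only guaranteed to come within a factor 2 of the optimum density.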
  • 21
    Publication Date: 2019
    Description: Military named entity recognition (MNER) is one of the key technologies in military information extraction. Traditional methods for the MNER task rely on cumbersome feature engineering and specialized domain knowledge. In order to solve this problem, we propose a method employing a bidirectional long short-term memory (BiLSTM) neural network with a self-attention mechanism to identify military entities automatically. We obtain distributed vector representations of the military corpus by unsupervised learning, and the BiLSTM model combined with the self-attention mechanism is adopted to fully capture the contextual information carried by the character vector sequence. The experimental results show that the self-attention mechanism can effectively improve the performance of the MNER task. The F-scores for identification on military documents and online military texts were 90.15% and 89.34%, respectively, which was better than other models.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
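The self-attention mechanism at the heart of such a model can be illustrated without any deep-learning framework. A plain-Python sketch of scaled dot-product self-attention, omitting the learned Q/K/V projections and the BiLSTM (so Q = K = V = the input vectors):

```python
from math import exp, sqrt

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def self_attention(x):
    """x: list of d-dim vectors. Each output position is a convex
    combination of all input vectors, weighted by dot-product scores."""
    d = len(x[0])
    out = []
    for q in x:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / sqrt(d) for k in x]
        w = softmax(scores)
        out.append([sum(wi * v[j] for wi, v in zip(w, x)) for j in range(d)])
    return out

x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy character vectors
y = self_attention(x)
print([round(v, 3) for v in y[0]])
```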
  • 22
    Publication Date: 2019
    Description: This article empirically demonstrates the impacts of truthfully sharing forecast information and using forecast combinations in a fast-moving-consumer-goods (FMCG) supply chain. Although it is known a priori that sharing information improves the overall efficiency of a supply chain, information such as pricing or promotional strategy is often kept proprietary for competitive reasons. In this regard, it is herein shown that simply sharing the retail-level forecasts—this does not reveal the exact business strategy, due to the effect of omni-channel sales—yields nearly all the benefits of sharing all pertinent information that influences FMCG demand. In addition, various forecast combination methods are used to further stabilize the forecasts, in situations where multiple forecasting models are used during operation. In other words, it is shown that combining forecasts is less risky than “betting” on any component model.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
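The claim that combining forecasts is less risky than "betting" on one model is easy to demonstrate on toy numbers (invented, and deliberately constructed so that the two models' opposite biases cancel exactly):

```python
def mae(forecast, actual):
    """Mean absolute error of a forecast against actual demand."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

actual  = [100, 120, 110, 130, 125]
model_a = [ 90, 135, 100, 145, 110]   # over-reacts to demand swings
model_b = [110, 105, 120, 115, 140]   # under-reacts, opposite bias
combo   = [(a + b) / 2 for a, b in zip(model_a, model_b)]

print(mae(model_a, actual), mae(model_b, actual), mae(combo, actual))
# 13.0 13.0 0.0 -- the simple average cancels the opposite biases
```

Real forecasts never cancel this perfectly, but the same mechanism makes the combination's error more stable than either component's.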
  • 23
    Publication Date: 2019
    Description: An important problem in machine learning is that, when using more than two labels, it is very difficult to construct and optimize a group of learning functions that are still useful when the prior distribution of instances is changed. To resolve this problem, semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms are combined to form a systematic solution. A semantic channel in G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and Logistic functions that are typically used in popular methods, membership functions are more convenient to use, providing learning functions that do not suffer from the above problem. In Logical Bayesian Inference (LBI), every label is independently learned. For multilabel learning, we can directly obtain a group of optimized membership functions from a large enough sample with labels, without preparing different samples for different labels. Furthermore, a group of Channel Matching (CM) algorithms are developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions in a two-dimensional feature space, only 2–3 iterations are required for the mutual information between three classes and three labels to surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved to form the CM-EM algorithm, which can outperform the EM algorithm when the mixture ratios are imbalanced, or when local convergence exists. The CM iteration algorithm needs to be combined with neural networks for MMI classification in high-dimensional feature spaces. LBI needs further investigation for the unification of statistics and logic.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 24
    Publication Date: 2019
    Description: In an era of accelerating digitization and advanced big data analytics, harnessing quality data and insights will enable innovative research methods and management approaches. Among others, Artificial Intelligence Imagery Analysis has recently emerged as a new method for analyzing the content of large amounts of pictorial data. In this paper, we provide background information on this method and outline its application, suggesting that it constitutes a profound improvement over previous methods that have mostly relied on manual work by humans. We discuss the applications of Artificial Intelligence Imagery Analysis for research and practice and provide an example of its use in research. In the case study, we employed Artificial Intelligence Imagery Analysis to decompose and assess thumbnail images in the context of marketing and media research, and we show how properly assessed and designed thumbnail images promote the consumption of online videos. We conclude the paper with a discussion of the potential of Artificial Intelligence Imagery Analysis for research and practice across disciplines.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 25
    Publication Date: 2019
    Description: Human eye movement is one of the most important functions for understanding our surroundings. When a human eye processes a scene, it quickly focuses on the dominant parts of the scene, a process commonly known as visual saliency detection or visual attention prediction. Recently, neural networks have been used to predict visual saliency. This paper proposes a deep learning encoder-decoder architecture, based on a transfer learning technique, to predict visual saliency. In the proposed model, visual features are extracted through convolutional layers from raw images to predict visual saliency. In addition, the proposed model uses the VGG-16 network for semantic segmentation, with a pixel classification layer that predicts the categorical label of every pixel in an input image. The proposed model is applied to several datasets, including TORONTO, MIT300, MIT1003, and DUT-OMRON, to illustrate its efficiency. The results of the proposed model are quantitatively and qualitatively compared to classic and state-of-the-art deep learning models. Using the proposed deep learning model, a global accuracy of up to 96.22% is achieved for the prediction of visual saliency.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 26
    Publication Date: 2019
    Description: The skyline query and its variants are useful functions in the early stages of a knowledge-discovery process: they select a set of important objects that are better than the other, common objects in the dataset. In order to handle big data, such knowledge-discovery queries must be computed in parallel distributed environments. In this paper, we consider an efficient parallel algorithm for the “K-skyband query” and the “top-k dominating query”, which are popular variants of the skyline query. We propose a method for computing both queries simultaneously in MapReduce, a popular parallel distributed framework for processing “big data” problems. Our extensive evaluation results validate the effectiveness and efficiency of the proposed algorithm on both real and synthetic datasets.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 27
    Publication Date: 2019
    Description: A generalization of Ding’s construction is proposed that employs as a defining set the collection of the s-th powers (s ≥ 2) of all nonzero elements in GF(p^m), where p ≥ 2 is prime. Some of the resulting codes are optimal or near-optimal and include projective codes over GF(4) that give rise to optimal or near-optimal quantum codes. In addition, the codes yield interesting combinatorial structures, such as strongly regular graphs and block designs.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
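The defining set in the abstract above is easy to tabulate in the prime-field case m = 1; the sketch below covers only this special case (the construction itself works over GF(p^m) in general).

```python
# The set of s-th powers of the nonzero elements of GF(p), p prime:
# for m = 1 this is just {x**s mod p : 1 <= x < p}.

def sth_powers(p, s):
    return sorted({pow(x, s, p) for x in range(1, p)})

# s = 2, p = 11: the quadratic residues modulo 11.
qr11 = sth_powers(11, 2)
```

Such a set is then used as the defining set of the code; for s = 2 it consists of the quadratic residues.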
  • 28
    Publication Date: 2019
    Description: The exorbitant increase in the computational complexity of modern video coding standards, such as High Efficiency Video Coding (HEVC), is a compelling challenge for resource-constrained consumer electronic devices. For instance, the brute-force evaluation of all possible combinations of available coding modes and the quadtree-based coding structure in HEVC to determine the optimum set of coding parameters for a given content demands a substantial amount of computational and energy resources. Thus, the resource requirements for real-time operation of HEVC have become a contributing factor in the Quality of Experience (QoE) of the end users of emerging multimedia and future internet applications. In this context, this paper proposes a content-adaptive Coding Unit (CU) size selection algorithm for HEVC intra-prediction. The proposed algorithm builds content-specific weighted Support Vector Machine (SVM) models in real time during the encoding process, to provide an early estimate of CU size for a given content, avoiding the brute-force evaluation of all possible coding mode combinations in HEVC. The experimental results demonstrate an average encoding time reduction of 52.38%, with an average Bjøntegaard Delta Bit Rate (BDBR) increase of 1.19% compared to the HM16.1 reference encoder. Furthermore, perceptual visual quality assessments conducted with the Video Quality Metric (VQM) show minimal visual quality impact on the reconstructed videos of the proposed algorithm compared to state-of-the-art approaches.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 29
    Publication Date: 2019
    Description: A crowdsourcing contest is one of the most popular modes of crowdsourcing and an important tool for an enterprise to implement open innovation. The solvers’ active participation is one of the major reasons for the success of crowdsourcing contests. Research on solvers’ participation behavior helps in understanding the sustainability and incentives of solvers’ participation in an online crowdsourcing platform, so how to attract more solvers to participate and invest more effort is a focus for researchers. Previous studies mainly used submission quantity to measure solvers’ participation behavior and lacked an effective measure of the degree of effort expended by a solver. For the first time, we use solvers’ participation time as a dependent variable to measure their effort in a crowdsourcing contest, thereby incorporating participation time into research on solver participation. With data from Taskcn.com, we analyze how participation time is affected by four groups of factors: task design, task description, task process, and environment. We found that, first, for task design, higher task rewards attract solvers to invest more time in the participation process, and the relationship between participation time and task duration is inverted U-shaped. Second, for task description, the length of the task description has a negative impact on participation time, while a task description attachment positively influences participation time. Third, for the task process, communication and supplementary explanations during a crowdsourcing process positively affect participation time. Fourth, for environmental factors, the task density of the crowdsourcing platform and the market price of all crowdsourcing contests have negative and positive effects on participation time, respectively.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 30
    Publication Date: 2019
    Description: Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems. In some cases, a big graph can be chopped into pieces that fit on one machine and are processed independently before the results are stitched together, leading to some suboptimality from the interaction among the different pieces. In other cases, links between different parts may show up in the running time and/or network communication cost, hence the desire for a small cut size. We study a distributed balanced-partitioning problem where the goal is to partition the vertices of a given graph into k pieces so as to minimize the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks such as MapReduce. The algorithm first embeds the nodes of the graph onto a line, and then processes the nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the initial embedding, for example, via hierarchical clustering or Hilbert curves. We then apply four different techniques: local swaps, minimum cuts on the boundaries of partitions, contraction, and dynamic programming. In our empirical study, we compare these techniques with each other and with previous work in distributed graph algorithms, such as the label-propagation method, FENNEL, and Spinner. We report our results both on a private map graph and several public social networks, and show that our results beat previous distributed algorithms: for instance, compared to the label-propagation algorithm, we report an improvement of 15–25% in the cut value. We also observe that our algorithms admit scalable distributed implementations for any number of partitions.
Finally, we explain three applications of this work at Google: (1) Balanced partitioning is used to route multi-term queries to different replicas in the Google Search backend in a way that reduces the cache miss rates by ≈0.5%, which leads to a double-digit gain in throughput of production clusters. (2) Applied to the Google Maps Driving Directions, balanced partitioning minimizes the number of cross-shard queries with the goal of saving in CPU usage. This system achieves load balancing by dividing the world graph into several “shards”. Live experiments demonstrate an ≈40% drop in the number of cross-shard queries when compared to a standard geography-based method. (3) In a job scheduling problem for our data centers, we use balanced partitioning to evenly distribute the work while minimizing the amount of communication across geographically distant servers. In fact, the hierarchical nature of our solution goes well with the layering of data center servers, where certain machines are closer to each other and have faster links to one another.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
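A toy, single-machine rendition of the pipeline sketched above, under simplifying assumptions: a BFS order stands in for the hierarchical-clustering or Hilbert-curve embedding, and the line is simply cut into k contiguous ranges (the paper then improves the cut with swaps, minimum cuts, contraction, and dynamic programming).

```python
# Embed vertices on a line, split into k contiguous pieces, measure cut size.
from collections import deque

def bfs_order(adj, start):
    order, seen, queue = [], {start}, deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in adj[v]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return order

def cut_size(adj, part):
    # Each undirected edge is counted once (v < w) and crosses the cut
    # if its endpoints sit in different pieces.
    return sum(part[v] != part[w] for v in adj for w in adj[v] if v < w)

# A 6-cycle: 0-1-2-3-4-5-0, split into k = 2 pieces.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
order = bfs_order(adj, 0)
k = 2
part = {v: i * k // len(order) for i, v in enumerate(order)}
```

On the cycle, the BFS-order split already attains the optimal cut of 2 edges; on general graphs, the refinement steps listed in the abstract are what recover quality.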
  • 31
    Publication Date: 2019
    Description: Analyzing the structure of a social network helps in gaining insights into interactions and relationships among users while revealing the patterns of their online behavior. Network centrality is a measure of the importance of a node in a network, which allows revealing the structural patterns and morphology of networks. We propose a distributed computing approach that calculates the network centrality value for each user using the MapReduce approach on the Hadoop platform, which allows faster and more efficient computation than the conventional implementation. A distributed approach is scalable and helps in efficient computation of large-scale datasets, such as social network data. The proposed approach improves the calculation performance of degree centrality by 39.8%, closeness centrality by 40.7% and eigenvalue centrality by 41.1% on a Twitter dataset.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
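The centrality measures being parallelized above have standard definitions; a sequential reference version (no Hadoop, illustration only) for degree and closeness centrality looks like this:

```python
# Degree centrality: fraction of other nodes a node touches.
# Closeness centrality: (n - 1) / (sum of shortest-path distances from the node).
from collections import deque

def degree_centrality(adj):
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def closeness_centrality(adj):
    result = {}
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:              # BFS gives hop distances on unweighted graphs
            v = queue.popleft()
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
        result[s] = (len(adj) - 1) / sum(dist.values())
    return result

# A star graph: hub 0 connected to leaves 1..4.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
deg = degree_centrality(adj)
clo = closeness_centrality(adj)
```

The MapReduce formulation distributes exactly these per-node computations (one BFS per source) across workers.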
  • 32
    Publication Date: 2019
    Description: Deep neural networks are successful learning tools for building nonlinear models. However, a robust deep learning-based classification model needs a large dataset; indeed, these models are often unstable when trained on small datasets. To solve this issue, which is particularly critical in light of the possible clinical applications of these predictive models, researchers have developed approaches such as virtual sample generation. Virtual sample generation significantly improves learning and classification performance when working with small samples. The main objective of this study is to evaluate the ability of the proposed virtual sample generation to overcome the small-sample-size problem, which is a feature of the automated detection of a neurodevelopmental disorder, namely autism spectrum disorder. Results show that our method enhances diagnostic accuracy from 84% to 95% using virtual samples generated on the basis of five actual clinical samples. The present findings show the feasibility of using the proposed technique to improve classification performance even in cases of clinical samples of limited size. Given the concerns surrounding small sample sizes, our technique represents a meaningful step forward in pattern recognition methodology, particularly when applied to the diagnostic classification of neurodevelopmental disorders. The proposed technique has also been tested with other available benchmark datasets, and the experimental outcomes showed that the accuracy of classification using virtual samples was superior to that using the original training data without virtual samples.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
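As a generic illustration of the idea above (not the paper's generator), virtual samples can be produced by convex interpolation between pairs of real samples, so every synthetic point stays inside the observed data range:

```python
# Augment a small sample set with synthetic points lying between real points.
import random

def virtual_samples(real, n_virtual, rng):
    virtual = []
    for _ in range(n_virtual):
        a, b = rng.sample(real, 2)
        t = rng.random()  # convex combination keeps the point in the data range
        virtual.append(tuple(t * x + (1 - t) * y for x, y in zip(a, b)))
    return virtual

rng = random.Random(0)  # seeded for reproducibility
real = [(1.0, 2.0), (2.0, 1.0), (3.0, 3.0), (2.5, 2.0), (1.5, 2.5)]
synth = virtual_samples(real, 20, rng)
```

A classifier is then trained on `real + synth`; the paper's evaluation compares exactly this augmented training against training on the original samples alone.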
  • 33
    Publication Date: 2019
    Description: In this paper, I first review signal detection theory (SDT) approaches to perception, and then discuss why it is thought that SDT implies that increasing attention improves performance. Our experiments have shown, however, that this is not necessarily true. Subjects either focused attention on two of four possible locations in the visual field, or diffused attention to all four locations. The stimuli (offset letters), locations, conditions, and tasks were all known in advance, responses were forced-choice, subjects were properly instructed and motivated, and instructions were always valid—conditions that should optimize signal detection. Relative to diffusing attention, focusing attention indeed benefitted discrimination of forward- from backward-pointing Es. However, focusing made it harder to identify a randomly chosen one of 20 letters. That focusing can either aid or disrupt performance, even when cues are valid and conditions are idealized, is surprising, but it can also be explained by SDT, as shown here. These results warn the experimental researcher not to confuse focusing attention with enhancing performance, and warn the modeler not to assume that SDT is unequivocal.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 34
    Publication Date: 2019
    Description: The growing demand for video streaming services increasingly motivates the development of reliable and accurate models for the assessment of Quality of Experience (QoE). Human-related factors, which have a significant influence on QoE, play a crucial role in this task. However, the complexity caused by the multiple effects of those factors on human perception has posed challenges for contemporary studies. In this paper, we inspect the impact of the human-related factors, namely perceptual factors, the memory effect, and the degree of interest. Based on our investigation, a novel QoE model is proposed that effectively incorporates those factors to reflect the user’s cumulative perception. Evaluation results indicate that our proposed model performs excellently in predicting cumulative QoE at any moment within a streaming session.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 35
    Publication Date: 2019
    Description: This paper deals with the Arabic translation taṣawwur in Averroes’ Great Commentary of the term τῶν ἀδιαιρέτων νόησις (“ton adiaireton noesis”, thinking of the indivisibles) in Aristotle’s De anima and the Latin translation from Arabic with (in-)formatio, as quoted by Albertus Magnus [...]
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 36
    Publication Date: 2019
    Description: Opportunistic networks are considered promising network structures that implement traditional infrastructure-based communication by enabling smart mobile devices in the network to contact each other within a fixed communication area. Because of the intermittent and unstable connections between sources and destinations, message routing and forwarding in opportunistic networks have recently become challenging problems. In this paper, to improve the data dissemination environment, we propose an improved routing-forwarding strategy for opportunistic networks utilizing node profiles and location prediction, which mainly includes three consecutive phases: collecting and updating routing state information, community detection and optimization, and node location prediction. Each mobile node in the network is able to establish a network routing matrix after the entire process of information collecting and updating. Owing to the concentrated population in urban areas and the relatively few people in remote areas, the distribution of location predictions roughly presents a type of symmetry in opportunistic networks. The community optimization and location prediction mechanisms can thus be regarded as a significant foundation for data dissemination in the networks. Ultimately, experimental results demonstrate that the proposed algorithm can slightly enhance the delivery ratio and substantially reduce the network overhead and end-to-end delay compared with the other four routing strategies.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 37
    Publication Date: 2019
    Description: With digital media, not only are media extensions of their human users, as McLuhan posited, but there is also a flip or reversal in which the human users of digital media become an extension of those digital media, as these media scoop up their data and use them to the advantage of those who control the media. The implications of this loss of privacy as we become “an item in a data bank” are explored, and the field of captology is described. The feedback of the users of digital media becomes the feedforward for those media.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 38
    Publication Date: 2019
    Description: The main scope of the presented research was the development of an innovative product for the management of city parking lots. Our application will support the implementation of the Smart City concept by using computer vision and communication platforms, which enable the development of new integrated digital services. The use of video cameras can simplify and lower the cost of parking lot control. For parking space detection, an aggregated decision is proposed, employing various metrics computed over a sliding window of frames provided by the camera. The history built over 20 images yields an adaptive background model and accurate detection. The system has shown high robustness on two benchmarks, achieving a recognition rate higher than 93%.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 39
    Publication Date: 2019
    Description: Anomaly detection in network traffic flows is a non-trivial problem in the field of network security due to the complexity of network traffic. However, most machine learning-based detection methods focus on network anomaly detection and ignore user anomaly behavior detection. In real scenarios, anomalous network behavior may harm user interests. In this paper, we propose an anomaly detection model based on time-decay closed frequent patterns to address this problem. The model mines closed frequent patterns from the network traffic of each user and uses a time-decay factor to distinguish the weight of current and historical network traffic. Because of the dynamic nature of user network behavior, a detection model update strategy is provided in the anomaly detection framework. Additionally, the closed frequent patterns can provide interpretable explanations for anomalies. Experimental results show that the proposed method can detect user behavior anomalies, and that its network anomaly detection performance is similar to that of state-of-the-art methods and significantly better than that of the baseline methods.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
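The time-decay factor described above can be illustrated with plain pattern counts (the paper mines closed frequent patterns; the decay mechanics shown here are the same idea applied to simple counts):

```python
# Counts are discounted by a factor 0 < d < 1 per time step, so recent
# traffic outweighs history.

def decayed_support(batches, decay):
    """batches: oldest-to-newest lists of patterns observed per time step."""
    support = {}
    for batch in batches:
        support = {p: c * decay for p, c in support.items()}  # age old counts
        for pattern in batch:
            support[pattern] = support.get(pattern, 0.0) + 1.0
    return support

batches = [["dns", "http"], ["http"], ["ssh"]]
support = decayed_support(batches, decay=0.5)
```

After three steps, the old `dns` pattern has faded to 0.25 while the fresh `ssh` pattern carries full weight, which is what lets the model track drifting user behavior.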
  • 40
    Publication Date: 2019
    Description: Parameterized complexity theory has led to a wide range of algorithmic breakthroughs within the last few decades, but the practicability of these methods for real-world problems is still not well understood. We investigate the practicability of one of the fundamental approaches of this field: dynamic programming on tree decompositions. Indisputably, this is a key technique in parameterized algorithms and modern algorithm design. Despite the enormous impact of this approach in theory, it still has very little influence on practical implementations. The reasons for this phenomenon are manifold. One of them is the simple fact that such an implementation requires a long chain of non-trivial tasks (such as computing the decomposition, preparing it, …). We provide an easy way to implement such dynamic programs that only requires the definition of the update rules. With this interface, dynamic programs for various problems, such as 3-coloring, can be implemented easily in about 100 lines of structured Java code. The theoretical foundation of the success of dynamic programming on tree decompositions is well understood due to Courcelle’s celebrated theorem, which states that every MSO-definable problem can be solved efficiently if a tree decomposition of small width is given. We seek to provide practical access to this theorem as well, by presenting a lightweight model checker for a small fragment of MSO1 (that is, we do not consider “edge-set-based” problems). This fragment is powerful enough to describe many natural problems, and our model checker turns out to be very competitive against similar state-of-the-art tools.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
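The "define only the update rules" style of dynamic programming mentioned above can be shown in miniature on the simplest special case: counting proper 3-colorings of a tree, i.e., a width-1 tree decomposition (the cited framework is Java; this sketch uses Python for brevity):

```python
# dp(v)[c] counts proper 3-colorings of the subtree rooted at v
# with v colored c; the update rule multiplies in, for each child,
# the number of colorings whose root color differs from c.

def count_3colorings(adj, root):
    def dp(v, parent):
        counts = [1, 1, 1]
        for w in adj[v]:
            if w == parent:
                continue
            child = dp(w, v)
            for c in range(3):
                counts[c] *= sum(child[d] for d in range(3) if d != c)
        return counts
    return sum(dp(root, None))

# A path on 4 vertices: 0-1-2-3. A tree on n vertices has
# 3 * 2**(n-1) proper 3-colorings.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
```

On genuine tree decompositions the same pattern holds, except the tables are indexed by colorings of whole bags rather than single vertices.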
  • 41
    Publication Date: 2019
    Description: Let V be a finite set of positive integers with sum equal to a multiple of the integer b. When does V have a partition into b parts so that all parts have equal sums? We develop algorithmic constructions which yield positive, albeit incomplete, answers for the following classes of set V, where n is a given positive integer: (1) an initial interval {a ∈ ℤ⁺ : a ≤ n}; (2) an initial interval of primes {p ∈ ℙ : p ≤ n}, where ℙ is the set of primes; (3) a divisor set {d ∈ ℤ⁺ : d | n}; (4) an aliquot set {d ∈ ℤ⁺ : d | n, d < n}. Open general questions and conjectures are included for each of these classes.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
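For class (1) above, feasibility on small instances can be certified by plain backtracking; this brute-force sketch is not one of the paper's constructions, only a way to exhibit a witness partition:

```python
# Partition {1, ..., n} into b parts of equal sum, when b divides the total.

def equal_sum_partition(n, b):
    total = n * (n + 1) // 2
    if total % b:
        return None
    target = total // b
    parts = [[] for _ in range(b)]
    sums = [0] * b

    def place(v):                      # place values n, n-1, ..., 1
        if v == 0:
            return True
        tried = set()
        for i in range(b):
            if sums[i] + v <= target and sums[i] not in tried:
                tried.add(sums[i])     # skip symmetric part states
                parts[i].append(v)
                sums[i] += v
                if place(v - 1):
                    return True
                parts[i].pop()
                sums[i] -= v
        return False

    return parts if place(n) else None

parts = equal_sum_partition(8, 3)      # total 36, so three parts of sum 12
```

The paper's constructions replace this exponential search with direct pairing arguments, which is what makes the positive answers algorithmic.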
  • 42
    Publication Date: 2019
    Description: The blockchain technique is becoming more and more popular due to advantages such as its stability and dispersed nature; the idea presented here is based on blockchain activity paradigms. Another important field is machine learning, which is increasingly used in practice. Unfortunately, training or overtraining artificial neural networks is very time-consuming and requires high computing power. In this paper, we propose using a blockchain technique to train neural networks. This type of activity is important because of the possible search for initial weights of the network, which affect training speed by making the gradient decrease faster. We performed tests with much heavier calculations to indicate that such an approach is possible. However, this type of solution can also be used for less demanding calculations, i.e., only a few iterations of training and finding a better configuration of initial weights.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 43
    Publication Date: 2019
    Description: The application of blockchain technology to the energy sector promises to derive new operating models focused on local generation and sustainable practices, which are driven by peer-to-peer collaboration and community engagement. However, real-world energy blockchains differ from typical blockchain networks insofar as they must interoperate with grid infrastructure, adhere to energy regulations, and embody engineering principles. Naturally, these additional dimensions make real-world energy blockchains highly dependent on the participation of grid operators, engineers, and energy providers. Although much theoretical and proof-of-concept research has been published on energy blockchains, this research aims to establish a lens on real-world projects and implementations that may inform the alignment of academic and industry research agendas. This research classifies 131 real-world energy blockchain initiatives to develop an understanding of how blockchains are being applied to the energy domain, what type of failure rates can be observed from recently reported initiatives, and what level of technical and theoretical details are reported for real-world deployments. The results presented from the systematic analysis highlight that real-world energy blockchains are (a) growing exponentially year-on-year, (b) producing relatively low failure/drop-off rates (~7% since 2015), and (c) demonstrating information sharing protocols that produce content with insufficient technical and theoretical depth.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 44
    Publication Date: 2019
    Description: In this study, we address the problem of compaction of Church numerals. Church numerals are unary representations of natural numbers on the scheme of lambda terms. We propose a novel decomposition scheme from a given natural number into an arithmetic expression using tetration, which enables us to obtain a compact representation of lambda terms that leads to the Church numeral of the natural number. For natural number n, we prove that the size of the lambda term obtained by the proposed method is O((slog₂ n)(log n / log log n)). Moreover, we experimentally confirmed that the proposed method outperforms the binary representation of Church numerals on average when n is less than approximately 10,000.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
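The objects being compacted above are easy to exhibit in any language with closures. Here are Church numerals in Python, together with exponentiation, the building block that tetration iterates (this only illustrates the encoding, not the paper's decomposition scheme):

```python
# A Church numeral n is a function that applies f to x exactly n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))
exp = lambda b, e: e(b)          # b ** e on Church numerals, in one application

def to_int(n):
    """Decode a Church numeral by counting applications."""
    return n(lambda k: k + 1)(0)

def church(k):
    n = zero
    for _ in range(k):
        n = succ(n)
    return n

three = church(3)
two = church(2)
nine = exp(three, two)           # 3 ** 2 as a Church numeral
```

Because `exp` costs a single application, towers of exponentials (tetration) yield very small terms for very large numbers, which is the source of the compaction.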
  • 45
    Publication Date: 2019
    Description: The spectral efficiency of wireless networks can be significantly improved by exploiting spatial multiplexing techniques known as multi-user MIMO. These techniques enable the allocation of multiple users to the same time-frequency block, thus reducing the interference between users. There is ample evidence that user groupings can have a significant impact on the performance of spatial multiplexing. The situation is even more complex when the data packets have priorities and deadlines for delivery. Hence, combining packet queue management and beamforming can considerably enhance the overall system performance. In this paper, we propose a combination of beamforming and scheduling to improve the overall performance of multi-user MIMO systems under realistic conditions where data packets have both priorities and deadlines beyond which they become obsolete. This method, dubbed Reward Per Second (RPS), combines advanced matrix factorization at the physical layer with recently developed queue management techniques. We demonstrate the merits of this technique compared to other state-of-the-art scheduling methods through simulations.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 46
    Publication Date: 2019
    Description: In the vehicle routing problem with simultaneous pickup and delivery (VRPSPD), customers demanding both delivery and pickup operations have to be visited once by a single vehicle. In this work, we propose a fast randomized algorithm using a nearest neighbor strategy to tackle an extension of the VRPSPD in which the fleet of vehicles is heterogeneous. This variant is an NP-hard problem, which in practice makes it impossible to solve large instances to proven optimality. To evaluate the proposal, we use benchmark instances from the literature and compare our results to those obtained by a state-of-the-art algorithm. Our approach presents very competitive results, not only improving several of the known solutions, but also running in a shorter time.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
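The nearest neighbor strategy named above, stripped of the pickup/delivery loads and the heterogeneous fleet, reduces to a few lines. The depot position, customer names, and Euclidean distance below are assumptions of this sketch, not data from the paper:

```python
# Greedy route construction: always visit the closest unvisited customer.
import math

def nearest_neighbor_route(depot, customers):
    route, pos = [], depot
    remaining = dict(customers)
    while remaining:
        nxt = min(remaining, key=lambda c: math.dist(pos, remaining[c]))
        route.append(nxt)
        pos = remaining.pop(nxt)
    return route

depot = (0.0, 0.0)
customers = {"a": (0.0, 1.0), "b": (5.0, 0.0), "c": (0.0, 2.0)}
route = nearest_neighbor_route(depot, customers)
```

Randomizing the choice among the few nearest candidates, as the abstract's "fast randomized algorithm" suggests, diversifies the constructed routes across restarts.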
  • 47
    Publication Date: 2019
    Description: An exemplary paradigm of how AI can become a disruptive technology via the utilization of blockchain comes straight from the world of deep learning. Data scientists have long struggled to maintain the quality of a dataset for machine learning by an AI entity. Datasets can be very expensive to purchase, and constructing and maintaining the integrity of a dataset is difficult, depending on both the proper selection of its elements and the homogeneity of the data contained within. Blockchain, as a highly secure storage medium, presents a technological quantum leap in maintaining data integrity, and its immutability creates a fruitful environment for building high-quality, permanent, and growing datasets for deep learning. The combination of AI and blockchain could impact fields like the Internet of Things (IoT), identity, financial markets, civil governance, smart cities, small communities, supply chains, personalized medicine, and other fields, and thereby deliver benefits to many people.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 48
    Publication Date: 2019
    Description: The current trend in the European Union (EU) is to support the development of online dispute resolution (ODR) that saves financial and human resources. Therefore, research articles mainly deal with the design of new ODR solutions, without researching the social aspects of using different kinds of ODR solutions. For this reason, the main aim of the article is an empirical evaluation of two kinds of ODR solutions in business-to-business (B2B) relationships from the perspective of a selected social category. The article focuses on: (1) comparing unassisted and smart assisted negotiation while using the artificial intelligence approach; (2) the satisfaction and attitudes of Generation Y members from the Czech and Slovak Republic towards different ways of negotiating. The conclusions of this study can help researchers to design or improve existing ODR solutions, and companies to choose the most suitable managers from Generation Y for B2B negotiation. The results show that Generation Y members prefer computer-mediated communication as compared to face to face negotiation; the participants were more satisfied with the negotiation process when using smart assisted negotiation. Through a computer-mediated negotiation, even sellers with lower emotional stability can maintain an advantageous position. Similarly, buyers with lower agreeableness or higher extraversion can negotiate more favorable terms and offset their loss.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 49
    Publication Date: 2019
    Description: Nowadays, the amount of digitally available information has grown tremendously, with real-world data graphs reaching millions or even billions of vertices. Hence, community detection, in which groups of vertices are formed according to a well-defined similarity measure, has never been more essential, affecting a vast range of scientific fields such as bio-informatics, sociology, discrete mathematics, nonlinear dynamics, digital marketing, and computer science. Even though an impressive amount of research has already been published to tackle this NP-hard class of problem, the existing methods and algorithms have largely proven inefficient and severely unscalable. In this regard, the purpose of this manuscript is to combine the network topology properties expressed by the loose similarity and the local edge betweenness, a newly proposed alternative to Girvan–Newman's edge betweenness measure, with the intrinsic user content information, in order to introduce a novel and highly distributed hybrid community detection methodology. The proposed approach has been thoroughly tested on various real social graphs, roundly compared to other classic divisive community detection algorithms that serve as baselines, and practically proven exceptionally scalable, highly efficient, and adequately accurate in terms of revealing the subjacent network hierarchy.
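For context, the classic Girvan–Newman edge betweenness that the proposed "local edge betweenness" aims to replace can be computed with a Brandes-style accumulation over BFS trees. The sketch below handles small unweighted, undirected graphs only; the paper's distributed, local variant is not reproduced here.

```python
from collections import deque, defaultdict

def edge_betweenness(adj):
    """Classic edge betweenness on an unweighted, undirected graph given
    as {node: set(neighbors)}. Counts, per edge, the shortest paths
    passing through it, accumulated Brandes-style."""
    bet = defaultdict(float)
    for s in adj:
        # single-source BFS with shortest-path counts
        dist = {s: 0}
        sigma = defaultdict(float); sigma[s] = 1.0
        preds = defaultdict(list)
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # accumulate edge dependencies back toward the source
        delta = defaultdict(float)
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                bet[tuple(sorted((v, w)))] += c
                delta[v] += c
    # each undirected edge was counted from both endpoints' BFS sweeps
    return {e: b / 2.0 for e, b in bet.items()}
```

Removing the highest-scoring edge and recomputing is the divisive step of the Girvan–Newman family, whose cost is exactly what motivates local approximations.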
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 50
    Publication Date: 2019
    Description: With the development of artificial intelligence, machine learning and deep learning algorithms are widely applied in attack detection models. Adversarial attacks against such models become an inevitable problem, yet research on defending cross-site scripting (XSS) attack detection models against them is lacking. It is therefore important to design a method that can effectively improve a detection model's robustness to attack. In this paper, we present a method based on reinforcement learning (called RLXSS), which aims to optimize the XSS detection model to defend against adversarial attacks. First, adversarial samples of the detection model are mined by an adversarial attack model based on reinforcement learning. Secondly, the detection model and the adversarial model are trained alternately: after each round, the newly-mined adversarial samples are marked as malicious and used to retrain the detection model. Experimental results show that the proposed RLXSS model can successfully mine adversarial samples that escape black-box and white-box detection while retaining their aggressive features. Moreover, by alternately training the detection model and the adversarial attack model, the escape rate against the detection model is continuously reduced, indicating that the method improves the detection model's ability to defend against attacks.
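The alternating mine-and-retrain loop described in the abstract can be caricatured with a toy signature detector and a fixed set of payload mutations standing in for the reinforcement-learning adversary. Everything below — the payload, the mutations, and the signature model — is illustrative, not RLXSS itself.

```python
# Hypothetical escape transformations the "adversary" may apply.
MUTATIONS = [
    lambda s: s.replace("<", "%3C"),          # URL-encode the bracket
    lambda s: s.replace("script", "scRipt"),  # case mangling
    lambda s: s.replace(" ", "/**/"),         # comment-based spacing
]

def detect(sample, signatures):
    """Toy signature detector standing in for the XSS detection model."""
    return any(sig in sample for sig in signatures)

def mine_adversarial(sample, signatures):
    """Find a mutation that evades the current detector (a crude,
    deterministic stand-in for the RL-driven adversarial model)."""
    for mutate in MUTATIONS:
        candidate = mutate(sample)
        if not detect(candidate, signatures):
            return candidate
    return None

def alternate_train(payload, signatures, rounds=5):
    """Alternating scheme from the abstract: mine an escaping sample,
    mark it malicious, and fold it back into the detector."""
    for _ in range(rounds):
        adversarial = mine_adversarial(payload, signatures)
        if adversarial is None:
            break  # the detector now catches every mutation we can make
        signatures.add(adversarial)   # "retrain" on the new sample
    return signatures
```

Each round shrinks the adversary's escape options, which mirrors the continuously-reduced escape rate reported in the abstract.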
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 51
    Publication Date: 2019
    Description: Herein, robust pole placement controller design for linear uncertain discrete time dynamic systems is addressed. The adopted approach uses the so called “D regions” where the closed loop system poles are determined to lie. The discrete time pole regions corresponding to the prescribed damping of the resulting closed loop system are studied. The key issue is to determine the appropriate convex approximation to the originally non-convex discrete-time system pole region, so that numerically efficient robust controller design algorithms based on Linear Matrix Inequalities (LMI) can be used. Several alternatives for relatively simple inner approximations and their corresponding LMI descriptions are presented. The developed LMI region for the prescribed damping can be arbitrarily combined with other LMI pole limitations (e.g., stability degree). Simple algorithms to calculate the matrices for LMI representation of the proposed convex pole regions are provided in a concise way. The results and their use in a robust controller design are illustrated on a case study of a laboratory magnetic levitation system.
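For reference, the LMI-region machinery the abstract builds on is the standard Chilali–Gahinet formulation, quoted here as background (these are the textbook forms, not the authors' specific inner approximations of the damping region):

```latex
% LMI region with characteristic function f_D(z) = L + zM + \bar{z}M^T:
\mathcal{D} \;=\; \bigl\{\, z \in \mathbb{C} \;:\; L + zM + \bar{z}M^{T} \prec 0 \,\bigr\},
\qquad L = L^{T}.
% Pole placement: all eigenvalues of A lie in D iff there exists X = X^T \succ 0 with
L \otimes X \;+\; M \otimes (AX) \;+\; M^{T} \otimes (AX)^{T} \;\prec\; 0.
% Example: L = -I_2 and M = \begin{pmatrix}0&1\\0&0\end{pmatrix} give
f_{\mathcal{D}}(z) = \begin{pmatrix} -1 & z \\ \bar{z} & -1 \end{pmatrix} \prec 0
\;\Longleftrightarrow\; |z| < 1,
% i.e. the open unit disk used for discrete-time stability.
```

Convex approximations of the (non-convex) constant-damping region are then expressed by particular choices of L and M that can be intersected with other LMI constraints such as the unit disk above.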
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 52
    Publication Date: 2019
    Description: The objective of the cell suppression problem (CSP) is to protect sensitive cell values in tabular data under the presence of linear relations concerning marginal sums. Previous algorithms for solving CSPs ensure that every sensitive cell has enough uncertainty on its values based on the interval width of all possible values. However, we find that every deterministic CSP algorithm is vulnerable to an adversary who possesses the knowledge of that algorithm. We devise a matching attack scheme that narrows down the ranges of sensitive cell values by matching the suppression pattern of an original table with that of each candidate table. Our experiments show that actual ranges of sensitive cell values are significantly narrower than those assumed by the previous CSP algorithms.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 53
    Publication Date: 2019
    Description: Unmanned and unwomaned aerial vehicles (UAV), or drones, are breaking and creating new boundaries of image-based communication. Using social network analysis and critical discourse analysis, we examine the 60 most popular question threads about drones on Zhihu, China’s largest social question answering platform. We trace how controversial issues around these supposedly novel tech products are mediated, domesticated, visualized, or marginalized via digital representational technology. Supported by Zhihu’s topic categorization algorithm, drone-related discussions form topic clusters. These topic clusters gain currency in the government-regulated cyberspace, where their meanings remain open to widely divergent interpretations and mediation by various agents. We find that the largest drone company DJI occupies a central and strongly interconnected position in the discussions. Drones are, moreover, represented as objects of consumption, technological advancement, national future, and uncertainty. At the same time, the sense-making process of drone-related discussions evokes emerging sets of narrative user identities with potential political effects. Users engage in digital representational technologies publicly and collectively to raise questions and represent their views on new technologies. Therefore, we argue that platforms like Zhihu are essential when studying views of the Chinese citizenry towards technological developments.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 54
    Publication Date: 2019
    Description: The ageing population has become an increasing phenomenon world-wide, leading to a growing need for specialised help. Improving the quality of life of older people can lower the risk of depression and social isolation, but it requires a multi-dimensional approach through continuous monitoring and training of the main health domains (e.g., cognitive, motor, nutritional and behavioural). To this end, the use of mobile and e-health services tailored to the user’s needs can help stabilise their health conditions, in terms of physical, mental, and social capabilities. In this context, the INTESA project proposes a set of personalised monitoring and rehabilitation services for older people, based on mobile and wearable technologies ready to be used either at home or in residential long-term care facilities. We evaluated the proposed solution by deploying a suite of services in a nursing home and defining customised protocols to involve both guests (primary users) and nursing care personnel (secondary users). In this paper, we present the extended results obtained after the one-year period of experimentation in terms of technical reliability of the system, Quality of Experience, and user acceptance for both the user categories.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 55
    Publication Date: 2019
    Description: Presently, we are observing an explosion of data that needs to be stored and processed over the Internet, characterized by large volume, velocity and variety. For this reason, software developers have begun to look at NoSQL solutions for data storage. However, operations that are trivial in traditional Relational DataBase Management Systems (DBMSs) can become very complex in NoSQL DBMSs. This is the case for the join operation, which establishes a connection between two or more DB structures and whose construct is not explicitly available in many NoSQL databases. As a consequence, either the data model has to be changed or a set of operations has to be performed to address particular queries on data. Thus, open questions are: how do NoSQL solutions work when they have to perform join operations on data that are not natively supported? What is the quality of NoSQL solutions in such cases? In this paper, we deal with these issues, specifically considering one of the major NoSQL document-oriented DBs available on the market: MongoDB. In particular, we discuss an approach to perform join operations at the application layer in MongoDB that allows us to preserve the data model. We analyse the performance of the proposed approach, discussing the overhead it introduces in comparison with SQL-like DBs.
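An application-layer equi-join of the kind discussed above amounts to a client-side hash join over documents fetched from two collections. The sketch below works on plain dicts (as returned, e.g., by a driver's `find()` calls, which are omitted so the example stays self-contained); collection and field names are hypothetical.

```python
def app_layer_join(left_docs, right_docs, left_key, right_key):
    """Client-side hash equi-join of two document collections, mimicking
    what an application must do when the datastore offers no native join.
    Documents are plain dicts."""
    # build phase: index the right-hand documents by their join key
    index = {}
    for r in right_docs:
        index.setdefault(r.get(right_key), []).append(r)
    # probe phase: match each left-hand document against the index
    joined = []
    for l in left_docs:
        for r in index.get(l.get(left_key), []):
            merged = dict(l)
            # prefix right-hand fields to avoid name clashes
            merged.update({f"right_{k}": v for k, v in r.items()})
            joined.append(merged)
    return joined
```

The build-then-probe structure keeps the cost at one pass over each collection, but both result sets must cross the network to the client — exactly the overhead the paper measures against SQL-like DBs.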
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 56
    Publication Date: 2019
    Description: There are two main challenges in wireless multimedia sensors networks: energy constraints and providing DiffServ. In this paper, a joint flow control, routing, scheduling, and power control scheme based on a Lyapunov optimization framework is proposed to increase network lifetime and scheduling fairness. For an adaptive distribution of transmission opportunities, a differentiated queueing services (DQS) scheme is adopted for maintaining data queues. In the Lyapunov function, different types of queues are normalized for a unified dimension. To prolong network lifetime, control coefficients are designed according to the characteristics of the wireless sensor networks. The power control problem is proved to be a convex optimization problem and two optimal algorithms are discussed. Simulation results show that, compared with existing schemes, the proposed scheme can achieve a better trade-off between QoS performances and network lifetime. The simulation results also show that the scheme utilizing the distributed media access control scheme in scheduling performs best in the transmission of real-time services.
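The Lyapunov optimization framework referenced above conventionally works with a quadratic Lyapunov function and a drift-plus-penalty bound. These are the standard forms; the weights w_i stand in for the paper's queue normalization, and p(t) for its lifetime-related penalty (e.g., transmit energy), traded against backlog via the control coefficient V:

```latex
% Quadratic Lyapunov function over the (normalized) queue backlogs:
L(\Theta(t)) \;=\; \tfrac{1}{2} \sum_{i} w_i \, Q_i(t)^{2}
% One-slot conditional Lyapunov drift:
\Delta(\Theta(t)) \;=\; \mathbb{E}\bigl[\, L(\Theta(t+1)) - L(\Theta(t)) \,\big|\, \Theta(t) \,\bigr]
% Drift-plus-penalty: in each slot, greedily minimize
\Delta(\Theta(t)) \;+\; V \, \mathbb{E}\bigl[\, p(t) \,\big|\, \Theta(t) \,\bigr]
```

Larger V pushes the solution toward lower penalty (longer lifetime) at the cost of larger queue backlogs, which is the QoS/lifetime trade-off the simulations explore.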
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 57
    Publication Date: 2019
    Description: We propose in this paper a two-phase approach that decomposes the process of solving the three-dimensional single Container Loading Problem (CLP) into subsequent tasks: (i) the generation of blocks of boxes and (ii) the loading of blocks into the container. The first phase is deterministic, and it is performed by means of constructive algorithms from the literature. The second phase is non-deterministic, and it is performed with the use of Generate-and-Solve (GS), a problem-independent hybrid optimization framework based on problem instance reduction that combines a metaheuristic with an exact solver. Computational experiments performed on benchmark instances indicate that our approach presents competitive results compared to those found by state-of-the-art algorithms, particularly for problem instances consisting of a few types of boxes. In fact, we present new best solutions for classical instances from groups BR1 and BR2.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 58
    Publication Date: 2019
    Description: The goal of the present research is to contribute to the detection of tax fraud concerning personal income tax returns (IRPF, in Spanish) filed in Spain, through the use of Machine Learning advanced predictive tools, by applying Multilayer Perceptron neural network (MLP) models. The possibilities springing from these techniques have been applied to a broad range of personal income return data supplied by the Institute of Fiscal Studies (IEF). The use of the neural networks enabled taxpayer segmentation as well as calculation of the probability concerning an individual taxpayer’s propensity to attempt to evade taxes. The results showed that the selected model has an efficiency rate of 84.3%, implying an improvement in relation to other models utilized in tax fraud detection. The proposal can be generalized to quantify an individual’s propensity to commit fraud with regards to other kinds of taxes. These models will support tax offices to help them arrive at the best decisions regarding action plans to combat tax fraud.
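The propensity-scoring idea above can be sketched as a one-hidden-layer MLP forward pass producing a probability, followed by threshold-based segmentation. The layer sizes, features, and weights of the paper's network are not public, so every number below is hypothetical.

```python
import math

def mlp_propensity(x, W1, b1, W2, b2):
    """One-hidden-layer MLP forward pass mapping a vector of return
    features to a fraud-propensity probability in (0, 1)."""
    # hidden layer: one tanh unit per column of W1
    h = [math.tanh(sum(w * xi for w, xi in zip(col, x)) + bj)
         for col, bj in zip(W1, b1)]
    # sigmoid output unit -> propensity score
    z = sum(w * hj for w, hj in zip(W2, h)) + b2
    return 1.0 / (1.0 + math.exp(-z))

def segment(taxpayers, model_args, threshold=0.5):
    """Split taxpayers into an audit-priority segment by propensity."""
    return [x for x in taxpayers
            if mlp_propensity(x, *model_args) > threshold]
```

In practice the weights come from training on labelled returns (as the paper does with IEF data); here they would simply be plugged into `model_args`.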
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 59
    Publication Date: 2019
    Description: Gender role norms have been widely studied in the offline partner violence context. Different studies have indicated that internalizing these norms was associated with dating violence. However, very few research works have analyzed this relation in forms of aggression against partners and former partners using information and communication technologies (ICT). The objective of the present study was to examine the co-occurrence of cyber dating abuse by analyzing the extent to which victimization and perpetration overlap, and by analyzing the differences according to conformity to the masculine gender norms between men who are perpetrators or victims of cyber dating abuse. The participants were 614 male university students, and 26.5% of the sample reported having been a victim and perpetrator of cyber dating abuse. Nonetheless, the regression analyses did not reveal any statistically significant association between conformity to masculine gender norms and practicing either perpetration or victimization by cyber dating abuse.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 60
    Publication Date: 2019
    Description: In this paper, a novel constraint-following control for uncertain robot manipulators, inspired by analytical dynamics, is developed. The desired motion can be regarded as a set of external constraints on the system. However, it is not easy to obtain explicit dynamic equations for such constrained systems. For a multibody system subject to motion constraints, it is common practice to introduce Lagrange multipliers, but using these to obtain explicit dynamical equations is a very difficult task. To obtain such equations more simply, motion constraints are handled here using the Udwadia-Kalaba equation (UKE). Then, since real-life robot manipulators are usually uncertain (but bounded), continuous controllers are used to compensate for the uncertainties. No linearizations or approximations of the robot manipulator system are made throughout, and the tracking errors are bounded. A redundant manipulator of the SCARA type serves as the example to illustrate the methodology. Numerical results demonstrate the simplicity and ease of implementation of the methodology.
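The Udwadia-Kalaba equation invoked above has a well-known closed form, stated here for the ideal case with exactly known parameters (the paper's contribution lies in the uncertainty-compensating controller built on top of it):

```latex
% Unconstrained dynamics and the constraints, differentiated to acceleration form:
M(q,t)\,\ddot{q} \;=\; Q(q,\dot{q},t), \qquad A(q,\dot{q},t)\,\ddot{q} \;=\; b(q,\dot{q},t)
% Udwadia-Kalaba equation: the constrained acceleration in closed form,
% with a = M^{-1}Q and (\cdot)^{+} the Moore-Penrose pseudoinverse:
\ddot{q} \;=\; a \;+\; M^{-1/2}\bigl(A\,M^{-1/2}\bigr)^{+}\bigl(b - A\,a\bigr)
```

The pseudoinverse replaces the Lagrange multipliers entirely, which is why the constrained equations of motion come out explicit rather than implicit.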
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 61
    Publication Date: 2019
    Description: To improve the overall accuracy of tidal forecasting and ameliorate the low accuracy of single harmonic analysis, this paper proposes a combined tidal forecasting model based on harmonic analysis and autoregressive integrated moving average–support vector regression (ARIMA-SVR). In tidal analysis, the resultant tide can be considered as a superposition of the astronomical tide level and the non-astronomical tidal level, which are affected by the tide-generating force and environmental factors, respectively. The tidal data are de-noised via wavelet analysis, and the astronomical tide level is subsequently calculated via harmonic analysis. The residual sequence generated via harmonic analysis is used as the sample dataset of the non-astronomical tidal level, and the tidal height of the system is calculated by the ARIMA-SVR model. Finally, the tidal values are predicted by linearly summing the calculated results of both systems. The simulation results were validated against the measured tidal data at the tidal station of Bay Waveland Yacht Club, USA. By considering the residual non-astronomical tide level effects (which are ignored in traditional harmonic analysis), the combined model improves the accuracy of tidal prediction. Moreover, the combined model is feasible and efficient.
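The decomposition described above — astronomical tide from harmonic constituents plus a data-driven forecast of the residual — can be sketched as follows. A one-step AR(1) term is used as a simple stand-in for the paper's ARIMA-SVR residual model, and the constituent values and coefficient are hypothetical.

```python
import math

def astronomical_tide(t, constituents):
    """Harmonic superposition: each constituent is (amplitude,
    angular_speed, phase), as produced by harmonic analysis."""
    return sum(A * math.cos(w * t + phi) for A, w, phi in constituents)

def hybrid_forecast(observed, times, constituents, ar_coef=0.8):
    """Combined prediction: astronomical level at the next step plus a
    one-step AR(1) forecast of the non-astronomical residual."""
    residuals = [y - astronomical_tide(t, constituents)
                 for y, t in zip(observed, times)]
    next_t = times[-1] + (times[-1] - times[-2])  # assumes uniform sampling
    return astronomical_tide(next_t, constituents) + ar_coef * residuals[-1]
```

When the residual series is zero the hybrid collapses to plain harmonic prediction; the gain of the combined model comes entirely from modelling that residual.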
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 62
    Publication Date: 2019
    Description: The rapid development of distributed technology has made it possible to store and query massive trajectory data. As a result, a variety of schemes for big trajectory data management have been proposed. However, the factor of data transmission is not considered in most of these, resulting in a certain impact on query efficiency. In view of that, we present THBase, a coprocessor-based scheme for big trajectory data management in HBase. THBase introduces a segment-based data model and a moving-object-based partition model to solve massive trajectory data storage, and exploits a hybrid local secondary index structure based on Observer coprocessor to accelerate spatiotemporal queries. Furthermore, it adopts certain maintenance strategies to ensure the colocation of relevant data. Based on these, THBase designs node-locality-based parallel query algorithms by Endpoint coprocessor to reduce the overhead caused by data transmission, thus ensuring efficient query performance. Experiments on datasets of ship trajectory show that our schemes can significantly outperform other schemes.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 63
    Publication Date: 2019
    Description: Software defect prediction is an important means of guaranteeing software quality. Because there are often insufficient historical data within a project to train a classifier, cross-project defect prediction (CPDP) has been recognized as a fundamental approach. However, traditional defect prediction methods use feature attributes to represent samples, which cannot avoid negative transfer and may result in a poorly performing model in CPDP. This paper proposes a multi-source cross-project defect prediction method based on dissimilarity space (DM-CPDP). This method not only retains the original information but also captures each sample's relationship to other objects, enhancing the discriminative power of the sample attributes with respect to the class label. The method first uses density-based clustering to construct a prototype set from the cluster centers of samples in the target set. Then, the arc-cosine kernel is used to calculate sample dissimilarities between the prototype set and the source domains or the target set, forming the dissimilarity space. In this space, the training set is obtained with the earth mover's distance (EMD) method. For the unlabeled samples converted from the target set, the k-Nearest Neighbor (KNN) algorithm is used to label them. Finally, the model is learned from the training data with the TrAdaBoost method and used to predict new potential defects. The experimental results show that this approach performs better than other traditional CPDP methods.
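The dissimilarity-space construction at the heart of the method maps every sample to the vector of its dissimilarities to a fixed prototype set. In this sketch plain Euclidean distance replaces the paper's arc-cosine-kernel dissimilarity, and the prototype vectors are hypothetical.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def dissimilarity_embedding(samples, prototypes, dissim=euclidean):
    """Represent each feature vector by its dissimilarities to the
    prototypes: an n-sample input becomes an n x len(prototypes)
    matrix, regardless of the original feature dimension."""
    return [[dissim(x, p) for p in prototypes] for x in samples]
```

Because source- and target-project samples are all re-expressed relative to the same prototypes, downstream steps (EMD selection, KNN labeling, TrAdaBoost) operate in one shared space, which is what mitigates negative transfer.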
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 64
    Publication Date: 2019
    Description: In this paper, we present an analysis of the mining process of two popular assets, Bitcoin and gold. The analysis highlights that Bitcoin, more specifically its underlying technology, is a “safe haven” that allows facing the modern environmental challenges better than gold. Our analysis emphasizes that crypto-currencies systems have a social and economic impact much smaller than that of the traditional financial systems. We present an analysis of the several stages needed to produce an ounce of gold and an artificial agent-based market model simulating the Bitcoin mining process and allowing the quantification of Bitcoin mining costs. In this market model, miners validate the Bitcoin transactions using the proof of work as the consensus mechanism, get a reward in Bitcoins, sell a fraction of them to cover their expenses, and stay competitive in the market by buying and divesting hardware units and adjusting their expenses by turning off/on their machines according to the signals provided by a technical analysis indicator, the so-called relative strength index.
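The relative strength index that drives the simulated miners' on/off decisions is a standard technical indicator. The sketch below uses simple averages over the window (Wilder's exponential smoothing is omitted for brevity); the 14-step period is the conventional default, not a value taken from the paper.

```python
def rsi(prices, period=14):
    """Simple-average RSI over the last `period` price changes:
    100 - 100 / (1 + avg_gain / avg_loss)."""
    if len(prices) < period + 1:
        raise ValueError("not enough data")
    # consecutive price changes over the window
    deltas = [b - a for a, b in zip(prices[-period - 1:], prices[-period:])]
    avg_gain = sum(d for d in deltas if d > 0) / period
    avg_loss = sum(-d for d in deltas if d < 0) / period
    if avg_loss == 0:
        return 100.0  # only gains in the window
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)
```

An agent in such a model might, for instance, switch machines off when the RSI signals an overextended price and mining margins are thin.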
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 65
    Publication Date: 2019
    Description: In recent years, almost all top-performing object detection networks have used CNN (convolutional neural network) features. In this work, we add feature fusion to the object detection network to obtain a better CNN feature, one that combines deep but semantic features with shallow but high-resolution features, thus improving performance on small objects. An attention mechanism is also applied to our object detection network, AF R-CNN (attention mechanism and convolution feature fusion based object detection), to enhance the impact of significant features and weaken background interference. Our AF R-CNN is a single end-to-end network. We choose the pre-trained network VGG-16 to extract CNN features. Our detection network is trained on the PASCAL VOC 2007 and 2012 datasets. Empirical evaluation on the PASCAL VOC 2007 dataset demonstrates the effectiveness of our approach: AF R-CNN achieves an object detection accuracy of 75.9% on PASCAL VOC 2007, six points higher than Faster R-CNN.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 66
    Publication Date: 2019
    Description: Link prediction is the task of predicting whether there is a link between two nodes in a network. Traditional link prediction methods that assume handcrafted features (such as common neighbors) as the link-formation mechanism are not universal. Other popular methods tend to learn a representation of the link, but they cannot represent the link fully. In this paper, we propose the Edge-Nodes Representation Neural Machine (ENRNM), a novel method which learns abundant topological features from the network as the link's representation to promote the formation of the link. The ENRNM learns the link-formation mechanism by combining the representation of the edge with the representations of the nodes on its two sides to form the link's full representation. To predict the link's existence, we train a fully connected neural network which can learn meaningful and abundant patterns. We show that the features of the edge and of the two nodes have the same importance in link formation. Comprehensive experiments are conducted on eight networks, and the results demonstrate that ENRNM not only exceeds many state-of-the-art link prediction methods but also performs very well on diverse networks with different structures and characteristics.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 67
    Publication Date: 2019
    Description: Aiming at granting wide access to their contents, online information providers often choose not to have registered users, and therefore must give up personalization. In this paper, we focus on the case of non-personalized news recommender systems, and explore persuasive techniques that can, nonetheless, be used to enhance recommendation presentation, with the aim of capturing the user’s interest on suggested items leveraging the way news is perceived. We present the results of two evaluations “in the wild”, carried out in the context of a real online magazine and based on data from 16,134 and 20,933 user sessions, respectively, where we empirically assessed the effectiveness of persuasion strategies which exploit logical fallacies and other techniques. Logical fallacies are inferential schemes known since antiquity that, even if formally invalid, appear as plausible and are therefore psychologically persuasive. In particular, our evaluations allowed us to compare three persuasive scenarios based on the Argumentum Ad Populum fallacy, on a modified version of the Argumentum ad Populum fallacy (Group-Ad Populum), and on no fallacy (neutral condition), respectively. Moreover, we studied the effects of the Accent Fallacy (in its visual variant), and of positive vs. negative Framing.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 68
    Publication Date: 2019
    Description: Recently, with the development of big data and 5G networks, the number of intelligent mobile devices has increased dramatically; therefore, the data that need to be transmitted and processed in networks have grown exponentially. It is difficult for the end-to-end communication mechanism of traditional routing algorithms to handle the massive data transmission between mobile devices. Consequently, opportunistic social networks propose that effective data transmission can be implemented by selecting appropriate relay nodes. At present, most existing routing algorithms find suitable next-hop nodes by comparing the similarity degree between nodes. However, when evaluating the similarity between two mobile nodes, these routing algorithms consider either only the mobility similarity or only the social similarity between nodes. To improve the data dissemination environment, this paper proposes an effective data transmission strategy (MSSN) utilizing both mobile and social similarities in opportunistic social networks. In our proposed strategy, we first calculate the mobile similarity between neighbor nodes and the destination, set a mobile similarity threshold, and then compute the social similarity between the nodes whose mobile similarity is greater than the threshold. The nodes with a high mobile similarity degree to the destination node are the reliable relay nodes. After simulation experiments and comparison with other existing opportunistic social network algorithms, the results show that the delivery ratio of the proposed algorithm is 0.80 on average, the average end-to-end delay is 23.1% lower than that of the FCNS algorithm (a fuzzy routing-forwarding algorithm exploiting comprehensive node similarity in opportunistic social networks), and the overhead is on average 14.9% lower than that of the Effective Information Transmission Based on Socialization Nodes (EIMST) algorithm.
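The two-stage relay choice described above — a mobile-similarity threshold first, then social-similarity ranking among the survivors — can be sketched as follows. Both similarity metrics are illustrative stand-ins (a precomputed mobile score and a Jaccard overlap of contact sets), not the paper's exact formulas.

```python
def select_relay(neighbors, dest_contacts, threshold):
    """Pick the next-hop relay in the spirit of MSSN.
    `neighbors` maps node id -> (mobile_sim, contact_set);
    `dest_contacts` is the destination's contact set."""
    # stage 1: keep only neighbors mobile-similar enough to the destination
    candidates = {n: info for n, info in neighbors.items()
                  if info[0] >= threshold}
    if not candidates:
        return None  # no reliable relay; keep carrying the message

    def social_sim(contacts):
        # Jaccard overlap with the destination's contact set
        union = contacts | dest_contacts
        return len(contacts & dest_contacts) / len(union) if union else 0.0

    # stage 2: among the survivors, choose the most socially similar node
    return max(candidates, key=lambda n: social_sim(candidates[n][1]))
```

Filtering before ranking keeps the more expensive social computation restricted to nodes that are plausibly moving toward the destination.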
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 69
    Publication Date: 2019
    Description: Communication languages convey information through the use of a set of symbols or units. Typically, this unit is the word. When developing language technologies, since words in a language do not all have the same prior probability, there may not be sufficient training data from which to model each word. Furthermore, the training data may not cover all possible words in the language. Due to these data sparsity and word coverage issues, language technologies employ modeling of subword units, or subunits, which are based on prior linguistic knowledge. For instance, the development of speech technologies such as automatic speech recognition systems presumes that there exists a phonetic dictionary, or at least a writing system, for the target language. Such knowledge is not available for all languages in the world. In that direction, this article develops a hidden Markov model-based abstract methodology to extract subword units given only pairwise comparisons between utterances (or realizations of words in the mode of communication), i.e., whether two utterances correspond to the same word or not. We validate the proposed methodology through investigations of spoken language and sign language. In the case of spoken language, we demonstrate that the proposed methodology can lead to the discovery of a phone set and the development of a phonetic dictionary. In the case of sign language, we demonstrate how hand movement information can be effectively modeled for sign language processing and synthesized back to gain insight about the derived subunits.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 70
    Publication Date: 2019
    Description: Though the self-portrait has been hailed as the defining artistic genre of modernity, there is not yet a good account of what the self-portrait actually is. This paper provides such an account through the lens of document theory and the philosophy of information. In this paper, the self-portrait is conceptualized as a kind of document, more specifically a kind of self-document, to gain insight into the phenomenon. A self-portrait is shown to be a construction, and not just a representation, of oneself. Creating a self-portrait then is a matter of bringing oneself forth over time—constructing oneself, rather than simply depicting oneself. This account provides grounds to consider whether or how the selfie truly is a form of self-portrait, as is often asserted. In the end, it seems that while both are technologies for self-construction, the self-portrait has the capacity for deep self-construction, whereas the selfie is limited to fewer aspects of the self. This prospect leads into an ethical discussion of the changing concept of identity in the digital age.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 71
    Publication Date: 2019
    Description: The conveyor belt is an indispensable piece of conveying equipment in mines; belt deviation, caused by sticky material on the rollers and uneven load distribution, is the most common failure during operation. In this paper, a real-time conveyor belt detection algorithm based on a multi-scale feature fusion network is proposed, which mainly includes two parts: the feature extraction module and the deviation detection module. The feature extraction module uses a multi-scale feature fusion network structure to fuse low-level features, with rich position and detail information, and high-level features, with stronger semantic information, to improve network detection performance. Depthwise separable convolutions are used to achieve real-time detection. The deviation detection module identifies and monitors the deviation fault by calculating the offset of the conveyor belt. In particular, a new weighted loss function is designed to optimize the network and to improve the detection of the conveyor belt edge. To evaluate the effectiveness of the proposed method, the Canny algorithm, FCNs, UNet, and Deeplab v3 networks are selected for comparison. The experimental results show that the proposed algorithm achieves 78.92% pixel accuracy (PA) and reaches 13.4 FPS (frames per second) with an error of less than 3.2 mm, outperforming the other four algorithms.
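    The deviation-detection step described above, computing the offset of the belt from a reference line, can be sketched in a few lines. This is an illustrative stand-in, not the paper's implementation; the edge positions, reference centreline, and tolerance value below are assumptions.

```python
# Hypothetical sketch of deviation detection: given detected left and right
# belt-edge positions (in mm), compute the signed offset of the belt centre
# from a reference centreline and flag a fault beyond a tolerance.

def belt_offset(left_edge_mm, right_edge_mm, reference_centre_mm):
    """Signed offset of the belt centre from the reference centreline."""
    centre = (left_edge_mm + right_edge_mm) / 2.0
    return centre - reference_centre_mm

def is_deviation_fault(offset_mm, tolerance_mm=3.2):
    """True when the belt has drifted beyond the allowed tolerance."""
    return abs(offset_mm) > tolerance_mm

off = belt_offset(left_edge_mm=100.0, right_edge_mm=300.0,
                  reference_centre_mm=195.0)
print(off, is_deviation_fault(off))  # 5.0 True
```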
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 72
    Publication Date: 2019
    Description: Dependability assessment is one of the most important activities for the analysis of complex systems. Classical safety, risk, and dependability analysis techniques, like Fault Tree Analysis or Reliability Block Diagrams, are easy to implement, but they produce inaccurate dependability estimates because of their simplifying hypothesis that the components’ malfunctions are independent of each other and of the system working conditions. Recent contributions within the umbrella of Dynamic Probabilistic Risk Assessment have shown the potential to improve the accuracy of classical dependability analysis methods. Among them, the Stochastic Hybrid Fault Tree Automaton (SHyFTA) is a promising methodology because it can combine a Dynamic Fault Tree model with the physics-based deterministic model of a system process, and it can generate dependability metrics along with performance indicators of the physical variables. This paper presents the Stochastic Hybrid Fault Tree Object Oriented (SHyFTOO) library, a Matlab® software library for the modelling and resolution of SHyFTA models. One of the novel features discussed in this contribution is the ease of coupling with a Matlab® Simulink model, which facilitates the design of complex system dynamics. To demonstrate the use of this software library and its augmented capability of generating further dependability indicators, three different case studies are discussed and solved, with a thorough description of the implementation of the corresponding SHyFTA models.
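    For contrast with the dynamic approach above, the classical static fault tree combines basic-event failure probabilities through AND/OR gates under exactly the independence assumption the abstract criticizes. A minimal sketch, assuming independent basic events with illustrative probabilities:

```python
# Static fault tree gate evaluation under the independence assumption.
# AND: all inputs must fail; OR: at least one input fails.

def and_gate(probs):
    p = 1.0
    for q in probs:
        p *= q          # P(all fail) = product of failure probabilities
    return p

def or_gate(probs):
    p = 1.0
    for q in probs:
        p *= (1.0 - q)  # P(none fail)
    return 1.0 - p

# Top event: (A AND B) OR C, with P(A)=0.1, P(B)=0.2, P(C)=0.05
top = or_gate([and_gate([0.1, 0.2]), 0.05])
print(top)  # 1 - (1 - 0.02)*(1 - 0.05) = 0.069
```

    SHyFTA-style analyses replace these closed-form products with simulation of the coupled stochastic and deterministic dynamics; the sketch only shows the baseline these methods improve upon.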
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 73
    Publication Date: 2019
    Description: Research and development (R&D) is always oriented towards new discoveries, based on original terms or hypotheses, and its concluding outcomes are often uncertain. The present work focused on the degree of uncertainty of R&D activities. In fact, uncertainty makes it difficult to quantify the time and resources needed to achieve a final outcome, create a work plan and budget, and finalize the resulting “innovative” products or services that could be transferred or exchanged in a specific market. The present work attempts to indicate the degree of uncertainty of the research activities developed by a set of firms. The method used aimed to quantify the five criteria defined by the Frascati Manual. Through the creation of an uncertainty cloud, a cone of uncertainty was defined following a project management approach. The evaluation grid was characterized by the decomposition of the different variables into quartiles, which allowed for the detection of the evolution of the project and each of its components. An ancillary aim was to observe the degree of development of these industries towards an Industry 4.0 framework.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 74
    Publication Date: 2019
    Description: Twisted Edwards curves have been at the center of attention since their introduction by Bernstein et al. in 2007. The curve Ed25519, used for the Edwards-curve Digital Signature Algorithm (EdDSA), provides faster digital signatures than existing schemes without sacrificing security. Curve25519 is a Montgomery curve closely related to Ed25519. It provides simple, constant-time, and fast point multiplication, which is used by the key exchange protocol X25519. Software implementations of EdDSA and X25519 are used in many web-based PC and mobile applications. In this paper, we introduce a low-power, low-area FPGA implementation of Ed25519 and Curve25519 scalar multiplication that is particularly relevant for Internet of Things (IoT) applications. The efficiency of the arithmetic modulo the prime number 2^255 − 19, in particular the modular reduction and modular multiplication, is key to the efficiency of both EdDSA and X25519. To reduce the complexity of the hardware implementation, we propose a high-radix interleaved modular multiplication algorithm. One benefit of this architecture is that it avoids large-integer multipliers relying on FPGA DSP modules.
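    Reduction modulo p = 2^255 − 19 is cheap because 2^255 ≡ 19 (mod p): the bits of a product above position 255 can be folded back into the low bits instead of performing a full division. The pure-Python sketch below illustrates only that folding idea; it does not reproduce the paper's high-radix interleaved hardware multiplier.

```python
# Fold-based reduction modulo p = 2**255 - 19, using 2**255 ≡ 19 (mod p).

P = 2**255 - 19

def reduce_mod_p(x):
    """Reduce a non-negative integer modulo P without long division."""
    while x >= 2**255:
        low = x & (2**255 - 1)   # bits 0..254
        high = x >> 255          # bits 255 and up
        x = low + 19 * high      # 2**255 * high ≡ 19 * high (mod P)
    return x - P if x >= P else x

a = 2**254 + 12345
b = 2**253 + 67890
assert reduce_mod_p(a * b) == (a * b) % P
```

    A 510-bit product needs only a couple of folding passes, which is why hardware designs interleave this reduction with the multiplication itself.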
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 75
    Publication Date: 2019
    Description: To address the one-to-one certification problem of the unsteady-state iris captured at different shooting times, a general multi-algorithm parallel integration model structure is proposed in this paper. The iris in the lightweight constrained state, affected by defocusing, deflection, and illumination, is taken as the research object, the existing algorithms are combined into the model structure effectively, and a one-to-one certification algorithm for the lightweight constrained-state unsteady iris is designed based on multi-algorithm integration and a maximum-trusted decision. In this algorithm, a sufficient number of iris internal feature points are extracted from the unsteady-state texture as effective iris information through the image processing layer, composed of various filtering algorithms, thereby eliminating defocus interference. In the feature recognition layer, iris deflection interference is excluded by applying the improved Gabor plus Hamming and Haar plus BP methods to the stable features extracted by the image processing layer, and two certification results are obtained by means of parallel recognition. The number of correct certifications for each algorithm under a given lighting condition is counted; the method with the largest number is set as the maximum-trusted method under that lighting condition, and its results are taken as the final decision, thereby eliminating the effect of illumination. Experiments using the JLU and CASIA iris libraries under the prerequisites of this paper show that the correct recognition rate of the algorithm can reach 98% or more, indicating that the algorithm can effectively improve the accuracy of one-to-one certification of the lightweight constrained-state unsteady iris. Compared with the latest architecture algorithms, such as CNNs and deep learning, the proposed algorithm is better suited to the prerequisites presented in this paper, has good environmental inclusiveness, and can better improve the effectiveness of existing traditional algorithms through the design of a parallel integration model structure.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 76
    Publication Date: 2019
    Description: Assembly is a very important manufacturing process in the age of Industry 4.0. To address the problems of part identification and assembly inspection in industrial production, this paper proposes an assembly inspection method based on machine vision and a deep neural network. First, an image acquisition platform is built to collect part and assembly images. We use the Mask R-CNN model to identify and segment the shape in each part image and to obtain the part category and position coordinates in the image. Then, according to the image segmentation results, the area, perimeter, circularity, and Hu invariant moments of the contour are extracted to form the feature vector. Finally, an SVM classification model is constructed to identify assembly defects, with a classification accuracy rate of over 86.5%. The accuracy of the method is verified on an experimental platform. The results show that the method effectively identifies missing and misaligned parts in the assembly and has good robustness.
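    Of the hand-crafted contour features mentioned above, circularity is commonly defined as 4πA/P², which equals 1.0 for a perfect circle and less for any other shape. The sketch below checks that definition on an ideal circle and a square; it is an illustrative stand-in, not the paper's feature-extraction code.

```python
# Circularity of a contour from its area A and perimeter P: 4*pi*A / P**2.
import math

def circularity(area, perimeter):
    return 4.0 * math.pi * area / perimeter**2

r = 3.0
circle = circularity(math.pi * r**2, 2.0 * math.pi * r)  # ideal circle -> 1.0
square = circularity(4.0 * 4.0, 4 * 4.0)                 # side 4: area 16, perimeter 16
print(circle, square)  # 1.0 and pi/4 ≈ 0.785
```

    In practice the area and perimeter would come from the Mask R-CNN segmentation contour; together with the Hu moments they form the SVM feature vector.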
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 77
    Publication Date: 2019
    Description: The development of robotic applications for agricultural environments poses several problems which are not present in robotic systems for indoor environments. Some of these problems can be solved with an efficient navigation system. In this paper, a new system is introduced to improve navigation tasks for robots operating in agricultural environments. Concretely, the paper focuses on the autonomous mapping of agricultural parcels (i.e., an orange grove). The map created by the system will be used to help the robots navigate within the parcel to perform maintenance tasks such as weed removal, harvesting, or pest inspection. The proposed system connects to a satellite positioning service to obtain the real coordinates of the robotic system. With these coordinates, the parcel information is downloaded from an online map service in order to autonomously obtain a map of the parcel in a format readable by the robot. Finally, path planning is performed by means of Fast Marching techniques, using a single robot or a team of two robots. This paper introduces the proof of concept and describes all the steps and algorithms needed to obtain the path plan from just the initial coordinates of the robot.
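    Fast Marching computes a monotone arrival-time field from the goal and then follows its gradient back to the start. As a simplified discrete analogue (not the continuous Fast Marching solver the paper uses), the sketch below grows a BFS wavefront over a grid map and reads off a shortest 4-connected path; the grid, start, and goal are illustrative.

```python
# BFS wavefront over a grid (0 = free, 1 = obstacle): propagate distances
# from the goal, then descend the distance field from the start.
from collections import deque

def wavefront_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    dist = {goal: 0}
    q = deque([goal])
    while q:                                  # wavefront expansion
        r, c = q.popleft()
        for nr, nc in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in dist:
                dist[(nr, nc)] = dist[(r, c)] + 1
                q.append((nr, nc))
    if start not in dist:
        return None                           # goal unreachable
    path, cell = [start], start
    while cell != goal:                       # steepest-descent backtrace
        cell = min(((cell[0]+d, cell[1]+e) for d, e in
                    ((1, 0), (-1, 0), (0, 1), (0, -1))),
                   key=lambda n: dist.get(n, float("inf")))
        path.append(cell)
    return path

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(wavefront_path(grid, start=(0, 0), goal=(2, 0)))
```

    The true Fast Marching method solves the Eikonal equation, yielding smooth, curvature-aware paths rather than grid-aligned ones; the wavefront above only conveys the propagate-then-descend structure.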
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 78
    Publication Date: 2019
    Description: The characteristic of the satellite repeat shift time can reflect the status of satellite operation, and it is also one of the key factors of sidereal filtering multipath correction. Although some methods have been developed to calculate the repeat shift time, few efforts have been made to analyze and compare the performance of this feature for GPS (Global Positioning System), BDS (BeiDou System), and Galileo in depth. Hence, three methods for calculating the repeat shift time, namely the broadcast ephemeris method (BEM), the correlation coefficient method (CCM), and the aspect repeat time method (ARTM), are presented and used to compare and analyze the three global systems in depth. The experimental results show that the repeat shift time of each satellite is different. Also, the difference between the maximum and minimum varies across systems: the maximum difference is about 25 s for the BDS IGSO (Inclined Geosynchronous Orbit) satellites and the minimum is merely 10 s for the GPS system. Furthermore, for the same satellite, the shift times calculated by the three methods are almost identical; the maximum difference is only about 7 s, between the CCM and the ARTM for a BDS MEO (Medium Earth Orbit) satellite. Although the repeat shift time differs daily for the same satellite and method, the changes are very small. Moreover, in terms of the STD (standard deviation) of the BS (between satellites) and MS (mean shift for the same satellite), the GPS system is the best, the BDS system is intermediate, and Galileo performs slightly worse than GPS and BDS.
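    The correlation coefficient method mentioned above can be illustrated as follows: slide one day's time series against the next day's and take the lag with the highest Pearson correlation as the repeat shift. Real implementations work on multipath or coordinate residuals at high rates; the toy series below, with a known 7-sample shift, merely shows the mechanics, and all names are illustrative.

```python
# Toy correlation-coefficient shift search: argmax over lags of the
# Pearson correlation between a window of day1 and the start of day2.
import random

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_shift(day1, day2, max_lag, window):
    """Lag (in samples) at which day2 best matches day1."""
    return max(range(max_lag + 1),
               key=lambda lag: pearson(day1[lag:lag + window], day2[:window]))

rng = random.Random(0)
day1 = [rng.random() for _ in range(200)]
day2 = day1[7:]  # second day repeats the first, shifted by 7 samples
print(best_shift(day1, day2, max_lag=20, window=100))  # -> 7
```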
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 79
    Publication Date: 2019
    Description: The authors wish to make the following corrections to this paper [...]
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 80
    Publication Date: 2019
    Description: In the education process, students face comprehension problems due to complexity and the need for abstract thinking and concepts. More and more educational centres around the world have started to introduce powerful new technology-based tools that help meet the needs of a diverse student population. Over the last several years, virtual reality (VR) has moved from being the purview of gaming to professional development. It plays an important role in the teaching process, providing an interesting and engaging way of acquiring information. What follows is an overview of the big trends, opportunities, and concerns associated with VR in education. We present new opportunities in VR and put together the most interesting recent virtual reality applications used in education, in relation to several education areas such as general, engineering, and health-related education. Additionally, this survey contributes by presenting methods for creating scenarios and different approaches for testing and validation. Lastly, we conclude and discuss future directions of VR and its potential to improve the learning experience.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 81
    Publication Date: 2019
    Description: The big data from the various sensors installed on board for monitoring the status of ship devices are critical for improving the efficiency and safety of ship operations and reducing the cost of operation and maintenance. However, how to utilize these data is a key issue. The temperature change of the ship propulsion devices can often reflect whether the devices are faulty or not. Therefore, this paper aims to forecast the temperature of the ship propulsion devices by data-driven methods, so that potential faults can be identified automatically. The proposed forecasting process is composed of preprocessing, feature selection, and prediction, and includes an autoregressive distributed lag time series model (ARDL), a stepwise regression (SR) model, a neural network (NN) model, and a deep neural network (DNN) model. Finally, the proposed forecasting process is applied to a naval ship, and the results show that the ARDL model has higher accuracy than the three other models.
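    An ARDL model regresses the target on its own lags and on lags of exogenous inputs. As a minimal sketch (not the paper's model), the code below fits an ARDL(1,0) form, y[t] = a·y[t−1] + b·x[t], by ordinary least squares on a synthetic noise-free series with known coefficients; the variable names and coefficients are assumptions for illustration.

```python
# Least-squares fit of y[t] = a*y[t-1] + b*x[t] via the 2x2 normal equations.

def fit_ardl_1_0(y, x):
    s11 = s12 = s22 = r1 = r2 = 0.0
    for t in range(1, len(y)):
        u, v = y[t - 1], x[t]          # regressors: lagged y, current x
        s11 += u * u; s12 += u * v; s22 += v * v
        r1 += u * y[t]; r2 += v * y[t]
    det = s11 * s22 - s12 * s12
    a = (r1 * s22 - r2 * s12) / det
    b = (s11 * r2 - s12 * r1) / det
    return a, b

# Synthetic "temperature" series generated with known a=0.8, b=0.5:
x = [float(i % 5) for i in range(50)]   # exogenous input (e.g., engine load)
y = [20.0]
for t in range(1, 50):
    y.append(0.8 * y[t - 1] + 0.5 * x[t])

a, b = fit_ardl_1_0(y, x)
print(round(a, 3), round(b, 3))  # recovers ≈ 0.8 and 0.5 on noise-free data
```

    A one-step-ahead forecast is then simply a·y[t−1] + b·x[t]; the paper's full process adds preprocessing and feature selection around this kind of model.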
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 82
    Publication Date: 2019
    Description: This paper proposes an adaptive backstepping control algorithm for electric braking systems with electromechanical actuators (EMAs). First, the ideal mathematical model of the EMA is established, and the nonlinear factors, such as the deformation of the reduction gear, are analyzed. Subsequently, the actual mathematical model of the EMA is rebuilt by combining the ideal model with the nonlinear factors. To realize high-performance braking pressure control, the backstepping control method is adopted to address the mismatched uncertainties in the electric braking system, and a radial basis function (RBF) neural network is established to estimate the nonlinear functions in the control system. The experimental results indicate that the proposed braking pressure control strategy can improve the servo performance of the electric braking system. In addition, the hardware-in-the-loop (HIL) experimental results show that the proposed EMA controller can satisfy the requirements of aircraft antilock braking systems.
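    The RBF network used to approximate unknown nonlinearities has the standard form f(x) ≈ Σᵢ wᵢ·exp(−‖x − cᵢ‖²/(2σ²)). The sketch below just evaluates such a network; the centres, weights, and width are illustrative, not tuned for the braking system in the paper.

```python
# Evaluate a 1-D Gaussian RBF network: sum of weighted Gaussian bumps.
import math

def rbf_output(x, centres, weights, sigma):
    return sum(w * math.exp(-((x - c) ** 2) / (2 * sigma ** 2))
               for c, w in zip(centres, weights))

centres = [-1.0, 0.0, 1.0]   # illustrative basis-function centres
weights = [0.5, 1.0, 0.5]    # weights (adapted online in the paper's scheme)
print(rbf_output(0.0, centres, weights, sigma=1.0))  # 1 + exp(-0.5) ≈ 1.607
```

    In the adaptive backstepping scheme, the weights are updated online by an adaptation law so that this approximator tracks the plant's unknown nonlinear functions.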
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 83
    Publication Date: 2019
    Description: The inclusion of agile methodologies in web-oriented projects has been embraced on a large scale by software developers. However, the benefits and limitations go beyond the conveniences that project managers weigh when choosing one. Selecting a methodology involves more than the associated processes or some documentation. The main concerns are the approach used to identify the methodology, the needs of the company, the size and qualities of the project, and especially the agile development characteristics the methodologies possess. However, there are several difficulties in selecting the most appropriate methodology because of the features they have in common: Will it be suitable for my project? What challenges will be presented in the process? Will my team understand each stage? Will I be able to deliver software that satisfies the client? Project managers raise these questions, which seem manageable but have far-reaching effects. This paper presents a systematic literature review based on the analysis of the approaches of six web development methodologies. The aim of the study is to analyze the approaches taken by relevant methodologies, identifying their common agile characteristics and contrasting their benefits and limitations during a project. As a result, we itemize five common features presented within the processes: (1) flexibility, (2) constant communication within the workgroup, (3) use of UML, (4) inclusion of the end user, and (5) some documentation.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 84
    Publication Date: 2019
    Description: After more than a decade, the supply-driven approach to publishing public (open) data has resulted in an ever-growing number of data silos. Hundreds of thousands of datasets have been catalogued and can be accessed at data portals at different administrative levels. However, usually, users do not think in terms of datasets when they search for information. Instead, they are interested in information that is most likely scattered across several datasets. In the world of proprietary in-company data, organizations invest heavily in connecting data in knowledge graphs and/or store data in data lakes with the intention of having an integrated view of the data for analysis. With the rise of machine learning, it is a common belief that governments can improve their services, for example, by allowing citizens to get answers related to government information from virtual assistants like Alexa or Siri. To provide high-quality answers, these systems need to be fed with knowledge graphs. In this paper, we share our experience of constructing and using the first open government knowledge graph in the Netherlands. Based on the developed demonstrators, we elaborate on the value of having such a graph and demonstrate its use in the context of improved data browsing, multicriteria analysis for urban planning, and the development of location-aware chat bots.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 85
    Publication Date: 2019
    Description: Computer science is a predominantly male field of study. Women face barriers while trying to enter the study of computer science, and those barriers extend into the professional sphere of computer science. Despite decades of social struggle for gender equity in Science, Technology, Engineering, and Mathematics (STEM) education and in computer science in general, few women participate in computer science; some of the reasons include gender bias and a lack of support for women when choosing a computer science career. Open source software development has been increasingly used by companies seeking the competitive advantages gained by team diversity. This diversification of the characteristics of team members includes, for example, the age of the participants, their level of experience, education, and knowledge in the area, and their gender. In open source software projects, women are underrepresented, and a series of biases affect their participation. This paper conducts a systematic literature review with the objective of finding factors that could help increase women’s interest in contributing to open source communities and software development projects. The main contributions of this paper are: (i) identification of factors that cause women’s lack of interest (engagement), (ii) possible solutions to increase the engagement of this group, and (iii) a profile of the professional women who participate in open source software projects and software development projects. The main findings reveal that women are underrepresented in both open source and general software development projects, representing less than 10% of all developers, and that the main causes of this underrepresentation may be associated with workplace conditions that reflect male gender bias.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 86
    Publication Date: 2019
    Description: Objects that possess mass (e.g., automobiles, manufactured items, etc.) translationally accelerate in direct proportion to the applied force scaled by the object’s mass, in accordance with Newton’s law, while the rotational companion is Euler’s moment equations relating the angular acceleration of objects that possess mass moments of inertia. Michel Chasles’s theorem allows us to simply invoke Newton’s and Euler’s equations to fully describe the six degrees of freedom of mechanical motion. Many options are available for controlling the motion of objects by controlling the applied force and moment. A long, distinguished list of references has matured the field of mechanical motion control, culminating in the burgeoning field of deterministic artificial intelligence as a natural progression of the laudable goal of adaptive and/or model predictive controllers that can be proven optimal subsequent to their development. Deterministic A.I. uses Chasles’s claim to assert Newton’s and Euler’s relations as deterministic self-awareness statements that are optimal with respect to state errors. Predictive controllers (both continuous and sampled-data) derived from the outset to be optimal, by first solving an optimization problem with the governing dynamic equations of motion, lead to several controllers (including a controller that twice invokes optimization to formulate robust, predictive control). These controllers are compared to each other under noise and modeling errors, and many figures of merit are used: tracking error and rate error deviations and means, in addition to total mean cost. Robustness is evaluated using Monte Carlo analysis, where plant parameters are randomly assumed to be incorrectly modeled. Six instances of controllers are compared against these methods and interpretations, allowing engineers to select a tailored control for their given circumstances. 
Novel versions of the ubiquitous classical proportional-derivative (“PD”) controller are developed from the optimization statement at the outset by using a novel re-parameterization of the optimal results from time-to-state parameterization. Furthermore, time-optimal controllers, continuous predictive controllers, and sampled-data predictive controllers, as well as combined feedforward-plus-feedback controllers and two-degree-of-freedom (2DOF) controllers, are also developed. The term “feedforward” is used in this study in the context of deterministic artificial intelligence, where analytic self-awareness statements are strictly determined by the governing physics (of mechanics in this case, e.g., Chasles, Newton, and Euler). When feedforward is combined with feedback by the previously mentioned method (with provenance foremost in optimization), the combination is referred to as “2DOF”, or two degrees of freedom, to indicate the twofold invocation of optimization at the genesis of the feedforward and the feedback, respectively. The feedforward-plus-feedback case is augmented by an online (real-time) comparison to the optimal case. This manuscript compares these many optional control strategies against each other. Nominal plants are used, but the addition of plant noise reveals the robustness of each controller, even without optimally rejecting assumed-Gaussian noise (e.g., via the Kalman filter). In other words, noise terms are intentionally left unaddressed in the problem formulation to evaluate the robustness of the proposed methods when real-world noise is added. Lastly, mismodeled plants controlled by each strategy reveal relative performance. Well-anticipated results include the lowest cost, achieved by the optimal controller (with very poor robustness), while low mean errors and deviations are achieved by the classical controllers (at the highest cost). 
Both continuous predictive control and sampled-data predictive control perform well in terms of both cost and errors and deviations, while the 2DOF controller performed best overall.
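    As a baseline for the classical PD control law the abstract compares against, the sketch below closes a PD loop around a double-integrator plant (unit mass driven by a force, per Newton's law) using simple Euler integration. The gains and step sizes are illustrative only; the paper derives its PD-like gains from an optimization-based re-parameterization rather than hand tuning.

```python
# PD regulation of a unit-mass double integrator: u = kp*(target - x) - kd*v.

def simulate_pd(kp, kd, x0, v0, target, dt=0.01, steps=2000):
    x, v = x0, v0
    for _ in range(steps):
        u = kp * (target - x) - kd * v  # PD control force
        v += u * dt                     # unit mass: acceleration = u
        x += v * dt                     # Euler integration of position
    return x

# kp=4, kd=4 gives a critically damped response (poles at s = -2, -2).
final = simulate_pd(kp=4.0, kd=4.0, x0=0.0, v0=0.0, target=1.0)
print(final)  # settles at the target, 1.0
```

    Feedforward in the deterministic-A.I. sense would add the physics-derived nominal force on top of this feedback term; the 2DOF schemes in the paper obtain both parts from optimization.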
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 87
    Publication Date: 2019
    Description: This paper presents a space mission planning tool developed for LEO (Low Earth Orbit) observation satellites. The tool is based on a two-phase planning strategy with clustering preprocessing and mission planning, in which an improved clustering algorithm is applied, and a hybrid algorithm combining the genetic algorithm with the simulated annealing algorithm (GA–SA) is given and discussed. Experimental simulation studies demonstrate that the GA–SA algorithm with the improved clique partition algorithm, based on a graph theory model, exhibits a higher fitness value and better optimization performance and reliability than the GA or SA algorithms alone.
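    The simulated-annealing half of such a hybrid contributes an acceptance rule: a worse candidate is accepted with probability exp(−Δ/T), which shrinks as the temperature T cools, letting the search escape local optima early and turn greedy later. The toy run below minimizes f(x) = x² over integer steps; the schedule, seed, and objective are illustrative, not from the paper.

```python
# Simulated-annealing acceptance rule with geometric cooling.
import math, random

def anneal(f, x0, t0=10.0, cooling=0.95, steps=300, seed=1):
    rng = random.Random(seed)
    x, t = x0, t0
    for _ in range(steps):
        cand = x + rng.choice([-1, 1])          # neighbouring candidate
        delta = f(cand) - f(x)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = cand                            # accept (always if better)
        t *= cooling                            # cool the temperature
    return x

print(anneal(lambda v: v * v, x0=15))  # ends near the minimum at 0
```

    In a GA–SA hybrid, this acceptance test is typically applied to offspring or mutated individuals, so the population keeps some exploratory diversity while the temperature is high.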
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
  • 88
    Publication Date: 2019
    Description: The ILAHS (inhomogeneous linear algebraic hybrid system) is a classic kind of hybrid system. For the purpose of optimizing the design of an ILAHS, one important strategy is to introduce equivalence to reduce the states. Recent advances in hybrid systems indicate that approximate trace equivalence can further simplify the design of an ILAHS. To address this issue, the paper first introduces the trajectory metric d_trj for measuring the deviation of two hybrid systems’ behaviors. Given a deviation ε ≥ 0, the original ILAHS H1 can be transformed into the approximate ILAHS H2; then, under trace equivalence semantics, H2 is further reduced to H3 with the same functions, and hence H1 is ε-approximate trace equivalent to H3. In particular, ε = 0 corresponds to traditional trace equivalence. We implement an approach based on RealRootClassification to determine the approximation between ILAHSs. The paper also shows that existing approaches are only special cases of our method. Finally, we illustrate the effectiveness and practicality of our method on an example.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 89
    Publication Date: 2019
    Description: Conversational agents are reshaping our communication environment and have the potential to inform and persuade in new and effective ways. In this paper, we present the underlying technologies and the theoretical background behind a health-care platform dedicated to supporting medical staff and individuals with movement disabilities, and to providing advanced monitoring functionalities in hospital and home surroundings. The framework implements an intelligent combination of two research areas: (1) sensor- and camera-based monitoring to collect, analyse, and interpret people’s behaviour and (2) natural machine–human interaction through an apprehensive virtual assistant benefiting ailing patients. In addition, the framework serves as an important assistant to caregivers and clinical experts, allowing them to obtain information about the patients in an intuitive manner. The proposed approach capitalises on the latest breakthroughs in computer vision, sensor management, speech recognition, natural language processing, knowledge representation, dialogue management, semantic reasoning, and speech synthesis, combining medical expertise and patient history.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 90
    Publication Date: 2019
    Description: Image classification is one of the most important tasks in the digital era. In terms of cultural heritage, it is important to develop classification methods that obtain good accuracy but are also less computationally intensive, as image classification usually uses very large sets of data. This study trains and tests four classification algorithms: (i) the multilayer perceptron, (ii) averaged one-dependence estimators, (iii) forest by penalizing attributes, and (iv) the k-nearest neighbor rough sets and analogy based reasoning, and compares these with the results obtained from a Convolutional Neural Network (CNN). Three types of features were extracted from the images: (i) the edge histogram, (ii) the color layout, and (iii) the JPEG coefficients. The algorithms were tested before and after applying attribute selection, and the results indicated that the best classification performance was obtained for the multilayer perceptron in both cases.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
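One of the three feature types in the study above, the edge histogram, can be sketched as a gradient-orientation histogram. A toy version assuming a 2-D grayscale array as input; it is not the exact MPEG-7-style edge histogram descriptor such pipelines typically use:

```python
import numpy as np

def edge_histogram(img, bins=8):
    """Toy edge-orientation histogram: bin gradient orientations,
    weighted by gradient magnitude, and normalize. Illustrative
    stand-in for the edge-histogram feature named in the study."""
    gy, gx = np.gradient(img.astype(float))   # row- and column-wise gradients
    mag = np.hypot(gx, gy)                    # edge strength per pixel
    ang = np.arctan2(gy, gx) % np.pi          # orientations folded into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    s = hist.sum()
    return hist / s if s > 0 else hist
```

Such a fixed-length vector is what the compared classifiers (MLP, AODE, etc.) would consume in place of raw pixels.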
  • 91
    Publication Date: 2019
    Description: In the future, automated cars may feature external human–machine interfaces (eHMIs) to communicate relevant information to other road users. However, it is currently unknown where on the car the eHMI should be placed. In this study, 61 participants each viewed 36 animations of cars with eHMIs on either the roof, windscreen, grill, above the wheels, or a projection on the road. The eHMI showed ‘Waiting’ combined with a walking symbol 1.2 s before the car started to slow down, or ‘Driving’ while the car continued driving. Participants had to press and hold the spacebar when they felt it safe to cross. Results showed that, averaged over the period when the car approached and slowed down, the roof, windscreen, and grill eHMIs yielded the best performance (i.e., the highest spacebar press time). The projection and wheels eHMIs scored relatively poorly, yet still better than no eHMI. The wheels eHMI received a relatively high percentage of spacebar presses when the car appeared from a corner, a situation in which the roof, windscreen, and grill eHMIs were out of view. Eye-tracking analyses showed that the projection yielded dispersed eye movements, as participants scanned back and forth between the projection and the car. It is concluded that eHMIs should be presented on multiple sides of the car. A projection on the road is visually effortful for pedestrians, as it causes them to divide their attention between the projection and the car itself.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 92
    Publication Date: 2019
    Description: Metagenomics studies, as well as genomics studies of polyploid species such as wheat, deal with the analysis of high-variation data. Such data contain sequences from similar, but distinct, genetic chains. This fact presents an obstacle to analysis and research. In particular, the detection of instrumentation errors during the digitalization of the sequences may be hindered, as they can be indistinguishable from the real biological variation inside the digital data. This can prevent the determination of the correct sequences, while at the same time making variant studies significantly more difficult. This paper details a collection of ML-based models used to distinguish a real variant from an erroneous one. The focus is on using this model directly, but experiments are also done in combination with other predictors that isolate a pool of error candidates.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 93
    Publication Date: 2019
    Description: Network Function Virtualization (NFV) has revolutionized the way network services are offered to end users. Individual network functions are decoupled from expensive and dedicated middleboxes and are now provided as software-based virtualized entities called Virtualized Network Functions (VNFs). NFV is often complemented with the Cloud Computing paradigm to provide networking functions to enterprise customers and end-users remote from their premises. NFV along with Cloud Computing has also started to be seen in Internet of Things (IoT) platforms as a means to provide networking functions to the IoT traffic. The intermix of IoT, NFV, and Cloud technologies, however, is still in its infancy, creating a rich and open future research area. To this end, in this paper, we propose a novel approach to facilitate the placement and deployment of service-chained VNFs in a network cloud infrastructure that can be extended using the Mobile Edge Computing (MEC) infrastructure for accommodating mission-critical and delay-sensitive traffic. Our aim is to minimize the end-to-end communication delay while keeping the overall deployment cost to a minimum. Results reveal that the proposed approach can significantly reduce the delay experienced, while satisfying the Service Providers’ goal of low deployment costs.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 94
    Publication Date: 2019
    Description: The current paper addresses relevant network security vulnerabilities introduced by network devices within the emerging paradigm of the Internet of Things (IoT), as well as the urgent need to mitigate the negative effects of some types of Distributed Denial of Service (DDoS) attacks that attempt to exploit those security weaknesses. We design and implement a Software-Defined Intrusion Detection System (IDS) that reactively impairs the attacks at their origin, ensuring the “normal operation” of the network infrastructure. Our proposal includes an IDS that automatically detects several DDoS attacks and, once an attack is detected, notifies a Software-Defined Networking (SDN) controller. The current proposal also downloads some convenient traffic-forwarding decisions from the SDN controller to network devices. The evaluation results suggest that our proposal detects several types of DDoS-based cyber-attacks in a timely manner, mitigates their negative impacts on network performance, and ensures the correct delivery of normal traffic. Our work sheds light on the relevance of programming over an abstracted view of the network infrastructure to timely detect a botnet exploitation, mitigate malicious traffic at its source, and protect benign traffic.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
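The detection step in the entry above can be illustrated with a toy volumetric check: count packets per source address over a window and flag heavy hitters. In the paper's design the flagged sources would then be reported to the SDN controller for traffic-forwarding decisions; everything below is a hypothetical sketch, not the authors' implementation:

```python
from collections import Counter

def detect_ddos(packets, window_threshold=100):
    """Toy volumetric-DDoS check: count packets per source IP within a
    window and return the sources exceeding the threshold. A real IDS,
    as in the paper, would notify the SDN controller about these
    sources; names and threshold here are illustrative."""
    counts = Counter(src for src, _dst in packets)
    return {src for src, n in counts.items() if n > window_threshold}
```

A flow-rule push to the controller would then drop or rate-limit traffic from the returned set.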
  • 95
    Publication Date: 2019
    Description: Network representation learning is a key research field in network data mining. In this paper, we propose a novel multi-view network representation algorithm (MVNR), which embeds multi-scale relations of network vertices into the low-dimensional representation space. In contrast to existing approaches, MVNR explicitly encodes higher-order information using k-step networks. In addition, we introduce the matrix forest index as a kind of network feature, which can be applied to balance the representation weights of different network views. We also examine the relationship between MVNR and several well-known methods, including DeepWalk, node2vec, and GraRep. We conduct our experiments on several real-world citation datasets and demonstrate that MVNR outperforms recent approaches based on neural matrix factorization. Specifically, we demonstrate the efficiency of MVNR on network classification, visualization, and link-prediction tasks.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
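The k-step networks that give MVNR its multiple views can be approximated with powers of the row-normalized adjacency matrix, so that view k captures k-hop reachability. A sketch of that idea, assuming a dense NumPy adjacency matrix with no isolated vertices; the authors' exact construction may differ:

```python
import numpy as np

def k_step_views(adj, k_max=3):
    """Build multi-scale 'views' of a graph: view k is the k-th power of
    the row-normalized adjacency matrix, i.e., the probability of
    reaching vertex j from vertex i in exactly k random-walk steps.
    Assumes every vertex has at least one neighbor."""
    P = adj / adj.sum(axis=1, keepdims=True)  # row-stochastic transition matrix
    views, Pk = [], np.eye(len(adj))
    for _ in range(k_max):
        Pk = Pk @ P                           # advance one step: P, P^2, ...
        views.append(Pk.copy())
    return views
```

Each view can then be factorized separately and the resulting embeddings combined with per-view weights, mirroring the weighting role the matrix forest index plays in the paper.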
  • 96
    Publication Date: 2019
    Description: Network Function Virtualization (NFV) is a new technology allowing for elastic cloud and bandwidth resource allocation. The technology requires an orchestrator whose role is service and resource orchestration. It receives service requests, each one characterized by a Service Function Chain (SFC), which is a set of service functions to be executed according to a given order. It implements an algorithm for deciding both where to allocate the cloud and bandwidth resources and how to route the SFCs. In a traditional orchestration algorithm, the orchestrator has detailed knowledge of the cloud and network infrastructures, which can lead to high computational complexity of the SFC Routing and Cloud and Bandwidth resource Allocation (SRCBA) algorithm. In this paper, we propose and evaluate the effectiveness of a scalable orchestration architecture inherited from the one proposed within the European Telecommunications Standards Institute (ETSI) and based on the functional separation of an NFV orchestrator into a Resource Orchestrator (RO) and a Network Service Orchestrator (NSO). Each cloud domain is equipped with an RO whose task is to provide a simple and abstract representation of the cloud infrastructure. These representations are notified to the NSO, which can then apply a simplified and less complex SRCBA algorithm. In addition, we show how segment routing technology can help to simplify SFC routing by means of an effective addressing of the service functions. The scalable orchestration solution has been investigated and compared to that of a traditional orchestrator in several network scenarios with a varying number of cloud domains. We have verified that the execution time of the SRCBA algorithm can be drastically reduced without degrading performance in terms of cloud and bandwidth resource costs.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 97
    Publication Date: 2019
    Description: In the real world, optimization problems in multi-objective optimization (MOP) and dynamic optimization can be seen everywhere. During the last decade, among the various swarm intelligence algorithms for multi-objective optimization problems, glowworm swarm optimization (GSO) and the bacterial foraging algorithm (BFO) have attracted increasing attention from scholars. Although many scholars have proposed improvement strategies for GSO and BFO to keep a good balance between convergence and diversity, there are still many problems to be solved carefully. In this paper, a new coupling algorithm based on GSO and BFO (MGSOBFO) is proposed for solving dynamic multi-objective optimization problems (dMOPs). MGSOBFO achieves a good balance between exploration and exploitation by dividing the search into two parts: Part I is in charge of exploitation by GSO, and Part II is in charge of exploration by BFO. At the same time, simulated binary crossover (SBX) and polynomial mutation are introduced into MGSOBFO to enhance the convergence and diversity ability of the algorithm. In order to show the excellent performance of the algorithm, we experimentally compare MGSOBFO with three algorithms on benchmark functions. The results suggest that such a coupling algorithm has good performance and outperforms other algorithms that deal with dMOPs.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
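The simulated binary crossover (SBX) operator mentioned in the entry above has a standard per-gene form: a random spread factor beta is drawn so that real-valued recombination mimics single-point crossover on binary strings. A minimal sketch of that textbook operator (not the authors' full MGSOBFO code):

```python
import random

def sbx_crossover(p1, p2, eta=15, rng=random):
    """Simulated binary crossover (SBX). For each gene, draw u ~ U(0,1)
    and compute a spread factor beta; larger eta keeps children closer
    to their parents. The two children are symmetric blends, so their
    per-gene mean always equals the parents' per-gene mean."""
    c1, c2 = [], []
    for x, y in zip(p1, p2):
        u = rng.random()
        if u <= 0.5:
            beta = (2 * u) ** (1 / (eta + 1))
        else:
            beta = (1 / (2 * (1 - u))) ** (1 / (eta + 1))
        c1.append(0.5 * ((1 + beta) * x + (1 - beta) * y))
        c2.append(0.5 * ((1 - beta) * x + (1 + beta) * y))
    return c1, c2
```

Polynomial mutation, the paper's other variation operator, perturbs single genes with a similarly eta-controlled distribution.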
  • 98
    Publication Date: 2019
    Description: In the last decade, there has been a growing scientific interest in the analysis of DNA microarray datasets, which have been widely used in basic and translational cancer research. The application fields include both the identification of oncological subjects, separating them from the healthy ones, and the classification of different types of cancer. Since DNA microarray experiments typically generate a very large number of features for a limited number of patients, the classification task is very complex and typically requires the application of a feature-selection process to reduce the complexity of the feature space and to identify a subset of distinctive features. In this framework, there are no standard state-of-the-art results generally accepted by the scientific community and, therefore, it is difficult to decide which approach to use for obtaining satisfactory results in the general case. Based on these considerations, the aim of the present work is to provide a large experimental comparison for evaluating the effect of the feature-selection process applied to different classification schemes. For comparison purposes, we considered both ranking-based feature-selection techniques and state-of-the-art feature-selection methods. The experiments provide a broad overview of the results obtainable on standard microarray datasets with different characteristics in terms of both the number of features and the number of patients.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
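A ranking-based feature-selection step of the kind compared in the study above can be sketched by scoring each feature against the class label and keeping the top-scoring subset. Here |Pearson correlation| serves as a hypothetical stand-in scorer, not one of the paper's exact ranking methods:

```python
import numpy as np

def rank_features(X, y, top_k=10):
    """Rank features (e.g., microarray gene expressions) by the absolute
    Pearson correlation of each column of X with the label vector y, and
    return the indices of the top_k features. Constant features get a
    score of zero. Illustrative ranking-based selection sketch."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    scores = np.abs(Xc.T @ yc) / np.where(denom == 0, 1, denom)
    return np.argsort(scores)[::-1][:top_k]
```

With tens of thousands of genes and only dozens of patients, restricting the classifiers to such a ranked subset is what keeps the downstream task tractable.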
  • 99
    Publication Date: 2019
    Description: In recent years, due to the unnecessary wastage of electrical energy in residential buildings, the need for energy optimization and user comfort has gained vital importance. In the literature, various techniques have been proposed to address the energy optimization problem. The goal of each technique is to maintain a balance between user comfort and energy requirements, such that the user can achieve the desired comfort level with the minimum amount of energy consumption. Researchers have addressed the issue with the help of different optimization algorithms and variations in the parameters to reduce energy consumption. To the best of our knowledge, this problem has not yet been solved due to its challenging nature. The gaps in the literature are due to advancements in technology, the drawbacks of optimization algorithms, and the introduction of new optimization algorithms. Further, many newly proposed optimization algorithms have produced better accuracy on the benchmark instances but have not yet been applied to the optimization of energy consumption in smart homes. In this paper, we have carried out a detailed literature review of the techniques used for the optimization of energy consumption and scheduling in smart homes. A detailed discussion is provided on the different factors contributing towards thermal comfort, visual comfort, and air-quality comfort. We have also reviewed the fog and edge computing techniques used in smart homes.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 100
    Publication Date: 2019
    Description: This article addresses the task of inferring elements in the attributes of data. Extracting data related to our interests is a challenging task. Although data on the web can be accessed through free-text queries, it is difficult to obtain results that accurately correspond to user intentions, because users might not express their objects of interest using the exact terms (variables, outlines of data, etc.) found in the data. In other words, users do not always have sufficient knowledge of the data to formulate an effective query. Hence, we propose a method that enables the type, format, and variable elements to be inferred as attributes of data when a natural language summary of the data is provided as a free-text query. To evaluate the proposed method, we used the Data Jacket datasets, whose metadata is written in natural language. The experimental results indicate that our method outperforms those obtained from string matching and word embedding. Applications based on this study can support users who wish to retrieve or acquire new data.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI