ALBERT

All Library Books, Journals and Electronic Records Telegrafenberg

  • 1
    Publication Date: 2020-08-27
    Description: Festivals are experiential products that depend heavily on the recommendations of previous visitors. With the power of social media growing, understanding the antecedents of festival attendees’ positive electronic word-of-mouth (eWOM) intentions is immensely beneficial for festival organizers seeking to better promote their festivals and control negative publicity. However, there is still limited research on eWOM intentions in the festival context. This study therefore aims to fill that gap by investigating the relationships among festival attendees’ enjoyment-seeking motivation, perceived value, visitor satisfaction, and eWOM intention in a local festival setting. Additionally, the moderating role of gender was tested, as it is one of the most important demographic variables for explaining individual differences in behavioral intentions. The results of structural equation modeling showed a positive effect of enjoyment-seeking motivation on perceived value, visitor satisfaction, and eWOM intention. Moreover, gender differences in eWOM intention were revealed, along with a full mediating effect of visitor satisfaction between perceived value and eWOM intention for female respondents. The findings extend the existing festival literature and provide insights for strategically organizing and promoting festivals to generate more positive eWOM, which can be utilized as an effective marketing tool and a feedback channel.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 2
    Publication Date: 2020-08-26
    Description: Information and communication technologies transform modern education into a more accessible learning matrix. One of the unexplored aspects of open education is the constant communicative interaction within the student group via social media. The aim of the study was to determine the principal functions of student-led communication in the educational process, and to develop a method for assessing its strengths and the disadvantages that disrupt traditional learning. For the primary study of the phenomenon, we used methods that made it possible to propose approaches to further analysis. Netnography is the main research method for defining the essence and characteristics of student-led peer communication. In our research, we applied data visualization, analytical and quantitative methods, and developed a set of quantitative indicators that can be used to assess various aspects of student communication in chats. The elaborated visual model can serve as a simple tool for diagnosing group communication processes. We revealed that online group chats perform a support function in learning: they provide a constant informational resource on educational and organizational issues and create emotional comfort. The identified features serve to define shortcomings (e.g., lack of students’ readiness to freely exchange answers to assignments) and significant factors (e.g., underutilized opportunities for self-organization) that exist in the modern system of higher education.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 3
    Publication Date: 2020-08-28
    Description: Due to the growing success of neural machine translation (NMT), many have started to question its applicability within the field of literary translation. In order to grasp the possibilities of NMT, we studied the output of Google’s neural machine translation system (GNMT) and DeepL when applied to four classic novels translated from English into Dutch. The quality of the NMT systems is discussed by focusing on manual annotations, and we also employed various metrics in order to gain insight into lexical richness, local cohesion, and syntactic and stylistic differences. Firstly, we discovered that a large proportion of the translated sentences contained errors. We also observed a lower level of lexical richness and local cohesion in the NMT output compared to the human translations. In addition, NMT output is more likely to follow the syntactic structure of the source sentence, whereas human translations can deviate from it. Lastly, the human translations differ from the machine translations in style.
    Electronic ISSN: 2227-9709
    Topics: Computer Science
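The lexical-richness comparison in the entry above rests on vocabulary-diversity metrics. The paper's exact metrics are not listed here, but the type-token ratio is one standard such measure; a minimal sketch:

```python
def type_token_ratio(text: str) -> float:
    """Lexical richness as distinct words (types) over total words (tokens)."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# A repetitive (machine-like) sentence scores lower than a varied one.
repetitive = "the cat saw the cat and the cat ran"
varied = "the cat saw a dog and then bolted away"
print(type_token_ratio(repetitive))  # 5 distinct words over 9 tokens
print(type_token_ratio(varied))      # 9 distinct words over 9 tokens
```

Real studies normalize for text length (plain TTR falls as texts grow), but the comparison idea is the same.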
  • 4
    Publication Date: 2020-08-29
    Description: The emergence and outbreak of the novel coronavirus (COVID-19) had a devastating effect on global health, the economy, and individuals’ daily lives. Timely diagnosis of COVID-19 is a crucial task, as it reduces the risk of pandemic spread, and early treatment will save patients’ lives. Due to the time-consuming, complex nature, and high false-negative rate of the gold-standard RT-PCR test used for the diagnosis of COVID-19, the need for an additional diagnosis method has increased. Studies have proved the significance of X-ray images for the diagnosis of COVID-19. Applying deep-learning techniques to X-ray images can automate the diagnosis process and serve as an assistive tool for radiologists. In this study, we used four deep-learning models—DenseNet121, ResNet50, VGG16, and VGG19—with the transfer-learning concept to classify X-ray images as COVID-19 or normal. In the proposed study, VGG16 and VGG19 outperformed the other two deep-learning models. The study achieved an overall classification accuracy of 99.3%.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 5
    Publication Date: 2020-08-29
    Description: In this work, we demonstrate how the blockchain and the off-chain storage interact via Oracle-based mechanisms, which build an effective connection between a distributed database and real assets. For demonstration purposes, smart contracts were drawn up to deal with two different applications. Due to the characteristics of the blockchain, we may still encounter severe privacy issues, since the data stored on the blockchain are exposed to the public. The proposed scheme provides a general solution for resolving the above-mentioned privacy issue; that is, we try to protect the on-chain privacy of the sensitive data by using homomorphic encryption techniques. Specifically, we constructed a secure comparison protocol that can check the correctness of a logic function directly in the encrypted domain. By using the proposed access control contract and the secure comparison protocol, one can carry out sensitive data-dependent smart contract operations without revealing the data themselves.
    Electronic ISSN: 2073-431X
    Topics: Computer Science
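The secure comparison protocol in the entry above works directly on ciphertexts, which is possible because the underlying encryption is homomorphic. The paper's scheme is not reproduced here; as a minimal illustration of the property itself, textbook RSA (tiny toy parameters, no padding, insecure — illustration only) is multiplicatively homomorphic: E(a)·E(b) mod n decrypts to a·b.

```python
# Toy textbook RSA with tiny primes -- insecure, for illustration only.
p, q = 61, 53
n = p * q                 # modulus, 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent: modular inverse of e

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
product_cipher = (enc(a) * enc(b)) % n   # combine ciphertexts only
assert dec(product_cipher) == a * b      # decrypts to 42 without revealing a or b
```

Practical schemes for this kind of protocol (e.g. Paillier, BGV/CKKS) support additive or fully homomorphic operations, but the "compute on encrypted data" principle is the same.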
  • 6
    Publication Date: 2020-08-29
    Description: Healthcare facilities are constantly deteriorating due to the tight budgets allocated to the upkeep of building assets. This entails the need for improved deterioration modeling of such buildings in order to enforce a predictive maintenance approach that decreases the unexpected occurrence of failures and the corresponding downtime elapsed to repair or replace the faulty asset components. Currently, hospitals use subjective deterioration prediction methodologies that mostly rely on age as the sole indicator of degradation to forecast the useful lives of building components. Thus, this paper aims at formulating a more efficient stochastic deterioration prediction model that integrates the latest observed condition into the forecasting procedure to overcome the subjectivity and uncertainties associated with the currently employed methods. This is achieved by developing a hybrid genetic algorithm-based fuzzy Markovian model that simulates the deterioration process given the scarcity of available data on the condition assessment and evaluation of such critical facilities. A nonhomogeneous transition probability matrix (TPM) based on fuzzy membership functions representing the condition, age, and relative deterioration rate of the hospital systems is used to address the inherent uncertainties. The TPM is further calibrated by means of a genetic algorithm to circumvent the drawbacks of expert-based models. A sensitivity analysis was carried out to analyze the possible changes in the output resulting from predefined modifications to the input parameters, in order to ensure the robustness of the model. The performance of the developed deterioration prediction model is then validated through a comparison with a state-of-the-art stochastic model on real hospital datasets; the developed model significantly outperformed the long-established Weibull distribution-based deterioration prediction methodology, with mean absolute errors of 1.405 and 9.852, respectively. Therefore, the developed model is expected to assist decision-makers in creating more efficient maintenance programs as well as more data-driven capital renewal plans.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
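The Markovian core of the model above can be illustrated with a toy example: a condition-state distribution is propagated one period at a time by multiplying it with a transition probability matrix. The states and probabilities below are hypothetical, not the paper's calibrated fuzzy TPM:

```python
# Hypothetical 3-state condition model: Good, Fair, Poor.
# Each row gives the probabilities of moving to each state in one period.
tpm = [
    [0.80, 0.15, 0.05],  # from Good
    [0.00, 0.85, 0.15],  # from Fair
    [0.00, 0.00, 1.00],  # from Poor (absorbing: no repair modeled)
]

def step(state, tpm):
    """One Markov step: new[j] = sum_i state[i] * tpm[i][j]."""
    return [sum(state[i] * tpm[i][j] for i in range(len(state)))
            for j in range(len(tpm[0]))]

state = [1.0, 0.0, 0.0]   # asset starts in Good condition
for _ in range(2):        # forecast two periods ahead
    state = step(state, tpm)
print(state)              # probability mass shifts toward Fair and Poor
```

The paper's nonhomogeneous TPM would additionally vary these entries with age and deterioration rate; this sketch keeps them fixed.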
  • 7
    Publication Date: 2020-08-29
    Description: The harmonic closeness centrality measure associates, to each node of a graph, the average of the inverses of its distances from all the other nodes (assuming that unreachable nodes are at infinite distance). This notion has been adapted to temporal graphs (that is, graphs in which edges can appear and disappear over time), and in this paper we address the question of finding the top-k nodes for this metric. Computing the temporal closeness of one node can be done in O(m) time, where m is the number of temporal edges. Therefore, computing the closeness of all nodes exactly, in order to find the ones with top closeness, would require O(nm) time, where n is the number of nodes. This time complexity is intractable for large temporal graphs. Instead, we show how this measure can be efficiently approximated by using a “backward” temporal breadth-first search algorithm and a classical sampling technique. Our experimental results show that the approximation is excellent for nodes with high closeness, allowing us to detect them in practice in a fraction of the time needed to compute the exact closeness of all nodes. We validate our approach with an extensive set of experiments.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
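For reference, the static-graph version of the harmonic closeness defined above can be computed with one BFS per node; the paper's temporal variant replaces this with a "backward" temporal BFS, which is not reproduced in this sketch:

```python
from collections import deque

def harmonic_closeness(adj, v):
    """Average of 1/d(v, u) over all other nodes u; unreachable nodes
    contribute 0 (i.e. are treated as being at infinite distance)."""
    dist = {v: 0}
    queue = deque([v])
    while queue:                      # plain BFS from v
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    others = len(adj) - 1
    return sum(1 / d for u, d in dist.items() if u != v) / others

# Star graph: the hub 0 is at distance 1 from every other node.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(harmonic_closeness(adj, 0))   # (1 + 1 + 1) / 3 = 1.0
```

Running this for every node is exactly the O(nm)-style exhaustive computation the paper avoids by sampling.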
  • 8
    Publication Date: 2020-07-20
    Description: Computer programmers require various kinds of instructive information during coding and development. Such information is dispersed across different sources like language documentation, wikis, and forums. As an information exchange platform, programmers broadly utilize Stack Overflow, a Web-based question answering site. In this paper, we propose a recommender system which uses a supervised machine learning approach to investigate Stack Overflow posts and present instructive information to programmers. This can help programmers solve the programming problems they confront in their daily work. We analyzed posts related to the two most popular programming languages—Python and PHP. We performed a few trials and found that the supervised approach could effectively extract valuable information from our corpus. We validated the performance of our system through human evaluation, which showed an accuracy of 71%. We also present an interactive interface that answers users’ queries with the matching sentences containing the most instructive information.
    Electronic ISSN: 2073-431X
    Topics: Computer Science
  • 9
    Publication Date: 2020-07-19
    Description: Background: Health benefits from physical activity (PA) can be achieved by following the WHO recommendation for PA. To increase PA in inactive individuals, digital interventions can provide cost-effective and low-threshold access. Moreover, gamification elements can raise the motivation for PA. This study analyzed which factors (personality traits, app features, gamification) are relevant to increasing PA within this target group. Methods: N = 808 inactive participants (f = 480; m = 321; age = 48 ± 6) were included in the analysis of the desire for PA, the appearance of personality traits, and the resulting interest in app features and gamification. The statistical analysis included chi-squared tests, one-way ANOVA and regression analysis. Results: The main interests in PA were fitness (97%) and outdoor activities (75%). No significant interactions between personality traits, interest in PA goals, app features and gamification were found. The interest in gamification was determined by the PA goal. Participants’ requirements for features included feedback and suggestions for activities. Monetary incentives were reported as relevant gamification aspects. Conclusion: Inactive people can be reached through outdoor activities, fitness and health sports, and interventions that promote an active lifestyle. The study highlighted the interest in specific app features and gamification to increase PA in inactive people through an app.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 10
    Publication Date: 2020-07-01
    Description: This paper presents a study related to human psychophysiological activity estimation based on a smartphone camera and sensors. In recent years, awareness of the human body, as well as human mental states, has become more and more popular. Yoga and meditation practices have moved from the east to Europe, the USA, Russia, and other countries, and there are a lot of people who are interested in them. However, recently, people have tried the practice but would prefer an objective assessment. We propose to apply the modern methods of computer vision, pattern recognition, competence management, and dynamic motivation to estimate the quality of the meditation process and provide the users with objective information about their practice. We propose an approach that covers the possibility of recognizing pictures of humans from a smartphone and utilizes wearable electronics to measure the user’s heart rate and motions. We propose a model that allows building meditation estimation scores based on these parameters. Moreover, we propose a meditation expert network through which users can find the coach that is most appropriate for him/her. Finally, we propose the dynamic motivation model, which encourages people to perform the practice every day.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 11
    Publication Date: 2020-08-31
    Description: The recent literature concerning globalizing regional development has placed significant emphasis on the Global Production Network (GPN 2.0). GPN 2.0 in economic geography emphasizes that regional growth is caused by a shift in the strategic coupling mode from a low to a high level. In addition, GPN 2.0 regards firm-level value capture trajectories as the key analytical object, rather than the interactive relationships among scalar and divergent actors in GPN 1.0. To provide a better understanding of the causal linkages between GPNs and uneven regional development against the background of globalization, and to test the applicability of the GPN 2.0 analysis framework, this paper analyzed 62 Korean-invested automotive firms in Jiangsu Province, China. In order to explore the value capture trajectories of lead firms in the GPNs, the authors applied the K-means clustering method to quantitatively analyze the local supply networks of lead firms along organizational and spatial dimensions. Then, comparisons were made between the strategic coupling modes of GPNs and regional development in North and South Jiangsu. This study found obvious similarities within these two regions but obvious differences between them in terms of value capture trajectories. We observed that North Jiangsu is currently in the stage of “structural coupling”, whereas South Jiangsu is in the stage of “functional coupling”. Thus, this article argues that spatial settings such as regional assets and autonomy are key factors influencing uneven economic development. This research may provide a crucial reference for the regional development of Jiangsu, China.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
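The study above groups firms with K-means clustering. As a minimal sketch of the underlying algorithm (Lloyd's iteration on one-dimensional points; the data below are illustrative, not the study's firm-level features):

```python
def kmeans_1d(points, centers, iters=10):
    """Plain Lloyd's algorithm on scalars: assign each point to its
    nearest center, then move each center to the mean of its cluster."""
    for _ in range(iters):
        clusters = {c: [] for c in range(len(centers))}
        for x in points:
            nearest = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
            clusters[nearest].append(x)
        centers = [sum(pts) / len(pts) if pts else centers[c]
                   for c, pts in clusters.items()]
    return centers

# Two well-separated groups of "firms" along one feature axis:
data = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
print(sorted(kmeans_1d(data, centers=[0.0, 5.0])))  # centers converge near 1.0 and 9.0
```

In practice the clustering would run over multi-dimensional organizational and spatial features rather than a single axis.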
  • 12
    Publication Date: 2020-08-31
    Description: Software defined networking (SDN) is an emerging network paradigm that decouples the control plane from the data plane. The data plane is composed of forwarding elements called switches, and the control plane is composed of controllers. SDN is gaining popularity in industry and academia due to its advantages, such as centralized, flexible, and programmable network management. The increasing volume of traffic due to the proliferation of Internet of Things (IoT) devices may result in two problems: (1) an increased processing load on the controller, and (2) insufficient space in the switches’ flow tables to accommodate the flow entries. These problems may cause undesired network behavior and unstable network performance, especially in large-scale networks. Many solutions have been proposed to improve the management of the flow table, reduce the controller’s processing load, and mitigate security threats and vulnerabilities in the controllers and switches. This paper provides a comprehensive survey of existing schemes to ensure SDN meets the quality of service (QoS) demands of various applications and cloud services. Finally, potential future research directions are identified and discussed, such as the management of flow tables using machine learning.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 13
    Publication Date: 2020-07-16
    Description: High-order convective Cahn-Hilliard type equations describe the faceting of a growing surface, or the dynamics of phase transitions in ternary oil-water-surfactant systems. In this paper, we prove the well-posedness of the classical solutions of the Cauchy problem associated with this equation.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 14
    Publication Date: 2020-07-15
    Description: As Web applications become more and more complex, the development costs are increasing as well. A Model Driven Architecture (MDA) approach is proposed in this paper, since it simplifies the modeling, design, implementation, and integration of applications by defining software mainly at the model level. We adopt the Unified Modeling Language (UML) as the modeling language. UML provides a set of diagrams to model the structural and behavioral aspects of Web applications. Automatic translation of UML diagrams to object-oriented code is highly desirable because it eliminates the chance of introducing human errors. Moreover, automatic code generation helps software designers deliver the software on time. In our approach, the automatic transformations across the MDA levels are based on meta-models for two of the most important constructs of UML, namely Use Cases and classes. A proprietary tool (called xGenerator) performs the transformations down to the Java source code. The architecture of the generated Web applications respects a variant of the well-known Model-View-Controller (MVC) pattern.
    Electronic ISSN: 2073-431X
    Topics: Computer Science
  • 15
    Publication Date: 2020-07-15
    Description: It is critical for organizations to self-assess their Industry 4.0 readiness to survive and thrive in the age of the Fourth Industrial Revolution. Hence, the conceptualization or development of an Industry 4.0 readiness model with the fundamental model dimensions is needed. This paper used a systematic literature review (SLR) methodology following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines and a content analysis strategy to review 97 papers in peer-reviewed academic journals and industry reports published from 2000 to 2019. The review identifies 30 Industry 4.0 readiness models with 158 unique model dimensions. Based on this review, there are two theoretical contributions. First, this paper proposes six dimensions (Technology, People, Strategy, Leadership, Process and Innovation) that can be considered the most important dimensions for organizations. Second, this review reveals that 70 (44%) of the 158 unique dimensions on Industry 4.0 pertain to the assessment of technology alone. This establishes that organizations need to improve their technology readiness substantially in order to strengthen their overall Industry 4.0 readiness. In summary, these six most common dimensions, and in particular the dominance of the technology dimension, provide a research agenda for future research on Industry 4.0 readiness.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 16
    Publication Date: 2020-07-16
    Description: This study introduces a software-based traffic congestion monitoring system. Transportation systems control the traffic between cities all over the world, and traffic congestion happens not only in cities but also on highways and elsewhere. The current transportation system is unsatisfactory in areas without monitoring. In order to overcome the limitations of the current traffic system in obtaining road data and to expand its visual range, the proposed system uses remote sensing data as the data source for judging congestion. Since some remote sensing data needs to be kept confidential, effectively protecting the safety of remote sensing data during the deep learning training process is a problem to be solved. Compared with general deep learning training methods, this study provides a federated learning method to identify vehicle targets in remote sensing images, solving the problem of data privacy in the training process of remote sensing data. The experiment takes remote sensing image datasets of Los Angeles roads and Washington roads as training samples; the training results achieve an accuracy of about 85%, and the estimated processing time of each image can be as low as 0.047 s. In the final experimental results, the system can automatically identify the vehicle targets in the remote sensing images to achieve the purpose of detecting congestion.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
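The privacy argument in the entry above hinges on federated learning's core aggregation step: clients train locally and share only model parameters, which the server averages weighted by local dataset size (the FedAvg rule). A minimal sketch with plain Python lists standing in for model weights (data and sizes are made up):

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: the server combines client models as a
    weighted mean, so raw (possibly confidential) images never leave
    the clients -- only parameters do."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Two clients with flattened model parameters and local dataset sizes.
w_a, w_b = [0.0, 2.0], [4.0, 6.0]
print(fed_avg([w_a, w_b], client_sizes=[1, 3]))   # → [3.0, 5.0]
```

The client with three times as much data pulls the average three times as hard, which is exactly the FedAvg weighting.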
  • 17
    Publication Date: 2020-07-15
    Description: The spatially nonuniform phenomena and chaotic nature of fractals make them well suited to cryptographic applications. This paper proposes a new composite fractal function (CFF) that combines two different Mandelbrot set (MS) functions with one control parameter. The CFF simulation results demonstrate that the given map has high initial-value sensitivity, a complex structure, a wider chaotic region, and more complicated dynamical behavior. By exploiting the chaotic properties of a fractal, an image encryption algorithm using fractal-based pixel permutation and substitution is proposed. The process starts by scrambling the plain-image pixel positions using the Henon map, so that an intruder fails to obtain the original image even after deducing the standard confusion-diffusion process. The permutation phase uses a Z-scanned random fractal matrix to shuffle the scrambled image pixels. Further, two different fractal sequences of complex numbers are generated using the same function, i.e., the CFF. The complex sequences are then converted to a double-datatype matrix and used to diffuse the scrambled pixels in a row-wise and column-wise manner, separately. Security and performance analysis results confirm the reliability, high security level, and robustness of the proposed algorithm against various attacks, including brute-force attacks, known/chosen-plaintext attacks, differential attacks, and occlusion attacks.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
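The entry above scrambles pixel positions with the Henon map before permutation and diffusion. A minimal sketch of that scrambling idea, assuming the classical parameters a = 1.4, b = 0.3 and an arbitrary seed point (the paper's key values are not given here):

```python
def henon_permutation(n, x=0.1, y=0.1, a=1.4, b=0.3):
    """Derive a pixel permutation from the Henon map: iterate the map,
    then rank the chaotic x-values to obtain an index ordering."""
    xs = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x   # Henon map update
        xs.append(x)
    return sorted(range(n), key=lambda i: xs[i])

def scramble(pixels, perm):
    return [pixels[i] for i in perm]

def unscramble(scrambled, perm):
    out = [0] * len(perm)
    for pos, i in enumerate(perm):
        out[i] = scrambled[pos]
    return out

pixels = [10, 20, 30, 40, 50, 60]
perm = henon_permutation(len(pixels))
assert unscramble(scramble(pixels, perm), perm) == pixels  # round-trips exactly
```

Because the map is highly sensitive to its initial values, the seed (x, y) acts as the scrambling key: a receiver with the same seed regenerates the same permutation.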
  • 18
    Publication Date: 2020-07-08
    Description: In the last decade, there has been a surge in interest in connected and automated vehicles (CAVs) and related enabling technologies in the fields of communication, automation, computing, sensing, and positioning [...]
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 19
    Publication Date: 2020-07-08
    Description: We consider a rather general problem of nonparametric estimation of an uncountable set of probability density functions (p.d.f.’s) of the form f(x; r), where r is a non-random real variable ranging from R1 to R2. We put emphasis on the algorithmic aspects of this problem, since they are crucial for the exploratory analysis of the big data needed for the estimation. A specialized learning algorithm, based on the 2D FFT, is proposed and tested on observations that allow estimating the p.d.f.’s of jet engine temperatures as a function of rotation speed. We also derive theoretical results concerning the convergence of the estimation procedure, which contain hints on selecting the parameters of the estimation algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
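For a sense of the quantity being estimated above, the sketch below evaluates a plain Gaussian kernel density estimate directly; the paper's contribution, a 2D-FFT-accelerated algorithm over the extra variable r, is not reproduced here:

```python
import math

def gaussian_kde(samples, x, h):
    """Direct kernel density estimate:
    f(x) = (1 / (n*h)) * sum_i K((x - x_i) / h), Gaussian kernel K.
    The paper accelerates sums of this kind with a 2D FFT; this
    direct O(n)-per-point evaluation omits that speedup."""
    n = len(samples)
    k = lambda u: math.exp(-0.5 * u * u) / math.sqrt(2 * math.pi)
    return sum(k((x - xi) / h) for xi in samples) / (n * h)

# Density estimated from samples clustered around 0 is highest near 0.
samples = [-0.2, -0.1, 0.0, 0.1, 0.2]
print(gaussian_kde(samples, 0.0, h=0.5))   # high: inside the cluster
print(gaussian_kde(samples, 2.0, h=0.5))   # low: far from all samples
```

The bandwidth h plays the role of the smoothing parameter whose selection the paper's convergence results give hints about.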
  • 20
    Publication Date: 2020-07-08
    Description: The lockdown was crucial to stopping the COVID-19 pandemic in Italy, but it affected many aspects of social life, including traditional live science cafés. Moreover, citizens and experts asked for direct contact, rather than relying on mass-media communication. In this paper, we describe how the Florence and Rome science cafés, contacted by citizens and experts either directly or through the Florence science shop, responded to these needs by organizing online versions of traditional face-to-face events, experiencing high levels of participation. The science café methodology was also requested by a high school that needed to conclude an engagement experience with students and their families. We also report the results of a survey about satisfaction with this new methodology with respect to the old one.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 21
    Publication Date: 2020-07-09
    Description: This research presents a machine vision approach to detecting lesions in liver ultrasound, while also resolving some issues of ultrasound imaging such as artifacts, speckle noise, and blurring. The anisotropic diffusion is modified using edge preservation conditions, which were found to perform better than the traditional ones in quantitative evaluation. To extract more latent information, a learnable super-resolution (SR) module is embedded into the deep CNN. Features are fused using the Gabor Wavelet Transform (GWT) and Local Binary Pattern (LBP) with a pre-trained deep CNN model. Moreover, we propose a Bayes-rule-based informative patch selection approach to reduce processing time with selective image patches, and design an algorithm to mark the lesion region from the identified ultrasound image patches. The model is trained on standard data of reliable resolution, while the testing phase considers generalized data of varying resolution to test the model’s performance. Exploring cross-validation, we find that a 5-fold strategy can successfully eradicate the overfitting problem. Experimental data were collected from 298 consecutive ultrasounds comprising 15,296 image patches. The proposed feature fusion technique confirms satisfactory performance compared to the current relevant works, with an accuracy of 98.40%.
    Electronic ISSN: 2504-4990
    Topics: Computer Science
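Among the features fused above is the Local Binary Pattern (LBP). A minimal sketch of the basic 3×3 LBP code for a single pixel (the bit ordering is a convention chosen here, not necessarily the paper's):

```python
def lbp_code(img, r, c):
    """Basic 3x3 LBP: compare the 8 neighbours (clockwise from top-left)
    against the centre pixel; each neighbour >= centre sets one bit."""
    centre = img[r][c]
    neighbours = [img[r-1][c-1], img[r-1][c], img[r-1][c+1],
                  img[r][c+1],   img[r+1][c+1], img[r+1][c],
                  img[r+1][c-1], img[r][c-1]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code

img = [[5, 9, 1],
       [3, 5, 7],
       [9, 2, 5]]
print(lbp_code(img, 1, 1))   # → 91 (bits 0, 1, 3, 4, 6 set)
```

A histogram of these codes over an image patch is the LBP texture descriptor that gets fused with the GWT and CNN features.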
  • 22
    Publication Date: 2020-07-10
    Description: QR (quick response) Codes are one of the most popular types of two-dimensional (2D) matrix codes currently used in a wide variety of fields. Two-dimensional matrix codes, compared to 1D bar codes, can encode significantly more data in the same area. We have compared algorithms capable of localizing multiple QR Codes in an image using typical finder patterns, which are present in three corners of a QR Code. Finally, we present a novel approach to identify perspective distortion by analyzing the direction of horizontal and vertical edges and by maximizing the standard deviation of horizontal and vertical projections of these edges. This algorithm is computationally efficient, works well for low-resolution images, and is also suited to real-time processing.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
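The perspective-distortion idea above can be illustrated on a toy binary edge map: an axis-aligned QR-like pattern concentrates its edge pixels into a few columns, so the standard deviation of its vertical projection is high, while a skewed pattern smears out and scores lower. A sketch of that scoring step only (illustrative data, not the full method):

```python
def projection_std(edges):
    """Standard deviation of the vertical projection (column sums) of a
    binary edge map. Axis-aligned grid patterns give peaky, high-variance
    projections; perspective-skewed ones flatten out and score lower."""
    cols = [sum(row[j] for row in edges) for j in range(len(edges[0]))]
    mean = sum(cols) / len(cols)
    return (sum((c - mean) ** 2 for c in cols) / len(cols)) ** 0.5

aligned = [[1, 0, 1, 0],
           [1, 0, 1, 0],
           [1, 0, 1, 0]]
skewed  = [[1, 0, 1, 0],
           [0, 1, 0, 1],
           [1, 0, 1, 0]]
print(projection_std(aligned))   # peaky columns: high std
print(projection_std(skewed))    # smeared columns: low std
```

The algorithm in the entry searches for the rectification that maximizes this kind of score in both the horizontal and vertical directions.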
  • 23
    Publication Date: 2020-07-08
    Description: Deep learning models have been applied to varied electrical applications in smart grids with a high degree of reliability and accuracy. The development of deep learning models requires historical data collected from several electric utilities for training. The lack of historical data for training and testing the developed models, given security and privacy policy restrictions, is considered one of the greatest challenges to machine learning-based techniques. This paper proposes the use of homomorphic encryption, which makes it possible to train deep learning and classical machine learning models whilst preserving the privacy and security of the data. The proposed methodology is tested on fault identification and localization, and on load forecasting, in smart grids. The results for fault localization show that the classification accuracy of the proposed privacy-preserving deep learning model using homomorphic encryption is 97–98%, which is close to the 98–99% classification accuracy of the model on plain data. Additionally, for the load forecasting application, the results show that the RMSE of the homomorphic encryption model is 0.0352 MWh, while the RMSE without encryption in the modeling is around 0.0248 MWh.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
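The abstract does not name the encryption scheme used. As a minimal illustration of how arithmetic can run on encrypted values, here is a toy additively homomorphic (Paillier-style) sketch with demo-sized primes — two ciphertexts are multiplied, and the result decrypts to the sum of the plaintexts. Real deployments use moduli of 2048 bits or more.

```python
import math
import random

def paillier_keygen(p=251, q=257):
    """Tiny demo primes only; not secure."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    g = n + 1                      # standard choice, simplifies decryption
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:     # r must be coprime to n
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    L = (pow(c, lam, n * n) - 1) // n
    return (L * mu) % n

pub, priv = paillier_keygen()
# Homomorphic addition: multiply ciphertexts, decrypt the sum.
c = (encrypt(pub, 3) * encrypt(pub, 4)) % (pub[0] ** 2)
assert decrypt(priv, c) == 7
```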
  • 24
    Publication Date: 2020-07-07
    Description: Fifth generation (5G) is a new-generation mobile communication system developed to meet the growing demand for mobile communication. Channel coding is an indispensable part of most modern digital communication systems, as it improves transmission reliability and interference resistance. To meet the requirements of 5G communication, a dual threshold self-corrected minimum sum (DT-SCMS) algorithm for low-density parity-check (LDPC) decoders is proposed in this paper, together with an LDPC decoder architecture. By setting thresholds to judge the reliability of messages, the DT-SCMS algorithm erases unreliable messages, improving decoding performance and efficiency. Simulation results show that DT-SCMS outperforms SCMS: at a code rate of 1/3, performance improves by 0.2 dB at a bit error rate of 10^−4. In terms of convergence, at a code rate of 2/3, the number of iterations of DT-SCMS can be reduced by up to 20.46% compared with SCMS, with an average reduction of 18.68%.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
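The building blocks mentioned above can be sketched in a few lines. The min-sum check-node update is standard; the self-correction step is a generic sketch of the erasure idea (zero out messages judged unreliable), with a single illustrative threshold `thr` — the exact dual-threshold rule of DT-SCMS is not reproduced here.

```python
import numpy as np

def check_node_min_sum(v2c: np.ndarray) -> np.ndarray:
    """Min-sum check-node update: each outgoing message takes the sign of
    the product of the other incoming signs and the magnitude of their
    minimum absolute value."""
    out = np.empty_like(v2c)
    for i in range(len(v2c)):
        others = np.delete(v2c, i)
        out[i] = np.prod(np.sign(others)) * np.min(np.abs(others))
    return out

def self_correct(new: np.ndarray, old: np.ndarray, thr: float = 0.5) -> np.ndarray:
    """Sketch of self-correction: erase (zero) a message whose sign flipped
    between iterations, treating it as unreliable; `thr` mimics a
    reliability threshold on the magnitude (illustrative assumption)."""
    unreliable = (np.sign(new) != np.sign(old)) | (np.abs(new) < thr)
    return np.where(unreliable, 0.0, new)

msgs = np.array([1.2, -0.8, 2.5])
print(check_node_min_sum(msgs))   # [-0.8, 1.2, -0.8]
```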
  • 25
    Publication Date: 2020-07-09
    Description: We report the design of a Spiking Neural Network (SNN) edge detector with biologically inspired neurons that is conceptually similar to both Hodgkin-Huxley (HH) model neurons and Leaky Integrate-and-Fire (LIF) neurons. The membrane potential, which determines the occurrence or absence of a spike event at each time step, is computed using the analytical solution to a simplified version of the HH neuron model. We find that the SNN-based edge detector detects more edge pixels in images than a Sobel edge detector. We designed a pipeline for image classification with a low-exposure frame simulation layer, SNN edge detection layers as pre-processing layers, and a Convolutional Neural Network (CNN) as the classification module. We tested this pipeline on the Digits dataset, which is available in MATLAB. We find that the SNN-based edge detection layer increases the image classification accuracy at lower exposure times, that is, for 1 < t < T/4, where t is the number of milliseconds in a simulated exposure frame and T is the total exposure time, relative to a Sobel or Canny edge detection layer in the pipeline. These results pave the way for developing novel cognitive neuromorphic computing architectures for millisecond-timescale detection and object classification applications using event or spike cameras.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
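The membrane-potential mechanism described above can be illustrated with a plain LIF neuron (a generic sketch, not the paper's simplified-HH analytical solution): the potential leaks toward rest with time constant `tau`, integrates the input, and a spike is emitted and the potential reset whenever the threshold is crossed. All parameter values are illustrative.

```python
def lif_response(inputs, tau=10.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Leaky Integrate-and-Fire neuron (forward-Euler sketch).
    Returns a 0/1 spike train, one entry per input sample."""
    v, spikes = 0.0, []
    for i in inputs:
        v += dt * (-v / tau + i)   # leaky integration of the input current
        if v >= v_th:
            spikes.append(1)
            v = v_reset            # reset after the spike
        else:
            spikes.append(0)
    return spikes

# Constant supra-threshold drive produces a regular spike train.
print(lif_response([0.3] * 10))
```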
  • 26
    Publication Date: 2020-07-08
    Description: The collection and processing of personal data offers great opportunities for technological advances, but the accumulation of vast amounts of personal data also increases the risk of misuse for malicious intentions, especially in health care. Personal data are therefore legally protected, e.g., by the European General Data Protection Regulation (GDPR), which states that individuals must be transparently informed and have the right to take control over the processing of their personal data. In real applications, these requirements are fulfilled by privacy policies, which can be negotiated via user interfaces. The literature proposes privacy languages as an electronic format for privacy policies, while users' privacy preferences are represented by preference languages. However, this is only the beginning of the personal data life-cycle, which also includes the processing of personal data and its transfer to various stakeholders. In this work we define a personal privacy workflow covering the negotiation of privacy policies, privacy-preserving processing, and secondary use of personal data, in the context of health care data processing, and survey applicable Privacy Enhancing Technologies (PETs) that ensure individuals' privacy. Based on a broad literature review, we identify open research questions for each step of the workflow.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 27
    Publication Date: 2020-07-05
    Description: Microscopic crowd simulation can help to enhance the safety of pedestrians in situations that range from museum visits to music festivals. To obtain a useful prediction, the input parameters must be chosen carefully. In many cases, a lack of knowledge or limited measurement accuracy add uncertainty to the input. In addition, for meaningful parameter studies, we first need to identify the most influential parameters of our parametric computer models. The field of uncertainty quantification offers standardized and fully automatized methods that we believe to be beneficial for pedestrian dynamics. In addition, many methods come at a comparatively low cost, even for computationally expensive problems. This allows for their application to larger scenarios. We aim to identify and adapt fitting methods to microscopic crowd simulation in order to explore their potential in pedestrian dynamics. In this work, we first perform a variance-based sensitivity analysis using Sobol’ indices and then crosscheck the results by a derivative-based measure, the activity scores. We apply both methods to a typical scenario in crowd simulation, a bottleneck. Because constrictions can lead to high crowd densities and delays in evacuations, several experiments and simulation studies have been conducted for this setting. We show qualitative agreement between the results of both methods. Additionally, we identify a one-dimensional subspace in the input parameter space and discuss its impact on the simulation. Moreover, we analyze and interpret the sensitivity indices with respect to the bottleneck scenario.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
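The variance-based sensitivity analysis mentioned above can be illustrated on a toy model. The sketch below uses the standard Saltelli pick-freeze estimator for first-order Sobol' indices; the model `f`, sample size, and input distributions are illustrative assumptions, not the crowd-simulation setup of the paper. For this additive model the exact indices are 16/17 and 1/17.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Toy model with one dominant and one weak input."""
    return 4.0 * x[:, 0] + 1.0 * x[:, 1]

N, d = 100_000, 2
A = rng.uniform(size=(N, d))       # two independent input samples
B = rng.uniform(size=(N, d))
fA, fB = f(A), f(B)
var = np.var(np.concatenate([fA, fB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]            # freeze all inputs except the i-th
    # Saltelli-style first-order estimator
    S.append(np.mean(fB * (f(ABi) - fA)) / var)

print(S)   # approximately [16/17, 1/17] for this additive model
```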
  • 28
    Publication Date: 2020-06-30
    Description: The use of chatbots in news media platforms, although relatively recent, offers many advantages to journalists and media professionals and, at the same time, facilitates users’ interaction with useful and timely information. This study shows the usability of a news chatbot during a crisis situation, employing the 2020 COVID-19 pandemic as a case study. The basic targets of the research are to design and implement a chatbot in a news media platform with a two-fold aim in regard to evaluation: first, the technical effort of creating a functional and robust news chatbot in a crisis situation both from the AI perspective and interoperability with other platforms, which constitutes the novelty of the approach; and second, users’ perception regarding the appropriation of this news chatbot as an alternative means of accessing existing information during a crisis situation. The chatbot designed was evaluated in terms of effectively fulfilling the social responsibility function of crisis reporting, to deliver timely and accurate information on the COVID-19 pandemic to a wide audience. In this light, this study shows the advantages of implementing chatbots in news platforms during a crisis situation, when the audience’s needs for timely and accurate information rapidly increase.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 29
    Publication Date: 2020-06-30
    Description: Twitter is a microblogging platform that generates large volumes of data with high velocity. This daily generation of unbounded and continuous data leads to Big Data streams that often require real-time, distributed, and fully automated processing. Hashtags, hyperlinked words in tweets, are widely used for tweet topic classification, retrieval, and clustering, and for analyzing tweet sentiments, where emotions can be classified without context. However, despite their wide usage, general tweet topic classification using hashtags is challenging due to their evolving nature, lack of context, slang, abbreviations, and non-standardized expression by users. Most existing approaches that utilize hashtags for tweet topic classification focus on extracting hashtag concepts from external lexicon resources to derive semantics. However, due to the rapid evolution and non-standardized expression of hashtags, most of these lexicon resources either lack hashtag words in their knowledge bases or must combine multiple resources to derive semantics, which makes them unscalable. Along with scalable and automated techniques for hashtag-based tweet topic classification, real-time analytics approaches are also required to handle the huge and dynamic flows of textual streams generated by Twitter. To address these problems, this paper first presents a novel semi-automated technique that derives semantically relevant hashtags using a domain-specific knowledge base of topic concepts and combines them with the existing tweet-based hashtags to produce Hybrid Hashtags. Further, to deal with the speed and volume of Big Data streams of tweets, we present an online approach that updates the preprocessing and learning model incrementally in a real-time streaming environment using the distributed framework Apache Storm.
Finally, to fully exploit the batch and stream environment performance advantages, we propose a comprehensive framework (Hybrid Hashtag-based Tweet topic classification (HHTC) framework) that combines batch and online mechanisms in the most effective way. Extensive experimental evaluations on a large volume of Twitter data show that the batch and online mechanisms, along with their combination in the proposed framework, are scalable, efficient, and provide effective tweet topic classification using hashtags.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 30
    Publication Date: 2020-06-30
    Description: Standard (Lomb-Scargle, likelihood, etc.) procedures for power-spectrum analysis provide convenient estimates of the significance of any peak in a power spectrum, based—typically—on the assumption that the measurements being analyzed have a normal (i.e., Gaussian) distribution. However, the measurement sequence provided by a real experiment or a real observational program may not meet this requirement. The RONO (rank-order normalization) procedure generates a proxy distribution that retains the rank-order of the original measurements but has a strictly normal distribution. The proxy distribution may then be analyzed by standard power-spectrum analysis. We show by an example that the resulting power spectrum may prove to be quite close to the power spectrum obtained from the original data by a standard procedure, even if the distribution of the original measurements is far from normal. Such a comparison would tend to validate the original analysis.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
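The rank-order normalization step described above is simple to sketch: each measurement is replaced by the standard-normal quantile of its rank position, which preserves the rank order while forcing an exactly Gaussian-shaped proxy distribution. The (rank + 0.5)/n plotting position is an illustrative choice; ties are broken by the stable sort.

```python
from statistics import NormalDist

def rank_order_normalize(data):
    """RONO-style proxy: map the k-th smallest value (rank k, 0-based)
    to the normal quantile at (k + 0.5) / n, keeping the original order
    of the input sequence."""
    n = len(data)
    order = sorted(range(n), key=lambda i: data[i])
    nd = NormalDist()
    proxy = [0.0] * n
    for rank, i in enumerate(order):
        proxy[i] = nd.inv_cdf((rank + 0.5) / n)
    return proxy

x = [10.0, 0.5, 3.0, 100.0]        # heavily skewed measurements
print(rank_order_normalize(x))      # same rank order, normal quantiles
```

The proxy sequence can then be fed to any standard (e.g., Lomb-Scargle) power-spectrum procedure that assumes Gaussian measurements.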
  • 31
    Publication Date: 2020-06-30
    Description: In response to the strong demand for very high-speed processor I/O, the physical performance of hardware I/O has increased drastically over the past decade. However, recent Big Data applications still demand larger I/O bandwidth and lower latency. Because raw I/O performance no longer improves as quickly, it is time to consider other ways to increase it. To overcome this challenge, we focus on lossless data compression technology to decrease the amount of data itself in the data communication path. Recent Big Data applications process data streams that flow continuously and, due to their high speed, cannot tolerate stalled processing. Therefore, an elegant hardware-based data compression technology is needed. This paper proposes a novel lossless data compression method, called ASE coding. It encodes streaming data using an entropy coding approach: ASE coding instantly assigns the fewest bits to the corresponding compressed data according to the number of occupied entries in a look-up table. This paper describes the detailed mechanism of ASE coding. Furthermore, performance evaluations demonstrate that ASE coding adaptively shrinks streaming data and works with a small amount of hardware resources, without stalling or buffering any part of the data stream.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
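The look-up-table idea can be illustrated with a toy code-length model. This is a loose sketch of the principle stated in the abstract, not the published ASE specification: a table grows as new symbols arrive, a repeated symbol is coded with just enough bits to index the currently occupied entries, and a new symbol costs an assumed escape-plus-literal of 9 bits.

```python
import math

def ase_like_lengths(stream):
    """Per-symbol code lengths under the toy adaptive-LUT model.
    Repeated symbols cost ceil(log2(occupied entries)) bits (at least 1);
    unseen symbols cost 9 bits (escape marker + 8-bit literal, assumed)."""
    table, lengths = [], []
    for s in stream:
        if s in table:
            lengths.append(max(1, math.ceil(math.log2(len(table)))))
        else:
            table.append(s)
            lengths.append(9)
    return lengths

data = list(b"aaabbaaac")
print(sum(ase_like_lengths(data)), "bits vs", 8 * len(data), "raw")
```

A low-entropy stream with few distinct symbols keeps the table small, so repeated symbols compress well, which mirrors the adaptive behaviour claimed for ASE coding.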
  • 32
    Publication Date: 2020-06-30
    Description: When highly automated driving is realized, the role of the driver will change dramatically; drivers will even be able to sleep during the drive. However, when awaking from sleep, drivers often experience sleep inertia, meaning they feel groggy and their driving performance is impaired, which can be an issue with the concept of dual-mode vehicles that allow both manual and automated driving. Proactive methods to avoid sleep inertia, like the widely applied 'NASA nap', are not immediately practicable in automated driving. Therefore, a reactive countermeasure, the sleep inertia counter-procedure for drivers (SICD), has been developed with the aim of activating and motivating the driver as well as measuring the driver's alertness level. The SICD is evaluated in a study with N = 21 drivers in a highly automated driving simulator. The SICD was able to activate the driver after sleep and was perceived as "assisting" by the drivers. It was not capable of measuring the driver's alertness level. The interpretation of the findings is limited by the lack of a comparative baseline condition. Future research is needed on direct comparisons of different countermeasures to sleep inertia that are effective and accepted by drivers.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 33
    Publication Date: 2020-07-01
    Description: Text annotation is the process of identifying the sense of a textual segment within a given context to a corresponding entity on a concept ontology. As the limitations of the bag-of-words paradigm become increasingly discernible in modern applications, several information retrieval and artificial intelligence tasks are shifting to semantic representations to address the inherent natural language polysemy and homonymy challenges. With extensive application in a broad range of scientific fields, such as digital marketing, bioinformatics, chemical engineering, neuroscience, and social sciences, community detection has attracted great scientific interest. Focusing on linguistics, where it aims to identify densely interconnected subgroups of semantic ontologies, the application of community detection has proven beneficial in terms of disambiguation improvement and ontology enhancement. In this paper we introduce a novel distributed supervised knowledge-based methodology employing community detection algorithms for text annotation with Wikipedia Entities, establishing the unprecedented concept of community Coherence as a metric for local contextual coherence compatibility. Our experimental evaluation revealed that deeper inference of relatedness and local entity community coherence in the Wikipedia graph yields substantial improvements overall, via a focus on improving the accuracy of less common annotations. The proposed methodology is propitious for wider adoption, attaining robust disambiguation performance.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 34
    Publication Date: 2020-07-02
    Description: The problem posed by complex, articulated or deformable objects has been at the focus of much tracking research for a considerable length of time. However, it remains a major challenge, fraught with numerous difficulties. The increased ubiquity of technology in all realms of our society has made the need for effective solutions all the more urgent. In this article, we describe a novel method which systematically addresses the aforementioned difficulties and in practice outperforms the state of the art. Global spatial flexibility and robustness to deformations are achieved by adopting a pictorial-structure-based geometric model, and localized appearance changes by a subspace-based model of part appearance underlain by a gradient-based representation. In addition to one-off learning of both the geometric constraints and part appearances, we introduce a continuing learning framework which implements information discounting, i.e., the discarding of historical appearances in favour of more recent ones. Moreover, as a means of ensuring robustness to transient occlusions (including self-occlusions), we propose a solution for detecting unlikely appearance changes which allows unreliable data to be rejected. A comprehensive evaluation of the proposed method, an analysis and discussion of the findings, and a comparison with several state-of-the-art methods demonstrate the major superiority of our algorithm.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
  • 35
    Publication Date: 2020-07-02
    Description: Image fusion is a process that integrates similar types of images, collected from heterogeneous sources, into one image in which the information is more definite and certain. The resultant image is therefore expected to be more informative for both human and machine perception. Different image combination methods have been presented to consolidate significant data from a collection of images into one image. Given its applications and advantages in a variety of fields, such as remote sensing, surveillance, and medical imaging, it is important to understand image fusion algorithms and compare them. This paper presents a review of the present state-of-the-art and well-known image fusion techniques. The performance of each algorithm is assessed qualitatively and quantitatively on two benchmark multi-focus image datasets. We also produce a multi-focus image fusion dataset by collecting the test images widely used in different studies. The quantitative evaluation of fusion results is performed using a set of image fusion quality assessment metrics, and performance is also evaluated using different statistical measures. Another contribution of this paper is the proposal of a multi-focus image fusion library; to the best of our knowledge, no such library exists so far. The library provides implementations of numerous state-of-the-art image fusion algorithms and is made publicly available at the project website.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
  • 36
    Publication Date: 2020-07-02
    Description: Fitness and physical exercise are preferred in the pursuit of healthier and active lifestyles. The number of mobile applications aiming to replace or complement a personal trainer is increasing. However, this also raises questions about the reliability, integrity, and even safety of the information provided by such applications. In this study, we review mobile applications that serve as virtual personal trainers. We present a systematic review of 36 related mobile applications, updated between 2017 and 2020, classifying them according to their characteristics. The selection criteria consider the following combination of keywords: "workout", "personal trainer", "physical activity", "fitness", "gymnasium", and "daily plan". Based on the analysis of the identified mobile applications, we propose a new taxonomy and present detailed guidelines on creating mobile applications for personalised workouts. Finally, we investigated how mobile applications can promote the health and well-being of users and whether the identified applications are used in any scientific studies.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 37
    Publication Date: 2020-08-31
    Description: Text similarity measurement is the basis of natural language processing tasks and plays an important role in information retrieval, automatic question answering, machine translation, dialogue systems, and document matching. This paper systematically surveys the state of research on similarity measurement, analyzes the advantages and disadvantages of current methods, develops a more comprehensive classification system for text similarity measurement algorithms, and summarizes future development directions. With the aim of providing a reference for related research and applications, text similarity measurement methods are described from two aspects: text distance and text representation. Text distance can be divided into length distance, distribution distance, and semantic distance; text representation is divided into string-based, corpus-based, single-semantic text, multi-semantic text, and graph-structure-based representation. Finally, the development of text similarity is summarized in the discussion section.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
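Two of the measure families named above are easy to make concrete. Levenshtein edit distance is a representative length distance, and cosine similarity over word counts is a simple string/corpus-based representation; both are standard textbook formulations, shown here only as illustrations of the taxonomy.

```python
import math
from collections import Counter

def levenshtein(a: str, b: str) -> int:
    """Edit distance (length-distance family): minimum number of
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[-1] + 1,            # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def cosine(a: str, b: str) -> float:
    """Cosine similarity over bag-of-words counts."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

print(levenshtein("kitten", "sitting"))   # 3
```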
  • 38
    Publication Date: 2020-06-30
    Description: Partially automated driving (PAD, Society of Automotive Engineers (SAE) level 2) features provide steering and brake/acceleration support, while the driver must constantly supervise the support feature and intervene if needed to maintain safety. PAD could potentially increase comfort, road safety, and traffic efficiency. As during manual driving, users might engage in non-driving related tasks (NDRTs). However, studies systematically examining NDRT execution during PAD are rare and most importantly, no established methodologies to systematically evaluate driver distraction during PAD currently exist. The current project’s goal was to take the initial steps towards developing a test protocol for systematically evaluating NDRT’s effects during PAD. The methodologies used for manual driving were extended to PAD. Two generic take-over situations addressing system limits of a given PAD regarding longitudinal and lateral control were implemented to evaluate drivers’ supervisory and take-over capabilities while engaging in different NDRTs (e.g., manual radio tuning task). The test protocol was evaluated and refined across the three studies (two simulator and one test track). The results indicate that the methodology could sensitively detect differences between the NDRTs’ influences on drivers’ take-over and especially supervisory capabilities. Recommendations were formulated regarding the test protocol’s use in future studies examining the effects of NDRTs during PAD.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 39
    Publication Date: 2020-06-30
    Description: This research concerns the application of micro X-ray fluorescence (µXRF) mapping to the investigation of a group of selected metal objects from the archaeological site of Ferento, a Roman and then medieval town in Central Italy. Specifically, attention was focused on two test pits, named IV and V, in which metal objects were found, mainly pertaining to the medieval period and never investigated before the present work from a compositional point of view. The potentiality of µXRF mapping was tested through a Bruker Tornado M4 equipped with an Rh tube, operating at 50 kV, 500 μA, and spot 25 μm obtained with polycapillary optics. Principal component analysis (PCA) and multivariate curve resolution (MCR) were used for processing the X-ray fluorescence spectra. The results showed that the investigated items are characterized by different compositions in terms of chemical elements. Three little wheels are made of lead, while the fibulae are made of copper-based alloys with varying amounts of tin, zinc, and lead. Only one ring is iron-based, and the other objects, namely a spatula and an applique, are also made of copper-based alloys, but with different relative amounts of the main elements. In two objects, traces of gold were found, suggesting the precious character of these pieces. MCR analysis was demonstrated to be particularly useful to confirm the presence of trace elements, such as gold, as it could differentiate the signals related to minor elements from those due to major chemical elements.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
  • 40
    Publication Date: 2020-06-30
    Description: Geomechanical modelling of the processes associated with the exploitation of subsurface resources, such as land subsidence or triggered/induced seismicity, is a common practice of major interest. The prediction reliability depends on different sources of uncertainty, such as the parameterization of the constitutive model characterizing the deep rock behaviour. In this study, we focus on a Sobol'-based sensitivity analysis and on uncertainty reduction via assimilation of land deformations. A synthetic test case application on a deep hydrocarbon reservoir is considered, where land settlements are predicted with the aid of a 3-D Finite Element (FE) model. Data assimilation is performed via the Ensemble Smoother (ES) technique and its variation in the form of Multiple Data Assimilation (ES-MDA). However, ES convergence is guaranteed only with a large number of Monte Carlo (MC) simulations, which may be computationally infeasible for large-scale and complex systems. For this reason, a surrogate model based on the generalized Polynomial Chaos Expansion (gPCE) is proposed as an approximation of the forward problem. This approach allows efficient computation of the Sobol' indices for the sensitivity analysis and greatly reduces the computational cost of the original ES and ES-MDA formulations, while also enhancing the accuracy of the overall prediction process.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
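The Ensemble Smoother update mentioned above can be illustrated on a toy linear inverse problem. Everything here is an illustrative assumption (a 1-parameter model, a hand-picked linear forward operator `G`, ensemble size 500), not the paper's FE reservoir model: an ensemble drawn from the prior is corrected with a Kalman-style gain built from ensemble covariances and perturbed observations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: recover parameter m from noisy data d = G m + noise.
G = np.array([[1.0], [2.0], [3.0]])            # assumed linear forward operator
m_true = np.array([2.0])
obs_std = 0.1
d_obs = G @ m_true + rng.normal(0, obs_std, 3)

N = 500                                        # ensemble size
M = rng.normal(0.0, 2.0, (1, N))               # prior parameter ensemble
D = G @ M                                      # predicted data ensemble
R = obs_std**2 * np.eye(3)                     # observation error covariance

# Ensemble Smoother update: M_post = M + C_md (C_dd + R)^-1 (d_pert - D)
dM = M - M.mean(axis=1, keepdims=True)
dD = D - D.mean(axis=1, keepdims=True)
C_md = dM @ dD.T / (N - 1)                     # cross-covariance
C_dd = dD @ dD.T / (N - 1)                     # data covariance
perturbed = d_obs[:, None] + rng.normal(0, obs_std, (3, N))
M_post = M + C_md @ np.linalg.solve(C_dd + R, perturbed - D)

print(M_post.mean())   # close to the true value 2.0
```

ES-MDA repeats this update several times with inflated observation noise; the gPCE surrogate of the paper would replace the explicit `G @ M` forward evaluations.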
  • 41
    Publication Date: 2020-06-30
    Description: Prior research has found that user personality significantly affects technology acceptance perceptions and decisions. Yet, evidence on the moderating influence of user gender on the relationship between personality and technology acceptance is scarce, despite theoretical consideration. Considering this research gap, the present study reports the results of a survey in which we examined the relationships between personality and technology acceptance from a gender perspective. This study draws upon a sample of N = 686 participants (n = 209 men, n = 477 women) and applied the HEXACO Personality Inventory—Revised along with established technology acceptance measures. The major result of this study is that we do not find a significant influence of user gender on the relationship between personality and technology acceptance, except for one aspect of personality, namely altruism. We found a negative association between altruism and intention to use the smartphone in men, but a positive association in women. Consistent with this finding, we also found the same association pattern for altruism and predicted usage: a negative one in men and a positive one in women. Implications for research and practice are discussed, along with limitations of the present study and possible avenues for future research.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 42
    Publication Date: 2020-06-30
    Description: Clustering is an unsupervised machine learning technique with many practical applications that has gathered extensive research interest. Aside from deterministic or probabilistic techniques, fuzzy C-means clustering (FCM) is also a common clustering technique. Since the advent of the FCM method, many improvements have been made to increase clustering efficiency. These improvements focus on adjusting the membership representation of elements in the clusters, or on fuzzifying and defuzzifying techniques, as well as the distance function between elements. This study proposes a novel fuzzy clustering algorithm using multiple different fuzzification coefficients depending on the characteristics of each data sample. The proposed fuzzy clustering method has similar calculation steps to FCM with some modifications. The formulas are derived to ensure convergence. The main contribution of this approach is the utilization of multiple fuzzification coefficients as opposed to only one coefficient in the original FCM algorithm. The new algorithm is then evaluated with experiments on several common datasets and the results show that the proposed algorithm is more efficient compared to the original FCM as well as other clustering methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
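The central modification described above, a per-sample fuzzification coefficient, can be sketched directly on top of the standard FCM iteration. This is an illustrative reconstruction under the assumption that the usual membership and centre updates simply use m[i] in place of a global m; the paper's exact derivation is not reproduced. With all m[i] equal, plain FCM is recovered.

```python
import numpy as np

def fcm_multi_m(X, K, m, iters=50):
    """Fuzzy C-means with a per-sample fuzzifier m[i] (sketch).
    Membership: u_ik = 1 / sum_j (d_ik / d_ij)^(2 / (m_i - 1));
    centres weighted by u_ik^{m_i}."""
    # deterministic spread-out initialization (demo convenience)
    C = X[np.linspace(0, len(X) - 1, K).astype(int)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2) + 1e-12
        U = 1.0 / np.sum(
            (d[:, :, None] / d[:, None, :]) ** (2.0 / (m[:, None, None] - 1.0)),
            axis=2)
        W = U ** m[:, None]                      # weights u_ik^{m_i}
        C = (W.T @ X) / W.sum(axis=0)[:, None]   # weighted centre update
    return U, C

# Two well-separated blobs; equal coefficients reproduce standard FCM.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(3.0, 0.1, (20, 2))])
m = np.full(len(X), 2.0)
U, C = fcm_multi_m(X, 2, m)
print(np.sort(C[:, 0]))   # centres near 0 and 3
```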
  • 43
    Publication Date: 2020-07-02
    Description: An accurate estimate of passenger attendance in each metro car contributes to safely coordinating and sorting crowds of passengers in each metro station. In this work we propose a multi-head Convolutional Neural Network (CNN) architecture trained to estimate passenger attendance in a metro car. The proposed network architecture consists of two main parts: a convolutional backbone, which extracts features over the whole input image, and multi-head layers that estimate a density map, needed to predict the number of people within the crowd image. The network performance is first evaluated on publicly available crowd counting datasets, including ShanghaiTech part_A, ShanghaiTech part_B and UCF_CC_50, and then trained and tested on our dataset acquired in subway cars in Italy. In both cases a comparison is made against the most relevant and latest state-of-the-art crowd counting architectures, showing that our proposed MH-MetroNet architecture outperforms them in terms of Mean Absolute Error (MAE), Mean Square Error (MSE), and passenger-count prediction.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
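The density-map idea underlying this class of crowd counters is simple to demonstrate. The sketch below (generic, not the MH-MetroNet code) builds the kind of ground-truth map such networks are trained to regress: one normalized Gaussian per annotated head, so integrating the map yields the person count.

```python
import numpy as np

def density_map(shape, points, sigma=2.0):
    """Ground-truth density map for crowd counting: each annotated head
    position contributes a Gaussian blob of total mass 1, so the sum of
    the map equals the number of people."""
    H, W = shape
    yy, xx = np.mgrid[0:H, 0:W]
    dm = np.zeros(shape)
    for (y, x) in points:
        g = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma ** 2))
        dm += g / g.sum()          # normalize so each person has mass 1
    return dm

dm = density_map((64, 64), [(10, 12), (40, 40), (32, 5)])
print(round(dm.sum()))             # integrates to 3 people
```

At inference time, the network's predicted density map is summed the same way to obtain the passenger count that MAE and MSE are computed against.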
  • 44
    Publication Date: 2020-07-03
    Description: For imaging events of extremely short duration, like shock waves or explosions, it is necessary to be able to image the object with a single-shot exposure. A suitable setup is given by a laser-induced X-ray source such as the one at GSI (Helmholtzzentrum für Schwerionenforschung GmbH, Society for Heavy Ion Research) in Darmstadt, Germany. There, it is possible to direct a pulse from the high-energy laser Petawatt High Energy Laser for Heavy Ion eXperiments (PHELIX) onto a tungsten wire to generate a picosecond polychromatic X-ray pulse, called a backlighter. For grating-based single-shot phase-contrast imaging of shock waves or exploding wires, it is important to know the weighted mean energy of the X-ray spectrum in order to choose a suitable setup. In propagation-based phase-contrast imaging, knowledge of the weighted mean energy is necessary to reconstruct quantitative phase images of unknown objects. Hence, we developed a method to evaluate the weighted mean energy of the X-ray backlighter spectrum using propagation-based phase-contrast images. In a first step, wave-field simulations are performed to verify the results. Furthermore, our evaluation is cross-checked with monochromatic synchrotron measurements with known energy at the Diamond Light Source (DLS, Didcot, UK) as a proof of concept.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
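The quantity being estimated here, the weighted mean energy of a polychromatic spectrum, is the intensity-weighted average of the photon energies. A minimal sketch with an invented toy spectrum:

```python
# Weighted mean energy of a spectrum: sum(E_i * w_i) / sum(w_i).
# The energy bins and relative intensities below are toy values,
# not a measured PHELIX backlighter spectrum.

def weighted_mean_energy(energies_keV, weights):
    total = sum(weights)
    return sum(e * w for e, w in zip(energies_keV, weights)) / total

spectrum_E = [10, 20, 30, 40]      # energy bins in keV (toy)
spectrum_w = [1.0, 4.0, 3.0, 2.0]  # relative intensities (toy)
print(weighted_mean_energy(spectrum_E, spectrum_w))  # 26.0
```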
  • 45
    Publication Date: 2020-07-02
    Description: The number of Internet of Things (IoT) devices is growing at a fast pace in smart homes, producing large amounts of data, which are mostly transferred over wireless communication channels. However, various IoT devices are vulnerable to different threats, such as cyber-attacks, fluctuating network connections, and leakage of information. Statistical analysis and machine learning can play a vital role in detecting anomalies in the data, thereby enhancing the security of the smart home IoT system, which is the goal of this paper. This paper investigates the trustworthiness of the IoT devices sending house appliances’ readings, with the help of various parameters such as feature importance, root mean square error, and hyper-parameter tuning. A spamicity score was awarded to each of the IoT devices by the algorithm, based on the feature importance and the root mean square error score of the machine learning models, to determine the trustworthiness of the device in the home network. A publicly available smart home dataset, along with weather conditions, is used to validate the methodology. The proposed algorithm is used to compute the spamicity score of the connected IoT devices in the network. The obtained results illustrate the efficacy of the proposed algorithm in analyzing time series data from IoT devices for spam detection.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
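The scoring idea in this abstract can be sketched as follows. The way RMSE and feature importance are combined below, and the `alpha` weight, are assumptions made for illustration, not the paper's exact formula.

```python
import math

# Hedged sketch: rank IoT devices by a "spamicity"-style score built from
# per-device prediction error (RMSE) and feature importance. The linear
# combination and the alpha weight are illustrative assumptions.

def rmse(pred, true):
    """Root mean square error between predicted and observed readings."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, true)) / len(true))

def spamicity(device_rmse, feature_importance, alpha=0.5):
    """Higher score = less trustworthy: high error, low importance."""
    return alpha * device_rmse + (1 - alpha) * (1 - feature_importance)

print(round(rmse([1.0, 2.0], [1.0, 4.0]), 4))  # 1.4142
print(spamicity(1.0, 0.5))                     # 0.75
```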
  • 46
    Publication Date: 2020-07-02
    Description: Humans are capable of learning new concepts from small numbers of examples. In contrast, supervised deep learning models usually lack the ability to extract reliable predictive rules from limited data when attempting to classify new examples. This challenging scenario is commonly known as few-shot learning. Few-shot learning has garnered increased attention in recent years due to its significance for many real-world problems. Recently, new methods relying on meta-learning paradigms combined with graph-based structures, which model the relationships between examples, have shown promising results on a variety of few-shot classification tasks. However, existing work on few-shot learning has focused only on the feature embeddings produced by the last layer of the neural network. The novel contribution of this paper is the utilization of lower-level information to improve meta-learner performance in few-shot learning. In particular, we propose the Looking-Back method, which uses lower-level information to construct additional graphs for label propagation in limited data settings. Our experiments on two popular few-shot learning datasets, miniImageNet and tieredImageNet, show that our method can utilize the lower-level information in the network to improve state-of-the-art classification performance.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
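The label-propagation mechanism that such graph-based few-shot methods build on can be sketched on a toy graph. This is the generic algorithm, not the Looking-Back method itself; the adjacency weights and two-class labels below are invented.

```python
# Generic label propagation: unlabeled nodes repeatedly average the label
# scores of their neighbors; labeled nodes stay clamped. Toy 3-node graph.

def propagate(adj, labels, steps=10):
    n = len(adj)
    scores = [list(l) for l in labels]
    fixed = [any(v > 0 for v in l) for l in labels]  # labeled nodes are clamped
    for _ in range(steps):
        new = [list(s) for s in scores]
        for i in range(n):
            if fixed[i] or sum(adj[i]) == 0:
                continue
            for c in range(len(scores[0])):
                new[i][c] = sum(adj[i][j] * scores[j][c] for j in range(n)) / sum(adj[i])
        scores = new
    return scores

adj = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]  # node 0 linked to both labeled nodes
labels = [[0, 0], [1, 0], [0, 1]]        # node 0 unlabeled, nodes 1 and 2 labeled
print(propagate(adj, labels)[0])         # [0.5, 0.5]
```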
  • 47
    Publication Date: 2020-07-06
    Description: Virtual worlds have become global platforms connecting millions of people and containing various technologies. For example, No Man’s Sky (nomanssky.com), a cross-platform virtual world, can dynamically and automatically generate content as the user’s adventure progresses. AltspaceVR (altvr.com) is a social virtual reality platform supporting motion capture through Microsoft’s Kinect, eye tracking, and mixed reality extension. The changes in industrial investment, market revenue, user population, and consumption drive the evolution of virtual-world-related technologies (e.g., computing infrastructure and interaction devices), which translates into new design requirements and thus results in the requirement satisfaction problem in virtual world system architecture design. In this paper, we first study the new or evolving features of virtual worlds and the emerging requirements of system development through market/industry trend analysis, including infrastructure mobility, content diversity, function interconnectivity, immersive environment, and intelligent agents. Based on the trend analysis, we propose a new design requirement space. We then discuss the requirement satisfaction of existing system architectures and highlight their limitations through a literature review. The feature-based requirement satisfaction comparison of existing system architectures sheds some light on future virtual world system development to match the changing trends of the user market. At the end of this study, a new architecture from ongoing research, called Virtual Net, is discussed, which can provide higher resource sufficiency, computing reliability, content persistency, and service credibility.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 48
    Publication Date: 2020-07-06
    Description: With the rise of partially automated cars, drivers are more and more required to judge the degree of responsibility that can be delegated to vehicle assistant systems. This can be supported by utilizing interfaces that intuitively convey real-time reliabilities of system functions such as environment sensing. We designed a vibrotactile interface that communicates spatiotemporal information about surrounding vehicles and encodes a representation of spatial uncertainty in a novel way. We evaluated this interface in a driving simulator experiment with high and low levels of human and machine confidence, caused respectively by limited human visibility range and simulated degradation of vehicle sensor precision. We were interested in whether drivers (i) could perceive and understand the vibrotactile encoding of spatial uncertainty, (ii) would subjectively benefit from the encoded information, (iii) would be disturbed in cases of information redundancy, and (iv) would gain objective safety benefits from the encoded information. To measure subjective understanding and benefit, a custom questionnaire, Van der Laan acceptance ratings and NASA TLX scores were used. To measure the objective benefit, we computed the minimum time-to-contact as a measure of safety and gaze distributions as an indicator for attention guidance. Results indicate that participants were able to understand the encoded uncertainty and spatiotemporal information and purposefully utilized it when needed. The tactile interface provided meaningful support despite sensory restrictions. By encoding spatial uncertainties, it successfully extended the operating range of the assistance system.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 49
    Publication Date: 2020-07-03
    Description: The COVID-19 pandemic exploded at the beginning of 2020, with over four million cases in five months, overwhelming the healthcare sector. Several national governments decided to adopt containment measures, such as lockdowns, social distancing, and quarantine. Among these measures, contact tracing can contribute to bringing the outbreak under control, as quickly identifying contacts and isolating suspected cases can limit the number of infected people. In this paper we present BubbleBox, a system relying on a dedicated device to perform contact tracing. BubbleBox integrates Internet of Things and software technologies into different components to achieve its goal—providing a tool to quickly react to further outbreaks, by allowing health operators to rapidly reach and test possibly infected people. This paper describes the BubbleBox architecture, presents its prototype implementation, and discusses its pros and cons, also dealing with privacy concerns.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 50
    Publication Date: 2020-07-05
    Description: Variation, adaptation, heredity and fitness, constraints and affordances, speciation, and extinction form the building blocks of the (Neo-)Darwinian research program, and several of these have been called “Darwinian principles”. Here, we suggest that caution should be taken in calling these principles Darwinian because of the important role played by reticulate evolutionary mechanisms and processes in also bringing about these phenomena. Reticulate mechanisms and processes include symbiosis, symbiogenesis, lateral gene transfer, infective heredity mediated by genetic and organismal mobility, and hybridization. Because the “Darwinian principles” are brought about by both vertical and reticulate evolutionary mechanisms and processes, they should be understood as foundational for a more pluralistic theory of evolution, one that surpasses the classic scope of the Modern and the Neo-Darwinian Synthesis. Reticulate evolution moreover demonstrates that what conventional (Neo-)Darwinian theories treat as intra-species features of evolution frequently involve reticulate interactions between organisms from very different taxonomic categories. Variation, adaptation, heredity and fitness, constraints and affordances, speciation, and extinction therefore cannot be understood as “traits” or “properties” of genes, organisms, species, or ecosystems because the phenomena are irreducible to specific units and levels of an evolutionary hierarchy. Instead, these general principles of evolution need to be understood as common goods that come about through interactions between different units and levels of evolutionary hierarchies, and they are exherent rather than inherent properties of individuals.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 51
    Publication Date: 2020-07-04
    Description: This paper presents an experiment on newsreaders’ behavior and preferences regarding interaction with online personalized news. Different recommendation approaches, based on consumption profiles and user location, and the impact of personalized news on several aspects of consumer decision-making are examined on a group of volunteers. Results show a significant preference for reading recommended news over other news presented on the screen, regardless of the chosen editorial layout. In addition, the study also provides support for the creation of profiles that take into consideration the evolution of users’ interests. The proposed solution is valid for users with different reading habits and can be successfully applied even to users with a small consumption history. Our findings can be used by news providers to improve online services, thus increasing readers’ perceived satisfaction.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 52
    Publication Date: 2020-07-06
    Description: Many industries today are struggling with the early identification of quality issues, given the shortening of product design cycles and the desire to decrease production costs, coupled with the customer requirement for high uptime. The vehicle industry is no exception, as breakdowns often lead to on-road stops and delays in delivery missions. In this paper we consider quality issues to be an unexpected increase in failure rates of a particular component; those are particularly problematic for the original equipment manufacturers (OEMs) since they lead to unplanned costs and can significantly affect brand value. We propose a new approach towards the early detection of quality issues using machine learning (ML) to forecast the failures of a given component across a large population of units. In this study, we combine the usage information of vehicles with the records of their failures. The former is continuously collected, as the usage statistics are transmitted over telematics connections. The latter is based on invoice and warranty information collected in the workshops. We compare two different ML approaches: the first is an auto-regression model of the failure ratios for vehicles based on past information, while the second is the aggregation of individual vehicle failure predictions based on their individual usage. We present experimental evaluations on real data captured from heavy-duty trucks demonstrating how these two formulations have complementary strengths and weaknesses; in particular, they can outperform each other given different volumes of data. The classification approach surpasses the regression model whenever enough data is available, i.e., once the vehicles are in service for a longer time. On the other hand, the regression shows better predictive performance with a smaller amount of data, i.e., for vehicles that have been deployed recently.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
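The auto-regressive formulation compared in this abstract can be reduced to its simplest form: predict the next population-level failure ratio from recent history. The moving-average predictor, window size, and toy ratios below are illustrative assumptions, not the paper's fitted model.

```python
# Minimal sketch of forecasting a failure ratio from past values.
# Window size and history are toy assumptions.

def forecast_next(ratios, window=3):
    """Predict the next failure ratio as the mean of the last `window` values."""
    recent = ratios[-window:]
    return sum(recent) / len(recent)

history = [0.01, 0.02, 0.03, 0.04]  # monthly failure ratios (toy)
print(forecast_next(history))       # ~0.03
```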
  • 53
    Publication Date: 2020-07-03
    Description: Business processes evolve over time to adapt to changing business environments. This requires continuous monitoring of business processes to gain insights into whether they conform to the intended design or deviate from it. The situation when a business process changes while being analysed is denoted as Concept Drift. Its analysis is concerned with studying how a business process changes, in terms of detecting and localising changes and studying the effects of the latter. Concept drift analysis is crucial to enable early detection and management of changes, that is, whether to promote a change to become part of an improved process, or to reject the change and make decisions to mitigate its effects. Despite its importance, there exists no comprehensive framework for analysing concept drift types, affected process perspectives, and granularity levels of a business process. This article proposes the CONcept Drift Analysis in Process Mining (CONDA-PM) framework describing phases and requirements of a concept drift analysis approach. CONDA-PM was derived from a Systematic Literature Review (SLR) of current approaches analysing concept drift. We apply the CONDA-PM framework on current approaches to concept drift analysis and evaluate their maturity. Applying CONDA-PM framework highlights areas where research is needed to complement existing efforts.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
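A minimal ingredient of concept drift detection is comparing a behaviour statistic between adjacent windows of the event stream and flagging points where it shifts. The window size, threshold, and toy durations below are assumptions for illustration; real approaches use far richer process statistics.

```python
# Toy sudden-drift detector: flag positions where the mean of the window
# after position i differs from the mean of the window before it by more
# than a threshold.

def detect_drift(stream, window=3, threshold=1.0):
    points = []
    for i in range(window, len(stream) - window + 1):
        before = sum(stream[i - window:i]) / window
        after = sum(stream[i:i + window]) / window
        if abs(after - before) > threshold:
            points.append(i)
    return points

durations = [1, 1, 1, 1, 1, 5, 5, 5, 5, 5]  # the process changes mid-stream
print(detect_drift(durations))              # [3, 4, 5, 6, 7]
```

The true change point (index 5) lies inside the flagged region; change localisation would pick the index with the largest shift.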
  • 54
    Publication Date: 2020-04-14
    Description: Let P be a set of n points in R^d, k ≥ 1 be an integer, and ε ∈ (0, 1) be a constant. An ε-coreset is a subset C ⊆ P with appropriate non-negative weights (scalars) that approximates any given set Q ⊆ R^d of k centers. That is, the sum of squared distances over every point in P to its closest point in Q is the same, up to a factor of 1 ± ε, as the weighted sum over C to the same k centers. If the coreset is small, we can solve problems such as k-means clustering or its variants (e.g., discrete k-means, where the centers are restricted to be in P, or other restricted zones) on the small coreset to get faster provable approximations. Moreover, it is known that such coresets support streaming, dynamic and distributed data using the classic merge-reduce trees. The fact that the coreset is a subset implies that it preserves the sparsity of the data. However, existing coresets of this kind are randomized and their size has at least linear dependency on the dimension d. We suggest the first such coreset of size independent of d. This is also the first deterministic coreset construction whose resulting size is not exponential in d. Extensive experimental results and benchmarks are provided on public datasets, including the first coreset of the English Wikipedia using Amazon’s cloud.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
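The coreset guarantee can be illustrated directly: the weighted cost of C to a set of centers Q should match the cost of P to Q up to 1 ± ε. The tiny point set, weights, and single center below are hand-picked so the two costs agree exactly; the paper's construction chooses C and its weights deterministically.

```python
# Sum of squared distances of points to their nearest center, and its
# weighted analogue over a candidate coreset. Toy data in R^2 with k = 1.

def cost(points, centers):
    return sum(min(sum((p - c) ** 2 for p, c in zip(pt, ct)) for ct in centers)
               for pt in points)

def weighted_cost(points, weights, centers):
    return sum(w * min(sum((p - c) ** 2 for p, c in zip(pt, ct)) for ct in centers)
               for pt, w in zip(points, weights))

P = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (2.0, 2.0)]
Q = [(1.0, 1.0)]              # one center (k = 1)
C = [(0.0, 0.0), (2.0, 2.0)]  # toy "coreset": 2 of the 4 points
w = [2.0, 2.0]                # weights chosen so the costs match
print(cost(P, Q), weighted_cost(C, w, Q))  # 8.0 8.0
```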
  • 55
    Publication Date: 2020-08-25
    Description: Today, convolutional and deconvolutional neural network models are exceptionally popular thanks to the impressive accuracy they have demonstrated in several computer-vision applications. To speed up the overall tasks of these neural networks, purpose-designed accelerators are highly desirable. Unfortunately, the high computational complexity and the huge memory demand make the design of efficient hardware architectures, as well as their deployment in resource- and power-constrained embedded systems, still quite challenging. This paper presents a novel purpose-designed hardware accelerator to perform 2D deconvolutions. The proposed structure applies a hardware-oriented computational approach that overcomes the issues of traditional deconvolution methods, and it is suitable for implementation within virtually any system-on-chip based on field-programmable gate array devices. In fact, the novel accelerator is easily scalable to comply with the resources available within both high- and low-end devices by adequately scaling the adopted parallelism. As an example, when exploited to accelerate the Deep Convolutional Generative Adversarial Network model, the novel accelerator, running as a standalone unit implemented within the Xilinx Zynq XC7Z020 System-on-Chip (SoC) device, performs up to 72 GOPs. Moreover, it dissipates less than 500 mW at 200 MHz and occupies 5.6%, 4.1%, 17%, and 96%, respectively, of the look-up tables, flip-flops, random access memory, and digital signal processors available on-chip. When accommodated within the same device, the whole embedded system equipped with the novel accelerator performs up to 54 GOPs and dissipates less than 1.8 W at 150 MHz. Thanks to the increased exploitable parallelism, more than 900 GOPs can be executed when the high-end Virtex-7 XC7VX690T device is used as the implementation platform. Moreover, in comparison with state-of-the-art competitors implemented within the Zynq XC7Z045 device, the system proposed here reaches a computational capability up to 20% higher, saves more than 60% of power consumption and more than 80% of logic resources, and uses 5.7× fewer on-chip memory resources.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
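The operation the accelerator implements, a 2D deconvolution (transposed convolution), can be written as a pure-Python reference: each input pixel scatters kernel-weighted contributions into a strided output. The sizes, stride, and values below are toy choices, not the accelerator's configuration.

```python
# Reference 2D transposed convolution (single channel, no padding):
# output size is (H-1)*stride + kH by (W-1)*stride + kW.

def deconv2d(inp, kernel, stride=2):
    ih, iw = len(inp), len(inp[0])
    kh, kw = len(kernel), len(kernel[0])
    out = [[0.0] * ((iw - 1) * stride + kw) for _ in range((ih - 1) * stride + kh)]
    for y in range(ih):
        for x in range(iw):
            for ky in range(kh):
                for kx in range(kw):
                    out[y * stride + ky][x * stride + kx] += inp[y][x] * kernel[ky][kx]
    return out

out = deconv2d([[1.0, 2.0]], [[1.0, 1.0], [1.0, 1.0]], stride=2)
print(out)  # [[1.0, 1.0, 2.0, 2.0], [1.0, 1.0, 2.0, 2.0]]
```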
  • 56
    Publication Date: 2020-04-23
    Description: This study presents an analysis of RePair, which is a grammar compression algorithm known for its simple scheme, while also being practically effective. First, we show that the main process of RePair, that is, the step-by-step substitution of the most frequent symbol pairs, works within the corresponding most frequent maximal repeats. Then, we reveal the relation between maximal repeats and grammars constructed by RePair. On the basis of this analysis, we further propose a novel variant of RePair, called MR-RePair, which considers the one-time substitution of the most frequent maximal repeats instead of the consecutive substitution of the most frequent pairs. The results of experiments comparing the size of constructed grammars and the execution time of RePair and MR-RePair on several text corpora demonstrate that MR-RePair constructs more compact grammars than RePair does, especially for highly repetitive texts.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
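RePair's core step, replacing the most frequent adjacent pair with a fresh nonterminal, can be sketched as follows. Real RePair (and MR-RePair, which substitutes maximal repeats instead) iterates this until no pair occurs twice, and counts overlapping pairs more carefully than this toy version.

```python
from collections import Counter

# One RePair step: find the most frequent adjacent symbol pair and replace
# its (left-to-right, non-overlapping) occurrences with a new nonterminal.
# Note: counting with zip() overcounts overlapping pairs like "aa" in "aaa";
# real implementations track occurrences exactly.

def replace_most_frequent_pair(seq, next_symbol):
    (a, b), freq = Counter(zip(seq, seq[1:])).most_common(1)[0]
    if freq < 2:
        return seq, None  # nothing worth replacing
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
            out.append(next_symbol)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out, (next_symbol, (a, b))

seq, rule = replace_most_frequent_pair(list("abababc"), "X")
print(seq, rule)  # ['X', 'X', 'X', 'c'] ('X', ('a', 'b'))
```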
  • 57
    Publication Date: 2020-04-24
    Description: The process of moving from experimental data to modeling and characterizing the dynamics and interactions in natural processes is a challenging task. This paper proposes an interactive platform for fitting data derived from experiments to mathematical expressions and carrying out spatial visualization. The platform is designed using a component-based software architectural approach, implemented in R and the Java programming languages. It uses experimental data as input for model fitting, then applies the obtained model at the landscape level via a spatial temperature grid data to yield regional and continental maps. Different modules and functionalities of the tool are presented with a case study, in which the tool is used to establish a temperature-dependent virulence model and map the potential zone of efficacy of a fungal-based biopesticide. The decision support system (DSS) was developed in generic form, and it can be used by anyone interested in fitting mathematical equations to experimental data collected following the described protocol and, depending on the type of investigation, it offers the possibility of projecting the model at the landscape level.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 58
    Publication Date: 2020-04-23
    Description: In this paper, we propose a verification method for the message-passing behavior of IoT systems by checking the accumulative event relations of process models. In an IoT system, it is hard to verify the behavior of message passing by only looking at the sequence of packet transmissions recorded in the system log. We propose a method to extract event relations from the log and check for any minor deviations that exist in the system. Using process mining, we extract the variations of a normal process model from the log and check for deviations that are hard to detect unless the model is accumulated and stacked over time. Message-passing behavior can be verified by comparing the similarity of the process tree models, which represent the execution relations between message-passing events. As a result, we can detect minor deviations, such as missing events and perturbed event order, with occurrence probability as low as 3%.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
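The event relations extracted from a log can be sketched as directly-follows pairs; comparing the pairs of an observed trace against those of the accumulated normal model exposes deviations such as a missing event. The event names below are invented for illustration.

```python
# Directly-follows relations: the set of (event, next_event) pairs seen in
# a log. A deviation is a pair present in the observed log but absent from
# the normal model.

def directly_follows(traces):
    relations = set()
    for trace in traces:
        relations.update(zip(trace, trace[1:]))
    return relations

normal = [["send", "ack", "done"], ["send", "ack", "done"]]
observed = [["send", "done"]]  # the "ack" event is missing
print(directly_follows(observed) - directly_follows(normal))
# {('send', 'done')} -> a relation the normal model never produced
```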
  • 59
    Publication Date: 2020-04-22
    Description: The English language is the most dominant language in the Western world and its influence can be noticed in every aspect of human communication. Its increasing diffusion, especially since the turn of the century, is hard to measure with conventional means. The present research studies the use of language on websites of European Union (EU) member states, in order to collect data about the prevalence of the English language in the different countries and regions of the European Union. To achieve a realistic representation of today’s landscape of the European Web, this study uses a vast population of websites and a representative sampling size and methodology. By analyzing and processing the findings from over 100,000 websites from every country in the EU, a solid foundation is set that is used to explore the dominance of the English language in the European World Wide Web in general. This is the first study that examines the presence of English content on the websites of all EU member countries and provides statistical evidence regarding the ratio of English content availability for each country. Conclusively, the results of the research demonstrate that the English language is available on more than one quarter of all websites of non-English-speaking EU member states. Moreover, it is available in the vast majority of multilingual and bilingual websites, while at the same time being the only language available in a number of monolingual websites. In addition, it is preferred over the national language in a significant number of cases. A moderate negative correlation is found between a member state’s population and the availability of English on that country’s websites, and the same holds true for a member state’s Gross Domestic Product (GDP). Both correlations indicate that smaller countries tend to provide more content in English in order to establish a stronger presence in the international environment.
Taking into account the role of language in the expression of national identity, this study provides data and insights which may contribute to the discussion about the changes underway in the national identity of EU member states.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
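The correlation analysis reported above amounts to computing Pearson's r between country-level variables. The five data points below are invented solely to illustrate a moderate negative correlation; they are not the study's measurements.

```python
import math

# Pearson correlation coefficient between two equal-length samples.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

population = [0.5, 2, 10, 40, 80]          # millions (toy values)
english_share = [0.9, 0.8, 0.6, 0.4, 0.3]  # share of sites with English (toy)
print(round(pearson_r(population, english_share), 2))  # -0.91
```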
  • 60
    Publication Date: 2020-04-22
    Description: In the era of ubiquitous digitization and the Internet of Things (IoT), information plays a vital role. All types of data are collected, and some of these data are stored as text files. An important aspect—regardless of the type of data—is related to file storage, especially the amount of disk space that is required. The less space is used to store data sets, the lower the cost of this service. Another important aspect of storing data warehouses in the form of files is the cost of the data transmission needed for file transfer and processing. Moreover, the stored data should be at least minimally protected against access and reading by other entities. The aspects mentioned above are particularly important for large data sets like Big Data. Considering the above criteria, i.e., minimizing storage space and data transfer while ensuring minimum security, the main goal of this article is to show a new way of storing text files. This article presents a method that converts data from text files such as txt, json, html, and py into images (image files) in png format. Taking into account criteria such as the output size of the file, the results obtained for the test files confirm that the presented method reduces the need for disk space and also hides data in an image file. The described method can be used for texts saved in extended ASCII and UTF-8 coding.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
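The core of the described method, mapping the bytes of a text file onto a fixed-width grid of pixel values that an image encoder then saves as PNG, can be sketched without any imaging library. The row width and zero-byte padding are assumptions; the paper's exact layout may differ.

```python
# Text bytes -> rows of grayscale pixel values (and back). A PNG writer
# would then store `rows` as an image; that step is omitted here.

def text_to_pixel_rows(text, width=4, pad=0):
    data = text.encode("utf-8")
    rows = []
    for i in range(0, len(data), width):
        row = list(data[i:i + width])
        row += [pad] * (width - len(row))  # pad the final row to full width
        rows.append(row)
    return rows

def pixel_rows_to_text(rows, pad=0):
    flat = bytes(v for row in rows for v in row).rstrip(bytes([pad]))
    return flat.decode("utf-8")

rows = text_to_pixel_rows("hello")
print(rows)                      # [[104, 101, 108, 108], [111, 0, 0, 0]]
print(pixel_rows_to_text(rows))  # hello
```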
  • 61
    Publication Date: 2020-04-22
    Description: The design of a computer-supported serious game concerning inhibition skills in children with Attention Deficit/Hyperactivity Disorder (ADHD) is reported. The game consists of a series of activities, each eliciting the tendency to respond in an immediate, inadequate way. The game is based on the Dual Pathway Model of ADHD proposed by Sonuga-Barke. In the game, children must block impulsive tendencies, reflect upon the situation, inhibit irrelevant thoughts, and find the non-intuitive solution. In the game, the player personifies a superhero, who is asked to save a realm on the opposite side of the Earth (Antonyms) where things happen according to the opposite of the usual rules. The hero faces a series of challenges, in the form of mini-games, to free the planet from enemies crossing different scenarios. To succeed in the game, the player should change his/her attitude by thinking before performing any action rather than acting on impulse. The player is induced to be reflective and thoughtful as well. Results from the evaluation of a preliminary version of the serious game are reported. They support the notion that Antonyms is an adequate tool to lead children to inhibit their tendency to behave impulsively.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 62
    Publication Date: 2020-04-23
    Description: The Hough transform is commonly used for detecting linear features within an image. A line is mapped to a peak within parameter space corresponding to the parameters of the line. By analysing the shape of the peak, or peak locus, within parameter space, it is possible to also use the line Hough transform to detect or analyse arbitrary (non-parametric) curves. It is shown that there is a one-to-one relationship between the curve in image space, and the peak locus in parameter space, enabling the complete curve to be reconstructed from its peak locus. In this paper, we determine the patterns of the peak locus for closed curves (including circles and ellipses), linear segments, inflection points, and corners. It is demonstrated that the curve shape can be simplified by ignoring parts of the peak locus. One such simplification is to derive the convex hull of shapes directly from the representation within the Hough transform. It is also demonstrated that the parameters of elliptical blobs can be measured directly from the Hough transform.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
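The basic mapping the paper builds on, each image point voting for the (theta, rho) parameters of every line through it, can be sketched directly. The 1-degree angle bins and integer rho rounding are discretisation assumptions.

```python
import math
from collections import Counter

# Line Hough transform: a point (x, y) lies on the line
# rho = x*cos(theta) + y*sin(theta); collinear points pile votes onto the
# same (theta, rho) cell, producing the peak analysed in the paper.

def hough_votes(points, thetas=range(0, 180)):
    acc = Counter()
    for x, y in points:
        for t in thetas:
            rad = math.radians(t)
            acc[(t, round(x * math.cos(rad) + y * math.sin(rad)))] += 1
    return acc

# Three points on the horizontal line y = 2, i.e. theta = 90 deg, rho = 2.
acc = hough_votes([(0, 2), (1, 2), (2, 2)])
print(acc[(90, 2)])  # 3 -- a maximal peak (neighbouring angles may tie
                     # due to rounding on this tiny example)
```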
  • 63
    Publication Date: 2020-04-24
    Description: Within a workshop on evaluation methods for automated vehicles (AVs) at the Driving Assessment 2019 symposium in Santa Fe, New Mexico, a heuristic evaluation methodology that aims at supporting the development of human–machine interfaces (HMIs) for AVs was presented. The goal of the workshop was to bring together members of the human factors community to discuss the method and to further promote the development of HMI guidelines and assessment methods for the design of HMIs of automated driving systems (ADSs). The workshop included hands-on experience of rented series production partially automated vehicles, the application of the heuristic assessment method using a checklist, and intensive discussions about possible revisions of the checklist and the method itself. The aim of the paper is to summarize the results of the workshop, which will be used to further improve the checklist method and make the process available to the scientific community. The participants all had previous experience in HMI design of driver assistance systems, as well as development and evaluation methods. They brought valuable ideas into the discussion with regard to the overall value of the tool against the background of the intended application, concrete improvements of the checklist (e.g., categorization of items; checklist items that are currently perceived as missing or redundant in the checklist), when in the design process the tool should be applied, and improvements for the usability of the checklist.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 64
    Publication Date: 2020-04-22
    Description: Digital territories focus on community transformation through sustainable development, saving resources in local governments, bridging the digital gap, and using technology to build smart infrastructure. This article presents the design and implementation of a smart system, called the Waste Treatment System (WTS), for controlling parameters of waste decomposition in the trash bins installed in Riobamba city (Ecuador). The prototype allows monitoring in real time both the amount of waste and the level of rottenness of garbage by measuring different parameters that indicate the characteristics of the leachates generated inside. The motivation of this work was to yield an efficient solution to urban waste treatment that optimizes resources in the collection process by providing real-time information to improve the collection frequency of vehicles and also reduce emissions from the decomposition of organic waste. The tests allowed assessing technical aspects such as the maximum coverage of wireless communication, the transmission channel capacity for each prototype, the data-processing requirements, and other more particular parameters such as the production of leachates due to the frequency of collection and the environmental conditions, which will be useful in future work on environmental impact.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 65
    Publication Date: 2020-04-23
    Description: This paper studies the impact of machine translation (MT) on the translation workflow at the Directorate-General for Translation (DGT), focusing on two language pairs and two MT paradigms: English-into-French with statistical MT and English-into-Finnish with neural MT. We collected data from 20 professional translators at DGT while they carried out real translation tasks in normal working conditions. The participants enabled/disabled MT for half of the segments in each document. They filled in a survey at the end of the logging period. We measured the productivity gains (or losses) resulting from the use of MT and examined the relationship between technical effort and temporal effort. The results show that while the usage of MT leads to productivity gains on average, this is not the case for all translators. Moreover, the two technical effort indicators used in this study show weak correlations with post-editing time. The translators’ perception of their speed gains was more or less in line with the actual results. Reduction of typing effort is the most frequently mentioned reason why participants preferred working with MT, but also the psychological benefits of not having to start from scratch were often mentioned.
    Electronic ISSN: 2227-9709
    Topics: Computer Science
  • 66
    Publication Date: 2020-04-23
    Description: Big data and artificial intelligence are currently two of the most important and trending pieces for innovation and predictive analytics in healthcare, leading the digital healthcare transformation. Keralty organization is already working on developing an intelligent big data analytic platform based on machine learning and data integration principles. We discuss how this platform is the new pillar for the organization to improve population health management, value-based care, and new upcoming challenges in healthcare. The benefits of using this new data platform for community and population health include better healthcare outcomes, improvement of clinical operations, reducing costs of care, and generation of accurate medical information. Several machine learning algorithms implemented by the authors can use the large standardized datasets integrated into the platform to improve the effectiveness of public health interventions, improving diagnosis, and clinical decision support. The data integrated into the platform come from Electronic Health Records (EHR), Hospital Information Systems (HIS), Radiology Information Systems (RIS), and Laboratory Information Systems (LIS), as well as data generated by public health platforms, mobile data, social media, and clinical web portals. This massive volume of data is integrated using big data techniques for storage, retrieval, processing, and transformation. This paper presents the design of a digital health platform in a healthcare organization in Colombia to integrate operational, clinical, and business data repositories with advanced analytics to improve the decision-making process for population health management.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 67
    Publication Date: 2020-04-24
    Description: The non-contact detection of buried ferromagnetic pipelines is a long-standing problem in the field of external pipeline inspection, and the extraction of the magnetic anomaly signal is a prerequisite for accurate detection. Pipeline defects cause fluctuations in magnetic signals, which are easily submerged in wide-band background noise without external excitation sources. Previously, Variational Mode Decomposition (VMD) was used to separate modal components; however, VMD is based on a narrow-band signal processing algorithm and its calculation is complex. In this article, a method based on Variational Specific Mode Extraction (VSME) is employed to extract the signal at a specific central frequency by signal modal decomposition, where the specific mode is the weak magnetic anomaly signal of pipeline defects. VSME is based on the fact that a wide-band signal can be converted into a narrow-band signal by a demodulation method. The problem of wide-band signal decomposition is then expressed as an optimal demodulation problem, which can be solved by the alternating direction method of multipliers. The proposed algorithm is verified on artificially synthesized signals, where its performance exceeds that of VMD. The results on experimental data show that the VSME method can extract the magnetic anomaly signal of pipeline damage with better accuracy.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
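    The demodulation idea at the core of the VSME abstract above can be sketched in a few lines: a wide-band component near a chosen center frequency is shifted to baseband by mixing with a complex exponential, low-pass filtered, and shifted back. This is a simplified illustration of the demodulation principle only, not the paper's ADMM-based variational solver; all signal parameters below are invented for the example.

    ```python
    import numpy as np

    def extract_narrowband(signal, fs, f_center, bandwidth):
        """Shift the component near f_center to baseband, low-pass it,
        and shift it back -- the wide-band-to-narrow-band conversion
        that demodulation-based mode extraction relies on."""
        n = len(signal)
        t = np.arange(n) / fs
        carrier = np.exp(-2j * np.pi * f_center * t)
        baseband = signal * carrier                      # demodulate: f_center -> 0 Hz
        spectrum = np.fft.fft(baseband)
        freqs = np.fft.fftfreq(n, d=1.0 / fs)
        spectrum[np.abs(freqs) > bandwidth / 2] = 0.0    # ideal low-pass filter
        filtered = np.fft.ifft(spectrum)
        # remodulate; factor 2 restores the amplitude lost in the one-sided analysis
        return 2.0 * np.real(filtered * np.conj(carrier))

    # A weak 5 Hz "defect" tone buried under a stronger 40 Hz interferer:
    fs = 1000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    target = np.sin(2 * np.pi * 5.0 * t)
    mixture = target + 2.0 * np.sin(2 * np.pi * 40.0 * t)
    recovered = extract_narrowband(mixture, fs, f_center=5.0, bandwidth=4.0)
    ```

    With the tone frequencies falling exactly on FFT bins, the 5 Hz component is recovered essentially exactly while the 40 Hz interferer is rejected.
    
    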
  • 68
    Publication Date: 2020-04-29
    Description: In person re-identification, extracting image features is an important step when retrieving pedestrian images. Most current methods extract only global features or only local features of pedestrian images, so inconspicuous details are easily ignored during feature learning, which is neither efficient nor robust in scenarios with large appearance differences. In this paper, we propose a Multi-level Feature Fusion model that combines both global features and local features of images through deep learning networks to generate more discriminative pedestrian descriptors. Specifically, we extract local features from different depths of the network with the Part-based Multi-level Net to fuse low-to-high-level local features of pedestrian images. Global-Local Branches are used to extract the local features and global features at the highest level. The experiments prove that our deep learning model based on multi-level feature fusion works well in person re-identification. The overall results outperform the state of the art by considerable margins on three widely used datasets. For instance, we achieve 96% Rank-1 accuracy on the Market-1501 dataset and 76.1% mAP on the DukeMTMC-reID dataset, outperforming existing works by a large margin (more than 6%).
    Electronic ISSN: 1999-4893
    Topics: Computer Science
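    The global/local fusion described in the abstract above can be illustrated with plain NumPy: a global descriptor from average-pooling the whole feature map, part descriptors from pooling horizontal stripes, and a final concatenation. This is a hedged sketch of the general technique, not the paper's network; the function name, stripe count, and map sizes are invented for the example.

    ```python
    import numpy as np

    def fuse_features(feature_map, num_parts=3):
        """Fuse a global descriptor with part-based local descriptors.
        feature_map: (C, H, W) convolutional feature map.
        Returns one vector: [global | part_1 | ... | part_k]."""
        global_feat = feature_map.mean(axis=(1, 2))               # global average pool
        parts = np.array_split(feature_map, num_parts, axis=1)    # horizontal stripes
        local_feats = [p.mean(axis=(1, 2)) for p in parts]        # pool each stripe
        return np.concatenate([global_feat] + local_feats)

    # e.g. a 256-channel, 24x8 feature map -> a 4*256-dimensional descriptor
    descriptor = fuse_features(np.random.rand(256, 24, 8), num_parts=3)
    ```
    
    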
  • 69
    Publication Date: 2020-04-26
    Description: This article presents the use of data processing to apprehend mathematical questions such as the Riemann Hypothesis (RH) by numerical calculation. Calculations are performed alongside graphs of the argument of the complex numbers ζ(x + iy) = a + ib and ξ(x + iy) = p + iq in the critical strip. On the one hand, the two-dimensional surface angle tan⁻¹(b/a) of the Riemann zeta function ζ is related to the semi-angle of the fractional part of (y/2π)·ln(y/(2πe)); on the other hand, the Ksi function ξ of the Riemann functional equation is analyzed with respect to the coordinates (x, 1 − x; y). The computation of the power series expansion of the ξ function, together with its symmetry analysis, highlights the RH through the underlying ratio of Gamma functions inside the ξ formula. The ξ power series, beside the angle of both surfaces of the ζ function, makes it possible to exhibit a Bézout identity au + bv ≡ c between the components (a, b) of the ζ function, which illustrates the RH. The geometric transformations in complex space of the Zeta and Ksi functions, illustrated graphically, as well as the series expansions calculated by computer, make it possible to elucidate this mathematical problem numerically. A final theoretical outlook gives deeper insights into the functional equation's mechanisms by adopting a computer-scientific perspective.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
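    The surface angle tan⁻¹(b/a) discussed in the entry above is easy to reproduce numerically. The sketch below evaluates ζ(s) through the alternating Dirichlet eta series (valid for Re(s) > 0, s ≠ 1) and then takes the argument of the result; this is a standard textbook construction for illustration, not the article's own computational setup, and the truncation length is an arbitrary choice.

    ```python
    import math

    def zeta(s, terms=100000):
        """Riemann zeta via the alternating eta series:
        zeta(s) = eta(s) / (1 - 2**(1 - s)),
        eta(s) = sum_{n>=1} (-1)**(n+1) / n**s  (converges for Re(s) > 0)."""
        eta = sum((-1) ** (n + 1) / n ** s for n in range(1, terms + 1))
        return eta / (1 - 2 ** (1 - s))

    def zeta_argument(s):
        """Surface angle tan^-1(b/a) of zeta(s) = a + i*b."""
        z = complex(zeta(s))
        return math.atan2(z.imag, z.real)

    # a point on the critical line x = 1/2 (slow conditional convergence there,
    # so the value is only indicative):
    angle_on_critical_line = zeta_argument(0.5 + 25.0j)
    ```
    
    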
  • 70
    Publication Date: 2020-04-26
    Description: The use of technological solutions to address the production of goods and offering of services is ubiquitous. Health and social issues, however, have only slowly been permeated by technological solutions. Whilst several advances have been made in health in recent years, the adoption of technology to combat social problems has lagged behind. In this paper, we explore Big Data-driven Artificial Intelligence (AI) applied to social systems; i.e., social computing, the concept of artificial intelligence as an enabler of novel social solutions. Through a critical analysis of the literature, we elaborate on the social and human interaction aspects of technology that must be in place to achieve such enabling and address the limitations of the current state of the art in this regard. We review cultural, political, and other societal impacts of social computing, impact on vulnerable groups, and ethically-aligned design of social computing systems. We show that this is not merely an engineering problem, but rather the intersection of engineering with health sciences, social sciences, psychology, policy, and law. We then illustrate the concept of ethically-designed social computing with a use case of our ongoing research, where social computing is used to support safety and security in home-sharing settings, in an attempt to simultaneously combat youth homelessness and address loneliness in seniors, identifying the risks and potential rewards of such a social computing application.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 71
    Publication Date: 2020-04-26
    Description: Many modern real-valued optimization tasks use “black-box” (BB) models for evaluating objective functions, and they are high-dimensional and constrained. Using common classifications, we can identify them as constrained large-scale global optimization (cLSGO) tasks. Today, the IEEE Congress on Evolutionary Computation provides a special session and several benchmarks for LSGO. At the same time, cLSGO problems are not yet well studied. The majority of modern optimization techniques demonstrate insufficient performance when confronted with cLSGO tasks. The effectiveness of evolutionary algorithms (EAs) in solving constrained low-dimensional optimization problems has been proven in many scientific papers and studies. Moreover, the cooperative coevolution (CC) framework has been successfully applied to EAs used to solve LSGO problems. In this paper, a new approach for solving cLSGO is proposed. This approach is based on CC and a method that increases the size of the groups of variables at the decomposition stage (iCC) when solving cLSGO tasks. A new algorithm is proposed, which combines the success-history based parameter adaptation for differential evolution (SHADE) optimizer, iCC, and the ε-constrained method (namely, ε-iCC-SHADE). We investigated the performance of ε-iCC-SHADE and compared it with the previously proposed ε-CC-SHADE algorithm on scalable problems from the IEEE CEC 2017 Competition on constrained real-parameter optimization.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
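    The iCC idea in the entry above — start cooperative coevolution with small groups of variables and enlarge them as the search progresses — can be sketched as a grouping schedule. This is an illustrative reading of the decomposition step only (the doubling schedule, seed, and base size are assumptions, not taken from the paper), and it omits the SHADE optimizer and the ε-constraint handling entirely.

    ```python
    import random

    def icc_groups(num_vars, stage, base_size=2, seed=None):
        """iCC-style decomposition: the group size starts at base_size and
        doubles at every stage, capped at the full problem dimension."""
        size = min(base_size * 2 ** stage, num_vars)
        indices = list(range(num_vars))
        random.Random(seed).shuffle(indices)   # random grouping, as in classic CC
        return [indices[i:i + size] for i in range(0, num_vars, size)]

    # a 1000-dimensional problem at stage 3 -> groups of 16 variables,
    # each of which a subpopulation would optimize in turn
    groups = icc_groups(1000, stage=3, seed=7)
    ```
    
    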
  • 72
    Publication Date: 2020-04-25
    Description: Data handling and provisioning play a dominant role in the structure of modern cloud–fog-based architectures. Without a strict, fast, and deterministic method of exchanging data, we cannot be sure about the performance and efficiency of transactions and applications. In the present work, we propose an architecture for a Data as a Service (DaaS) Marketplace, hosted exclusively in a cloud environment. The architecture includes a storage management engine that ensures the Quality of Service (QoS) requirements, a monitoring component that enables real-time decisions about the resources used, and a resolution engine that provides semantic data discovery and ranking based on user queries. We show that the proposed system outperforms classic ElasticSearch queries in data discovery use cases, providing more accurate results. Furthermore, the semantic enhancement of the process adds extra results, which extend the user query with a more abstract definition of each notion. Finally, we show that the real-time scaling provided by the data storage manager component meets the QoS requirements by decreasing the latency of read and write data requests.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 73
    Publication Date: 2020-04-25
    Description: Tennis is a sport with a very complex technique. Amateur tennis players have trainers and/or coaches, but are not usually accompanied by them to championships. Curiously, in this sport, the result of many matches can be changed by a small hint like ‘hit the ball a little higher when serving’. However, the biomechanics of a tennis stroke are only clear to an expert. We therefore developed a prototype of a rule-based expert system (RBES) aimed at an amateur competition player who is not accompanied by his/her coach at a championship and is not serving as usual (the RBES is so far restricted to serving). The player answers a set of questions about how he/she is serving that day and about his/her usual serving technique, and the RBES obtains a diagnosis of the possible reasons by logic inference (according to the logic rules that have previously been given to the RBES). A certain knowledge of tennis terminology and technique is required from the player, but that is something known at this level. The underlying logic is Boolean and the inference engine is algebraic (it uses Groebner bases).
    Electronic ISSN: 1999-4893
    Topics: Computer Science
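    The paper above solves the inference algebraically with Groebner bases over Boolean polynomials; for simple Horn-style rules, the same diagnoses can be reached by plain forward chaining, which the sketch below uses instead. The engine is a generic textbook one, and the serving-diagnosis rules and fact names are entirely hypothetical.

    ```python
    def infer(facts, rules):
        """Naive forward chaining: fire rules until a fixed point is reached.
        rules: list of (premises, conclusion) pairs."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in facts and set(premises) <= facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    # Hypothetical serve-diagnosis rules, standing in for the RBES rule base:
    rules = [
        ({"ball_toss_too_low", "serves_long"}, "contact_point_too_low"),
        ({"contact_point_too_low"}, "advice_raise_ball_toss"),
    ]
    diagnosis = infer({"ball_toss_too_low", "serves_long"}, rules)
    ```
    
    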
  • 74
    Publication Date: 2020-04-26
    Description: The quality assurance of publication data in collaborative knowledge bases and in current research information systems (CRIS) becomes more and more relevant with the use of freely available spatial information in different application scenarios. When integrating these data into a CRIS, it is necessary to be able to recognize and assess their quality. Only then is it possible to compile a result from the available data that fulfills its purpose for the user, namely to deliver reliable data and information. This paper discusses the quality problems of source metadata in Wikipedia and CRIS. Based on real data from over 40 million Wikipedia articles in various languages, we performed a preliminary quality analysis of the metadata of scientific publications using a data quality tool. So far, no data quality measurements have been programmed with Python to assess the quality of metadata from scientific publications in Wikipedia and CRIS. With this in mind, we programmed the methods and algorithms as code, but present them as pseudocode in this paper, to measure quality with respect to objective data quality dimensions such as completeness, correctness, consistency, and timeliness. This was prepared as a macro service so that users can apply the measurement results, together with the program code, to make a statement about the metadata of their scientific publications, so that management can rely on high-quality data when making decisions.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
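    Of the four quality dimensions named in the entry above, completeness is the simplest to make concrete: the share of required metadata fields that are actually filled. The sketch below assumes a hypothetical five-field publication schema; it is not the paper's pseudocode, just a minimal illustration of the dimension.

    ```python
    # Hypothetical required-field schema for a publication record:
    REQUIRED_FIELDS = ["title", "author", "year", "doi", "journal"]

    def completeness(record, required=REQUIRED_FIELDS):
        """Completeness = filled required fields / all required fields."""
        filled = sum(1 for f in required if record.get(f) not in (None, ""))
        return filled / len(required)

    def corpus_completeness(records, required=REQUIRED_FIELDS):
        """Average completeness over a set of records."""
        return sum(completeness(r, required) for r in records) / len(records)

    # doi is empty and journal is missing -> 3 of 5 fields filled
    score = completeness({"title": "On X", "author": "Doe", "year": 2020,
                          "doi": "", "journal": None})
    ```
    
    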
  • 75
    Publication Date: 2020-04-29
    Description: Tree comparisons are used in various areas with various statistical or dissimilarity measures. Given that data in various domains are diverse, and a particular comparison approach could be more appropriate for specific applications, there is a need to evaluate different comparison approaches. As gathering real data is often an extensive task, using generated trees provides a faster evaluation of the proposed solutions. This paper presents three algorithms for generating random trees, parametrized by tree size, by shape based on the node distribution, and by the amount of difference between generated trees. The motivation for the algorithms came from unordered trees that are created from class hierarchies in object-oriented programs. The presented algorithms are evaluated by statistical and dissimilarity measures to observe stability, behavior, and impact on node distribution. The results of the dissimilarity measure evaluation show that the algorithms are suitable for tree comparison.
    Electronic ISSN: 2073-431X
    Topics: Computer Science
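    A minimal size-parametrized generator of the kind the entry above evaluates can be written as a random attachment process: each new node picks a uniformly random parent among the existing nodes, which always yields a valid unordered tree. This is a generic baseline, not one of the paper's three algorithms; the uniform-parent choice is an assumption for illustration.

    ```python
    import random

    def random_tree(n, seed=None):
        """Random unordered tree with n nodes, as a parent map:
        node i (i > 0) gets a uniformly random parent among nodes 0..i-1."""
        rng = random.Random(seed)
        parent = {0: None}                 # node 0 is the root
        for i in range(1, n):
            parent[i] = rng.randrange(i)   # attach to an earlier node
        return parent

    tree = random_tree(100, seed=42)
    ```

    Because every node except the root has exactly one parent with a smaller index, the structure is acyclic and connected by construction.
    
    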
  • 76
    Publication Date: 2020-04-29
    Description: In Japan, traffic congestions often occur on the expressways connecting tourism areas with the Tokyo metropolitan area. This congestion can be mitigated if tourists delayed their departure of homeward trips to avoid peak traffic hours. A potential method to promote staggered departure times is providing the estimates of near-future traffic congestion. This study hypothesized and experimentally confirmed that some tourists would delay their departure to avoid traffic based on near-future traffic estimates. The experiment was conducted in the Yatsugatake area using a mobile application that provided this information to tourists. The results suggest that approximately 40% of self-driving tourists will perform an additional stopover if the returning route is congested and near-future traffic congestion estimate is provided.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 77
    Publication Date: 2020-04-29
    Description: This article addresses the challenges of a digital cloud environment when it comes to global scalability with a large number of remote servers and an insecure public cloud, such as Amazon. The goal of the study was to work out an approach for evaluating a reasonable system capacity under heavy workload. For that purpose, the Zabbix monitoring solution is used, and business metrics are applied in relation to existing system ones. A prediction data model is proposed to forecast future user activity based on the collected historical statistics and to verify whether capacity adjustment is possible. The results of the capacity planning are implemented at the Genesys International Telecommunications Company. System analysis of the production environment indicates the possibility of downscaling the capacity of certain virtual servers, which yielded annual operational cost savings of $3500 (50%) for each affected server.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 78
    Publication Date: 2020-04-27
    Description: The time crystal was conceived in the 1970s as an autonomous engine made only of clocks, to explain the life-like features of a virus. Later, the concept was extended to living cells like neurons. The brain controls most biological clocks that continuously regenerate living cells, and most cognitive tasks and learning in the brain run on periodic clock-like oscillations. Can we integrate all cognitive tasks in terms of the running clocks of the hardware? Since the existing concept of a time crystal has only one clock with a singularity point, we generalize the basic idea of the time crystal so that we can bond many clocks in a 3D architecture. Harvesting inside the phase singularity is the key. Since clocks reset continuously in the brain–body system, other clocks take over during a reset. So, we insert clock architectures inside singularities, resembling brain components, bottom-up and top-down. Instead of one clock, the time crystal becomes a composite: a poly-time crystal. We used a century of research on brain rhythms to compile the first hardware-free, pure-clock reconstruction of the human brain. Similar to the global effort on the connectome, a spatial reconstruction of the brain, we advocate a global effort for a more intricate mapping of all brain clocks, to fill the missing links in the brain's temporal map. Once made, reverse engineering the brain would remain a mere engineering challenge.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 79
    Publication Date: 2020-04-29
    Description: Paraphrase detection is important for a number of applications, including plagiarism detection, authorship attribution, question answering, text summarization, and text mining in general. In this paper, we give a performance overview of various types of corpus-based models, especially deep learning (DL) models, on the task of paraphrase detection. We report the results of eight models (LSI, TF-IDF, Word2Vec, Doc2Vec, GloVe, FastText, ELMo, and USE) evaluated on three publicly available corpora: the Microsoft Research Paraphrase Corpus, the Clough and Stevenson corpus, and the Webis Crowd Paraphrase Corpus 2011. Through a large number of experiments, we determined the most appropriate approaches for text pre-processing: hyper-parameters, sub-model selection where it exists (e.g., Skip-gram vs. CBOW), distance measures, and the semantic similarity/paraphrase detection threshold. Our findings, and those of other researchers who have used deep learning models, show that DL models are very competitive with traditional state-of-the-art approaches and have potential that should be further developed.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
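    Whatever the embedding model in the comparison above (Word2Vec, GloVe, USE, ...), the final paraphrase decision reduces to thresholding a similarity score between two sentence vectors. The sketch below uses raw bag-of-words counts in place of learned embeddings and an arbitrary threshold of 0.7; both are illustrative stand-ins, not choices from the paper.

    ```python
    import math
    from collections import Counter

    def cosine(a, b):
        """Cosine similarity between two sparse count vectors (dicts)."""
        dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def is_paraphrase(s1, s2, threshold=0.7):
        """Embed (here: bag of words), measure similarity, apply a threshold."""
        v1 = Counter(s1.lower().split())
        v2 = Counter(s2.lower().split())
        return cosine(v1, v2) >= threshold

    result = is_paraphrase("the cat sat on the mat", "the cat sat on the mat")
    ```

    Swapping the `Counter` line for a call to a real sentence encoder turns this into the pipeline the paper benchmarks.
    
    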
  • 80
    Publication Date: 2020-04-29
    Description: Humans create teams to be more successful in a large variety of tasks. Groups are characterized by an emergent property called collective intelligence, which leads them to be smarter than single individuals. Previous studies proved that collective intelligence characterizes both real and online environments, focusing on adults' performances. In this work, we explored which factors promote group success in an offline and online logical task with adolescents. Five hundred and fifty high school students participated in the experiment and faced Raven's Advanced Progressive Matrices, first by themselves using a computer, then in a group. Group interactions were either computer-mediated or face-to-face, and the participants were randomly assigned to one of the two experimental conditions. The results suggest that groups perform better than individuals, regardless of the experimental condition. Among adolescents, online group performance was negatively affected by the participants' average perception of group cohesion, the difficulty of the problem, and the number of communicative exchanges occurring in the interaction. On the contrary, the factors that improved performance were the average intelligence of the teammates, their levels of neuroticism, and the group's heterogeneity in terms of social abilities. This work contributes to the literature with a comprehensive model of collective intelligence among young people.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 81
    Publication Date: 2020-04-29
    Description: Since its creation by Nawaz, Enscore, and Ham in 1983, NEH remains the best heuristic method for solving flowshop scheduling problems. In the large body of literature dealing with the application of this heuristic, it can clearly be noted that results differ from one paper to another. In this paper, two methods are proposed to improve the original NEH, based on the two points in the method where choices must be made in case of equivalence between two job orders or partial sequences. When an equality occurs in a sorting method, two results are equivalent but can lead to different final results. For the first improvement to NEH, the factorial basis decomposition method is introduced, which computationally maps a number to a permutation. This method allows testing of all the sequencing possibilities for problems with up to 50 jobs. The second improvement is located where NEH keeps the best partial sequence: a list of equivalent partial sequences is kept, rather than only one, to give the overall method a chance of better performance. The results obtained with the successive use of the two improvement methods show an average improvement of 19% over the already effective results of the original NEH method.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
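    The baseline that the entry above improves upon is the original NEH heuristic: order jobs by decreasing total processing time, then insert each job at the position that minimizes the partial makespan. The sketch below implements that baseline; note that both `sorted` and `min` silently break ties by first occurrence, which are precisely the choice points the paper's two improvements address. The 4×3 instance is invented for illustration.

    ```python
    def makespan(seq, p):
        """Completion time of the last job on the last machine;
        p[j][m] is the processing time of job j on machine m."""
        m = len(p[0])
        c = [0] * m
        for j in seq:
            c[0] += p[j][0]
            for k in range(1, m):
                c[k] = max(c[k], c[k - 1]) + p[j][k]
        return c[-1]

    def neh(p):
        """Original NEH: sort by decreasing total time, then best-insertion."""
        jobs = sorted(range(len(p)), key=lambda j: -sum(p[j]))
        seq = [jobs[0]]
        for j in jobs[1:]:
            seq = min((seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)),
                      key=lambda s: makespan(s, p))   # ties: first candidate kept
        return seq

    p = [[3, 4, 6], [5, 2, 3], [2, 6, 4], [4, 3, 5]]  # 4 jobs x 3 machines
    best = neh(p)
    ```
    
    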
  • 82
    Publication Date: 2020-04-28
    Description: Research on the role of non-driving related tasks (NDRT) in the area of automated driving is indispensable. At the same time, the construct mode awareness has received considerable interest in regard to human–machine interface (HMI) evaluation. Based on the expectation that HMI design and practice with different levels of driving automation influence NDRT engagement, a driving simulator study was conducted. In a 2 × 5 (automation level × block) design, N = 49 participants completed several transitions of control. They were told that they could engage in an NDRT if they felt safe and comfortable to do so. The NDRT was the Surrogate Reference Task (SuRT) as a representative of a wide range of visual–manual NDRTs. Engagement (i.e., number of inputs on the NDRT interface) was assessed at the onset of a respective episode of automated driving (i.e., after transition) and during ongoing automation (i.e., before subsequent transition). Results revealed that over time, NDRT engagement increased during both L2 and L3 automation until stable engagement at the third block. This trend was observed for both onset and ongoing NDRT engagement. The overall engagement level and the increase in engagement are significantly stronger for L3 automation compared to L2 automation. These results outline the potential of NDRT engagement as an online non-intrusive measure for mode awareness. Moreover, repeated interaction is necessary until users are familiar with the automated system and its HMI to engage in NDRTs. These results provide researchers and practitioners with indications about users’ minimum degree of familiarity with driving automation and HMIs for mode awareness testing.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 83
    Publication Date: 2020-04-28
    Description: The projected introduction of conditional automated driving systems to the market has sparked multifaceted research on human–machine interfaces (HMIs) for such systems. By moderating the roles of the human driver and the driving automation system, the HMI is indispensable in avoiding side effects of automation such as mode confusion, misuse, and disuse. In addition to safety aspects, the usability of HMIs plays a vital role in improving the trust and acceptance of the automated driving system. This paper aggregates common research methods and findings based on an extensive literature review. Empirical studies, frameworks, and review articles are included. Findings and conclusions are presented with a focus on study characteristics such as test cases, dependent variables, testing environments, or participant samples. These methods and findings are discussed critically, taking into consideration requirements for usability assessments of HMIs in the context of conditional automated driving. The paper concludes with a derivation of recommended study characteristics framing best practice advice for the design of experiments. The advised selection of scenarios and metrics will be applied in a future validation study series comprising a driving simulator experiment and three real driving experiments on test tracks in Germany, the USA, and Japan.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 84
    Publication Date: 2020-04-28
    Description: This paper introduces four safety training episodes and virtual training technology called Immersive Safe Oceans which can be used in further education of professionals in immersive training scenarios. These episodes were developed for maritime safety and are under testing. Immersive Safe Oceans Technology is a cost-effective, portable technology that can be used on board just in time or in maritime training centers. Four introduced episodes, namely, (1) command bridge, (2) machine room, (3) crane, and (4) fire safety, illustrate how Immersive Safe Oceans technology can be used in various professional training scenarios. These episodes also emphasize the growing need for virtual reality training in the shipping industry. As a result, next generation learning will happen onboard in sophisticated virtual training centers.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 85
    Publication Date: 2020-04-27
    Description: In the context of the challenges facing human computer interaction (HCI) on the one hand and the future Internet on the other, the purpose of this study is to explore the multi-dimensionality of smart cities, looking at relationships and interdependencies through correlating selected dimensions of smartness. Key dimensions of smartness are identified for exploration in the context of smart cities in this work through a review of the research literature. Methodologically, this work combines an exploratory case study approach consisting of multiple methods of data collection including survey and in-depth interviews, with an explanatory correlational design. In terms of results, the main findings of this work shed light on the relationships between selected dimensions of the multi-dimensionality construct of smartness in data-rich urban environments. This work is significant in that it provides correlational information for smart city dimensionalities while contributing to the research literature in this domain; uses a hybrid case study and correlational design in relation to the study of multi-dimensionality; and, opens spaces for the study of innovative urban initiatives, while taking the ideas and experiences of people from many sectors into consideration.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 86
    Publication Date: 2020-04-28
    Description: In this paper, we briefly present several modifications and generalizations of the concept of self-organizing neural networks—usually referred to as self-organizing maps (SOMs)—to illustrate their advantages in applications that range from high-dimensional data visualization to complex data clustering. Starting from conventional SOMs, Growing SOMs (GSOMs), Growing Grid Networks (GGNs), Incremental Grid Growing (IGG) approach, Growing Neural Gas (GNG) method as well as our two original solutions, i.e., Generalized SOMs with 1-Dimensional Neighborhood (GeSOMs with 1DN also referred to as Dynamic SOMs (DSOMs)) and Generalized SOMs with Tree-Like Structures (GeSOMs with T-LSs) are discussed. They are characterized in terms of (i) the modification mechanisms used, (ii) the range of network modifications introduced, (iii) the structure regularity, and (iv) the data-visualization/data-clustering effectiveness. The performance of particular solutions is illustrated and compared by means of selected data sets. We also show that the proposed original solutions, i.e., GeSOMs with 1DN (DSOMs) and GeSOMS with T-LSs outperform alternative approaches in various complex clustering tasks by providing up to 20 % increase in the clustering accuracy. The contribution of this work is threefold. First, algorithm-oriented original computer-implementations of particular SOM’s generalizations are developed. Second, their detailed simulation results are presented and discussed. Third, the advantages of our earlier-mentioned original solutions are demonstrated.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 87
    Publication Date: 2020-04-28
    Description: Civil engineering applications are often characterized by a large uncertainty on the material parameters. Discretization of the underlying equations is typically done by means of the Galerkin Finite Element method. The uncertain material parameter can be expressed as a random field represented by, for example, a Karhunen–Loève expansion. Computation of the stochastic responses, i.e., the expected value and variance of a chosen quantity of interest, remains very costly, even when state-of-the-art Multilevel Monte Carlo (MLMC) is used. A significant cost reduction can be achieved by using a recently developed multilevel method: p-refined Multilevel Quasi-Monte Carlo (p-MLQMC). This method is based on the idea of variance reduction by employing a hierarchical discretization of the problem based on a p-refinement scheme. It is combined with a rank-1 Quasi-Monte Carlo (QMC) lattice rule, which yields faster convergence compared to the use of random Monte Carlo points. In this work, we developed algorithms for the p-MLQMC method for two-dimensional problems. The p-MLQMC method is first benchmarked on an academic beam problem. Finally, we use our algorithm for the assessment of the stability of slopes, a problem that arises in geotechnical engineering, and typically suffers from large parameter uncertainty. For both considered problems, we observe a very significant reduction in the amount of computational work with respect to MLMC.
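The variance-reduction idea behind multilevel methods can be illustrated with a toy two-level Monte Carlo estimator: many cheap samples of a coarse model plus a few samples of the small-variance fine-minus-coarse correction. The integrand and sample counts below are invented for illustration and have nothing to do with the paper's finite element problem.

```python
# Toy two-level Monte Carlo estimator (illustrative sketch of the MLMC
# telescoping idea; the models and sample counts are assumptions).
import random

def coarse(x):   # cheap, low-accuracy model
    return x * x

def fine(x):     # expensive, high-accuracy model (here just a perturbation)
    return x * x + 0.01 * x

def mlmc_two_level(n_coarse=100000, n_fine=1000, seed=1):
    rng = random.Random(seed)
    # level 0: many cheap samples of the coarse model
    s0 = sum(coarse(rng.random()) for _ in range(n_coarse)) / n_coarse
    # level 1: few samples of the coupled correction fine - coarse,
    # which has small variance, so few samples suffice
    s1 = 0.0
    for _ in range(n_fine):
        x = rng.random()          # the same sample drives both levels
        s1 += fine(x) - coarse(x)
    s1 /= n_fine
    return s0 + s1                # telescoping sum estimates E[fine]

est = mlmc_two_level()
# exact value: E[x^2] + 0.01*E[x] = 1/3 + 0.005 for x ~ U(0, 1)
```

p-MLQMC keeps this telescoping structure but builds the level hierarchy via p-refinement and replaces the random points with a rank-1 QMC lattice rule.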
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 88
    Publication Date: 2020-04-24
    Description: In this research, we present different approaches to investigating possible relationships between the largest crowd-based knowledge source and the market potential of particular cryptocurrencies. Identifying such relations is crucial because their existence can be used to create a broad spectrum of analyses and reports about cryptocurrency projects and to obtain a comprehensive outlook on the blockchain domain. Activities on the blockchain take place at varying levels of anonymity, which makes them difficult objects of study. In particular, the standard tools used to characterize social trends and the variables that describe cryptocurrencies' situations are unsuitable for an environment that extensively employs cryptographic techniques to hide real users. Employing Wikipedia to trace the value of crypto assets merits examination because the portal gathers different opinions: the content of its articles is edited collaboratively by a group of people. Consequently, the information can be more attractive and useful to readers than that of non-collaborative sources. Wikipedia articles often appear in the top positions of search engines such as Google, Bing, and Yahoo. Demand for information about a particular cryptocurrency can be expected to vary with events (e.g., sharp price fluctuations). Wikipedia offers information only about cryptocurrencies that are important from the point of view of a given Wikipedia language community. This "filter" helps to better identify those cryptocurrencies that have a significant influence on regional markets. The models encompass linkages between different variables and properties. In one model, cryptocurrency projects are ranked by means of article sentiment and quality. In another model, Wikipedia visits are linked to cryptocurrencies' popularity. Additionally, the interactions between information demand in different Wikipedia language versions are elaborated and used to assess the geographical standing of certain crypto coins. The information offered by Wikipedia about the legal status of cryptocurrency technologies in different states is used in another proposed model, which allows assessment of the adoption of cryptocurrencies in a given jurisdiction. Finally, a model is developed that joins Wikipedia article edits and deletions with the social sentiment towards particular cryptocurrency projects. These analyses, which permit assessment of the popularity of blockchain technologies in different local communities, are not the paper's only results. The models can also show which country has the greatest demand for particular cryptocurrencies, such as Bitcoin, Ethereum, Ripple, Bitcoin Cash, Monero, Litecoin, and Dogecoin.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 89
    Publication Date: 2020-04-26
    Description: Every day, large quantities of spatio-temporal data are captured, whether by Web-based companies for social data mining or by other industries for a variety of applications ranging from disaster relief to marine data analysis. Making sense of all this data dramatically increases the need for intelligent backend systems that provide real-time query response times while scaling well (in terms of storage and performance) with increasing quantities of structured or semi-structured, multi-dimensional data. Relational database solutions with spatial extensions, such as PostGIS, currently seem to be reaching their limits. However, graph database technology has been rising in popularity and has been found to handle graph-like spatio-temporal data much more effectively. Motivated by the need to effectively store multi-dimensional, interconnected data, this paper investigates whether graph database technology is better suited than the extended relational approach. Three database technologies are investigated using real-world datasets: PostgreSQL, JanusGraph, and TigerGraph. The datasets used are the Yelp challenge dataset and an ambulance response simulation dataset, combining real-world spatial data with realistic simulations that offer more control over the dataset. Our extensive evaluation is based on how each database performs under practical data analysis scenarios similar to those found at the enterprise level.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 90
    Publication Date: 2020-04-26
    Description: This paper examines how machine learning (ML) and natural language processing (NLP) can be used to identify, analyze, and generate West African folk tales. Two corpora of West African and Western European folk tales are compiled and used in three experiments on cross-cultural folk tale analysis. In the text generation experiment, two types of deep learning text generators are built and trained on the West African corpus. We show that although the texts range between semantic and syntactic coherence, each of them contains West African features. The second experiment further examines the distinction between the West African and Western European folk tales by comparing the performance of an LSTM (acc. 0.79) with a BoW classifier (acc. 0.93), indicating that the two corpora can be clearly distinguished in terms of vocabulary. An interactive t-SNE visualization of a hybrid classifier (acc. 0.85) highlights the culture-specific words for both. The third experiment describes an ML analysis of narrative structures. Classifiers trained on parts of folk tales according to the three-act structure are quite capable of distinguishing these parts (acc. 0.78). Common n-grams extracted from these parts not only underline cross-cultural distinctions in narrative structures, but also show the overlap between verbal and written West African narratives.
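The vocabulary-based distinction that the BoW classifier exploits can be sketched as a nearest-centroid classifier over word-count vectors. The toy "corpora" below are invented placeholders, not the paper's folk-tale data, and the paper's actual BoW model may differ.

```python
# Nearest-centroid bag-of-words classifier (hedged sketch; the documents
# and labels below are invented for illustration).
from collections import Counter

def train_centroids(labeled_docs):
    """Average the word-count vectors of each class into a centroid."""
    sums, counts = {}, Counter()
    for label, text in labeled_docs:
        counts[label] += 1
        sums.setdefault(label, Counter()).update(text.lower().split())
    return {lab: {w: n / counts[lab] for w, n in c.items()}
            for lab, c in sums.items()}

def classify(centroids, text):
    """Assign the class whose centroid has the largest dot product
    with the document's word-count vector."""
    words = Counter(text.lower().split())
    def score(cent):
        return sum(cent.get(w, 0.0) * n for w, n in words.items())
    return max(centroids, key=lambda lab: score(centroids[lab]))

docs = [("A", "spider trickster forest spirit"),
        ("A", "trickster spider tale of the forest"),
        ("B", "knight castle dragon princess"),
        ("B", "castle knight and the dragon")]
centroids = train_centroids(docs)
label = classify(centroids, "the spider of the forest")
```

Culture-specific vocabulary makes the two toy classes separable, mirroring how the paper's BoW classifier distinguishes the two corpora by vocabulary alone.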
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 91
    Publication Date: 2020-04-20
    Description: The high volume of user-generated content caused by the popular use of online social network services exposes users to different kinds of content that can be harmful or unwanted. Solutions to protect user privacy from such unwanted content cannot be generalized due to different perceptions of what is considered as unwanted for each individual. Thus, there is a substantial need to design a personalized privacy protection mechanism that takes into consideration differences in users’ privacy requirements. To achieve personalization, a user attitude about certain content must be acknowledged by the automated protection system. In this paper, we investigate the relationship between user attitude and user behavior among users from the Makkah region in Saudi Arabia to determine the applicability of considering users’ behaviors, as indicators of their attitudes towards unwanted content. We propose a semi-explicit attitude measure to infer user attitude from user-selected examples. Results revealed that semi-explicit attitude is a more reliable attitude measure to represent users’ actual attitudes than self-reported preferences for our sample. In addition, results show a statistically significant relationship between a user’s commenting behavior and the user’s semi-explicit attitude within our sample. Thus, commenting behavior is an effective indicator of the user’s semi-explicit attitude towards unwanted content for a user from the Makkah region in Saudi Arabia. We believe that our findings can have positive implications for designing an effective automated personalized privacy protection mechanism by reproducing the study considering other populations.
    Electronic ISSN: 2073-431X
    Topics: Computer Science
  • 92
    Publication Date: 2020-04-17
    Description: In this paper, we propose a method for halftoning color images based on an error diffusion technique, a color design criterion, and Neugebauer models for expressing colors. For a natural extension of the conventional grayscale error diffusion method to its color version, we first reformulate grayscale error diffusion with a one-dimensional Neugebauer model. We then increase the dimension of the model to derive a color error diffusion method based on a three-dimensional Neugebauer model in RGB (red, green and blue) color space. Moreover, we propose a sparse Neugebauer model based on a color design criterion, the minimal brightness variation criterion (MBVC), from which we derive a sparse Neugebauer model-based error diffusion method. Experimental results show that color halftone images produced by the proposed methods preserve the color content of the original continuous-tone images better than those produced by conventional color error diffusion methods. We also demonstrate that the proposed sparse method reduces halftone noise better than the state-of-the-art method based on MBVC.
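The conventional grayscale error diffusion that the paper takes as its starting point can be sketched with the classic Floyd–Steinberg kernel: each pixel is thresholded and its quantization error is pushed onto unprocessed neighbours. The paper's Neugebauer-model color extension is not reproduced here.

```python
# Grayscale error diffusion with the Floyd-Steinberg kernel (sketch of the
# conventional baseline, not the paper's proposed color method).
def error_diffuse(image):
    """Binarize a 2-D list of intensities in [0, 1], diffusing the
    quantization error to unprocessed neighbours."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]          # work on a copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = 1 if old >= 0.5 else 0     # threshold
            out[y][x] = new
            err = old - new
            # Floyd-Steinberg weights: 7/16, 3/16, 5/16, 1/16
            for dx, dy, wgt in ((1, 0, 7/16), (-1, 1, 3/16),
                                (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] += err * wgt
    return out

# a flat 50% gray patch halftones to a roughly half-black, half-white pattern
gray = [[0.5] * 8 for _ in range(8)]
halftone = error_diffuse(gray)
```

The color methods in the paper generalize the scalar threshold step: instead of rounding to {0, 1}, each pixel is mapped to a Neugebauer primary, with the MBVC guiding the choice to minimize brightness variation.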
    Electronic ISSN: 2313-433X
    Topics: Computer Science
  • 93
    Publication Date: 2020-04-17
    Description: The orientation of plate images is one of the factors that influence the accuracy of license plate recognition. In particular, it is harder to detect tilted plate images and recognize their characters than aligned ones. To this end, rectifying plates in a preprocessing step is essential for improving recognition performance. We propose deep models to estimate the four corner coordinates of tilted plates. Since the predicted corners can then be used to rectify plate images, they can help improve recognition accuracy. The main contributions of this work are a set of open-structured hybrid networks to predict corner positions and a novel loss function that combines pixel-wise differences with position-wise errors, producing performance improvements. In experiments using proprietary plate images, one of the proposed models produces a 3.1% improvement over the established warping method.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 94
    Publication Date: 2020-04-17
    Description: The Infrastructure for Spatial Information in Europe (INSPIRE) Directive (2007) requires public organisations across Europe to share environmentally-related spatial datasets to support decision making and management of the environment. Despite the environmental focus of INSPIRE, it offers limited guidance for archaeological datasets. Most primary data is created outside, but ultimately curated within, the public sector. As spatial evidence from fieldwork activities is not considered by the Directive, it overlooks a range of barriers to sharing data, such as project-based fieldwork, a lack of data standards, and formatting and licencing variations. This paper submits that these challenges are best addressed through the formalised management of primary research data through an archaeological Spatial Data Infrastructure (SDI). SDIs deliver more efficient data management and release economic value by saving time and money. Better stewardship of archaeological data will also lead to more informed research and stewardship of the historic environment. ARIADNE already provides a digital infrastructure for research data, but the landscape and spatial component has been largely overlooked. However, rather than developing a separate solution, the full potential of spatial data from archaeological research can and should be realised through ARIADNE.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 95
    Publication Date: 2020-04-17
    Description: IoT technology, low-power networking technology for IoT, and geopositioning systems have recently become very useful tools for future applications. One such application is logistics, especially point-to-point delivery. The current parcel delivery system routes parcels through a logistics hub despite latent economic disadvantages in terms of delivery time and delivery distance. The major disadvantage of the current hub-and-spoke delivery system is that individual parcels must go to the logistics hub first, even if the origin and destination are near each other. Such deliveries are inefficient and increase the total expense of delivery in terms of delivery time and delivery costs. We designed a new point-to-point delivery system using IoT technology and an end-to-end IoT platform based on ThingPlug and LoRa, aiming at faster delivery at lower cost. We designed an IoT device that can be embedded in or attached to parcels, with various functionalities including the capability to track the current delivery route using geopositioning systems. In addition, we addressed difficulties such as linking the IoT device to IoT platforms and designing the device's functionalities. We also show methods to improve the efficiency of IoT-based parcel delivery systems. The proposed system may improve the economics of parcel delivery by reducing the travel distance and delivery time.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
  • 96
    Publication Date: 2020-04-21
    Description: The aim of this survey is to review some recent developments in devising efficient preconditioners for sequences of symmetric positive definite (SPD) linear systems A_k x_k = b_k, k = 1, 2, …, arising in many scientific applications, such as the discretization of transient Partial Differential Equations (PDEs), the solution of eigenvalue problems, (Inexact) Newton methods applied to nonlinear systems, and rational Krylov methods for computing a function of a matrix. In this paper, we analyze a number of techniques for updating a given initial preconditioner by a low-rank matrix with the aim of improving the clustering of eigenvalues around 1, in order to speed up the convergence of the Preconditioned Conjugate Gradient (PCG) method. We also review some techniques to efficiently approximate the linearly independent vectors which constitute the low-rank corrections and whose choice is crucial for the effectiveness of the approach. Numerical results on real-life applications show that the performance of a given iterative solver can be greatly enhanced by the use of low-rank updates.
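For reference, the baseline PCG iteration that the surveyed low-rank updates aim to accelerate can be sketched as follows. The diagonal (Jacobi) preconditioner here is only a simple stand-in for the "given initial preconditioner"; the survey's low-rank corrections are not implemented.

```python
# Bare preconditioned conjugate gradient for a dense SPD system
# (illustrative sketch with a Jacobi preconditioner).
def pcg(A, b, tol=1e-10, max_iter=100):
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]                                  # residual b - A x0 with x0 = 0
    minv = [1.0 / A[i][i] for i in range(n)]  # Jacobi preconditioner M^-1
    z = [minv[i] * r[i] for i in range(n)]    # preconditioned residual
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# small SPD test system; exact solution is (1/11, 7/11)
A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = pcg(A, b)
```

The convergence rate of this loop is governed by the eigenvalue distribution of the preconditioned matrix, which is why the surveyed techniques update the preconditioner to cluster eigenvalues around 1.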
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 97
    Publication Date: 2020-04-20
    Description: There is a strong correlation between the like/dislike responses to audio–visual stimuli and the emotional arousal and valence reactions of a person. In the present work, our attention is focused on the automated detection of dislike responses based on EEG activity when music videos are used as audio–visual stimuli. Specifically, we investigate the discriminative capacity of the Logarithmic Energy (LogE), Linear Frequency Cepstral Coefficients (LFCC), Power Spectral Density (PSD) and Discrete Wavelet Transform (DWT)-based EEG features, computed with and without segmentation of the EEG signal, on the dislike detection task. We carried out a comparative evaluation with eighteen modifications of the above-mentioned EEG features that cover different frequency bands and use different energy decomposition methods and spectral resolutions. For that purpose, we made use of Naïve Bayes classifier (NB), Classification and regression trees (CART), k-Nearest Neighbors (kNN) classifier, and support vector machines (SVM) classifier with a radial basis function (RBF) kernel trained with the Sequential Minimal Optimization (SMO) method. The experimental evaluation was performed on the well-known and widely used DEAP dataset. A classification accuracy of up to 98.6% was observed for the best performing combination of pre-processing, EEG features and classifier. These results support that the automated detection of like/dislike reactions based on EEG activity is feasible in a personalized setup. This opens opportunities for the incorporation of such functionality in entertainment, healthcare and security applications.
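Of the compared EEG features, the logarithmic energy (LogE) is the simplest to illustrate: the mean squared amplitude of each fixed-length segment, taken in the log domain. The segment length and the synthetic test signal below are illustrative assumptions, not the DEAP preprocessing.

```python
# Log-energy (LogE) feature over fixed-length segments of a 1-D signal
# (illustrative sketch; parameters are assumptions, not the paper's setup).
import math

def log_energy_segments(signal, seg_len):
    """Return log mean-squared amplitude for each non-overlapping segment."""
    feats = []
    for i in range(0, len(signal) - seg_len + 1, seg_len):
        seg = signal[i:i + seg_len]
        energy = sum(s * s for s in seg) / seg_len
        feats.append(math.log(energy + 1e-12))  # floor avoids log(0)
    return feats

# a pure 10 Hz sine sampled at 128 Hz; mean square over whole periods is 0.5
sig = [math.sin(2 * math.pi * 10 * t / 128) for t in range(256)]
feats = log_energy_segments(sig, 128)   # two 1-second segments
```

In the paper's setup, such per-segment features (LogE, LFCC, PSD, DWT variants) feed classifiers such as SVM or kNN trained on the DEAP labels.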
    Electronic ISSN: 2073-431X
    Topics: Computer Science
  • 98
    Publication Date: 2020-04-21
    Description: We propose a tracking-by-detection algorithm to track the movements of meeting participants from an overhead camera. An advantage of using overhead cameras is that all objects can typically be seen clearly, with little occlusion; however, detecting people from a wide-angle overhead view also poses challenges such as people’s appearance significantly changing due to their position in the wide-angle image, and generally from a lack of strong image features. Our experimental datasets do not include empty meeting rooms, and this means that standard motion based detection techniques (e.g., background subtraction or consecutive frame differencing) struggle since there is no prior knowledge for a background model. Additionally, standard techniques may perform poorly when there is a wide range of movement behaviours (e.g. periods of no movement and periods of fast movement), as is often the case in meetings. Our algorithm uses a novel coarse-to-fine detection and tracking approach, combining motion detection using adaptive accumulated frame differencing (AAFD) with Shi-Tomasi corner detection. We present quantitative and qualitative evaluation which demonstrates the robustness of our method to track people in environments where object features are not clear and have similar colour to the background. We show that our approach achieves excellent performance in terms of the multiple object tracking accuracy (MOTA) metrics, and that it is particularly robust to initialisation differences when compared with baseline and state of the art trackers. Using the Online Tracking Benchmark (OTB) videos we also demonstrate that our tracker is very strong in the presence of background clutter, deformation and illumination variation.
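The accumulation idea behind AAFD can be sketched as follows: per-pixel frame differences are summed over a window, so even slowly moving people build up a strong motion response without any background model. This is a simplified, non-adaptive stand-in for the authors' method; the threshold is an illustrative assumption.

```python
# Toy accumulated frame differencing (simplified, non-adaptive sketch of
# the AAFD idea; not the authors' exact formulation).
def accumulated_diff(frames, threshold=0.2):
    """frames: list of 2-D grayscale lists; returns a binary motion mask."""
    h, w = len(frames[0]), len(frames[0][0])
    acc = [[0.0] * w for _ in range(h)]
    for prev, cur in zip(frames, frames[1:]):
        for y in range(h):
            for x in range(w):
                acc[y][x] += abs(cur[y][x] - prev[y][x])
    # pixels whose accumulated difference exceeds the threshold are "motion"
    return [[1 if acc[y][x] > threshold else 0 for x in range(w)]
            for y in range(h)]

# a 1-pixel "person" drifting slowly across an otherwise static 4x4 scene
frames = []
for t in range(4):
    f = [[0.0] * 4 for _ in range(4)]
    f[1][t] = 1.0
    frames.append(f)
mask = accumulated_diff(frames)
```

In the full pipeline this coarse motion mask seeds the fine stage, where Shi-Tomasi corners are detected and tracked within the motion regions.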
    Electronic ISSN: 2313-433X
    Topics: Computer Science
  • 99
    Publication Date: 2020-04-19
    Description: Unmanned aerial vehicles (UAVs) have received an unprecedented surge of interest worldwide in recent years. This paper investigates the specific problem of cooperative mission planning for multiple UAVs on the battlefield from a hierarchical decision-making perspective. From the perspective of the actual mission planning issue, the two key problems to be solved in UAV collaborative mission planning are mission allocation and route planning. In this paper, both of these problems are taken into account via a hierarchical decision-making model. Firstly, we use a target clustering algorithm to divide the original targets into target subgroups, where each subgroup contains multiple targets. Secondly, a fuzzy ant colony algorithm is used to calculate the global path between target subgroups for a single target group. Thirdly, a fuzzy ant colony algorithm is also used to calculate the local path between multiple targets within a single target subgroup. After three levels of decision-making, the complete path for multiple UAVs can be obtained. In order to improve the efficiency of collaborative tasks between different types of UAVs, a cooperative communication strategy is developed, which can reduce the number of UAVs performing tasks. Finally, experimental results demonstrate the effectiveness of the proposed cooperative mission planning and cooperative communication strategy for multiple UAVs.
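The first decision level, grouping targets into subgroups, can be sketched with a simple greedy distance-threshold pass. Both the algorithm and the radius below are hypothetical stand-ins for the paper's target clustering algorithm, shown only to make the hierarchical decomposition concrete.

```python
# Greedy distance-threshold target grouping (hypothetical stand-in for the
# paper's target clustering step; algorithm and radius are assumptions).
def cluster_targets(targets, radius=2.0):
    """Assign each (x, y) target to the first group whose centroid lies
    within `radius`; otherwise start a new group."""
    groups = []                    # each group is a list of targets
    for t in targets:
        for g in groups:
            cx = sum(p[0] for p in g) / len(g)
            cy = sum(p[1] for p in g) / len(g)
            if (t[0] - cx) ** 2 + (t[1] - cy) ** 2 <= radius ** 2:
                g.append(t)
                break
        else:                      # no nearby group found
            groups.append([t])
    return groups

# two spatially separated target clusters
targets = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
groups = cluster_targets(targets)
```

The subsequent levels then plan routes between these subgroups (global paths) and within each subgroup (local paths) using the fuzzy ant colony algorithm.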
    Electronic ISSN: 2078-2489
    Topics: Computer Science
  • 100
    Publication Date: 2020-04-19
    Description: The consumption of natural, green, organic products represents an increasingly important subject for contemporary society, organizations, consumers and researchers. Demographic and cultural factors, traditions and consumption habits, along with the individual desire to adopt a healthy lifestyle in accordance with principles of sustainability and environmental protection are relevant vectors in the search, choice and consumption of green products. Producers and retailers have identified the interest of modern consumers, introducing a varied range of green grocery and non-food products to match expectations and needs. Using the case study method, this paper highlights the transition of the organic market in an emerging European country: Romania. During the era of state economy, organic and natural products were interchangeable, but after liberalization of the market, the rise of the organic sector began with the establishment of inspection and certification bodies, establishment of procedures, and the appearance of specialized agricultural farms, processors and sellers. Consumers understood soon enough the advantages and benefits of organic products and a healthy lifestyle, and the market for organic products has been developing steadily. We show the current state of development and discuss its evolution, outlining the different market statistics, and making recommendations regarding future development possibilities.
    Electronic ISSN: 2078-2489
    Topics: Computer Science