ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Filter: Articles (361), 2015-2019 (361)
  • 1
    Publication Date: 2019-12-13
    Description: Accidental falls are the main cause of fatal and nonfatal injuries, which typically lead to hospital admissions among elderly people. A wearable system capable of detecting unintentional falls and sending remote notifications would clearly improve the quality of life of such subjects and also help to reduce public health costs. In this paper, we describe an edge computing wearable system based on deep learning techniques. In particular, we give special attention to the description of the classification and communication modules, which have been developed by keeping in mind the limits in terms of computational power, memory occupancy, and power consumption of the designed wearable device. The system thus developed is capable of classifying 3D-accelerometer signals in real time and of issuing remote alerts while keeping power consumption low and improving on the present state-of-the-art solutions in the literature. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
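A minimal sketch of the kind of classifier the entry describes, not the authors' architecture: a small 1D CNN over windows of tri-axial accelerometer samples in PyTorch. The window length (128 samples), layer sizes, and the two-class fall/no-fall output are assumptions.

```python
# Minimal sketch: a compact 1D CNN for fall detection on 3-axis accelerometer
# windows. Architecture, window length (128 samples), and classes are assumed,
# not taken from the paper.
import torch
import torch.nn as nn

class FallNet(nn.Module):
    def __init__(self, window: int = 128, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2),  # 3 input axes (x, y, z)
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                      # keeps the model small
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, window) raw accelerometer samples
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

if __name__ == "__main__":
    model = FallNet()
    batch = torch.randn(8, 3, 128)   # 8 simulated windows
    scores = model(batch)            # (8, 2) fall / no-fall logits
    print(scores.argmax(dim=1))
```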
  • 2
    Publication Date: 2019-12-09
    Description: Community detection is an important analysis task for complex networks, including bipartite networks, which consist of nodes of two types and edges connecting only nodes of different types. Many community detection methods take the number of communities in the networks as a fixed known quantity; however, it is impossible to give such information in advance in real-world networks. In our paper, we propose a projection-free Bayesian inference method to determine the number of pure-type communities in bipartite networks. This paper makes the following contributions: (1) we present a first-principles derivation of a practical method for estimating the number of pure-type communities of bipartite networks, using the degree-corrected bipartite stochastic block model, which is able to deal with networks with broad degree distributions; (2) a prior probability distribution is proposed over the partition of a bipartite network; (3) we design a Monte Carlo algorithm incorporating our proposed method and prior probability distribution. We demonstrate our algorithm on synthetic bipartite networks, including an easy case with a homogeneous degree distribution and a difficult case with a heterogeneous degree distribution. The results show that the algorithm gives the correct number of communities of synthetic networks in most cases and outperforms the projection method, especially on networks with heterogeneous degree distributions. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
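One building block such a method needs is a way to score a candidate bipartite partition. The sketch below evaluates the degree-corrected block-model profile log-likelihood, sum over group pairs of m_rs * log(m_rs / (kappa_r * kappa_s)), which a Monte Carlo sampler could use when comparing partitions; the data layout is assumed, and this is not the authors' full inference method.

```python
# Sketch: score a bipartite partition with a degree-corrected block-model
# profile log-likelihood, sum_rs m_rs * log(m_rs / (kappa_r * kappa_s)).
# This is one building block a Monte Carlo sampler could use; it is not the
# authors' full method.
import math
from collections import defaultdict

def dcsbm_log_likelihood(edges, groups_u, groups_v):
    """edges: list of (u, v) pairs between the two node types.
    groups_u / groups_v: dicts mapping nodes to their community labels."""
    m = defaultdict(int)        # edge counts between community pairs (r, s)
    kappa_u = defaultdict(int)  # total degree of each type-1 community
    kappa_v = defaultdict(int)  # total degree of each type-2 community
    for u, v in edges:
        r, s = groups_u[u], groups_v[v]
        m[(r, s)] += 1
        kappa_u[r] += 1
        kappa_v[s] += 1
    return sum(m_rs * math.log(m_rs / (kappa_u[r] * kappa_v[s]))
               for (r, s), m_rs in m.items())

if __name__ == "__main__":
    edges = [(0, "a"), (0, "b"), (1, "a"), (2, "c"), (3, "c"), (3, "d")]
    gu = {0: 0, 1: 0, 2: 1, 3: 1}
    gv = {"a": 0, "b": 0, "c": 1, "d": 1}
    print(dcsbm_log_likelihood(edges, gu, gv))
```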
  • 3
    Publication Date: 2019-11-29
    Description: Internet addiction refers to excessive internet use that interferes with daily life. Because of its negative impact on college students’ study and life, it is necessary to discover students’ internet addiction tendencies and provide correct guidance for them in a timely manner. However, at present, the research methods used in analyzing students’ internet addiction are mainly questionnaires and statistical analysis, which rely heavily on domain experts. Fortunately, with the development of the smart campus, students’ behavior data such as consumption and trajectory information on campus are stored. With this information, we can analyze students’ internet addiction levels quantitatively. In this paper, we provide an approach to estimate college students’ internet addiction levels using their behavior data on campus. In detail, we treat students’ addiction to the internet as a hidden variable which, together with other behavior, affects students’ daily time online. By predicting students’ daily time online, we can infer students’ internet addiction levels. Along this line, we develop a linear internet addiction (LIA) model, a neural network internet addiction (NIA) model, and a clustering-based internet addiction (CIA) model to calculate students’ internet addiction levels, respectively. These three models take the regularity of students’ behavior and the similarity among students’ behavior into consideration. Finally, extensive experiments are conducted on a real-world dataset. The experimental results show the effectiveness of our method, and it is also consistent with some psychological findings. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
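As a loose illustration of the linear (LIA) variant, the sketch below fits daily online time from other campus-behavior features by least squares and reads the residual, the time online not explained by routine behavior, as a crude addiction indicator. The simulated features and this residual-based reading are assumptions, not the paper's model.

```python
# Sketch: regress daily online time on other behavior features and use the
# residual as a crude "unexplained time online" score per student.
# Features and interpretation are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

# Simulated behavior features per student-day: [meals bought, buildings visited,
# library minutes]; target: minutes online that day.
X = rng.normal(size=(200, 3))
true_w = np.array([-10.0, -5.0, -20.0])
y = 180 + X @ true_w + rng.normal(scale=15.0, size=200)

# Least-squares fit with an intercept column.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# Residual = observed minus predicted time online; larger residuals suggest
# more time online than routine behavior explains.
residuals = y - A @ coef
print("top-5 unexplained-time scores:", np.sort(residuals)[-5:])
```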
  • 4
    Publication Date: 2019-11-26
    Description: Digitalization of handwritten documents has created a greater need for accurate online recognition of hand-drawn sketches. However, the online recognition of hand-drawn diagrams is an enduring challenge in human-computer interaction due to the complexity of extracting and recognizing the visual objects reliably from a continuous stroke stream. This paper focuses on the design and development of a new, efficient stroke-based online hand-drawn sketch recognition scheme named SKETRACK for hand-drawn arrow diagrams and digital logic circuit diagrams. The fundamental parts of this model are text separation, symbol segmentation, feature extraction, classification, and structural analysis. The proposed scheme utilizes the concepts of normalization and segmentation to isolate the text from the sketches. Then, the features are extracted to model different structural variations of the strokes, which are categorized into arrows/lines and symbols for effective processing. The strokes are clustered using the spectral clustering algorithm based on p-distance and Euclidean distance to compute the similarity between the features and minimize the feature dimensionality by grouping similar features. Then, symbol recognition is performed using a modified support vector machine (MSVM) classifier in which a hybrid kernel function with lion-optimized SVM tuning parameters is utilized. Structural analysis is performed with lion-based task optimization to recognize the symbol candidates and form the final diagram representations. The proposed recognition model is suitable for simpler structures such as flowcharts, finite automata, and logic circuit diagrams. Through the experiments, the performance of the proposed SKETRACK scheme is evaluated on three domains of databases and the results are compared with the state-of-the-art methods to validate its superior efficiency. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
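The two generic steps named in the entry, spectral clustering of stroke features and SVM-based symbol classification, can be wired together with scikit-learn as sketched below on synthetic feature vectors. The feature dimensionality, kernel choice, and labels are placeholders; the hybrid kernel and lion-based parameter tuning are not reproduced.

```python
# Sketch: cluster stroke feature vectors, then classify symbols with an SVM.
# Synthetic features stand in for the real stroke descriptors; the hybrid
# kernel and lion-based parameter tuning from the paper are not reproduced.
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.svm import SVC

rng = np.random.default_rng(1)
features = rng.normal(size=(60, 8))      # 60 strokes, 8-dim descriptors
labels = rng.integers(0, 3, size=60)     # 3 placeholder symbol classes

# Group similar strokes (e.g. to separate arrows/lines from symbols).
clusters = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                              n_neighbors=10, random_state=0).fit_predict(features)

# Train an RBF-kernel SVM on the stroke descriptors to recognise symbols.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(features, labels)
print("cluster sizes:", np.bincount(clusters))
print("training accuracy:", clf.score(features, labels))
```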
  • 5
    Publication Date: 2019-11-23
    Description: Energy consumption information for devices, as available in the literature, is typically obtained with ad hoc approaches, thus making replication and comparison of consumption data difficult. We propose a process for measuring the energy consumption of a software application. The process contains four phases, each providing a structured deliverable that reports the information required to replicate the measurement. The process also guides the researcher on a threat-to-validity analysis to be included in each deliverable. This analysis ensures better reliability, trust, and confidence to reuse the collected consumption data. Such a process produces structured consumption data for any kind of electronic device (IoT devices, mobile phones, personal computers, servers, etc.), which can be published and shared with other researchers, fostering comparison or further investigations. A real case example demonstrates how to apply the process and how to create the required deliverables. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
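The core measurement step behind such a process can be sketched as: sample power while the workload runs, then integrate mean power over the elapsed time. In the sketch below, read_power_watts() is a hypothetical stand-in for whatever meter or OS interface a given study documents; the four deliverables themselves are documents rather than code.

```python
# Sketch: sample power while a workload runs and integrate it into energy.
# read_power_watts() is a hypothetical placeholder for a real power meter or
# OS interface; replace it with whatever instrument a given study documents.
import random
import threading
import time

def read_power_watts() -> float:
    # Placeholder reading in watts; a real setup would query a meter here.
    return 5.0 + random.random()

def measure_energy(workload, interval_s: float = 0.05) -> float:
    """Run `workload()` and return an energy estimate in joules."""
    samples, stop = [], threading.Event()

    def sampler():
        while not stop.is_set():
            samples.append(read_power_watts())
            time.sleep(interval_s)

    thread = threading.Thread(target=sampler)
    thread.start()
    start = time.monotonic()
    workload()
    elapsed = time.monotonic() - start
    stop.set()
    thread.join()
    mean_power = sum(samples) / max(len(samples), 1)
    return mean_power * elapsed  # energy [J] = mean power [W] * time [s]

if __name__ == "__main__":
    joules = measure_energy(lambda: sum(i * i for i in range(2_000_000)))
    print(f"estimated energy: {joules:.2f} J")
```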
  • 6
    Publication Date: 2019-11-18
    Description: Reducing the energy consumption of data centers is an important way for cloud providers to improve their investment yield, but they must also ensure that the services delivered meet the various requirements of consumers. In this paper, we propose a resource management strategy to reduce both energy consumption and Service Level Agreement (SLA) violations in cloud data centers. It contains three improved methods for subproblems in dynamic virtual machine (VM) consolidation. To make host detection more effective and improve the VM selection results, first, the overloaded-host detection method sets an independent, dynamic saturation threshold for each host that takes the CPU utilization trend into consideration; second, the underutilized-host detection method uses multiple factors besides CPU utilization, together with a Naive Bayes classifier, to calculate the combined weights of hosts in the prioritization step; and third, the VM selection method considers both the current CPU usage and the future growth of the CPU demand of VMs. To evaluate the performance of the proposed strategy, it is simulated in CloudSim and compared with five existing energy-saving strategies using real-world workload traces. The experimental results show that our strategy outperforms the others with minimum energy consumption and SLA violations. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
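One plausible reading of the trend-aware overload check is sketched below: fit a slope to a host's recent CPU utilization history and lower its saturation threshold when utilization is rising. The base threshold, history length, and trend weighting are assumptions, not the paper's parameters.

```python
# Sketch: per-host overload check with a threshold adjusted by the recent
# CPU utilization trend. Base threshold and trend weight are assumptions.
import numpy as np

def is_overloaded(util_history, base_threshold=0.9, trend_weight=0.5):
    """util_history: recent CPU utilizations in [0, 1], oldest first."""
    util = np.asarray(util_history, dtype=float)
    t = np.arange(len(util))
    slope = np.polyfit(t, util, 1)[0]   # utilization change per step
    # Rising utilization lowers the threshold (detect earlier); falling raises it.
    threshold = np.clip(base_threshold - trend_weight * slope * len(util), 0.5, 1.0)
    return util[-1] >= threshold, threshold

if __name__ == "__main__":
    rising = [0.55, 0.62, 0.70, 0.78, 0.85]
    flat = [0.85, 0.84, 0.86, 0.85, 0.85]
    print(is_overloaded(rising))   # lower threshold, likely flagged
    print(is_overloaded(flat))     # steady load, judged against ~0.9
```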
  • 7
    Publication Date: 2019-11-03
    Description: Mutation testing is a technique for evaluating the quality of a test suite. However, the costly computation caused by a large number of mutants limits the practical application of mutation testing, so reducing the number of mutants is an efficient way to make it practical. We propose a new method for reducing mutants by analyzing dominance between statements in the program under test. The proposed method only selects the mutants generated from nondominated statements, and the mutants generated from dominated statements are discarded. After applying the proposed method to nine programs, the experimental results show that our method reduces the number of mutants by over 75% while maintaining mutation adequacy. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
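Given a precomputed dominance relation between statements, the selection rule itself is short, as sketched below; computing dominance (e.g., from the control-flow graph) is assumed to have been done already, and the data layout is illustrative.

```python
# Sketch: keep only mutants generated from nondominated statements.
# `dominates` maps a statement id to the set of statement ids it dominates;
# computing that relation (e.g. from the control-flow graph) is assumed done.

def reduce_mutants(mutants, dominates):
    """mutants: list of (mutant_id, statement_id) pairs."""
    dominated = set()
    for stmt, dominated_set in dominates.items():
        dominated.update(dominated_set - {stmt})
    return [m for m, stmt in mutants if stmt not in dominated]

if __name__ == "__main__":
    mutants = [("m1", "s1"), ("m2", "s2"), ("m3", "s3"), ("m4", "s1")]
    # s1 dominates s2 and s3, so mutants on s2/s3 are dropped.
    dominates = {"s1": {"s2", "s3"}}
    print(reduce_mutants(mutants, dominates))   # [('m1', 's1'), ('m4', 's1')]
```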
  • 8
    Publication Date: 2019-10-31
    Description: In recent times, selecting a suitable hotel location and reserving accommodation have become critical issues for travelers. Online hotel search has grown at a very fast pace and has become very time-consuming due to the huge amount of online information. Recommender systems (RSs) are gaining importance due to their significance in making decisions and providing detailed information about a required product or service. Acquiring hotel recommendations while dealing with textual hotel reviews, numerical ranks, votes, ratings, and numbers of video views has become difficult. To generate true recommendations, we have proposed an intelligent approach which also deals with large-sized heterogeneous data to fulfill the needs of potential customers. The collaborative filtering (CF) approach is one of the most popular techniques of RSs for generating recommendations. We have proposed a novel CF recommendation approach in which opinion-based sentiment analysis is used to build a hotel feature matrix by polarity identification. Our approach combines lexical analysis, syntactic analysis, and semantic analysis to understand sentiment towards hotel features and the profiling of guest type (solo, family, couple, etc.). The proposed system recommends hotels based on the hotel features and guest type for personalized recommendation. The developed system not only has the ability to handle heterogeneous data using the big data Hadoop platform but also recommends hotel class based on guest type using fuzzy rules. Different experiments are performed over real-world datasets obtained from two hotel websites. Moreover, precision, recall, and F-measure have been calculated, and the results are discussed in terms of improved accuracy and response time, significantly better than the traditional approaches. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
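A much-reduced version of the described pipeline is sketched below: tiny lexicon-based polarity scores per hotel feature, then a cosine-similarity ranking against a guest-type preference vector. The lexicon, feature list, and guest profile are placeholders; the Hadoop and fuzzy-rule components are not reproduced.

```python
# Sketch: lexicon-based polarity per hotel feature, then recommend by cosine
# similarity to a guest-type preference vector. Lexicon, features, and the
# guest profile are placeholders; this is not the paper's full pipeline.
import numpy as np

FEATURES = ["location", "cleanliness", "staff", "breakfast"]
LEXICON = {"great": 1, "good": 1, "clean": 1, "friendly": 1,
           "dirty": -1, "rude": -1, "noisy": -1, "bad": -1}

def feature_polarity(reviews):
    """Average lexicon polarity of review sentences mentioning each feature."""
    scores = np.zeros(len(FEATURES))
    counts = np.zeros(len(FEATURES))
    for review in reviews:
        for sentence in review.lower().split("."):
            polarity = sum(LEXICON.get(w, 0) for w in sentence.split())
            for i, feat in enumerate(FEATURES):
                if feat in sentence:
                    scores[i] += polarity
                    counts[i] += 1
    return np.divide(scores, counts, out=np.zeros_like(scores), where=counts > 0)

def recommend(hotel_reviews, guest_pref):
    """Rank hotels by similarity between feature polarity and guest preferences."""
    ranking = []
    for hotel, reviews in hotel_reviews.items():
        vec = feature_polarity(reviews)
        sim = vec @ guest_pref / (np.linalg.norm(vec) * np.linalg.norm(guest_pref) + 1e-9)
        ranking.append((sim, hotel))
    return sorted(ranking, reverse=True)

if __name__ == "__main__":
    hotels = {
        "A": ["great location but noisy rooms.", "friendly staff. good breakfast."],
        "B": ["dirty rooms and rude staff.", "bad breakfast. good location."],
    }
    family_pref = np.array([0.5, 1.0, 0.8, 0.7])   # placeholder 'family' profile
    print(recommend(hotels, family_pref))
```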
  • 9
    Publication Date: 2019-10-24
    Description: Cloud is a widely used platform for intensive computing, bulk storage, and networking. In the world of cloud computing, scaling is a preferred tool for resource management and performance determination. Scaling is generally of two types: horizontal and vertical. Horizontal scaling connects users’ agreements with the hardware and software entities and is implemented physically according to the requirements and demand of the datacenter for its further expansion. Vertical scaling can essentially resize a server without any change in code and can increase the capacity of existing hardware or software by adding resources. The present study describes two approaches to scaling: a predator-prey method, and a genetic algorithm (GA) combined with differential evolution (DE). The predator-prey method is a mathematical model used to implement vertical scaling of tasks for optimal resource provisioning, while the GA-with-DE metaheuristic approach is used for resource scaling. In this respect, the predator-prey model introduces two algorithms, namely, the sustainable and seasonal scaling algorithm (SSSA) and the maximum profit scaling algorithm (MPSA). The SSSA tries to approximate resource scaling and the mechanism for maximizing sustainable as well as seasonal scaling. On the other hand, the MPSA calculates the optimal cost per reservation and the maximum sustainable profit. The experimental results show that the proposed logistic scaling-based predator-prey method (SSSA-MPSA) provides results comparable to the GA-DE algorithm in terms of execution time, average completion time, and cost of expenses incurred by the datacenter. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
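The predator-prey idea can be illustrated by integrating a Lotka-Volterra style pair of equations with Euler steps, treating pending workload as prey and provisioned capacity as predator, as sketched below. The coefficients and this particular mapping are illustrative assumptions rather than the SSSA/MPSA formulations.

```python
# Sketch: a Lotka-Volterra style interaction between pending workload ("prey")
# and provisioned capacity ("predator"), integrated with simple Euler steps.
# Coefficients and the workload/capacity mapping are illustrative assumptions.

def simulate(workload=50.0, capacity=5.0, steps=200, dt=0.05,
             arrival=0.3, service=0.02, scale_up=0.02, decay=0.1):
    history = []
    for _ in range(steps):
        # Workload grows with arrivals and shrinks where capacity serves it.
        d_work = arrival * workload - service * workload * capacity
        # Capacity scales up with observed load and decays when idle.
        d_cap = scale_up * workload * capacity - decay * capacity
        workload = max(workload + dt * d_work, 0.0)
        capacity = max(capacity + dt * d_cap, 0.0)
        history.append((workload, capacity))
    return history

if __name__ == "__main__":
    for step, (w, c) in enumerate(simulate()):
        if step % 40 == 0:
            print(f"step {step:3d}: workload={w:8.1f} capacity={c:6.1f}")
```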
  • 10
    Publication Date: 2019-10-13
    Description: In recent years, there has been renewed interest in the use of field-programmable gate arrays (FPGAs) for high-performance computing (HPC). In this paper, we explore the techniques required by traditional HPC programmers in porting HPC applications to FPGAs, using as an example the LFRic weather and climate model. We report on the first steps in porting LFRic to the FPGAs of the EuroExa architecture. We have used Vivado High-Level Synthesis to implement a matrix-vector kernel from the LFRic code on a Xilinx UltraScale+ development board containing an XCZU9EG multiprocessor system-on-chip. We describe the porting of the code, discuss the optimization decisions, and report performance of 5.34 Gflop/s with double precision and 5.58 Gflop/s with single precision. We discuss sources of inefficiencies, comparisons with peak performance, comparisons with CPU and GPU performance (taking into account power and price), comparisons with published techniques, and comparisons with published performance, and we conclude with some comments on the prospects for future progress with FPGA acceleration of the weather forecast model. The realization of practical exascale-class high-performance computing systems requires significant improvements in the energy efficiency of such systems and their components. This has generated interest in computer architectures which utilize accelerators alongside traditional CPUs. FPGAs offer huge potential as an accelerator which can deliver performance for scientific applications at high levels of energy efficiency. The EuroExa project is developing and building a high-performance architecture based upon ARM CPUs with FPGA acceleration, targeting exascale-class performance within a realistic power budget. (An illustrative code sketch follows this entry.)
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
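For context on the reported figures, the sketch below gives a host-side NumPy reference of a matrix-vector kernel together with the usual 2*m*n flop count used to convert a runtime into Gflop/s; the matrix size is arbitrary and nothing here reflects the HLS implementation itself.

```python
# Sketch: CPU reference for a matrix-vector kernel plus the 2*m*n flop count
# commonly used to convert runtime into Gflop/s. Matrix size is arbitrary and
# this says nothing about the FPGA/HLS implementation itself.
import time
import numpy as np

def matvec_gflops(m=4096, n=4096, dtype=np.float64, repeats=20):
    rng = np.random.default_rng(0)
    A = rng.random((m, n), dtype=dtype)
    x = rng.random(n, dtype=dtype)
    start = time.perf_counter()
    for _ in range(repeats):
        y = A @ x                   # the ported kernel: y = A x
    elapsed = time.perf_counter() - start
    flops = 2.0 * m * n * repeats   # one multiply and one add per element
    return flops / elapsed / 1e9, y

if __name__ == "__main__":
    for dtype in (np.float64, np.float32):
        gflops, _ = matvec_gflops(dtype=dtype)
        print(f"{np.dtype(dtype).name}: {gflops:.2f} Gflop/s")
```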