ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Articles  (1,002)
  • 1
    Publication Date: 2020-10-10
    Description: The heart of current wireless communication systems (including 5G) is the Fourier transform-based orthogonal frequency division multiplex (OFDM). Over time, much research has proposed wavelet transform-based OFDM as a better replacement for the Fourier transform in physical layer solutions because of its performance and its ability to support network-intensive applications such as the Internet of Things (IoT). In this paper, we weigh the performance of the wavelet transform against the requirements of future wireless application systems and propose guidelines and approaches for wavelet applications in 5G waveform design. This is followed by a detailed discussion of the impact on healthcare. Using an image as the test data, a comprehensive performance comparison between the Fourier transform and various wavelet transforms is carried out, considering the following 5G key performance indicators (KPIs): energy efficiency, modulation and demodulation complexity, reliability, latency, spectral efficiency, effect of transmission/reception under asynchronous transmission, and robustness to time-/frequency-selective channels. Finally, guidelines for wavelet transform use are presented. The guidelines are sufficient to serve as approaches for tradeoffs and as a guide for further developments.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
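The Fourier/wavelet contrast in the abstract above can be sketched in a few lines: both transforms are invertible mappings between data symbols and a transmit signal. The snippet below is a minimal illustration (not the paper's testbed), pairing NumPy's FFT with a hand-rolled one-level Haar DWT; the symbol values are made up.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: split a signal into approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse one-level Haar DWT: perfectly reconstructs the original signal."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2)
    x[1::2] = (approx - detail) / np.sqrt(2)
    return x

# A toy block of BPSK-like symbols, standing in for modulated data.
symbols = np.array([1.0, -1.0, 1.0, 1.0, -1.0, -1.0, 1.0, -1.0])

# Fourier-based OFDM maps symbols through an inverse FFT ...
ofdm_time = np.fft.ifft(symbols)
recovered_fft = np.fft.fft(ofdm_time).real

# ... while a wavelet-based scheme uses an inverse DWT instead.
a, d = haar_dwt(symbols)
recovered_dwt = haar_idwt(a, d)
```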
  • 2
    Publication Date: 2020-09-27
    Description: In recent years, increased attention has been given to software quality assurance and protection. Despite the considerable verification and protection schemes proposed and deployed, today’s software unfortunately still fails to be protected from cyberattacks, especially in the presence of insecurely organized heap metadata. In this paper, we explore whether heap metadata can be corrupted and exploited by cyberattackers, in an attempt to assess the exploitability of vulnerabilities and ensure software quality. To this end, we propose RELAY, a software testing framework that simulates human exploitation behavior for metadata corruption at the machine level. RELAY employs a heap layout serialization method to construct exploit patterns from human expertise and decomposes complex exploit-solving problems into a series of intermediate state-solving subproblems. With its heap layout procedural method, RELAY uses fewer resources to solve a layout problem according to the exploit pattern, activates the intermediate state, and generates the final exploit. Additionally, RELAY can be easily extended and can continuously assimilate human knowledge to enhance its ability for exploitability evaluation. Using 20 CTF&RHG programs, we then demonstrate that RELAY is able to evaluate the exploitability of metadata corruption vulnerabilities and works more efficiently than other state-of-the-art automated tools.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
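The decomposition of an exploit into heap-layout subproblems can be illustrated with a toy model. This is only an illustration of the general idea, not RELAY's machine-level engine: a fixed-size heap with first-fit allocation, searched breadth-first for an action sequence that places a hypothetical "victim" chunk directly before a "target" chunk.

```python
from collections import deque

HEAP_SIZE = 4  # toy heap: four fixed-size slots

def alloc(heap, tag):
    """Place `tag` in the lowest-indexed free slot (first fit), or None if full."""
    heap = list(heap)
    for i, slot in enumerate(heap):
        if slot is None:
            heap[i] = tag
            return tuple(heap)
    return None

def free(heap, i):
    heap = list(heap)
    heap[i] = None
    return tuple(heap)

def solve_layout(goal):
    """BFS over alloc/free actions until `goal(heap)` holds, mirroring the
    decomposition of exploit solving into layout subproblems."""
    start = (None,) * HEAP_SIZE
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        heap, actions = queue.popleft()
        if goal(heap):
            return actions
        nexts = [(alloc(heap, tag), actions + [f"alloc {tag}"])
                 for tag in ("pad", "victim", "target")]
        for i, slot in enumerate(heap):
            if slot is not None:
                nexts.append((free(heap, i), actions + [f"free {i}"]))
        for state, acts in nexts:
            if state is not None and state not in seen:
                seen.add(state)
                queue.append((state, acts))
    return None

def adjacent(heap):
    """Goal layout: a victim chunk immediately followed by a target chunk."""
    return any(heap[i] == "victim" and heap[i + 1] == "target"
               for i in range(len(heap) - 1))

plan = solve_layout(adjacent)
```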
  • 3
    Publication Date: 2020-09-25
    Description: Graphics processing units (GPUs) have strong floating-point capability and high memory bandwidth for data parallelism and have been widely used in high-performance computing (HPC). Compute unified device architecture (CUDA) is used as a parallel computing platform and programming model for GPUs to reduce the complexity of programming. Programmable GPUs are becoming popular in computational fluid dynamics (CFD) applications. In this work, we propose a hybrid parallel algorithm combining the message passing interface (MPI) and CUDA for CFD applications on multi-GPU HPC clusters. The AUSM+UP upwind scheme and the three-step Runge-Kutta method are used for spatial and time discretization, respectively. The turbulent solution is solved with the k-ω SST two-equation model. The CPU only manages the execution of the GPU and communication, while the GPU is responsible for data processing. Parallel execution and memory access optimizations are used to optimize the GPU-based CFD code. We propose a nonblocking communication method that fully overlaps GPU computing, CPU-CPU communication, and CPU-GPU data transfer by creating two CUDA streams. Furthermore, a one-dimensional domain decomposition method is used to balance the workload among GPUs. Finally, we evaluate the hybrid parallel algorithm on compressible turbulent flow over a flat plate. The performance of a single-GPU implementation and the scalability of multi-GPU clusters are discussed. Performance measurements show that multi-GPU parallelization can achieve a speedup of more than 36 times with respect to CPU-based parallel computing, and that the parallel algorithm has good scalability.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
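The three-step Runge-Kutta time discretization mentioned in the abstract can be sketched on a scalar model problem. The abstract does not specify which three-step scheme is used, so the snippet assumes the common third-order TVD (Shu-Osher) form and applies it to du/dt = -u:

```python
import math

def rk3_step(u, dt, f):
    """One step of the three-stage TVD Runge-Kutta scheme (Shu-Osher form),
    assumed here in place of the unspecified 'three-step Runge-Kutta method'."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * f(u2))

# Model problem du/dt = -u with exact solution u(t) = exp(-t).
f = lambda u: -u
u, dt = 1.0, 0.01
for _ in range(100):          # integrate from t = 0 to t = 1
    u = rk3_step(u, dt, f)
```

With a third-order scheme the global error at t = 1 is far below 1e-6 for this step size.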
  • 4
    Publication Date: 2020-09-23
    Description: To address the low classification accuracy on imbalanced datasets, an oversampling algorithm, AGNES-SMOTE (Agglomerative Nesting-Synthetic Minority Oversampling Technique), based on hierarchical clustering and an improved SMOTE, is proposed. Its key procedures are: hierarchically cluster the majority and minority samples, respectively; divide the minority subclusters on the basis of the obtained majority subclusters; select a “seed sample” based on the sampling weight and probability distribution of each minority subcluster; and restrict the generation of new samples to a certain area by a centroid method during sampling. The combination of AGNES-SMOTE and SVM (Support Vector Machine) is presented to deal with the classification of imbalanced datasets. Experiments on UCI datasets are conducted to compare the performance of different algorithms from the literature. The experimental results indicate that AGNES-SMOTE excels at synthesizing new samples and improves SVM classification performance on imbalanced datasets.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
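The core SMOTE step that AGNES-SMOTE builds on, interpolating a synthetic sample between a minority seed sample and one of its nearest minority neighbors, can be sketched as follows. This is plain SMOTE only; the hierarchical clustering and centroid restriction of AGNES-SMOTE are omitted, and the data points are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def smote_samples(minority, n_new, k=2):
    """Generate synthetic minority samples by interpolating between a randomly
    chosen seed point and one of its k nearest minority neighbors."""
    minority = np.asarray(minority, dtype=float)
    new = []
    for _ in range(n_new):
        seed = minority[rng.integers(len(minority))]
        d = np.linalg.norm(minority - seed, axis=1)   # distances to all samples
        neighbors = np.argsort(d)[1:k + 1]            # skip the seed itself
        nb = minority[rng.choice(neighbors)]
        gap = rng.random()                            # interpolation factor in [0, 1)
        new.append(seed + gap * (nb - seed))
    return np.array(new)

# Four toy minority points at the corners of the unit square.
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synthetic = smote_samples(minority, n_new=5)
```

Every synthetic point lies on a segment between two minority points, so it stays inside their convex hull.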
  • 5
    Publication Date: 2020-09-18
    Description: Insect intelligent building (I2B) is a novel decentralized, flat-structured intelligent building platform with excellent flexibility and scalability. I2B allows users to develop applications that include control strategies for efficiently managing and controlling buildings. However, developing I2B APPs (applications) is a challenging and complex task due to the complex structural features and parallel computing models of the I2B platform. Existing approaches have difficulty supporting a high degree of abstraction and allowing users to define control scenarios in a concise and comprehensible way. This paper aims to facilitate the development of such applications and to reduce the programming difficulty. We propose Touch, a textual domain-specific language (DSL) that provides a high-level abstraction of I2B APPs. Specifically, we first establish the conceptual programming architecture of the I2B APP, making applications more intuitive by abstracting the different levels of physical entities in I2B. Then, we present special language elements that effectively support the parallel computing model of the I2B platform and provide a formal definition of the concrete Touch syntax. We also implement supporting tools for Touch, including a development environment and target code generation. Finally, we present experimental results that demonstrate the effectiveness and efficiency of Touch.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
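At its core, a textual DSL for building control boils down to parsing rule text into structured tuples and evaluating them against sensor state. The sketch below uses a hypothetical when/then rule syntax invented for illustration; it is not Touch's actual grammar, and the sensor and actuator names are made up.

```python
import re

# Hypothetical rule syntax (NOT Touch's real grammar):
#   when <sensor> <op> <value> then <actuator> <action>
RULE = re.compile(r"when (\w+) ([<>]=?|==) ([\d.]+) then (\w+) (\w+)")

def parse_rules(text):
    rules = []
    for line in text.strip().splitlines():
        m = RULE.fullmatch(line.strip())
        if not m:
            raise SyntaxError(f"bad rule: {line!r}")
        sensor, op, value, actuator, action = m.groups()
        rules.append((sensor, op, float(value), actuator, action))
    return rules

def evaluate(rules, readings):
    """Fire every rule whose condition holds for the given sensor readings."""
    ops = {"<": lambda a, b: a < b, ">": lambda a, b: a > b,
           "<=": lambda a, b: a <= b, ">=": lambda a, b: a >= b,
           "==": lambda a, b: a == b}
    return [(actuator, action) for sensor, op, value, actuator, action in rules
            if ops[op](readings[sensor], value)]

program = """
when temperature > 26.0 then hvac cool
when co2 > 800 then vent open
"""
rules = parse_rules(program)
fired = evaluate(rules, {"temperature": 28.5, "co2": 600})
```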
  • 6
    Publication Date: 2020-09-18
    Description: The “Double-Line Ship Mooring” (DLSM) mode has been applied as a novel operation mode for solving berth allocation problems (BAP) in certain giant container terminals in China. In this study, a continuous berth scheduling problem with the DLSM mode is illustrated and solved with exact and heuristic methods, with the objective of minimizing the total operation cost, including both the additional transportation cost for vessels not located at their minimum-cost berthing position and the penalties for vessels unable to leave as planned. First, the problem is formulated as a mixed-integer programming model and solved by the CPLEX solver for small-size instances. Afterwards, a particle swarm optimization (PSO) algorithm is developed to obtain good-quality solutions within reasonable execution time for large-scale problems. Experimental results show that the DLSM mode can not only greatly reduce the total operation cost but also significantly improve the efficiency of berth scheduling in comparison with the widely used single-line ship mooring (SLSM) mode. The comparison between the results obtained by the proposed PSO algorithm and those obtained by the CPLEX solver, for both small-size and large-scale instances, is also quite encouraging. In summary, this study both validates the effectiveness of the DLSM mode for heavy-loaded ports and provides a powerful decision support tool for port operators to build good-quality berth schedules under the DLSM mode.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
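A particle swarm optimizer of the kind used for the large-scale instances can be sketched generically. The snippet below minimizes a stand-in quadratic "berthing cost" rather than the paper's berth allocation model, with commonly used inertia and attraction weights that are assumptions, not taken from the paper.

```python
import random

random.seed(42)

def pso(cost, dim, n_particles=20, iters=100, bounds=(-10.0, 10.0)):
    """Minimal particle swarm optimizer: each particle is pulled toward its
    personal best position and the swarm's global best."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia and attraction weights (assumed)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            c = cost(pos[i])
            if c < pbest_cost[i]:
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# Toy stand-in for a berth-cost objective: distance from a preferred position.
best, best_cost = pso(lambda p: sum((x - 3.0) ** 2 for x in p), dim=2)
```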
  • 7
    Publication Date: 2020-09-17
    Description: Energy consumption has been one of the main concerns in supporting the rapid growth of cloud data centers: it not only increases the cost of electricity for service providers but also plays an important role in increasing greenhouse gas emissions, and thus environmental pollution, and has a negative impact on system reliability and availability. As a result, energy consumption and efficiency metrics have become a vital issue for scheduling parallel, task-based applications at cloud data centers. In this paper, we present a time- and energy-aware two-phase scheduling algorithm called best heuristic scheduling (BHS) for directed acyclic graph (DAG) scheduling on cloud data center processors. In the first phase, the algorithm allocates resources to tasks by sorting, based on four heuristic methods and a grasshopper algorithm. It then selects the most appropriate method for performing each task, based on an importance factor determined by the end user or service provider, to achieve a solution delivered at the right time. In the second phase, BHS minimizes the makespan and energy consumption according to this importance factor, taking into account the start time, setup time, end time, and energy profile of the virtual machines. Finally, a test dataset is developed to evaluate the proposed BHS algorithm against the multiheuristic resource allocation algorithm (MHRA). The results show that the proposed algorithm achieves 19.71% more energy savings than the MHRA algorithm. Furthermore, the makespan is reduced by 56.12% in heterogeneous environments.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
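Greedy list scheduling of a DAG, the family of heuristics that two-phase schedulers like BHS build on, can be sketched as follows. The task graph, durations, and machine count below are invented for illustration, and no energy model is included.

```python
# Toy DAG: task -> (duration, set of predecessor tasks)
tasks = {
    "load":   (2, set()),
    "decode": (3, {"load"}),
    "filter": (4, {"load"}),
    "merge":  (2, {"decode", "filter"}),
}

def list_schedule(tasks, n_machines=2):
    """Greedy list scheduling in topological order: each ready task is placed
    on the machine where it can start (and hence finish) earliest."""
    finish = {}                          # task -> finish time
    machine_free = [0.0] * n_machines    # per-machine earliest free time
    done = set()
    while len(done) < len(tasks):
        ready = [t for t, (_, preds) in tasks.items()
                 if t not in done and preds <= done]
        for t in sorted(ready):
            dur, preds = tasks[t]
            earliest = max((finish[p] for p in preds), default=0.0)
            m = min(range(n_machines),
                    key=lambda i: max(machine_free[i], earliest))
            start = max(machine_free[m], earliest)
            finish[t] = start + dur
            machine_free[m] = finish[t]
            done.add(t)
    return finish, max(finish.values())

finish, makespan = list_schedule(tasks)
```

Here "decode" and "filter" run in parallel on the two machines, so "merge" starts at t = 6 and the makespan is 8.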
  • 8
    Publication Date: 2020-09-15
    Description: Sepsis is a leading cause of mortality in intensive care units and costs hospitals billions of dollars annually worldwide. Predicting survival time for sepsis patients is a time-critical prediction problem. To exploit the useful sequential information in sepsis development, this paper proposes a time-critical topic model (TiCTM) inspired by the latent Dirichlet allocation (LDA) model. The proposed TiCTM approach takes into account the time-dependency structure between the notes, measurements, and survival time of a sepsis patient. Experimental results on the public MIMIC-III database show that, overall, our method outperforms the conventional LDA and linear regression models in terms of recall, precision, accuracy, and F1-measure. We also find that our method achieves its best performance using 5 topics when predicting the probability of 30-day survival.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
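The evaluation metrics named in the abstract (recall, precision, accuracy, F1-measure) are standard and easy to state precisely. The labels below are made-up stand-ins for 30-day survival outcomes, not MIMIC-III data.

```python
def classification_metrics(y_true, y_pred):
    """Recall, precision, accuracy, and F1 for a binary prediction task,
    e.g. 30-day survival (1) vs. non-survival (0)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    accuracy = (tp + tn) / len(y_true)
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return recall, precision, accuracy, f1

# Toy labels: 3 true positives, 1 false negative, 1 false positive, 3 true negatives.
y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
recall, precision, accuracy, f1 = classification_metrics(y_true, y_pred)
```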
  • 9
    Publication Date: 2020-09-10
    Description: Access control models are an important tool for securing today’s data systems. Institutions use access control models to define who their employees are, what they can do, which resources they can reach, and which processes they can perform, and they use the models to manage the whole process. This is a very hard and costly process for institutions with distributed database systems. Moreover, access control models often cannot be implemented effectively, because defining users’ requests for resources distributed across mutually dependent servers, verifying and authorizing those requests, and monitoring user actions cannot always be configured efficiently. The model suggested in this study aims to automatically calculate the permissions and access levels of all users defined in a distributed database system for its objects; in this way, a more efficient decision is reached on which objects a user may access, while access to information the user does not need is prevented. Our proposed model has been applied to real-life data clusters from organizations providing health, education, and public services. All models were run on servers sharing resources in a private network, and the performance of the proposed model was compared to that of traditional access control models. The results confirm that the proposed model provides more accurate access level results while scaling to many distributed database systems.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
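The core computation, deriving a user's effective access level to an object from role grants scattered across servers, can be sketched as a fold over all servers and roles. The tables and the "highest level wins" combination rule below are illustrative assumptions, not the paper's actual model or dataset.

```python
# Hypothetical role and grant tables (invented for illustration).
LEVELS = {"none": 0, "read": 1, "write": 2, "admin": 3}

user_roles = {
    "alice": {"clinician", "auditor"},
    "bob": {"clerk"},
}

# Per-server grants: server -> role -> object -> access level
server_grants = {
    "records-db": {"clinician": {"patient_file": "write"},
                   "clerk": {"patient_file": "read"}},
    "billing-db": {"auditor": {"invoice": "read"},
                   "clerk": {"invoice": "write"}},
}

def effective_level(user, obj):
    """Combine grants from every server and role: the user's effective access
    to an object is the highest level any of their roles confers anywhere."""
    best = "none"
    for grants in server_grants.values():
        for role in user_roles.get(user, ()):
            level = grants.get(role, {}).get(obj, "none")
            if LEVELS[level] > LEVELS[best]:
                best = level
    return best
```

Computing the levels centrally like this lets the system deny by default: anything not explicitly granted resolves to "none".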
  • 10
    Publication Date: 2020-09-10
    Description: Feature ranking, a subcategory of feature selection, is an essential preprocessing technique that ranks all features of a dataset so that the most important features carry the most information. Ensemble learning has two advantages. First, it is based on the assumption that combining the outputs of different models can lead to a better outcome than the output of any individual model. Second, scalability is an intrinsic characteristic that is crucial in coping with large-scale datasets. In this paper, a homogeneous ensemble feature ranking algorithm is considered, and the nine rank fusion methods used in this algorithm are analyzed comparatively. The experimental studies are performed on six real medium-sized datasets, and the area under the feature-forward-addition curve is assessed. Finally, statistical analysis by repeated-measures analysis of variance reveals that the differences in performance among the rank fusion methods applied in a homogeneous ensemble feature ranking are small but statistically significant, and that the B-Min method performs slightly better.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science, Media Resources and Communication Sciences, Journalism
    Published by Hindawi
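Rank fusion in a homogeneous ensemble can be sketched directly: several rankings of the same features are merged into one fused ordering. The abstract does not define the B-Min method, so the min-rank reading below is an assumption; mean-rank fusion is shown alongside for comparison, and the feature names and rankings are made up.

```python
# Three rankings of the same five features (rank 1 = most important),
# e.g. produced by bootstrap runs of one ranker in a homogeneous ensemble.
rankings = [
    {"age": 1, "bmi": 2, "bp": 3, "chol": 4, "hr": 5},
    {"bmi": 1, "age": 2, "chol": 3, "bp": 4, "hr": 5},
    {"age": 1, "bp": 2, "bmi": 3, "hr": 4, "chol": 5},
]

def fuse_min(rankings):
    """Min-rank fusion: a feature's fused score is its best (smallest) rank
    in any ranking -- one plausible reading of the 'B-Min' method."""
    return {f: min(r[f] for r in rankings) for f in rankings[0]}

def fuse_mean(rankings):
    """Mean-rank fusion: average rank across all rankings."""
    return {f: sum(r[f] for r in rankings) / len(rankings) for f in rankings[0]}

def final_order(scores):
    """Sort features by fused score, breaking ties alphabetically."""
    return sorted(scores, key=lambda f: (scores[f], f))

order_min = final_order(fuse_min(rankings))
order_mean = final_order(fuse_mean(rankings))
```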