ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Filter
  • Collection: Articles (129); Other Sources
  • Publisher: Hindawi (129)
  • Years: 2020-2022 (129)
  • Topic: Media Resources and Communication Sciences, Journalism (129)
  • 1
    Publication Date: 2020-08-25
    Description: Electroencephalography- (EEG-) based control is a noninvasive technique which employs brain signals to control electrical devices/circuits. Currently, brain-computer interface (BCI) systems provide two types of signals, raw signals and logic state signals. The latter are used to turn devices on and off. In this paper, the capabilities of BCI systems are explored, and a survey is conducted on how to extend and enhance the reliability and accuracy of BCI systems. A structured overview is provided of the data acquisition, feature extraction, and classification algorithm methods used by different researchers in the past few years. Classification algorithms for EEG-based BCI systems include adaptive classifiers, tensor classifiers, transfer learning approaches, and deep learning, as well as some miscellaneous techniques. Based on our assessment, we conclude that adaptive classifiers yield more accurate results than static classification techniques, and that deep learning techniques are better suited to achieving the desired objectives and real-time implementation than other algorithms.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 2
    Publication Date: 2020-08-25
    Description: To address the shortcomings of current short-term forecasting methods for metro passenger flow, such as unclear influencing factors, low accuracy, and high time-space complexity, a forecasting method based on ST-LightGBM that takes transfer passenger flow into account is proposed. Firstly, using historical data as the training set, the short-term prediction of metro passenger flow is formalized as a data-driven multi-input single-output regression problem and the difficulties of the problem are identified. Secondly, we extract the candidate temporal and spatial features that may affect passenger flow at a metro station from passenger travel data, based on the spatial transfer and spatial similarity of passenger flow. Thirdly, we use a maximal information coefficient (MIC) feature selection algorithm to select the features with significant impact as the input. Finally, a short-term forecasting model for metro passenger flow based on the light gradient boosting machine (LightGBM) is established. By taking transfer passenger flow into account, this method achieves low space-time cost and high accuracy. Experimental results on the dataset of Lianban metro station in Xiamen city show that the proposed method obtains higher prediction accuracy than SARIMA, SVR, and BP networks.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
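The select-then-regress pipeline in the abstract above can be illustrated in a few lines. MIC itself requires a dedicated library (e.g. minepy) and LightGBM is an external package, so this minimal sketch substitutes a plain |Pearson correlation| filter for the MIC stage and stops at feature selection; `select_features` and the toy data are illustrative, not the paper's code.

```python
import numpy as np

def select_features(X, y, k):
    """Rank candidate features by |Pearson correlation| with the target
    and keep the top k. (A simple stand-in for the MIC filter described
    in the abstract, which needs a dedicated library such as minepy.)"""
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    top = np.argsort(scores)[::-1][:k]
    return np.sort(top)

# toy data: 5 candidate features, only the first two drive the target
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)
print(select_features(X, y, 2))
```

The selected column indices would then be the input to the regression stage (LightGBM in the paper).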
  • 3
    Publication Date: 2020-08-25
    Description: Most existing knowledge graph embedding models are supervised methods that rely largely on the quality and quantity of obtainable labelled training data. The cost of obtaining high-quality triples is high, and the data sources face a serious problem of data sparsity, which may result in insufficient training of long-tail entities. However, unstructured text encoding entity and relational knowledge can be obtained anywhere in large quantities. Word vectors of entity names estimated from unlabelled raw text using a natural language model encode the syntactic and semantic properties of entities. Yet since these feature vectors are estimated by minimizing prediction error on unsupervised entity names, they may not be the best fit for knowledge graphs. We propose a two-phase approach that adapts unsupervised entity name embeddings to a knowledge graph subspace and jointly learns the adaptive matrix and the knowledge representation. Experiments on Freebase show that our method relies less on labelled data and outperforms the baselines when labelled data is relatively scarce. In particular, it is applicable to the zero-shot scenario.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
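The core of the subspace-adaptation idea above can be sketched with a closed-form least-squares fit: learn one linear map that projects name embeddings into the knowledge-graph space using the labelled entities. This is a hypothetical stand-in for the paper's jointly learned adaptive matrix; `adapt_embeddings` and the synthetic data are illustrative only.

```python
import numpy as np

def adapt_embeddings(E, T):
    """Learn a single linear map M projecting unsupervised entity-name
    embeddings E (n x d) into a knowledge-graph subspace, by least squares
    against target KG vectors T (n x k) for the labelled entities.
    (A closed-form stand-in for the paper's jointly learned matrix.)"""
    M, *_ = np.linalg.lstsq(E, T, rcond=None)
    return M

rng = np.random.default_rng(1)
E = rng.normal(size=(50, 8))   # name embeddings of 50 labelled entities
M_true = rng.normal(size=(8, 4))
T = E @ M_true                 # their (synthetic) KG-space targets
M = adapt_embeddings(E, T)
print(np.allclose(M, M_true))
```

With noise-free synthetic targets the map is recovered exactly; the paper instead optimizes the map jointly with a knowledge-representation objective.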
  • 4
    Publication Date: 2020-08-25
    Description: Portfolio investment is adopted by venture capital to diversify the risks involved in project selection, investment, and operation, so that the venture capitalist can expect a relatively stable income and lower financing risks. Based on the design of a portfolio investment contract with unlimited funds developed by Kanniainen and Keuschnigg, and by Inderst et al., this article presents a modified model given the limitation of funds available to the venture capitalist. It is demonstrated that, under this limitation, the marginal benefit of the effort paid by the entrepreneurs exceeds the marginal cost, which conduces to a high level of engagement by the entrepreneurs. Thus, by adopting a renegotiation contract design, the venture capitalist can stimulate the entrepreneurs to make efforts, which results in a reduction of moral hazard.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 5
    Publication Date: 2020-08-25
    Description: Recently, knowledge graph embedding methods have attracted numerous researchers’ interest due to their outstanding effectiveness and robustness in knowledge representation. However, there are still some limitations in the existing methods. On the one hand, translation-based representation models focus on conceiving translation principles to represent knowledge from a global perspective, but they fail to learn the various types of relational facts discriminatively. This tends to cause congestion of the entities of complex relational facts in the embedding space, reducing the precision of the representation vectors associated with those entities. On the other hand, parallel subgraphs extracted from the original graph can be used to learn local relational facts discriminatively. However, subgraph extraction may damage relational facts of the original knowledge graph to some degree. Thus, previous methods are unable to learn local and global knowledge representations uniformly. To that end, we propose a multiview translation learning model, named MvTransE, which learns relational facts from global-view and local-view perspectives, respectively. Specifically, we first construct multiple parallel subgraphs from an original knowledge graph by considering entity semantic and structural features simultaneously. Then, we embed the original graph and the constructed subgraphs into the corresponding global and local feature spaces. Finally, we propose a multiview fusion strategy to integrate the multiview representations of relational facts. Extensive experiments on four public datasets demonstrate the superiority of our model in knowledge graph representation tasks compared to state-of-the-art methods.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 6
    Publication Date: 2020-08-28
    Description: A multimode resource-constrained project scheduling problem (MRCPSP) may have multiple feasible solutions, due to its nature of targeting multiple objectives. Given that the MRCPSP is NP-hard and multiobjective algorithms are intricate, finding the best result among those solutions is difficult. This paper adopts data envelopment analysis (DEA) to evaluate a series of solutions of an MRCPSP and to find an appropriate choice in an objective way. Our approach is applied to a typical MRCPSP in practice, and the results validate that DEA is an effective and objective method for MRCPSP solution selection.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 7
    Publication Date: 2020-08-28
    Description: Implementing artificial neural networks is commonly achieved via high-level programming languages such as Python and easy-to-use deep learning libraries such as Keras. These software libraries come preloaded with a variety of network architectures, provide autodifferentiation, and support GPUs for fast and efficient computation. As a result, a deep learning practitioner will favor training a neural network model in Python, where these tools are readily available. However, many large-scale scientific computation projects are written in Fortran, making it difficult to integrate with modern deep learning methods. To alleviate this problem, we introduce a software library, the Fortran-Keras Bridge (FKB). This two-way bridge connects environments where deep learning resources are plentiful with those where they are scarce. The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles. The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation, in which subgrid physics are outsourced to deep neural network emulators. In this context, FKB enables a hyperparameter search of one hundred plus candidate models of subgrid cloud and radiation physics, initially implemented in Keras, to be transferred and used in Fortran. Such a process allows the model’s emergent behavior to be assessed, i.e., when fit imperfections are coupled to explicit planetary-scale fluid dynamics. The results reveal a previously unrecognized strong relationship between offline validation error and online performance, in which the choice of the optimizer proves unexpectedly critical. This in turn reveals many new neural network architectures that produce considerable improvements in climate model stability including some with reduced error, for an especially challenging training dataset.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 8
    Publication Date: 2020-08-28
    Description: This study investigates a multidepot heterogeneous vehicle routing problem for a variety of hazardous materials with risk analysis, a practical problem in the industrial field. The objective is to design a series of routes that minimize the total cost, composed of transportation cost, risk cost, and overtime work cost. Our study comprehensively considers factors such as transportation costs, multiple depots, heterogeneous vehicles, risks, and multiple accident scenarios. The problem is defined as a mixed integer programming model. A bidirectional tuning heuristic algorithm and a particle swarm optimization algorithm are developed to solve instances of different scales. The computational results are competitive: our algorithm obtains effective results on small-scale instances and shows great efficiency on large-scale instances with 70 customers, 30 vehicles, and 3 types of hazardous materials.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 9
    Publication Date: 2020-08-28
    Description: Single image super-resolution (SISR) is a traditional image restoration problem. Given an image with low resolution (LR), the task of SISR is to find the homologous high-resolution (HR) image. As SISR is an ill-posed problem, it has been approached from different points of view. Recently, deep learning has shown amazing performance in various image processing tasks, and several works address image super-resolution with convolutional neural networks (CNNs). In this paper, we propose an adaptive residual channel attention network for image super-resolution. We first analyze the limitations of the residual connection structure and propose an adaptive design for suitable feature fusion. Besides the adaptive connection, channel attention is employed to adjust the importance distribution among different channels. A novel adaptive residual channel attention block (ARCB) combining channel attention and adaptive connection is proposed, together with a simple but effective upscale block design for different scales. We build our adaptive residual channel attention network (ARCN) with the proposed ARCBs and upscale block. Experimental results show that our network not only achieves better PSNR/SSIM performance on several testing benchmarks but also recovers structural textures more effectively.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
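The channel-attention idea the ARCB builds on can be sketched in squeeze-and-excitation style: global-average-pool each channel, pass the channel descriptor through two small dense layers, and rescale the feature map channel-wise. This is a generic numpy sketch, not the paper's block; the toy weights `w1`/`w2` are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Minimal channel-attention sketch for a feature map x of shape (C, H, W):
    squeeze each channel to a scalar, compute per-channel attention weights
    in (0, 1) with a tiny two-layer network, and rescale the channels."""
    s = x.mean(axis=(1, 2))                    # squeeze: (C,)
    a = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))  # excitation: (C,) weights
    return x * a[:, None, None]                # channel-wise rescale

C = 4
x = np.ones((C, 8, 8))
w1 = np.eye(C)   # hypothetical toy weights
w2 = np.eye(C)
out = channel_attention(x, w1, w2)
print(out.shape)
```

In a real network `w1`/`w2` are learned, usually with a bottleneck (reduced hidden width) between them.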
  • 10
    Publication Date: 2020-08-28
    Description: The Cloud Computing paradigm is focused on the provisioning of reliable and scalable virtual infrastructures that deliver execution and storage services. This paradigm is particularly suitable for solving resource-greedy scientific computing applications such as parameter sweep experiments (PSEs). Through the implementation of autoscalers, the virtual infrastructure can be scaled up and down by acquiring or terminating instances of virtual machines (VMs) while application tasks are being scheduled. In this paper, we extend an existing study centered on a state-of-the-art autoscaler called the multiobjective evolutionary autoscaler (MOEA). MOEA uses a multiobjective optimization algorithm to determine the set of possible virtual infrastructure settings. In this context, the performance of MOEA is greatly influenced by the underlying optimization algorithm used and its tuning. Therefore, we analyze two well-known multiobjective evolutionary algorithms (NSGA-II and NSGA-III) and how they impact the performance of the MOEA autoscaler. Simulated experiments with three real-world PSEs show that MOEA improves significantly when using NSGA-III instead of NSGA-II, because the former provides a better exploitation-versus-exploration trade-off.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 11
    Publication Date: 2020-08-28
    Description: There is a large amount of information and maintenance data in the aviation industry that could be used to obtain meaningful results in forecasting future actions. This study aims to introduce machine learning models based on feature selection and data elimination to predict failures of aircraft systems. Maintenance and failure data for aircraft equipment across a period of two years were collected, and nine input variables and one output variable were meticulously identified. A hybrid data preparation model is proposed to improve the success of failure count prediction in two stages. In the first stage, ReliefF, a feature selection method for attribute evaluation, is used to find the most effective and ineffective parameters. In the second stage, a K-means algorithm is modified to eliminate noisy or inconsistent data. The performance of the hybrid data preparation model on the maintenance dataset of the equipment is evaluated with a Multilayer Perceptron (MLP) as an Artificial Neural Network (ANN), Support Vector Regression (SVR), and Linear Regression (LR) as machine learning algorithms. Moreover, performance criteria such as the Correlation Coefficient (CC), Mean Absolute Error (MAE), and Root Mean Square Error (RMSE) are used to evaluate the models. The results indicate that the hybrid data preparation model is successful in predicting the failure count of the equipment.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
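The second-stage data-elimination idea above can be sketched as follows: cluster the data, then discard the points farthest from their assigned centroid as likely noise. The paper uses a *modified* K-means; this hypothetical sketch uses the vanilla algorithm with a deterministic initialization, and `kmeans_filter` and the toy data are illustrative only.

```python
import numpy as np

def kmeans_filter(X, k, n_keep, iters=20):
    """Cluster X with plain K-means, then keep only the n_keep points
    closest to their assigned centroid (dropping the rest as noise)."""
    centers = X[:k].copy()                     # deterministic init for the sketch
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    dist = np.linalg.norm(X - centers[labels], axis=1)
    return np.sort(dist.argsort()[:n_keep])

# two tight clusters plus one gross outlier (index 20)
X = np.vstack([np.zeros((10, 2)), np.full((10, 2), 10.0), [[100.0, 100.0]]])
kept = kmeans_filter(X, k=2, n_keep=20)
print(20 in kept)
```

The outlier ends up farthest from any centroid and is the point eliminated.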
  • 12
    Publication Date: 2020-08-28
    Description: The optical images collected by remotely operated vehicles (ROVs) contain a lot of information about the underwater environment (such as distributions of underwater creatures and minerals), which plays an important role in ocean exploration. However, due to the absorption and scattering characteristics of the water medium, some of the images suffer from serious color distortion. These distorted images usually need to be enhanced before they can be analyzed further. However, at present, no image enhancement algorithm performs well in every scene. Therefore, in order to monitor image quality in the display module of an ROV, a no-reference image quality predictor (NIPQ) is proposed in this paper. A unique property that differentiates the proposed NIPQ metric from existing work is its consideration of the viewing behavior of the human visual system and the imaging characteristics of underwater images in different water types. Experimental results based on the underwater optical image quality database (UOQ) show that the proposed metric can provide an accurate prediction of the quality of the enhanced image.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 13
    Publication Date: 2020-07-01
    Description: Recently, the use of NoSQL databases has grown for managing unstructured data in applications that require performance and scalability. However, many organizations prefer to transfer data from an operational NoSQL database to a SQL-based relational database in order to use existing tools for business intelligence, analytics, decision making, and reporting. Existing methods of NoSQL-to-relational transformation require manual schema mapping, which demands domain expertise and consumes considerable time. Therefore, an efficient and automatic method is needed to transform an unstructured NoSQL database into a structured database. In this paper, we propose and evaluate an efficient method to transform a NoSQL database into a relational database automatically. In our experimental evaluation, we used MongoDB as the NoSQL database, and MySQL and PostgreSQL as relational databases, to perform transformation tasks for different dataset sizes. We observed excellent performance, compared to the existing state-of-the-art methods, in transforming data from a NoSQL database into a relational database.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
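The schema-mapping step that such a transformation must automate can be sketched in a few lines: nested objects in a document (e.g. from MongoDB) become flattened column names, while arrays are split off into child-table rows. This hypothetical helper illustrates the mapping only; it is not the paper's algorithm.

```python
def flatten_doc(doc, parent_key=""):
    """Flatten one NoSQL-style document into a relational row plus
    child-table payloads: nested dicts -> prefixed columns, lists ->
    rows for a separate child table keyed by the column name."""
    row, children = {}, {}
    for key, value in doc.items():
        col = f"{parent_key}_{key}" if parent_key else key
        if isinstance(value, dict):
            sub_row, sub_children = flatten_doc(value, col)
            row.update(sub_row)
            children.update(sub_children)
        elif isinstance(value, list):
            children[col] = value          # goes to a separate child table
        else:
            row[col] = value
    return row, children

doc = {"_id": 1, "name": "sensor", "meta": {"lab": "A", "floor": 3}, "tags": ["x", "y"]}
row, children = flatten_doc(doc)
print(row)
print(children)
```

A full pipeline would additionally infer column types and emit `CREATE TABLE`/`INSERT` statements from such rows.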
  • 14
    Publication Date: 2020-07-14
    Description: A knowledge graph is a kind of semantic network for information retrieval. How to construct a knowledge graph that can serve the power system based on the behavior data of dispatchers is a hot research topic in the area of electric power artificial intelligence. In this paper, we propose a method to construct a dispatch knowledge graph for the power grid. Leveraging dispatch data from the power domain, this method first extracts entities and then identifies dispatching behavior relationship patterns. More specifically, the method consists of three steps. First, we construct a corpus of power dispatching behaviors by semi-automated labeling. Then, we propose a model, the BiLSTM-CRF model, to extract entities and identify the dispatching behavior relationship patterns. Finally, we construct a knowledge graph of power dispatching data. The knowledge graph provides an underlying knowledge model for automated power dispatching and related services and helps dispatchers perform better power dispatch knowledge retrieval and other operations during the dispatch process.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 15
    Publication Date: 2020-07-14
    Description: With the advancement of ICT, web search engines (WSEs) have become a preferred source for finding health-related information published on the Internet. Google alone receives more than one billion health-related queries on a daily basis. However, in order to provide the results most relevant to the user, WSEs maintain users’ profiles. These profiles may contain private and sensitive information such as the user’s health condition, disease status, and others. Health-related queries thus contain privacy-sensitive information that may infringe on a user’s privacy, since the identity of the user is exposed and may be misused by the WSE and third parties. One well-known solution to preserve privacy involves issuing the queries via a peer-to-peer private information retrieval protocol, such as useless user profile (UUP), thereby hiding the user’s identity from the WSE. This paper investigates the level of protection offered by UUP. For this purpose, we present the QuPiD (query profile distance) attack: a machine learning-based attack that evaluates the effectiveness of UUP in privacy protection. The QuPiD attack determines the distance between the user’s profile (web search history) and an upcoming query using our proposed novel feature vector. The experiments were conducted using ten classification algorithms belonging to the tree-based, rule-based, lazy learner, metaheuristic, and Bayesian families for the sake of comparison. Furthermore, two subsets of an America Online dataset (noisy and clean) were used for experimentation. The results show that the proposed QuPiD attack associates more than 70% of queries to the correct user with a precision of over 72% for the clean dataset, and more than 40% of queries to the correct user with 70% precision for the noisy dataset.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
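The intuition behind a profile-to-query distance can be illustrated with the simplest possible version: represent the search history and the incoming query as term-frequency vectors and take their cosine distance. QuPiD uses a richer, learned feature vector; this bag-of-words sketch and its function name are hypothetical.

```python
import math
from collections import Counter

def profile_query_distance(history, query):
    """Cosine distance between a user's search history (list of past
    queries) and an incoming query, both as bag-of-words vectors.
    (Only the core intuition of a profile-query distance, not QuPiD.)"""
    h = Counter(" ".join(history).lower().split())
    q = Counter(query.lower().split())
    dot = sum(h[t] * q[t] for t in q)
    norm = (math.sqrt(sum(v * v for v in h.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return 1.0 - (dot / norm if norm else 0.0)

history = ["diabetes diet plan", "insulin dosage chart"]
on_topic = profile_query_distance(history, "diabetes insulin")
off_topic = profile_query_distance(history, "football scores")
print(on_topic < off_topic)
```

A query close to a known profile (small distance) is the signal an attacker uses to re-link anonymized queries to their author.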
  • 16
    Publication Date: 2020-07-06
    Description: As a sportified form of human-computer interaction, eSports faces a strong gender stereotype threat that is causing female players’ withdrawal. This study aims to investigate the relationship between gender-swapping and females’ continuous participation intention in eSports, the mediating effect of self-efficacy, and the moderating effect of discrimination. The results demonstrate (1) that the direct effect of gender-swapping on continuous participation intention in eSports was not significant, while gender-swapping had a significant association with self-efficacy, and self-efficacy had a significant association with continuous participation intention in eSports; (2) that gender-swapping had an indirect effect (via self-efficacy) on continuous participation intention in eSports; and (3) that discrimination moderated the effect of self-efficacy on continuous participation intention. Female players who had experienced discrimination displayed higher continuous participation intention in the context of self-efficacy enhanced by gender-swapping.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 17
    Publication Date: 2020-07-07
    Description: Source code similarity detection has extensive applications in computer programming teaching and software intellectual property protection. In computer programming courses, students may utilize complex source code obfuscation techniques, e.g., opaque predicates, loop unrolling, and function inlining and outlining, to reduce the similarity between code fragments and avoid plagiarism detection. Existing source code similarity detection approaches consider only static features of source code, making it difficult to cope with more complex code obfuscation techniques. In this paper, we propose a novel source code similarity detection approach that considers the dynamic features of source code at runtime, using process mining. More specifically, given two pieces of source code, their running logs are obtained by source code instrumentation and execution. Next, process mining is used to obtain the flow charts of the two pieces of source code by analyzing their collected running logs. Finally, the similarity of the two pieces of source code is measured by computing the similarity of these two flow charts. Experimental results show that the proposed approach can deal with more complex obfuscation techniques, including opaque predicates and loop unrolling as well as function inlining and outlining, which cannot be handled properly by existing work. Therefore, we argue that our approach can defeat commonly used code obfuscation techniques more effectively than existing state-of-the-art approaches to source code similarity detection.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
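The final comparison step described above can be sketched once each program's runtime log has been mined into a flow chart. Here a flow chart is reduced to a set of directed edges and similarity to the Jaccard index of the edge sets; this is a simple stand-in for whatever graph similarity the paper actually computes, and the example charts are hypothetical.

```python
def flow_similarity(edges_a, edges_b):
    """Jaccard index of two flow charts represented as sets of directed
    edges (u, v): |shared edges| / |all edges|. Returns a value in [0, 1]."""
    a, b = set(edges_a), set(edges_b)
    union = a | b
    return len(a & b) / len(union) if union else 1.0

# two variants of the same loop, one with an extra logging step inserted
chart1 = [("start", "check"), ("check", "body"), ("body", "check"), ("check", "end")]
chart2 = [("start", "check"), ("check", "body"), ("body", "log"),
          ("log", "check"), ("check", "end")]
print(flow_similarity(chart1, chart2))
```

Because the charts are mined from execution logs rather than source text, renaming identifiers or reshaping the static code does not change the edges being compared.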
  • 18
    Publication Date: 2020-07-01
    Description: With the rapid development of the Internet and big data, place retrieval has become an indispensable part of daily life. However, traditional retrieval technology cannot meet the semantic needs of users, so knowledge graphs have been introduced into new-generation retrieval systems to improve retrieval performance. A knowledge graph abstracts things into entities and establishes relationships among entities, expressed in the form of triples. However, with the expansion of knowledge graphs and the rapid increase in data volume, traditional place retrieval methods on knowledge graphs perform poorly. This paper designs a place retrieval method to improve the efficiency of place retrieval. First, data preprocessing and problem-model building are performed in the offline stage, and a semantic distance index, a spatial quadtree index, and a spatial-semantic hybrid index are built according to semantic and spatial information. In the online retrieval stage, we design an efficient query algorithm and a ranking model based on the index information constructed in the offline stage, aiming to improve the overall performance of the retrieval system. Finally, we use experiments on real data to verify the effectiveness and feasibility of the knowledge-graph-based place retrieval method in terms of retrieval accuracy and retrieval efficiency.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
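A minimal point quadtree of the kind the spatial index above builds on can be sketched as follows: each node covers a square region and splits into four children once it holds more than `cap` points, so range queries only descend into regions that overlap the query window. This is a generic illustrative structure, not the paper's index.

```python
class QuadTree:
    """Minimal point quadtree: node covers [x, x+size) x [y, y+size)."""
    def __init__(self, x, y, size, cap=4):
        self.x, self.y, self.size, self.cap = x, y, size, cap
        self.points, self.children = [], None

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and self.y <= py < self.y + self.size):
            return False                      # point outside this node's region
        if self.children is None:
            if len(self.points) < self.cap:
                self.points.append((px, py))
                return True
            self._split()                     # over capacity: create quadrants
        return any(c.insert(px, py) for c in self.children)

    def _split(self):
        h = self.size / 2
        self.children = [QuadTree(self.x + dx, self.y + dy, h, self.cap)
                         for dx in (0, h) for dy in (0, h)]
        for p in self.points:                 # push existing points down
            any(c.insert(*p) for c in self.children)
        self.points = []

    def query(self, qx, qy, qs):
        """All points inside the square [qx, qx+qs) x [qy, qy+qs)."""
        if (qx >= self.x + self.size or qx + qs <= self.x or
                qy >= self.y + self.size or qy + qs <= self.y):
            return []                         # query window misses this node
        hits = [p for p in self.points if qx <= p[0] < qx + qs and qy <= p[1] < qy + qs]
        if self.children:
            for c in self.children:
                hits += c.query(qx, qy, qs)
        return hits

tree = QuadTree(0, 0, 100)
for p in [(10, 10), (12, 11), (80, 80), (81, 85), (50, 50)]:
    tree.insert(*p)
print(sorted(tree.query(0, 0, 20)))
```

A hybrid spatial-semantic index would additionally attach semantic keys to the stored points so both filters prune the search together.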
  • 19
    Publication Date: 2020-07-01
    Description: The motivation for the research is the need to develop an integrated and holistic approach to fostering students’ scientific inquiry based on scientific programming education by conducting computational experiments and simulations. At the same time, the implementation of the learner-centred approaches to scientific programming education and the related development of science, technology, engineering, and mathematics (STEM) learner-centred educational environment are of primary importance for K-16 education. The key interest is how to design and integrate learning resources which include software learning objects for making simulations. The research investigates educational aspects of the technological, pedagogical, and content knowledge (TPACK) framework applied to scientific computing and scientific programming educational domain and provides methodological guidelines and design principles of practical implementation of educational resources. These include design principles for the development of the model-based scientific inquiry-centred learning resources, generic design templates for designing educational aspects of scientific programming education, generic use case models for learning resources for scientific programming education, and supportive methodological considerations.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 20
    Publication Date: 2020-07-03
    Description: Lysine malonylation is a novel type of protein post-translational modification that plays essential roles in many biological activities. A good knowledge of malonylation sites can provide guidance on many issues, including disease prevention, drug discovery, and other related fields. There are several experimental approaches to identify modification sites in the field of biology; however, these methods tend to be expensive. In this study, we proposed malNet, which employs neural networks and utilizes several novel and effective feature description methods. The ANN's performance proved better than that of the other models. Furthermore, we trained the classifiers according to an original cross-validation method named Split to Equal validation (SEV). The results achieved an AUC value of 0.6684, an accuracy of 54.93%, and an MCC of 0.1045, which showed a great improvement over previous results.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 21
    Publication Date: 2020-04-21
    Description: This paper introduces an image interpolation method that provides performance superior to that of the state-of-the-art algorithms. The simple linear method, if used for interpolation, interpolates at the cost of blurring, jagging, and other artifacts; more complex methods provide better interpolation results but sometimes fail to preserve specific edge patterns or oversmooth the edges due to postprocessing of the initial interpolation. The proposed method uses a new gradient-based approach that makes an intelligent decision on the edge direction using the edge map and gradient map of an image and interpolates unknown pixels in the predicted direction using known intensity pixels. The input image is subjected to efficient hysteresis thresholding-based edge map calculation, followed by interpolation of the low-resolution edge map to obtain a high-resolution edge map. Edge map interpolation is followed by classification of unknown pixels into obvious edges, uniform regions, and transitional edges using the decision support system. Coefficient-based interpolation involving a gradient coefficient and a distance coefficient is applied to obvious edge pixels in the high-resolution image, whereas transitional edges in the neighborhood of an obvious edge are interpolated in the same direction to provide uniform interpolation. Simple line averaging is applied to pixels not detected as edges to decrease the complexity of the proposed method. Applying line averaging to smooth pixels helps control the complexity of the algorithm, whereas applying gradient-based interpolation preserves edges and hence results in better performance at reasonable complexity.
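The direction-aware idea above can be illustrated with a toy sketch: for the centre pixel of a 2x2 block of known pixels, average along the diagonal with the smaller intensity difference, i.e. along the likely edge rather than across it. This is a generic edge-directed rule, not the paper’s full coefficient-based scheme, and the pixel values are illustrative.

```python
def interp_diagonal(tl, tr, bl, br):
    # Average along the diagonal with the smaller intensity difference,
    # i.e. along (not across) the likely edge through the 2x2 block.
    if abs(tl - br) <= abs(tr - bl):
        return (tl + br) / 2.0
    return (tr + bl) / 2.0

# On a diagonal step edge the directional average preserves the edge,
# whereas plain 4-neighbour averaging would give 57.5 (blurred).
print(interp_diagonal(10, 10, 200, 10))  # 10.0
```

The same comparison of directional differences generalizes to more directions when a gradient map is available.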
  • 22
    Publication Date: 2020-05-08
    Description: Accurate evaluation of the risk level and operation performance of P2P online lending platforms is conducive not only to the better functioning of information intermediaries but also to the effective protection of investors’ interests. This paper proposes a genetic algorithm- (GA-) improved hybrid kernel support vector machine (SVM), together with an index system, to construct such an evaluation model. A hybrid kernel consisting of a polynomial function and a radial basis function is optimized, specifically its kernel parameters and the weight between the two kernels, by the GA method, which offers excellent global optimization and rapid convergence. Empirical testing based on cross-sectional data from the Chinese P2P lending market demonstrates the superiority of the improved hybrid kernel SVM model: the classification accuracy for credit risk level and operation quality is higher than that of the single-kernel SVM model as well as the hybrid kernel model with empirical parameter values.
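The hybrid kernel itself is simple to state: a weighted sum of a polynomial kernel and an RBF kernel. The numpy sketch below shows the combination; the weight w, degree, gamma, and coef0 are exactly the quantities such a GA would tune, but the values here are illustrative, not those of the paper.

```python
import numpy as np

def hybrid_kernel(X1, X2, w=0.6, degree=2, gamma=0.5, coef0=1.0):
    # Weighted mix of a polynomial kernel and an RBF kernel; w and the
    # kernel parameters are the GA's decision variables (illustrative here).
    poly = (X1 @ X2.T + coef0) ** degree
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    rbf = np.exp(-gamma * sq)
    return w * poly + (1.0 - w) * rbf

X = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])
K = hybrid_kernel(X, X)
print(K.shape)              # (3, 3)
print(np.allclose(K, K.T))  # True
```

A kernel matrix built this way can be passed to any SVM implementation that accepts precomputed kernels.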
  • 23
    Publication Date: 2020-05-15
    Description: Fingerprint registration and verification is an active area of research in the field of image processing. Usually, fingerprints are obtained from sensors; however, there is recent interest in using images of fingers obtained from digital cameras instead of scanners. An unaddressed issue in processing fingerprints extracted from digital images is the angle of the finger during image capture. For a fingerprint to be matched with full accuracy, the angles of the matching features should be similar. This paper proposes a rotation and scale-invariant decision-making method for the intelligent registration and recognition of fingerprints. A digital image of a finger is taken as the input and compared with a reference image for derotation. Derotation is performed by applying binary segmentation to both images, followed by speeded up robust feature (SURF) extraction and feature matching. Potential inliers are extracted from the matched features by applying the M-estimator. The matched inlier points are used to form a homography matrix, the difference between the rotation angles of the finger in the input and reference images is calculated, and finally, derotation is performed. Input fingerprint features are then extracted and compared or stored, depending on what the decision support system requires for the situation.
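The angle-recovery step described above can be sketched as follows: once a homography has been estimated from the matched inliers, and assuming it is close to a similarity transform (rotation plus scale and translation), the rotation angle falls out of the upper-left 2x2 block. The matrix here is a synthetic example, not data from the paper.

```python
import numpy as np

def rotation_from_homography(H):
    # For a similarity-type transform, the upper-left 2x2 block encodes
    # rotation (and scale); atan2 recovers the angle in degrees.
    return np.degrees(np.arctan2(H[1, 0], H[0, 0]))

theta = np.radians(30.0)
H = np.array([[np.cos(theta), -np.sin(theta),  5.0],
              [np.sin(theta),  np.cos(theta), -3.0],
              [0.0,            0.0,            1.0]])
print(round(rotation_from_homography(H), 6))  # 30.0
```

Derotation then amounts to rotating the input image by the negative of this angle before feature comparison.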
  • 24
    Publication Date: 2020-05-14
    Description: In the framework of coastal groundwater-dependent irrigation agriculture, modelling becomes indispensable to understand how this renewable resource responds to complex (usually neither conceptualized nor monitored) biophysical, social, and economic interactions. User-friendly interfaces are essential to involve nonmodelling experts in exploiting and improving models. Decision support systems (DSS) are software systems that integrate models, databases, or other decision aids and package them in a way that decision makers can use. This paper addresses these two issues: firstly with the implementation of a System Dynamics (SD) model in Vensim software that integrates hydrological, agronomic, and economic drivers, and secondly with the design of a Venapp, a push-button interface that gives users access to a Vensim model without going through the Vensim modelling environment. The prototype designed, the AQUACOAST tool, illustrates the potential of this type of model to identify and analyze the impact of apparently unrelated factors, such as the prices of cultivated products, subsidies, or exploitation costs, on the advance of saltwater intrusion, the great threat to coastal groundwater-dependent irrigation agriculture systems.
  • 25
    Publication Date: 2020-05-11
    Description: With the rapid development of Internet technology, the live broadcast industry has flourished. However, on public live broadcast platforms, security issues have become increasingly prominent. The detection of suspected pornographic videos on live broadcast platforms is still at the manual stage, that is, supervision by administrators and user reports. At present, there are many online live broadcast platforms in China, and on mainstream platforms the number of simultaneous broadcasters can exceed 100,000. Manual detection alone suffers from low efficiency, poor pertinence, and slow progress, and is clearly not up to the task of real-time network supervision. To identify whether Internet live broadcasts contain pornographic content, a deep neural network model based on residual networks (ResNet-50) is proposed to detect pictures and videos on live broadcast platforms. The core idea is to classify each image in the video into two categories: (1) pass and (2) violation. Experiments verify that the proposed network improves the efficiency of pornographic detection in webcasts. The proposed method improves detection accuracy on the one hand and standardizes the detection indicators on the other; these indicators also help in classifying pornographic videos.
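The per-frame pass/violation outputs still have to be aggregated into a per-stream decision. The abstract does not specify the aggregation rule, so the sketch below shows one plausible option: flag a stream when the fraction of frames classified as violations exceeds a threshold. Both thresholds are illustrative assumptions.

```python
def stream_verdict(frame_probs, frame_thresh=0.5, flag_ratio=0.1):
    # Count frames whose violation probability crosses frame_thresh and
    # flag the stream if they exceed flag_ratio of all sampled frames.
    violations = sum(p >= frame_thresh for p in frame_probs)
    return "violation" if violations / len(frame_probs) > flag_ratio else "pass"

print(stream_verdict([0.1, 0.2, 0.05, 0.9, 0.8]))  # violation
print(stream_verdict([0.1, 0.2, 0.05, 0.3, 0.2]))  # pass
```

A ratio-based rule like this makes the stream-level decision robust to occasional misclassified frames.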
  • 26
    Publication Date: 2020-05-19
    Description: As an important component of universal sign language and the basis of other sign language learning, finger sign language is of great significance. This paper proposes a novel fingerspelling identification method for Chinese Sign Language via AlexNet-based transfer learning and the Adam optimizer, testing four different transfer-learning configurations. In the experiments, the Adam algorithm was compared with stochastic gradient descent with momentum (SGDM) and root mean square propagation (RMSProp), and using data augmentation (DA) was compared against not using DA in pursuit of higher performance. Finally, the best accuracy of 91.48% and an average accuracy of 89.48 ± 1.16% were achieved by configuration M1 (replacing the last FCL8) with the Adam algorithm and 181x DA, indicating that our method can identify Chinese finger sign language effectively and stably. The proposed method is also superior to five other state-of-the-art approaches.
  • 27
    Publication Date: 2020-05-22
    Description: Nowadays, data are flooding into online web forums, and it is highly desirable to turn this gigantic amount of data into actionable knowledge. Online web forums have become an integral part of the web and are a main source of knowledge. People use this platform to post questions and get answers from other forum members. Usually, an initial post (question) receives more than one reply post (answer), which makes it difficult for a user to scan them all for the most relevant, high-quality answer. Thus, automatically extracting the most relevant answer to a question within a thread is an important problem. In this research, we treat answer extraction as a classification problem: a reply post can be classified as relevant, partially relevant, or irrelevant to the initial post. To measure the relevancy/similarity of a reply to the question, both lexical and nonlexical features are used. We propose LinearSVC, a variant of the support vector machine (SVM), for answer classification. Two feature selection techniques, chi-square and univariate selection, are employed to reduce the size of the feature space. The experimental results show that the LinearSVC classifier outperformed the other state-of-the-art classifiers in classification accuracy on both the Ubuntu and TripAdvisor (NYC) discussion forum datasets.
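The chi-square selection step mentioned above scores each (nonnegative) feature by how far its per-class distribution departs from the class-independent expectation; the top-scoring features are kept. A minimal numpy version of that scorer, with a toy dataset, might look like this:

```python
import numpy as np

def chi2_scores(X, y):
    # Chi-square relevance score per feature for nonnegative feature
    # counts X (n_samples, n_features) and class labels y.
    classes = np.unique(y)
    Y = np.array([(y == c).astype(float) for c in classes])   # (k, n)
    observed = Y @ X                                          # per-class feature mass
    expected = Y.mean(axis=1)[:, None] * X.sum(axis=0)[None, :]
    return ((observed - expected) ** 2 / expected).sum(axis=0)

# Feature 0 tracks the class perfectly; feature 1 is uniform noise.
X = np.array([[3., 1.], [4., 1.], [0., 1.], [0., 1.]])
y = np.array([1, 1, 0, 0])
s = chi2_scores(X, y)
print(s[0] > s[1])  # True
```

Selecting the k highest-scoring columns then shrinks the feature space before the SVM is trained.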
  • 28
    Publication Date: 2020-05-20
    Description: The automatic detection of decayed blueberries remains a challenge in the food industry. Early decay appears on the surface peel, which makes hyperspectral imaging a feasible way to detect decayed regions of blueberries. An improved deep residual 3D convolutional neural network (3D-CNN) framework is proposed for hyperspectral image classification to enable fast training, classification, and parameter optimization. Rich spectral and spatial features can be rapidly extracted from complete hyperspectral image samples using the proposed network, which adaptively selects hyperparameters with the tree-structured Parzen estimator (TPE) to optimize network performance. In addition, to address the problem of few samples, this paper proposes a novel strategy to augment the hyperspectral image data, which improves training. Experimental results on standard hyperspectral blueberry datasets show that the proposed framework improves classification accuracy compared with AlexNet and GoogleNet, while halving the number of parameters and reducing training time by about 10%.
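Before a 3D-CNN can extract joint spectral-spatial features, the hypercube is typically cut into small (bands, size, size) patches, one per labeled pixel. A minimal patch extractor under that common convention (the paper’s exact patch size and sampling are not given; the sizes here are illustrative):

```python
import numpy as np

def extract_patches(cube, size=3):
    # Cut (bands, size, size) spatial patches around every interior pixel
    # of a hyperspectral cube shaped (bands, rows, cols).
    bands, rows, cols = cube.shape
    r = size // 2
    patches = [cube[:, i - r:i + r + 1, j - r:j + r + 1]
               for i in range(r, rows - r)
               for j in range(r, cols - r)]
    return np.stack(patches)

cube = np.random.rand(5, 6, 7)   # 5 bands, 6x7 pixels
P = extract_patches(cube)
print(P.shape)  # (20, 5, 3, 3)
```

Each patch becomes one training sample, carrying the full spectrum of the centre pixel plus its spatial neighborhood.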
  • 29
    Publication Date: 2020-05-20
    Description: The performance of face detection and recognition suffers because occlusion often leads to missed detections. To reduce the accuracy loss caused by facial occlusion and enhance the accuracy of face detection, a visual attention mechanism guidance model is proposed in this paper, which uses the visual attention mechanism to guide the model to highlight the visible area of an occluded face. The face detection problem is simplified into a high-level semantic feature detection problem through an improved analytical network, and the location and scale of the face are predicted from the activation map, avoiding additional parameter settings. Extensive simulation experiments show that the proposed method surpasses the comparison algorithms in the accuracy of occluded face detection and recognition on the face database. In addition, the proposed method achieves a better balance between detection accuracy and speed and can be used in the field of security surveillance.
  • 30
    Publication Date: 2020-05-20
    Description: In this paper, a bike repositioning problem with stochastic demand is studied. The problem is formulated as a two-stage stochastic programming model to optimize the routing and loading/unloading decisions of the repositioning truck at each station and depot under stochastic demand. The goal of the model is to minimize the expected sum of the transportation costs, the penalty costs at all stations, and the holding cost of the depot. A simulated annealing algorithm is developed to solve the model. Numerical experiments are conducted on a set of instances with 20 to 90 stations to demonstrate the effectiveness of the solution algorithm and the accuracy of the proposed two-stage stochastic model.
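The simulated annealing component can be sketched on the routing part alone: start from an arbitrary visit order of stations (depot fixed as node 0), propose swaps, accept worse tours with a temperature-controlled probability, and cool. This toy version minimizes only travel distance; the paper’s objective additionally includes expected penalty and holding costs, and all parameters below are illustrative.

```python
import math, random

def route_cost(order, dist):
    # Cost of leaving depot 0, visiting stations in the given order,
    # and returning to the depot.
    tour = [0] + list(order) + [0]
    return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

def anneal(dist, n, t0=10.0, cooling=0.995, iters=3000, seed=1):
    rng = random.Random(seed)
    cur = list(range(1, n))
    cur_c = route_cost(cur, dist)
    best, best_c = cur[:], cur_c
    t = t0
    for _ in range(iters):
        i, j = rng.sample(range(len(cur)), 2)      # propose a swap
        cand = cur[:]
        cand[i], cand[j] = cand[j], cand[i]
        c = route_cost(cand, dist)
        # Accept improvements always, worse moves with Boltzmann probability.
        if c < cur_c or rng.random() < math.exp((cur_c - c) / t):
            cur, cur_c = cand, c
            if cur_c < best_c:
                best, best_c = cur[:], cur_c
        t *= cooling
    return best, best_c

rng = random.Random(0)
n = 8
pts = [(rng.random(), rng.random()) for _ in range(n)]
dist = [[math.dist(a, b) for b in pts] for a in pts]
order, cost = anneal(dist, n)
print(cost <= route_cost(list(range(1, n)), dist))  # True
```

Extending the cost function with scenario-averaged penalty terms turns this routing sketch into the two-stage objective.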
  • 31
    Publication Date: 2020-05-26
    Description: The growth rate of the copper mining industry in Chile requires higher consumption of water, a resource limited in quality and quantity and a major present-day concern. In addition, efficient use of water is restricted by high levels of evaporation (10 to 15 l/m2 per day), in particular at the northern highland mining sites of Chile. Meanwhile, tailings are mainly disposed of in ponds, which lose water by evaporation and in some cases by percolation. An alternative is paste thickeners, which generate a stable paste (70% solids), reducing evaporation and percolation and therefore reducing make-up water. Water is in ever greater demand as industries expand, making water recovery processes a necessity rather than a simple efficiency upgrade. Paste-thickener technology was developed in Canada in the early 80s and has been widely used in Australia (arid zones with weather conditions similar to Chile's), although few plants in Chile use it; Minera El Tesoro is one example, and the tendency in the near future is to move from open ponds to paste thickeners. This scenario requires developing technical capacity in both paste flow characterization and rheology modifiers (fluidity enhancers) to make the final disposal of this paste possible. In this context, a new technique is introduced and experimental results of fluidity modifiers are discussed. The study describes how water content affects the flow behavior and depositional geometry of tailings and silica flour pastes. The depositional angle is determined from flume tests, and the yield stress from slump tests and a rheological model; both techniques incorporate digital video and image analysis. The results indicate that the new technique can be used to determine the proper solid content and modifiers for a given fluidity requirement. In addition, the experimental results show that pH strongly controls the flow behavior of the paste.
  • 32
    Publication Date: 2020-04-30
    Description: To reduce costs and improve organizational efficiency, the adoption of innovative services such as Cloud services is the current trend in today’s highly competitive global business environment. The aim of the study is to guide software development organizations (SDOs) in Cloud-based testing (CBT) adoption. To achieve this aim, the study first explores the determinants and predictors of Cloud adoption for software testing. Grounded in the collected data, it designs a technology acceptance model using a fuzzy multicriteria decision-making (FMCDM) approach. For the stated model development, a list of predictors (main criteria) and factors (subcriteria) is identified using a systematic literature review (SLR). From the SLR, seventy subcriteria, also known as influential factors (IFs), are identified from a sample of 136 papers. To provide a concise understanding of the facts, the identified factors are classified into ten predictors. To verify the SLR results and to rank the factors and predictors, an empirical survey was conducted with ninety-five experts from twenty different countries. The industrial and academic contribution of the present study is the development of a general framework incorporating fuzzy set theory for improving MCDM models. The model can be applied to predict an organization’s Cloud adoption possibility, taking the various IFs and predictors as assessment criteria. The developed model has two main parts, ranking and rating: the ranking part measures the success or failure contribution of individual IFs towards successful CBT adoption, while the rating part supports a complete organizational assessment to identify weak areas for possible improvement. Collectively, it can be used as a decision support system to gauge SDO readiness for successful CBT adoption.
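A common building block in fuzzy MCDM models of this kind is representing criterion weights as triangular fuzzy numbers and defuzzifying them (e.g. by centroid) before aggregating crisp ratings. The sketch below shows that generic step only; the weights, ratings, and the centroid rule are illustrative assumptions, not the paper’s elicited values or exact aggregation.

```python
def defuzzify(tfn):
    # Centroid of a triangular fuzzy number (l, m, u).
    l, m, u = tfn
    return (l + m + u) / 3.0

def fuzzy_weighted_score(weights, ratings):
    # Normalized weighted sum of crisp ratings under defuzzified weights.
    w = [defuzzify(t) for t in weights]
    return sum(wi * ri for wi, ri in zip(w, ratings)) / sum(w)

weights = [(0.5, 0.7, 0.9), (0.2, 0.4, 0.6)]   # two predictors (toy values)
ratings = [4.0, 2.0]                           # organizational assessment scores
print(round(fuzzy_weighted_score(weights, ratings), 4))  # 3.2727
```

Higher-weighted predictors pull the readiness score toward their ratings, which is exactly what the ranking part of such a model exposes.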
  • 33
    Publication Date: 2020-06-29
    Description: Desertification is a major global environmental issue exacerbated by climate change. Strategies to combat desertification include prevention, which seeks to reverse the process before the system reaches the stable desertified state; one such initiative is the implementation of early warning tools. This paper presents SAT (the Spanish acronym for Early Warning System), a decision support system (DSS) for assessing the risk of desertification in Spain, where 20% of the land has already been desertified and 1% is in active degradation. SAT relies on three versions of a Generic Desertification Model (GDM) that integrates economics and ecology under the predator-prey paradigm. The models have been programmed using Vensim, a type of software used to build and simulate System Dynamics (SD) models. Through Visual Basic programming, these models are operated from the Excel environment. In addition to the basic simulation exercises, specially designed tools have been coupled to assess the risk of desertification and rank the most influential factors of the process. The users targeted by SAT are government land-use planners as well as desertification experts. The SAT tool is implemented for five case studies, each representing a desertification syndrome identified in Spain. Given the general nature of the tool and the fact that all United Nations Convention to Combat Desertification (UNCCD) signatory countries are committed to developing their National Plans to Combat Desertification (NPCD), SAT could be exported to regions threatened by desertification and expanded to cover more case studies.
  • 34
    Publication Date: 2020-06-29
    Description: Recently, biometric authorization using fingerprints, voiceprints, and facial features has garnered considerable public attention with the development of recognition techniques and the popularization of the smartphone. Among these biometrics, the voiceprint identifies a person as reliably as a fingerprint while, like face recognition, operating in a noncontact mode. Speech signal processing is one of the keys to accurate voice recognition. Most voice-identification systems still employ the mel-scale frequency cepstrum coefficient (MFCC) as the key vocal feature, but the quality and accuracy of the MFCC depend on a prepared phrase, which makes the approach text-dependent speaker identification. In contrast, several newer features, such as the d-vector, treat vocal feature learning as a black-box process. To address these aspects, a novel data-driven approach for vocal feature extraction based on a decision support system (DSS) is proposed in this study. Each speech signal can be transformed into a vector representing the vocal features using this DSS. Establishing the DSS involves three steps: (i) voice data preprocessing, (ii) hierarchical cluster analysis of the inverse discrete cosine transform cepstrum coefficients, and (iii) learning the E-vector through minimization of the Euclidean metric. We conduct comparative experiments to verify the E-vectors extracted by this DSS against other vocal feature measures, applying them to both text-dependent and text-independent datasets. In experiments containing one utterance per speaker, the average accuracy of the E-vector improves on the MFCC by approximately 1.5%; in experiments containing multiple utterances per speaker, the average micro-F1 score of the E-vector improves on the MFCC by approximately 2.1%. The E-vector shows remarkable advantages on both the Texas Instruments/Massachusetts Institute of Technology corpus and the LibriSpeech corpus. These improvements contribute to speaker identification capabilities and enhance usability for real-world identification tasks.
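Once each utterance is embedded as a vector, identification under the Euclidean metric reduces to nearest-neighbor matching against enrolled speakers. A minimal sketch of that matching rule, with toy three-dimensional vectors standing in for E-vectors:

```python
import numpy as np

def identify(query, enrolled):
    # Return the enrolled speaker whose vocal-feature vector lies
    # nearest to the query under the Euclidean metric.
    names = list(enrolled)
    d = [np.linalg.norm(query - enrolled[n]) for n in names]
    return names[int(np.argmin(d))]

enrolled = {"alice": np.array([0.9, 0.1, 0.0]),
            "bob":   np.array([0.0, 0.8, 0.6])}
print(identify(np.array([0.85, 0.15, 0.05]), enrolled))  # alice
```

With multiple utterances per speaker, averaging the enrolled vectors into a centroid is a common refinement of the same rule.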
  • 35
    Publication Date: 2020-06-29
    Description: In the 21st century, transportation has brought great convenience to people, but automobile transportation is also a major cause of greenhouse gas emissions and climate change. As the world moves towards greener environments, the use and production of electric vehicles (new energy vehicles) has surged. However, with the continuous growth in the number of such vehicles, the government must provide strong support for the construction of charging piles, and real-time, effective management has become a practical problem that the relevant departments need to solve. This paper uses an information research method to fuse the huge amount of heterogeneous data that charging piles generate for new energy electric vehicles in the vehicle network and introduces cloud computing as the storage module to facilitate the storage and expansion of the big data. It proposes a cloud-computing-based heterogeneous data fusion scheme for the acquisition, storage, and fusion of heterogeneous data in the vehicle network. Testing shows that the system is stable and effective in practical application and meets the design requirements. Why analyze charging pile big data? From the supply side, obtaining users’ charging behaviour data helps build a digital map of new energy vehicle charging piles, connect service information between vehicle enterprises and charging pile enterprises, and provide the most comprehensive and effective real-time charging information covering the widest range of vehicles, which can resolve many problems of information asymmetry in current charging information services.
  • 36
    Publication Date: 2020-06-29
    Description: Traditionally, High-Level Synthesis (HLS) for Field Programmable Gate Array (FPGA) devices is a methodology that transforms a behavioral description, as the timing-independent specification, to an abstraction level that is synthesizable, like the Register Transfer Level. This process can be performed under a framework known as Design Space Exploration (DSE), which helps to determine the best design by addressing scheduling, allocation, and binding, all three of which are NP-hard problems. In this manner, and due to the increased complexity of modern digital circuit designs and concerns regarding the capacity of FPGAs, designers are proposing novel HLS techniques capable of performing automatic optimization. HLS has several conflicting metrics or objective functions, such as delay, area, power, wire length, digital noise, reliability, and security. For this reason, it is suitable to apply Multiobjective Optimization Algorithms (MOAs), which can handle the different trade-offs among the objective functions. During the last two decades, several MOAs have been applied to solve this problem. This paper introduces a comprehensive analysis of different MOAs suitable for performing HLS for FPGA devices. We highlight significant aspects of MOAs, namely, optimization methods, intermediate structures where the optimizations are performed, HLS techniques that are addressed, and benchmarks and performance assessments employed for experimentation. In addition, we analyze how the algorithms currently optimize multiple objectives and which objective functions they optimize. Finally, we provide insights and suggestions to contribute to the solution of major research challenges in this area.
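The core notion behind all such MOAs is Pareto dominance: a design is kept only if no other design is at least as good in every objective. A minimal non-dominated filter over toy (delay, area) pairs, both minimized, illustrates the idea (the design points are invented):

```python
def pareto_front(points):
    # Keep points not weakly dominated by any distinct point,
    # minimizing both objectives.
    front = []
    for p in points:
        if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            front.append(p)
    return front

designs = [(3, 9), (5, 4), (4, 6), (6, 8), (7, 3)]  # (delay, area)
print(pareto_front(designs))  # [(3, 9), (5, 4), (4, 6), (7, 3)]
```

Here (6, 8) is dropped because (5, 4) beats it in both delay and area; the survivors form the trade-off curve a designer chooses from.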
  • 37
    Publication Date: 2020-06-29
    Description: Uncertainty, complexity, and behavioral preferences are pervasive in real-world decision-making problems. In this paper, departing from previous grey target decision methods, we propose a novel grey target group decision method that considers the decision-maker’s loss aversion, with positive and negative clouts, under a dual hesitant fuzzy environment. Firstly, defining the dual hesitant fuzzy ideal optimal scheme as the positive clout and the ideal inferior scheme as the negative clout, positive and negative target-eye distances are measured by the normalized Hamming distances from the DHFEs to the positive and negative clouts. A new comprehensive target-eye distance is then proposed to evaluate alternatives between the two clouts, and a nonlinear optimization model is established to obtain the optimal initial attribute weights by minimizing the comprehensive target-eye distance. On this basis, a grey target group decision method with dual hesitant fuzzy information that accounts for the decision-maker’s loss aversion and variable weights is proposed. Finally, a numerical example verifies the effectiveness and practicality of the proposed model and method.
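The distance machinery can be sketched in a simplified form: treat each alternative as a vector of membership values, measure normalized Hamming distances to the positive and negative clouts, and combine them so that smaller means closer to the ideal. The relative-closeness combination and all membership values below are illustrative; the paper's comprehensive target-eye distance and full dual hesitant fuzzy structure are richer than this.

```python
def hamming(a, b):
    # Normalized Hamming distance between two membership-value vectors.
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def comprehensive_distance(alt, pos, neg):
    # Combine distances to the positive and negative clouts; smaller is
    # better (near the ideal scheme, far from the inferior one).
    dp, dn = hamming(alt, pos), hamming(alt, neg)
    return dp / (dp + dn)

pos = [0.9, 0.8, 0.9]   # positive clout (ideal memberships, toy values)
neg = [0.1, 0.2, 0.1]   # negative clout
a1 = [0.8, 0.7, 0.8]
a2 = [0.3, 0.3, 0.2]
print(comprehensive_distance(a1, pos, neg) < comprehensive_distance(a2, pos, neg))  # True
```

Ranking alternatives by this combined distance is what the attribute-weight optimization then refines.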
  • 38
    Publication Date: 2020-06-29
    Description: Global warming associated with greenhouse emissions will modify the availability of water resources in the future. Methodologies and tools to assess the impacts of climate change are useful for policy making. In this work, a new tool has been developed to generate potential future climate scenarios in a water resources system from historical information and regional climate models. The GROUNDS tool generates future series of precipitation, temperature (minimum, mean, and maximum), and potential evapotranspiration. It is a valuable tool for assessing the impacts of climate change in hydrological applications, since these variables play a significant role in the water cycle, and it can be applied to any case study. The tool uses different approaches and statistical correction techniques to generate individual local projections and ensembles of them. The non-equifeasible ensembles are created by combining the individual projections whose control or corrected control simulation better fits the historical series in terms of basic and drought statistics. In this work, the tool is presented, the methodology implemented is described, and the tool is applied to a case study to illustrate how it works. The tool was previously tested on different typologies of water resources systems covering different spatial scales (river basin, aquifer, mountain range, and country), obtaining satisfactory results. The local future scenarios can be propagated through appropriate hydrological models to study the impacts on other variables (e.g., aquifer recharge, chloride concentration in coastal aquifers, streamflow, snow cover area, and snow depth). The tool is also useful in quantifying the uncertainties of the future scenarios by combining them with stochastic weather generators.
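One of the simplest statistical correction techniques of the kind the tool applies is linear scaling: multiply the simulated future series by the ratio of observed to control-period means. The sketch below shows that generic technique with invented monthly precipitation values; it is not necessarily among the specific correction methods GROUNDS implements.

```python
def linear_scaling(sim_future, sim_control, obs):
    # Scale the future simulated series by the ratio of the observed mean
    # to the control-period simulated mean (multiplicative bias correction).
    factor = (sum(obs) / len(obs)) / (sum(sim_control) / len(sim_control))
    return [v * factor for v in sim_future]

obs = [50.0, 60.0, 70.0]    # observed precipitation (toy values)
ctrl = [40.0, 50.0, 60.0]   # model output over the control period
fut = [45.0, 55.0, 65.0]    # raw model projection
corr = linear_scaling(fut, ctrl, obs)
print(round(sum(corr) / len(corr), 3))  # 66.0
```

By construction the corrected control series would reproduce the observed mean, which is the fit criterion the ensembles are built on.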
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 39
    Publication Date: 2020-05-18
    Description: Currently, data classification is one of the most important ways to analyze data. However, along with the development of data collection, transmission, and storage technologies, the scale of data has increased sharply. Additionally, due to multiple classes and imbalanced data distributions in datasets, the class imbalance issue has gradually become prominent. Traditional machine learning algorithms lack the ability to handle these issues, so classification efficiency and precision may be significantly impacted. Therefore, this paper presents an improved artificial neural network enabling high-performance classification of imbalanced, large-volume data. Firstly, the Borderline-SMOTE (synthetic minority oversampling technique) algorithm is employed to balance the training dataset, which aims to improve the training of the back-propagation neural network (BPNN); then, zero-mean normalization, batch normalization, and the rectified linear unit (ReLU) are employed to optimize the input and hidden layers of the BPNN. Finally, an ensemble learning-based parallelization of the improved BPNN is implemented using the Hadoop framework. Positive conclusions can be drawn from the experimental results. Benefitting from Borderline-SMOTE, the imbalanced training dataset can be balanced, which improves the training performance and the classification accuracy. The improvements to the input and hidden layers also enhance training in terms of convergence. The parallelization and ensemble learning techniques enable the BPNN to perform high-performance large-scale data classification. The experimental results show the effectiveness of the presented classification algorithm.
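The interpolation idea at the heart of SMOTE-style oversampling can be sketched in a few lines: each synthetic minority sample lies on the line segment between two existing minority samples. Note that the full Borderline-SMOTE algorithm additionally restricts interpolation to borderline minority samples near the class boundary; that selection step is omitted here, and the data are invented.

```python
import random

# SMOTE-style oversampling sketch: synthetic minority samples are drawn on
# line segments between pairs of existing minority samples.

def smote_interpolate(minority, n_synthetic, seed=0):
    """Create synthetic samples between randomly chosen minority pairs."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_synthetic):
        a, b = rng.sample(minority, 2)   # pick two distinct minority samples
        gap = rng.random()               # interpolation factor in [0, 1)
        synthetic.append([ai + gap * (bi - ai) for ai, bi in zip(a, b)])
    return synthetic

minority = [[1.0, 1.0], [2.0, 2.0], [1.5, 2.5]]
new_samples = smote_interpolate(minority, n_synthetic=4)
```

Because every synthetic point is a convex combination of two real minority points, it stays inside the convex hull of the minority class rather than drifting into the majority region.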
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 40
    Publication Date: 2020-06-29
    Description: Within the sentiment classification field, the convolutional neural network (CNN) and long short-term memory (LSTM) are praised for their classification and prediction performance, but their accuracy, loss rate, and training time are not ideal. To this end, a deep learning structure combining an improved cross entropy and word weighting is proposed for cross-domain sentiment classification, which focuses on achieving better text sentiment classification by optimizing and improving the recurrent neural network (RNN) and CNN. Firstly, we use the ideas of the hinge loss and triplet loss functions to improve the cross entropy loss. The improved cross entropy loss function is combined with the CNN model and the LSTM network, which are tested on two classification problems. Then, the LSTM binary-optimize (LSTM-BO) and CNN binary-optimize (CNN-BO) models are proposed, which are more effective at fitting prediction errors and preventing overfitting. Finally, considering how recurrent neural networks process text, the influence of the input words on the final classification is analysed, which yields the importance of each word to the classification result. The experimental results show that, within the same time, the proposed weight-recurrent neural network (W-RNN) model gives higher weight to words with stronger emotional tendency to reduce the loss of emotional information, which improves the accuracy of classification.
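One way to blend a hinge-style margin term into the cross entropy, in the spirit the abstract describes, is to add a penalty that is active only while the true-class probability has not yet cleared a margin over the other class. The exact combination, margin, and weight used by the authors may differ; this is a hedged binary-classification sketch with invented constants.

```python
import math

# Margin-augmented cross entropy sketch for binary classification: standard
# cross entropy plus a hinge-style penalty that vanishes once the predicted
# margin between the two classes exceeds `margin`.

def margin_cross_entropy(p_true, margin=0.2, weight=0.5):
    ce = -math.log(p_true)
    # margin over the competing class probability (1 - p_true)
    hinge = max(0.0, margin - (p_true - (1.0 - p_true)))
    return ce + weight * hinge

confident = margin_cross_entropy(0.9)   # well past the margin: hinge inactive
uncertain = margin_cross_entropy(0.55)  # inside the margin band: extra penalty
```

The hinge term pushes borderline predictions away from the decision boundary, which is the overfitting-resistance effect the LSTM-BO/CNN-BO models aim for.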
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 41
    Publication Date: 2020-07-16
    Description: In recent years, the subject of Golgi classification has been studied intensively. It has been scientifically proven that the Golgi apparatus can synthesize many substances, such as polysaccharides, and it can also combine proteins with sugars or lipids to form glycoproteins and lipoproteins. In some cells (such as liver cells), the Golgi apparatus is also involved in the synthesis and secretion of lipoproteins. Therefore, the loss of Golgi protein function may have severe effects on the human body; for example, Alzheimer’s disease and diabetes are related to the loss of Golgi protein function. Because the classification of Golgi proteins has a specific effect on the treatment of these diseases, many scholars have studied it, but the datasets they used were complete Golgi sequences. The focus of this article is whether there is redundancy in Golgi protein classification or, in other words, whether a part of the entire Golgi protein sequence suffices to complete the classification. Besides, we have adopted a new method to deal with the problem of sample imbalance. Experiments show that our model performs observably well.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 42
    Publication Date: 2020-07-16
    Description: Offloading computation from mobile devices to remote cloud servers is a promising way to reduce energy consumption and improve the performance of mobile applications. However, a great challenge arises in automatically integrating the powerful computing resources of remote cloud infrastructure while preserving the portability of mobile devices. In this paper, we develop a Java annotation-based offloading framework, called MCAF, for Android mobile devices. This framework is designed and committed to simplifying the development of Android applications enabled with the offload capability. All the developers need to do is import the SDK library of our MCAF and annotate the computation-intensive methods. MCAF can automatically extract the annotated source code and generate the code that will be run in the cloud. Moreover, the code making the offloading decisions is automatically inserted into the original source code. We also conducted real experiments to show the applicability of our MCAF.
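MCAF itself is a Java framework, so the following is only a language-neutral analogy, not MCAF's actual API: a Python decorator plays the role of an offload-marking annotation (the `offloadable` name and threshold rule are hypothetical), and the wrapper it generates is where a real framework would insert the offloading decision around the original call.

```python
# Analogy to an annotation-based offloading marker: the decorator records
# whether a call would run locally or be shipped to a remote server.
# All names and the size-based decision rule are invented for illustration.

def offloadable(threshold):
    """Mark a function as offload-capable; 'offload' large inputs."""
    def decorator(func):
        def wrapper(data):
            if len(data) > threshold:
                # A real framework would serialize the call and send it to a
                # cloud server here; this sketch only records the decision.
                wrapper.last_decision = "remote"
            else:
                wrapper.last_decision = "local"
            return func(data)
        return wrapper
    return decorator

@offloadable(threshold=3)
def heavy_sum(data):
    return sum(x * x for x in data)

small = heavy_sum([1, 2])          # stays local
large = heavy_sum([1, 2, 3, 4])    # would be offloaded
```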
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 43
    Publication Date: 2020-07-15
    Description: Based on the classical MapReduce concept, we propose an extended MapReduce scheduling model. In the extended MapReduce scheduling problem, we assume that each job contains an open-map task (the map task can be divided into multiple non-parallel operations) and series-reduce tasks (each reduce task consists of only one operation). Different from the classical MapReduce scheduling problem, we also assume that none of the operations can be processed in parallel and that the machines are unrelated. To solve the extended MapReduce scheduling problem, we establish a mixed-integer programming model with the minimum makespan as the objective function. We then propose a genetic algorithm, a simulated annealing algorithm, and an L-F algorithm to solve this problem. Numerical experiments show that the L-F algorithm performs best on this problem.
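The scheduling setting above can be made concrete with a greedy list-scheduling sketch: each job is a chain of operations (map operations followed by reduce operations, none in parallel), and processing times depend on which of the unrelated machines runs the operation. This heuristic is purely illustrative and is not the paper's genetic, simulated annealing, or L-F algorithm.

```python
# Greedy makespan sketch for the extended model: every operation of a job runs
# sequentially, and each operation's time depends on the (unrelated) machine.

def greedy_schedule(jobs, n_machines):
    """jobs: list of jobs; each job is a chain of operations; each operation
    is a list of processing times, one per machine. Returns the makespan."""
    machine_free = [0.0] * n_machines
    makespan = 0.0
    for job in jobs:
        job_ready = 0.0  # operations of a job form a precedence chain
        for op_times in job:
            # choose the machine giving the earliest completion time
            best = min(range(n_machines),
                       key=lambda m: max(machine_free[m], job_ready) + op_times[m])
            finish = max(machine_free[best], job_ready) + op_times[best]
            machine_free[best] = finish
            job_ready = finish
            makespan = max(makespan, finish)
    return makespan

# Two jobs on two unrelated machines: [time on machine 0, time on machine 1].
jobs = [[[3.0, 5.0], [2.0, 1.0]],   # map op then reduce op of job 1
        [[4.0, 2.0], [3.0, 3.0]]]   # map op then reduce op of job 2
span = greedy_schedule(jobs, n_machines=2)
```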
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 44
    Publication Date: 2020-07-14
    Description: In recent years, vast amounts of source code have been produced daily in different languages all over the world. A deep neural network-based intelligent support model for source code completion would be a great advantage in software engineering and programming education. Vast numbers of syntax, logical, and other critical errors that cannot be detected by normal compilers continue to exist in source code, so the development of an intelligent evaluation methodology that does not rely on manual compilation has become essential. Even experienced programmers often find it necessary to analyze an entire program to find a single error and are thus forced to waste valuable time debugging their source code. With this in mind, we proposed an intelligent model based on long short-term memory (LSTM) combined with an attention mechanism for source code completion. The proposed model can detect source code errors with their locations and then predict the correct words; in addition, it can classify source code as erroneous or not. We trained our proposed model on source code and then evaluated its performance. All of the data used in our experiments were extracted from the Aizu Online Judge (AOJ) system. The experimental results show that the error detection and prediction accuracy of our proposed model is approximately 62% and its source code classification accuracy is approximately 96%, outperforming a standard LSTM and other state-of-the-art models. Moreover, in comparison to state-of-the-art models, our proposed model achieved a notable level of success in error detection, prediction, and classification when applied to long source code sequences. Overall, these experimental results indicate the usefulness of our proposed model in software engineering and programming education.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 45
    Publication Date: 2020-07-14
    Description: In order to identify the modal parameters of time-invariant three-dimensional engineering structures with damping and small nonlinearity, a novel isometric feature mapping (Isomap)-based three-dimensional operational modal analysis (OMA) method is proposed in this paper to extract nonlinear features. In this Isomap-based OMA method, a low-dimensional embedding matrix is multiplied by a transformation matrix to obtain the original matrix. We find correspondences between the low-dimensional embedding matrix and the modal coordinate response, and between the transformation matrix and the mode shapes. From the low-dimensional embedding matrix, the natural frequencies can be determined using a Fourier transform, and the damping ratios can be identified by the random decrement technique or the natural excitation technique. The mode shapes can be estimated from the Moore–Penrose inverse of the low-dimensional embedding matrix. We also discuss the effects of different parameters (i.e., the number of neighbors and matrix assembly) on the results of modal parameter identification. Modal identification results from numerical simulations of the vibration response signals of a cylindrical shell under white noise excitation demonstrate that the proposed method can identify the mode shapes, natural frequencies, and damping ratios of three-dimensional structures under operational conditions from the vibration response signals alone.
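The natural-frequency step the abstract mentions (applying a Fourier transform to a modal coordinate response) can be sketched with a naive DFT peak search, so no third-party libraries are needed. The signal below is a synthetic single-mode response invented for the sketch.

```python
import math

# Identify a natural frequency from a response signal via a naive DFT:
# pick the sub-Nyquist frequency bin with the largest magnitude.

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the largest DFT magnitude below Nyquist."""
    n = len(signal)
    best_bin, best_mag = 0, -1.0
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_bin, best_mag = k, mag
    return best_bin * sample_rate / n

# A synthetic 5 Hz "modal coordinate response" sampled at 100 Hz for 1 second.
fs = 100
response = [math.sin(2 * math.pi * 5.0 * t / fs) for t in range(fs)]
freq = dominant_frequency(response, fs)
```

In a real OMA workflow each column of the low-dimensional embedding matrix would be analysed this way, with windowing/averaging to handle noise.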
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 46
    Publication Date: 2020-07-14
    Description: One of the major challenges in the formal verification of embedded system software is the complexity and substantially large size of the implementation. The problem becomes crucial when the embedded system is a complex medical device executing convoluted algorithms. In refinement-based verification, both the specification and the implementation are expressed as transition systems. Each behavior of the implementation transition system is matched to the specification transition system with the help of a refinement map. The refinement map projects only those values from the implementation that are responsible for labeling the current state of the system. When the refinement map is applied at the object-code level, numerous instructions map to a single state in the specification transition system; these are called stuttering instructions. We use the concept of Static Stuttering Abstraction (SSA), which filters the common segments of stuttering instructions and replaces each segment with a merger. The SSA algorithm reduces the implementation state space in embedded software, subsequently decreasing the effort involved in manual verification with WEB refinement. The algorithm is formally proven correct. SSA is implemented on the pacemaker object code to evaluate the effectiveness of abstracted code in the verification process. The results establish that, despite the code size reduction, bugs and errors can still be found. We implemented the SSA technique on two different platforms, and it consistently decreased the code size significantly, and hence the complexity of the implementation transition system. The results illustrate a considerable reduction in the time and effort required for the verification of a complex software controller, i.e., a pacemaker, when statically stuttering-abstracted code is employed.
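The stuttering idea above can be sketched as a run-length collapse: consecutive object-code instructions that map (via the refinement map) to the same specification state form a stuttering segment, and the abstraction keeps one representative per segment. The instruction/state pairs below are invented; the real SSA algorithm works on actual object code.

```python
# Simplified stuttering-collapse sketch: keep one instruction per maximal run
# of identical specification states. Instructions and states are invented.

def collapse_stuttering(trace):
    """trace: list of (instruction, spec_state) pairs."""
    abstracted = []
    last_state = object()  # sentinel that matches no real state
    for instruction, state in trace:
        if state != last_state:
            abstracted.append((instruction, state))
            last_state = state
    return abstracted

trace = [("load r1", "S0"), ("add r1,4", "S0"), ("store r1", "S0"),
         ("cmp r1,0", "S1"), ("jmp loop", "S1"), ("pace", "S2")]
short_trace = collapse_stuttering(trace)
```

Six instructions reduce to three, one per specification state, which is exactly the kind of state-space shrinkage that eases WEB-refinement proofs.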
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 47
    Publication Date: 2020-07-02
    Description: Face detection and alignment in unconstrained environments are often deployed on edge devices with limited memory and low computing power. This paper proposes a one-stage method named CenterFace to simultaneously predict the facial box and landmark locations with real-time speed and high accuracy. The proposed method belongs to the anchor-free category. This is achieved by (a) learning the possibility that a face exists via semantic maps and (b) learning the bounding box, offsets, and five landmarks for each position that potentially contains a face. Specifically, the method can run in real time on a single CPU core and at 200 FPS on an NVIDIA 2080TI for VGA-resolution images, while simultaneously achieving superior accuracy (WIDER FACE Val/Test-Easy: 0.935/0.932, Medium: 0.924/0.921, Hard: 0.875/0.873; FDDB discontinuous: 0.980, continuous: 0.732).
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 48
    Publication Date: 2020-07-03
    Description: In order to identify multistage fracturing horizontal well (MFHW) parameters more accurately and address the heterogeneity of reservoirs and the randomness of well-production data, a new method based on a PSO-RBF neural network model is proposed. First, a GPU parallel program is used to calculate the bottomhole pressure of a multistage fracturing horizontal well. Second, most of these pressure data are imported into the RBF neural network model for training. During training, the global optimization capability of the PSO algorithm is employed to optimize the parameters of the RBF neural network, and eventually, the required PSO-RBF neural network model is established. Third, the resulting neural network is tested using the remaining data. Finally, a field case of a multistage fracturing horizontal well is studied using the presented PSO-RBF neural network model. The results show that in most cases the proposed model performs better than other models, with the highest correlation coefficient and the lowest mean absolute error. This proves that the PSO-RBF neural network model can be applied effectively to horizontal well parameter identification. The proposed model has great potential to improve the prediction accuracy of reservoir physical parameters.
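The PSO component can be illustrated with a minimal particle swarm loop on a one-dimensional test function. Coupling the swarm to RBF-network parameters, as the paper does, is omitted for brevity; the inertia and acceleration coefficients here are common textbook values, not the authors'.

```python
import random

# Minimal particle swarm optimization: each particle tracks its personal best,
# the swarm tracks a global best, and velocities blend inertia with pulls
# toward both bests.

def pso_minimize(objective, bounds, n_particles=10, n_iters=50, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest, pbest_val = pos[:], [objective(x) for x in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vel[i] = (0.7 * vel[i] + 1.5 * r1 * (pbest[i] - pos[i])
                      + 1.5 * r2 * (gbest - pos[i]))
            pos[i] = min(hi, max(lo, pos[i] + vel[i]))
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

best_x, best_val = pso_minimize(lambda x: (x - 3.0) ** 2, bounds=(-10.0, 10.0))
```

In the paper's setting, `objective` would measure the RBF network's training error as a function of its centers and widths.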
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 49
    Publication Date: 2020-07-04
    Description: Worldwide, about 700 million people are estimated to suffer from mental illnesses. In recent years, given the rapid growth in mental disorders, it has become essential to better understand the poor outcomes associated with mental health problems. Mental health research is challenging given the perceived limitations of ethical principles such as the protection of autonomy, consent, threat, and damage. In this survey, we investigate studies in which big data approaches were used in mental illness and treatment. First, different types of mental illness, for instance, bipolar disorder, depression, and personality disorders, are discussed. The effects of mental health on user behavior, such as suicide and drug addiction, are highlighted. A description of the methodologies and tools used to predict the mental condition of a patient with the help of artificial intelligence and machine learning is presented.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 50
    Publication Date: 2020-07-15
    Description: The integration of new-generation information technology with the automobile manufacturing industry has significantly improved the industry's production efficiency, but it also increases its technology application costs. The boundary value of the industry's income change can be obtained by examining the influence of new-generation information technology on the price of parts, the price of automobiles, and the production quantity in the upstream and downstream of the automobile manufacturing industry chain. The study found that the income of automobile manufacturers that meet the technology application cost conditions increases. The value added to the downstream enterprises in the industrial chain is greater than that added to the upstream companies. The lower the cost of technology application, the greater the impact on automobile production volume. Finally, an example is used to verify the reliability of the research results.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 51
    Publication Date: 2020-07-15
    Description: In this paper, we address a hybrid flow-shop scheduling problem with the objective of minimizing the makespan and the cost of delay. The problem considers the diversity of customers’ requirements, which influences the production procedures and increases the complexity of the problem. The features of the problem are inspired by real-world situations, and the problem is formulated as a mixed-integer programming model. To tackle it, we propose a hybrid metaheuristic combining Differential Evolution (DE) and Local Search (LS), denoted DE-LS. Differential evolution is a state-of-the-art metaheuristic that can solve complex optimization problems efficiently and has been applied in many fields, especially flow-shop scheduling. Moreover, the study not only combines DE and LS but also modifies the mutation process and provides a novel initialization process and correction strategy. The proposed DE-LS has been compared with four variant algorithms to justify its improvements. Experimental results verify the superiority and robustness of the proposed algorithm.
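The DE-plus-local-search structure can be sketched compactly: a standard DE/rand/1/bin loop with a small Gaussian perturbation of the current best individual as the local search step. This runs on a numeric test function rather than on the paper's flow-shop encoding, and the control parameters (F = 0.5, CR = 0.9) are common defaults, not the authors' settings.

```python
import random

# Compact differential evolution with a trivial local search (DE-LS sketch).

def de_ls_minimize(objective, dim, bounds, pop_size=12, n_gens=60, seed=2):
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]
    for _ in range(n_gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = []
            for d in range(dim):  # DE/rand/1/bin mutation + crossover
                if rng.random() < 0.9:
                    v = pop[a][d] + 0.5 * (pop[b][d] - pop[c][d])
                else:
                    v = pop[i][d]
                trial.append(min(hi, max(lo, v)))
            f_trial = objective(trial)
            if f_trial < fit[i]:          # greedy selection
                pop[i], fit[i] = trial, f_trial
        # local search: small Gaussian move around the current best
        best = min(range(pop_size), key=lambda j: fit[j])
        neighbour = [x + rng.gauss(0.0, 0.05) for x in pop[best]]
        f_n = objective(neighbour)
        if f_n < fit[best]:
            pop[best], fit[best] = neighbour, f_n
    best = min(range(pop_size), key=lambda j: fit[j])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)
solution, value = de_ls_minimize(sphere, dim=3, bounds=(-5.0, 5.0))
```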
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 52
    Publication Date: 2020-07-14
    Description: Scheduling is a fundamental factor in managing the network resources of the Internet of Things. For IoT systems such as production lines to operate effectively, it is necessary to find an intelligent management scheme, i.e., a schedule, for network resources. In this study, we focus on the multiskill resource-constrained project scheduling problem (MS-RCPSP), a combinatorial optimization problem that has received extensive attention from the research community due to its advantages in network resource management. In recent years, many approaches have been applied to this problem, such as genetic algorithms and ant colony optimization. Although these approaches introduce various optimization techniques, the premature convergence issue persists. Moreover, previous studies have only been verified on simulated data, not on real factory data. This paper formulates the MS-RCPSP and proposes a novel algorithm called DEM to solve it. The proposed algorithm is based on the differential evolution metaheuristic. Besides, we build a reallocation function to improve solution quality so that the proposed algorithm converges rapidly to the global extremum while avoiding getting trapped in local extrema. To evaluate the performance of the proposed algorithm, we conduct experiments on iMOPSE, the simulated dataset used in previous research studies. In addition, DEM was verified on a real dataset provided by a well-known textile factory. Experimental results on both simulated and real data show that the proposed algorithm not only finds a better schedule than related works but can also reduce the processing time of the production line currently used at the textile factory.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 53
    Publication Date: 2020-07-14
    Description: Interest in face recognition studies has grown rapidly in the last decade. One of the most important problems in face recognition is identifying people’s ethnicity. In this study, a new deep learning convolutional neural network is designed to create a new model that can recognize people’s ethnicity from their facial features. The new ethnicity dataset consists of 3141 images collected from three different nationalities. To the best of our knowledge, this is the first image dataset collected for people’s ethnicity, and it will be made available to the research community. The new model was compared with two state-of-the-art models, VGG and Inception V3, and the validation accuracy was calculated for each convolutional neural network. The generated models were tested on several images of people, and the results show that the best performance was achieved by our model, with a verification accuracy of 96.9%.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 54
    Publication Date: 2020-07-13
    Description: Over the past decade, the data recorded (due to digitization) in healthcare sectors have continued to increase, prompting interest in big data in healthcare. Plenty of information already exists, ready for analysis. Researchers are continually striving to extract valuable insight from healthcare big data for quality medical services. This article provides a systematic review of healthcare big data based on the systematic literature review (SLR) protocol. In particular, the present study highlights some valuable research aspects of healthcare big data, evaluating 34 journal articles (between 2015 and 2019) according to defined inclusion-exclusion criteria. More specifically, the present study focuses on determining the extent of healthcare big data analytics together with its applications and challenges in healthcare adoption. Besides, the article discusses the big data produced by these healthcare systems, big data characteristics, various issues in dealing with big data, and how big data analytics contributes to achieving meaningful insight from these datasets. In short, the article summarizes the existing literature on healthcare big data and provides researchers with a foundation for future study in healthcare contexts.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 55
    Publication Date: 2020-07-22
    Description: Smartphones with gym exercise predictors can act as trainers for gym-goers. However, the various available solutions do not cover the complete set of most-practiced exercises. Therefore, in this research, a complete set of the 26 most-practiced exercises was identified from the literature; among them, 14 were unique to this study and 12 were common to the existing literature. Further objectives of the research were finding suitable smartphone attachment position(s) and the number of sensors needed to predict exercises with the highest possible accuracy. Besides, this study involved more participants (20) than the existing literature (at most 10). The results indicate three key lessons: (a) the most suitable classifier to predict a class (exercise) from the sensor-based data was found to be KNN (K-nearest neighbors); (b) sensors placed at three positions (arm, belly, and leg) can be more accurate than other positions for gym exercises; and (c) the accelerometer and gyroscope, when combined, can provide classification accuracy of up to 99.72% (using KNN as the classifier at all three positions).
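The KNN classifier the study found most accurate is simple enough to sketch in full. The two-dimensional feature vectors below (a mean-acceleration-like value and a mean-angular-rate-like value) and the exercise labels are invented for illustration; the real system uses richer accelerometer/gyroscope features from three body positions.

```python
import math
from collections import Counter

# Plain K-nearest-neighbours: Euclidean distance plus majority vote.

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature_vector."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Invented (accelerometer-like, gyroscope-like) features for two exercises.
train = [((0.9, 0.2), "squat"), ((1.0, 0.3), "squat"), ((0.8, 0.25), "squat"),
         ((0.2, 1.1), "curl"), ((0.3, 0.9), "curl"), ((0.25, 1.0), "curl")]
prediction = knn_predict(train, query=(0.95, 0.28))
```

Combining sensors, as in lesson (c), amounts to concatenating the per-sensor features into a longer vector before computing the distances.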
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 56
    Publication Date: 2020-06-02
    Description: Using the k-nearest neighbor (kNN) algorithm in the supervised learning setting to detect anomalies can yield accurate results. However, when using the kNN algorithm to detect anomalies, finding k neighbors in large-scale log data is inefficient; at the same time, log data are imbalanced in quantity, so it is a challenge to select proper k neighbors for different data distributions. In this paper, we propose a log-based anomaly detection method with efficient neighbor search and automatic selection of k neighbors. First, we propose a neighbor search method based on minhash and the MVP-tree. The minhash algorithm is used to group similar logs into the same bucket, and an MVP-tree model is built for the samples in each bucket. In this way, we reduce the effort of distance calculation and the number of neighbor samples that need to be compared, thereby improving the efficiency of finding neighbors. For selecting k neighbors, we propose an automatic method based on the silhouette coefficient, which can select proper k neighbors to improve the accuracy of anomaly detection. Our method is verified on six different types of log data to prove its universality and feasibility.
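The minhash bucketing step can be sketched as follows: each log's token set gets a short signature of per-hash minima, and logs sharing a signature land in the same bucket, so distance computations are confined to one bucket. The per-bucket MVP-tree is omitted, the salted-MD5 hash family is an illustrative choice, and the sample logs are invented.

```python
import hashlib

# Minhash bucketing sketch: similar token sets tend to share signatures.

def minhash_signature(tokens, n_hashes=4):
    sig = []
    for i in range(n_hashes):
        # derive n deterministic hash functions by salting MD5 with the index
        sig.append(min(int(hashlib.md5(f"{i}:{t}".encode()).hexdigest(), 16)
                       for t in tokens))
    return tuple(sig)

def bucket_logs(logs, n_hashes=4):
    buckets = {}
    for log in logs:
        key = minhash_signature(set(log.split()), n_hashes)
        buckets.setdefault(key, []).append(log)
    return buckets

logs = ["disk error on node 7", "disk error on node 9",
        "user login ok", "user login ok"]
buckets = bucket_logs(logs)
```

Identical token sets always share a bucket; near-duplicates share one with probability roughly equal to their Jaccard similarity raised to the number of hashes, which is why real systems tune the signature length and use banding.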
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 57
    Publication Date: 2020-05-31
    Description: Sensitive data need to be protected from being stolen and read by unauthorized persons, regardless of whether the data are stored on hard drives, flash memory, laptops, desktops, or other storage devices. In enterprise environments where sensitive data, such as financial or military data, are stored on storage devices, encryption is used to ensure data confidentiality. Nowadays, SSD-based NAND storage devices are favored over HDDs and SSHDs for storing data because they offer increased performance and reduced access latency. In this paper, the performance of different symmetric encryption algorithms is evaluated on an HDD, an SSHD, and SSD-based NAND MLC flash memory using two different storage encryption software packages. Based on the experiments we carried out, the Advanced Encryption Standard (AES) algorithm on the HDD outperforms the Serpent and Twofish algorithms in terms of random read speed and write speed (both sequential and random), whereas the Twofish algorithm is slightly faster than AES in sequential reading on the SSHD and SSD-based NAND MLC flash memory. By conducting a full range of evaluative tests across the HDD, SSHD, and SSD, our experimental results give storage consumers a better idea of which kind of storage device and encryption algorithm is suitable for their purposes, giving them the opportunity to achieve the best storage performance while securing their sensitive data.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 58
    Publication Date: 2020-07-22
    Description: As a new social e-commerce model, community group purchase of fresh agricultural products has gradually been welcomed by the public. However, its development and operation still face problems of homogeneous competition. In order to enhance operators’ competitive advantages, this paper proposes a collaborative optimization mechanism, including a new pricing model and a new cold-chain vehicle route planning model. It aims to ensure the quality of fresh products, reduce logistics costs, and improve enterprise profitability. The model takes into account not only the quality of fresh products and its impact on price and demand but also the impact of quality changes on total distribution costs. A two-layer programming method is applied to realize the collaborative optimization mechanism, and the upper and lower models are solved by mathematical derivation and proof methods and by optimization procedures, respectively. Finally, the feasibility and effectiveness of the model are verified with specific examples, and the following conclusions are obtained: price, delivery quality, and total profit increase with the potential market demand rate. The lower the refrigeration temperature of the chosen vehicle, within a certain range, the higher the quality that can be obtained. To obtain the highest profit, community group purchase operators can choose a higher distribution temperature on the premise that the quality of the fresh agricultural products remains at an appropriate level.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
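The pricing model above couples product quality, refrigeration temperature, price, and demand. A toy joint grid search can illustrate that trade-off; every coefficient below (demand slope, quality decay rate, cooling cost) is invented for illustration and is not from the paper:

```python
# All coefficients are hypothetical; the paper's actual demand and decay functions differ.
def quality(temp_c, hours, q0=1.0, k=0.005):
    """Fresh-product quality decays faster at higher storage temperatures."""
    return q0 * (1 - k * max(temp_c, 0) * hours)

def profit(price, temp_c, hours=12, base_demand=500, price_sens=30,
           qual_sens=200, unit_cost=5.0, cooling_cost_per_deg=8.0, max_temp=25):
    q = quality(temp_c, hours)
    demand = max(base_demand - price_sens * price + qual_sens * q, 0)
    cooling = cooling_cost_per_deg * (max_temp - temp_c)  # colder storage costs more
    return (price - unit_cost) * demand - cooling

# Joint grid search over price (5.0..15.0) and refrigeration temperature (0..25 C).
best = max(((profit(p / 10, t), p / 10, t)
            for p in range(50, 151) for t in range(0, 26)),
           key=lambda x: x[0])
best_profit, best_price, best_temp = best
```

With these made-up numbers the quality loss per extra degree outweighs the cooling saving, so the search settles at the coldest temperature — the paper's point is precisely that with realistic costs the optimum can sit at a warmer setting.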
  • 59
    Publication Date: 2020-06-16
    Description: With the accelerating growth of big data, especially in the healthcare area, information extraction is needed now more than ever, for it can convert unstructured information into easily interpretable structured data. Relation extraction is the second of the two important tasks of information extraction. This study presents an overview of relation extraction using distant supervision, providing a generalized architecture of the task based on the state-of-the-art work that proposed the method. Besides, it surveys the methods used in the literature on this topic, with a description of the different knowledge bases used in the process along with the corpora, which can be helpful for beginner practitioners seeking knowledge on the subject. Moreover, the limitations of the proposed approaches and future challenges are highlighted, and possible solutions are proposed.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
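Distant supervision, as surveyed above, labels training sentences by aligning them with facts from a knowledge base: any sentence mentioning both entities of a fact is (noisily) assumed to express that fact's relation. A minimal sketch with an invented two-fact KB:

```python
# Toy knowledge base of (head, relation, tail) facts; contents are illustrative.
KB = {
    ("aspirin", "treats", "headache"),
    ("insulin", "treats", "diabetes"),
}

def distant_label(sentence):
    """Attach every KB relation whose entity pair co-occurs in the sentence."""
    s = sentence.lower()
    return [(head, rel, tail) for head, rel, tail in KB
            if head in s and tail in s]

corpus = [
    "Aspirin is commonly prescribed for headache relief.",
    "Patients with diabetes often require insulin therapy.",
    "Aspirin was first synthesized in 1897.",  # entity pair absent -> unlabeled
]
labeled = {sent: distant_label(sent) for sent in corpus}
```

The third sentence stays unlabeled even though it mentions aspirin — and sentences that mention both entities without expressing the relation get wrong labels, which is exactly the noise problem the surveyed methods try to mitigate.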
  • 60
    Publication Date: 2020-06-10
    Description: Cloud service providers (CSPs) can offer virtually unlimited storage space with cheaper maintenance costs than the traditional storage mode. Users tend to store their data with geographically diverse CSPs so as to avoid vendor lock-in. Static data placement has been widely studied in recent works. However, the data access pattern is often time-varying, and users may pay more if a static placement is kept over the data lifetime. How to dynamically place users’ data under a time-varying access pattern therefore remains an open challenge. To this end, we propose ADPA, an adaptive data placement architecture that adjusts the data placement scheme based on the time-varying data access pattern, with the objectives of minimizing the total cost and maximizing data availability. The proposed architecture includes two main components: a data retrieval frequency prediction module based on LSTM and a data placement optimization module based on Q-learning. The performance of ADPA is evaluated in several experimental scenarios using the NASA-HTTP workload and cloud provider information.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
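ADPA's placement optimizer is based on Q-learning. A single-state tabular sketch conveys the core update rule, Q(a) ← Q(a) + α(r + γ·maxQ − Q(a)); the provider costs, reward shape, and hyperparameters below are invented and bear no relation to the paper's actual state/action design:

```python
import random

random.seed(0)
providers = {"A": 1.0, "B": 0.6, "C": 0.9}  # cost per GB per epoch (hypothetical)
Q = {p: 0.0 for p in providers}
alpha, gamma, epsilon = 0.3, 0.9, 0.2

for _ in range(500):
    # epsilon-greedy action selection over a single state
    if random.random() < epsilon:
        action = random.choice(list(providers))
    else:
        action = max(Q, key=Q.get)
    reward = -providers[action]  # cheaper placement => higher reward
    Q[action] += alpha * (reward + gamma * max(Q.values()) - Q[action])

best_provider = max(Q, key=Q.get)
```

The learned policy converges on the cheapest provider; the real module would condition on the LSTM-predicted retrieval frequency rather than a fixed cost table.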
  • 61
    Publication Date: 2020-06-10
    Description: Spinocerebellar ataxia type 3 (SCA3) is a common SCA subtype worldwide. It is a neurodegenerative disease characterized by ataxia. Patients exhibit common neuropsychological symptoms such as depression and anxiety, and some have suicidal tendencies when severely depressed. It is therefore important to study the relationship between depression severity and clinical symptoms (SARA), so as to detect patients’ psychological state in time and help them respond actively to treatment. A total of 97 Chinese SCA3 patients were enrolled in the study. The Beck Depression Scale was used to investigate the prevalence of depression in the confirmed patients, and the distribution of the depression data in these patients was examined. Quantile regression was then used to model the depression status of Chinese SCA3 patients and to identify the key factors affecting depression at different quantiles. The study shows that SARA and gender are important factors affecting depression: the influence of SARA is small initially, then increases, and decreases in the later period, but it is always positively correlated with depression; the progression of SARA in women is gentler than in men, and their degree of depression is lower.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
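Identifying key factors "at different quantiles" points to quantile regression, whose building block is the pinball (check) loss: the τ-th sample quantile is the value minimizing the average check loss. A small numeric illustration with made-up scores:

```python
# Pinball (check) loss: tau * (v - q) above q, (1 - tau) * (q - v) below it.
def pinball(y, q, tau):
    return sum(tau * (v - q) if v >= q else (1 - tau) * (q - v) for v in y) / len(y)

scores = [4, 7, 9, 12, 15, 21, 30]  # invented depression scores

# Grid-search the minimizer of the median (tau = 0.5) check loss.
grid = [x / 10 for x in range(0, 400)]
best_q = min(grid, key=lambda q: pinball(scores, q, 0.5))
```

The minimizer lands on the sample median (12); repeating with τ = 0.1 or τ = 0.9 recovers the tails, which is how quantile regression lets the paper study how SARA's effect differs for mildly versus severely depressed patients.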
  • 62
    Publication Date: 2020-07-20
    Description: Many definitions of software Technical Debt (TD) have been proposed over time. While many techniques to measure TD have emerged in recent times, there is still no clear understanding of how different techniques compare when applied to software projects. The goal of this paper is to shed some light on this aspect by comparing three TD identification techniques proposed over time: (i) the Maintainability Index (MI), (ii) SIG TD models, and (iii) SQALE analysis. Considering 20 open source Python libraries, we compare the TD measurement time series in terms of trends and evolution according to different sets of releases (major, minor, and micro), to see whether practitioners’ perception of TD evolution could be impacted. While all methods report generally growing TD trends over time, the patterns differ: SQALE reports more periods of steady state than MI and SIG TD; MI reports more repayments of TD than the other methods; SIG TD and MI are the models whose TD evolution is most similar, while SQALE and MI are less similar. The implication is that each method gives a slightly different perception of TD evolution.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
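One of the three compared techniques, the Maintainability Index, is a closed-form formula over size and complexity metrics. A sketch of one common MI variant is below; the metric values are made up, and tools differ in the exact coefficients and normalization, so treat this as illustrative only:

```python
import math

def maintainability_index(halstead_volume, cyclomatic, loc):
    """One common MI variant, clamped and rescaled to 0..100 as many tools do."""
    mi = (171
          - 5.2 * math.log(halstead_volume)
          - 0.23 * cyclomatic
          - 16.2 * math.log(loc))
    return max(0.0, mi * 100 / 171)

# Two hypothetical modules: a small clean one and a large tangled one.
simple_module = maintainability_index(halstead_volume=300, cyclomatic=5, loc=120)
complex_module = maintainability_index(halstead_volume=9000, cyclomatic=40, loc=2500)
```

Because MI is a direct function of current code metrics, any refactoring that shrinks volume or complexity immediately raises it — one plausible reason MI registers more TD "repayments" than the other two methods in the paper's comparison.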
  • 63
    Publication Date: 2020-07-20
    Description: Interactions between proteins play important roles in many organisms and are involved in almost all cellular activities. Research on protein-protein interactions (PPIs) can make a huge contribution to the prevention and treatment of diseases. Currently, many machine learning-based prediction methods have been proposed to predict PPIs. In this article, we propose a novel method, ACT-SVM, that can effectively predict PPIs. The ACT-SVM model maps protein sequences to digital features, performs feature extraction twice on each protein sequence to obtain a vector A and a descriptor CT, and combines them into one vector. The feature vectors of a protein pair are then merged as the input of the support vector machine (SVM) classifier. We use nonredundant H. pylori and human datasets to verify the prediction performance of our method. The proposed method achieves a prediction accuracy of 0.727897 on the H. pylori data and 0.838799 on the human dataset. The results demonstrate that the method is a stable and reliable prediction model for PPIs.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
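The descriptor CT named above is, in the PPI literature, usually the conjoint triad descriptor: the 20 amino acids are bucketed into 7 physicochemical classes, and frequencies of consecutive class triads give a 7³ = 343-dimensional vector. A sketch under that assumption (the class grouping follows the common conjoint triad scheme; the example sequence is arbitrary):

```python
# Amino-acid classes of the conjoint triad scheme (7 groups over 20 residues).
CLASSES = {"AGV": 0, "ILFP": 1, "YMTS": 2, "HNQW": 3, "RK": 4, "DE": 5, "C": 6}
AA2CLASS = {aa: c for group, c in CLASSES.items() for aa in group}

def conjoint_triad(seq):
    """343-dim frequency vector of consecutive class triads in a protein sequence."""
    vec = [0] * (7 ** 3)
    codes = [AA2CLASS[a] for a in seq if a in AA2CLASS]
    for i in range(len(codes) - 2):
        vec[codes[i] * 49 + codes[i + 1] * 7 + codes[i + 2]] += 1
    total = sum(vec) or 1
    return [v / total for v in vec]  # normalize counts to frequencies

features = conjoint_triad("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```

Two such vectors (one per protein) concatenated form a fixed-length input an SVM can consume, regardless of the original sequence lengths.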
  • 64
    Publication Date: 2020-07-20
    Description: As the transaction subject of contract farming, agricultural products are characterized by a long production cycle and a short sales cycle, like other perishable commodities. In the process of executing a contract, both the company and the farmer run the risk of great uncertainty. This paper studies the coordination of an agricultural supply chain under uncertainty in agricultural output and market demand. First, a random output volatility factor and a market demand volatility factor are used as two random variables to represent the uncertainty of agricultural output and market demand, and revenue functions are set up for the company and the farmer with the objective of maximizing expected returns. Theoretical derivation of these revenue functions proves that there is an optimal target yield in a centralized decision-making supply chain system and a single optimal solution maximizing farmers’ revenue in a decentralized one, but the centralized supply chain is superior to its decentralized, uncoordinated counterpart in overall benefit. Second, a revenue-sharing-plus-margin contract mechanism is proposed to coordinate income distribution between the two parties of the supply chain through a revenue-sharing coefficient and a margin. Third, calculation examples are given and solved in MATLAB under the assumption that both the output volatility factor and the demand volatility factor are uniformly distributed, and the theoretical results are verified. Finally, numerical analysis of the coordination mechanism provides an optimal value range for the revenue-sharing coefficient and the margin so that a Pareto improvement of the company’s and the farmers’ incomes can be achieved.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
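The revenue-sharing-plus-margin contract above can be illustrated by Monte-Carlo simulation under the paper's uniform-volatility assumption; all prices, costs, distribution bounds, and the sharing coefficient below are invented for illustration:

```python
import random

random.seed(42)

def expected_profits(share=0.7, margin=200.0, n=20000):
    """Average per-period profits of company and farmer under a
    revenue-sharing-plus-margin contract, with uniform output/demand shocks."""
    company = farmer = 0.0
    for _ in range(n):
        output = 1000 * random.uniform(0.8, 1.2)  # random yield factor
        demand = 900 * random.uniform(0.7, 1.3)   # random demand factor
        sold = min(output, demand)
        revenue = 6.0 * sold
        farm_cost, firm_cost = 2.0 * output, 1.0 * sold
        farmer += share * revenue - farm_cost + margin
        company += (1 - share) * revenue - firm_cost - margin
    return company / n, farmer / n

company_profit, farmer_profit = expected_profits()
```

Sweeping `share` and `margin` over a grid and keeping the pairs where both averages beat the uncoordinated baseline is the numeric analogue of the Pareto-improvement range the paper derives.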
  • 65
    Publication Date: 2020-06-06
    Description: In the field of object detection, tremendous success has been achieved recently, but detecting and identifying objects both accurately and quickly remains very challenging. Human beings can detect and recognize multiple objects in images or videos with ease regardless of an object’s appearance, but for computers it is hard to identify and distinguish between things. In this paper, a modified YOLOv1-based neural network is proposed for object detection. The network model has been improved in the following ways. Firstly, the loss function of the YOLOv1 network is modified: the improved model replaces the margin style with a proportion style, which is more flexible and more reasonable in optimizing the network error than the old loss function. Secondly, a spatial pyramid pooling layer is added. Thirdly, an inception module with a 1 × 1 convolution kernel is added, which reduces the number of weight parameters of the layers. Extensive experiments on the Pascal VOC 2007/2012 datasets show that the proposed method achieves better performance.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
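Detectors such as the modified YOLOv1 above are trained and evaluated on box overlap. The standard intersection-over-union computation between two axis-aligned boxes is:

```python
# IoU between boxes given as (x1, y1, x2, y2) corner coordinates.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

overlap = iou((0, 0, 10, 10), (5, 5, 15, 15))  # intersection 25, union 175
```

A predicted box usually counts as correct on Pascal VOC when its IoU with a ground-truth box exceeds 0.5.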
  • 66
    Publication Date: 2020-07-17
    Description: The Internet has revolutionized business models and given birth to the sharing economy. A large number of platform enterprises are growing rapidly but face sustainability problems. Platform enterprises have to keep innovating their business models in order to obtain sustainable competitive advantages. In a complex and changing environment, dynamic capabilities help enterprises overcome core rigidity and promote business model innovation. This article analyzes the elements of business model innovation in platform enterprises and the relationship between dynamic capabilities and business model innovation. It concludes that the elements of business model innovation are value proposition, product, partnership, and profit model innovation. Dynamic capabilities promote business model innovation, which in turn has different guiding effects on the cultivation of dynamic capabilities. An exploratory case study of DiDi taxi was conducted to verify the theoretical model.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 67
    Publication Date: 2020-06-26
    Description: After an unconventional emergency event occurs, a reasonable and effective emergency decision must be made within a short time. In the emergency decision making process, decision makers’ opinions are often uncertain and imprecise, and determining the optimal solution to respond to an emergency event is a complex group decision making problem. In this study, a novel large group emergency decision making method, called the linguistic Z-QUALIFLEX method, is developed by extending the QUALIFLEX method with linguistic Z-numbers. The evaluations of decision makers on the alternative solutions are first expressed as linguistic Z-numbers, and the group decision matrix is then constructed by aggregating the evaluations of all subgroups. The QUALIFLEX method is used to rank the alternative solutions for the unconventional emergency event. In addition, a real-life example of emergency decision making is presented, and a comparison with existing methods is performed to validate the effectiveness and practicability of the proposed method. The results show that the proposed linguistic Z-QUALIFLEX method can accurately express the evaluations of the decision makers and obtain a more reasonable ranking of solutions for emergency decision making.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
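QUALIFLEX, which the method above extends, evaluates every candidate permutation (ranking) of the alternatives by a concordance index and keeps the best one. A crisp-number sketch of that core loop — the paper evaluates with linguistic Z-numbers, whereas the scores and weights here are plain invented numbers:

```python
from itertools import permutations

scores = {  # alternative -> score per criterion (illustrative values)
    "A": [0.6, 0.8, 0.5],
    "B": [0.7, 0.4, 0.9],
    "C": [0.5, 0.6, 0.7],
}
weights = [0.5, 0.3, 0.2]

def concordance(ranking):
    """Weighted agreement between a hypothesized ranking and pairwise scores."""
    total = 0.0
    for i in range(len(ranking)):
        for j in range(i + 1, len(ranking)):  # ranking[i] is placed above ranking[j]
            hi, lo = scores[ranking[i]], scores[ranking[j]]
            total += sum(w * (1 if a > b else -1 if a < b else 0)
                         for w, a, b in zip(weights, hi, lo))
    return total

best_ranking = max(permutations(scores), key=concordance)
```

Exhaustive enumeration is n! in the number of alternatives, which is why QUALIFLEX suits decision problems with few alternatives and many criteria — the setting of the emergency example in the paper.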
  • 68
    Publication Date: 2020-05-31
    Description: The prediction of stock excess returns is an important research topic for quantitative trading, and stock price prediction based on machine learning is receiving more and more attention. Taking Chinese A-share data from July 2014 to September 2017 as the research object, this article proposes a stock excess return forecasting method that combines analyst research reports and investor sentiment. The proposed method measures analyst research reports on individual stocks, separating them into two indicators, research report attention and rating sentiment; calculates investor sentiment based on external market factors; and uses an LSTM model to represent the time series characteristics of stocks. The results show that (1) evaluated by accuracy and F1, the proposed algorithm outperforms the benchmark algorithms; (2) the deep learning LSTM algorithm performs better than the traditional machine learning algorithm SVM; (3) using investor sentiment as the initial hidden state of the model improves accuracy; and (4) taking the separated research report attention indicators together with investor sentiment and price as model inputs effectively improves model performance.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    Publication Date: 2020-07-22
    Description: CCFD, a parallel framework based on structured grids and suitable for parallel computing over very large numbers of structured blocks, is designed and implemented. An overdecomposition method whose load balancing strategy is based on domain decomposition is designed for the graph partitioning algorithm: taking computation and communication as the limiting conditions, it balances the load between blocks by partitioning a weighted graph. The fast convergence of a high-efficiency parallel geometric multigrid greatly improves the parallel efficiency and convergence speed of the CCFD software. This paper introduces the software structure, process invocation, and calculation method of CCFD and presents a hybrid parallel acceleration technique based on the Sunway TaihuLight heterogeneous architecture. Results computed for the ONERA M6 and DLR-F6 standard models show that the software structure and methods in this paper are feasible and can meet the requirements of large-scale parallel solutions.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    Publication Date: 2020-06-03
    Description: The booming development of data science and big data technology stacks has driven continuous iteration of data science research and working methods. The division of labor between data science and big data work is now increasingly fine-grained, and traditional working methods, from building the infrastructure environment to data modelling and analysis, greatly reduce work and research efficiency. In this paper, we build a data science and big data analytics application platform based on a microservices architecture, aimed at friendly collaboration within data science teams in education and other nonspecialist research settings. Built on microservices that make it easy to update each component, the platform provides a personal code experiment environment integrating JupyterHub on Spark and HDFS for multiuser use, as well as visual modelling tools that follow the modular design of data science engineering, based on Greenplum in-database analytics. The entire web service system is developed with Spring Boot.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 71
    Publication Date: 2020-06-08
    Description: The Electronic Medical Record (EMR) contains a great deal of patient-related medical knowledge and has been widely used in the construction of medical knowledge graphs. Previous studies mainly focus on features based on the surface semantics of EMRs for relation extraction, such as contextual features, but the sentence-structure features of Chinese EMRs have been neglected. In this paper, a relation extraction method fusing dependency parsing is proposed. Specifically, this paper extends the basic features with medical record and indicator features applicable to Chinese EMRs. Furthermore, dependency syntactic features are introduced to analyse the dependency structure of sentences. The F1 value of relation extraction based on the extended features is 4.87% higher than that based on the basic features, and, compared with the former, the F1 value of relation extraction fusing dependency parsing is a further 4.39% higher. The results of experiments performed on a Chinese EMR dataset show that the extended features and dependency parsing both contribute to relation extraction.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    Publication Date: 2020-06-08
    Description: Online product reviews are exploding on e-commerce platforms, and mining the aspect-level product information contained in those reviews has great economic benefit. Aspect category classification is a basic task of aspect-level sentiment analysis, which has become a hot research topic in the natural language processing (NLP) field over the last decades. On various e-commerce platforms, user-generated question-answering (QA) reviews have emerged that generally contain much aspect-related product information. Although some researchers have devoted their efforts to aspect category classification for traditional product reviews, existing deep learning-based approaches cannot represent QA-style reviews well. Thus, we propose a 4-dimension (4D) textual representation model based on QA interaction-level and hyperinteraction-level representations, modeling text at four levels: word, sentence, QA interaction, and hyperinteraction. In our experiments, empirical studies on datasets from three domains demonstrate that our proposals perform better than traditional sentence-level representation approaches, especially in the Digit domain.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 73
    Publication Date: 2020-06-12
    Description: The aim of the Internet of things (IoT) is to bring every object (wearable sensors, healthcare sensors, cameras, home appliances, smart phones, etc.) online. These objects generate huge amounts of data, which in turn require efficient storage and processing. Cloud computing is an emerging technology for meeting this need. However, some applications, such as healthcare, need to process data in real time to improve their performance and require low latency and delay. Fog computing is a promising solution that facilitates the healthcare domain by reducing the delay of multihop data communication, distributing resource demands, and promoting service flexibility. In this study, a fog-based IoT healthcare framework is proposed to minimize the energy consumption of the fog nodes. Experimental results reveal that the proposed framework performs efficiently in terms of network delay and energy usage. Furthermore, the authors discuss and suggest important big data infrastructure services that need to be present on fog devices for the analytics of healthcare big data.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 74
    Publication Date: 2020-09-18
    Description: Insect intelligent building (I2B) is a novel decentralized, flat-structured intelligent building platform with excellent flexibility and scalability. I2B allows users to develop applications that include control strategies for efficiently managing and controlling buildings. However, developing I2B APPs (applications) is considered a challenging and complex task due to the complex structural features and parallel computing models of the I2B platform. Existing studies have been shown to encounter difficulty in supporting a high degree of abstraction and in allowing users to define control scenarios in a concise and comprehensible way. This paper aims to facilitate the development of such applications and to reduce the programming difficulty. We propose Touch, a textual domain-specific language (DSL) that provides a high-level abstraction of I2B APPs. Specifically, we first establish the conceptual programming architecture of the I2B APP, making the application more intuitive by abstracting different levels of physical entities in I2B. Then, we present special language elements to effectively support the parallel computing model of the I2B platform and provide a formal definition of the concrete Touch syntax. We also implement supporting tools for Touch, including a development environment as well as target code generation. Finally, we present experimental results to demonstrate the effectiveness and efficiency of Touch.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    Publication Date: 2020-09-17
    Description: Energy consumption has been one of the main concerns in supporting the rapid growth of cloud data centers, as it not only increases the electricity cost of service providers but also plays an important role in increasing greenhouse gas emissions and thus environmental pollution, while negatively affecting system reliability and availability. As a result, energy consumption and efficiency metrics have become a vital issue for scheduling parallel, task-based applications in cloud data centers. In this paper, we present a time- and energy-aware two-phase scheduling algorithm called best heuristic scheduling (BHS) for directed acyclic graph (DAG) scheduling on cloud data center processors. In the first phase, the algorithm allocates resources to tasks by sorting, based on four heuristic methods and a grasshopper algorithm, and then selects the most appropriate method for each task according to an importance factor determined by the end-user or service provider, so as to reach a solution at the right time. In the second phase, BHS minimizes the makespan and energy consumption according to this importance factor, taking into account the start time, setup time, end time, and energy profile of the virtual machines. Finally, a test dataset is developed to evaluate the proposed BHS algorithm against the multiheuristic resource allocation algorithm (MHRA). The results show that the proposed algorithm achieves 19.71% more energy savings than MHRA. Furthermore, the makespan is reduced by 56.12% in heterogeneous environments.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
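DAG scheduling heuristics like those inside BHS typically follow a list-scheduling pattern: repeatedly take a ready task (all predecessors finished) and place it on the earliest-available machine. A minimal sketch with a toy task graph — not the paper's workload, heuristic set, or energy model:

```python
tasks = {"a": 3, "b": 2, "c": 4, "d": 2, "e": 1}       # task -> duration
deps = {"c": {"a"}, "d": {"a", "b"}, "e": {"c", "d"}}  # task -> prerequisites

def makespan(num_machines):
    """Greedy list scheduling of the DAG on identical machines."""
    finish, machines, done = {}, [0.0] * num_machines, set()
    while len(done) < len(tasks):
        ready = [t for t in tasks
                 if t not in done and deps.get(t, set()) <= done]
        t = min(ready)  # deterministic tie-break; real schedulers use priorities
        m = min(range(num_machines), key=machines.__getitem__)
        start = max(machines[m],
                    max((finish[d] for d in deps.get(t, set())), default=0.0))
        finish[t] = start + tasks[t]
        machines[m] = finish[t]
        done.add(t)
    return max(finish.values())

two_machines = makespan(2)
```

With one machine the makespan is simply the sum of durations (12 here); a second machine shortens it by running independent tasks concurrently, and BHS-style methods additionally weigh the energy cost of each placement.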
  • 76
    Publication Date: 2020-09-23
    Description: Aiming at the low classification accuracy obtained on imbalanced datasets, an oversampling algorithm, AGNES-SMOTE (Agglomerative Nesting-Synthetic Minority Oversampling Technique), based on hierarchical clustering and an improved SMOTE, is proposed. Its key procedures are: hierarchically clustering the majority and minority samples, respectively; dividing the minority subclusters on the basis of the obtained majority subclusters; selecting “seed samples” based on the sampling weight and probability distribution of each minority subcluster; and restricting the generation of new samples to a certain area via a centroid method during sampling. The combination of AGNES-SMOTE and an SVM (Support Vector Machine) is presented to classify imbalanced datasets. Experiments on UCI datasets are conducted to compare the performance of different algorithms from the literature. The results indicate that AGNES-SMOTE excels at synthesizing new samples and improves SVM classification performance on imbalanced datasets.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
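The SMOTE step at the core of AGNES-SMOTE synthesizes a new minority sample by interpolating between a seed point and one of its minority-class nearest neighbors. A minimal sketch of that interpolation (the tiny 2-D point set is illustrative, and the seed is assumed to be a member of the minority list):

```python
import random

random.seed(1)

def nearest(points, x, k=2):
    """k nearest minority neighbors of x (x itself, at distance 0, is skipped)."""
    return sorted(points, key=lambda p: sum((a - b) ** 2
                                            for a, b in zip(p, x)))[1:k + 1]

def smote_sample(minority, x):
    """New synthetic point on the segment between x and a random neighbor."""
    neighbor = random.choice(nearest(minority, x))
    gap = random.random()  # interpolation factor in [0, 1)
    return tuple(a + gap * (b - a) for a, b in zip(x, neighbor))

minority = [(1.0, 1.0), (1.2, 0.9), (2.0, 1.5), (0.8, 1.1)]
synthetic = smote_sample(minority, minority[0])
```

AGNES-SMOTE's refinement is in choosing *which* seeds and *where* interpolation is allowed (within subclusters, bounded by a centroid), so synthetic points do not drift into majority-class territory.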
  • 77
    Publication Date: 2020-09-15
    Description: Sepsis is a leading cause of mortality in intensive care units and costs hospitals billions of dollars annually worldwide. Predicting survival time for sepsis patients is a time-critical prediction problem. Considering the useful sequential information for sepsis development, this paper proposes a time-critical topic model (TiCTM) inspired by the latent Dirichlet allocation (LDA) model. The proposed TiCTM approach takes into account the time dependency structure between notes, measurement, and survival time of a sepsis patient. Experimental results on the public MIMIC-III database show that, overall, our method outperforms the conventional LDA and linear regression model in terms of recall, precision, accuracy, and F1-measure. It is also found that our method achieves the best performance by using 5 topics when predicting the probability for 30-day survival time.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 78
    Publication Date: 2020-09-18
    Description: The “Double-Line Ship Mooring” (DLSM) mode has been applied as an innovative operation mode for solving berth allocation problems (BAP) in certain giant container terminals in China. In this study, a continuous berth scheduling problem under the DLSM mode is illustrated and solved with exact and heuristic methods, with the objective of minimizing the total operation cost, including both the additional transportation cost for vessels not located at their minimum-cost berthing position and the penalties for vessels unable to leave as planned. The problem is first formulated as a mixed-integer programming model and solved by the CPLEX solver for small-size instances. A particle swarm optimization (PSO) algorithm is then developed to obtain good quality solutions within reasonable execution time for large-scale problems. Experimental results show that the DLSM mode can not only greatly reduce the total operation cost but also significantly improve the efficiency of berth scheduling in comparison with the widely used single-line ship mooring (SLSM) mode. The comparison between the results obtained by the proposed PSO algorithm and those obtained by the CPLEX solver, for both small-size and large-scale instances, is also quite encouraging. In summary, this study both validates the effectiveness of the DLSM mode for heavily loaded ports and provides a powerful decision support tool for port operators to build good quality berth schedules under the DLSM mode.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
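The PSO algorithm above relies on the standard velocity/position update rule: each particle is pulled toward its personal best and the swarm's global best. A minimal continuous PSO on a 1-D test function shows the rule; the parameters and objective are illustrative, while the paper applies the same update to a much richer berth-scheduling encoding:

```python
import random

random.seed(7)

def pso(f, lo=-10.0, hi=10.0, n=15, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimize f on [lo, hi] with a basic particle swarm."""
    xs = [random.uniform(lo, hi) for _ in range(n)]  # positions
    vs = [0.0] * n                                   # velocities
    pbest = xs[:]                                    # personal bests
    gbest = min(xs, key=f)                           # global best
    for _ in range(iters):
        for i in range(n):
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(max(xs[i] + vs[i], lo), hi)  # clamp to the search box
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
        gbest = min(pbest, key=f)
    return gbest

best_x = pso(lambda x: (x - 3) ** 2)  # minimum at x = 3
```

For berth scheduling, a particle would instead encode berthing positions and times per vessel, with the objective being the total operation cost from the MIP model.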
  • 79
    Publication Date: 2020-09-25
    Description: Graphics processing units (GPUs) have strong floating-point capability and high memory bandwidth for data parallelism and have been widely used in high-performance computing (HPC). Compute unified device architecture (CUDA) serves as the GPU’s parallel computing platform and programming model, reducing the complexity of programming. Programmable GPUs are becoming popular in computational fluid dynamics (CFD) applications. In this work, we propose a hybrid parallel algorithm combining the message passing interface (MPI) and CUDA for CFD applications on multi-GPU HPC clusters. The AUSM+UP upwind scheme and the three-step Runge–Kutta method are used for spatial discretization and time discretization, respectively. Turbulence is solved with the k-ω SST two-equation model. The CPU only manages GPU execution and communication, while the GPU is responsible for data processing. Parallel execution and memory access optimizations are used to optimize the GPU-based CFD codes. We propose a nonblocking communication method that fully overlaps GPU computing, CPU-CPU communication, and CPU-GPU data transfer by creating two CUDA streams. Furthermore, a one-dimensional domain decomposition method is used to balance the workload among GPUs. Finally, we evaluate the hybrid parallel algorithm on compressible turbulent flow over a flat plate. The performance of a single-GPU implementation and the scalability of multi-GPU clusters are discussed. Performance measurements show that multi-GPU parallelization can achieve a speedup of more than 36 times with respect to CPU-based parallel computing, and the parallel algorithm has good scalability.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 80
    Publication Date: 2020-08-01
    Description: Background. Currently, echocardiography has become an essential technology for the diagnosis of cardiovascular diseases. Accurate classification of apical two-chamber (A2C), apical three-chamber (A3C), and apical four-chamber (A4C) views and precise detection of the left ventricle can significantly reduce the workload of clinicians and improve the reproducibility of left ventricle segmentation. In addition, left ventricle detection is significant for the three-dimensional reconstruction of the heart chambers. Method. RetinaNet is a one-stage object detection algorithm that achieves high accuracy and efficiency at the same time. RetinaNet is mainly composed of a residual network (ResNet), a feature pyramid network (FPN), and two fully convolutional networks (FCNs): one FCN for the classification task and the other for the bounding-box regression task. Results. In this paper, we use the classification subnetwork to classify A2C, A3C, and A4C images and use the regression subnetwork to detect the left ventricle simultaneously. We display not only the position of the left ventricle on the test image but also the view category, which facilitates diagnosis. We used the mean intersection-over-union (mIoU) as the index to measure left ventricle detection performance and accuracy as the index to measure classification of the three views. Our study shows that both the classification and detection results are noteworthy. The classification accuracy rates of A2C, A3C, and A4C are 1.000, 0.935, and 0.989, respectively. The mIoU values of A2C, A3C, and A4C are 0.858, 0.794, and 0.838, respectively.
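The mIoU evaluation metric reduces to computing intersection-over-union per box pair and averaging. A short sketch follows; the (x1, y1, x2, y2) corner convention is a common assumption, not stated in the abstract.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def mean_iou(pred_boxes, true_boxes):
    """Mean IoU over paired predicted and ground-truth boxes."""
    return sum(iou(p, t) for p, t in zip(pred_boxes, true_boxes)) / len(pred_boxes)
```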
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 81
    Publication Date: 2020-08-01
    Description: Ransomware (RW) is a distinctive variety of malware that encrypts the victim's files or locks the user's system, taking the files hostage and causing huge financial losses to users. In this article, we propose a new model that extracts novel features from an RW dataset and classifies RW and benign files. The proposed model can detect a large number of RW samples from various families at runtime, scanning the network, registry activities, and file system throughout execution. API-call series are used to represent the behavior-based features of RW. The technique extracts a fourteen-element feature vector at runtime and analyzes it with online machine learning algorithms to predict RW. To validate effectiveness and scalability, we tested 78,550 recent malicious and benign samples, compared the model with random forest and AdaBoost, and obtained a testing accuracy of 99.56%.
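The abstract does not name the online learners applied to the fourteen-element feature vectors. As a hedged illustration of the general idea only, the sketch below trains a tiny online perceptron (a stand-in, not the authors' model) on labeled feature vectors, updating incrementally as each sample arrives.

```python
class OnlinePerceptron:
    """Tiny online linear classifier: labels are +1 (ransomware) / -1 (benign).
    A generic stand-in for the online learners mentioned in the paper."""
    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if s >= 0 else -1

    def update(self, x, y):
        # Mistake-driven update: adjust weights only on misclassification.
        if self.predict(x) != y:
            for i, xi in enumerate(x):
                self.w[i] += self.lr * y * xi
            self.b += self.lr * y
```

In a real pipeline the feature vector would hold the fourteen runtime behavior features; here two dimensions suffice to show the update rule.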
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 82
    Publication Date: 2020-08-01
    Description: Some cloud services may become invalid because they are located in a dynamically changing network environment. Service substitution is necessary when a cloud service can no longer be used. Existing work on service substitution has mainly focused on service function and quality. To select a more suitable substitute service, process collaboration similarity also needs to be considered. This paper proposes a cluster- and process-collaboration-aware method for service substitution. To compute process collaboration similarity, we use logic Petri nets to model service processes. All service processes are transformed into path strings, and service vectors for the cloud services are generated from these path strings by Word2Vec. The process collaboration similarity of two cloud services is obtained by computing the cosine of their service vectors. Meanwhile, similar cloud services are grouped into service clusters. By calculating function similarity and quality matching, a candidate set for service substitution is generated. The service in the candidate set with the highest process collaboration similarity to the invalid one is chosen as the substitute. Simulation experiments show the proposed method is less time-consuming than traditional methods in finding a substitute service. Meanwhile, the substitute has a high co-occurrence rate with the neighboring services of the invalid cloud service. Thus, the proposed method is efficient and integrates process collaboration well into service substitution.
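The process collaboration similarity reduces to the cosine of two service vectors. A minimal sketch of that computation, with plain Python lists standing in for the Word2Vec embeddings, is:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two service vectors; 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0
```

Selecting the substitute is then an argmax of this value over the candidate set.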
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 83
    Publication Date: 2020-08-01
    Description: Owing to their tastiness, mushrooms often appear in people's daily meals. Nevertheless, many mushroom species have not yet been identified, so the automatic identification of mushroom toxicity is of great value. A number of methods are commonly employed to recognize mushroom toxicity, such as folk experience, chemical testing, animal experiments, and fungal classification, none of which can produce quick, accurate results, and all of which involve a complicated cycle. To solve these problems, we propose an automatic toxicity identification method based on visual features, treating toxicity identification as a binary classification problem. First, intuitive and easily accessible appearance attributes, such as the cap shape and color of mushrooms, are taken as features. Second, missing data in any of the features are handled in two ways. Finally, three pattern-recognition methods, logistic regression, support vector machine, and multigrained cascade forest, are used to construct three different toxicity classifiers for mushrooms. Compared with the logistic regression and support vector machine classifiers, the multigrained cascade forest classifier performs better, with an accuracy of approximately 98%, enhancing the possibility of preventing food poisoning. These classifiers can recognize the toxicity of mushrooms, even that of some unknown species, from their appearance features, and thus have important social and application value.
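The abstract says missing feature values are "handled in two ways" without naming them. Two common options for categorical features, shown purely as hedged illustrations (not necessarily the paper's choices), are mode imputation and treating "missing" as its own category:

```python
from collections import Counter

def impute_mode(values, missing="?"):
    """Replace missing categorical entries with the most frequent observed value."""
    observed = [v for v in values if v != missing]
    mode = Counter(observed).most_common(1)[0][0]
    return [mode if v == missing else v for v in values]

def missing_as_category(values, missing="?", label="unknown"):
    """Alternative: keep missing entries as an explicit category of their own."""
    return [label if v == missing else v for v in values]
```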
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 84
    Publication Date: 2020-08-01
    Description: With the development of technology, hardware requirements and users' expectations for visual quality are getting higher and higher. The multitype tree (MTT) architecture has been proposed by the Joint Video Experts Team (JVET); therefore, in H.266/Versatile Video Coding (H.266/VVC), it is necessary to determine not only the coding unit (CU) depth but also its split mode. Although H.266/VVC achieves significant coding performance gains over H.265/High Efficiency Video Coding (H.265/HEVC), it significantly increases coding complexity and coding time, with the most time-consuming part being the traversal of rate-distortion (RD) calculations for CUs. To solve these problems, this paper proposes an adaptive CU split decision method based on deep learning and multifeature fusion. First, we develop a threshold-based texture classification model to distinguish complex from homogeneous CUs. Second, if a complex CU is an edge CU, a convolutional neural network (CNN) structure based on multifeature fusion is used to classify it; otherwise, an adaptive CNN structure is used. Finally, the division of the CU is determined by the trained network and the CU's parameters. When complex CUs are split, the two CNN schemes process the training samples and terminate the rate-distortion optimization (RDO) calculation for some CUs. Experimental results indicate that the proposed method reduces computational complexity and saves 39.39% of encoding time, thereby achieving fast encoding in H.266/VVC.
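The threshold-based texture classification step can be sketched as a variance test on a CU's luma samples: low variance suggests a homogeneous block, high variance a complex one. The threshold value below is purely illustrative, not taken from the paper.

```python
def is_homogeneous(block, threshold=50.0):
    """Classify a CU's luma block as homogeneous (low texture) if its
    pixel variance falls below a threshold; complex otherwise."""
    pixels = [p for row in block for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    variance = sum((p - mean) ** 2 for p in pixels) / n
    return variance < threshold
```

In the proposed method, only CUs judged complex would go on to the CNN-based split classification.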
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 85
    Publication Date: 2020-08-01
    Description: The posterior transfacet approach has been proven to be a safe and effective access route for treating thoracic disc herniation. However, the factors influencing posterior modified transarticular debridement for thoracic spinal tuberculosis (TST) have not been reported in the clinical literature. From 2009 to 2014, 37 patients with TST underwent posterior modified transfacet debridement and interbody fusion following posterior instrumentation, under the cover of 18 months of antituberculosis chemotherapy. The patients were evaluated preoperatively and postoperatively in terms of Frankel grade, visual analog scale (VAS) pain score, kyphotic Cobb angle, and bone fusion. Blood loss (positive correlation) and focal debridement (positive correlation) affected operative time, and operative time (positive correlation) affected blood loss. Meanwhile, age (positive correlation), PostE (negative correlation), and T_FocalDebridement (positive correlation) affected bone fusion. The accuracy of the naive Bayes classifier model is 86.11%. Our preliminary results show that blood loss and focal debridement affect operative time; operative time affects blood loss; age, PostE, and T_FocalDebridement affect bone fusion; and the naive Bayes classifier model can accurately predict the Kirkaldy-Willis outcome.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 86
    Publication Date: 2020-08-01
    Description: Although significant advances have been made recently in the field of face recognition, systems still have limitations, especially when faces are in different poses, under different levels of illumination, or blurred. In this study, we present a system that can directly identify an individual under all these conditions by extracting the most important features and using them to identify the person. Our method uses a deep convolutional network trained to extract the most important features. A filter then selects the most significant of these features by finding the features greater than zero, storing their indices, and comparing the features of other identities at the same indices with those of the original image. Finally, the selected features of each identity in the dataset are subtracted from the features of the original image to find the minimum value, which indicates the matching identity. This method gives good results because we extract only the most important features, using the filter to recognize the face in different poses. We achieve state-of-the-art face recognition performance using only half of the 128 bytes per face. The system has an accuracy of 99.7% on the Labeled Faces in the Wild dataset and 94.02% on YouTube Faces DB.
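The filtering-and-subtraction matching step can be sketched as follows. The feature values, identity names, and the use of a summed absolute difference as the minimized quantity are illustrative assumptions; the abstract only states that positive-feature indices are selected and differences minimized.

```python
def match_identity(query, gallery):
    """Select the indices where the query's features are positive, then pick
    the gallery identity whose features at those indices are closest to the
    query's (smallest summed absolute difference)."""
    idx = [i for i, f in enumerate(query) if f > 0]

    def residual(feats):
        return sum(abs(query[i] - feats[i]) for i in idx)

    return min(gallery, key=lambda name: residual(gallery[name]))
```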
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 87
    Publication Date: 2020-08-01
    Description: Label imbalance is one of the characteristics of multilabel data, and imbalanced data seriously affect classifier performance. In multilabel classification, resampling methods are mostly used to deal with imbalance problems. Existing resampling methods balance the data by either undersampling or oversampling, which causes overfitting and information loss, and resampling has a significant impact on the minority labels. Furthermore, the high concurrency of majority and minority labels in many instances also affects classification performance. In this study, we propose a bidirectional resampling method to decouple multilabel datasets. On the one hand, the concurrency of labels can be reduced by setting termination conditions for decoupling; on the other hand, the loss of instance information and overfitting can be alleviated by combining oversampling and undersampling. By measuring the minority labels of the instances, the instances that have less impact on minority labels are selected for resampling. The number of resampled instances is limited to preserve the original distribution of the data during the resampling phase. Experiments on seven benchmark multilabel datasets demonstrate the effectiveness of the algorithm, especially on datasets with high concurrency of majority and minority labels.
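A common way to quantify the per-label imbalance that such methods target is an IRLbl-style ratio: the count of the most frequent label divided by each label's count, so minority labels get the largest ratios. This is a standard measure offered as illustration, not necessarily the one used in the paper.

```python
def imbalance_ratios(label_matrix):
    """Per-label imbalance ratio (IRLbl-style) for a binary indicator matrix:
    rows are instances, columns are labels. Minority labels score highest."""
    n_labels = len(label_matrix[0])
    counts = [sum(row[j] for row in label_matrix) for j in range(n_labels)]
    max_count = max(counts)
    return [max_count / c if c else float("inf") for c in counts]
```

A resampler can use these ratios to decide which labels to oversample and which instances touch them least.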
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 88
    Publication Date: 2020-08-01
    Description: Information on the web is exploding at an exponential pace, so online movie reviews are becoming a substantial information resource for online users. However, users post millions of movie reviews on a regular basis, and it is not feasible for users to summarize them. Movie review classification and summarization is one of the challenging tasks in natural language processing, so an automatic approach is needed to summarize the vast number of movie reviews and allow users to quickly distinguish the positive and negative aspects of a movie. This study proposes an approach for movie review classification and summarization. For classification, the bag-of-words feature extraction technique is used to extract unigrams, bigrams, and trigrams as a feature set from the review documents and represent them in a vector space model. Next, the naive Bayes algorithm is employed to classify the movie reviews (represented as feature vectors) into positive and negative. For summarization, the Word2Vec feature extraction technique is used to extract features from the classified movie review sentences, and a semantic clustering technique is used to cluster semantically related review sentences. Different text features are used to calculate the salience score of each review sentence in the clusters. Finally, the top-ranked sentences are chosen based on the highest salience scores to produce the extractive summary. Experimental results reveal that the proposed machine learning approach outperforms other state-of-the-art approaches.
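The unigram/bigram/trigram extraction step can be sketched directly; whitespace tokenization and lowercasing are simplifying assumptions, not the paper's exact preprocessing.

```python
def extract_ngrams(text, max_n=3):
    """Extract unigram, bigram, and trigram features from a review string."""
    tokens = text.lower().split()
    feats = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            feats.append(" ".join(tokens[i:i + n]))
    return feats
```

Counting these features per document yields the bag-of-words vectors fed to the naive Bayes classifier.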
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 89
    Publication Date: 2020-08-01
    Description: In eastern China, low temperature and low light in winter are the main factors behind the decline of cucumber yield in greenhouses without supplementary lighting, so optimal utilization of light energy is critical to increasing cucumber yield. In this study, experimental measurements were conducted in two scenarios, April to May (Apr-May) and November to December (Nov-Dec) 2015, to analyze the leaf development, dry matter accumulation, and yield of cultivated cucumber. Statistical analysis showed that leaves grown in Nov-Dec had a larger leaf area and lower dry matter than leaves grown in Apr-May, revealing that the dry matter accumulation rate per unit area was lower in winter. Specifically, the yield of 0.174 kg/m2 per day in Nov-Dec was 35.3% lower than that in Apr-May. Environmental monitoring data showed no significant difference in average temperature between the two scenarios, but the light intensity in Nov-Dec was only 2/3 of that in Apr-May. Three-dimensional (3D) cucumber canopy models were used to quantify the effects of weak light on dry matter production in Nov-Dec. Three 3D cucumber canopies were reconstructed with 20, 25, and 30 leaves per plant, respectively, using a parametric modeling method. The light interception of the three canopies from 8:00 to 15:00 on 4 November 2015 was simulated with the radiosity-graphic combined model (RGM) at an hourly time step. CO2 assimilation per unit area was calculated using the FvCB photosynthesis model, so the effects of both light intensity and CO2 concentration on the photosynthetic rate were considered. The leaf photosynthesis simulation showed that during winter daytime, the RuBP-regeneration-limited assimilation Aj was always less than the Rubisco-limited assimilation Ac, meaning that the factor limiting the photosynthesis rate in winter was light intensity. As the CO2 concentration in the greenhouse was highest in the morning, increasing the light intensity, and thereby the canopy light interception, at that time would be highly beneficial to yield production. A comparative analysis of the photosynthetic characteristics of the three virtual 3D canopies showed that the 25-leaf canopy was the best-performing canopy structure for photosynthetic production in winter. This study provides insight into light deficiency for winter yield production and a solution for making optimal use of light in the greenhouse.
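The FvCB model's core rule, net assimilation as the minimum of the Rubisco-limited rate Ac and the RuBP-regeneration-limited rate Aj, can be sketched as below. The parameter defaults are illustrative textbook-style values, not the paper's calibration; the comparison Aj < Ac is exactly the light-limitation diagnosis the abstract reports.

```python
def fvcb_assimilation(ci, vcmax, j, gamma_star=42.75, kc=404.9,
                      ko=278.4, o=210.0, rd=1.0):
    """FvCB leaf photosynthesis sketch: net assimilation is the minimum of the
    Rubisco-limited rate Ac and the RuBP-regeneration-limited rate Aj, minus
    day respiration rd. Parameter values are illustrative defaults only."""
    ac = vcmax * (ci - gamma_star) / (ci + kc * (1.0 + o / ko))  # Rubisco-limited
    aj = j * (ci - gamma_star) / (4.0 * ci + 8.0 * gamma_star)   # light-limited
    return min(ac, aj) - rd
```

With a low electron transport rate j (weak winter light), Aj becomes the binding limit, so raising light intensity raises assimilation.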
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 90
    Publication Date: 2020-08-01
    Description: Education is mandatory, and much research has been invested in this sector. An important aspect of education is how to evaluate learners' progress, and multiple-choice tests are widely used for this purpose. Tests given to learners in the same exam should have equal difficulty for fair judgment; this requirement leads to the problem of generating tests with equal difficulty, also known as the special case of generating tests with a single objective. In practice, however, multiple requirements (objectives) are enforced when making tests; for example, teachers may require the generated tests to have both the same difficulty and the same test duration. In this paper, we propose the use of Multiswarm Multiobjective Particle Swarm Optimization (MMPSO) for generating k tests with multiple objectives in a single run. Additionally, we incorporate Simulated Annealing (SA) to improve the diversity of tests and the accuracy of solutions. Experimental results with various criteria show that our approaches are effective and efficient for the problem of generating multiple tests.
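The SA component hinges on the Metropolis acceptance rule: always accept an improving move, and accept a worsening move with probability exp(-delta/T). A minimal sketch (for a minimization objective, with an injectable random source so it can be tested deterministically) is:

```python
import math
import random

def sa_accept(delta, temperature, rng=random.random):
    """Simulated annealing acceptance rule for minimization: always accept an
    improvement (delta <= 0); accept a worse move with probability
    exp(-delta / temperature)."""
    if delta <= 0:
        return True
    return rng() < math.exp(-delta / temperature)
```

Occasionally accepting worse test assignments this way is what lets SA diversify the generated tests instead of getting stuck in one local optimum.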
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 91
    Publication Date: 2020-08-01
    Description: In the field of satellite data broadcasting, the quality of broadcasting bandwidth management directly affects the throughput of the broadcasting system and plays an important role in satellite performance. In this paper, for sun-synchronous-orbit meteorological satellite broadcasting, which carries both conventional product files and emergency information, a broadcast bandwidth statistical multiplexing and control method is designed for bandwidth management. It can manage broadcasting between regular products and emergency information, as well as internal broadcasting among regular products. This paper is the first to apply common multiplexing of PID and channel mode (CMPCM) to satellite broadcasting. Tests verified that, through the broadcast channel parameters and broadcast schedule management, the channel resources achieved statistical multiplexing of bandwidth, channel-ratio management, and data broadcast control. The broadcasting occupation ratio (BOR) and broadcasting file error ratio (BER) improved significantly. This is significant for improving the efficiency of satellite uplink broadcasting.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 92
    Publication Date: 2020-08-04
    Description: Software maintainability is a crucial property of software projects. It can be defined as the ease with which a software system or component can be modified to be corrected, improved, or adapted to its environment. The software engineering literature proposes many models and metrics to predict the maintainability of a software project statically. However, there is no common accordance with the most dependable metrics or metric suites to evaluate such nonfunctional property. The goals of the present manuscript are as follows: (i) providing an overview of the most popular maintainability metrics according to the related literature; (ii) finding what tools are available to evaluate software maintainability; and (iii) linking the most popular metrics with the available tools and the most common programming languages. To this end, we performed a systematic literature review, following Kitchenham’s SLR guidelines, on the most relevant scientific digital libraries. The SLR outcome provided us with 174 software metrics, among which we identified a set of 15 most commonly mentioned ones, and 19 metric computation tools available to practitioners. We found optimal sets of at most five tools to cover all the most commonly mentioned metrics. The results also highlight missing tool coverage for some metrics on commonly used programming languages and minimal coverage of metrics for newer or less popular programming languages. We consider these results valuable for researchers and practitioners who want to find the best selection of tools to evaluate the maintainability of their projects or to bridge the discussed coverage gaps for newer programming languages.
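Finding "optimal sets of at most five tools to cover all the most commonly mentioned metrics" is an instance of set cover. A greedy sketch (tool and metric names hypothetical, and greedy is an approximation, not guaranteed optimal as in the paper's exhaustive search over small sets) is:

```python
def greedy_tool_cover(tool_metrics, required_metrics):
    """Greedy set cover: repeatedly pick the tool covering the most
    still-uncovered metrics until every required metric is covered
    (or no tool adds coverage)."""
    uncovered = set(required_metrics)
    chosen = []
    while uncovered:
        best = max(tool_metrics, key=lambda t: len(uncovered & tool_metrics[t]))
        gained = uncovered & tool_metrics[best]
        if not gained:  # remaining metrics cannot be covered by any tool
            break
        chosen.append(best)
        uncovered -= gained
    return chosen, uncovered
```

A nonempty `uncovered` on return corresponds to the coverage gaps the authors report for some metrics and languages.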
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 93
    Publication Date: 2020-08-04
    Description: To investigate the intrinsic relationship between residence choice and urban rail transit, this paper establishes a housing valuation model, explores the interface between rail transit and other transport modes through a model, and obtains the household transportation impedance. From the balanced housing price, each district's hedonic cost, and the generalized transportation impedance, the attractiveness of each district to each mobile household is obtained. The satisfaction of each resident is derived by establishing a closeness-degree model. From the satisfaction and the price, we construct a maximum-consumer-surplus model and then obtain, for each mobile household, the residence with the greatest consumer surplus. The results of a numerical example indicate that all high-income mobile households choose to reside in their commute destination district, especially when it is in the suburbs. Furthermore, low-income families choose the commute destination district if it has rail transit and their income allows, or otherwise the district with rail transit nearest to the destination. This illustrates that whether a road has urban rail transit significantly affects only low-income families' residence choice, when their commuting routes pass through that road, and has almost no influence on other families. Hence, reasonable urban planning is important, and urban rail transit should form a network so that it can play its key role.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 94
    Publication Date: 2020-08-10
    Description: In a distributed system, cross-domain access control is an important mechanism for realizing secure data sharing among multiple domains. Most existing cross-domain access control mechanisms are based on a single-server architecture, which has limitations in terms of security and reliability (the access decision may be incorrect) and completeness and confidentiality (the access records can be modified). Blockchain technology, with its decentralization, verifiability, and immutability properties, can solve these problems. Motivated by these facts, in this article, we construct a trusted and efficient cross-domain access control system based on blockchain. We integrate blockchain and role mapping technology to provide a reliable and verifiable cross-domain access process, using blockchain to record user roles, role mapping rules, access policies, and audit records, thereby realizing user self-validation and access nonrepudiation. Considering the low throughput of the blockchain, we design an efficient smart contract that makes the access decision based on users' access history. Finally, a performance evaluation of the system is presented to demonstrate the feasibility of the proposed system.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 95
    Publication Date: 2020-08-01
    Description: Innovation is a game process; in particular, the behavior of the multiple agents involved in responsible innovation is susceptible to the influence of benefits, risks, responsibilities, and other factors, resulting in unstable collaborative relationships. Therefore, this paper constructs a tripartite evolutionary game model including the government, enterprises, and the public, combined with system dynamics modeling to simulate and analyze the three parties' behavior strategies and their sensitivity to relevant exogenous variables. The study shows that the tripartite game eventually converges to a stable state of active government supervision, enterprises engaging in responsible innovation, and positive public participation. Positive public participation drives the game rapidly to a steady state, while the behavioral strategies of enterprises are more susceptible to the behavior of the government. Supervision cost, penalty amount, and value compensation are the most critical factors influencing changes in the corresponding agents' behavior strategies, and the final strategic stability of the three parties is affected by multiple exogenous variables.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 96
    Publication Date: 2020-08-01
    Description: In recent years, big data analysis has become a popular means of exploiting multiple (initially valueless) sources to find relevant knowledge about real domains. However, a large number of big data sources provide textual unstructured data, and proper analysis requires tools able to adequately combine big data and text-analysing techniques. With this in mind, we combined a pipelining framework (BDP4J, Big Data Pipelining For Java) with the implementation of a set of text preprocessing techniques to create NLPA (Natural Language Preprocessing Architecture), an extendable open-source plugin implementing preprocessing steps that can easily be combined into a pipeline. Additionally, NLPA can generate datasets using either a classical token-based representation of the data or newer synset-based datasets that can be further processed using semantic information (i.e., using ontologies). This work presents a case study of NLPA's operation covering the transformation of raw heterogeneous big data into different dataset representations (synsets and tokens) and the use of the Weka application programming interface (API) to launch two well-known classifiers.
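The pipelining idea, composing small preprocessing steps into one callable, can be sketched generically. The step names below are illustrative stand-ins and do not mirror NLPA's or BDP4J's actual API.

```python
def make_pipeline(*steps):
    """Compose preprocessing steps into a single pipeline function:
    the output of each step feeds the next."""
    def run(text):
        for step in steps:
            text = step(text)
        return text
    return run

# Illustrative steps (hypothetical, not NLPA's step set):
lowercase = str.lower
strip_punct = lambda t: "".join(c for c in t if c.isalnum() or c.isspace())
tokenize = str.split  # final step: string -> token list

preprocess = make_pipeline(lowercase, strip_punct, tokenize)
```

The token list produced at the end would correspond to the token-based dataset representation; a synset-based pipeline would swap the final step for a WordNet-style lookup.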
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 97
    Publication Date: 2020-08-03
    Description: With the development of open source community, the software ecosystem has become a popular perspective in the research on software development process and environment. Software productivity is an important evaluation indicator of the software ecosystem health. A successful software ecosystem relies on long-term and stable production activities by the users, which ensures that the software ecosystem can continuously provide the value needed by users. Therefore, the measurement of software ecosystem productivity can help maintain the user development efficiency and the stability of the software ecosystem. However, there is still little literature on the productivity of open source software ecosystems. By analogy with the natural ecosystem, this paper gives the relevant definitions of software ecosystem productivity and analyzes the factors affecting the productivity of software ecosystem. Based on the factors of the ecosystem productivity and their interrelationships, this paper establishes a software ecosystem productivity model and takes the GitHub platform as an example for detailed analysis and explanation. The results show that the model can better explain the factors affecting the productivity of software ecosystems. It is helpful for the research on the measurement of the software ecosystem health and the software development efficiency.
    Print ISSN: 1058-9244
    Electronic ISSN: 1875-919X
    Topics: Computer Science , Media Resources and Communication Sciences, Journalism
    Published by Hindawi
  • 98
    Publication Date: 2020-08-03
    Description: With the advancement of China's interest rate marketization reform, commercial banks' net interest margins have narrowed. This paper selects 16 representative listed banks as the research object and conducts an empirical analysis along two dimensions: profit level and profit structure. The study finds that the marketization of interest rates drove the narrowing of net interest margins, which suppressed the profitability of commercial banks. The narrowing of net interest spreads forced commercial banks to actively expand their intermediate business activities and adjust their business structure correspondingly. Moreover, the narrowing of net interest spreads has different impacts on the profitability of commercial banks of different sizes.
  • 99
    Publication Date: 2020-08-03
    Description: The Internet has revolutionized the patterns of financial development and economic growth. To assess the impact of Internet penetration on the financial industry, this paper analyzed ten years of Chinese provincial panel data and concluded that regional Internet penetration accelerates financial development. Furthermore, the efficiency of Internet investment in underdeveloped provinces is better than that in developed provinces. More meaningfully, Internet penetration promotes the transparency of the securities market and regional financial participation. This indicates that Internet technology facilitates the advancement of the finance industry and the securities market.
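    Provincial panel data of this kind is commonly analyzed with a fixed-effects regression. The sketch below is a minimal within-transformation estimator under that assumption; the panel values and the resulting coefficient are hypothetical, and the paper's actual specification and estimates may differ.

    ```python
    # Minimal province fixed-effects sketch: demean within each province,
    # then regress financial development on Internet penetration.
    from collections import defaultdict

    def within_transform(panel):
        """Demean x and y within each province (removes fixed effects)."""
        groups = defaultdict(list)
        for province, x, y in panel:
            groups[province].append((x, y))
        demeaned = []
        for obs in groups.values():
            mx = sum(x for x, _ in obs) / len(obs)
            my = sum(y for _, y in obs) / len(obs)
            demeaned.extend((x - mx, y - my) for x, y in obs)
        return demeaned

    def ols_slope(pairs):
        """OLS slope through the origin (data already demeaned)."""
        sxy = sum(x * y for x, y in pairs)
        sxx = sum(x * x for x, _ in pairs)
        return sxy / sxx

    panel = [  # (province, internet_penetration, financial_development)
        ("A", 0.2, 1.0), ("A", 0.4, 1.5),
        ("B", 0.5, 2.0), ("B", 0.7, 2.6),
    ]
    beta = ols_slope(within_transform(panel))
    print(round(beta, 2))  # 2.75: a positive within-province association
    ```

    A positive coefficient in such a regression is what "Internet penetration accelerates financial development" would look like after province-level differences are controlled for.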
  • 100
    Publication Date: 2020-09-27
    Description: In recent years, increased attention has been given to software quality assurance and protection. Although considerable verification and protection schemes have been proposed and deployed, today’s software still fails to be protected from cyberattacks, especially in the presence of insecure organization of heap metadata. In this paper, we explore whether heap metadata can be corrupted and exploited by cyberattackers, in an attempt to assess the exploitability of vulnerabilities and ensure software quality. To this end, we propose RELAY, a software testing framework that simulates human exploitation behavior for metadata corruption at the machine level. RELAY employs a heap layout serialization method to construct exploit patterns from human expertise and decomposes complex exploit-solving problems into a series of intermediate state-solving subproblems. With its heap layout procedural method, RELAY consumes fewer resources to solve a layout problem according to the exploit pattern, activates the intermediate state, and generates the final exploit. Additionally, RELAY can be easily extended and can continuously assimilate human knowledge to enhance its ability for exploitability evaluation. Using 20 CTF&RHG programs, we then demonstrate that RELAY can evaluate the exploitability of metadata corruption vulnerabilities and works more efficiently than other state-of-the-art automated tools.
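    The class of bug RELAY targets can be illustrated without the tool itself. The following toy simulation (not RELAY, and not a real allocator) lays out chunks back to back as a real heap does, then shows how an out-of-bounds write into one chunk's data region overwrites the size field in the metadata header of the adjacent chunk; the header width and payload are illustrative assumptions.

    ```python
    # Toy heap: each chunk is [8-byte size header][data], packed contiguously.
    HEADER = 8

    def make_heap(sizes):
        """Lay chunks out back to back and record each chunk's offset."""
        heap = bytearray()
        offsets = []
        for size in sizes:
            offsets.append(len(heap))
            heap += size.to_bytes(HEADER, "little") + bytes(size)
        return heap, offsets

    def chunk_size(heap, offset):
        """Read a chunk's size field from its metadata header."""
        return int.from_bytes(heap[offset:offset + HEADER], "little")

    heap, offsets = make_heap([16, 16])
    assert chunk_size(heap, offsets[1]) == 16  # second chunk starts intact

    # Overflow: write 24 bytes into the first chunk's 16-byte data region.
    payload = b"A" * 16 + (0xFFFF).to_bytes(8, "little")
    start = offsets[0] + HEADER
    heap[start:start + len(payload)] = payload

    # The second chunk's size field is now attacker-controlled.
    print(chunk_size(heap, offsets[1]))  # 65535
    ```

    Turning such a corrupted size field into a working exploit is exactly the layout- and state-solving problem that RELAY decomposes into intermediate subproblems.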