ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Articles  (2,481)
  • 2015-2019  (2,481)
  • Algorithms  (852)
  • IEEE Transactions on Computers (T-C)  (825)
  • Computer Science  (2,481)
  • 1
    Publication Date: 2015-08-12
    Description: We examine a distributed detection problem in a wireless sensor network, where sensor nodes collaborate to detect a Gaussian signal with an unknown change of power, i.e., a scale parameter. Due to power/bandwidth constraints, we consider the case where each sensor quantizes its observation into a binary digit. The binary data are then transmitted through error-prone wireless links to a fusion center, where a generalized likelihood ratio test (GLRT) detector is employed to make a global decision. We study the design of a binary quantizer based on an asymptotic analysis of the GLRT. Interestingly, the quantization threshold is independent of the unknown scale parameter. Numerical results are included to illustrate the performance of the proposed quantizer and GLRT in binary symmetric channels (BSCs).
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
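The quantize-transmit-fuse pipeline this abstract describes can be sketched in much simplified form. The counting rule below stands in for the paper's GLRT, and all names and thresholds are illustrative only:

```python
import random

def sensor_bits(observations, threshold):
    # Each sensor quantizes its local observation into a single bit.
    return [1 if x > threshold else 0 for x in observations]

def bsc(bits, p, rng):
    # Binary symmetric channel: each bit flips independently with probability p.
    return [b ^ (1 if rng.random() < p else 0) for b in bits]

def fusion_decide(received, k):
    # Simplified fusion rule: declare "signal present" if at least k ones arrive.
    return sum(received) >= k
```

With p = 0 the channel is transparent and the decision depends only on the quantizer threshold.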
  • 2
    Publication Date: 2015-08-13
    Description: Hybrid electric vehicles are increasingly common, since they offer advantages such as energy savings and better active safety performance. Hybrid vehicles have two or more power driving systems and frequently switch operating conditions, so stability control is very important. In this work, a two-stage Kalman algorithm is used to fuse data in hybrid vehicle stability testing. First, the RT3102 navigation system and the Dewetron system are introduced. Second, a data fusion model is proposed based on the Kalman filter. This model is then simulated and tested on a sample vehicle, using Carsim and Simulink software. The results demonstrate the merits of this model.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
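The Kalman-filter data fusion the abstract relies on can be illustrated with the scalar measurement update at its core (a minimal sketch, not the paper's two-stage algorithm):

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse prior estimate (x, P)
    with a measurement z of variance R."""
    K = P / (P + R)          # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)  # corrected state estimate
    P_new = (1 - K) * P      # reduced estimate variance
    return x_new, P_new
```

Fusing two equally uncertain sources (P = R) splits the difference and halves the variance.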
  • 3
    Publication Date: 2015-08-05
    Description: Deep learning has recently made great breakthroughs in visual and speech processing, mainly because it draws lessons from the hierarchical way the brain deals with images and speech. In the field of NLP, topic models are one of the important ways of modeling documents. However, topic models are built on a generative model that clearly does not match the way humans write. In this paper, we propose the Event Model, which is unsupervised and based on the language processing mechanisms described by neurolinguistics, to model documents. In the Event Model, documents are descriptions of concrete or abstract events seen, heard, or sensed by people, and words are objects in those events. The Event Model has two stages: word learning and dimensionality reduction. Word learning learns the semantics of words based on deep learning. Dimensionality reduction represents a document as a low-dimensional vector through a linear model that is completely different from topic models. The Event Model achieves state-of-the-art results on document retrieval tasks.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 4
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Digital circuits are expected to suffer from increasingly many hard faults due to technology scaling. In particular, a single hard fault in the ALU (arithmetic logic unit) might lead to a total failure in processors or significantly reduce their performance. To address these increasingly important problems, we propose a novel cost-efficient fault-tolerant mechanism for the ALU, called LIZARD. LIZARD employs two half-word ALUs, instead of a single full-word ALU, to perform computations with concurrent fault detection. When a fault is detected, the two ALUs are partitioned into four quarter-word ALUs. After diagnosing and isolating a faulty quarter-word ALU, LIZARD continues its operation using the remaining ones, which can detect and isolate another fault. Even though LIZARD uses narrow ALUs for computations, it adds negligible performance overhead by exploiting the predictability of results in arithmetic computations. We also present the architectural modifications needed to employ LIZARD in scalar as well as superscalar processors. Through comparative evaluation, we demonstrate that LIZARD outperforms other competitive fault-tolerant mechanisms in terms of area, energy consumption, performance and reliability.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 5
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Information searches are the most common application within social networks. Normally, the social network is modeled as a network graph consisting of nodes (in the rest of the paper, unless otherwise specified, we use the terms “user” and “node” interchangeably) representing users within the network and edges representing relationships between users. Choosing appropriate nodes to form an auxiliary structure for supporting effective query message spreading can reduce troublesome repeated queries. To accomplish this, a hybrid search (HS) scheme is proposed. If the query message is received by a node belonging to the auxiliary structure constructed by dynamic weighted distributed label clustering (DW-DLC), it is flooded to all neighbors of the visited node; otherwise, it is forwarded to one neighbor of the visited node. The DW-DLC-based auxiliary structure can accelerate the process of obtaining required information within the network. The simulation results show that the HS+DW-DLC scheme can reduce the average search delay, even in a required-information-scarce social network. In addition, the proposed scheme generates a relatively low number of repeated messages, avoiding repeatedly querying social network users.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 6
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: This paper presents a derivation of four radix-2 division algorithms by digit recurrence. Each division algorithm selects a quotient digit from the over-redundant digit set {−2, −1, 0, 1, 2}, and the selection of each quotient digit depends only on the two most-significant digits of the partial remainder in a redundant representation. Two algorithms use a two’s complement representation for the partial remainder and carry-save additions, and the other two algorithms use a binary signed-digit representation for the partial remainder and carry-free additions. Three algorithms are novel. The fourth algorithm has been presented before. Results from the synthesized netlists show that two of our fastest algorithms achieve an improvement of 10 percent in latency per iteration over a standard radix-2 SRT algorithm at the cost of 36 percent more power and 50 percent more area.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
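The digit-recurrence scheme this abstract describes can be illustrated with a standard radix-2 recurrence over the conventional redundant digit set {−1, 0, 1} (not the paper's over-redundant set {−2, −1, 0, 1, 2}); the operand ranges here are assumptions for this sketch:

```python
def radix2_divide(x, d, n):
    """Radix-2 digit-recurrence division, quotient digits in {-1, 0, 1}.
    Assumes a normalized divisor 0.5 <= d < 1 and |x| < d.
    Recurrence: w_j = 2*w_{j-1} - q_j*d, so x/d ~= sum_j q_j * 2**-j."""
    assert 0.5 <= d < 1.0 and abs(x) < d
    w, q = x, 0.0
    for j in range(1, n + 1):
        # Select the quotient digit from the shifted partial remainder only.
        if 2 * w >= 0.5:
            digit = 1
        elif 2 * w <= -0.5:
            digit = -1
        else:
            digit = 0
        w = 2 * w - digit * d   # next partial remainder stays in [-d, d]
        q += digit * 2.0 ** (-j)
    return q
```

After n digits the remaining error is bounded by 2^-n.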
  • 7
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: We present WaFS, a user-level file system, and a related scheduling algorithm for scientific workflow computation in the cloud. WaFS’s primary design goal is to automatically detect and gather the explicit and implicit data dependencies between workflow jobs, rather than high-performance file access. Using WaFS’s data, a workflow scheduler can either make effective cost-performance tradeoffs or improve storage utilization. Proper resource provisioning and storage utilization on pay-as-you-go clouds can be more cost effective than the use of resources in traditional HPC systems. WaFS and the scheduler control the number of concurrent workflow instances at runtime so that storage is well used, while the total makespan (i.e., turnaround time for a workload) is not severely compromised. We describe the design and implementation of WaFS and the new workflow scheduling algorithm based on our previous work. We present empirical evidence of the acceptable overheads of our prototype WaFS and describe a simulation-based study, using representative workflows, to show the makespan benefits of our WaFS-enabled scheduling algorithm.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 8
    Publication Date: 2015-08-07
    Description: This paper presents an anomaly detection model that is granular and distributed, to accurately and efficiently identify sensed-data anomalies within wireless sensor networks. A more decentralised mechanism is introduced with wider use of in-network processing on a hierarchical sensor node topology, resulting in a robust framework for dynamic data domains. This efficiently addresses the big data issue encountered in large-scale industrial sensor network applications. Data vectors in each node’s observation domain are first partitioned using an unsupervised approach that adapts to dynamic data streams using cumulative point-wise entropy and average relative density. Second-order statistical analysis applied to average relative densities and mean entropy values is then used to differentiate anomalies through robust and adaptive thresholds that are responsive to a dynamic environment. Anomaly detection is then performed in a non-parametric and non-probabilistic manner over the different network tiers in the hierarchical topology, offering increased granularity for evaluation. Experiments were performed extensively using both real and artificial data distributions representative of different dynamic and multi-density observation domains. Results demonstrate detection accuracies of more than 94 percent, accompanied by a desirable reduction of more than 85 percent in communication costs when compared to existing centralized methods.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 9
    Publication Date: 2015-08-22
    Description: Community detection in a complex network is an important problem that has attracted much interest in recent years. In general, a community detection algorithm chooses an objective function and captures the communities of the network by optimizing that objective function; various heuristics are then used to solve the optimization problem and extract the communities of interest for the user. In this article, we demonstrate a procedure to transform a graph into points of a metric space and develop methods of community detection with the help of a metric defined for a pair of points. We have also studied and analyzed the community structure of the network therein. The results obtained with our approach are very competitive with most of the well-known algorithms in the literature, as justified over a large collection of datasets. Moreover, the time taken by our algorithm is considerably less than that of other methods, which is consistent with the theoretical findings.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 10
    Publication Date: 2015-08-21
    Description: A three-step iterative method with fifth-order convergence is presented as a new modification of Newton’s method. The method finds multiple roots of a nonlinear equation with multiplicity m. Its order of convergence is analyzed and proved. Results for some numerical examples show the efficiency of the new method.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
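For context, the classical second-order modification of Newton's method for a root of multiplicity m (the baseline that such fifth-order methods improve on) can be sketched as follows; the function names are illustrative:

```python
def modified_newton(f, fprime, x0, m, tol=1e-12, max_iter=50):
    """Classical modified Newton iteration x <- x - m*f(x)/f'(x),
    which restores quadratic convergence at a root of multiplicity m."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), fprime(x)
        if fx == 0 or dfx == 0:   # landed exactly on the root
            break
        step = m * fx / dfx
        x -= step
        if abs(step) < tol:
            break
    return x
```

For f(x) = (x − 2)^3 with m = 3, the very first step lands exactly on the root, whereas plain Newton converges only linearly here.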
  • 11
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: The problem of securing data present on USB memories and SD cards has not been adequately addressed in the cryptography literature. While the formal notion of a tweakable enciphering scheme (TES) is well accepted as the proper primitive for secure data storage, the real challenge is to design a low-cost TES which can perform at the data rates of the targeted memory devices. In this work, we provide the first answer to this problem. Our solution, called STES, combines a stream cipher with an XOR-universal hash function. The security of STES is rigorously analyzed in the usual manner of the provable security approach. By carefully defining appropriate variants of the multi-linear hash function and the pseudo-dot-product based hash function we obtain controllable trade-offs between area and throughput. We combine the hash function with recent hardware-oriented stream ciphers, namely Mickey, Grain and Trivium. Our implementations are targeted towards two low-cost FPGAs—Xilinx Spartan 3 and Lattice ICE40. Simulation results demonstrate that the speeds of encryption/decryption match the data rates of different USB and SD memories. We believe that our work opens up the possibility of actually putting FPGAs within controllers of such memories to perform low-level in-place encryption.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 12
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Cellular automata (CAs) have been widely used to model and simulate physical systems and processes. CAs have also been successfully used as a VLSI architecture that proved to be very efficient, at least in terms of silicon-area utilization and clock-speed maximization. Quantum cellular automata (QCAs), one of the promising emerging technologies for nanoscale and quantum computing circuit implementation, provide very high scale integration, very high switching frequency and extremely low power characteristics. In this paper we present a new automated design architecture and a tool, namely DATICAQ (Design Automation Tool of 1-D CAs using QCAs), that builds a bridge between 1-D CAs as models of physical systems and processes and 1-D QCAs as a nanoelectronic architecture. The QCA implementation of CAs not only brings the already developed CA circuits into the nanoelectronics era but also improves their performance significantly. The inputs of the proposed architecture are CA dimensionality, size, local rule, and the initial and boundary conditions imposed by the particular problem. DATICAQ produces as output the layout of the QCA implementation of the particular 1-D CA model. Simulations of CA models for zero and periodic boundary conditions and the corresponding QCA circuits showed that the CA models have been successfully implemented.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
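The 1-D CA models that DATICAQ takes as input can be captured in a few lines; this sketch applies an elementary (radius-1) Wolfram rule with the zero or periodic boundary conditions mentioned in the abstract:

```python
def ca_step(cells, rule, boundary="periodic"):
    """One synchronous update of a 1-D binary cellular automaton
    under an elementary (radius-1) Wolfram rule number."""
    n, out = len(cells), []
    for i in range(n):
        if boundary == "periodic":
            l, r = cells[(i - 1) % n], cells[(i + 1) % n]
        else:  # zero boundary: cells outside the lattice are fixed at 0
            l = cells[i - 1] if i > 0 else 0
            r = cells[i + 1] if i < n - 1 else 0
        idx = (l << 2) | (cells[i] << 1) | r   # 3-bit neighborhood code
        out.append((rule >> idx) & 1)          # look up the rule-table bit
    return out
```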
  • 13
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Role-based access control is an important access control method for securing computer systems. A role-based access control policy can be implemented incorrectly due to various reasons, such as programming errors. Defects in the implementation may lead to unauthorized access and security breaches. To reveal access control defects, this paper presents a model-based approach to automated generation of executable access control tests using predicate/transition nets. Role-permission test models are built by integrating declarative access control rules with functional test models or contracts (preconditions and postconditions) of the associated activities (the system functions). The access control tests are generated automatically from the test models to exercise the interactions of access control activities. They are transformed into executable code through a model-implementation mapping that maps the modeling elements to implementation constructs. The approach has been implemented in an industry-adopted test automation framework that supports the generation of test code in a variety of languages. The full model-based testing process has been applied to three systems implemented in Java. The effectiveness is evaluated through mutation analysis of role-based access control rules. The experiments show that the model-based approach is highly effective in detecting the seeded access control defects.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
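The role-permission models described above can be pictured with a toy policy; the roles, permissions, and helper names here are purely hypothetical, and real test generation in the paper works from predicate/transition nets rather than a dictionary:

```python
# Hypothetical role-permission policy for illustration only.
POLICY = {
    "admin":  {"create_user", "delete_user", "view_report"},
    "viewer": {"view_report"},
}

def check_access(role, permission):
    """Access decision: does the role grant the permission?"""
    return permission in POLICY.get(role, set())

def generate_tests(policy, permissions):
    """Enumerate (role, permission, expected) triples, covering both
    positive (granted) and negative (denied) cases."""
    return [(role, p, p in grants)
            for role, grants in policy.items()
            for p in permissions]
```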
  • 14
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: Heterogeneous multiprocessor systems, which are composed of a mix of processing elements such as commodity multicore processors, graphics processing units (GPUs), and others, have been widely used in the scientific computing community. Software applications incorporate code designed and optimized for different types of processing elements in order to exploit the computing power of such heterogeneous computing systems. In this paper, we consider the problem of optimal distribution of the workload of data-parallel scientific applications between the processing elements of such heterogeneous computing systems. We present a solution that uses functional performance models (FPMs) of processing elements and FPM-based data partitioning algorithms. The efficiency of this approach is demonstrated by experiments with parallel matrix multiplication and numerical simulation of lid-driven cavity flow on hybrid servers and clusters.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
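In the simplest (constant-performance) special case of functional performance models, a data-parallel workload is split in proportion to measured device speeds. A sketch, with hypothetical speed numbers:

```python
def partition_by_speed(n, speeds):
    """Split n work units (e.g., matrix rows) proportionally to the
    measured speeds of the processing elements."""
    total = sum(speeds)
    shares = [int(n * s / total) for s in speeds]
    # Hand any leftover units (from rounding down) to the fastest devices.
    leftover = n - sum(shares)
    for i in sorted(range(len(speeds)), key=lambda i: -speeds[i])[:leftover]:
        shares[i] += 1
    return shares
```

Real FPMs let speed vary with problem size, which this constant-speed sketch deliberately ignores.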
  • 15
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: In this paper, we propose a new notion called k-times attribute-based anonymous access control, which is particularly designed to support cloud computing environments. In this new notion, a user can authenticate himself/herself to the cloud computing server anonymously. The server only learns that the user possesses the required attributes, yet it does not know the identity of the user. In addition, we provide a k-times limit for anonymous access control. That is, the server may limit a particular set of users (i.e., those users with the same set of attributes) to access the system a maximum of k times within a period or an event. Any additional access will be denied. We also prove the security of our instantiation. Our implementation results show that our scheme is practical.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 16
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-08-07
    Description: In the face of high partial and complete disk failure rates and untimely system crashes, the execution of low-priority background tasks has become increasingly frequent in large-scale data centers. However, the existing algorithms are all reactive optimizations and only exploit the temporal locality of workloads to reduce user I/O requests during the low-priority background tasks. To address this problem, this paper proposes Intelligent Data Outsourcing (IDO), a zone-based and proactive data migration optimization, to significantly improve the efficiency of low-priority background tasks. The main idea of IDO is to proactively identify the hot data zones of RAID-structured storage systems in the normal operational state. By leveraging prediction tools to identify upcoming events, IDO proactively migrates the data blocks belonging to the hot data zones on the degraded device to a surrogate RAID set in the large-scale data center. Upon a disk failure or crash reboot, most user I/O requests addressed to the degraded RAID set can be serviced directly by the surrogate RAID set rather than the much slower degraded RAID set. Consequently, the performance of the background tasks and user I/O performance during the background tasks are improved simultaneously. Our lightweight prototype implementation of IDO and extensive trace-driven experiments on two case studies demonstrate that, compared with the existing state-of-the-art approaches, IDO effectively improves the performance of low-priority background tasks. Moreover, IDO is portable and can be easily incorporated into any existing algorithm for RAID-structured storage systems.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 17
    Publication Date: 2015-08-07
    Description: Cloud computing, which provides elastic computing and storage resources on demand, has become increasingly important due to the emergence of “big data”. Cloud computing resources are a natural fit for processing big data streams, as they allow big data applications to run at the scale required for handling their complexities (data volume, variety and velocity). With data no longer under users’ direct control, data security is becoming one of the main concerns in the adoption of cloud computing resources. In order to improve data reliability and availability, storing multiple replicas along with the original datasets is a common strategy for cloud service providers. Public data auditing schemes allow users to verify their outsourced data storage without having to retrieve the whole dataset. However, existing data auditing techniques suffer from efficiency and security problems. First, for dynamic datasets with multiple replicas, the communication overhead for update verifications is very large, because each update requires updating all replicas, and verification of each update requires O(log n) communication complexity. Second, existing schemes cannot provide public auditing and authentication of block indices at the same time. Without authentication of block indices, the server can build a valid proof based on data blocks other than the blocks the client requested to verify. In order to address these problems, this paper presents a novel public auditing scheme named MuR-DPA. The new scheme incorporates a novel authenticated data structure (ADS) based on the Merkle hash tree (MHT), which we call MR-MHT. To support fully dynamic data updates and authentication of block indices, we include rank and level values in the computation of MHT nodes. In contrast to existing schemes, level values of nodes in MR-MHT are assigned in a top-down order, and all replica blocks for each data block are organized into a same-replica sub-tree. Such a configuration allows efficient verification of updates for multiple replicas. Compared to existing integrity verification and public auditing schemes, theoretical analysis and experimental results show that the proposed MuR-DPA scheme not only incurs much less communication overhead for both update verification and integrity verification of cloud datasets with multiple replicas, but also provides enhanced security against dishonest cloud service providers.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
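The Merkle hash tree underlying MR-MHT can be sketched in its plain form (without the rank and level values the paper adds); any change to a data block changes the root, which is what lets an auditor detect tampering:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash of a binary Merkle tree over the given data blocks."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:            # odd level: duplicate the last node
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```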
  • 18
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-06-09
    Description: Bandwidth reservation has been recognized as a value-added service for cloud providers in recent years. We consider an open market of cloud bandwidth reservation, in which cloud providers offer bandwidth reservation services to cloud tenants, especially online streaming service providers, who have strict requirements on the amount of bandwidth needed to guarantee their quality of service. In this paper, we model the open market as a double-sided auction, and propose the first family of STrategy-proof double Auctions for multi-cloud, multi-tenant bandwidth Reservation (STAR). STAR contains two auction mechanisms. The first one, STAR-Grouping, divides the tenants into groups in a bid-independent way, and carefully matches the cloud providers with the tenant groups to form good trades. The second one, STAR-Padding, greedily matches the cloud providers with the tenants, and fills the partially reserved cloud provider(s) with a novel virtual padding tenant who can be a component of the auctioneer. Our analysis shows that both auction mechanisms achieve strategy-proofness and ex-post budget balance. Our evaluation results show that they achieve good performance in terms of social welfare, cloud bandwidth utilization, and tenant satisfaction ratio.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 19
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-06-09
    Description: In a distributed real-time system (DRTS), jobs are often executed on a number of processors and must complete by their end-to-end deadlines. Job deadline requirements may be violated if resource competition among different jobs on a given processor is not considered. This paper introduces a distributed, locally optimal algorithm to assign local deadlines to the jobs on each processor without any restrictions on the mappings of the applications to the processors in the distributed soft real-time system. Improved schedulability results are achieved by the algorithm, since disparate workloads among the processors due to competing jobs having different paths are considered. Given its distributed nature, the proposed algorithm is adaptive to dynamic changes of the applications and avoids the overhead of global clock synchronization. In order to make the proposed algorithm more practical, two derivatives of the algorithm are proposed and compared. Simulation results based on randomly generated workloads indicate that the proposed approach outperforms existing work both in terms of the number of feasible jobs (between 51% and 313% on average) and the number of feasible task sets (between 12% and 71% on average).
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 20
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-06-09
    Description: Reproducibility, i.e., getting bitwise identical floating point results from multiple runs of the same program, is a property that many users depend on either for debugging or correctness checking in many codes [10]. However, the combination of dynamic scheduling of parallel computing resources and floating point non-associativity makes attaining reproducibility a challenge even for simple reduction operations like computing the sum of a vector of numbers in parallel. We propose a technique for floating point summation that is reproducible independent of the order of summation. Our technique uses Rump’s algorithm for error-free vector transformation [7], and is much more efficient than using (possibly very) high precision arithmetic. Our algorithm reproducibly computes highly accurate results with an absolute error bound of $n \cdot 2^{-28} \cdot \mathrm{macheps} \cdot \max_i |v_i|$ at a cost of $7n$ FLOPs and a small constant amount of extra memory usage. Higher accuracies are also possible by increasing the number of error-free transformations. As long as all operations are performed in round-to-nearest mode, results computed by the proposed algorithms are reproducible for any run on any platform. In particular, our algorithm requires the minimum number of reductions, i.e., one reduction of an array of six double precision floating point numbers per sum, and hence is well suited for massively parallel environments.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
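The error-free transformation at the heart of such algorithms is Knuth's two-sum, which splits a floating-point addition into a rounded sum and an exact error term. The sketch below uses it for compensated summation; note that this simple cascade is more accurate than naive summation but still order-dependent, unlike the paper's reproducible algorithm:

```python
def two_sum(a, b):
    """Knuth's error-free transformation: returns (s, e) with
    s = fl(a + b) and s + e == a + b exactly."""
    s = a + b
    bp = s - a                       # the part of b that made it into s
    e = (a - (s - bp)) + (b - bp)    # what was rounded away
    return s, e

def compensated_sum(v):
    """Cascaded summation: accumulate the rounding errors separately."""
    s, comp = 0.0, 0.0
    for x in v:
        s, e = two_sum(s, x)
        comp += e
    return s + comp
```

For example, compensated_sum([1e16, 1.0, -1e16]) recovers 1.0, where plain left-to-right summation returns 0.0.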
  • 21
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-06-09
    Description: In recent years, embedded dynamic random-access memory (eDRAM) technology has been implemented in last-level caches due to its low leakage energy consumption and high density. However, the fact that eDRAM presents slower access times than static RAM (SRAM) technology has prevented its inclusion in higher levels of the cache hierarchy. This paper proposes to mingle SRAM and eDRAM banks within the data array of second-level (L2) caches. The main goal is to achieve the best trade-off among performance, energy, and area. To this end, two main directions have been followed. First, this paper explores the optimal percentage of banks for each technology. Second, the cache controller is redesigned to deal with performance and energy. Performance is addressed by keeping the most likely accessed blocks in fast SRAM banks. In addition, energy savings are further enhanced by avoiding unnecessary destructive reads of eDRAM blocks. Experimental results show that, compared to a conventional SRAM L2 cache, a hybrid approach requiring similar or even lower area speeds up performance on average by 5.9 percent, while achieving total energy savings of 32 percent. For a 45 nm technology node, the energy-delay-area product confirms that a hybrid cache is a better design than the conventional SRAM cache regardless of the number of eDRAM banks, and also better than a conventional eDRAM cache when the number of SRAM banks is an eighth of the total number of cache banks.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 22
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-06-09
    Description: Nearly all of the currently used signature schemes, such as RSA or DSA, are based either on the factoring assumption or the presumed intractability of the discrete logarithm problem. As a consequence, the appearance of quantum computers or algorithmic advances on these problems may lead to the unpleasant situation that a large number of today's schemes will most likely need to be replaced with more secure alternatives. In this work we present such an alternative: an efficient signature scheme whose security is derived from the hardness of lattice problems. It is based on recent theoretical advances in lattice-based cryptography and is highly optimized for practicability and use in embedded systems. The public and secret keys are roughly 1.5 kB and 0.3 kB long, while the signature size is approximately 1.1 kB for a security level of around 80 bits. We provide implementation results on reconfigurable hardware (Spartan/Virtex-6) and demonstrate that the scheme is scalable, has low area consumption, and even outperforms classical schemes.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 23
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-06-09
    Description: With the rising demands on cloud services, electricity consumption has been increasing drastically as the main operational expenditure (OPEX) for data center providers. The geographical heterogeneity of electricity prices motivates us to study the task placement problem over geo-distributed data centers. We exploit the dynamic frequency scaling technique and formulate an optimization problem that minimizes OPEX while guaranteeing the quality-of-service, i.e., the expected response time of tasks. Furthermore, an optimal solution is derived for this formulated problem. The experimental results show that our proposal achieves much higher cost-efficiency than the traditional resizing scheme, i.e., activating/deactivating certain servers in data centers.
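Under the common M/M/1 abstraction, the expected response time at service rate mu and arrival rate lambda is 1/(mu - lambda), and dynamic frequency scaling trades that latency against power. The following is a toy sketch of picking the lowest frequency that still meets a response-time target; all numbers and function names are our own hypothetical illustration, not the paper's optimization.

```python
def mm1_response_time(service_rate, arrival_rate):
    """Expected response time of an M/M/1 queue (requires mu > lambda)."""
    assert service_rate > arrival_rate
    return 1.0 / (service_rate - arrival_rate)

def cheapest_feasible_frequency(freqs, base_rate, arrival_rate, deadline):
    """Lowest frequency whose M/M/1 response time meets the deadline;
    the service rate is assumed proportional to the clock frequency."""
    for f in sorted(freqs):
        mu = base_rate * f
        if mu > arrival_rate and mm1_response_time(mu, arrival_rate) <= deadline:
            return f
    return None  # no frequency in the set meets the deadline

# Hypothetical numbers: 100 tasks/s at f=1.0, 150 tasks/s arriving, 50 ms target.
f = cheapest_feasible_frequency([0.8, 1.0, 1.2, 1.6, 2.0], 100.0, 150.0, 0.05)
assert f == 2.0  # mu = 200: T = 1/(200-150) = 0.02 s <= 0.05 s
```

Running at f=1.6 (mu=160) would give T = 0.1 s and miss the target, so the scheme must pay for the higher frequency; that tension between latency and power is exactly what the formulated optimization balances.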
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 24
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-06-09
    Description: A new methodology for DRAM performance analysis has been proposed based on accurate characterization of DRAM bus cycles. The proposed methodology allows cycle-accurate performance analysis of arbitrary DRAM traces, obviates the need for functional simulations, allows accurate estimation of DRAM performance maximum, and enables root causing of suboptimal DRAM operation.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 25
    Publication Date: 2015-07-30
    Description: In this paper, we present three improvements to a three-point third order variant of Newton’s method derived from the Simpson rule. The first one is a fifth order method using the same number of functional evaluations as the third order method, the second one is a four-point 10th order method and the last one is a five-point 20th order method. From a computational point of view, our methods require four evaluations (one function and three first derivatives) to get fifth order, five evaluations (two functions and three derivatives) to get 10th order and six evaluations (three functions and three derivatives) to get 20th order. Hence, these methods have efficiency indexes of 1.495, 1.585 and 1.648, respectively, which are better than the efficiency index of 1.316 of the third order method. We test the methods through some numerical experiments which show that the 20th order method is very efficient.
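The efficiency indexes quoted above follow Ostrowski's definition p^(1/d), where p is the convergence order and d the number of functional evaluations per iteration. A quick check of the quoted figures (the one-line helper below is our own, not from the paper):

```python
# Ostrowski efficiency index: p**(1/d), with p = convergence order
# and d = functional evaluations per iteration.
def efficiency_index(order: float, evaluations: int) -> float:
    return order ** (1.0 / evaluations)

# Figures quoted in the abstract:
assert round(efficiency_index(5, 4), 3) == 1.495    # fifth order, 4 evals
assert round(efficiency_index(10, 5), 3) == 1.585   # 10th order, 5 evals
assert round(efficiency_index(20, 6), 3) == 1.648   # 20th order, 6 evals
assert round(efficiency_index(3, 4), 3) == 1.316    # base third order method
```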
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 26
    Publication Date: 2015-07-30
    Description: Robust small target detection at low signal-to-noise ratio (SNR) is very important in infrared search and track applications for self-defense or attacks. Due to the complex background, current algorithms have some unsolved issues with the false alarm rate. In order to reduce the false alarm rate, an infrared small target detection algorithm based on saliency detection and a support vector machine is proposed. Firstly, we detect salient regions that may contain targets with the phase spectrum of Fourier transform (PFT) approach. Then, target recognition is performed in the salient regions. Experimental results show the proposed algorithm has ideal robustness and efficiency for real infrared small target detection applications.
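The PFT step can be sketched in a few lines: keep only the phase of the image spectrum, invert, and square, so that sparse structures such as a small target dominate the resulting map. This is a minimal NumPy sketch of the PFT idea only, not the authors' implementation; the Gaussian smoothing usually applied to the map is omitted.

```python
import numpy as np

def pft_saliency(img: np.ndarray) -> np.ndarray:
    """Phase-only reconstruction: sparse/salient structures survive."""
    spectrum = np.fft.fft2(img)
    phase_only = np.exp(1j * np.angle(spectrum))  # discard the magnitude
    return np.abs(np.fft.ifft2(phase_only)) ** 2  # saliency map

# A single bright pixel on a dark background is maximally salient.
img = np.zeros((16, 16))
img[5, 7] = 1.0
sal = pft_saliency(img)
assert np.unravel_index(np.argmax(sal), sal.shape) == (5, 7)
```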
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 27
    Publication Date: 2015-08-06
    Description: In dynamic propagation environments, beamforming algorithms may suffer from strong interference, steering vector mismatches, a low convergence speed and a high computational complexity. Reduced-rank signal processing techniques provide a way to address the problems mentioned above. This paper presents a low-complexity robust data-dependent dimensionality reduction based on an iterative optimization with steering vector perturbation (IOVP) algorithm for reduced-rank beamforming and steering vector estimation. The proposed robust optimization procedure jointly adjusts the parameters of a rank reduction matrix and an adaptive beamformer. The optimized rank reduction matrix projects the received signal vector onto a subspace with lower dimension. The beamformer/steering vector optimization is then performed in a reduced dimension subspace. We devise efficient stochastic gradient and recursive least-squares algorithms for implementing the proposed robust IOVP design. The proposed robust IOVP beamforming algorithms result in a faster convergence speed and an improved performance. Simulation results show that the proposed IOVP algorithms outperform some existing full-rank and reduced-rank algorithms with a comparable complexity.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 28
    Publication Date: 2015-08-07
    Description: Recently, wireless sensor networks (WSNs) have drawn great interest due to their outstanding monitoring and management potential in medical, environmental and industrial applications. Most of the applications that employ WSNs demand all of the sensor nodes to run on a common time scale, a requirement that highlights the importance of clock synchronization. The clock synchronization problem in WSNs is inherently related to parameter estimation. The accuracy of clock synchronization algorithms depends essentially on the statistical properties of the parameter estimation algorithms. Recently, studies dedicated to the estimation of synchronization parameters, such as clock offset and skew, have begun to emerge in the literature. The aim of this article is to provide an overview of the state-of-the-art clock synchronization algorithms for WSNs from a statistical signal processing point of view. This article focuses on describing the key features of the class of clock synchronization algorithms that exploit the traditional two-way message (signal) exchange mechanism. Upon introducing the two-way message exchange mechanism, the main clock offset estimation algorithms for pairwise synchronization of sensor nodes are first reviewed, and their performance is compared. The class of fully-distributed clock offset estimation algorithms for network-wide synchronization is then surveyed. The paper concludes with a list of open research problems pertaining to clock synchronization of WSNs.
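For reference, the classical two-way exchange estimates the clock offset from four timestamps: node A stamps the send time T1, node B stamps reception T2 and its reply T3, and A stamps the return reception T4; assuming symmetric link delays, offset and delay follow directly. A sketch with hypothetical timestamp values:

```python
def two_way_offset_delay(t1, t2, t3, t4):
    """Clock offset and one-way delay from a two-way message exchange,
    assuming symmetric propagation delays (the classical model)."""
    offset = ((t2 - t1) - (t4 - t3)) / 2.0
    delay = ((t2 - t1) + (t4 - t3)) / 2.0
    return offset, delay

# Node B runs 5 time units ahead of node A; the one-way delay is 2 units.
# A sends at t1=0 (A-clock); B receives at t2=0+2+5=7 (B-clock) and
# replies at t3=10 (B-clock); A receives at t4=10+2-5=7 (A-clock).
offset, delay = two_way_offset_delay(0, 7, 10, 7)
assert offset == 5.0 and delay == 2.0
```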
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 29
    Publication Date: 2015-09-16
    Description: In this paper we investigate some parallel variants of Broyden’s method and, for the basic variant, we present its convergence properties. The main result is that the behavior of the considered parallel Broyden’s variants is comparable with the classical parallel Newton method, and significantly better than the parallel Cimmino method, both for linear and nonlinear cases. The considered variants are also compared with two more recently proposed parallel Broyden’s methods. Some numerical experiments are presented to illustrate the advantages and limits of the proposed algorithms.
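Broyden's method replaces the Jacobian in Newton's iteration with a secant approximation updated by a rank-one correction, so each iteration needs only one new function evaluation. The following is a minimal serial sketch of the basic ("good") Broyden iteration on a hypothetical 2x2 test system of our own; the parallel variants studied in the paper distribute these linear-algebra steps.

```python
import numpy as np

def broyden(f, x0, tol=1e-10, max_iter=100):
    """Broyden's 'good' method: rank-one secant update of the Jacobian."""
    x = np.asarray(x0, dtype=float)
    n, h = len(x), 1e-6
    # Initial Jacobian approximation by forward differences.
    B = np.column_stack([(f(x + h * e) - f(x)) / h for e in np.eye(n)])
    fx = f(x)
    for _ in range(max_iter):
        dx = np.linalg.solve(B, -fx)          # quasi-Newton step
        x = x + dx
        fx_new = f(x)
        df = fx_new - fx
        B += np.outer(df - B @ dx, dx) / (dx @ dx)  # rank-one update
        fx = fx_new
        if np.linalg.norm(fx) < tol:
            break
    return x

# Hypothetical test system: x0^2 + x1^2 = 4 and x0 * x1 = 1.
f = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] * x[1] - 1.0])
root = broyden(f, [2.0, 0.5])
assert np.linalg.norm(f(root)) < 1e-8
```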
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 30
    Publication Date: 2015-09-26
    Description: The sign least mean square with reweighted L1-norm constraint (SLMS-RL1) algorithm is an attractive sparse channel estimation method among Gaussian mixture model (GMM) based algorithms for use in impulsive noise environments. The channel sparsity can be exploited by the SLMS-RL1 algorithm based on an appropriate reweighted factor, which is one of the key parameters adjusting the sparse constraint of the SLMS-RL1 algorithm. However, to the best of the authors’ knowledge, a reweighted factor selection scheme has not been developed. This paper proposes a Monte-Carlo (MC) based reweighted factor selection method to further strengthen the performance of the SLMS-RL1 algorithm. To validate the performance of SLMS-RL1 using the proposed reweighted factor, simulation results are provided to demonstrate that the convergence speed can be reduced by increasing the channel sparsity, while the steady-state MSE performance only slightly changes with different GMM impulsive-noise strengths.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 31
    Publication Date: 2015-11-21
    Description: We present a local convergence analysis of an eighth order three step method in order to approximate a locally unique solution of a nonlinear equation in a Banach space setting. In an earlier study by Sharma and Arora (2015), the order of convergence was shown using Taylor series expansions and hypotheses up to the fourth order derivative or even higher of the function involved, which restrict the applicability of the proposed scheme. However, only the first order derivative appears in the proposed scheme. In order to overcome this problem, we propose hypotheses up to only the first order derivative. In this way, we not only expand the applicability of the method but also provide a computable convergence domain. Finally, a variety of concrete numerical examples is presented to obtain solutions of nonlinear equations where earlier studies cannot be applied, since our analysis does not exhibit this type of restriction.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 32
    Publication Date: 2015-11-21
    Description: Lung cancer continues to rank as the leading cause of cancer deaths worldwide. One of the most promising techniques for early detection of cancerous cells relies on sputum cell analysis. This was the motivation behind the design and the development of a new computer aided diagnosis (CAD) system for early detection of lung cancer based on the analysis of sputum color images. The proposed CAD system encompasses four main processing steps. First is the preprocessing step, which utilizes a Bayesian classification method using histogram analysis. Then, in the second step, mean shift segmentation is applied to segment the nuclei from the cytoplasm. The third step is the feature analysis. In this step, geometric and chromatic features are extracted from the nucleus region. These features are used in the diagnostic process of the sputum images. Finally, the diagnosis is completed using an artificial neural network and a support vector machine (SVM) for classifying the cells as benign or malignant. The performance of the system was analyzed based on different criteria, such as sensitivity, specificity and accuracy. The evaluation was carried out using the Receiver Operating Characteristic (ROC) curve. The experimental results demonstrate the efficiency of the SVM classifier over the other classifiers, with 97% sensitivity and accuracy as well as a significant reduction in the false positive and false negative rates.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 33
    Publication Date: 2015-11-20
    Description: Near-infrared spectroscopy (NIRS) enables the non-invasive measurement of changes in hemodynamics and oxygenation in tissue. Changes in light-coupling due to movement of the subject can cause movement artifacts (MAs) in the recorded signals. Several methods have been developed so far that facilitate the detection and reduction of MAs in the data. However, due to fixed parameter values (e.g., global threshold) none of these methods are perfectly suitable for long-term (i.e., hours) recordings or were not time-effective when applied to large datasets. We aimed to overcome these limitations by automation, i.e., data adaptive thresholding specifically designed for long-term measurements, and by introducing a stable long-term signal reconstruction. Our new technique (“acceleration-based movement artifact reduction algorithm”, AMARA) is based on combining two methods: the “movement artifact reduction algorithm” (MARA, Scholkmann et al. Phys. Meas. 2010, 31, 649–662), and the “accelerometer-based motion artifact removal” (ABAMAR, Virtanen et al. J. Biomed. Opt. 2011, 16, 087005). We describe AMARA in detail and report about successful validation of the algorithm using empirical NIRS data, measured over the prefrontal cortex in adolescents during sleep. In addition, we compared the performance of AMARA to that of MARA and ABAMAR based on validation data.
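The data-adaptive thresholding idea (flagging movement artifacts relative to a robust statistic of the recording itself, rather than a fixed global threshold) can be illustrated with a median/MAD rule. This is our own illustrative sketch of adaptive thresholding, not the AMARA algorithm itself.

```python
import numpy as np

def adaptive_artifact_mask(signal: np.ndarray, k: float = 5.0) -> np.ndarray:
    """Flag samples deviating from the median by more than k robust
    standard deviations (median absolute deviation scaled by 1.4826)."""
    med = np.median(signal)
    mad = 1.4826 * np.median(np.abs(signal - med))
    return np.abs(signal - med) > k * mad

# Synthetic accelerometer trace: slow physiological drift plus one
# movement artifact spanning five samples.
t = np.linspace(0, 10, 1000)
trace = 0.05 * np.sin(t)
trace[500:505] += 3.0  # movement artifact
mask = adaptive_artifact_mask(trace)
assert mask[500:505].all() and mask.sum() == 5
```

Because the threshold is derived from the data, the same rule keeps working as baseline amplitude drifts over an hours-long recording, which is the scenario where fixed global thresholds break down.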
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 34
    Publication Date: 2015-08-27
    Description: This paper focuses on the parameter identification problem for Wiener nonlinear dynamic systems with moving average noises. In order to improve the convergence rate, a gradient-based iterative algorithm is presented, which replaces the unmeasurable variables with their corresponding iterative estimates and computes the noise estimates iteratively from the obtained parameter estimates. The simulation results show that the proposed algorithm can effectively estimate the parameters of Wiener systems with moving average noises.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 35
    Publication Date: 2015-06-02
    Description: In this paper, the dynamical behavior of different optimal iterative schemes of increasing order for solving nonlinear equations is studied. The tendency of the complexity of the Julia set is analyzed and related to the fractal dimension. In fact, this fractal dimension can be shown to be a powerful tool to compare iterative schemes that estimate the solution of a nonlinear equation. Based on the box-counting algorithm, several iterative derivative-free methods of different convergence orders are compared.
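The box-counting algorithm estimates a fractal dimension as the slope of log N(eps) against log(1/eps), where N(eps) is the number of boxes of side eps the set touches. A minimal sketch (our own illustration, not the paper's code): a straight segment should come out with dimension close to 1.

```python
import numpy as np

def box_counting_dimension(points, scales=(2, 4, 8, 16, 32, 64)):
    """Slope of log N(eps) vs. log(1/eps) for points in the unit square,
    where eps = 1/s for each grid resolution s in `scales`."""
    counts = []
    for s in scales:
        # Distinct grid cells of side 1/s touched by the point set.
        cells = {tuple(np.floor(p * s).astype(int)) for p in points}
        counts.append(len(cells))
    slope, _ = np.polyfit(np.log(scales), np.log(counts), 1)
    return slope

# A straight segment in the unit square has box-counting dimension 1.
pts = np.array([[t, t] for t in np.linspace(0, 0.999, 4000)])
dim = box_counting_dimension(pts)
assert 0.9 < dim < 1.1
```

Applied to the Julia set of an iterative scheme (sampled as a point cloud), the same slope quantifies how the set's complexity grows as the scheme's order increases.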
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 36
    Publication Date: 2016-07-22
    Description: Clustering is a fundamental task in data mining. Affinity propagation clustering (APC) is an effective and efficient clustering technique that has been applied in various domains. APC iteratively propagates information between affinity samples, updates the responsibility matrix and availability matrix, and employs these matrices to choose cluster centers (or exemplars) of respective clusters. However, since it mainly uses negative Euclidean distance between exemplars and samples as the similarity between them, it is difficult to identify clusters with complex structure. Therefore, the performance of APC deteriorates on samples distributed with complex structure. To mitigate this problem, we propose an improved APC based on a path-based similarity (APC-PS). APC-PS firstly utilizes negative Euclidean distance to find exemplars of clusters. Then, it employs the path-based similarity to measure the similarity between exemplars and samples, and to explore the underlying structure of clusters. Next, it assigns non-exemplar samples to their respective clusters via that similarity. Our empirical study on synthetic and UCI datasets shows that the proposed APC-PS significantly outperforms original APC and other related approaches.
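A common formalization of path-based similarity is the minimax path: the similarity between two samples is the maximum over all connecting paths of the minimum edge similarity along that path, so two points joined by a chain of close neighbors become similar even when they are far apart. A Floyd-Warshall style sketch (our own illustration of the notion, not the exact APC-PS procedure):

```python
import numpy as np

def path_based_similarity(S: np.ndarray) -> np.ndarray:
    """Minimax-path similarity: P[i,j] = max over paths of the minimum
    pairwise similarity along the path (Floyd-Warshall recursion)."""
    P = S.copy()
    n = len(P)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                P[i, j] = max(P[i, j], min(P[i, k], P[k, j]))
    return P

# Chain a-b-c: a and c are dissimilar directly, but linked through b.
S = np.array([[1.0, 0.8, 0.1],
              [0.8, 1.0, 0.9],
              [0.1, 0.9, 1.0]])
P = path_based_similarity(S)
assert P[0, 2] == 0.8  # raised from 0.1 via path a-b-c: min(0.8, 0.9)
```

This is why a path-based measure can follow elongated or non-convex cluster shapes that a plain negative-Euclidean-distance similarity cannot.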
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 37
    Publication Date: 2016-07-23
    Description: Graph-based semi-supervised classification uses a graph to capture the relationship between samples and exploits label propagation techniques on the graph to predict the labels of unlabeled samples. However, it is difficult to construct a graph that faithfully describes the relationship between high-dimensional samples. Recently, low-rank representation has been introduced to construct a graph, which can preserve the global structure of high-dimensional samples and help to train accurate transductive classifiers. In this paper, we take advantage of low-rank representation for graph construction and propose an inductive semi-supervised classifier called Semi-Supervised Classification based on Low-Rank Representation (SSC-LRR). SSC-LRR first utilizes a linearized alternating direction method with adaptive penalty to compute the coefficient matrix of low-rank representation of samples. Then, the coefficient matrix is adopted to define a graph. Finally, SSC-LRR incorporates this graph into a graph-based semi-supervised linear classifier to classify unlabeled samples. Experiments are conducted on four widely used facial datasets to validate the effectiveness of the proposed SSC-LRR and the results demonstrate that SSC-LRR achieves higher accuracy than other related methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 38
    Publication Date: 2016-07-23
    Description: This research proposes a two-stage user-based collaborative filtering process using an artificial immune system for the prediction of student grades, along with a filter for professor ratings in the course recommendation for college students. We test cosine similarity and Karl Pearson (KP) correlation in affinity calculations for clustering and prediction. This research uses student information and professor information datasets of Yuan Ze University from the years 2005–2009 for the purpose of testing and training. The mean average error and confusion matrix analysis form the testing parameters. A minimum professor rating was tested to check the results, and we observed that the recommendation systems herein provide highly accurate results for students with higher mean grades.
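The two affinity measures being compared are standard; for vectors of grades they differ only in that Pearson correlation first centers each vector by its mean, making it insensitive to a student who grades uniformly higher. A quick sketch with hypothetical grade vectors:

```python
import numpy as np

def cosine_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pearson_corr(a, b):
    # Pearson correlation = cosine similarity of the mean-centered vectors.
    return cosine_sim(a - np.mean(a), b - np.mean(b))

a = np.array([70.0, 80.0, 90.0])  # hypothetical per-course grade vectors
b = np.array([75.0, 85.0, 95.0])  # same pattern, shifted by +5
assert abs(pearson_corr(a, b) - 1.0) < 1e-12  # perfectly correlated
assert 0.99 < cosine_sim(a, b) < 1.0          # high but not exactly 1
```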
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 39
    Publication Date: 2016-07-31
    Description: This paper is concerned with the application of computational intelligence techniques to the conceptual design and development of a large-scale floating settlement. The settlement in question is a design for the area of Urla, which is a rural touristic region located on the west coast of Turkey, near the metropolis of Izmir. The problem at hand includes both engineering and architectural aspects that need to be addressed in a comprehensive manner. We thus cast the task as a multi-objective constrained real-parameter optimization problem. Specifically, we consider three conflicting objectives. The first one aims at maximizing accessibility of urban functions such as housing and public spaces, as well as special functions, such as a marina for yachts and a yacht club. The second one aims at ensuring the wind protection of the general areas of the settlement, by adequately placing them in between neighboring land masses. The third one aims at maximizing visibility of the settlement from external observation points, so as to maximize the exposure of the settlement. To address this complex multi-objective optimization problem and identify lucrative alternative design solutions, a multi-objective harmony search algorithm (MOHS) is developed and applied in this paper. When compared to the Differential Evolution algorithm developed for the problem in the literature, we demonstrate that MOHS achieves competitive or slightly better performance in terms of hypervolume, and gives promising results when the Pareto front approximation is examined.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 40
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-12
    Description: During at-speed test of high performance sequential ICs using scan-based Logic BIST, the IC activity factor (AF) induced by the applied test vectors is significantly higher than that experienced during its in field operation. Consequently, power droop (PD) may take place during both shift and capture phases, which will slow down the circuit under test (CUT) signal transitions. At capture, this phenomenon is likely to be erroneously recognized as due to delay faults. As a result, a false test fail may be generated, with consequent increase in yield loss. In this paper, we propose two approaches to reduce the PD generated at capture during at-speed test of sequential circuits with scan-based Logic BIST using the Launch-On-Shift scheme. Both approaches increase the correlation between adjacent bits of the scan chains with respect to conventional scan-based LBIST. This way, the AF of the scan chains at capture is reduced. Consequently, the AF of the CUT at capture, thus the PD at capture, is also reduced compared to conventional scan-based LBIST. The former approach, hereinafter referred to as Low-Cost Approach (LCA), enables a 50 percent reduction in the worst case magnitude of PD during conventional logic BIST. It requires a small cost in terms of area overhead (of approximately 1.5 percent on average), and it does not increase the number of test vectors over the conventional scan-based LBIST to achieve the same Fault Coverage (FC). Moreover, compared to three recent alternative solutions, LCA features a comparable AF in the scan chains at capture, while requiring lower test time and area overhead. The second approach, hereinafter referred to as High-Reduction Approach (HRA), enables scalable PD reductions at capture of up to 87 percent, with limited additional costs in terms of area overhead and number of required test vectors for a given target FC, over our LCA approach. 
Particularly, compared to two of the three recent alternative solutions mentioned above, HRA enables a significantly lower AF in the scan chains during the application of test vectors, while requiring either a comparable area overhead or a significantly lower test time. Compared to the remaining alternative solution mentioned above, HRA enables a similar AF in the scan chains at capture (approximately 90 percent lower than conventional scan-based LBIST), while requiring a significantly lower test time (approximately 4.87 times fewer test vectors on average) and comparable area overhead (of approximately 1.9 percent on average).
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 41
    Publication Date: 2016-07-19
    Description: During a construction project life cycle, project costs and time estimations contribute greatly to baseline scheduling. Besides, schedule risk analysis and project control are also influenced by the above factors. Although many papers have offered estimation techniques, little attempt has been made to generate project time series data as daily progressive estimations in different project environments that could help researchers in generating general and customized formulae in further studies. This paper, however, is an attempt to introduce a new simulation approach to reflect the data regarding time series progress of the project, considering the specifications and the complexity of the project and the environment where the project is performed. Moreover, this simulator can equip project managers with estimated information, which reassures them of the execution stages of the project although they lack historical data. A case study is presented to show the usefulness of the model and its applicability in practice. In this study, singular spectrum analysis has been employed to analyze the simulated outputs, and the results are separated based on their signal and noise trends. The signal trend is used as a point-of-reference to compare the outputs of a simulation employing S-curve technique results and the formulae corresponding to earned value management, as well as the life of a given project.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 42
    Publication Date: 2016-07-27
    Description: This paper discusses the parameter estimation problems of multi-input output-error autoregressive (OEAR) systems. By combining the auxiliary model identification idea and the data filtering technique, a data filtering based recursive generalized least squares (F-RGLS) identification algorithm and a data filtering based iterative least squares (F-LSI) identification algorithm are derived. Compared with the F-RGLS algorithm, the proposed F-LSI algorithm is more effective and can generate more accurate parameter estimates. The simulation results confirm this conclusion.
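Both F-RGLS and F-LSI build on a recursive least squares core, in which each new regressor/output pair updates the parameter estimate through a gain vector and a covariance recursion. The following is our own illustration of that plain RLS core on a hypothetical noiseless 2-tap FIR system, not the paper's filtered algorithms.

```python
import numpy as np

def rls_identify(phi_seq, y_seq, n_params, lam=1.0):
    """Plain recursive least squares: theta_k = theta_{k-1} + K_k * error."""
    theta = np.zeros(n_params)
    P = 1e4 * np.eye(n_params)  # large initial covariance (weak prior)
    for phi, y in zip(phi_seq, y_seq):
        K = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + K * (y - phi @ theta)  # correct by prediction error
        P = (P - np.outer(K, phi @ P)) / lam   # covariance update
    return theta

# Identify a hypothetical 2-tap FIR system y_k = 0.5*u_k - 0.3*u_{k-1}.
rng = np.random.default_rng(0)
u = rng.standard_normal(200)
phi_seq = [np.array([u[k], u[k - 1]]) for k in range(1, 200)]
y_seq = [0.5 * u[k] - 0.3 * u[k - 1] for k in range(1, 200)]
theta = rls_identify(phi_seq, y_seq, 2)
assert np.allclose(theta, [0.5, -0.3], atol=1e-4)
```

The paper's data filtering technique additionally passes the regressors and outputs through an estimated noise-model filter before this update, which is what improves the estimates under the OEAR noise structure.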
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 43
    Publication Date: 2016-08-05
    Description: The force-directed paradigm is one of the few generic approaches to drawing graphs. Since force-directed algorithms can be extended easily, they are used frequently. Most of these algorithms are, however, quite slow on large graphs, as they compute a quadratic number of forces in each iteration. We give a new algorithm that takes only O ( m + n log n ) time per iteration when laying out a graph with n vertices and m edges. Our algorithm approximates the true forces using the so-called well-separated pair decomposition. We perform experiments on a large number of graphs and show that we can strongly reduce the runtime, even on graphs with less than a hundred vertices, without a significant influence on the quality of the drawings (in terms of the number of crossings and deviation in edge lengths).
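Each iteration of a classic force-directed layout computes repulsive forces between all vertex pairs, which is the quadratic cost the paper attacks with the well-separated pair decomposition. Below is a naive sketch of one iteration's repulsive pass in the Fruchterman-Reingold style (our own illustration); since each pair contributes equal and opposite forces, the total must sum to zero, which gives a handy sanity check.

```python
import numpy as np

def repulsive_forces(pos: np.ndarray, k: float = 1.0) -> np.ndarray:
    """Naive O(n^2) Fruchterman-Reingold repulsion: |f| = k^2 / d per pair."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            delta = pos[i] - pos[j]
            d = np.linalg.norm(delta) + 1e-12  # avoid division by zero
            f = (k * k / d) * (delta / d)      # repulsion along delta
            forces[i] += f
            forces[j] -= f                     # Newton's third law
    return forces

rng = np.random.default_rng(1)
pos = rng.random((50, 2))                      # 50 random 2-D vertices
F = repulsive_forces(pos)
assert np.allclose(F.sum(axis=0), 0.0, atol=1e-9)  # pairwise forces cancel
```

An approximation such as the well-separated pair decomposition replaces most of these pairwise terms with one interaction per well-separated cluster pair, which is how the per-iteration cost drops to O(m + n log n).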
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 44
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: The advent of cloud computing makes storage outsourcing a rising trend, which has made secure remote data auditing a hot topic in the research literature. Recently, some research has considered the problem of secure and efficient public data integrity auditing for shared dynamic data. However, these schemes are still not secure against collusion between the cloud storage server and revoked group users during user revocation in a practical cloud storage system. In this paper, we figure out the collusion attack in the existing scheme and provide an efficient public integrity auditing scheme with secure group user revocation based on vector commitment and verifier-local revocation group signature. We design a concrete scheme based on our scheme definition. Our scheme supports public checking and efficient user revocation and also has some nice properties, such as confidentiality, efficiency, countability and traceability of secure group user revocation. Finally, the security and experimental analysis show that, compared with its relevant schemes, our scheme is also secure and efficient.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 45
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: To select an appropriate level of error protection in caches, the impact of various protection schemes on the cache Failure In Time (FIT) rate must be evaluated for a target benchmark suite. However, while many simulation tools exist to evaluate area, power and performance for a set of benchmark programs, there is a dearth of such tools for reliability. This paper introduces a new cache reliability model called PARMA+ that has unique features which distinguish it from previous models. PARMA+ estimates a cache's FIT rate in the presence of spatial multi-bit faults, single-bit faults, temporal multi-bit faults and different error protection schemes including parity, ECC, early write-back and bit-interleaving. We first develop the model formally, then we demonstrate its accuracy. We have run reliability simulations for many distributions of large and small fault patterns and have compared them with accelerated fault injection simulations. PARMA+ has high accuracy and low computational complexity.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 46
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Although the travel time is the most important information in road networks, many spatial queries, e.g., k-nearest-neighbor (k-NN) and range queries, for location-based services (LBS) are only based on the network distance. This is because it is costly for an LBS provider to collect real-time traffic data from vehicles or roadside sensors to compute the travel time between two locations. With the advance of web mapping services, e.g., Google Maps, Microsoft Bing Maps, and MapQuest Maps, there is an invaluable opportunity for using such services for processing spatial queries based on the travel time. In this paper, we propose a server-side Spatial Mashup Service (SMS) that enables the LBS provider to efficiently evaluate k-NN queries in road networks using the route information and travel time retrieved from an external web mapping service. Due to the high cost of retrieving such external information, the usage limits of web mapping services, and the large number of spatial queries, we optimize the SMS for a large number of k-NN queries. We first discuss how the SMS processes a single k-NN query using two optimizations, namely, direction sharing and parallel requesting. Then, we extend them to process multiple concurrent k-NN queries and design a performance tuning tool to provide a trade-off between the query response time and the number of external requests and, more importantly, to prevent a starvation problem in the parallel requesting optimization for concurrent queries. We evaluate the performance of the proposed SMS using MapQuest Maps, a real road network, and real and synthetic data sets. Experimental results show the efficiency and scalability of our optimizations designed for the SMS.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 47
    Publication Date: 2016-07-08
    Description: Several recent works have studied mobile vehicle scheduling to recharge sensor nodes via wireless energy transfer technologies. Unfortunately, most of them overlooked important factors of the vehicles' moving energy consumption and limited recharging capacity, which may lead to problematic schedules or even stranded vehicles. In this paper, we consider the recharge scheduling problem under these important constraints. To balance energy consumption and latency, we employ one dedicated data gathering vehicle and multiple charging vehicles. We first organize sensors into clusters for easy data collection, and obtain theoretical bounds on latency. Then we establish a mathematical model for the relationship between energy consumption and replenishment, and obtain the minimum number of charging vehicles needed. We formulate the scheduling into a Profitable Traveling Salesman Problem that maximizes profit, i.e., the amount of replenished energy less the cost of vehicle movements, and prove it is NP-hard. We devise and compare two algorithms: a greedy one that maximizes the profit at each step, and an adaptive one that partitions the network and forms Capacitated Minimum Spanning Trees per partition. Through extensive evaluations, we find that the adaptive algorithm can keep the number of nonfunctional nodes at zero. It also reduces transient energy depletion by 30-50 percent and saves 10-20 percent energy. Comparisons with other common data gathering methods show that we can save 30 percent energy and reduce latency by two orders of magnitude.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
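    The greedy strategy described above, picking at each step the move that maximizes profit (replenished energy less travel cost), can be sketched as follows. The function name, the node format, and the linear travel-cost model are illustrative assumptions, not the paper's exact formulation.

    ```python
    import math

    def greedy_charging_tour(depot, nodes, capacity, cost_per_unit_dist):
        """Greedy sketch: repeatedly visit the sensor whose replenished
        energy minus travel cost (the profit) is largest, while the charging
        vehicle still has energy left to deliver.
        nodes: dict name -> ((x, y), energy_deficit)."""
        pos, remaining = depot, capacity
        tour, profit = [], 0.0
        todo = dict(nodes)
        while todo and remaining > 0:
            best, best_gain = None, 0.0
            for name, ((x, y), need) in todo.items():
                dist = math.hypot(x - pos[0], y - pos[1])
                gain = min(need, remaining) - cost_per_unit_dist * dist
                if gain > best_gain:
                    best, best_gain = name, gain
            if best is None:              # no profitable move remains
                break
            (x, y), need = todo.pop(best)
            remaining -= min(need, remaining)
            profit += best_gain
            pos = (x, y)
            tour.append(best)
        return tour, profit
    ```

    A far-away node whose travel cost exceeds its replenishable energy is simply never visited, which mirrors the profit-maximizing objective.
    
    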
  • 48
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: The capability of selectively sharing encrypted data with different users via public cloud storage may greatly ease security concerns over inadvertent data leaks in the cloud. A key challenge to designing such encryption schemes lies in the efficient management of encryption keys. The desired flexibility of sharing any group of selected documents with any group of users demands different encryption keys to be used for different documents. However, this also implies the necessity of securely distributing to users a large number of keys for both encryption and search, and those users will have to securely store the received keys, and submit an equally large number of keyword trapdoors to the cloud in order to perform search over the shared data. The implied overheads of secure communication, storage, and computation clearly render the approach impractical. In this paper, we address this practical problem, which is largely neglected in the literature, by proposing the novel concept of key-aggregate searchable encryption (KASE) and instantiating the concept through a concrete KASE scheme, in which a data owner only needs to distribute a single key to a user for sharing a large number of documents, and the user only needs to submit a single trapdoor to the cloud for querying the shared documents. The security analysis and performance evaluation both confirm that our proposed schemes are provably secure and practically efficient.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 49
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Infrastructure-as-a-service (IaaS) cloud providers offer tenants elastic computing resources in the form of virtual machine (VM) instances to run their jobs. Recently, providing predictable performance (i.e., performance guarantees) for tenant applications is becoming increasingly compelling in IaaS clouds. However, the hardware heterogeneity and performance interference across the same type of cloud VM instances can bring substantial performance variation to tenant applications, which inevitably stops the tenants from moving their performance-sensitive applications to the IaaS cloud. To tackle this issue, this paper proposes Heifer, a Heterogeneity and interference-aware VM provisioning framework for tenant applications, focusing on MapReduce as a representative cloud application. It predicts the performance of MapReduce applications by designing a lightweight performance model using the online-measured resource utilization and capturing VM interference. Based on such a performance model, Heifer provisions the VM instances of the best-performing hardware type (i.e., the hardware that achieves the best application performance) to achieve predictable performance for tenant applications, by explicitly exploring the hardware heterogeneity and capturing VM interference. With extensive prototype experiments in our local private cloud and a real-world public cloud (i.e., Microsoft Azure), as well as complementary large-scale simulations, we demonstrate that Heifer can guarantee job performance while saving the job budget for tenants. Moreover, our evaluation results show that Heifer can improve the job throughput of cloud datacenters, such that the revenue of cloud providers can be increased, thereby achieving a win-win situation between providers and tenants.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 50
    Publication Date: 2016-07-08
    Description: Gaussian normal bases (GNBs) are a special class of normal bases (NBs) which yield low-complexity $GF(2^{m})$ arithmetic operations. In this paper, we present new architectures for the digit-level single, hybrid-double, and hybrid-triple multiplication of $GF(2^{m})$ elements based on the GNB representation for odd values of $m > 1$. The proposed fully-serial-in single multipliers perform multiplication of two field elements and offer high throughput when the data-path capacity for entering inputs is limited. The proposed hybrid-double and hybrid-triple digit-level GNB multipliers perform, respectively, two and three field multiplications in the same latency required for a single digit-level multiplier, at the expense of increased area. In addition, we present a new eight-ary field exponentiation architecture which does not require precomputed or stored intermediate values.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 51
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Shamir's secret sharing scheme is an effective way to distribute a secret to a group of shareholders. The security of the unprotected sharing scheme, however, can be easily broken by cheaters or attackers who maliciously feed incorrect shares during the secret recovery stage or inject faults into the hardware computing the secret. In this paper, we propose cheater detection and identification schemes based on robust and algebraic manipulation detection (AMD) codes and m-disjunct matrices (superimposed codes). We present the constructions of codes for cheater detection and identification and describe how the cheater identification problem can be related to classic group testing algorithms based on m-disjunct matrices. Simulation and synthesis results show that the proposed architecture can improve the security level significantly, even under strong cheating attack models, with reasonable area and timing overheads.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
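    The underlying Shamir scheme that the cheater-detection codes protect can be sketched as the textbook $(k, n)$ construction over a prime field; the robust/AMD-code tags of the paper are omitted here. The test only demonstrates the failure those codes are designed to catch: a single tampered share silently changes the recovered secret.

    ```python
    # Minimal sketch of Shamir's (k, n) secret sharing over GF(P).
    # The field prime and function names are illustrative choices.
    import random

    P = 2**61 - 1  # a Mersenne prime, used as the field modulus

    def make_shares(secret, k, n, rng=random):
        """Sample a degree-(k-1) polynomial with constant term `secret`
        and evaluate it at x = 1..n to produce the shares."""
        coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
        def poly(x):
            acc = 0
            for c in reversed(coeffs):      # Horner evaluation mod P
                acc = (acc * x + c) % P
            return acc
        return [(x, poly(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 recovers the constant term."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret
    ```

    Any $k$ honest shares recover the secret exactly; without an integrity check on the shares, nothing in the recovery itself reveals that a cheater fed a bad value.
    
    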
  • 52
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Cloud platforms encompass a large number of storage services that can be used to manage the needs of customers. Each of these services, offered by a different provider, is characterized by specific features, limitations and prices. In the presence of multiple options, it is crucial to select the best solution fitting the customer's requirements in terms of quality of service and costs. Most of the available approaches cannot handle uncertainty in the expression of subjective preferences from customers, and can result in wrong (or sub-optimal) service selections in the presence of rational/selfish providers that expose untrustworthy indications concerning the quality-of-service levels and prices associated with their offers. In addition, due to its multi-objective nature, optimal service selection is a very complex task that should be managed, when possible, in a distributed way, for well-known scalability reasons. In this work, we face the above challenges with three novel contributions. Fuzzy set theory is used to express vagueness in the subjective preferences of the customers. The service selection is resolved with the distributed application of fuzzy inference or the Dempster-Shafer theory of evidence. The selection strategy is also complemented by the adoption of a game-theoretic approach for promoting truth-telling among service providers. We present empirical evidence of the effectiveness of the proposed solution through properly crafted simulation experiments.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 53
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 54
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: As cloud computing technology has developed during the last decade, outsourcing data to cloud services for storage has become an attractive trend, which spares the effort of heavy local data maintenance and management. Nevertheless, since outsourced cloud storage is not fully trustworthy, it raises security concerns on how to realize data deduplication in the cloud while achieving integrity auditing. In this work, we study the problem of integrity auditing and secure deduplication on cloud data. Specifically, aiming at achieving both data integrity and deduplication in the cloud, we propose two secure systems, namely SecCloud and SecCloud$^+$. SecCloud introduces an auditing entity that maintains a MapReduce cloud, which helps clients generate data tags before uploading as well as audit the integrity of data stored in the cloud. Compared with previous work, the computation by the user in SecCloud is greatly reduced during the file uploading and auditing phases. SecCloud$^+$ is motivated by the fact that customers always want to encrypt their data before uploading, and it enables integrity auditing and secure deduplication directly on encrypted data.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 55
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Cache compression improves the performance of a multi-core system by storing more cache blocks in a compressed format. Compression is achieved by exploiting data patterns present within a block. For a given cache space, compression increases the effective cache capacity. However, this increase is limited by the number of tags that can be accommodated in the cache. Prefetching is another technique that improves system performance by fetching cache blocks into the cache ahead of time, hiding the off-chip latency. Commonly used hardware prefetchers, such as stream and stride, fetch multiple contiguous blocks into the cache. In this paper we propose prefetched blocks compaction (PBC), wherein we exploit the data patterns present across these prefetched blocks. PBC compacts the prefetched blocks into a single block with a single tag, effectively increasing the cache capacity. We also modify the cache organization to access these multiple cache blocks residing in a single block without any need for extra tag look-ups. PBC improves the system performance by 11.1 percent on average, with a maximum of 43.4 percent, on a four-core system.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 56
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Multi-core processors achieve a trade-off between performance and power consumption by using Dynamic Voltage Scaling (DVS) techniques. In this paper, we study the power-efficient scheduling problem of real-time tasks in an identical multi-core system, and present the Node Scaling model to achieve power-aware scheduling. We prove that there is a bound speed which results in the minimal power consumption for a given task set, and that the maximal task utilization $u_{max}$ in a task set is a key element in deciding its minimal power consumption. Based on the value of $u_{max}$, we classify task sets into two categories, bounded and non-bounded task sets, and we prove the lower bound of power consumption for each type of task set. Simulations based on Intel Xeon X5550 and PXA270 processors show that the Node Scaling model can achieve power-efficient scheduling when applied to existing algorithms such as EDF-FF and SPA2. The ratio of power reduction depends on a property of the multi-core processor, defined as the ratio of the bound speed to the maximal speed of the cores. As this ratio of speeds decreases, the ratio of power reduction increases for all the power-efficient algorithms.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
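    The existence of a bound speed can be illustrated with the standard convex DVS energy model (a common textbook assumption, not necessarily the paper's exact model): with execution power $P(s) = P_{static} + c s^3$, the energy to finish one unit of work is $E(s) = P(s)/s$, which is convex and minimized at $s^* = (P_{static}/(2c))^{1/3}$. Running slower than $s^*$ wastes static energy; running faster wastes dynamic energy.

    ```python
    # Illustrative DVS energy model; function names and the cubic dynamic
    # power term are assumptions for this sketch.

    def energy_per_work(s, p_static, c):
        """Energy per unit of work at speed s: (P_static + c*s^3) / s."""
        return (p_static + c * s ** 3) / s

    def bound_speed(p_static, c):
        """Speed minimizing energy_per_work: setting dE/ds = 0 gives
        -P_static/s^2 + 2*c*s = 0, i.e., s* = (P_static / (2*c))^(1/3)."""
        return (p_static / (2 * c)) ** (1.0 / 3.0)
    ```

    With $P_{static} = 2$ and $c = 1$, the bound speed is exactly $1.0$, and any slower or faster speed costs more energy per unit of work.
    
    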
  • 57
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Existing secure and privacy-preserving schemes for vehicular communications in vehicular ad hoc networks face some challenges, e.g., reducing the dependence on ideal tamper-proof devices, building efficient member revocation mechanisms and avoiding computation and communication bottlenecks. To cope with those challenges, we propose a highly efficient secure and privacy-preserving scheme based on identity-based aggregate signatures. Our scheme enables hierarchical aggregation and batch verification. The individual identity-based signatures generated by different vehicles can be aggregated and verified in a batch. The aggregated signatures can be re-aggregated by a message collector (e.g., traffic management authority). With our hierarchical aggregation technique, we significantly reduce the transmission/storage overhead of the vehicles and other parties. Furthermore, existing batch verification based schemes in vehicular ad hoc networks require vehicles to wait for enough messages to perform a batch verification. In contrast, we assume that vehicles will generate messages (and the corresponding signatures) in certain time spans, so that vehicles only need to wait for a very short period before they can start the batch verification procedure. Simulation shows that a vehicle can verify the received messages with very low latency and fast response.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 58
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Computer vision applications have a large disparity in operations, data representation and memory access patterns from the early vision stages to the final classification and recognition stages. A hardware system for computer vision has to provide high flexibility without compromising performance, exploiting massively spatial-parallel operations but also keeping a high throughput on data-dependent and complex program flows. Furthermore, the architecture must be modular, scalable and easy to adapt to the needs of different applications. Keeping this in mind, a hybrid SIMD/MIMD architecture for embedded computer vision is proposed. It consists of a coprocessor designed to provide fast and flexible computation of demanding image processing tasks of vision applications. A 32-bit 128-unit device was prototyped on a Virtex-6 FPGA which delivers a peak performance of 19.6 GOP/s and 7.2 W of power dissipation.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 59
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: The key to reducing static energy in supercomputers is switching off their unused components. Routers are the major components of a supercomputer. Whether routers can be effectively switched off or not has become the key to static energy management for supercomputers. For many typical applications, the routers in a supercomputer exhibit low utilization. However, there is no effective method to switch the routers off when they are idle. By analyzing the router occupancy in time and space, for the first time, we present a routing-policy guided topology partitioning methodology to solve this problem. We propose topology partitioning methods for three kinds of commonly used topologies (mesh, torus and fat-tree) equipped with the three most popular routing policies (deterministic routing, directionally adaptive routing and fully adaptive routing). Based on the above methods, we propose the key techniques required in this topology partitioning based static energy management in supercomputer interconnection networks to switch off unused routers in both time and space dimensions. Three topology-aware resource allocation algorithms have been developed to handle effectively different job-mixes running on a supercomputer. We validate the effectiveness of our methodology by using Tianhe-2 and a simulator for the aforementioned topologies and routing policies. The energy savings achieved on a subsystem of Tianhe-2 range from 3.8 to 79.7 percent. This translates into a yearly energy cost reduction of up to half a million US dollars for Tianhe-2.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 60
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: This paper proposes several designs of approximate restoring dividers; two different levels of approximation (cell and array levels) are employed. Three approximate subtractor cells are utilized for integer subtraction as the basic step of division; these cells trade accuracy in subtraction against other metrics, such as circuit complexity and power dissipation. At the array level, exact cells are either replaced or truncated in the approximate divider designs. A comprehensive evaluation of approximation at both cell and array (divider) levels is pursued using error analysis and HSPICE simulation; different circuit metrics, including complexity and power dissipation, are evaluated. Different applications are investigated by utilizing the proposed approximate arithmetic circuits. The simulation results show that, with extensive savings in power dissipation and circuit complexity, the proposed designs offer better error tolerance for quotient-oriented applications (image processing) than for remainder-oriented applications (modulo operations). The proposed approximate restoring divider is significantly better than the approximate non-restoring scheme presented in the technical literature.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
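    The restoring division that these arrays implement can be sketched bit-serially as follows. The `truncate` variant drops the last few iterations (the least-significant subtractor cells) and fills the corresponding quotient bits with 1, which is one simple truncation choice for illustration, not necessarily the cell design the paper evaluates.

    ```python
    # Bit-serial restoring division sketch; function name and the
    # "fill truncated quotient bits with 1" policy are assumptions.

    def restoring_divide(dividend, divisor, nbits, truncate=0):
        """Compute quotient/remainder of an nbits-wide unsigned division
        by iterated shift-and-subtract (restoring on failure)."""
        assert divisor > 0
        rem, quo = 0, 0
        for i in range(nbits - 1, -1, -1):
            if i < truncate:          # truncated cells: guess quotient bit 1
                quo |= 1 << i
                continue
            rem = (rem << 1) | ((dividend >> i) & 1)
            if rem >= divisor:        # subtraction succeeds: quotient bit 1
                rem -= divisor
                quo |= 1 << i
            # otherwise "restore": keep rem unchanged, quotient bit stays 0
        return quo, rem
    ```

    With `truncate=0` the result is exact; truncating the two lowest cells perturbs only the low quotient bits, which is why the error is tolerable for quotient-oriented applications.
    
    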
  • 61
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Wireless sensor networks (WSNs) have been considered the next-generation paradigm for structural health monitoring (SHM) systems due to their low cost, high scalability and ease of deployment. Due to the intrinsically energy-intensive nature of the sensor nodes in SHM applications, it is highly preferable that they can be divided into subsets that take turns monitoring the condition of a structure. This approach is generally called 'coverage-preserving scheduling' and has been widely adopted in existing WSN applications. The problem of partitioning the nodes into subsets is generally called the 'maximum lifetime coverage problem (MLCP)'. However, existing solutions to the MLCP cannot be directly applied to SHM applications: unlike in other WSN applications, we cannot define a specific coverage area independently for each sensor node in SHM, which is the basic assumption in all existing solutions to the MLCP. In this paper, we propose two approaches to solve the MLCP in SHM. The performance of the methods is demonstrated through both extensive simulations and real experiments.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 62
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: NAND flash memory is widely used for the secondary storage of computer systems. The flash translation layer (FTL) is the firmware that manages and operates a flash-based storage device. One of the FTL's modules manages the RAM buffer of the flash device, which is used for both address mapping and data buffering. As the fastest component of the flash storage hierarchy, this buffer has a significant impact on the performance of data storage and access when managed effectively. This paper proposes a novel scheme called TreeFTL for this purpose. TreeFTL organizes address translation pages and data storage pages in a tree-like structure in the RAM buffer. The tree enables TreeFTL to adapt to the access behaviors of workloads by dynamically adjusting the partitions for address mapping and data buffering. Furthermore, TreeFTL employs a lightweight mechanism to evict the least-recently-used victim pages when the need arises. Our experiments show that TreeFTL spends 46.6 and 49.0 percent less service time than two state-of-the-art algorithms, respectively, over various workloads with a 64 MB RAM buffer.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
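    The least-recently-used eviction that TreeFTL relies on can be sketched with a flat LRU buffer holding both mapping and data pages; TreeFTL's tree-structured, dynamically-adjusted partitioning is deliberately collapsed here, and the class and method names are illustrative.

    ```python
    from collections import OrderedDict

    class RamBuffer:
        """Simplified LRU page buffer: on overflow, evict the page at the
        front of the ordered dict (the least recently used one)."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.pages = OrderedDict()          # page_id -> payload

        def access(self, page_id, payload=None):
            if page_id in self.pages:
                self.pages.move_to_end(page_id)   # hit: mark most recent
            else:
                self.pages[page_id] = payload     # miss: load the page
                if len(self.pages) > self.capacity:
                    self.pages.popitem(last=False)  # evict LRU victim
            return self.pages[page_id]
    ```

    Mapping pages that are touched frequently survive eviction, so the buffer's split between mapping and data naturally tracks the workload, the effect TreeFTL engineers explicitly with its tree.
    
    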
  • 63
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Whereas clustered microarchitectures themselves have been extensively studied, the memory units for these clustered microarchitectures have received relatively little attention. This article discusses some of the inherent challenges of clustered memory units and shows how these can be overcome. Clustered memory pipelines work well with the late allocation of load/store queue entries and physically unordered queues. Yet this approach has characteristic problems such as queue overflows and allocation patterns that lead to deadlocks. We propose techniques to solve each of these problems and show that a distributed memory unit can offer significant energy savings and speedups over a centralized unit. For instance, compared to a centralized cache with a load/store queue of 64/24 entries, our four-cluster distributed memory unit with load/store queues of 16/8 entries each consumes 31 percent less energy and performs 4.7 percent better on SPECint, and consumes 36 percent less energy and performs 7 percent better on SPECfp.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 64
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: In Broadcast Encryption (BE) systems like Pay-TV, AACS, online content sharing and broadcasting, reducing the header length (communication overhead per session) is of practical interest. The Subset Difference (SD) scheme due to Naor-Naor-Lotspiech (NNL) is the most popularly used BE scheme. We introduce the $(a,b,\gamma)$ augmented binary tree subset difference ($(a,b,\gamma)$-ABTSD) scheme, which is a generalization of the NNL-SD scheme. By varying the parameters $(a,b,\gamma)$, it is possible to obtain $O(n\log n)$ different schemes. The average header length achieved by the new schemes is smaller than that of all known schemes having the same decryption time as the NNL-SD scheme, achieving non-trivial trade-offs between user storage and header size. The amount of key material that a user is required to store increases. For the applications mentioned above, reducing the header size and achieving fast decryption is perhaps more of a concern than user storage.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 65
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: We propose a new optimal data placement technique to improve the performance of MapReduce in cloud data centers by considering not only data locality but also global data access costs. We first conducted an analytical and experimental study to identify the performance issues of MapReduce in data centers, showing that MapReduce tasks involved in unexpected remote data access incur much greater communication costs and execution time and can significantly deteriorate overall performance. Next, we formulated the problem of optimal data placement, proposed a generative model to minimize the global data access cost in data centers, and showed that the optimal data placement problem is NP-hard. To solve it, we propose a topology-aware heuristic algorithm that first constructs a replica-balanced distribution tree as the abstract structure and then builds a replica-similarity distribution tree for the detailed construction, yielding an optimal replica distribution tree. The experimental results demonstrate that our approach can improve the performance of MapReduce with lower communication and computation costs by effectively minimizing global data access costs, more specifically by reducing unexpected remote data access.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 66
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: This paper presents a Ternary Content-addressable Memory (TCAM) design based on the use of floating-gate (flash) transistors. TCAMs are extensively used in high speed IP networking, and are commonly found in routers in the internet core. Traditional TCAM ICs are built using CMOS devices, and a single TCAM cell utilizes 17 transistors. In contrast, our TCAM cell utilizes only two flash transistors, thereby significantly reducing circuit area. We cover the chip-level architecture of the TCAM IC briefly, focusing mainly on the TCAM block which does fast parallel IP routing table lookup. Our flash-based TCAM (FTCAM) block is simulated in SPICE, and we show that it has a significantly lower area compared to a CMOS based TCAM block, with a speed that can meet current ($\sim$400 Gb/s) data rates found in the internet core.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 67
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: The Booth multiplier has been widely used for high performance signed multiplication by encoding and thereby reducing the number of partial products. A multiplier using the radix-4 (or modified Booth) algorithm is very efficient due to the ease of partial product generation, whereas the radix-8 Booth multiplier is slow due to the complexity of generating the odd multiples of the multiplicand. In this paper, this issue is alleviated by the application of approximate designs. An approximate 2-bit adder is deliberately designed for calculating the sum of $1\times$ and $2\times$ of a binary number. This adder requires a small area, a low power and a short critical path delay. Subsequently, the 2-bit adder is employed to implement the less significant section of a recoding adder for generating the triple multiplicand with no carry propagation. In the pursuit of a trade-off between accuracy and power consumption, two signed $16\times 16$ bit approximate radix-8 Booth multipliers are designed using the approximate recoding adder, with and without the truncation of a number of less significant bits in the partial products. The proposed approximate multipliers are faster and more power efficient than the accurate Booth multiplier. The multiplier with 15-bit truncation achieves the best overall performance in terms of hardware and accuracy when compared to other approximate Booth multiplier designs. Finally, the approximate multipliers are applied to the design of a low-pass FIR filter, where they show better performance than other approximate Booth multipliers.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
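    For reference, an accurate radix-8 Booth recoding (digits in $\{-4,\dots,4\}$, with the troublesome odd multiple $3x$ formed as $x + 2x$, the job of the recoding adder) can be sketched as follows. The paper's contribution is the approximate 2-bit adder inside that recoding adder; this exact sketch does not model the approximation, and `nbits` is assumed to be a multiple of 3 covering the multiplier's two's-complement width.

    ```python
    def booth_radix8_digits(y, nbits):
        """Radix-8 Booth recoding of a two's-complement integer y into
        digits in {-4,...,4}; Python's >> sign-extends, so b(i) is the
        two's-complement bit of y for any i >= 0."""
        b = lambda i: (y >> i) & 1 if i >= 0 else 0   # implicit y_{-1} = 0
        return [-4 * b(i + 2) + 2 * b(i + 1) + b(i) + b(i - 1)
                for i in range(0, nbits, 3)]

    def booth_multiply(x, y, nbits):
        """Multiply via radix-8 Booth digits; 3x is computed as x + 2x
        (here exactly, where the paper uses the approximate adder)."""
        triple = x + (x << 1)                          # the hard odd multiple
        table = {0: 0, 1: x, 2: x << 1, 3: triple, 4: x << 2}
        acc = 0
        for k, d in enumerate(booth_radix8_digits(y, nbits)):
            pp = table[abs(d)]                         # partial product
            acc += (pp if d >= 0 else -pp) << (3 * k)  # weight 8^k
        return acc
    ```

    Each 3-bit digit halves the partial product count again relative to radix-4, which is exactly why the $3\times$ multiple is worth generating cheaply.
    
    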
  • 68
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: Solid State Drives (SSDs) have been extensively deployed as the cache of hard disk-based storage systems. An SSD-based cache generally supplies ultra-large capacity, but managing so large a cache introduces excessive memory overhead, which in turn makes the SSD-based cache neither cost-effective nor energy-efficient. This work aims to reduce the memory overhead introduced by the replacement policy of an SSD-based cache. Traditionally, the data structures involved in a cache replacement policy reside in main memory. However, these in-memory data structures are no longer suitable for an SSD-based cache, since such a cache is much larger than ever. We propose a memory-efficient framework that keeps most data structures in the SSD while leaving only a memory-efficient data structure (i.e., a new Bloom filter variant proposed in this work) in main memory. Our framework can be used to implement any LRU-based replacement policy with negligible memory overhead. We evaluate our proposals via theoretical analysis and a prototype implementation. Experimental results demonstrate that our framework is practical for implementing most replacement policies for large caches, and is able to reduce the memory overhead by about $10\times$.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: A large portion of existing multithreaded embedded software has been programmed according to symmetric shared-memory platforms, where a monolithic memory block is shared by all cores. Such platforms accommodate popular parallel programming models such as POSIX threads and OpenMP. However, with the growing number of cores in modern manycore embedded architectures, they present a bottleneck related to their centralized memory accesses. This paper proposes a solution tailored for the efficient execution of applications defined with shared-memory programming models on on-chip distributed-memory multicore architectures. It shows how performance, area and energy consumption are significantly improved thanks to the scalability of these architectures. This is illustrated in an open-source realistic design framework, including tools from ASIC to microkernel.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: This paper describes a procedure that computes seeds for LFSR-based generation of partially-functional broadside tests. Existing LFSR-based test data compression methods compute seeds based on incompletely-specified test cubes. Functional broadside tests are fully-specified, and they have fully-specified scan-in states. This is the main challenge that the test generation procedure described in this paper needs to address. It addresses it by using a process that modifies an initial seed s_i in order to reduce the Hamming distance between the scan-in state p_i that s_i creates and a reachable state r_j. When the Hamming distance is reduced to zero, the seed can be used for generating functional broadside tests. When the distance is larger than zero, the tests are partially-functional. Experimental results are presented for transition faults in benchmark circuits to demonstrate the resulting distances and fault coverage.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 71
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-07-08
    Description: A new apparatus for fast multiplication of two numbers is introduced. Inputs are split into partitions, and one number is replaced by two with zeros interlaced in every other partition. Products are computed with no carries between partitions, in the time required to multiply the short partitions and add the partial sums. Component adders and multipliers can be chosen to trade off area and speed. A new graphical tool is used to compare this multiplier to existing ones based on CMOS VLSI simulations.
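The partitioning idea above can be checked arithmetically: one operand is split into even- and odd-indexed partitions, producing two sparse numbers whose nonzero partitions are separated by zero partitions, and the final product is the sum of the two sparse products. The partition width and count below are arbitrary illustrative choices.

```python
def interlace_split(y, part=4, parts=4):
    """Split y into even- and odd-indexed `part`-bit partitions,
    yielding two numbers with zeros interlaced in every other
    partition (simplified arithmetic view of the scheme)."""
    even = odd = 0
    for i in range(parts):
        chunk = (y >> (i * part)) & ((1 << part) - 1)
        if i % 2 == 0:
            even |= chunk << (i * part)
        else:
            odd |= chunk << (i * part)
    return even, odd

def partitioned_mul(x, y, part=4, parts=4):
    # x*y = x*even + x*odd; within each sparse product the partial
    # products of distinct partitions are well separated, which is
    # what limits carry propagation between partitions in hardware
    even, odd = interlace_split(y, part, parts)
    return x * even + x * odd
```

The decomposition is exact, so the hardware freedom lies entirely in how the short-partition products and the final additions are implemented.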
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    Publication Date: 2016-06-22
    Description: Sentiment analysis of online social media has attracted significant interest recently. Many studies have been performed, but most existing methods focus on either only textual content or only visual content. In this paper, we utilize deep learning models in a convolutional neural network (CNN) to analyze the sentiment in Chinese microblogs from both textual and visual content. We first train a CNN on top of pre-trained word vectors for textual sentiment analysis and employ a deep convolutional neural network (DNN) with generalized dropout for visual sentiment analysis. We then evaluate our sentiment prediction framework on a dataset collected from a famous Chinese social media network (Sina Weibo) that includes text and related images and demonstrate state-of-the-art results on this Chinese sentiment analysis benchmark.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 73
    Publication Date: 2016-06-23
    Description: We investigate the problem of minimizing the total power consumption under the constraint of the signal-to-noise ratio (SNR) requirement for a physical-layer multicasting system with large-scale antenna arrays. In contrast with existing work, we explicitly consider both the transmit power and the circuit power scaling with the number of antennas. A joint antenna selection and beamforming technique is proposed to minimize the total power consumption. The problem is a challenging one, which aims to minimize a linear combination of the ℓ0-norm and the ℓ2-norm. To the best of our knowledge, this minimization problem has not yet been well solved. A random decremental antenna selection algorithm is designed, which is further modified by an approximation of the minimal transmit power based on the asymptotic orthogonality of the channels. Then, a more efficient decremental antenna selection algorithm is proposed based on minimizing the ℓ0-norm. Performance results show that the ℓ0-norm minimization algorithm greatly outperforms the random selection algorithm in terms of both total power consumption and average run time.
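The transmit-power/circuit-power trade-off behind decremental selection can be sketched with a much-simplified single-user stand-in for the multicast setting: each active antenna adds fixed circuit power, and the transmit power needed to meet the SNR target falls as channel gain accumulates. All gains, targets, and the greedy policy below are illustrative assumptions, not the paper's algorithm.

```python
def total_power(gains, selected, snr_req=10.0, p_circuit=0.5):
    """Transmit power (maximum-ratio style, single user) plus
    per-antenna circuit power for an antenna subset."""
    g = sum(gains[i] for i in selected)
    if g == 0:
        return float("inf")
    return snr_req / g + p_circuit * len(selected)

def decremental_selection(gains, snr_req=10.0, p_circuit=0.5):
    """Greedy decremental selection: repeatedly drop the antenna
    whose removal lowers total power the most, stopping when no
    removal helps (a sketch of the decremental idea)."""
    selected = set(range(len(gains)))
    while len(selected) > 1:
        best = min(selected,
                   key=lambda i: total_power(gains, selected - {i},
                                             snr_req, p_circuit))
        if total_power(gains, selected - {best}, snr_req, p_circuit) \
                < total_power(gains, selected, snr_req, p_circuit):
            selected.remove(best)
        else:
            break
    return selected
```

Antennas with weak channels contribute little gain but full circuit power, so the greedy loop prunes them first.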
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 74
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: Covert channels are widely considered a major risk of information leakage in various operating systems, such as desktop, cloud, and mobile systems. Existing work on modeling covert channels has mainly focused on using finite state machines (FSMs) and their transforms to describe the process of covert channel transmission. However, an FSM is a rather abstract model, in which information about the shared resource, synchronization, and encoding/decoding cannot be represented, making it difficult for researchers to realize and analyze the covert channels. In this paper, we use high-level Petri Nets (HLPN) to model the structural and behavioral properties of covert channels. We use the HLPN to model the classic covert channel protocol. Moreover, the results from the analysis of the HLPN model are used to highlight the major shortcomings and interferences in the protocol. Furthermore, we propose two new covert channel models, namely: (a) the two channel transmission protocol (TCTP) model and (b) the self-adaptive protocol (SAP) model. The TCTP model circumvents the mutual interferences in encoding and synchronization operations, whereas the SAP model uses sleeping time and redundancy checks to ensure correct transmission in an environment with strong noise. To demonstrate the correctness and usability of our proposed models in heterogeneous environments, we implement the TCTP and SAP in three different systems: (a) Linux, (b) Xen, and (c) Fiasco.OC. Our implementation also indicates the practicability of the models in heterogeneous, scalable and flexible environments.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: Recovery from sudden power-off (SPO) is one of the primary concerns among practitioners, as it bars the quick and wide deployment of flash storage devices. In this work, we propose Metadata Embedded Write (MEW), a novel scheme for handling sudden power-off recovery in modern flash storage devices. Given that a large fraction of commercial SSDs employ compression technology, MEW exploits the compression-induced internal fragmentation in the data area to store rich metadata for fast and complete recovery. MEW consists of (i) a metadata embedding scheme to harbor SSD metadata in a physical page together with multiple compressed logical pages, (ii) an allocation-chain-based fast recovery scheme, and (iii) a lightweight metadata logging scheme which enables MEW to maintain the metadata for incompressible data as well. We performed extensive experiments to examine the performance of MEW. The performance overhead of MEW is 3 percent in the worst case, in terms of the write amplification factor, compared to a pure compression-based FTL that has no recovery scheme.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 76
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: Scaling CMOS devices deep into the nanorange reduces their reliability margins significantly. Consequently, accurately calculating the reliability of digital nanocircuits is becoming a necessity for investigating design alternatives to optimize the trade-offs between area-power-delay and reliability. However, accurate reliability calculation of large and highly connected circuits is complex and very time consuming. This paper proposes a progressive consensus-based algorithm for identifying the worst-reliability input vectors and the associated critical logic gates. Improving the reliability of the critical gates helps circuit designers to effectively improve the circuit's overall reliability while having a minimal impact on the traditional power-area-delay design parameters. The accuracy and efficiency of the algorithm can be tuned to fit a variety of applications. The algorithm scales well with circuit size, and is independent of the interconnect complexity and the logic depth. Extensive computational results show that the accuracy and the efficiency of the proposed algorithm are better than the most recent results reported in the literature.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 77
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: The serial-out bit-level multiplication scheme is characterized by an important latency feature: it is able to sequentially generate an output bit of the multiplication result in each clock cycle. However, the computational complexity of the existing serial-out bit-level multipliers in GF(2^m) using normal basis representation limits their usefulness in many applications; hence, an optimized serial-out bit-level multiplier using polynomial basis representation is needed. In this paper, we propose new serial-out bit-level Mastrovito multiplier schemes. We show that, in terms of time complexity, the proposed multiplier schemes outperform the existing serial-out bit-level schemes available in the literature. In addition, using the proposed multiplier schemes, we present new hybrid-double multiplication architectures. To the best of our knowledge, this is the first time such a hybrid multiplier structure using the polynomial basis has been proposed. Prototypes of the presented serial-out bit-level schemes and the proposed hybrid-double multiplication architectures (10 schemes in total) are implemented over both GF(2^163) and GF(2^233), and experimental results are presented.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 78
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: Though a cooperative broadcast scheme has been proposed for fading environments, it has two defects. First, it only handles a packet flow from a single source node in the network, and does not consider the scenario of multiple packet flows simultaneously broadcast from different source nodes. Second, it only allows a single relay node to forward a packet in each time slot, though multiple relay nodes forwarding in a time slot can significantly reduce broadcast latency. In this paper, we aim to achieve low-latency multi-flow broadcast in wireless multi-hop networks with fading channels. To describe the interference among transmissions in different flows, we incorporate the Rayleigh fading model into the signal-to-noise ratio (SNR) model. Then, we introduce a cooperative diversity scheme which allows multiple relays to forward in a time slot to reduce broadcast latency. We then formulate an interesting problem: in a fading environment, what is the optimal relay allocation schedule to minimize the broadcast latency? We propose a warm-up heuristic algorithm for single-flow cooperative broadcast, based on which we further propose a heuristic algorithm for multi-flow cooperative broadcast. Simulation results demonstrate that the two algorithms achieve lower broadcast latency than a previous method.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 79
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: On modern multicore machines, the memory management typically combines address interleaving in hardware and random allocation in the operating system (OS) to improve performance of both memory and cache. The conventional solutions, however, are increasingly strained as a wide variety of workloads run on complicated memory hierarchy and cause contention at multiple levels. We describe a new framework (named HVR) in OS memory management to support a flexible policy space for tackling diverse application needs, integrating vertical partitioning across layers, horizontal partitioning and random-interleaved allocation at a single layer. We exhaustively study the performance of these policies for over 2,000 workloads and correlate performance with application characteristics. Based on this correlation we derive several practical rules of memory allocation that we integrate into the unified HVR framework to guide resource partitioning and sharing for dynamic and diverse workloads. We implement our approach in Linux kernel 2.6.32 as a restructured page indexing system plus a series of kernel modules. Experimental results show that our framework consistently outperforms the unmodified Linux kernel, with up to 21 percent performance gains, and outperforms prior solutions at individual levels of the memory hierarchy.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 80
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: With the explosive growth in data volume, the I/O bottleneck has become an increasingly daunting challenge for big data analytics in the Cloud. Recent studies have shown that moderate to high data redundancy clearly exists in primary storage systems in the Cloud. Our experimental studies reveal that data redundancy exhibits a much higher level of intensity on the I/O path than on disks, due to the relatively high temporal access locality associated with small I/O requests to redundant data. Moreover, directly applying data deduplication to primary storage systems in the Cloud will likely cause space contention in memory and data fragmentation on disks. Based on these observations, we propose a performance-oriented I/O deduplication, called POD, rather than a capacity-oriented I/O deduplication, exemplified by iDedup, to improve the I/O performance of primary storage systems in the Cloud without sacrificing the capacity savings of the latter. POD takes a two-pronged approach to improving the performance of primary storage systems and minimizing the performance overhead of deduplication, namely, a request-based selective deduplication technique, called Select-Dedupe, to alleviate the data fragmentation, and an adaptive memory management scheme, called iCache, to ease the memory contention between the bursty read traffic and the bursty write traffic. We have implemented a prototype of POD as a module in the Linux operating system. The experiments conducted on our lightweight prototype implementation of POD show that POD significantly outperforms iDedup in I/O performance by up to 87.9 percent, with an average of 58.8 percent. Moreover, our evaluation results also show that POD achieves comparable or better capacity savings than iDedup.
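The selective-deduplication idea can be sketched as a write path that only redirects a write to an existing copy when the content has already proven redundant, and otherwise stores it sequentially to avoid fragmenting cold data. The class, threshold, and policy below are illustrative assumptions in the spirit of Select-Dedupe, not its actual implementation.

```python
import hashlib
from collections import defaultdict

class SelectiveDedup:
    """Request-based selective deduplication sketch: a write is
    redirected to an existing copy only once its content fingerprint
    has been seen `hot_threshold` times; cold data is always stored,
    keeping its on-disk layout sequential."""
    def __init__(self, hot_threshold=2):
        self.seen = defaultdict(int)   # fingerprint -> times seen
        self.store = {}                # fingerprint -> location
        self.next_loc = 0
        self.hot_threshold = hot_threshold
        self.dedup_hits = 0

    def write(self, data: bytes) -> int:
        fp = hashlib.sha1(data).hexdigest()
        self.seen[fp] += 1
        if fp in self.store and self.seen[fp] >= self.hot_threshold:
            self.dedup_hits += 1       # redirect: no new space used
            return self.store[fp]
        if fp not in self.store:
            self.store[fp] = self.next_loc
            self.next_loc += 1
        return self.store[fp]
```

Counting hits separately from stored locations makes the space-saving/fragmentation trade-off of the threshold directly observable.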
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 81
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: This manuscript proposes three classes of codes for error correction in a storage system in which the memory cells do not have the same number of levels, i.e., a multiscale storage (MSS) system. The proposed codes are single multiscale-symbol error correction (SMSEC) codes and are capable of correcting any errors occurring on a single memory cell, namely a column-deleted SMSEC code, an element-compacted SMSEC code and a product SMSEC code. In the proposed codes, the codewords are divided into two partitions; the elements of the first partition are over GF(2^b1), while those of the remaining partition are over GF(2^b2). This paper also gives guidelines for selecting among the three SMSEC codes to meet the desired hardware overhead in the parallel decoder for realistic parameters of the partition pair, such as (b1, b2) = (4,3), (4,2) and (3,2). Moreover, it is shown that the best choice for an MSS system is the SMSEC code with the shortest check-bit length; if the check-bit lengths of at least two codes are equal, then the element-compacted SMSEC code incurs the smallest hardware overhead.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: Multi-cloud storage can provide better features such as availability and scalability. Current works use multiple cloud storage providers with erasure coding to achieve certain benefits, such as improving fault-tolerance or avoiding vendor lock-in. However, these works only use multi-cloud storage in ad-hoc ways, and none of them considers the optimization issue in general. In fact, the key to optimizing multi-cloud storage is to effectively choose providers and erasure coding parameters. Meanwhile, the data placement should satisfy system or application developers' requirements. As developers often demand that various objectives be optimized simultaneously, such complex requirement optimization cannot be easily fulfilled in ad-hoc ways. This paper presents Triones, a systematic model that formally formulates data placement in multi-cloud storage using erasure coding. Firstly, Triones addresses the problem of data placement optimization by applying non-linear programming and geometric space abstraction. It can satisfy complex requirements involving multi-objective optimization. Secondly, Triones can effectively balance among different objectives in optimization and is scalable to incorporate new ones. The effectiveness of the model is proved by extensive experiments on multiple cloud storage providers in the real world. For simple requirements, Triones can achieve a 50 percent access latency reduction, compared with the model in µLibCloud. For complex requirements, Triones can improve the fault-tolerance level by 2× and reduce the access latency and vendor lock-in level by 30-70 percent and 49.85 percent respectively, at about 19.19 percent more cost, compared with the model that only optimizes cost in Scalia.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 83
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2016-05-10
    Description: In this paper, we propose a two-factor data security protection mechanism with factor revocability for cloud storage systems. Our system allows a sender to send an encrypted message to a receiver through a cloud storage server. The sender only needs to know the identity of the receiver but no other information (such as the receiver's public key or certificate). The receiver needs to possess two things in order to decrypt the ciphertext. The first is his/her secret key stored in the computer. The second is a unique personal security device which connects to the computer. The ciphertext cannot be decrypted if either piece is missing. More importantly, once the security device is stolen or lost, it can be revoked: it can then no longer be used to decrypt any ciphertext. This is done by the cloud server, which immediately executes some algorithms to change the existing ciphertext so that it is un-decryptable by this device. This process is completely transparent to the sender. Furthermore, the cloud server cannot decrypt any ciphertext at any time. The security and efficiency analysis shows that our system is not only secure but also practical.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Publication Date: 2016-05-27
    Description: Recently, manifold learning has received extensive interest in the pattern recognition community. Despite their appealing properties, most manifold learning algorithms are not robust in practical applications. In this paper, we address this problem in the context of the Hessian locally linear embedding (HLLE) algorithm and propose a more robust method, called RHLLE, which aims to be robust against both outliers and noise in the data. Specifically, we first propose a fast outlier detection method for high-dimensional datasets. Then, we employ a local smoothing method to reduce noise. Furthermore, we reformulate the original HLLE algorithm by using the truncation function from differentiable manifolds. In the reformulated framework, we explicitly introduce a weighted global functional to further reduce the undesirable effect of outliers and noise on the embedding result. Experiments on synthetic as well as real datasets demonstrate the effectiveness of our proposed algorithm.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 85
    Publication Date: 2016-02-07
    Description: A new orthogonal projection method for computing the minimum distance between a point and a spatial parametric curve is presented. It consists of a geometric iteration which converges faster than the existing Newton's method, and it is insensitive to the choice of initial values. We prove that the method is globally second-order convergent when projecting a point onto a spatial parametric curve.
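The footpoint-projection problem being solved can be illustrated with the generic first-order iteration t ← t + ⟨p − c(t), c′(t)⟩ / |c′(t)|², which drives the residual orthogonal to the tangent. This is a textbook iteration used here only to show the setting; it is not the paper's second-order geometric iteration.

```python
def project_point_to_curve(c, dc, p, t0, iters=50, tol=1e-12):
    """Project point p onto parametric curve c(t) with derivative
    dc(t) by the first-order footpoint iteration. Returns the
    parameter of the (local) closest point."""
    t = t0
    for _ in range(iters):
        ct, dt = c(t), dc(t)
        r = [pi - ci for pi, ci in zip(p, ct)]   # residual p - c(t)
        num = sum(ri * di for ri, di in zip(r, dt))
        den = sum(di * di for di in dt)
        step = num / den
        t += step
        if abs(step) < tol:       # residual orthogonal to tangent
            break
    return t
```

At convergence ⟨p − c(t), c′(t)⟩ = 0, which is exactly the orthogonal-projection condition for the minimum distance.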
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 86
    Publication Date: 2016-07-30
    Description: We consider the problem of estimating the measure of subsets in very large networks. A prime tool for this purpose is the Markov Chain Monte Carlo (MCMC) algorithm. This algorithm, while extremely useful in many cases, still often suffers from the drawback of very slow convergence. We show that in a special, but important case, it is possible to obtain significantly better bounds on the convergence rate. This special case is when the huge state space can be aggregated into a smaller number of clusters, in which the states behave approximately the same way (but their behavior still may not be identical). A Markov chain with this structure is called quasi-lumpable. This property allows the aggregation of states (nodes) into clusters. Our main contribution is a rigorously proved bound on the rate at which the aggregated state distribution approaches its limit in quasi-lumpable Markov chains. We also demonstrate numerically that in certain cases this can indeed lead to a significantly accelerated way of estimating the measure of subsets. The result can be a useful tool in the analysis of complex networks, whenever they have a clustering that aggregates nodes with similar (but not necessarily identical) behavior.
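The aggregation step described above can be made concrete: given a state-level transition matrix and a clustering, an aggregated (lumped) transition matrix is formed by averaging cluster-to-cluster transition mass. The uniform within-cluster weights below are a simplifying assumption; for a quasi-lumpable chain the aggregated matrix only approximates the cluster-level dynamics.

```python
def lump(P, clusters):
    """Aggregate transition matrix P over state clusters, using
    uniform weights within each cluster (illustrative assumption)."""
    k = len(clusters)
    Q = [[0.0] * k for _ in range(k)]
    for a, ca in enumerate(clusters):
        w = 1.0 / len(ca)
        for i in ca:
            for b, cb in enumerate(clusters):
                Q[a][b] += w * sum(P[i][j] for j in cb)
    return Q

def step(dist, P):
    """One step of the chain: dist <- dist * P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]
```

Iterating `step` on the small aggregated matrix approaches the cluster-level limiting distribution far faster than simulating the full chain, which is the speed-up the abstract's convergence bound quantifies.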
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 87
    Publication Date: 2015-05-08
    Description: The construction of a similarity matrix is one significant step of the spectral clustering algorithm, and the Gaussian kernel function is one of the most common measures for constructing it. However, with a fixed scaling parameter, the similarity between two data points is not adaptive and is inappropriate for multi-scale datasets. In this paper, by quantifying the importance of each vertex of the similarity graph, the Gaussian kernel function is scaled, and an adaptive Gaussian kernel similarity measure is proposed. An adaptive spectral clustering algorithm is then obtained based on the importance of shared nearest neighbors. The idea is that the greater the importance of the shared neighbors between two vertices, the more likely it is that these two vertices belong to the same cluster; the importance value of the shared neighbors is obtained with an iterative method that considers both the local structural information and the distance similarity information, so as to improve the algorithm's performance. Experimental results on different datasets show that our spectral clustering algorithm outperforms other spectral clustering algorithms, such as self-tuning spectral clustering and adaptive spectral clustering based on shared nearest neighbors, in clustering accuracy on most datasets.
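The motivation for adapting the kernel scale can be illustrated with the well-known locally scaled (self-tuning style) Gaussian similarity, where each point's scale is its distance to its k-th nearest neighbor. This is a simplified stand-in for the shared-nearest-neighbor importance weighting the abstract describes, not that method itself.

```python
import math

def adaptive_similarity(points, k=2):
    """Similarity matrix with a locally scaled Gaussian kernel:
    sigma_i is the distance from point i to its k-th nearest
    neighbor, so the scale adapts to local density."""
    n = len(points)
    d = [[math.dist(points[i], points[j]) for j in range(n)]
         for i in range(n)]
    sigma = [sorted(d[i])[k] for i in range(n)]   # k-th NN distance
    W = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                W[i][j] = math.exp(-d[i][j] ** 2
                                   / (sigma[i] * sigma[j]))
    return W
```

Because sigma_i * sigma_j is symmetric in i and j, the resulting matrix is symmetric, as spectral clustering requires.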
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 88
    Publication Date: 2015-05-09
    Description: In this paper, we propose a detection method for pulmonary nodules in X-ray computed tomography (CT) scans using three image filters and appearance-based k-means clustering. First, voxel values are suppressed in radial directions so as to eliminate extra regions in the volumes of interest (VOIs). Globular regions are enhanced by moment-of-inertia tensors, where the voxel values in the VOIs are regarded as mass. Excessively enhanced voxels are reduced based on the displacement between the VOI centers and the centers of gravity of the voxel values in the VOIs. Initial nodule candidates are determined by these filtering processes. False positives are reduced by first normalizing the directions of intensity distributions in the VOIs, rotating the VOIs based on the eigenvectors of the moment-of-inertia tensors, and then applying an appearance-based two-step k-means clustering technique to the rotated VOIs. The proposed method is applied to actual CT scans, and experimental results are shown.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 89
    Publication Date: 2015-05-09
    Description: We propose a linear time algorithm, called G2DLP, for generating 2D lattice L(n1, n2) paths, equivalent to two-item multiset permutations, with a given number of turns. The term "turn" has three meanings: in the context of multiset permutations, it means that two consecutive elements of a permutation belong to two different items; in lattice path enumeration, it means that the path changes its direction, either from eastward to northward or from northward to eastward; in open shop scheduling, it means that we transfer a job from one type of machine to another. The strategy of G2DLP is divide-and-combine: the division is based on the enumeration results of a previous study and is achieved with the aid of an integer partition algorithm and a multiset permutation algorithm; the combination is accomplished by a concatenation algorithm that constructs the required paths. The advantage of G2DLP is twofold. First, it is optimal in the sense that it directly generates all feasible paths without visiting an infeasible one. Second, it can generate all paths in any specified order of turns, for example, a decreasing order or an increasing order. In practice, two applications, scheduling and cryptography, are discussed.
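The objects being generated can be made concrete with a brute-force enumerator: a lattice path is a string of `n1` east and `n2` north steps, and a turn is a position where consecutive steps differ. This sketch deliberately visits infeasible paths and is only for illustration; G2DLP's point is to generate the feasible ones directly in linear time per path.

```python
from itertools import permutations

def paths_with_turns(n1, n2, turns):
    """Yield all distinct lattice paths with n1 'E' and n2 'N' steps
    that change direction exactly `turns` times (brute force)."""
    seen = set()
    for p in permutations("E" * n1 + "N" * n2):
        if p in seen:
            continue
        seen.add(p)
        t = sum(1 for a, b in zip(p, p[1:]) if a != b)
        if t == turns:
            yield "".join(p)
```

For L(2, 2) there are 6 paths in total; exactly two of them (EENN and NNEE) have a single turn.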
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 90
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Description: $(t,k)$-diagnosis, which is a generalization of sequential diagnosis, requires that at least $k$ faulty processors be identified and repaired in each iteration when there are at most $t$ faulty processors, where $t \ge k$. Based on the assumption that each vertex is adjacent to at least one fault-free vertex, the conditional $(t,k)$-diagnosis of graphs was investigated by using the comparison diagnosis model. Lower bounds on the conditional $(t,k)$-diagnosability of graphs were derived, and applied to obtain the following results. 1) Symmetric $d$-dimensional grids are conditionally $(\frac{N}{2d+1}-1, 2d-1)$-diagnosable when $d \ge 2$ and $N$ (the number of vertices) $\ge 4^d$. 2) Symmetric $d$-dimensional tori are conditionally $(\frac{1}{5}(N+\min\lbrace \frac{8}{5}N^{\frac{2}{3}}, \frac{2N-20}{15}\rbrace - 2), 6)$-diagnosable when $d=2$ and $N \ge 49$, and $(\frac{N}{2d+1}-1, 4d-2)$-diagnosable when $d \ge 3$ and $N \ge 4^d$.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 91
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Description: Deadline-guaranteed packet scheduling for switches is a fundamental issue for providing guaranteed QoS in digital networks. It is a historically difficult NP-hard problem when three or more deadlines are involved. All existing algorithms have throughput too low for practical use. A key reason is that they use packet deadlines as default priorities to decide which packets to drop whenever conflicts occur. Although such a priority structure can ease the scheduling by focusing on one deadline at a time, it hurts throughput greatly. Since deadlines do not necessarily represent the actual importance of packets, we can greatly improve throughput if deadline-induced priority is not enforced. This paper first presents an algorithm that guarantees the maximum throughput for the case where only two different deadlines are allowed. Then, an algorithm called iterative scheduling with no priority (ISNOP) is proposed for the general case where k > 2 different deadlines may occur. Not only does this algorithm have dramatically better average performance than all existing algorithms, but it also guarantees an approximation ratio of 2. ISNOP would provide a good practical solution for the historically difficult packet scheduling problem.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 92
    Publication Date: 2015-05-13
    Description: As the complexity of digital systems increases, their verification and debugging have become a major technical and economic problem. Although many computer-aided design (CAD) solutions have been proposed to enhance the efficiency of existing debugging approaches, they still fail to provide a small set of potential error locations and automatic correction mechanisms. On the other hand, the ever-growing use of digital signal processing (DSP), computer graphics and embedded-systems applications, which can be modeled as polynomial computations in their datapath designs, necessitates an effective method for their verification, debugging and correction. In this paper, we introduce a formal debugging approach based on static slicing and dynamic ranking methods to derive a reduced, ordered set of potential error locations. In addition, to speed up finding true errors in the presence of multiple design errors, error candidates are sorted in decreasing order of their probability of being an error. After that, a mutation-based technique is employed to automatically correct bugs, even in the case of multiple bugs. To evaluate the effectiveness of our approach, we have applied it to several industrial designs. The experimental results show that the proposed technique can locate and correct even multiple bugs with high confidence and in a short run time, even for complex designs of up to several thousand lines of RTL code.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 93
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Description: The series of published works related to differential fault attacks (DFA) against the Grain family requires quite a large number (hundreds) of faults, as well as several assumptions on the locations and timings of the injected faults. In this paper, we present a significantly improved scenario from the adversarial point of view for DFA against the Grain family of stream ciphers. Our model is the most realistic one so far, as it requires the cipher to be re-keyed only a few times and allows faults to be injected at any random location and at any random point in time, i.e., no precise control is needed over the location and timing of fault injections. We construct equations based on the algebraic description of the cipher by introducing new variables so that the degrees of the equations do not increase. In line with algebraic cryptanalysis, we accumulate such equations based on the fault-free and faulty key-stream bits and solve them using the SAT solver Cryptominisat-2.9.5 installed with SAGE 5.7. In a few minutes, we can recover the state of Grain v1, Grain-128 and Grain-128a with as few as 10, 4 and 10 faults, respectively.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 94
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Description: Several papers have studied fault attacks on computing a pairing value $e(P,Q)$, where $P$ is a public point and $Q$ is a secret point. In this paper, we observe that these attacks are in fact effective only on a small number of pairing-based protocols, and even then only when the protocols are implemented with specific symmetric pairings. We demonstrate the effectiveness of the fault attacks on a public-key encryption scheme, an identity-based encryption scheme, and an oblivious transfer protocol when implemented with a symmetric pairing derived from a supersingular elliptic curve with embedding degree 2.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 95
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Description: The key benefits of using the smartphone accelerometer for human mobility analysis, with or without location determination based upon GPS, Wi-Fi or GSM, are that it is energy-efficient, provides real-time contextual information and has high availability. Using accelerometer measurements for human mobility analysis presents its own challenges: we all carry our smartphones differently, and the measurements depend on body placement. Analysis also often relies on on-demand remote data exchange for processing, which is less energy-efficient, incurs higher network costs and is not real-time. We present a novel accelerometer framework based upon a probabilistic algorithm that neutralizes the effect of different smartphone on-body placements and orientations, allowing human movements to be identified more accurately and energy-efficiently. Using solely the embedded smartphone accelerometer, without referencing historical data or filtering accelerometer noise, our method can identify the human mobility state in real time within a time constraint of 2 seconds. The method achieves an overall average classification accuracy of 92 percent when evaluated on a dataset gathered from fifteen individuals covering nine different urban human mobility states.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 96
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Description: Nanoscale process variations in conventional SRAM cells are known to limit voltage scaling in microprocessor caches. Recently, a number of novel cache architectures have been proposed which substitute faulty words of one cache line with healthy words of others, to tolerate these failures at low voltages. These schemes rely on the fault maps to identify faulty words, inevitably increasing the chip area. Besides, the relationship between word sizes and the cache failure rates is not well studied in these works. In this paper, we analyze the word substitution schemes by employing Fault Tree Model and Collision Graph Model. A novel cache architecture (Macho) is then proposed based on this model. Macho is dynamically reconfigurable and is locally optimized (tailored to local fault density) using two algorithms: 1) a graph coloring algorithm for moderate fault densities and 2) a bipartite matching algorithm to support high fault densities. An adaptive matching algorithm enables on-demand reconfiguration of Macho to concentrate available resources on cache working sets. As a result, voltage scaling down to 400 mV is possible, tolerating bit failure rates reaching 1 percent (one failure in every 100 cells). This near-threshold voltage (NTV) operation achieves 44 percent energy reduction in our simulated system (CPU + DRAM models) with a 1 MB L2 cache.
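    The word-substitution step the abstract describes, pairing each faulty word of a cache line with a healthy donor word, can be cast as maximum bipartite matching. As a hedged sketch (the function name, the graph encoding and the word/slot interpretation are illustrative assumptions, not the paper's actual data structures), a standard augmenting-path matcher looks like this:

    ```python
    def max_bipartite_matching(adj, n_right):
        """Kuhn's augmenting-path algorithm for maximum bipartite matching.

        adj[u] lists the right-side vertices (e.g., healthy donor words)
        usable by left-side vertex u (e.g., a faulty word slot).
        Returns (matching_size, match_r), where match_r[v] is the left
        vertex matched to right vertex v, or -1 if v is unmatched.
        """
        match_r = [-1] * n_right

        def try_augment(u, seen):
            for v in adj[u]:
                if not seen[v]:
                    seen[v] = True
                    # v is free, or its current partner can be re-matched
                    if match_r[v] == -1 or try_augment(match_r[v], seen):
                        match_r[v] = u
                        return True
            return False

        size = 0
        for u in range(len(adj)):
            if try_augment(u, [False] * n_right):
                size += 1
        return size, match_r
    ```

    In such a scheme, if every faulty slot can be matched to a donor, the line remains usable at the low voltage; an unmatched slot would force the line to be disabled or served differently.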
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 97
    Publication Date: 2015-05-09
    Description: In this work we generate numerical solutions of Burgers’ equation by applying the Crank-Nicolson method and different schemes for solving the resulting nonlinear systems, instead of using the Hopf-Cole transformation to reduce Burgers’ equation to the linear heat equation. The method is analyzed on two test problems in order to check its efficiency on different kinds of initial conditions. Numerical solutions as well as exact solutions for different values of the viscosity are calculated, and we conclude that the numerical results are very close to the exact solution.
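    A minimal sketch of the approach the abstract describes: Crank-Nicolson in time for $u_t + u u_x = \nu u_{xx}$, with the nonlinear system at each step handled iteratively. The grid, Dirichlet boundary handling and the choice of a simple Picard (lagged-coefficient) iteration are illustrative assumptions; the paper's own nonlinear solvers may differ.

    ```python
    import numpy as np

    def burgers_crank_nicolson(u0, nu, dx, dt, steps, iters=10):
        """Advance Burgers' equation u_t + u u_x = nu u_xx in time with
        Crank-Nicolson; each step's nonlinear system is solved by Picard
        iteration with a lagged convection coefficient. Dirichlet
        boundaries are held at their initial values."""
        u = u0.copy()
        n = len(u)
        for _ in range(steps):
            u_old = u.copy()
            v = u.copy()                      # iterate toward u^{n+1}
            for _ in range(iters):
                A = np.zeros((n, n))
                b = np.zeros(n)
                A[0, 0] = A[-1, -1] = 1.0     # fixed boundary values
                b[0], b[-1] = u_old[0], u_old[-1]
                for i in range(1, n - 1):
                    conv = v[i] / (4 * dx)    # lagged convection coefficient
                    diff = nu / (2 * dx ** 2)
                    A[i, i - 1] = -conv - diff
                    A[i, i] = 1.0 / dt + 2 * diff
                    A[i, i + 1] = conv - diff
                    # explicit half of Crank-Nicolson from time level n
                    b[i] = (u_old[i] / dt
                            - u_old[i] * (u_old[i + 1] - u_old[i - 1]) / (4 * dx)
                            + diff * (u_old[i + 1] - 2 * u_old[i] + u_old[i - 1]))
                v = np.linalg.solve(A, b)
            u = v
        return u
    ```

    With a smooth initial profile, viscosity damps the solution over time while the scheme keeps the boundary values fixed, which is a quick sanity check on any implementation of this type.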
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI Publishing
  • 98
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 99
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Description: We present a custom architecture for realizing the Gentry-Halevi fully homomorphic encryption (FHE) scheme; this contribution is the first full realization of FHE in hardware. The architecture features an optimized multi-million-bit multiplier based on the Schönhage-Strassen multiplication algorithm. Moreover, a number of optimizations, including spectral techniques as well as a precomputation strategy, are used to significantly improve the performance of the overall design. When synthesized using 90 nm technology, the presented architecture realizes the encryption, decryption, and recryption operations in 18.1 msec, 16.1 msec, and 3.1 sec, respectively, and occupies a footprint of less than 30 million gates.
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science
  • 100
    Institute of Electrical and Electronics Engineers (IEEE)
    Publication Date: 2015-05-13
    Description: Providing deadline-sensitive services is a challenge in data centers. Because of the conservativeness of additive-increase congestion avoidance, current transmission control protocols are inefficient at utilizing the very high bandwidth of data centers. This may cause many deadline-sensitive flows to miss their deadlines before achieving their available bandwidths. We propose an Adaptive-Acceleration Data Center TCP, A$^2$DTCP, which takes into account both network congestion and the latency requirement of the application service. By using congestion avoidance with an adaptive increase rate that varies between additive and multiplicative, A$^2$DTCP accelerates bandwidth detection, thus achieving high bandwidth-utilization efficiency. At-scale simulations and real testbed implementations show that A$^2$DTCP significantly reduces the missed-deadline ratio compared to D$^2$TCP and DCTCP. In addition, A$^2$DTCP can co-exist with conventional TCP without requiring more changes in switch hardware than D$^2$TCP and DCTCP.
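    An increase rate that "varies between additive and multiplicative" can be illustrated with a small sketch. Everything below is a hypothetical reconstruction from the abstract alone: the function name, the urgency parameter and the exact interpolation are assumptions, not A$^2$DTCP's published update rule. The idea shown is that the window grows additively when the deadline is loose, approaches multiplicative (doubling) growth as urgency rises, and backs off DCTCP-style in proportion to the ECN-marked fraction.

    ```python
    def adaptive_cwnd_update(cwnd, alpha, urgency):
        """One hypothetical congestion-avoidance step.

        cwnd    -- current congestion window (segments)
        alpha   -- ECN-marked fraction in [0, 1] (DCTCP-style congestion signal)
        urgency -- deadline urgency in [0, 1]; 0 = loose, 1 = tight
        """
        if alpha > 0:
            # congestion: proportional back-off as in DCTCP
            return cwnd * (1 - alpha / 2)
        # no congestion: interpolate between additive (+1) and
        # multiplicative (doubling, i.e., +cwnd) increase
        return cwnd + 1 + urgency * (cwnd - 1)
    ```

    Under this toy rule, an uncongested flow with a tight deadline doubles its window per round trip, while a relaxed flow falls back to classic additive increase.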
    Print ISSN: 0018-9340
    Electronic ISSN: 1557-9956
    Topics: Computer Science