ALBERT

All Library Books, journals and Electronic Records Telegrafenberg



  • 1
    Publication Date: 2020-08-21
    Description: Based on the operating principle of the hydraulic anti-kink system and the flow continuity equation, this paper takes a low-floor tram as the research object and a four-car formation as the research carrier. Using the coupling parameters between the vehicle subsystem and the hydraulic subsystem, a co-simulation platform of a low-floor tram with a hydraulic anti-kink system is built. The co-simulation results show that the anti-kink system maintains the consistency of the relative yaw angles between the car bodies and bogies well, and effectively restrains the maximum yaw angle and excessive lateral displacement of the car body. The agreement between the experimental and simulation results confirms the accuracy of the model. The co-simulation model of the low-floor tram with a hydraulic anti-kink system can therefore be used to study its dynamic performance when negotiating curved track.
    Electronic ISSN: 1996-1073
    Topics: Energy, Environment Protection, Nuclear Power Engineering
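The anti-kink mechanism the abstract describes can be caricatured in a few lines: cross-connected hydraulic cylinders enforce flow continuity between the two articulation joints, which drives the relative yaw angles toward a common value. The following is a hypothetical minimal sketch, not the paper's co-simulation model; the gain and time step are invented for illustration.

```python
# Hypothetical sketch of the anti-kink principle: cross-connected hydraulic
# cylinders enforce flow continuity, so any mismatch between the two
# articulation (relative yaw) angles produces equal and opposite corrective
# rates that pull the angles back together.
def antikink_rates(angle_front, angle_rear, gain=0.5):
    """Corrective angular rates for the two articulation joints (rad/s)."""
    kink = angle_front - angle_rear          # mismatch between the joints
    return -gain * kink, +gain * kink        # equal/opposite: flow continuity

# Crude explicit integration: the two angles converge to their mean.
a, b = 0.10, 0.02                            # initial yaw angles (rad)
dt = 0.1
for _ in range(100):
    da, db = antikink_rates(a, b)
    a += dt * da
    b += dt * db
assert abs(a - b) < 1e-3                     # kink suppressed
```

Because the corrective rates are exactly opposite, the sum of the two angles is conserved, mirroring the fact that the coupled cylinders redistribute rather than create flow.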
  • 2
    Publication Date: 2020-02-07
    Description: Efficient acceleration of transposed convolution layers is essential, since transposed convolution operations, as critical components of the generative model in generative adversarial networks, are inherently computation-intensive. In addition, the pre-processing step of inserting and padding the input feature maps with zeros causes many ineffective operations. Most known FPGA (Field Programmable Gate Array) based architectures for convolution layers cannot tackle these issues. In this paper, we first propose a novel dataflow that splits the filters and their corresponding input feature maps into four sets and then applies the Winograd algorithm for fast, highly efficient processing. Second, we present an underlying FPGA-based accelerator architecture featuring dedicated processing units with an embedded parallel, pipelined, and buffered processing flow. Finally, a parallelism-aware memory partition technique and the hardware design space are explored in coordination to obtain, respectively, the required parallel operations and the optimal design parameters. Experiments on several state-of-the-art GANs using our methods achieve an average performance of 639.2 GOPS on a Xilinx ZCU102 and 162.5 GOPS on a Xilinx VC706. Relative to a conventional optimized accelerator baseline, this work demonstrates an 8.6× (up to 11.7×) increase in processing performance, compared with below-2.2× improvements reported in prior studies.
    Electronic ISSN: 2079-9292
    Topics: Electrical Engineering, Measurement and Control Technology
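The "four sets" decomposition the abstract refers to can be illustrated in one dimension, where a stride-2 transposed convolution splits by output parity into two ordinary convolutions on the original input (in 2-D the 2×2 parity combinations give four sets). This is a generic sketch of the zero-insertion-free dataflow, not the paper's FPGA implementation, and the Winograd step itself is omitted:

```python
import numpy as np

def tconv_zero_insert(x, k):
    """Reference stride-2 transposed convolution: y[j] = sum_i x[i] * k[j - 2i]."""
    y = np.zeros(2 * (len(x) - 1) + len(k))
    for i, xi in enumerate(x):
        y[2 * i:2 * i + len(k)] += xi * k    # scatter-add (implicit zero-insertion)
    return y

def tconv_parity_split(x, k):
    """Same result via two dense convolutions, no zero-inserted feature map."""
    y = np.empty(2 * (len(x) - 1) + len(k))  # assumes len(k) is even
    y[0::2] = np.convolve(x, k[0::2])        # even outputs use even-indexed taps
    y[1::2] = np.convolve(x, k[1::2])        # odd outputs use odd-indexed taps
    return y

x = np.array([1.0, 2.0, 3.0])
k = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(tconv_zero_insert(x, k), tconv_parity_split(x, k))
```

Each dense sub-convolution touches only real input pixels, so no multiply-accumulates are wasted on inserted zeros, and each sub-convolution can then be handed to a fast (e.g. Winograd) kernel.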
  • 3
    Publication Date: 2021-04-08
    Description: Pruning and quantization are two commonly used approaches to accelerate LSTM (Long Short-Term Memory) models. However, traditional linear quantization usually suffers from vanishing gradients, and existing pruning methods produce either undesired irregular sparsity or large indexing overhead. To alleviate the vanishing-gradient problem, this work proposes a normalized linear quantization approach, which first normalizes operands regionally and then quantizes them within a local min-max range. To overcome irregular sparsity and large indexing overhead, this work adopts permuted block diagonal mask matrices to generate the sparse model. Because the resulting sparse model is highly regular, the positions of the non-zero weights can be obtained by a simple calculation, avoiding large indexing overhead. Based on the sparse LSTM model generated from the permuted block diagonal mask matrices, this paper also proposes PermLSTM, a high-energy-efficiency accelerator that comprehensively exploits the sparsity of weights, activations, and products in the matrix–vector multiplications, resulting in a 55.1% reduction in power consumption. The accelerator has been realized on an Arria-10 FPGA running at 150 MHz and achieves 2.19×–24.4× better energy efficiency than previously reported FPGA-based LSTM accelerators.
    Electronic ISSN: 2079-9292
    Topics: Electrical Engineering, Measurement and Control Technology
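The normalized linear quantization idea described above (normalize a region of operands to its local min-max range, then quantize linearly inside it) can be sketched as follows. This is a generic illustration under assumed conventions, not the paper's exact scheme:

```python
import numpy as np

def normalized_linear_quant(x, bits=8):
    """Quantize a region of operands within its local min-max range.

    Generic sketch: the region [lo, hi] is mapped onto 2**bits - 1 uniform
    steps; returns the dequantized values for inspection.
    """
    lo, hi = float(x.min()), float(x.max())
    if hi == lo:                              # degenerate constant region
        return x.copy()
    scale = (hi - lo) / (2 ** bits - 1)       # local step size
    q = np.round((x - lo) / scale)            # integer codes in [0, 2**bits - 1]
    return q * scale + lo                     # dequantize back into the range

region = np.array([-0.8, -0.1, 0.3, 0.9])
dq = normalized_linear_quant(region, bits=4)
# Each operand's error is bounded by half a local step.
step = (region.max() - region.min()) / (2 ** 4 - 1)
assert np.max(np.abs(dq - region)) <= step / 2 + 1e-12
```

Because the range is local to each region, the step size adapts to that region's actual spread instead of a single global scale, which is what keeps small-magnitude operands from collapsing to zero.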