ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Filter
Collection
  • Articles  (10,514)
Publisher
  • Elsevier  (7,885)
  • Molecular Diversity Preservation International  (1,482)
  • MDPI  (975)
  • Ubiquity Press  (132)
  • Wolfram Research  (40)
  • Institute of Electrical and Electronics Engineers (IEEE)
Years
  • 2020-2022
  • 2015-2019  (7,771)
  • 2010-2014  (2,743)
  • 1990-1994
  • 1945-1949
Year
  • 2019  (7,771)
  • 2010  (2,743)
Topics
  • Computer Science  (10,514)
  • 1
    Publication Date: 2019
    Description: Publication date: Available online 8 July 2019. Source: Journal of Computational Physics. Author(s): Maxim Rakhuba, Alexander Novikov, Ivan Oseledets. Abstract: Such problems as computation of spectra of spin chains and vibrational spectra of molecules can be written as high-dimensional eigenvalue problems, i.e., when the eigenvector can be naturally represented as a multidimensional tensor. Tensor methods have proven to be an efficient tool for the approximation of solutions of high-dimensional eigenvalue problems; however, their performance deteriorates quickly when the number of eigenstates to be computed increases. We address this issue by designing a new algorithm motivated by the ideas of Riemannian optimization (optimization on smooth manifolds) for the approximation of multiple eigenstates in the tensor-train format, which is also known as the matrix product state representation. The proposed algorithm is implemented in TensorFlow, which allows for both CPU and GPU parallelization.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 2
    Publication Date: 2019
    Description: Publication date: Available online 8 July 2019. Source: Journal of Computational Physics. Author(s): Chen Liu, Florian Frank, Faruk O. Alpak, Béatrice Rivière. Abstract: Permeability estimation of porous media from directly solving the Navier–Stokes equations has a wide spectrum of applications in the petroleum industry. In this paper, we utilize a pressure-correction projection algorithm in conjunction with the interior penalty discontinuous Galerkin scheme for space discretization to build an incompressible Navier–Stokes simulator and use this simulator to calculate the permeability of real rock samples. The proposed method is accurate, numerically robust, and exhibits the potential for tackling realistic problems.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 3
    Publication Date: 2019
    Description: Publication date: Available online 8 July 2019. Source: Journal of Computational Physics. Author(s): Mustapha Malek, Nouh Izem, M. Shadi Mohamed, Mohammed Seaid, Omar Laghrouche. Abstract: An efficient partition of unity finite element method for three-dimensional transient diffusion problems is presented. A class of multiple exponential functions independent of the time variable is proposed to enrich the finite element approximations. As a consequence of this procedure, the associated matrix for the linear system is evaluated once at the first time step, and the solution is obtained at subsequent time steps by only updating the right-hand side of the linear system. This results in an efficient numerical solver for transient diffusion equations in three space dimensions. Compared to conventional finite element methods with h-refinement, the proposed approach is simple, more efficient and more accurate. The performance of the proposed method is assessed using several test examples for transient diffusion in three space dimensions. We present numerical results for a transient diffusion equation with a known analytical solution to quantify errors for the new method. We also solve time-dependent diffusion problems in complex geometries. We compare the results obtained using the partition of unity finite element method to those obtained using the standard finite element method. It is shown that the proposed method strongly reduces the necessary number of degrees of freedom to achieve a prescribed accuracy.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 4
    Publication Date: 2019
    Description: Publication date: Available online 8 July 2019. Source: Journal of Computational Physics. Author(s): Lahbib Bourhrara. Abstract: This document presents a new numerical scheme dealing with the Boltzmann transport equation. This scheme is based on the expansion of the angular flux in a truncated spherical harmonics function and the discontinuous finite element method for the spatial variable. The advantage of this scheme lies in the fact that we can deal with unstructured, non-conformal and curved meshes. Indeed, it is possible to deal with distorted regions whose boundary is constituted by edges that can be either line segments or circular arcs or circles. In this document, we detail the derivation of the method for 2D geometries. However, the generalization to 2D extruded geometries is trivial.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 5
    Publication Date: 2019
    Description: Over the years, the cellular mobile network has evolved from a wireless plain telephone system to a very complex system providing telephone service, Internet connectivity and many interworking capabilities with other networks. Its air interface performance has increased drastically over time, leading to high throughput and low latency. Changes to the core network, however, have been slow and incremental, with increased complexity worsened by the necessity of backwards-compatibility with older-generation systems such as the Global System for Mobile communication (GSM). In this paper, a new virtualized Peer-to-Peer (P2P) core network architecture is presented. The key idea of our approach is that each user is assigned a private virtualized copy of the whole core network. This enables a higher degree of security and novel services that are not possible in today’s architecture. We describe the new architecture, focusing on its main elements, IP addressing, message flows, mobility management, and scalability. Furthermore, we will show some significant advantages this new architecture introduces. Finally, we investigate the performance of our architecture by analyzing voice-call traffic available in a database of a large U.S. cellular network provider.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 6
    Publication Date: 2019
    Description: Publication date: November 2019. Source: Computers & Geosciences, Volume 132. Author(s): T. Carlotto, P.L.B. Chaffe. Abstract: Streamflow recession analysis is crucial for understanding how catchments release water in periods of drought and therefore is important for water resources planning and management. Despite there being several theories on how to model recession curves, few studies compare the different approaches to that problem. In this work, we developed the Master Recession Curve Parameterization tool (MRCPtool), which brings together a set of automated methods for the analysis of recession periods based only on streamflow data. The methods include: (i) hydrograph separation using numerical filters; (ii) automatic extraction of recession periods; (iii) creation of the MRC with the matching strip method; (iv) creation of the MRC for different flow classes defined from the flow duration curve; (v) analysis of flow recession rates (−dQ/dt) as a function of flow (Q); and (vi) creation of the MRC from simulated recession curves with different analytical approaches, including linear and nonlinear models. The MRCPtool contains a graphical user interface developed in MATLAB software that facilitates the analysis of streamflow datasets. Finally, we present an example application of the MRCPtool using a streamflow dataset of 44 years. The MRCPtool is an open source tool that can be downloaded from: http://www.labhidro.ufsc.br/static/software/MRCPtool.rar.
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
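As an illustration of step (v) in the record above (recession rates −dQ/dt as a function of flow Q), a minimal Python sketch of a power-law recession fit on synthetic streamflow is shown below; the synthetic data and the fitting choice are assumptions for illustration, not output of MRCPtool.

```python
import numpy as np

# Synthetic recession limb of daily streamflow (m^3/s); illustrative only, not MRCPtool output.
Q = 15.0 * (1 + 0.05 * np.arange(40)) ** (-2.0)

# Fit -dQ/dt = a * Q^b in log space (b close to 1 would indicate a linear reservoir).
dQdt = -np.diff(Q)                      # per-day recession rate, dt = 1 day
Qmid = 0.5 * (Q[1:] + Q[:-1])           # flow at interval midpoints
b, log_a = np.polyfit(np.log(Qmid), np.log(dQdt), 1)
print(f"a = {np.exp(log_a):.4f}, b = {b:.2f}")
```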
  • 7
    Publication Date: 2019
    Description: The ongoing digital transformation has the potential to revolutionize nearly all industrial manufacturing processes. However, its concrete requirements and implications are still not sufficiently investigated. In order to establish a common understanding, a multitude of initiatives have published guidelines, reference frameworks and specifications, all intending to promote their particular interpretation of the Industrial Internet of Things (IIoT). As a result of the inconsistent use of terminology, heterogeneous structures and proposed processes, an opaque landscape has been created. The consequence is that both new users and experienced experts can hardly manage to get an overview of the amount of information and publications, and make decisions on what is best to use and to adopt. This work contributes to the state of the art by providing a structured analysis of existing reference frameworks, their classifications and the concerns they target. We supply alignments of shared concepts, identify gaps and give a structured mapping of regarded concerns at each part of the respective reference architectures. Furthermore, the linking of relevant industry standards and technologies to the architectures allows a more effective search for specifications and guidelines and supports the direct technology adoption.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 8
    Publication Date: 2019
    Description: Service recommendation is one of the important means of service selection. Traditional trust-based Web service recommendation methods tend to ignore the influence of typical data sources, such as service information and interaction logs, on the calculation of user-preference similarity, and give insufficient consideration to dynamic trust relationships; to address these problems, a novel approach for Web service recommendation based on advanced trust relationships is presented. After considering the influence of indirect trust paths, an improved calculation of the indirect trust degree is proposed. By quantifying the popularity of a service, a method for calculating user-preference similarity is investigated. Furthermore, a dynamic trust adjustment mechanism is designed by differentiating the effect of each service recommendation. Integrating these efforts, a service recommendation mechanism is introduced, in which a new service recommendation algorithm is described. Experimental results show that, compared with existing methods, the proposed approach not only has higher accuracy of service recommendation, but also can resist attacks from malicious users more effectively.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 9
    Publication Date: 2019
    Description: We explore the class of positive integers n that admit idempotent factorizations n = p̄q̄ such that λ(n) ∣ (p̄ − 1)(q̄ − 1), where λ is the Carmichael lambda function. Idempotent factorizations with p̄ and q̄ prime have received the most attention due to their cryptographic advantages, but there are an infinite number of n with idempotent factorizations containing composite p̄ and/or q̄. Idempotent factorizations are exactly those p̄ and q̄ that generate correctly functioning keys in the Rivest–Shamir–Adleman (RSA) 2-prime protocol with n as the modulus. While the resulting p̄ and q̄ have no cryptographic utility and therefore should never be employed in that capacity, idempotent factorizations warrant study in their own right as they live at the intersection of multiple hard problems in computer science and number theory. We present some analytical results here. We also demonstrate the existence of maximally idempotent integers, those n for which all bipartite factorizations are idempotent. We show how to construct them, and present preliminary results on their distribution.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
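A minimal Python sketch of the defining condition in the record above (λ(n) dividing (p̄ − 1)(q̄ − 1)), using a brute-force Carmichael lambda; it is suitable only for small n and is not drawn from the paper.

```python
from math import gcd

def carmichael_lambda(n):
    """Exponent of the multiplicative group mod n (brute force; small n only)."""
    lam = 1
    for a in range(2, n):
        if gcd(a, n) != 1:
            continue
        order, x = 1, a
        while x != 1:           # multiplicative order of a mod n
            x = (x * a) % n
            order += 1
        lam = lam * order // gcd(lam, order)   # lcm of element orders
    return lam

def is_idempotent_factorization(p, q):
    """Check lambda(pq) | (p-1)(q-1), the condition quoted in the abstract."""
    return (p - 1) * (q - 1) % carmichael_lambda(p * q) == 0

print(is_idempotent_factorization(3, 5))    # prime factors are always idempotent
print(is_idempotent_factorization(9, 11))   # a composite factor may fail the condition
```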
  • 10
    Publication Date: 2019
    Description: Publication date: November 2019. Source: Computers & Geosciences, Volume 132. Author(s): Hongxing Zhang, Mingliang Zhang, Yongpeng Ji, Yini Wang, Tianping Xu. Abstract: Vegetation exerts a significant damping effect on tsunami wave run-up on coastal beaches, thus effectively mitigating the tsunami hazard. A depth-integrated two-dimensional numerical model (HydroSed2D, Liu et al., 2008; Liu et al., 2010) is developed to investigate tsunami wave run-up and land inundation on coastal beaches covered with Pandanus odoratissimus (P. odoratissimus). The present model is based on a finite volume Roe-type scheme that solves the non-linear shallow water equations with the capacity of treating the wet or dry boundary at the wave front. The momentum equations in this model are modified by adding a drag force term, thus considering the resistance effects of vegetation on tsunami waves. The accuracy of the numerical scheme and the vegetation drag force are validated by three experimental cases of dam-break flow propagation in a dry channel, solitary wave propagation in a vegetated flume, and tsunami run-up over an uneven bed. Subsequently, the numerical model is applied to simulate tsunami run-up and land inundation on actual-scale vegetated beaches and a series of sensitivity analyses are conducted by comparing numerical results. The obtained numerical results suggest that P. odoratissimus can effectively attenuate tsunami run-up and land inundation distance on coastal beaches, and a higher attenuation rate for the tsunami wave can be achieved by increasing both vegetation width and vegetation density. The tsunami wave height is also an important factor that impacts the tsunami wave run-up and land inundation on vegetated beaches.
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
  • 11
    Publication Date: 2019
    Description: Publication date: July–December 2019. Source: SoftwareX, Volume 10. Author(s): R.D. Martin, Q. Cai, T. Garrow, C. Kapahi. Abstract: QExpy is an open source python-3 module that was developed in order to simplify the analysis of data in undergraduate physics laboratories. Through the use of this module, students can focus their time on understanding the science and the data from their experiments, rather than on processing their data. In particular, the module allows users to easily propagate uncertainties from measured quantities using a variety of techniques (derivatives, Monte Carlo), as well as to plot and fit functions to data. The interface is designed to be pedagogical so that students with no prior programming experience can be eased into using python in their introductory physics laboratories.
    Electronic ISSN: 2352-7110
    Topics: Computer Science
    Published by Elsevier
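The Monte Carlo propagation technique mentioned in the record above can be sketched in a few lines of plain NumPy; this is not the QExpy API, and the measured values below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical measurements (value, standard uncertainty) for a pendulum experiment.
L = rng.normal(1.250, 0.005, N)   # length in m
T = rng.normal(2.245, 0.010, N)   # period in s

# Propagate through g = 4*pi^2*L/T^2 by sampling rather than by derivatives.
g = 4 * np.pi**2 * L / T**2
print(f"g = {g.mean():.3f} +/- {g.std(ddof=1):.3f} m/s^2")
```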
  • 12
    Publication Date: 2019
    Description: Publication date: July–December 2019. Source: SoftwareX, Volume 10. Author(s): Simon Behrendt, Thomas Dimpfl, Franziska J. Peter, David J. Zimmermann. Abstract: This paper shows how to quantify and test for the information flow between two time series with Shannon transfer entropy and Rényi transfer entropy using the R package RTransferEntropy. We discuss the methodology, the bias correction applied to calculate effective transfer entropy and outline how to conduct statistical inference. Furthermore, we describe the package in detail and demonstrate its functionality by means of several simulated processes and present an application to financial time series.
    Electronic ISSN: 2352-7110
    Topics: Computer Science
    Published by Elsevier
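RTransferEntropy is an R package; as a language-neutral illustration of the underlying idea, the Python sketch below computes a plug-in Shannon transfer entropy on binned series. It omits the bias correction (effective transfer entropy) and inference that the package provides, and its data are synthetic.

```python
import numpy as np
from collections import Counter

def shannon_transfer_entropy(x, y, bins=3):
    """Plug-in Shannon transfer entropy from y to x at lag 1 on quantized series (bits)."""
    xq = np.digitize(x, np.histogram_bin_edges(x, bins)[1:-1])
    yq = np.digitize(y, np.histogram_bin_edges(y, bins)[1:-1])
    triples = Counter(zip(xq[1:], xq[:-1], yq[:-1]))      # (x_{t+1}, x_t, y_t)
    pairs_xy = Counter(zip(xq[:-1], yq[:-1]))             # (x_t, y_t)
    pairs_xx = Counter(zip(xq[1:], xq[:-1]))              # (x_{t+1}, x_t)
    singles = Counter(xq[:-1])                            # x_t
    n = len(xq) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_cond_xy = c / pairs_xy[(x0, y0)]
        p_cond_x = pairs_xx[(x1, x0)] / singles[x0]
        te += p_joint * np.log2(p_cond_xy / p_cond_x)
    return te

rng = np.random.default_rng(1)
y = rng.normal(size=5000)
x = np.roll(y, 1) + 0.5 * rng.normal(size=5000)   # x is driven by lagged y
print(shannon_transfer_entropy(x, y), shannon_transfer_entropy(y, x))  # first value should dominate
```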
  • 13
    Publication Date: 2019
    Description: Publication date: July–December 2019. Source: SoftwareX, Volume 10. Author(s): Jacob L. Moore, Nathaniel R. Morgan, Mark F. Horstemeyer. Abstract: We discuss the creation and implementation of a generalized library, named ELEMENTS, of mathematical functions for supporting a very broad range of element types including: linear, quadratic, and cubic serendipity elements in 2D and 3D; high-order spectral elements; and a linear 4D element. The ELEMENTS library can be used for research and development of both continuous and discontinuous finite element methods for solving a diverse range of partial differential equations. The library has functions for calculating quantities that are commonly used in finite element methods such as the gradient of a basis function, the Jacobi matrix, the inverse Jacobi matrix, the determinant of the Jacobi matrix, and a physical position inside the element, to name a few examples. The library also supports both Gauss–Legendre and Gauss–Lobatto quadrature rules up to 8 quadrature points in each coordinate direction. The examples and discussions in this paper will focus on Lagrangian solid mechanics and dynamics, but ELEMENTS can be used for many other applications.
    Electronic ISSN: 2352-7110
    Topics: Computer Science
    Published by Elsevier
  • 14
    Publication Date: 2019
    Description: Google’s Material Design, created in 2014, led to the extended application of floating action buttons (FAB) in the user interfaces of web pages and mobile applications. A FAB’s role is to trigger an action either on the present screen or to perform an action that opens another screen. A few specialists in user experience (UX) and user interface (UI) design are sceptical regarding the usability of FAB in the interfaces of both web pages and mobile applications. They claim that the use of FAB easily distracts users, that it interferes with using other important functions of the applications, and that it is unusable in applications designed for iOS systems. The aim of this paper is to investigate by an experiment the quality of experience (QoE) of a static and animated FAB and compare it to the toolbar alternative. The experimental results of different testing methods rejected the hypothesis that the usage and animation of this UI element has a positive influence on the application’s usability. However, its static and animated utilization enhanced the ratings of hedonic and aesthetic features of the user experience, justifying the usage of this type of button.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 15
    Publication Date: 2019
    Description: Recommender systems are nowadays an indispensable part of most personalized systems implementing information access and content delivery, supporting a great variety of user activities [...]
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 16
    Publication Date: 2019
    Description: Publication date: 1 November 2019. Source: Journal of Computational Physics, Volume 396. Author(s): Luigi Brugnano, Juan I. Montijano, Luis Rández. Abstract: In this paper we study arbitrarily high-order energy-conserving methods for simulating the dynamics of a charged particle. They are derived and studied within the framework of Line Integral Methods (LIMs), previously used for defining Hamiltonian Boundary Value Methods (HBVMs), a class of energy-conserving Runge-Kutta methods for Hamiltonian problems. A complete analysis of the new methods is provided, which is confirmed by a few numerical tests.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 17
    Publication Date: 2019
    Description: Finite element data form an important basis for engineers to undertake analysis and research. In most cases, it is difficult to generate the internal sections of finite element data and professional operations are required. To display the internal data of entities, a method for generating the arbitrary sections of finite element data based on radial basis function (RBF) interpolation is proposed in this paper. The RBF interpolation function is used to realize arbitrary surface cutting of the entity, and the section can be generated by the triangulation of discrete tangent points. Experimental studies have proved that the method is very convenient for allowing users to obtain visualization results for an arbitrary section through simple and intuitive interactions.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
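The core ingredient described in the record above (RBF interpolation of scattered nodal values, then evaluation on a cutting surface) can be sketched with SciPy; the nodal data and the section plane z = 0.2 below are invented for illustration, and this sketch does not reproduce the authors' triangulation of tangent points.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(2)

# Invented scattered "finite element" nodes and nodal field values (not real FE data).
nodes = rng.uniform(-1, 1, size=(500, 3))
values = np.sin(nodes[:, 0]) * np.cos(nodes[:, 1]) + nodes[:, 2]

# Build an RBF interpolant of the field and sample it on an arbitrary section plane z = 0.2.
field = RBFInterpolator(nodes, values, kernel="thin_plate_spline")
gx, gy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
plane = np.column_stack([gx.ravel(), gy.ravel(), np.full(gx.size, 0.2)])
section = field(plane).reshape(gx.shape)          # field values on the section grid
print(section.shape, float(section.min()), float(section.max()))
```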
  • 18
    Publication Date: 2019
    Description: Publication date: October 2019. Source: Computers & Geosciences, Volume 131. Author(s): Ludovic Räss, Dmitriy Kolyukhin, Alexander Minakov. Abstract: We present an efficient implementation of a method for sampling spatial realisations of 3-D random fields with a given power spectrum. The method allows for multi-scale resolution and is well suited to parallel implementation, overcoming the physical limitation of computer memory when dealing with large 3-D problems. We implement the random field generator to execute on graphical processing units (GPU) using the CUDA C programming language. We compare the memory footprint and the wall-time of our implementation to FFT-based solutions. We illustrate the efficiency of the proposed numerical method using examples of an acoustic scattering problem which can be encountered both in controlled-source and earthquake seismology. In particular, we apply our method to study the scattering of seismic waves in 3-D anisotropic random media with a particular focus on P-wave coda observations and seismic monitoring of hydrocarbon reservoirs.
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
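The record above develops a non-FFT generator; for orientation, the FFT-based baseline it is compared against (a Gaussian random field with a prescribed power spectrum) can be sketched as follows, with an assumed power-law spectrum and grid size.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 64                                             # grid points per dimension (assumed)

k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k_mag = np.sqrt(kx**2 + ky**2 + kz**2)
k_mag[0, 0, 0] = 1.0                               # avoid division by zero at k = 0

power = k_mag ** -3.0                              # assumed power-law spectrum
noise = np.fft.fftn(rng.normal(size=(n, n, n)))    # white noise in Fourier space
field = np.fft.ifftn(noise * np.sqrt(power)).real  # field with the prescribed spectrum
print(field.shape, float(field.std()))
```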
  • 19
    Publication Date: 2019
    Description: Publication date: October 2019. Source: Computers & Geosciences, Volume 131. Author(s): R. Moreno, F.J. Pérez-Gil, J.J. Pardo, A. Navarro, F.J. Tapiador. Abstract: Current computing platforms, including HPC, Grid and Cloud computing, offer many computational resources. These platforms can provide the scientific world with on-demand and scalable computing power. However, their use requires advanced computer skills. In our work, we summarize the main problems found by research scientists when they attempt to execute complex software on one or several of these platforms. As a solution to these problems, we propose the Science for Everyone (ScifE) framework, and an intuitive and easy-to-use web interface, to perform scientific experiments on HPC or Cloud platforms. The Community Earth System Model (CESM) has been used to test our proof-of-concept framework, as it is a complex scientific software package used by many climate researchers. Using our framework, a user can execute CESM with custom parameters with only a few clicks on the web interface. As ScifE is generic, other software can be imported relatively easily into our framework. Additional collaborative tools are being developed for successive versions of ScifE, enhancing the practice of science.
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
  • 20
    Publication Date: 2019
    Description: The number of documents published on the Web in languages other than English grows every year. As a consequence, the need to extract useful information from different languages increases, highlighting the importance of research into Open Information Extraction (OIE) techniques. Different OIE methods have dealt with features from a unique language; however, few approaches tackle multilingual aspects. In those approaches, multilingualism is restricted to processing text in different languages, rather than exploring cross-linguistic resources, which results in low precision due to the use of general rules. Multilingual methods have been applied to numerous problems in Natural Language Processing, achieving satisfactory results and demonstrating that knowledge acquisition for a language can be transferred to other languages to improve the quality of the facts extracted. We argue that a multilingual approach can enhance OIE methods as it is ideal to evaluate and compare OIE systems, and therefore can be applied to the collected facts. In this work, we discuss how the transfer knowledge between languages can increase acquisition from multilingual approaches. We provide a roadmap of the Multilingual Open IE area concerning state of the art studies. Additionally, we evaluate the transfer of knowledge to improve the quality of the facts extracted in each language. Moreover, we discuss the importance of a parallel corpus to evaluate and compare multilingual systems.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 21
    Publication Date: 2019
    Description: This paper aims to explore the current status, research trends and hotspots related to the field of infrared detection technology through bibliometric analysis and visualization techniques based on the Science Citation Index Expanded (SCIE) and Social Sciences Citation Index (SSCI) articles published between 1990 and 2018 using the VOSviewer and Citespace software tools. Based on our analysis, we first present the spatiotemporal distribution of the literature related to infrared detection technology, including annual publications, origin country/region, main research organization, and source publications. Then, we report the main subject categories involved in infrared detection technology. Furthermore, we adopt literature cocitation, author cocitation, keyword co-occurrence and timeline visualization analyses to visually explore the research fronts and trends, and present the evolution of infrared detection technology research. The results show that China, the USA and Italy are the three most active countries in infrared detection technology research and that the Centre National de la Recherche Scientifique has the largest number of publications among related organizations. The most prominent research hotspots in the past five years are vibration thermal imaging, pulse thermal imaging, photonic crystals, skin temperature, remote sensing technology, and detection of delamination defects in concrete. The trend of future research on infrared detection technology is from qualitative to quantitative research development, engineering application research and infrared detection technology combined with other detection techniques. The proposed approach based on the scientific knowledge graph analysis can be used to establish reference information and a research basis for application and development of methods in the domain of infrared detection technology studies.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 22
    Publication Date: 2019
    Description: The literature on big data analytics and firm performance is still fragmented and lacking in attempts to integrate the current studies’ results. This study aims to provide a systematic review of contributions related to big data analytics and firm performance. The authors assess papers listed in the Web of Science index. This study identifies the factors that may influence the adoption of big data analytics in various parts of an organization and categorizes the diverse types of performance that big data analytics can address. Directions for future research are developed from the results. This systematic review proposes to create avenues for both conceptual and empirical research streams by emphasizing the importance of big data analytics in improving firm performance. In addition, this review offers both scholars and practitioners an increased understanding of the link between big data analytics and firm performance.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 23
    Publication Date: 2019
    Description: Service Level Agreements are employed to set availability commitments in cloud services. When a violation occurs as in an outage, cloud providers may be called to compensate customers for the losses incurred. Such compensation may be so large as to erode cloud providers’ profit margins. Insurance may be used to protect cloud providers against such a danger. In this paper, closed formulas are provided through the expected utility paradigm to set the insurance premium under different outage models and QoS metrics (no. of outages, no. of long outages, and unavailability). When the cloud service is paid through a fixed fee, we also provide the maximum unit compensation that a cloud provider can offer so as to meet constraints on its profit loss. The unit compensation is shown to vary approximately as the inverse square of the service fee.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 24
    Publication Date: 2019
    Description: Publication date: July–December 2019. Source: SoftwareX, Volume 10. Author(s): Cornelis Marcel Pieter ’t Hart, Georgios Leontaris, Oswaldo Morales-Nápoles. Abstract: This is an update to PII: S2352711018300608 (https://www.sciencedirect.com/science/article/pii/S2352711018300608). In this paper, we discuss ANDURYL, which is a Python-based open source successor of the MATLAB toolbox ANDURIL. The output of ANDURYL is in good agreement with the results obtained from ANDURIL and EXCALIBUR. Additional features available in ANDURYL, and not available in its predecessors, are discussed.
    Electronic ISSN: 2352-7110
    Topics: Computer Science
    Published by Elsevier
  • 25
    Publication Date: 2019
    Description: Measuring term translation quality in machine translation (MT), which is usually done by domain experts, is a time-consuming and expensive task. In fact, this is unimaginable in an industrial setting where customised MT systems often need to be updated for many reasons (e.g., availability of new training data, leading MT techniques). To the best of our knowledge, as of yet, there is no publicly-available solution to evaluate terminology translation in MT automatically. Hence, there is a genuine need for a faster and less-expensive solution to this problem, which could help end-users to identify term translation problems in MT instantly. This study presents a faster and less expensive strategy for evaluating terminology translation in MT. High correlations of our evaluation results with human judgements demonstrate the effectiveness of the proposed solution. The paper also introduces a classification framework, TermCat, that can automatically classify term translation-related errors and expose specific problems in relation to terminology translation in MT. We carried out our experiments with a low-resource language pair, English–Hindi, and found that our classifier, whose accuracy varies across the translation directions, error classes, the morphological nature of the languages, and MT models, generally performs competently in the terminology translation classification task.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 26
    Publication Date: 2019
    Description: Radar signal processing mainly focuses on target detection, classification, estimation, filtering, and so on. Compressed sensing radar (CSR) technology can potentially provide additional tools to simultaneously reduce computational complexity and effectively solve inference problems. CSR allows direct compressive signal processing without the need to reconstruct the signal. This study aimed to solve the problem of CSR detection without signal recovery by optimizing the transmit waveform. Therefore, a waveform optimization method was introduced to improve the output signal-to-interference-plus-noise ratio (SINR) in the case where the target signal is corrupted by colored interference and noise having known statistical characteristics. Two different target models are discussed: deterministic and random. In the case of a deterministic target, the optimum transmit waveform is derived by maximizing the SINR and a suboptimum solution is also presented. In the case of random target, an iterative waveform optimization method is proposed to maximize the output SINR. This approach ensures that SINR performance is improved in each iteration step. The performance of these methods is illustrated by computer simulation.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 27
    Publication Date: 2019
    Description: In semi-autonomous robot conferencing, not only the operator controls the robot, but the robot itself also moves autonomously. Thus, it can modify the operator’s movement (e.g., adding social behaviors). However, the sense of agency, that is, the degree of feeling that the movement of the robot is the operator’s own movement, would decrease if the operator is conscious of the discrepancy between the teleoperation and autonomous behavior. In this study, we developed an interface to control the robot head by using an eye tracker. When the robot autonomously moves its eye-gaze position, the interface guides the operator’s eye movement towards this autonomous movement. The experiment showed that our interface can maintain the sense of agency, because it provided the illusion that the autonomous behavior of a robot is directed by the operator’s eye movement. This study reports the conditions of how to provide this illusion in semi-autonomous robot conferencing.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 28
    Publication Date: 2019
    Description: Publication date: November 2019. Source: Computers & Geosciences, Volume 132. Author(s): Xenia Specka, Philipp Gärtner, Carsten Hoffmann, Nikolai Svoboda, Markus Stecker, Udo Einspanier, Kristian Senkler, M.A. Muqit Zoarder, Uwe Heinrich. Abstract: A spatial data infrastructure (SDI) for the upload and provision of soil-agricultural research data in Germany was developed and launched in 2017. The preconditions for the new SDI were to be compliant with the European initiative for spatial information (INSPIRE), to consider FAIR data principles, to be interoperable with other disciplinary national and international SDIs and to support dataset registration with digital object identifiers (DOI). To meet these requirements, the new SDI had to support both the INSPIRE and DataCite metadata standards. As there was no metadata standard available that supported both metadata schemas from different disciplines, it became necessary to merge these two existing standards. The objective of this paper is to describe the developmental steps of the newly created BonaRes metadata schema for geospatial soil-agricultural research data. First, we analyzed the INSPIRE and DataCite metadata schemas and identified and compared semantically equivalent metadata elements, including properties and allowed content values, for potential mapping. In the next step, we specified the new metadata model, whose elements were derived from INSPIRE or DataCite under consideration of the previously developed mapping. Third, additional metadata elements were defined to directly describe the underlying data model. The advantages and disadvantages of the method applied to define the BonaRes metadata schema elements originating from the INSPIRE and DataCite schemas are presented and discussed. By implementing the new metadata model, SDIs can achieve cross-portal metadata interoperability with other INSPIRE-compliant SDIs. Furthermore, researchers who store their geospatial research data in SDIs that implement the BonaRes metadata schema can increase the visibility and findability of their research, as the data can be assigned a DOI, which is a prerequisite for data citations and data publications. The new metadata elements of the BonaRes metadata schema allow targeted dataset queries (e.g., soil pH values, yields or parameters measured by a specific method) using the underlying data model, and improve the discovery and reusability of research data.
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
  • 29
    Publication Date: 2019
    Description: Appropriate business process management (BPM) within an organization can help attain organizational goals. It is particularly important to effectively manage the lifecycle of these processes for organizational effectiveness in improving ever-growing performance and building competitiveness across the company. This paper presents process discovery and how it can be used in a broader framework supporting self-organization in BPM. Process discovery is intrinsically associated with the process lifecycle. We have made a pre-evaluation of the usefulness of our approach using a generated log file. We also compared visualizations of the outcomes of our approach with different cases and showed performance characteristics of the cash loan sales process.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 30
    Publication Date: 2019
    Description: Correlations between observed data are at the heart of all empirical research that strives for establishing lawful regularities. However, there are numerous ways to assess these correlations, and there are numerous ways to make sense of them. This essay presents a bird’s eye perspective on different interpretive schemes to understand correlations. It is designed as a comparative survey of the basic concepts. Many important details to back it up can be found in the relevant technical literature. Correlations can (1) extend over time (diachronic correlations) or they can (2) relate data in an atemporal way (synchronic correlations). Within class (1), the standard interpretive accounts are based on causal models or on predictive models that are not necessarily causal. Examples within class (2) are (mainly unsupervised) data mining approaches, relations between domains (multiscale systems), nonlocal quantum correlations, and eventually correlations between the mental and the physical.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 31
    Publication Date: 2019
    Description: Publication date: Available online 21 August 2019. Source: Journal of Computational Physics. Author(s): Konstantin Pieper, K. Chad Sockwell, Max Gunzburger. Abstract: A framework for exponential time discretization of the multilayer rotating shallow water equations is developed in combination with a mimetic discretization in space. The method is based on a combination of existing exponential time differencing (ETD) methods and a careful choice of approximate Jacobians. The discrete Hamiltonian structure and conservation properties of the model are taken into account, in order to ensure stability of the method for large time steps and simulation horizons. In the case of many layers, further efficiency can be gained by a layer reduction which is based on the vertical structure of fast and slow modes. Numerical experiments on the example of a mid-latitude regional ocean model confirm long term stability for time steps increased by an order of magnitude over the explicit CFL, while maintaining accuracy for key statistical quantities.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 32
    Publication Date: 2019
    Description: The capacity of private information retrieval (PIR) from databases coded using maximum distance separable (MDS) codes was previously characterized by Banawan and Ulukus, where it was assumed that the messages are encoded and stored separably in the databases. This assumption was also usually made in other related works in the literature, and this capacity is usually referred to as the MDS-PIR capacity colloquially. In this work, we considered the question of if and when this capacity barrier can be broken through joint encoding and storing of the messages. Our main results are two classes of novel code constructions, which allow joint encoding, as well as the corresponding PIR protocols, which indeed outperformed the separate MDS-coded systems. Moreover, we show that a simple, but novel expansion technique allows us to generalize these two classes of codes, resulting in a wider range of the cases where this capacity barrier can be broken.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 33
    Publication Date: 2019
    Description: Collaborative filtering based recommender systems have proven to be extremely successful in settings where user preference data on items is abundant. However, collaborative filtering algorithms are hindered by their weakness against the item cold-start problem and general lack of interpretability. Ontology-based recommender systems exploit hierarchical organizations of users and items to enhance browsing, recommendation, and profile construction. While ontology-based approaches address the shortcomings of their collaborative filtering counterparts, ontological organizations of items can be difficult to obtain for items that mostly belong to the same category (e.g., television series episodes). In this paper, we present an ontology-based recommender system that integrates the knowledge represented in a large ontology of literary themes to produce fiction content recommendations. The main novelty of this work is an ontology-based method for computing similarities between items and its integration with the classical Item-KNN (K-nearest neighbors) algorithm. As a study case, we evaluated the proposed method against other approaches by performing the classical rating prediction task on a collection of Star Trek television series episodes in an item cold-start scenario. This transverse evaluation provides insights into the utility of different information resources and methods for the initial stages of recommender system development. We found our proposed method to be a convenient alternative to collaborative filtering approaches for collections of mostly similar items, particularly when other content-based approaches are not applicable or otherwise unavailable. Aside from the new methods, this paper contributes a testbed for future research and an online framework to collaboratively extend the ontology of literary themes to cover other narrative content.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
  • 34
    Publication Date: 2019
    Description: The advent of utility computing has revolutionized almost every sector of traditional software development. Especially commercial cloud computing services, pioneered by the likes of Amazon, Google and Microsoft, have provided an unprecedented opportunity for the fast and sustainable development of complex distributed systems. Nevertheless, existing models and tools aim primarily for systems where resource usage—by humans and bots alike—is logically and physically quite disperse resulting in a low likelihood of conflicting resource access. However, a number of resource-intensive applications, such as Massively Multiplayer Online Games (MMOGs) and large-scale simulations introduce a requirement for a very large common state with many actors accessing it simultaneously and thus a high likelihood of conflicting resource access. This paper presents a systematic mapping study of the state-of-the-art in software technology aiming explicitly to support the development of MMOGs, a class of large-scale, resource-intensive software systems. By examining the main focus of a diverse set of related publications, we identify a list of criteria that are important for MMOG development. Then, we categorize the selected studies based on the inferred criteria in order to compare their approach, unveil the challenges faced in each of them and reveal research trends that might be present. Finally we attempt to identify research directions which appear promising for enabling the use of standardized technology for this class of systems.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
  • 35
    Publication Date: 2019
    Description: Publication date: Available online 20 August 2019. Source: Journal of Computational Physics. Author(s): Stéphane Zaleski, Feng Xiao.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 36
    Publication Date: 2019
    Description: Publication date: Available online 20 August 2019. Source: Journal of Computational Physics. Author(s): Dario Collia, Marija Vukicevic, Valentina Meschini, Luigino Zovatto, Gianni Pedrizzetti. Abstract: The fluid dynamics inside the left ventricle of the human heart is considered a potential indicator of long term cardiovascular outcome. In this respect, numerical simulations can play an important role for integrating existing technology to reproduce flow details and even conditions associated with virtual therapeutic solutions. Nevertheless, numerical models encounter serious practical difficulties in describing the interaction between flow and surrounding tissues due to the limited information inherently available in real clinical applications. This study presents a computational method for the fluid dynamics inside the left ventricle designed to be efficiently integrated in clinical scenarios. It includes an original model of the mitral valve dynamics, which describes an asymptotic behavior for tissues with no elastic stiffness other than the constraint of the geometry obtained from medical imaging; in particular, the model provides an asymptotic description without requiring details of tissue properties that may not be measurable in vivo. The advantages of this model with respect to a valveless orifice and its limitations with respect to complete tissue modeling are verified. Its performance is then analyzed in detail to ensure a correct interpretation of results. It represents a potential option when information about tissue mechanical properties is insufficient for the implementation of a full fluid-structure interaction approach.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 37
    Publication Date: 2019
    Description: Publication date: Available online 9 July 2019. Source: Computers & Geosciences. Author(s): Andrés F. Osorio, Sebastian Montoya-Vargas, Cesar A. Cartagena, Jairo Espinosa, Alejandro Orfila, Christian Winter. Abstract: A new approach to estimate the peak period using time stack images from a video monitoring system is presented. The method is an improvement of the one developed by Stockdon and Holman (Stockdon, H.F., Holman, R.A., 2000. Estimation of wave phase speed and nearshore bathymetry from video imagery. J. Geophys. Res. Ocean. 105, 22015–22033) (S&H), including thresholding of the image to automatically identify the free foam zone, an improved filtering procedure, and windowing of the pixel intensity series in order to avoid noisy signals and spectral leakage. Results are tested against measurements on four beaches, showing an absolute error below 2 s for 75% of the analyzed images despite the variable light conditions and noisy signals. A comparison with the S&H method shows that the new method reduces the relative error on average by 28%. These results are achieved without taking into account the camera configuration, requiring only the identification of a representative number of land pixels in order to adequately eliminate noisy signals produced by environmental light fluctuations.
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
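The windowed spectral step described in the record above can be sketched on a synthetic pixel-intensity series (a 9 s swell plus noise; none of the numbers come from the paper): the peak of a Welch spectrum gives the peak period.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(5)

fs = 2.0                                          # assumed sampling rate of the time stack (Hz)
t = np.arange(0, 1200.0, 1 / fs)
intensity = np.sin(2 * np.pi * t / 9.0) + 0.8 * rng.normal(size=t.size)

f, psd = welch(intensity, fs=fs, nperseg=256)     # windowed segments limit spectral leakage
f_peak = f[1:][np.argmax(psd[1:])]                # ignore the zero-frequency bin
print(f"estimated peak period Tp = {1 / f_peak:.1f} s")
```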
  • 38
    Publication Date: 2019
    Description: Publication date: Available online 22 August 2019. Source: Journal of Computational Physics. Author(s): L. Nouveau, M. Ricchiuto, G. Scovazzi. Abstract: We propose an extension of the embedded boundary method known as the "shifted boundary method" to elliptic diffusion equations in mixed form (e.g., Darcy flow, heat diffusion problems with rough coefficients, etc.). Our aim is to obtain an improved formulation that, for linear finite elements, is at least second-order accurate for both flux and primary variable, when either Dirichlet or Neumann boundary conditions are applied. Following previous work of Nishikawa and Mazaheri in the context of residual distribution methods, we consider the mixed form of the diffusion equation (i.e., with Darcy-type operators), and introduce an enrichment of the primary variable. This enrichment is obtained exploiting the relation between the primary variable and the flux variable, which is explicitly available at nodes in the mixed formulation. The proposed enrichment mimics a formally quadratic pressure approximation, although only nodal unknowns are stored, similar to a linear finite element approximation. We consider both continuous and discontinuous finite element approximations and present two approaches: a non-symmetric enrichment, which, as in the original references, only improves the consistency of the overall method; and a symmetric enrichment, which enables a full error analysis in the classical finite element context. Combined with the shifted boundary method, these two approaches are extended to high-order embedded computations, and enable the approximation of both primary and flux (gradient) variables with second-order accuracy, independently of the type of boundary conditions applied. We also show that the primary variable is third-order accurate, when pure Dirichlet boundary conditions are embedded.
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
  • 39
    Publication Date: 2019
    Description: Publication date: October 2019. Source: Artificial Intelligence, Volume 275.
    Print ISSN: 0004-3702
    Electronic ISSN: 1872-7921
    Topics: Computer Science
    Published by Elsevier
  • 40
    Publication Date: 2019
    Description: Publication date: November 2019. Source: Computers in Industry, Volume 112. Author(s): Patrick Dallasega, Rafael A. Rojas, Giulia Bruno, Erwin Rauch. Abstract: A special characteristic of Engineer-to-Order (ETO) construction supplier companies is that they are composed of an off-site and an on-site production part. Often, their synchronization is difficult using traditional scheduling and monitoring methodologies, resulting in excessive lead times, buffer levels and, as a result, additional costs. Real-time data, made available by Industry 4.0 technologies, represent a high potential to sense and react to plan deviations as soon as they appear. The paper presents a decentralized and agile approach for scheduling and control in ETO construction supply chains. The approach was modelled and validated based on a case study of an ETO façade supplier company and a discrete event simulation. It emerged that the agile approach has its main benefits in the reduction of buffer sizes and construction lead-times. The original aspect of the article is the investigation of the effects of agile scheduling and control in ETO construction supply chains.
    Print ISSN: 0166-3615
    Electronic ISSN: 1872-6194
    Topics: Computer Science , Technology
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 41
    Publication Date: 2019
    Description: 〈p〉Publication date: November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Computers in Industry, Volume 112〈/p〉 〈p〉Author(s): Juan Jose Rubio, Takahiro Kashiwa, Teera Laiteerapong, Wenlong Deng, Kohei Nagai, Sergio Escalera, Kotaro Nakayama, Yutaka Matsuo, Helmut Prendinger〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Structural Health Monitoring (SHM) has benefited from computer vision and, more recently, Deep Learning approaches to accurately estimate the state of deterioration of infrastructure. In our work, we test Fully Convolutional Networks (FCNs) with a dataset of bridge deck areas for damage segmentation. We create a dataset for delamination and rebar exposure that has been collected from inspection records of bridges in Niigata Prefecture, Japan. The dataset consists of 734 images with three labels per image, which makes it the largest dataset of images of bridge deck damage. This dataset allows us to estimate the performance of our method based on regions of agreement, which emulates the uncertainty of in-field inspections. We demonstrate the practicality of FCNs to perform automated semantic segmentation of surface damage. Our model achieves a mean accuracy of 89.7% for delamination and 78.4% for rebar exposure, and a weighted F1 score of 81.9%.〈/p〉〈/div〉
    Print ISSN: 0166-3615
    Electronic ISSN: 1872-6194
    Topics: Computer Science , Technology
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 42
    Publication Date: 2019
    Description: 〈p〉Publication date: Available online 20 August 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Complexity〈/p〉 〈p〉Author(s): Zhiying Fang, Zheng-Chu Guo, Ding-Xuan Zhou〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉We study a learning algorithm for distribution regression with regularized least squares. This algorithm, which contains two stages of sampling, aims at regressing from distributions to real-valued outputs. The first stage sample consists of distributions and the second stage sample is obtained from these distributions. To extract information from the samples, we embed the distributions into a reproducing kernel Hilbert space (RKHS) and use the second stage sample to form the regressor via mean embedding. We show error bounds in the L²-norm and prove that the regressor is a good approximation to the regression function. We derive a learning rate which is optimal in the setting of standard least squares regression and improve on the existing work. Our analysis is achieved by using a novel second order decomposition to bound operator norms.〈/p〉〈/div〉 (A minimal Python sketch of the two-stage mean-embedding regression follows this record.)
    Print ISSN: 0885-064X
    Electronic ISSN: 1090-2708
    Topics: Computer Science , Mathematics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
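    The two-stage scheme summarized above can be illustrated with a small numerical sketch: each distribution is represented by a second-stage sample, embedded as its empirical kernel mean, and a regularized least-squares regressor is fit on top of those embeddings. This is only a minimal illustration of the general idea, not the authors' algorithm or analysis; the Gaussian kernel, bag sizes, and regularization constant are arbitrary choices.

    import numpy as np

    def gaussian_kernel(X, Y, gamma=1.0):
        """Pairwise Gaussian (RBF) kernel between rows of X and Y."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def mean_embedding_gram(bags, gamma=1.0):
        """Gram matrix of inner products between empirical kernel mean embeddings."""
        n = len(bags)
        K = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                K[i, j] = gaussian_kernel(bags[i], bags[j], gamma).mean()
        return K

    rng = np.random.default_rng(0)
    # First-stage sample: distributions N(m, 0.5^2); the regression target is m.
    means = rng.uniform(-2, 2, size=60)
    # Second-stage sample: 50 draws from each distribution.
    bags = [rng.normal(m, 0.5, size=(50, 1)) for m in means]
    y = means

    K = mean_embedding_gram(bags, gamma=0.5)
    lam = 1e-3
    alpha = np.linalg.solve(K + lam * len(bags) * np.eye(len(bags)), y)  # kernel ridge

    # Predict the mean of a new, unseen distribution from its sample.
    test_bag = rng.normal(1.3, 0.5, size=(50, 1))
    k_test = np.array([gaussian_kernel(test_bag, b, 0.5).mean() for b in bags])
    print("predicted mean:", k_test @ alpha)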
  • 43
    Publication Date: 2019
    Description: 〈p〉Publication date: Available online 29 August 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Computers & Geosciences〈/p〉 〈p〉Author(s): Yilun Zhang, Bin Hu, Yanguo Teng, Kevin Tu, Chen Zhu〈/p〉 〈div xml:lang="en"〉 〈h5〉Abstract〈/h5〉 〈div〉 〈p〉Rate equations and kinetic parameters for about 100 minerals were programmed into a library of callable Basic language scripts for the geochemical modeling program 〈span〉Phreeqc〈/span〉 (version 3.5.0) to facilitate the application of kinetics in geochemical modeling. For most minerals, the following general equation is used: $r_{\mathrm{net}} = S_A \sum_j A_j \, e^{-E_{a,j}/RT} \prod_i a_{i,j}^{n_i} \left(1 - \Omega^{p_j}\right)^{q_j}$, where 〈em〉r〈/em〉〈sub〉net〈/sub〉 is the net dissolution rate of a mineral phase (mol kgw〈sup〉−1〈/sup〉 s〈sup〉−1〈/sup〉); 〈em〉j〈/em〉 denotes the jth reaction mechanism; 〈em〉S〈/em〉〈sub〉A〈/sub〉 the surface area per unit water mass (m〈sup〉2〈/sup〉 kgw〈sup〉−1〈/sup〉); 〈em〉A〈/em〉〈sub〉〈em〉j〈/em〉〈/sub〉 the Arrhenius pre-exponential factor (mol m〈sup〉−2〈/sup〉 s〈sup〉−1〈/sup〉); 〈em〉E〈/em〉〈sub〉a,j〈/sub〉 the apparent reaction activation energy (J mol〈sup〉−1〈/sup〉); 〈em〉R〈/em〉 the universal gas constant (8.31446 J mol〈sup〉−1〈/sup〉 K〈sup〉−1〈/sup〉); 〈em〉T〈/em〉 the temperature (K); 〈em〉a〈/em〉〈sub〉〈em〉i〈/em〉〈/sub〉 the activity of aqueous species 〈em〉i〈/em〉; and Ω the mineral saturation quotient. 〈em〉p〈/em〉〈sub〉〈em〉j〈/em〉〈/sub〉 and 〈em〉q〈/em〉〈sub〉〈em〉j〈/em〉〈/sub〉 are empirical fitting parameters. Other forms of rate equations and associated parameters programmed into the library include parallel mechanisms, the Langmuir adsorption isotherm, and empirical rate equations that apply to a specific reaction mechanism or geochemical system. A separate file of PHASES, which defines the chemical stoichiometry of the phases, the dissolution reactions, and the equilibrium constants of the dissolution reactions, is also provided. PHREEQC requires that the names in the PHASES and RATES blocks match each other.〈/p〉 〈p〉The Basic language scripts can also be used as templates for writing other rate equations which users might wish to use. To illustrate the application of the script library, we simulated the reaction path of albite dissolution at 25 °C and 1 bar using three rate equations and compared the results. The script and phase library and supporting materials can be downloaded from 〈a href="https://github.com/HydrogeoIU/PHREEQC-Kinetic-Library" target="_blank"〉https://github.com/HydrogeoIU/PHREEQC-Kinetic-Library〈/a〉 and doi.org/10.5967/41gq-yr13.〈/p〉 〈/div〉 〈/div〉 (A short Python sketch that evaluates this general rate equation follows this record.)
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
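    For readers who want to experiment with the general rate equation outside of PHREEQC, it can be evaluated directly in a few lines of Python. The sketch below is a generic implementation for a single hypothetical mechanism; all parameter values are placeholders rather than entries from the library.

    import math

    R = 8.31446  # universal gas constant, J mol^-1 K^-1

    def net_rate(S_A, T, omega, mechanisms):
        """General net dissolution rate (mol kgw^-1 s^-1).

        mechanisms: list of dicts with keys
          A      : Arrhenius pre-exponential factor (mol m^-2 s^-1)
          Ea     : apparent activation energy (J mol^-1)
          species: list of (activity, exponent n_i) pairs
          p, q   : empirical exponents applied to the saturation term
        """
        total = 0.0
        for m in mechanisms:
            term = m["A"] * math.exp(-m["Ea"] / (R * T))
            for activity, n in m["species"]:
                term *= activity ** n
            term *= (1.0 - omega ** m["p"]) ** m["q"]
            total += term
        return S_A * total

    # Hypothetical acid-mechanism parameters, far from equilibrium (omega << 1).
    mech = [{"A": 1e4, "Ea": 65e3, "species": [(1e-4, 0.5)], "p": 1.0, "q": 1.0}]
    print(net_rate(S_A=10.0, T=298.15, omega=0.01, mechanisms=mech))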
  • 44
    Publication Date: 2019
    Description: 〈p〉Publication date: Available online 21 August 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics〈/p〉 〈p〉Author(s): Walter Boscheri, Dinshaw S. Balsara〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉 〈p〉In this work we present a conservative WENO Adaptive Order (AO) reconstruction operator applied to an explicit one-step Arbitrary-Lagrangian-Eulerian (ALE) discontinuous Galerkin (DG) method. The spatial order of accuracy is improved by reconstructing higher order piecewise polynomials of degree M > N, starting from the underlying polynomial solution of degree 〈em〉N〈/em〉 provided by the DG scheme. High order of accuracy in time is achieved by the ADER approach, making use of an element-local space-time Galerkin finite element predictor that arises from a one-step time integration procedure. As a result, space-time polynomials of order M + 1 are obtained and used to perform the time evolution of the numerical solution with a fully explicit DG scheme.〈/p〉 〈p〉To keep the algorithm simple, the mesh motion is restricted to straight lines, hence the old mesh configuration at time t〈sup〉n〈/sup〉 is connected with the new one at time t〈sup〉n+1〈/sup〉 via space-time segments, which result in space-time control volumes on which the governing equations have to be integrated in order to obtain the time evolution of the discrete solution. Our algorithm falls into the category of 〈em〉direct〈/em〉 Arbitrary-Lagrangian-Eulerian (ALE) schemes, in which the governing PDE system is discretized on a space-time conservation formulation that takes the new grid geometry into account 〈em〉directly〈/em〉 during the computation of the numerical fluxes. A local rezoning strategy may be used to locally optimize the mesh quality and avoid the generation of invalid elements with negative determinant. The proposed approach reduces to direct ALE finite volume schemes if N = 0, while explicit direct ALE DG schemes are recovered in the case N = M.〈/p〉 〈p〉In order to stabilize the DG solution, an 〈em〉a priori〈/em〉 WENO-based limiting technique is employed, which makes use of the numerical solution inside the element under consideration and its neighbor cells to find a less oscillatory polynomial approximation. By using a 〈em〉modal basis〈/em〉 in a reference element, the evaluation of the oscillation indicators is carried out easily and efficiently, allowing higher order modes to be properly limited while leaving the zeroth-order mode untouched to ensure conservation.〈/p〉 〈p〉Numerical convergence rates for 2 ≤ N, M ≤ 4 are presented, as well as a wide set of benchmark test problems for hydrodynamics on moving and fixed unstructured meshes.〈/p〉 〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 45
    Publication Date: 2019
    Description: 〈p〉Publication date: November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Computers in Industry, Volume 112〈/p〉 〈p〉Author(s): Rui Miao, Yuntian Gao, Liang Ge, Zihang Jiang, Jie Zhang〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉To improve defect identification from eddy current detection signals for narrow lap welds, we propose a method that combines the continuous wavelet transform (CWT) with a convolutional neural network (CNN). First, a two-dimensional time-frequency diagram is generated from the one-dimensional eddy current signal through the CWT, and this time-frequency diagram is then used as the input of the CNN. To meet the real-time requirement of narrow lap weld defect recognition in practical inspection, a two-stage defect recognition model is further proposed: the first stage detects anomalies in the narrow lap weld based on CWT and CNN, and the second stage classifies and identifies the defects, again combining CWT and CNN. In a case study, the accuracy of our method is 96.94%, which is nearly 10% higher than the traditional method. Furthermore, the actual average detection time is only 2.4 s, making the proposed approach suitable for deployment in enterprises’ online operation.〈/p〉〈/div〉 (A small Python sketch of the CWT time-frequency step follows this record.)
    Print ISSN: 0166-3615
    Electronic ISSN: 1872-6194
    Topics: Computer Science , Technology
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
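    The first step of the pipeline, turning a one-dimensional signal into a two-dimensional time-frequency image, can be sketched with the PyWavelets package. The Morlet wavelet, the scale range, and the synthetic test signal below are assumptions for illustration only; the wavelet, scales, and CNN actually used in the paper are not reproduced here.

    import numpy as np
    import pywt

    fs = 1000.0                      # sampling frequency (Hz), assumed
    t = np.arange(0, 1.0, 1.0 / fs)
    # Synthetic "eddy current" trace: a chirp-like signal plus noise.
    signal = np.sin(2 * np.pi * (50 + 100 * t) * t) + 0.1 * np.random.randn(t.size)

    scales = np.arange(1, 128)
    coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1.0 / fs)

    # |coeffs| is a 2-D time-frequency map: rows = scales, columns = time samples.
    scalogram = np.abs(coeffs)
    print(scalogram.shape)           # (127, 1000) -> the image fed to the CNN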
  • 46
    Publication Date: 2019
    Description: 〈p〉Publication date: Available online 20 August 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics〈/p〉 〈p〉Author(s): Ali Zidane, Abbas Firoozabadi〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Non-planar fractures are often created in hydraulic fracturing. These irregular shape fractures may reduce the penetration into the formation; they may also improve the reservoir reach. Accurate flow simulation of two-phase compositional flows in domains with complex non-planar fractures is beyond the capabilities of current numerical models. In this work we present a higher-order numerical model for compositional two-phase flow in a domain with non-planar fractures. Fully unstructured gridding in 3D is a natural choice for description of geometry with irregular fracture shapes. We apply the concept of fracture cross-flow equilibrium (FCFE) in simulations of porous media flows with non-planar fractures. FCFE allows accurate flow and composition calculations at low CPU cost. Our implementation is in the context of the hybridized form of the mass conservative mixed finite element (MFE) and the higher-order discontinuous Galerkin (DG) method. In this work we introduce a simple and effective approach for design of non-planar fractures through the mesh interface that connects computer-aided-design (CAD) software to the mesh generator. In our algorithm we can simulate all ranges of fracture permeability accurately as opposed to other approaches where low permeability fractures affect the accuracy.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 47
    Publication Date: 2019
    Description: In this survey paper, we review various concepts of graph density, as well as associated theorems and algorithms. Our goal is motivated by the fact that, in many applications, it is a key algorithmic task to extract a densest subgraph from an input graph, according to some appropriate definition of graph density. While this problem has been the subject of active research for over half a century, with many proposed variants and solutions, new results still continuously emerge in the literature. This shows both the importance and the richness of the subject. We also identify some interesting open problems in the field. (A short Python sketch of one classical densest-subgraph heuristic follows this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
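    One concrete instance of the problem discussed above is the maximum average-degree densest subgraph, for which Charikar's greedy peeling procedure is a classical 2-approximation. The sketch below is a plain, unoptimized Python version of that peeling idea, included only to make one of the surveyed density notions concrete.

    def densest_subgraph(edges):
        """Greedy peeling 2-approximation for the max average-degree subgraph.

        edges: iterable of (u, v) pairs of an undirected graph.
        Returns (best_vertex_set, best_density) with density = |E| / |V|.
        """
        adj = {}
        for u, v in edges:
            adj.setdefault(u, set()).add(v)
            adj.setdefault(v, set()).add(u)

        nodes = set(adj)
        m = sum(len(nb) for nb in adj.values()) // 2
        best_density, best_set = m / len(nodes), set(nodes)

        while nodes:
            v = min(nodes, key=lambda x: len(adj[x]))   # peel a min-degree vertex
            m -= len(adj[v])
            for u in adj[v]:
                adj[u].discard(v)
            nodes.remove(v)
            del adj[v]
            if nodes and m / len(nodes) > best_density:
                best_density, best_set = m / len(nodes), set(nodes)
        return best_set, best_density

    # A 4-clique {1, 2, 3, 4} with a pendant vertex 5.
    edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]
    print(densest_subgraph(edges))   # the 4-clique, density 1.5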
  • 48
    Publication Date: 2019
    Description: 〈p〉Publication date: November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Computers & Geosciences, Volume 132〈/p〉 〈p〉Author(s): D. Hasterok, M. Gard, C.M.B. Bishop, D. Kelsey〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉The fundamental origins of metamorphic rocks as sedimentary or igneous are integral to the proper interpretation of a terrane’s tectonic and geodynamic evolution. In some cases, the protolith class cannot be determined from field relationships, texture, and/or compositional layering. In this study, we utilize machine learning to predict a metamorphic protolith from its major element chemistry, so that accurate interpretation of the geology may proceed when the origin is uncertain, or to improve confidence in field predictions. We survey the efficacy of several machine learning techniques to predict the protolith class (igneous or sedimentary) for whole rock geochemical analyses using 9 major oxides. The data are drawn from a global geochemical database with >533 000 geochemical analyses. In addition to metamorphic samples, igneous and sedimentary analyses are used to supplement the dataset based on their similar chemical distributions to their metamorphic counterparts. We train the classifiers on most of the data, retaining ~10% for post-training validation. We find that the RUSBoost algorithm performs best overall, achieving a true-positive rate of >95% and >85% for igneous- and sedimentary-derived samples, respectively. Even the traditionally difficult-to-differentiate metasedimentary and metaigneous rocks of granitic–granodioritic composition were consistently identified with a >75% success rate (92% for granite; 85% for granodiorite; 88% for wacke; 76% for arkose). The least correctly identified rock types were iron-rich shale (58%) and quartzolitic rocks (6%). These trained classifiers are able to classify metamorphic protoliths better than common discrimination methods, allowing for the appropriate interpretation of the chemical, physical, and tectonic contextual history of a rock. The preferred classifier is available as a MATLAB function that can be applied to a spreadsheet of geochemical analyses, returning a predicted class and estimated confidence score. We anticipate this classifier’s use as a cheap tool to aid geoscientists in accurate protolith prediction and to increase the size of global geochemical datasets where protolith information is ambiguous or not retained.〈/p〉〈/div〉 (A hedged Python sketch of a comparable RUSBoost workflow follows this record.)
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
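    The workflow described above (train an ensemble classifier on major-oxide chemistry, hold out roughly 10% for validation) can be approximated in Python with the RUSBoost implementation from the imbalanced-learn package, although the paper itself distributes a MATLAB function. The oxide list is taken from the abstract; the data and hyperparameters below are placeholders.

    import numpy as np
    from imblearn.ensemble import RUSBoostClassifier
    from sklearn.model_selection import train_test_split

    OXIDES = ["SiO2", "TiO2", "Al2O3", "FeO", "MgO", "CaO", "Na2O", "K2O", "P2O5"]

    # Placeholder data: rows = analyses of the 9 major oxides (wt%), labels 0/1
    # for igneous- vs sedimentary-derived protoliths.  Real work would load the
    # global geochemical database instead.
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 80, size=(2000, len(OXIDES)))
    y = (rng.random(2000) < 0.3).astype(int)          # imbalanced classes

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.10, random_state=0)         # ~10% kept for validation

    clf = RUSBoostClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))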
  • 49
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Computers & Geosciences, Volume 133〈/p〉 〈p〉Author(s): Georgios Panou〈/p〉 〈div xml:lang="en"〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉A new method to convert Cartesian to geodetic coordinates is presented. The geodetic longitude is computed by a simple formula, while the geodetic latitude and height are determined after computing the foot point of the normal line to a meridian ellipse. For this purpose, the simpler form of the “latitude equation” is used and the corresponding quartic equation is solved using Horner's scheme and the bisection method, which guarantees convergence. Care is given to error analysis when computing the geodetic latitude and height. The performance of the proposed numerical method is examined using an extensive test set of points. The results are also validated by comparison to the exact analytical method of Vermeille and the numerical method of Fukushima. We conclude that the presented method gives accurate results for all input points and arbitrary semiaxes of an oblate spheroid, and can be generalized to a triaxial ellipsoid.〈/p〉〈/div〉 〈/div〉
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 50
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Computers & Geosciences, Volume 133〈/p〉 〈p〉Author(s): S.J. Claessens〈/p〉 〈div xml:lang="en"〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉The derivation of algorithms for the computation of geodetic coordinates from 3D Cartesian coordinates has been a very active field of research among geodesists for more than forty years. Many authors have sought the most efficient method, i.e., the method that provides the fastest computational speed while still yielding sufficient accuracy for practical applications. The problem is a special case of a more general mathematical problem that has also been studied by researchers in other fields. This paper investigates the applicability of methods by Sampson (1982, 〈em〉Computer graphics and image processing〈/em〉, 18: 97–108) and Uteshev and Goncharova (2018, 〈em〉Journal of Computational and Applied Mathematics〈/em〉, 328: 232–251) to the computation of geodetic coordinates. Both methods have been modified to make them more suitable for this particular problem. The methods are compared to several commonly used geodetic methods in terms of accuracy and computational efficiency. It is found that a simple modification improves the accuracy of the methods by ~3 orders of magnitude, and the modified method of Uteshev and Goncharova (2018) achieves an accuracy of <0.1 mm anywhere on the surface of the Earth. The methods are especially efficient in the computation of ellipsoidal height. As an additional result of this study, a new formulation of the well-known method by Bowring (1976, 〈em〉Survey Review〈/em〉, 23: 323–327) is derived, and it is shown to improve the computation speed of Bowring's method by ~12%–27% compared to the conventional formulation.〈/p〉〈/div〉 〈/div〉 (A plain-Python sketch of the classical Bowring conversion follows this record.)
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
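    Since Bowring's (1976) method serves as the baseline in this comparison, a plain-Python version of its classical single-step form is sketched below. GRS80-like semi-axes are assumed, and this is the textbook formulation rather than the reformulated variant derived in the paper.

    import math

    def cartesian_to_geodetic(X, Y, Z, a=6378137.0, b=6356752.314140):
        """Classical Bowring (1976) approximation: Cartesian -> (lat, lon, height)."""
        e2 = (a * a - b * b) / (a * a)          # first eccentricity squared
        ep2 = (a * a - b * b) / (b * b)         # second eccentricity squared
        p = math.hypot(X, Y)
        theta = math.atan2(Z * a, p * b)        # auxiliary (parametric) latitude
        lat = math.atan2(Z + ep2 * b * math.sin(theta) ** 3,
                         p - e2 * a * math.cos(theta) ** 3)
        lon = math.atan2(Y, X)
        N = a / math.sqrt(1.0 - e2 * math.sin(lat) ** 2)   # prime vertical radius
        h = p / math.cos(lat) - N
        return math.degrees(lat), math.degrees(lon), h

    # Round-trip check: build a Cartesian point from known geodetic coordinates.
    a, b = 6378137.0, 6356752.314140
    lat0, lon0, h0 = math.radians(45.0), math.radians(10.0), 250.0
    e2 = (a * a - b * b) / (a * a)
    N0 = a / math.sqrt(1.0 - e2 * math.sin(lat0) ** 2)
    X = (N0 + h0) * math.cos(lat0) * math.cos(lon0)
    Y = (N0 + h0) * math.cos(lat0) * math.sin(lon0)
    Z = (N0 * (1.0 - e2) + h0) * math.sin(lat0)
    print(cartesian_to_geodetic(X, Y, Z))   # ~ (45.0, 10.0, 250.0)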
  • 51
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 96〈/p〉 〈p〉Author(s): Rameswar Panda, Amran Bhuiyan, Vittorio Murino, Amit K. Roy-Chowdhury〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Existing approaches for person re-identification have concentrated on either designing the best feature representation or learning optimal matching metrics in a static setting where the number of cameras is fixed in a network. Most approaches have neglected the dynamic and open-world nature of the re-identification problem, where one or multiple new cameras may be temporarily on-boarded into an existing system to get additional information or added to expand an existing network. To address this very practical problem, we propose a novel approach for adapting existing multi-camera re-identification frameworks with limited supervision. First, we formulate a domain perceptive re-identification method based on the geodesic flow kernel that can effectively find the best source camera (already installed) to adapt with newly introduced target camera(s), without requiring a very expensive training phase. Second, we introduce a transitive inference algorithm for re-identification that can exploit the information from the best source camera to improve the accuracy across other camera pairs in a network of multiple cameras. Third, we develop a target-aware sparse prototype selection strategy for finding an informative subset of source camera data for data-efficient learning in resource constrained environments. Our approach can greatly increase the flexibility and reduce the deployment cost of new cameras in many real-world dynamic camera networks. Extensive experiments demonstrate that our approach significantly outperforms state-of-the-art unsupervised alternatives whilst being extremely efficient to compute.〈/p〉〈/div〉
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 52
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 96〈/p〉 〈p〉Author(s): Chengzu Bai, Ren Zhang, Zeshui Xu, Rui Cheng, Baogang Jin, Jian Chen〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Kernel entropy component analysis (KECA) is a recently proposed dimensionality reduction approach, which has shown superiority in many pattern analysis algorithms previously based on principal component analysis (PCA). The optimized KECA (OKECA) is a state-of-the-art extension of KECA and can return projections retaining more expressive power than KECA. However, OKECA is not robust to outliers and has high computational complexity attributed to its inherent use of the L2-norm. To tackle these two problems, we propose a new variant of KECA, namely L1-norm-based KECA (L1-KECA), for data transformation and feature extraction. L1-KECA attempts to find a new kernel decomposition matrix such that the extracted features store the maximum information potential, which is measured by the L1-norm. Accordingly, we present a greedy iterative algorithm which has much faster convergence than OKECA's. Additionally, L1-KECA retains OKECA's capability to obtain accurate density estimation with very few features (just one or two). Moreover, a new semi-supervised L1-KECA classifier is developed and employed for data classification. Extensive experiments on different real-world datasets validate that our model is superior to most existing KECA-based and PCA-based approaches. The code has also been made publicly available.〈/p〉〈/div〉
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 53
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 96〈/p〉 〈p〉Author(s): Samitha Herath, Basura Fernando, Mehrtash Harandi〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉In this paper we raise two important questions: “〈strong〉1.〈/strong〉 Is temporal information beneficial in recognizing actions from still images? 〈strong〉2.〈/strong〉 Do we know how to take the maximum advantage of it?”. To answer these questions, we propose a novel transfer learning problem, Temporal To Still Image Learning (〈em〉i.e.〈/em〉, T2SIL), where we learn to derive temporal information from still images. Thereafter, we use a two-stream model where still image action predictions are fused with the derived temporal predictions. In T2SIL, the knowledge transfer occurs from temporal representations of videos (〈em〉e.g.〈/em〉, Optical-flow, Dynamic Image representations) to still action images. Along with T2SIL, we propose a new still image action dataset and a video dataset sharing the same set of classes. We explore three well established transfer learning frameworks (〈em〉i.e.〈/em〉, GANs, Embedding learning and Teacher Student Networks (TSNs)) as the temporal knowledge transfer method. The use of derived temporal information from our TSN and Embedding learning improves still image action recognition.〈/p〉〈/div〉
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 54
    Publication Date: 2019
    Description: 〈p〉Publication date: January 2020〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 97〈/p〉 〈p〉Author(s): Pooya Ashtari, Fateme Nateghi Haredasht, Hamid Beigy〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Centroid-based methods including k-means and fuzzy c-means are known as effective and easy-to-implement approaches to clustering purposes in many applications. However, these algorithms cannot be directly applied to supervised tasks. This paper thus presents a generative model extending the centroid-based clustering approach to be applicable to classification and regression tasks. Given an arbitrary loss function, the proposed approach, termed Supervised Fuzzy Partitioning (SFP), incorporates label information into its objective function through a surrogate term penalizing the empirical risk. Entropy-based regularization is also employed to fuzzify the partition and to weight features, enabling the method to capture more complex patterns, identify significant features, and yield better performance when facing high-dimensional data. An iterative algorithm based on a block coordinate descent scheme is formulated to efficiently find a local optimum. Extensive classification experiments on synthetic, real-world, and high-dimensional datasets demonstrate that the predictive performance of SFP is competitive with state-of-the-art algorithms such as SVM and random forest. SFP has a major advantage over such methods, in that it not only leads to a flexible, nonlinear model but also can exploit any convex loss function in the training phase without compromising computational efficiency.〈/p〉〈/div〉
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 55
    Publication Date: 2019
    Description: Military named entity recognition (MNER) is one of the key technologies in military information extraction. Traditional methods for the MNER task rely on cumbersome feature engineering and specialized domain knowledge. In order to solve this problem, we propose a method employing a bidirectional long short-term memory (BiLSTM) neural network with a self-attention mechanism to identify military entities automatically. We obtain distributed vector representations of the military corpus by unsupervised learning, and the BiLSTM model combined with the self-attention mechanism is adopted to fully capture the contextual information carried by the character vector sequence. The experimental results show that the self-attention mechanism can effectively improve the performance of the MNER task. The F-scores for military document and online military text identification were 90.15% and 89.34%, respectively, better than those of other models. (A minimal PyTorch sketch of a BiLSTM tagger with self-attention follows this record.)
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
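    A minimal PyTorch sketch of the model family described above (character embeddings, a BiLSTM encoder, a simple additive self-attention layer, and per-character tag scores) is given below. The dimensions, vocabulary size, and tag count are placeholders, and components the authors may use, such as pretrained embeddings or a CRF output layer, are omitted.

    import torch
    import torch.nn as nn

    class BiLSTMSelfAttnTagger(nn.Module):
        """Character-level BiLSTM tagger with a simple additive self-attention layer."""

        def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
            self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)        # attention score per position
            self.out = nn.Linear(4 * hidden, num_tags)  # [token state ; sentence context]

        def forward(self, char_ids):                    # (batch, seq_len)
            h, _ = self.lstm(self.embed(char_ids))      # (batch, seq_len, 2*hidden)
            scores = self.attn(h).squeeze(-1)           # (batch, seq_len)
            weights = torch.softmax(scores, dim=-1)
            context = torch.einsum("bs,bsh->bh", weights, h)   # attended context
            context = context.unsqueeze(1).expand_as(h)
            return self.out(torch.cat([h, context], dim=-1))   # per-token tag logits

    model = BiLSTMSelfAttnTagger(vocab_size=5000, num_tags=7)   # e.g. BIO tag set
    dummy = torch.randint(1, 5000, (2, 30))                     # batch of 2 sentences
    print(model(dummy).shape)                                   # torch.Size([2, 30, 7])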
  • 56
    Publication Date: 2019
    Description: This article empirically demonstrates the impacts of truthfully sharing forecast information and using forecast combinations in a fast-moving-consumer-goods (FMCG) supply chain. Although it is known a priori that sharing information improves the overall efficiency of a supply chain, information such as pricing or promotional strategy is often kept proprietary for competitive reasons. In this regard, it is shown that simply sharing the retail-level forecasts (which does not reveal the exact business strategy, due to the effect of omni-channel sales) yields nearly all the benefits of sharing all pertinent information that influences FMCG demand. In addition, various forecast combination methods are used to further stabilize the forecasts in situations where multiple forecasting models are used during operation. In other words, it is shown that combining forecasts is less risky than “betting” on any single component model. (A small Python sketch of simple forecast combination rules follows this record.)
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
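    As a small illustration of the combination idea, the sketch below blends several component forecasts either with equal weights or with weights inversely proportional to each model's recent error. The inverse-error rule is just one common choice and is not necessarily the combination method evaluated in the article.

    import numpy as np

    def combine_forecasts(forecasts, past_errors=None):
        """Combine component forecasts (rows = models, columns = horizons).

        If past_errors (e.g. recent MAE per model) is given, weight each model by
        the inverse of its error; otherwise fall back to a simple average.
        """
        forecasts = np.asarray(forecasts, dtype=float)
        if past_errors is None:
            weights = np.full(len(forecasts), 1.0 / len(forecasts))
        else:
            inv = 1.0 / np.asarray(past_errors, dtype=float)
            weights = inv / inv.sum()
        return weights @ forecasts, weights

    models = [[105, 110, 120],    # e.g. exponential smoothing
              [ 98, 112, 118],    # e.g. ARIMA
              [110, 115, 125]]    # e.g. regression with promotion indicators
    combo, w = combine_forecasts(models, past_errors=[4.0, 2.0, 6.0])
    print(np.round(combo, 1), np.round(w, 2))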
  • 57
    Publication Date: 2019
    Description: An important problem in machine learning is that, when using more than two labels, it is very difficult to construct and optimize a group of learning functions that are still useful when the prior distribution of instances is changed. To resolve this problem, semantic information G theory, Logical Bayesian Inference (LBI), and a group of Channel Matching (CM) algorithms are combined to form a systematic solution. A semantic channel in G theory consists of a group of truth functions or membership functions. In comparison with the likelihood functions, Bayesian posteriors, and Logistic functions that are typically used in popular methods, membership functions are more convenient to use, providing learning functions that do not suffer from the above problem. In Logical Bayesian Inference (LBI), every label is independently learned. For multilabel learning, we can directly obtain a group of optimized membership functions from a large enough sample with labels, without preparing different samples for different labels. Furthermore, a group of Channel Matching (CM) algorithms is developed for machine learning. For the Maximum Mutual Information (MMI) classification of three classes with Gaussian distributions in a two-dimensional feature space, only 2–3 iterations are required for the mutual information between three classes and three labels to surpass 99% of the MMI for most initial partitions. For mixture models, the Expectation-Maximization (EM) algorithm is improved to form the CM-EM algorithm, which can outperform the EM algorithm when the mixture ratios are imbalanced, or when local convergence exists. The CM iteration algorithm needs to be combined with neural networks for MMI classification in high-dimensional feature spaces. LBI needs further investigation for the unification of statistics and logic.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 58
    Publication Date: 2019
    Description: In an era of accelerating digitization and advanced big data analytics, harnessing quality data and insights will enable innovative research methods and management approaches. Among others, Artificial Intelligence Imagery Analysis has recently emerged as a new method for analyzing the content of large amounts of pictorial data. In this paper, we provide background information on Artificial Intelligence Imagery Analysis and outline its application, arguing that it constitutes a profound improvement over previous methods that have mostly relied on manual work by humans. We discuss its applications for research and practice and provide an example of its use for research. In the case study, we employed Artificial Intelligence Imagery Analysis to decompose and assess thumbnail images in the context of marketing and media research, and show how properly assessed and designed thumbnail images promote the consumption of online videos. We conclude the paper with a discussion of the potential of Artificial Intelligence Imagery Analysis for research and practice across disciplines.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 59
    Publication Date: 2019
    Description: 〈p〉Publication date: January 2020〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 97〈/p〉 〈p〉Author(s): Younghoon Kim, Hyungrok Do, Seoung Bum Kim〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Graph-based clustering is an efficient method for identifying clusters in local and nonlinear data patterns. Among the existing methods, spectral clustering is one of the most prominent algorithms. However, this method is vulnerable to noise and outliers. This study proposes a robust graph-based clustering method that removes data nodes of relatively low density. The proposed method calculates a pseudo-density from a similarity matrix and reconstructs it using a sparse regularization model. In this process, noise and outlying points are identified and removed. Unlike previous edge-cutting-based methods, the proposed method is robust to noise while detecting clusters because it cuts out irrelevant nodes. We use simulated and real-world data to demonstrate the usefulness of the proposed method by comparing it to existing methods in terms of clustering accuracy and robustness to noisy data. The comparison results confirm that the proposed method outperforms the alternatives.〈/p〉〈/div〉
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 60
    Publication Date: 2019
    Description: Human eye movement is one of the most important functions for understanding our surroundings. When a human eye processes a scene, it quickly focuses on dominant parts of the scene, a process commonly known as visual saliency detection or visual attention prediction. Recently, neural networks have been used to predict visual saliency. This paper proposes a deep learning encoder-decoder architecture, based on a transfer learning technique, to predict visual saliency. In the proposed model, visual features are extracted through convolutional layers from raw images to predict visual saliency. In addition, the proposed model uses the VGG-16 network for semantic segmentation, which uses a pixel classification layer to predict the categorical label for every pixel in an input image. The proposed model is applied to several datasets, including TORONTO, MIT300, MIT1003, and DUT-OMRON, to illustrate its efficiency. The results of the proposed model are quantitatively and qualitatively compared to classic and state-of-the-art deep learning models. Using the proposed deep learning model, a global accuracy of up to 96.22% is achieved for the prediction of visual saliency.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 61
    Publication Date: 2019
    Description: The skyline query and its variant queries are useful functions in the early stages of a knowledge-discovery process. The skyline query and its variants select a set of important objects, which are better than other common objects in the dataset. In order to handle big data, such knowledge-discovery queries must be computed in parallel distributed environments. In this paper, we consider an efficient parallel algorithm for the “K-skyband query” and the “top-k dominating query”, which are popular variants of the skyline query. We propose a method for computing both queries simultaneously in MapReduce, a popular parallel distributed framework for processing “big data” problems. Our extensive evaluation results validate the effectiveness and efficiency of the proposed algorithm on both real and synthetic datasets. (A sequential Python sketch of the skyline and K-skyband computation follows this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
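    To make the underlying dominance notion concrete, the sketch below gives a plain, sequential Python computation of the skyline and the K-skyband, assuming minimization in every dimension. The paper's contribution, a MapReduce parallelization of these queries together with the top-k dominating query, is not reproduced here.

    def dominates(p, q):
        """p dominates q if p is no worse in every dimension and strictly better in one."""
        return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

    def k_skyband(points, k):
        """Points dominated by fewer than k other points (k = 1 gives the skyline)."""
        result = []
        for p in points:
            dominated_by = sum(1 for q in points if q != p and dominates(q, p))
            if dominated_by < k:
                result.append(p)
        return result

    pts = [(1, 9), (2, 7), (4, 4), (7, 2), (9, 1), (5, 6), (8, 8)]
    print("skyline  :", k_skyband(pts, 1))
    print("2-skyband:", k_skyband(pts, 2))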
  • 62
    Publication Date: 2019
    Description: A generalization of Ding’s construction is proposed that employs as a defining set the collection of the s-th powers (s ≥ 2) of all nonzero elements in GF(p^m), where p ≥ 2 is prime. Some of the resulting codes are optimal or near-optimal and include projective codes over GF(4) that give rise to optimal or near-optimal quantum codes. In addition, the codes yield interesting combinatorial structures, such as strongly regular graphs and block designs. (A short Python sketch of the defining set in the prime-field case follows this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
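    For intuition, the defining set used in the construction can be listed explicitly in the prime-field case m = 1, where GF(p) reduces to integer arithmetic modulo p. The sketch below prints the set of s-th powers of the nonzero elements; the general case over GF(p^m) requires genuine finite-field arithmetic and is not shown.

    def sth_powers(p, s):
        """Set D = { x^s : x a nonzero element of GF(p) } for a prime p (the m = 1 case)."""
        return sorted({pow(x, s, p) for x in range(1, p)})

    # Quadratic residues modulo 11, i.e. s = 2, p = 11.
    print(sth_powers(11, 2))   # [1, 3, 4, 5, 9]

    # Cubes of the nonzero elements of GF(13).
    print(sth_powers(13, 3))   # [1, 5, 8, 12]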
  • 63
    Publication Date: 2019
    Description: The exorbitant increase in the computational complexity of modern video coding standards, such as High Efficiency Video Coding (HEVC), is a compelling challenge for resource-constrained consumer electronic devices. For instance, the brute force evaluation of all possible combinations of available coding modes and the quadtree-based coding structure in HEVC to determine the optimum set of coding parameters for a given content demands a substantial amount of computational and energy resources. Thus, the resource requirements for real-time operation of HEVC have become a contributing factor towards the Quality of Experience (QoE) of the end users of emerging multimedia and future internet applications. In this context, this paper proposes a content-adaptive Coding Unit (CU) size selection algorithm for HEVC intra-prediction. The proposed algorithm builds content-specific weighted Support Vector Machine (SVM) models in real time during the encoding process, to provide an early estimate of CU size for a given content, avoiding the brute force evaluation of all possible coding mode combinations in HEVC. The experimental results demonstrate an average encoding time reduction of 52.38%, with an average Bjøntegaard Delta Bit Rate (BDBR) increase of 1.19% compared to the HM16.1 reference encoder. Furthermore, perceptual visual quality assessments conducted through the Video Quality Metric (VQM) show minimal visual quality impact on the reconstructed videos of the proposed algorithm compared to state-of-the-art approaches.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 64
    Publication Date: 2019
    Description: A crowdsourcing contest is one of the most popular modes of crowdsourcing and is also an important tool for an enterprise to implement open innovation. The solvers’ active participation is one of the major reasons for the success of crowdsourcing contests. Research on solvers’ participation behavior is helpful in understanding the sustainability and incentives of solvers’ participation in an online crowdsourcing platform. How to attract more solvers to participate and put in more effort is therefore the focus of researchers. In this regard, previous studies mainly used the submission quantity to measure solvers’ participation behavior and lacked an effective measure of the degree of participation effort expended by a solver. For the first time, we use solvers’ participation time as a dependent variable to measure their effort in a crowdsourcing contest, thereby incorporating participation time into research on solver participation. With data from Taskcn.com, we analyze how participation time is affected by four key factors: task design, task description, task process, and environment. We find that, first, for task design, higher task rewards attract solvers to invest more time in the participation process, and the relationship between participation time and task duration is inverted U-shaped. Second, for task description, the length of the task description has a negative impact on participation time, and the presence of a task description attachment positively influences participation time. Third, for the task process, communication and supplementary explanations in a crowdsourcing process positively affect participation time. Fourth, for environmental factors, the task density of the crowdsourcing platform and the market price of all crowdsourcing contests have negative and positive effects on participation time, respectively.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 65
    Publication Date: 2019
    Description: 〈p〉Publication date: November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Artificial Intelligence, Volume 276〈/p〉 〈p〉Author(s): Yuren Zhou, Xiaoyu He, Yi Xiang, Shaowei Cai〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Multi- and many-objective optimization problems have wide applications in the real world, and they have received growing attention from the evolutionary computation community. To promote algorithm development in this area, numerous studies have been devoted to designing multi- and many-objective test problems. Most of these studies focus on handling complicated Pareto fronts (PFs), and the impact of the Pareto sets (PSs) has not been well studied, although complicated PSs are prevalent in the real world. This paper presents a set of scalable test problems according to a new principle, which considers the geometrical properties of both the PF and the PS. A position function with a spherical form is proposed to introduce non-linear variable dependencies into the PS, so as to simulate the variable dependencies in real-world problems. According to the proposed principle, the first 〈em〉m〈/em〉 (i.e., the number of objectives) decision variables are used to form the surface of a unit hypersphere, while the remaining variables are designed to optimize a certain distance function. A set of test problems is generated by the proposed principle and then used to investigate six representative algorithms. The experimental results indicate that the proposed test problems pose considerable difficulties to existing algorithms, calling for new algorithms capable of handling complicated PFs and PSs simultaneously.〈/p〉〈/div〉
    Print ISSN: 0004-3702
    Electronic ISSN: 1872-7921
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 66
    Publication Date: 2019
    Description: Balanced partitioning is often a crucial first step in solving large-scale graph optimization problems. In some cases, a big graph can be chopped into pieces that fit on one machine and are processed independently before stitching the results together, which leads to some suboptimality arising from the interaction among different pieces. In other cases, links between different parts may show up in the running time and/or network communications cost, hence the desire to have a small cut size. We study a distributed balanced-partitioning problem where the goal is to partition the vertices of a given graph into k pieces so as to minimize the total cut size. Our algorithm is composed of a few steps that are easily implementable in distributed computation frameworks such as MapReduce. The algorithm first embeds nodes of the graph onto a line, and then processes nodes in a distributed manner guided by the linear embedding order. We examine various ways to find the first embedding, for example, via a hierarchical clustering or Hilbert curves. Then we apply four different techniques: local swaps, minimum cuts on the boundaries of partitions, contraction, and dynamic programming. In our empirical study, we compare the above techniques with each other and with previous work in distributed graph algorithms, for example a label-propagation method, FENNEL, and Spinner. We report our results on both a private map graph and several public social networks, and show that our results beat previous distributed algorithms: for instance, compared to the label-propagation algorithm, we report an improvement of 15–25% in the cut value. We also observe that our algorithms admit scalable distributed implementation for any number of partitions. Finally, we explain three applications of this work at Google: (1) Balanced partitioning is used to route multi-term queries to different replicas in the Google Search backend in a way that reduces the cache miss rates by ≈0.5%, which leads to a double-digit gain in throughput of production clusters. (2) Applied to the Google Maps Driving Directions, balanced partitioning minimizes the number of cross-shard queries with the goal of saving CPU usage. This system achieves load balancing by dividing the world graph into several “shards”. Live experiments demonstrate an ≈40% drop in the number of cross-shard queries when compared to a standard geography-based method. (3) In a job scheduling problem for our data centers, we use balanced partitioning to evenly distribute the work while minimizing the amount of communication across geographically distant servers. In fact, the hierarchical nature of our solution goes well with the layering of data center servers, where certain machines are closer to each other and have faster links to one another. (A toy Python sketch of the embed-and-split step follows this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
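    The core of the pipeline, cutting a linear embedding of the vertices into k contiguous pieces and measuring the resulting cut, can be sketched in a few lines of Python. The embedding below is simply a given node order, standing in for the hierarchical-clustering or Hilbert-curve embeddings discussed above, and the local-swap, minimum-cut and dynamic-programming refinements are omitted.

    def partition_by_embedding(order, k):
        """Split an embedded node order into k nearly equal contiguous pieces."""
        n = len(order)
        bounds = [round(i * n / k) for i in range(k + 1)]
        assignment = {}
        for part in range(k):
            for node in order[bounds[part]:bounds[part + 1]]:
                assignment[node] = part
        return assignment

    def cut_size(edges, assignment):
        """Number of edges whose endpoints fall in different parts."""
        return sum(1 for u, v in edges if assignment[u] != assignment[v])

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (1, 4)]
    order = [0, 1, 2, 5, 4, 3]          # a hypothetical linear embedding
    parts = partition_by_embedding(order, k=2)
    print(parts, "cut =", cut_size(edges, parts))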
  • 67
    Publication Date: 2019
    Description: 〈p〉Publication date: October 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Computers in Industry, Volume 111〈/p〉 〈p〉Author(s): Igor Manojlović, Goran Švenda, Aleksandar Erdeljan, Milan Gavrić〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉System analysis and real-time operations in power distribution utilities require an accurate but compact load data model created on the basis of a large number of consumers’ measurements modeled as high-dimensional time series. This paper proposes an algorithm for grouping time series with similar load patterns and extracting characteristic representatives of loads from the obtained groups, resulting in a reduced load data model size. The proposed Time Series Grouping Algorithm combines dimensionality reduction, both partitional and hierarchical clustering, and cluster validation to group time series into an optimal number of clusters based on simple parametric settings. The usefulness of the proposed algorithm is proven in a case study implemented in the R language. The case study was conducted on real smart meter data from three distribution networks: one North American and the other two European. Results of the case study confirm that the proposed solution achieves high cluster validity and short execution time compared to related algorithms. Therefore, the article’s main contribution is load pattern recognition support that is convenient for applications in distribution management systems.〈/p〉〈/div〉 (A compact scikit-learn sketch of such a grouping pipeline follows this record.)
    Print ISSN: 0166-3615
    Electronic ISSN: 1872-6194
    Topics: Computer Science , Technology
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
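    A compact approximation of such a pipeline (dimensionality reduction, clustering, and a validity index used to select the number of groups) can be written with scikit-learn as below. PCA, k-means, and the silhouette score stand in for whichever specific techniques the proposed algorithm combines, and the load profiles are synthetic.

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans
    from sklearn.metrics import silhouette_score

    rng = np.random.default_rng(1)
    # Synthetic daily load profiles (96 quarter-hour readings per consumer).
    base = np.stack([np.sin(np.linspace(0, 2 * np.pi, 96) + phase)
                     for phase in (0.0, 1.5, 3.0)])
    profiles = np.vstack([base[i] + 0.1 * rng.standard_normal((40, 96))
                          for i in range(3)])          # 120 consumers, 3 patterns

    reduced = PCA(n_components=10).fit_transform(profiles)

    best_k, best_score, best_labels = None, -1.0, None
    for k in range(2, 7):                              # pick k by cluster validity
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(reduced)
        score = silhouette_score(reduced, labels)
        if score > best_score:
            best_k, best_score, best_labels = k, score, labels

    # Characteristic representative of each group: the mean load profile.
    representatives = np.array([profiles[best_labels == c].mean(axis=0)
                                for c in range(best_k)])
    print(best_k, representatives.shape)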
  • 68
    Publication Date: 2019
    Description: 〈p〉Publication date: 1 December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 398〈/p〉 〈p〉Author(s): Martin Pollack, Michele Pütz, Daniele L. Marchisio, Michael Oevermann, Christian Hasse〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉 〈p〉The evolution of polydisperse systems is governed by population balance equations. A group of efficient solution approaches are the moment methods, which do not solve for the number density function (NDF) directly but rather for a set of its moments. While this is computationally efficient, a specific challenge arises when describing the fluxes across a boundary in phase space for the disappearance of elements, the so-called zero-flux. The main difficulty is the missing NDF information at the boundary, which most moment methods cannot provide. Relevant physical examples are evaporating droplets, soot oxidation or particle dissolution.〈/p〉 〈p〉In general, this issue can be solved by reconstructing the NDF close to the boundary. However, this was previously only achieved with univariate approaches, i.e., considering only a single internal variable. Many physical problems are insufficiently described by univariate population balance equations; e.g., droplets in sprays often require the temperature or the velocity to be internal coordinates in addition to the size.〈/p〉 〈p〉In this paper, we propose an algorithm which provides an efficient multivariate approach to calculate the zero-fluxes. The algorithm employs the Extended Quadrature Method of Moments (EQMOM) with Beta and Gamma kernel density functions for the marginal NDF reconstruction and a polynomial or spline for the other conditional dimensions. This combination allows the entire multivariate NDF to be reconstructed and, based on this, expressions for the disappearance flux are derived. An algorithm is proposed for the whole moment inversion and reconstruction process. It is validated against a suite of test cases with increasing complexity. The influence of the number of kernel density functions and the configuration of the polynomials and splines on the accuracy is discussed. Finally, the associated computational costs are evaluated.〈/p〉 〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 69
    Publication Date: 2019
    Description: 〈p〉Publication date: 1 December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 398〈/p〉 〈p〉Author(s): Antoine Vermeil de Conchard, Huina Mao, Romain Rumpler〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Effective treatment of unbounded domains using artificial truncating boundaries is essential in numerical simulations, e.g., those using the Finite Element Method (FEM). Among these treatments, Perfectly Matched Layers (PML) have proved to be particularly efficient and flexible. However, an efficient handling of frequency sweeps is not trivial with such absorbing layers since the formulation inherently contains coupled space- and frequency-dependent terms. With the FEM, this may imply regenerating the system matrices at each step of the frequency sweep. In this paper, an approximation is proposed in order to allow for efficient frequency sweeps. The performance and robustness of the proposed approximation are demonstrated on 2D and 3D acoustic cases. A generic, robust way to truncate the acoustic domain efficiently is also proposed and tested on a range of test cases and for different frequency regions.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 70
    Publication Date: 2019
    Description: Analyzing the structure of a social network helps in gaining insights into interactions and relationships among users while revealing the patterns of their online behavior. Network centrality is a metric of the importance of a node in a network, which allows revealing the structural patterns and morphology of networks. We propose a distributed computing approach for calculating the network centrality value of each user using MapReduce on the Hadoop platform, which allows faster and more efficient computation compared to a conventional implementation. A distributed approach is scalable and helps in efficient computation of large-scale datasets, such as social network data. The proposed approach improves the calculation performance of degree centrality by 39.8%, closeness centrality by 40.7% and eigenvalue centrality by 41.1% using a Twitter dataset. (A toy single-machine sketch of the map/reduce decomposition for degree centrality follows this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
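    The map/reduce decomposition for degree centrality is simple enough to imitate in a few lines of single-machine Python: the map phase emits one record per edge endpoint and the reduce phase sums the counts per user. This is only a sequential illustration of the decomposition, not the Hadoop implementation evaluated in the article, and the toy edge list is made up.

    from collections import defaultdict
    from itertools import chain

    edges = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"), ("dave", "alice")]

    # Map phase: for every edge emit (user, 1) for both endpoints.
    mapped = chain.from_iterable(((u, 1), (v, 1)) for u, v in edges)

    # Shuffle + reduce phase: group by user and sum the counts.
    degree = defaultdict(int)
    for user, count in mapped:
        degree[user] += count

    n = len(degree)
    # Normalized degree centrality: degree / (n - 1).
    centrality = {user: d / (n - 1) for user, d in degree.items()}
    print(sorted(centrality.items(), key=lambda kv: -kv[1]))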
  • 71
    Publication Date: 2019
    Description: Deep neural networks are successful learning tools for building nonlinear models. However, a robust deep learning-based classification model needs a large dataset. Indeed, these models are often unstable when they use small datasets. To solve this issue, which is particularly critical in light of the possible clinical applications of these predictive models, researchers have developed approaches such as virtual sample generation. Virtual sample generation significantly improves learning and classification performance when working with small samples. The main objective of this study is to evaluate the ability of the proposed virtual sample generation to overcome the small sample size problem, which is a feature of the automated detection of a neurodevelopmental disorder, namely autism spectrum disorder. Results show that our method enhances diagnostic accuracy from 84% to 95% using virtual samples generated on the basis of five actual clinical samples. The present findings show the feasibility of using the proposed technique to improve classification performance even in cases of clinical samples of limited size. Accounting for concerns in relation to small sample sizes, our technique represents a meaningful step forward in terms of pattern recognition methodology, particularly when applied to diagnostic classifications of neurodevelopmental disorders. In addition, the proposed technique was tested on other available benchmark datasets. The experimental outcomes showed that the accuracy of the classification that used virtual samples was superior to that obtained using the original training data without virtual samples. (A small Python sketch of a generic noise-based virtual sample scheme follows this record.)
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 72
    Publication Date: 2019
    Description: 〈p〉Publication date: January 2020〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 97〈/p〉 〈p〉Author(s): Qiong Wang, Lu Zhang, Wenbin Zou, Kidiyo Kpalma〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉In this paper, we present a novel method for salient object detection in videos. Salient object detection methods based on background prior may miss the salient region when the salient object touches the frame borders. To solve this problem, we propose to detect the whole salient object via the addition of virtual borders. A guided filter is then applied to the temporal output to integrate the spatial edge information for a better detection of the salient object edges. Finally, a global spatio-temporal saliency map is obtained by combining the spatial saliency map and the temporal saliency map according to their entropy. The proposed method is assessed on three popular datasets (Fukuchi, FBMS and VOS) and compared to several state-of-the-art methods. The experimental results show that the proposed approach outperforms the tested methods.〈/p〉〈/div〉
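    The exact entropy-based weighting is not given in the abstract; the sketch below assumes a simple inverse-entropy weighting of the two maps (a more concentrated map gets a larger weight), with all names, parameters and the random stand-in maps being illustrative only.
```python
# Hedged sketch of fusing spatial and temporal saliency maps with entropy-based
# weights (lower entropy = more concentrated map = larger weight).
import numpy as np

def map_entropy(saliency, bins=64):
    """Shannon entropy of a saliency map's intensity histogram."""
    hist, _ = np.histogram(saliency, bins=bins, range=(0.0, 1.0))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def fuse(spatial, temporal, eps=1e-12):
    """Weight each map by the inverse of its entropy and renormalise."""
    w_s = 1.0 / (map_entropy(spatial) + eps)
    w_t = 1.0 / (map_entropy(temporal) + eps)
    return (w_s * spatial + w_t * temporal) / (w_s + w_t)

rng = np.random.default_rng(1)
spatial = rng.random((60, 80))    # stand-ins for per-pixel saliency in [0, 1]
temporal = rng.random((60, 80))
global_map = fuse(spatial, temporal)
print(global_map.shape, float(global_map.min()), float(global_map.max()))
```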
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 73
    Publication Date: 2019
    Description: In this paper, I first review signal detection theory (SDT) approaches to perception, and then discuss why it is thought that SDT theory implies that increasing attention improves performance. Our experiments have shown, however, that this is not necessarily true. Subjects had either focused attention on two of four possible locations in the visual field, or diffused attention to all four locations. The stimuli (offset letters), locations, conditions, and tasks were all known in advance, responses were forced-choice, subjects were properly instructed and motivated, and instructions were always valid—conditions which should optimize signal detection. Relative to diffusing attention, focusing attention indeed benefitted discrimination of forward from backward pointing Es. However, focusing made it harder to identify a randomly chosen one of 20 letters. That focusing can either aid or disrupt performance, even when cues are valid and conditions are idealized, is surprising, but it can also be explained by SDT, as shown here. These results warn the experimental researcher not to confuse focusing attention with enhancing performance, and warn the modeler not to assume that SDT is unequivocal.
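    As background for readers unfamiliar with SDT, the snippet below computes the standard sensitivity and criterion indices from hit and false-alarm counts; the counts are invented for illustration and are not data from the study.
```python
# Standard signal detection theory (SDT) indices from hit and false-alarm counts:
# sensitivity d' = z(H) - z(FA) and criterion c = -(z(H) + z(FA)) / 2.
# The counts below are invented for illustration, not data from the study.
from statistics import NormalDist

z = NormalDist().inv_cdf   # inverse of the standard normal CDF

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion), with a log-linear correction so that
    rates of exactly 0 or 1 do not produce infinite z-scores."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_h, z_fa = z(h), z(fa)
    return z_h - z_fa, -(z_h + z_fa) / 2.0

d_prime, criterion = sdt_indices(hits=40, misses=10, false_alarms=12, correct_rejections=38)
print(f"d' = {d_prime:.2f}, c = {criterion:.2f}")
```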
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 74
    Publication Date: 2019
    Description: 〈p〉Publication date: 15 November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 397〈/p〉 〈p〉Author(s): Maxime Theillard, David Saintillan〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉We present a new framework for the efficient simulation of the dynamics of active fluids in complex two- and three-dimensional microfluidic geometries. Focusing on the case of a suspension of microswimmers such as motile bacteria, we adopt a continuum mean-field model based on partial differential equations for the evolution of the concentration, polarization and nematic tensor fields, which are nonlinearly coupled to the Navier-Stokes equations for the fluid flow driven by internal active stresses. A level set method combined with an adaptive mesh refinement scheme on Quad-/Octree grids is used to capture complex domain shapes while refining the solution near boundaries or in the neighborhood of sharp gradients. A hybrid finite volumes/finite differences method is implemented in which the concentration field is treated using finite volumes to ensure mass conservation, while the polarization and nematic alignment fields are treated using a combination of finite differences and finite volumes for enhanced accuracy. The governing equations for these fields are solved along with the Navier-Stokes equations, which are evolved using an unconditionally stable projection solver. We illustrate the versatility and robustness of our method by analyzing spontaneous active flows in various two- and three-dimensional systems. Our results show excellent agreement with previous models and experiments and pave the way for further developments in active microfluidics.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 75
    Publication Date: 2019
    Description: 〈p〉Publication date: 15 November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 397〈/p〉 〈p〉Author(s): Jiangming Xie, M. Yvonne Ou, Liwei Xu〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉 〈p〉Poroelastic materials play an important role in biomechanical and geophysical research. In this paper, we investigate wave propagation in orthotropic poroelastic media by studying the time-domain poroelastic wave equations. Both the low frequency Biot's (LF-Biot) equations and the Biot-Johnson-Koplik-Dashen (Biot-JKD) model are considered. In the LF-Biot equations, the dissipation terms are proportional to the relative velocity between the fluid and the solid by a constant. In contrast, the dissipation terms in the Biot-JKD model are in the form of time convolution (memory) as a result of the frequency-dependence of fluid-solid interaction at the underlying microscopic scale in the frequency domain. The dynamic tortuosity and permeability described by Darcy's law are two crucial factors in this problem, and are closely linked to the viscous force. In the Biot model, the key difficulty is to handle the viscous term when the pore fluid is viscous. In the Biot-JKD model, the convolution operator involves order 1/2 shifted fractional derivatives in the time domain, which is challenging to discretize.〈/p〉 〈p〉In this work, a new method of multipoint Padé (or rational) approximation for Stieltjes functions is applied to approximate the JKD dynamic tortuosity, and an augmented system for the Biot-JKD model is then obtained, where the kernel of the memory term is replaced by a finite set of auxiliary variables satisfying a local system of ordinary differential equations. The Runge-Kutta discontinuous Galerkin (RKDG) method with the un-splitting method is used to compute the numerical solution, and numerical examples are presented to demonstrate the high order accuracy and stability of the method. Compared with the existing approaches for solving the Biot-JKD equations, the augmented system presented here requires neither the storage of solution history nor the computation of the flux of the auxiliary variables.〈/p〉 〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 76
    Publication Date: 2019
    Description: 〈p〉Publication date: 15 November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 397〈/p〉 〈p〉Author(s): William C. Tyson, Gary K. Yan, Christopher J. Roy, Carl F. Ollivier-Gooch〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉A higher-order accurate discretization error estimation procedure for finite-volume schemes is presented. Discretization error estimates are computed using the linearized error transport equations (ETE). ETE error estimates are applied as a correction to the primal solution. The ETE are then relinearized about the corrected primal solution, and discretization error estimates are recomputed. This process, referred to as ETE relinearization, is performed in an iterative manner to successively increase the accuracy of discretization error estimates. Under certain conditions, ETE relinearization is shown to correct error estimates, or equivalently the entire primal solution, to higher-order accuracy. In terms of computational cost, ETE relinearization has a competitive advantage over conventional higher-order discretizations when used as a form of defect correction for the primal solution. Furthermore, ETE relinearization is shown to be particularly useful for problems where the error incurred by the linearization of the ETE cannot be neglected. Results are presented for several steady-state inviscid and viscous flow problems using both structured and unstructured meshes.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 77
    Publication Date: 2019
    Description: 〈p〉Publication date: 15 November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 397〈/p〉 〈p〉Author(s): Yu Li, Richard Mikaël Slevinsky〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉We present a unified treatment of the Fourier spectra of spherically symmetric nonlocal diffusion operators. We develop numerical and analytical results for the class of kernels with weak algebraic singularity as the distance between source and target tends to 0. Rapid algorithms are derived for their Fourier spectra with the computation of each eigenvalue independent of all others. The algorithms are trivially parallelizable, capable of leveraging more powerful compute environments, and the accuracy of the eigenvalues is individually controllable. The algorithms include a Maclaurin series and a full divergent asymptotic series valid for any 〈em〉d〈/em〉 spatial dimensions. Using Drummond's sequence transformation, we prove linear complexity recurrence relations for degree-graded sequences of numerators and denominators in the rational approximations to the divergent asymptotic series. These relations are important to ensure that the algorithms are efficient, and also increase the numerical stability compared with the conventional algorithm with quadratic complexity.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 78
    Publication Date: 2019
    Description: 〈p〉Publication date: 15 November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 397〈/p〉 〈p〉Author(s): Hua Shen, Matteo Parsani〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉We construct a space-time conservation element and solution element (CESE) scheme for solving the compressible Euler equations on moving meshes (CESE-MM) which allow an arbitrary motion for each of the mesh points. The scheme is a direct extension of a purely Eulerian CESE scheme that was previously implemented on hybrid unstructured meshes (Shen et al. (2015) [43]). It adopts a staggered mesh in space and time such that the physical variables are continuous across the interfaces of the adjacent space-time control volumes and, therefore, a Riemann solver is not required to calculate interface fluxes or the node velocities. Moreover, the staggered mesh can significantly alleviate mesh tangles so that the time step can be kept at an acceptable level without using any rezoning operation. The discretization of the integral space-time conservation law is completely based on the physical space-time control volume, thereby satisfying the physical and geometrical conservation laws. Plenty of numerical examples are carried out to validate the accuracy and robustness of the CESE-MM scheme.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 79
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 96〈/p〉 〈p〉Author(s): Zhuoyao Zhong, Lei Sun, Qiang Huo〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Although Faster R-CNN based text detection approaches have achieved promising results, their localization accuracy is not satisfactory in certain cases due to their sub-optimal bounding box regression based localization modules. In this paper, we address this problem and propose replacing the bounding box regression module with a novel LocNet based localization module to improve the localization accuracy of a Faster R-CNN based text detector. Given a proposal generated by a region proposal network (RPN), instead of directly predicting the bounding box coordinates of the concerned text instance, the proposal is enlarged to create a search region, and an “In-Out” conditional probability is assigned to each row and column of this search region; these probabilities can then be used to accurately infer the concerned bounding box. Furthermore, we present a simple yet effective two-stage approach to convert the difficult multi-oriented text detection problem to a relatively easier horizontal text detection problem, which enables our approach to robustly detect multi-oriented text instances with accurate bounding box localization. Experiments demonstrate that the proposed approach boosts the localization accuracy of Faster R-CNN based text detectors significantly. Consequently, our new text detector has achieved superior performance on both horizontal (ICDAR-2011, ICDAR-2013 and MULTILIGUL) and multi-oriented (MSRA-TD500, ICDAR-2015) text detection benchmark tasks.〈/p〉〈/div〉
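    A toy sketch of how a box might be read off per-row and per-column “In-Out” probabilities is given below; the real LocNet-style module infers borders probabilistically, whereas this illustration simply thresholds at 0.5, and the probability vectors are made up.
```python
# Hedged illustration of inferring a box from per-row / per-column "In-Out"
# probabilities over a search region. A simple 0.5 threshold is used here;
# the probability vectors below are toy values, not model outputs.
import numpy as np

def box_from_inout(p_rows, p_cols, thresh=0.5):
    """Return (x_min, y_min, x_max, y_max) in search-region coordinates,
    taking the extent of rows/columns judged to be 'inside' the object."""
    rows = np.where(p_rows >= thresh)[0]
    cols = np.where(p_cols >= thresh)[0]
    if rows.size == 0 or cols.size == 0:
        return None  # nothing confidently inside
    return int(cols[0]), int(rows[0]), int(cols[-1]), int(rows[-1])

p_rows = np.array([0.1, 0.2, 0.8, 0.9, 0.9, 0.7, 0.2])   # 7 rows of the search region
p_cols = np.array([0.1, 0.6, 0.9, 0.9, 0.8, 0.3])        # 6 columns
print(box_from_inout(p_rows, p_cols))                     # (1, 2, 4, 5)
```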
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 80
    Publication Date: 2019
    Description: The growing demand for video streaming services increasingly motivates the development of reliable and accurate models for the assessment of Quality of Experience (QoE). In this task, human-related factors, which have a significant influence on QoE, play a crucial role. However, the complexity caused by the multiple effects of those factors on human perception has introduced challenges for contemporary studies. In this paper, we examine the impact of human-related factors, namely perceptual factors, the memory effect, and the degree of interest. Based on our investigation, a novel QoE model is proposed that effectively incorporates those factors to reflect the user’s cumulative perception. Evaluation results indicate that our proposed model performs excellently in predicting cumulative QoE at any moment within a streaming session.
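    As a rough illustration of a memory effect, the sketch below computes a recency-weighted cumulative QoE score with exponential decay; the decay constant, the per-segment scores and the omission of perceptual and interest factors are all simplifying assumptions, not the paper's model.
```python
# Generic sketch of a cumulative QoE estimate with an exponential memory effect:
# recent per-segment quality scores weigh more than older ones. Illustrative only.
import math

def cumulative_qoe(segment_scores, decay=0.15):
    """segment_scores: per-segment quality values (e.g. MOS-like, 1..5),
    oldest first. Returns a recency-weighted average."""
    n = len(segment_scores)
    weights = [math.exp(-decay * (n - 1 - i)) for i in range(n)]  # newest segment gets weight 1
    return sum(w * s for w, s in zip(weights, segment_scores)) / sum(weights)

session = [4.5, 4.5, 2.0, 3.0, 4.0, 4.2]   # a quality drop in the middle of the session
print(round(cumulative_qoe(session), 2))
```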
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 81
    Publication Date: 2019
    Description: 〈p〉Publication date: July–December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 SoftwareX, Volume 10〈/p〉 〈p〉Author(s): Giorgio Speranza, Roberto Canteri〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Data analysis and plotting are an important part of any scientist's research work. Once the experiments are concluded, software allowing data reduction, such as background selection and subtraction, peak fitting, and graphical visualization, is generally used to obtain a correct interpretation of the results. RxpsG is public-domain software, based on the R platform, with an easy, user-friendly interface oriented to X-ray Photoelectron Spectroscopy (XPS) and Auger Electron Spectroscopy (AES) data manipulation. All the features needed to analyze XPS and AES spectra are implemented, and the software allows immediate data reporting. Although RxpsG is primarily devoted to electron and photoelectron spectral analysis, it allows any data in text format to be loaded and processed. RxpsG is a project open to contributions and the implementation of new procedures. In this work we describe the capabilities of the software and its most important features.〈/p〉〈/div〉 〈h5〉Graphical abstract〈/h5〉 〈div〉〈p〉〈figure〉〈img src="https://ars.els-cdn.com/content/image/1-s2.0-S2352711019300378-fx1.jpg" width="301" alt="Graphical abstract for this article" title=""〉〈/figure〉〈/p〉〈/div〉
    Electronic ISSN: 2352-7110
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 82
    Publication Date: 2019
    Description: 〈p〉Publication date: November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Artificial Intelligence, Volume 276〈/p〉 〈p〉Author(s): Federico Cerutti, Massimiliano Giacomin, Mauro Vallati〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉In this paper we illustrate the design choices that led to the development of 〈math xmlns:mml="http://www.w3.org/1998/Math/MathML" altimg="si1.svg"〉〈mi mathvariant="sans-serif"〉ArgSemSAT〈/mi〉〈/math〉, the winner of the preferred semantics track at the 2017 International Competition on Computational Models of Arguments (ICCMA 2017), a biennial contest on problems associated to the Dung's model of abstract argumentation frameworks, widely recognised as a fundamental reference in computational argumentation. The algorithms of 〈math xmlns:mml="http://www.w3.org/1998/Math/MathML" altimg="si1.svg"〉〈mi mathvariant="sans-serif"〉ArgSemSAT〈/mi〉〈/math〉 are based on multiple calls to a SAT solver to compute complete labellings, and on encoding constraints to drive the search towards the solution of decision and enumeration problems. In this paper we focus on preferred semantics (and incidentally stable as well), one of the most popular and complex semantics for identifying acceptable arguments. We discuss our design methodology that includes a systematic exploration and empirical evaluation of labelling encodings, algorithmic variations and SAT solver choices. In designing the successful 〈math xmlns:mml="http://www.w3.org/1998/Math/MathML" altimg="si1.svg"〉〈mi mathvariant="sans-serif"〉ArgSemSAT〈/mi〉〈/math〉, we discover that: (1) there is a labelling encoding that appears to be universally better than other, logically equivalent ones; (2) composition of different techniques such as AllSAT and enumerating stable extensions when searching for preferred semantics brings advantages; (3) injecting domain specific knowledge in the algorithm design can lead to significant improvements.〈/p〉〈/div〉
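    For readers unfamiliar with the semantics involved, the brute-force sketch below enumerates admissible, preferred and stable extensions of a tiny argumentation framework; ArgSemSAT itself solves these problems via SAT encodings of complete labellings, and the toy framework here is invented.
```python
# Brute-force enumeration of preferred and stable extensions for a tiny abstract
# argumentation framework. This only illustrates the semantics; it is not the
# SAT-based procedure used by ArgSemSAT.
from itertools import combinations

args = {"a", "b", "c", "d"}
attacks = {("a", "b"), ("b", "a"), ("b", "c"), ("c", "d")}  # toy framework

def conflict_free(S):
    return not any((x, y) in attacks for x in S for y in S)

def defends(S, a):
    """S defends a if every attacker of a is attacked by some member of S."""
    attackers = {x for (x, y) in attacks if y == a}
    return all(any((s, x) in attacks for s in S) for x in attackers)

def admissible(S):
    return conflict_free(S) and all(defends(S, a) for a in S)

def stable(S):
    return conflict_free(S) and all(any((s, a) in attacks for s in S) for a in args - S)

subsets = [frozenset(c) for r in range(len(args) + 1) for c in combinations(args, r)]
adm = [S for S in subsets if admissible(S)]
preferred = [S for S in adm if not any(S < T for T in adm)]      # maximal admissible sets
stable_ext = [S for S in subsets if stable(S)]
print("preferred:", [sorted(S) for S in preferred])   # {a, c} and {b, d}
print("stable:   ", [sorted(S) for S in stable_ext])  # here they coincide with preferred
```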
    Print ISSN: 0004-3702
    Electronic ISSN: 1872-7921
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 83
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Computers & Geosciences, Volume 133〈/p〉 〈p〉Author(s): Weiming Hu, Guido Cervone〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉 〈p〉The Analog Ensemble is a statistical technique that generates probabilistic forecasts using a current deterministic prediction, a set of historical predictions, and the associated observations. It generates ensemble forecasts by first identifying the most similar past predictions to the current one, and then summarizing the corresponding observations. This is a computationally efficient solution for ensemble modeling because it does not require multiple numerical weather prediction simulations, but a single model realization.〈/p〉 〈p〉Despite this intrinsic computational efficiency, the required computation can grow very large because atmospheric models are routinely run with increasing resolutions. For example, the North American Mesoscale forecast system contains over 262 792 grid points to generate a 12 km prediction. The North American Mesoscale model generally uses a structured grid to represent the domain, despite the fact that certain physical changes occur non-uniformly across space and time. For example, temperature changes tend to occur more rapidly in mountains than plains.〈/p〉 〈p〉An evolutionary algorithm is proposed to dynamically and automatically learn the optimal unstructured grid pattern. This iterative evolutionary algorithm is guided by Darwinian evolutionary rule generation and instantiation to identify grid vertices. Analog computations are performed only at vertices. Therefore, minimizing the number of vertices and identifying their locations are paramount to optimizing the available computational resources, minimizing queue time, and ultimately achieving better results. The optimal unstructured grid is then reused to guide the predictions for a variety of applications like temperature and wind speed.〈/p〉 〈/div〉
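    The core analog-search step can be sketched in a few lines; the snippet below assumes a simple Euclidean similarity over toy data and omits the grid handling and evolutionary grid optimisation that the paper is actually about.
```python
# Minimal sketch of the Analog Ensemble idea: find the k historical predictions
# most similar to the current deterministic forecast and return the observations
# that accompanied them as the ensemble. Toy data and metric are assumptions.
import numpy as np

def analog_ensemble(current_pred, past_preds, past_obs, k=5):
    """current_pred: (n_features,) forecast for one location/time.
    past_preds:   (n_history, n_features) historical forecasts.
    past_obs:     (n_history,) observations paired with past_preds.
    Returns the k observations whose forecasts were closest (Euclidean)."""
    dists = np.linalg.norm(past_preds - current_pred, axis=1)
    analog_idx = np.argsort(dists)[:k]
    return past_obs[analog_idx]

rng = np.random.default_rng(2)
past_preds = rng.normal(size=(500, 3))                        # toy 3-variable forecast history
past_obs = past_preds[:, 0] + 0.1 * rng.normal(size=500)      # synthetic paired observations
current = np.array([0.3, -0.2, 1.1])
ensemble = analog_ensemble(current, past_preds, past_obs, k=10)
print(float(ensemble.mean()), float(ensemble.std()))          # ensemble mean and spread
```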
    Print ISSN: 0098-3004
    Electronic ISSN: 1873-7803
    Topics: Geosciences , Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 84
    Publication Date: 2019
    Description: 〈p〉Publication date: 15 November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 397〈/p〉 〈p〉Author(s): S. Dargaville, A.G. Buchan, R.P. Smedley-Stevenson, P.N. Smith, C.C. Pain〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉This paper describes an angular adaptivity algorithm for Boltzmann transport applications which uses P〈sub〉〈em〉n〈/em〉〈/sub〉 and filtered P〈sub〉〈em〉n〈/em〉〈/sub〉 expansions, allowing for different expansion orders across space/energy. Our spatial discretisation is specifically designed to use less memory than competing DG schemes and also gives us direct access to the amount of stabilisation applied at each node. For filtered P〈sub〉〈em〉n〈/em〉〈/sub〉 expansions, we then use our adaptive process in combination with this net amount of stabilisation to compute a spatially dependent filter strength that does not depend on 〈em〉a priori〈/em〉 spatial information. This applies heavy filtering only where discontinuities are present, allowing the filtered P〈sub〉〈em〉n〈/em〉〈/sub〉 expansion to retain high-order convergence where possible. Regular and goal-based error metrics are shown and both the adapted P〈sub〉〈em〉n〈/em〉〈/sub〉 and adapted filtered P〈sub〉〈em〉n〈/em〉〈/sub〉 methods show significant reductions in DOFs and runtime. The adapted filtered P〈sub〉〈em〉n〈/em〉〈/sub〉 with our spatially dependent filter shows close to fixed iteration counts and up to high-order is even competitive with P〈sup〉0〈/sup〉 discretisations in problems with heavy advection.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 85
    Publication Date: 2019
    Description: 〈p〉Publication date: 15 November 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 397〈/p〉 〈p〉Author(s): Jingwei Hu, Shi Jin, Ruiwen Shu〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉The Boltzmann equation may contain uncertainties in initial/boundary data or collision kernel. To study the impact of these uncertainties, a stochastic Galerkin (sG) method was proposed in [18] and studied in the kinetic regime. When the system is close to the fluid regime (the Knudsen number is small), the method would become prohibitively expensive due to the stiff collision term. In this work, we develop efficient sG methods for the Boltzmann equation that work for a wide range of Knudsen numbers, and investigate, in particular, their behavior in the fluid regime.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 86
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 96〈/p〉 〈p〉Author(s): Chunfeng Song, Yongzhen Huang, Yan Huang, Ning Jia, Liang Wang〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Gait recognition is one of the most important techniques for human identification at a distance. Most current gait recognition frameworks consist of several separate steps: silhouette segmentation, feature extraction, feature learning, and similarity measurement. These modules are mutually independent with each part fixed, resulting in a suboptimal performance in challenging conditions. In this paper, we integrate those steps into one framework, i.e., an end-to-end network for gait recognition, named 〈strong〉GaitNet〈/strong〉. It is composed of two convolutional neural networks: one corresponds to gait segmentation, and the other corresponds to classification. The two networks are modeled in one joint learning procedure which can be trained jointly. This strategy greatly simplifies the traditional step-by-step manner and is thus much more efficient for practical applications. Moreover, joint learning can automatically adjust each part to fit the global optimal objective, leading to obvious performance improvement over separate learning. We evaluate our method on three large scale gait datasets, including CASIA-B, SZU RGB-D Gait and a newly built database with complex dynamic outdoor backgrounds. Extensive experimental results show that the proposed method is effective and achieves the state-of-the-art results. The code and data will be released upon request.〈/p〉〈/div〉
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 87
    Publication Date: 2019
    Description: 〈p〉Publication date: December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 96〈/p〉 〈p〉Author(s): Chuan-Xian Ren, Xiao-Lin Xu, Zhen Lei〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Person re-identification (re-ID) aims to match different images of the same pedestrian. It has attracted increasing research interest in pattern recognition and machine learning. Traditionally, person re-ID is formulated as a metric learning problem with binary classification output. However, higher-order relationships, such as triplet closeness among instances, are ignored by such pair-wise based metric learning methods. Thus, the discriminative information hidden in these data is insufficiently explored. This paper proposes a new structured loss function to push the frontier of person re-ID performance in realistic scenarios. The new loss function introduces two margin parameters. They operate as bounds to remove positive pairs of very small distances and negative pairs of large distances. A trade-off coefficient is assigned to the loss term of negative pairs to alleviate the class-imbalance problem. By using a linear function with the margin-based objectives, the gradients 〈em〉w.r.t.〈/em〉 weight matrices are no longer dependent on the iterative loss values in a multiplicative manner. This makes the weight update process robust to large iterative loss values. The new loss function is compatible with many deep learning architectures, and thus induces a new deep network with pair-pruning regularization for metric learning. To evaluate the performance of the proposed model, extensive experiments are conducted on benchmark datasets. The results indicate that the new loss together with the ResNet-50 backbone has excellent feature representation ability for person re-ID.〈/p〉〈/div〉
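    A hedged numpy sketch of a two-margin pairwise loss of this general shape is shown below; the margin values, trade-off coefficient and toy distances are assumptions, and the paper's exact objective may differ in detail.
```python
# Sketch of a two-margin pairwise loss: positive pairs contribute only when
# their distance exceeds a lower margin alpha, negative pairs only when their
# distance falls below an upper margin beta, and the negative term is scaled
# by a trade-off coefficient lam. Parameter values are illustrative.
import numpy as np

def two_margin_loss(d_pos, d_neg, alpha=0.5, beta=1.5, lam=0.3):
    """d_pos: distances of positive (same-identity) pairs.
    d_neg: distances of negative (different-identity) pairs."""
    pos_term = np.maximum(0.0, d_pos - alpha).sum()       # ignore already-close positives
    neg_term = np.maximum(0.0, beta - d_neg).sum()        # ignore already-far negatives
    return pos_term + lam * neg_term

d_pos = np.array([0.2, 0.7, 1.1])     # toy pairwise distances in embedding space
d_neg = np.array([0.9, 1.8, 2.5])
print(two_margin_loss(d_pos, d_neg))  # 0.2 + 0.6 + 0.3 * 0.6 = 0.98
```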
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 88
    Publication Date: 2019
    Description: This paper deals with taṣawwur, the Arabic translation in Averroes’ Great Commentary of the term τῶν ἀδιαιρέτων νόησις (“ton adiaireton noesis”, thinking of the indivisibles) in Aristotle’s De anima, and with its Latin translation from Arabic as (in-)formatio, as quoted by Albertus Magnus [...]
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 89
    Publication Date: 2019
    Description: Opportunistic networks are considered promising network structures for implementing traditional infrastructure-based communication by enabling smart mobile devices in the networks to contact each other within a fixed communication area. Because of the intermittent and unstable connections between sources and destinations, message routing and forwarding in opportunistic networks have recently become challenging and troublesome problems. In this paper, to improve the data dissemination environment, we propose an improved routing-forwarding strategy utilizing node profile and location prediction for opportunistic networks, which mainly includes three consecutive phases: the collection and updating of routing state information, community detection and optimization, and node location prediction. Each mobile node in the networks is able to establish a network routing matrix after the entire process of information collection and updating. Due to the concentrated population in urban areas and the relatively few people in remote areas, the distribution of location predictions roughly presents a type of symmetry in opportunistic networks. Afterwards, the community optimization and location prediction mechanisms can be regarded as a significant foundation for data dissemination in the networks. Ultimately, experimental results demonstrate that the proposed algorithm slightly enhances the delivery ratio and substantially reduces the network overhead and end-to-end delay compared with the other four routing strategies.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 90
    Publication Date: 2019
    Description: With digital media, not only are media extensions of their human users, as McLuhan posited, but there is a flip or reversal in which the human users of digital media become an extension of those digital media as these media scoop up their data and use them to the advantage of those that control these media. The implications of this loss of privacy as we become “an item in a data bank” are explored, and the field of captology is described. The feedback of the users of digital media becomes the feedforward for those media.
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 91
    Publication Date: 2019
    Description: The main scope of the presented research was the development of an innovative product for the management of city parking lots. Our application supports the implementation of the Smart City concept by using computer vision and communication platforms, which enable the development of new integrated digital services. The use of video cameras could simplify and lower the costs of parking lot controls. For parking space detection, an aggregated decision is proposed, employing various metrics computed over a sliding-window interval of camera frames. A history built over 20 images provides an adaptive background model and accurate detection. The system has shown high robustness in two benchmarks, achieving a recognition rate higher than 93%.
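    A simplified sketch of a sliding-window background model for one parking space is given below; the ROI handling, threshold and median-based background are assumptions chosen for illustration rather than the system's actual metrics or aggregation rule.
```python
# Illustrative sketch of a sliding-window background model for one parking
# space: the background is estimated from the last 20 empty-looking frames and
# a space is flagged as occupied when the current frame deviates strongly.
from collections import deque
import numpy as np

class ParkingSpaceMonitor:
    def __init__(self, history=20, threshold=25.0):
        self.frames = deque(maxlen=history)   # grayscale ROI crops of one space
        self.threshold = threshold            # mean absolute difference threshold

    def update(self, roi):
        """roi: 2-D array for this parking space in the new frame.
        Returns True if the space currently looks occupied."""
        roi = roi.astype(np.float32)
        if len(self.frames) < self.frames.maxlen:
            self.frames.append(roi)
            return False                      # still building the background
        background = np.median(np.stack(list(self.frames)), axis=0)
        occupied = float(np.abs(roi - background).mean()) > self.threshold
        if not occupied:                      # only adapt background with empty views
            self.frames.append(roi)
        return occupied

rng = np.random.default_rng(3)
monitor = ParkingSpaceMonitor()
for _ in range(25):                           # empty space: low-variance noise
    monitor.update(rng.normal(100, 2, size=(32, 32)))
print(monitor.update(rng.normal(180, 2, size=(32, 32))))  # a car appears -> True
```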
    Electronic ISSN: 1999-5903
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 92
    Publication Date: 2019
    Description: 〈p〉Publication date: 1 December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 398〈/p〉 〈p〉Author(s): Qing Pan, Timon Rabczuk, Gang Xu, Chong Chen〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉We investigate isogeometric analysis for surface PDEs based on the extended Loop subdivision approach. The basis functions, consisting of quartic box-splines corresponding to each subdivided control mesh, are utilized to represent the geometry exactly and to construct the solution space for the dependent variables as well, which is consistent with the concept of isogeometric analysis. The subdivision process is equivalent to the 〈em〉h〈/em〉-refinement of NURBS-based isogeometric analysis. The performance of the proposed method is evaluated by solving various surface PDEs, such as surface Laplace-Beltrami harmonic/biharmonic/triharmonic equations, which are defined on the limit surfaces of extended Loop subdivision for different initial control meshes. Numerical experiments show that the proposed method has desirable performance in terms of accuracy, convergence and computational cost for solving the above surface PDEs defined on both open and closed surfaces. The proposed approach is proved to be second-order accurate in the sense of the 〈math xmlns:mml="http://www.w3.org/1998/Math/MathML" altimg="si1.svg"〉〈msup〉〈mrow〉〈mi〉L〈/mi〉〈/mrow〉〈mrow〉〈mn〉2〈/mn〉〈/mrow〉〈/msup〉〈/math〉-norm by theoretical and/or numerical results, and several numerical comparisons show that it outperforms the standard linear finite element method.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 93
    Publication Date: 2019
    Description: 〈p〉Publication date: 1 December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 398〈/p〉 〈p〉Author(s): Arthur E.P. Veldman〈/p〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 94
    Publication Date: 2019
    Description: 〈p〉Publication date: January 2020〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 97〈/p〉 〈p〉Author(s): Shuzhao Li, Huimin Yu, Roland Hu〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Person attributes are often exploited as mid-level human semantic information to help promote the performance of person re-identification task. In this paper, unlike most existing methods simply taking the attribute learning as a classification problem, we perform it in a different way with the motivation that attributes are related to specific local regions, which refers to the perceptual ability of attributes. We utilize the process of attribute detection to generate corresponding attribute-part detectors, whose invariance to many influences like poses and camera views can be guaranteed. With detected local part regions, our model extracts local part features to handle the body part misalignment problem, which is another major challenge for person re-identification. The local descriptors are further refined by fused attribute information to eliminate interferences caused by detection deviation. Finally, the refined local feature works together with a holistic-level feature to constitute our final feature representation. Extensive experiments on two popular benchmarks with attribute annotations demonstrate the effectiveness of our model and competitive performance compared with state-of-the-art algorithms.〈/p〉〈/div〉 〈h5〉Graphical abstract〈/h5〉 〈div〉〈p〉〈figure〉〈img src="https://ars.els-cdn.com/content/image/1-s2.0-S003132031930319X-fx1.jpg" width="301" alt="Graphical abstract for this article" title=""〉〈/figure〉〈/p〉〈/div〉
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 95
    Publication Date: 2019
    Description: Anomaly detection of network traffic flows is a non-trivial problem in the field of network security due to the complexity of network traffic. However, most machine learning-based detection methods focus on network anomaly detection but ignore user behavior anomaly detection. In real scenarios, anomalous network behavior may harm user interests. In this paper, we propose an anomaly detection model based on time-decay closed frequent patterns to address this problem. The model mines closed frequent patterns from the network traffic of each user and uses a time-decay factor to distinguish the weight of current and historical network traffic. Because of the dynamic nature of user network behavior, a detection model update strategy is provided in the anomaly detection framework. Additionally, the closed frequent patterns can provide interpretable explanations for anomalies. Experimental results show that the proposed method can detect user behavior anomalies, and the network anomaly detection performance achieved by the proposed method is similar to that of the state-of-the-art methods and significantly better than the baseline methods.
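    The time-decay idea can be illustrated in a few lines: each transaction containing a pattern contributes a geometrically decayed weight to its support. The decay factor and the toy flow records below are assumptions, and closed-pattern mining itself is not shown.
```python
# Small sketch of time-decayed support for a frequent pattern: a transaction at
# time t contributes decay**(t_now - t), so recent behaviour outweighs history.
def time_decayed_support(transactions, pattern, t_now, decay=0.9):
    """transactions: list of (timestamp, set_of_items);
    pattern: set of items; returns the decayed support of the pattern."""
    pattern = set(pattern)
    return sum(decay ** (t_now - t) for t, items in transactions if pattern <= set(items))

flows = [
    (1, {"dst:10.0.0.5", "port:443", "proto:tcp"}),
    (2, {"dst:10.0.0.5", "port:443", "proto:tcp"}),
    (9, {"dst:10.0.0.9", "port:22",  "proto:tcp"}),
    (10, {"dst:10.0.0.5", "port:443", "proto:tcp"}),
]
print(round(time_decayed_support(flows, {"dst:10.0.0.5", "port:443"}, t_now=10), 3))
# 0.9**9 + 0.9**8 + 0.9**0 ≈ 1.818
```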
    Electronic ISSN: 2078-2489
    Topics: Computer Science
    Published by MDPI
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 96
    Publication Date: 2019
    Description: 〈p〉Publication date: January 2020〈/p〉 〈p〉〈b〉Source:〈/b〉 Pattern Recognition, Volume 97〈/p〉 〈p〉Author(s): Xin Wei, Hui Wang, Bryan Scotney, Huan Wan〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉Face recognition has achieved great success owing to the fast development of deep neural networks in the past few years. Different loss functions can be used in a deep neural network resulting in different performance. Most recently some loss functions have been proposed, which have advanced the state of the art. However, they cannot solve the problem of 〈em〉margin bias〈/em〉 which is present in class imbalanced datasets, having the so-called long-tailed distributions. In this paper, we propose to solve the margin bias problem by setting a minimum margin for all pairs of classes. We present a new loss function, Minimum Margin Loss (MML), which is aimed at enlarging the margin of those overclose class centre pairs so as to enhance the discriminative ability of the deep features. MML, together with Softmax Loss and Centre Loss, supervises the training process to balance the margins of all classes irrespective of their class distributions. We implemented MML in Inception-ResNet-v1 and conducted extensive experiments on seven face recognition benchmark datasets, MegaFace, FaceScrub, LFW, SLLFW, YTF, IJB-B and IJB-C. Experimental results show that the proposed MML loss function has led to new state of the art in face recognition, reducing the negative effect of margin bias.〈/p〉〈/div〉
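    The minimum-margin idea on class centres can be sketched as follows; this standalone toy penalises centre pairs closer than a margin m and omits the joint training with Softmax and Centre Loss inside a deep network described in the paper.
```python
# Hedged numpy sketch of a minimum-margin penalty on class centres: every pair
# of centres closer than a minimum margin m contributes to the loss, which
# pushes overclose classes apart. Values below are toy data.
import numpy as np

def minimum_margin_loss(centres, m=1.0):
    """centres: (n_classes, dim) array of class centres in feature space."""
    diff = centres[:, None, :] - centres[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)                  # pairwise centre distances
    iu = np.triu_indices(len(centres), k=1)               # each unordered pair once
    return np.maximum(0.0, m - dist[iu]).sum()

centres = np.array([[0.0, 0.0],
                    [0.3, 0.0],      # too close to the first centre
                    [3.0, 4.0]])
print(minimum_margin_loss(centres, m=1.0))   # only the overclose pair contributes: 0.7
```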
    Print ISSN: 0031-3203
    Electronic ISSN: 1873-5142
    Topics: Computer Science
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 97
    Publication Date: 2019
    Description: 〈p〉Publication date: 1 December 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics, Volume 398〈/p〉 〈p〉Author(s): Massimiliano Ferronato, Andrea Franceschini, Carlo Janna, Nicola Castelletto, Hamdi A. Tchelepi〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉This work discusses a general approach for preconditioning the block Jacobian matrix arising from the discretization and linearization of coupled multiphysics problems. The objective is to provide a fully algebraic framework that can be employed as a starting point for the development of specialized algorithms exploiting unique features of the specific problem at hand. The basic idea relies on approximately computing an operator able to decouple the different processes, which can then be solved independently of one another. In this work, the decoupling operator is computed by extending the theory of block sparse approximate inverses. The proposed approach is implemented for two multiphysics applications, namely the simulation of a coupled poromechanical system and the mechanics of fractured media. The numerical results obtained in experiments taken from real-world examples are used to analyze and discuss the properties of the preconditioner.〈/p〉〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 98
    Publication Date: 2019
    Description: 〈p〉Publication date: Available online 16 August 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics〈/p〉 〈p〉Author(s): Edoardo Zoni, Yaman Güçlü〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉 〈p〉A common strategy in the numerical solution of partial differential equations is to define a uniform discretization of a tensor-product multi-dimensional logical domain, which is mapped to a physical domain through a given coordinate transformation. By extending this concept to a multi-patch setting, simple and efficient numerical algorithms can be employed on relatively complex geometries. The main drawback of such an approach is the inherent difficulty in dealing with singularities of the coordinate transformation.〈/p〉 〈p〉This work suggests a comprehensive numerical strategy for the common situation of disk-like domains with a singularity at a unique pole, where one edge of the rectangular logical domain collapses to one point of the physical domain (for example, a circle). We present robust numerical methods for the solution of Vlasov-like hyperbolic equations coupled to Poisson-like elliptic equations in such geometries. We describe a semi-Lagrangian advection solver that employs a novel set of coordinates, named pseudo-Cartesian coordinates, to integrate the characteristic equations in the whole domain, including the pole, and a finite element elliptic solver based on globally 〈math xmlns:mml="http://www.w3.org/1998/Math/MathML" altimg="si1.svg"〉〈msup〉〈mrow〉〈mi mathvariant="script"〉C〈/mi〉〈/mrow〉〈mrow〉〈mn〉1〈/mn〉〈/mrow〉〈/msup〉〈/math〉 smooth splines (Toshniwal et al., 2017). The two solvers are tested both independently and on a coupled model, namely the 2D guiding-center model for magnetized plasmas, equivalent to a vorticity model for incompressible inviscid Euler fluids. The numerical methods presented show high-order convergence in the space discretization parameters, uniformly across the computational domain, without effects of order reduction due to the singularity. Dedicated tests show that the numerical techniques described can be applied straightforwardly also in the presence of point charges (equivalently, point-like vortices), within the context of particle-in-cell methods.〈/p〉 〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 99
    Publication Date: 2019
    Description: 〈p〉Publication date: Available online 16 August 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics〈/p〉 〈p〉Author(s): Maria Giuseppina Chiara Nestola, Barna Becsek, Hadi Zolfaghari, Patrick Zulian, Dario De Marinis, Rolf Krause, Dominik Obrist〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉 〈p〉We present a novel framework inspired by the Immersed Boundary Method for predicting the fluid-structure interaction of complex structures immersed in laminar, transitional and turbulent flows.〈/p〉 〈p〉The key elements of the proposed fluid-structure interaction framework are 1) the solution of elastodynamics equations for the structure, 2) the use of a high-order Navier–Stokes solver for the flow, and 3) the variational transfer (〈math xmlns:mml="http://www.w3.org/1998/Math/MathML" altimg="si1.svg"〉〈msup〉〈mrow〉〈mi〉L〈/mi〉〈/mrow〉〈mrow〉〈mn〉2〈/mn〉〈/mrow〉〈/msup〉〈/math〉-projection) for coupling the solid and fluid subproblems.〈/p〉 〈p〉The dynamic behavior of a deformable structure is simulated in a finite element framework by adopting a fully implicit scheme for its temporal integration. It allows for mechanical constitutive laws including inhomogeneous and fiber-reinforced materials.〈/p〉 〈p〉The Navier–Stokes equations for the incompressible flow are discretized with high-order finite differences which allow for the direct numerical simulation of laminar, transitional and turbulent flows.〈/p〉 〈p〉The structure and the flow solvers are coupled by using an 〈math xmlns:mml="http://www.w3.org/1998/Math/MathML" altimg="si1.svg"〉〈msup〉〈mrow〉〈mi〉L〈/mi〉〈/mrow〉〈mrow〉〈mn〉2〈/mn〉〈/mrow〉〈/msup〉〈/math〉-projection method for the transfer of velocities and forces between the fluid grid and the solid mesh. This strategy allows for the numerical solution of coupled large scale problems based on nonconforming structured and unstructured grids. The transfer between fluid and solid limits the convergence order of the flow solver close to the fluid-solid interface.〈/p〉 〈p〉The framework is validated with the Turek–Hron benchmark and a newly proposed benchmark modelling the flow-induced oscillation of an inert plate. A three-dimensional simulation of an elastic beam in transitional flow is provided to show the solver's capability of coping with anisotropic elastic structures immersed in complex fluid flow.〈/p〉 〈/div〉
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 100
    Publication Date: 2019
    Description: 〈p〉Publication date: Available online 16 August 2019〈/p〉 〈p〉〈b〉Source:〈/b〉 Journal of Computational Physics〈/p〉 〈p〉Author(s): Min Chai, Kun Luo, Changxiao Shao, Haiou Wang, Jianren Fan〈/p〉 〈h5〉Abstract〈/h5〉 〈div〉〈p〉This paper proposes a finite difference discretization method for simulations of heat and mass transfer with Robin boundary conditions on irregular domains. The level set method is utilized to implicitly capture the irregular evolving interface, and the ghost fluid method to address variable discontinuities on the interface. Special care has been devoted to providing ghost values that are restricted by the Robin boundary conditions. Specifically, it is done in two steps: 1) calculate the normal derivative in cells adjacent to the interface by reconstructing a linear polynomial system; 2) successively extrapolate the normal derivative and the ghost value in the normal direction using a linear partial differential equation approach. This method produces second-order accurate solutions for both the Poisson and heat equations with Robin boundary conditions, and first-order accurate solutions for the Stefan problems. The solution gradients are of first-order accuracy, as expected. It is easy to implement in three-dimensional configurations, and can be straightforwardly generalized into higher-order variants. The method thus represents a promising tool for practical heat and mass transfer problems involving Robin boundary conditions.〈/p〉〈/div〉
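    The ghost-value idea for Robin conditions can be illustrated in one space dimension: the sketch below solves a Poisson problem with a manufactured solution and eliminates the ghost node through the Robin condition. It is only a constant-coefficient 1-D illustration, not the paper's level-set/ghost-fluid treatment of irregular domains, and all discretization choices here are assumptions.
```python
# 1-D illustration of enforcing a Robin condition u'(1) + a*u(1) = g with a
# ghost node in a second-order finite-difference solver for -u'' = f on (0, 1),
# using the manufactured solution u = sin(x) (so f = sin(x)).
import numpy as np

a_robin = 1.0                      # Robin coefficient
N = 100
h = 1.0 / N
x = np.linspace(0.0, 1.0, N + 1)
f = np.sin(x)                      # right-hand side for u = sin(x)
u0 = 0.0                           # Dirichlet value at x = 0
g = np.cos(1.0) + a_robin * np.sin(1.0)

A = np.zeros((N, N))               # unknowns u_1 .. u_N
b = np.zeros(N)
for i in range(1, N):              # interior grid nodes 1 .. N-1
    r = i - 1
    A[r, r] = 2.0 / h**2
    if r - 1 >= 0:
        A[r, r - 1] = -1.0 / h**2
    A[r, r + 1] = -1.0 / h**2
    b[r] = f[i]
b[0] += u0 / h**2                  # known Dirichlet neighbour

# Boundary row: the ghost value u_{N+1} = u_{N-1} + 2h*(g - a_robin*u_N),
# obtained from the central-difference Robin condition, is eliminated here.
A[N - 1, N - 1] = (2.0 + 2.0 * h * a_robin) / h**2
A[N - 1, N - 2] = -2.0 / h**2
b[N - 1] = f[N] + 2.0 * g / h

u = np.linalg.solve(A, b)
# maximum nodal error; shrinks roughly 4x when N is doubled (second order)
print(np.max(np.abs(u - np.sin(x[1:]))))
```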
    Print ISSN: 0021-9991
    Electronic ISSN: 1090-2716
    Topics: Computer Science , Physics
    Published by Elsevier
    Location Call Number Expected Availability
    BibTip Others were also interested in ...