ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


Filter
Collection
  • Articles  (9,304)
Source
  • Latest Papers from Table of Contents or Articles in Press  (9,304)
Publisher
  • Molecular Diversity Preservation International  (8,627)
  • Hindawi  (325)
  • Institute of Electrical and Electronics Engineers  (227)
  • Blackwell Publishing Ltd
  • Copernicus
  • De Gruyter
  • Springer Science + Business Media
Years
  • 2020-2022  (8,954)
  • 2010-2014
  • 1990-1994  (350)
  • 1985-1989
  • 1960-1964
Year
  • 2021  (8,954)
  • 1991  (175)
  • 1990  (175)
Topics
  • Electrical Engineering, Measurement and Control Technology  (5,404)
  • Mathematics  (3,001)
  • Computer Science  (942)
  • Ethnic Sciences  (71)
  • Sociology
Journal
  • 1
    Publication Date: 2021-08-20
    Description: The Kadomtsev–Petviashvili equation is known to be the leading term of a semi-infinite hierarchy of integrable equations whose evolutions are given by times with positive numbers. Here, we introduce a new hierarchy directed to negative numbers of times. The derivation of such systems, as well as of the corresponding hierarchy, is based on commutator identities. This approach enables the introduction of linear differential equations that admit lifts up to nonlinear integrable ones by means of a special dressing procedure. Thus, one can construct not only the nonlinear equations but the corresponding Lax pairs as well. The Lax operator of this evolution coincides with the Lax operator of the “positive” hierarchy. We also derive (1 + 1)-dimensional reductions of equations of this hierarchy.
    Electronic ISSN: 2227-7390
    Topics: Mathematics
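For reference, the Kadomtsev–Petviashvili equation that heads the “positive” hierarchy mentioned in the abstract above can be written, in one common normalization (not taken from this paper), as:

```latex
% KP equation in a standard normalization; \sigma^2 = \mp 1 distinguishes
% the KP-I and KP-II cases.
\[
  \partial_x\bigl(u_t + 6\,u\,u_x + u_{xxx}\bigr) + 3\,\sigma^2\,u_{yy} = 0
\]
```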
  • 2
    Publication Date: 2021-08-20
    Description: This paper presents the design of NaviPBx, an ultrasound-navigated prostate cancer biopsy system. NaviPBx is designed to support an affordable and sustainable national healthcare program in Senegal. It uses spatiotemporal navigation and multiparametric transrectal ultrasound to guide biopsies. NaviPBx integrates concepts and methods that have been independently validated previously in clinical feasibility studies and deploys them together in a practical prostate cancer biopsy system. NaviPBx is based entirely on free open-source software and will be shared as a free open-source program with no restriction on its use. NaviPBx is set to be deployed and sustained nationwide through the Senegalese Military Health Service. This paper reports on the results of the design process of NaviPBx. Our approach concentrates on “frugal technology”, intended to be affordable for low–middle income (LMIC) countries. Our project promises the wide-scale application of prostate biopsy and will foster time-efficient development and programmatic implementation of ultrasound-guided diagnostic and therapeutic interventions in Senegal and beyond.
    Electronic ISSN: 2313-433X
    Topics: Computer Science
  • 3
    Publication Date: 2021-08-20
    Description: Traditional research methods in software defect prediction use part of the data in the same project to train the defect prediction model and predict the defect label of the remaining part of the data. However, in the practical realm of software development, the software project that needs to be predicted is generally a brand new software project, and there is not enough labeled data to build a defect prediction model; therefore, traditional methods are no longer applicable. Cross-project defect prediction uses the labeled data of the same type of project similar to the target project to build the defect prediction model, so as to solve the problem of data loss in traditional methods. However, the difference in data distribution between the same type of project and the target project reduces the performance of defect prediction. To solve this problem, this paper proposes a cross-project defect prediction method based on manifold feature transformation. This method transforms the original feature space of the project into a manifold space, then reduces the difference in data distribution of the transformed source project and the transformed target project in the manifold space, and finally uses the transformed source project to train a naive Bayes prediction model with better performance. A comparative experiment was carried out using the Relink dataset and the AEEEM dataset. The experimental results show that compared with the benchmark method and several cross-project defect prediction methods, the proposed method effectively reduces the difference in data distribution between the source project and the target project, and obtains a higher F1 value, which is an indicator commonly used to measure the performance of the two-class model.
    Electronic ISSN: 1999-5903
    Topics: Computer Science
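As a rough illustration of the workflow described in the abstract above (and only that: the paper’s manifold feature transformation is replaced here by a simple PCA projection learned on the source project, and the data are synthetic), a sketch in Python:

```python
# Rough sketch of cross-project defect prediction with a feature-space
# transformation before training naive Bayes. This is NOT the paper's
# manifold feature transformation: as a stand-in, both projects are
# projected onto the source project's principal components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
# Hypothetical source project (labeled) and target project (to predict).
X_src = rng.normal(size=(300, 20)); y_src = rng.integers(0, 2, 300)
X_tgt = rng.normal(loc=0.5, size=(200, 20)); y_tgt = rng.integers(0, 2, 200)

# Step 1: learn a shared low-dimensional representation from the source
# project and map both projects into it (stand-in for the manifold space).
pca = PCA(n_components=10).fit(X_src)
Z_src, Z_tgt = pca.transform(X_src), pca.transform(X_tgt)

# Step 2: train naive Bayes on the transformed source project and
# evaluate with F1, the metric used in the abstract.
clf = GaussianNB().fit(Z_src, y_src)
print("F1 on target project:", f1_score(y_tgt, clf.predict(Z_tgt)))
```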
  • 4
    Publication Date: 2021-08-20
    Description: For α ∈ [0,1], let A_α(G^σ) = αD(G) + (1 − α)A(G^σ), where G is a simple undirected graph, D(G) is the diagonal matrix of its vertex degrees, and A(G^σ) is the adjacency matrix of the signed graph G^σ whose underlying graph is G. In this paper, basic properties of A_α(G^σ) are obtained, its positive semidefiniteness is studied, and some bounds on its eigenvalues are derived; in particular, lower and upper bounds on its largest eigenvalue are obtained.
    Electronic ISSN: 2227-7390
    Topics: Mathematics
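A small numpy sketch of the central object in the abstract above, using a toy signed triangle and an illustrative value of α (neither taken from the paper):

```python
# Toy example of A_alpha(G^sigma) = alpha*D(G) + (1 - alpha)*A(G^sigma)
# for a signed triangle; the graph and the value of alpha are illustrative.
import numpy as np

# Signed adjacency matrix A(G^sigma): entries are +1/-1 on the edges of G.
A_sigma = np.array([[0,  1, -1],
                    [1,  0,  1],
                    [-1, 1,  0]], dtype=float)

# D(G) uses the degrees of the underlying (unsigned) graph G.
D = np.diag(np.abs(A_sigma).sum(axis=1))

alpha = 0.5
A_alpha = alpha * D + (1 - alpha) * A_sigma

# A_alpha is symmetric, so its eigenvalues are real; the largest one is
# the quantity bounded in the paper.
eigvals = np.linalg.eigvalsh(A_alpha)
print("eigenvalues:", eigvals, "largest:", eigvals[-1])
```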
  • 5
    Publication Date: 2021-08-20
    Description: This paper combines interval analysis tools with nonlinear model predictive control (NMPC). The NMPC strategy is formulated based on an uncertain dynamic model expressed as nonlinear ordinary differential equations (ODEs). All the dynamic parameters are identified in a guaranteed way, taking into account the various uncertainties in the embedded sensors and the system’s design. The NMPC problem is solved at each time step using validated simulation and interval analysis methods to compute the optimal and safe control inputs over a finite prediction horizon. This approach takes into account several constraints that are crucial for the system’s safety and stability, namely the state and control limits. The proposed controller consists of two steps: filtering and branching procedures that find the input intervals fulfilling the state constraints and ensuring convergence to the reference set. Then, an optimization procedure computes the optimal, point-valued control input to be sent to the system’s actuators to stabilize the pendulum. The capabilities of the validated NMPC scheme are illustrated through several simulations with the DynIbex library and experiments on an inverted pendulum.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
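A heavily simplified sketch of the filter-and-branch idea over input intervals described above; it does not use DynIbex or validated ODE integration, and a scalar linear system stands in for the uncertain pendulum model:

```python
# Highly simplified illustration of the filter/branch idea: candidate
# control-input intervals are kept only if a crude interval propagation of
# the state stays inside the allowed state set. This does NOT use DynIbex
# or validated simulation; x_next = a*x + b*u stands in for the real model.
def step_interval(x_lo, x_hi, u_lo, u_hi, a=0.9, b=0.1):
    """One-step interval propagation of x_next = a*x + b*u (a, b > 0)."""
    return a * x_lo + b * u_lo, a * x_hi + b * u_hi

def filter_and_branch(x_lo, x_hi, u_lo, u_hi, x_min, x_max, depth=8):
    """Return feasible sub-intervals of [u_lo, u_hi] (bisection search)."""
    nx_lo, nx_hi = step_interval(x_lo, x_hi, u_lo, u_hi)
    if x_min <= nx_lo and nx_hi <= x_max:              # whole interval feasible
        return [(u_lo, u_hi)]
    if nx_hi < x_min or nx_lo > x_max or depth == 0:   # infeasible / give up
        return []
    mid = 0.5 * (u_lo + u_hi)                          # branch on the input
    return (filter_and_branch(x_lo, x_hi, u_lo, mid, x_min, x_max, depth - 1)
            + filter_and_branch(x_lo, x_hi, mid, u_hi, x_min, x_max, depth - 1))

# Feasible input intervals keeping the next state inside [-1, 1]:
print(filter_and_branch(0.8, 1.0, -5.0, 5.0, -1.0, 1.0))
```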
  • 6
    Publication Date: 2021-08-20
    Description: This paper reveals the relationship between the Miller plateau voltage and the displacement currents through the gate–drain capacitance (C_GD) and the drain–source capacitance (C_DS) in the switching process of a power transistor. The corrected turn-on and turn-off Miller plateau voltages differ even with a constant current load. Using the proposed new Miller plateau, the turn-on and turn-off sequences can be analyzed more accurately, and the switching power loss can be predicted more accurately as well. Switching loss models based on the new Miller plateau are also proposed. Experimental tests on a power MOSFET (NCE2030K) verified the relationship between the Miller plateau voltage and the displacement currents through C_GD and C_DS. A carefully designed verification test bench featuring a power MOSFET model written in Verilog-A confirmed the prediction accuracy of the switching waveform and switching loss with the newly proposed Miller plateau. The average relative error of the loss model using the new plateau is reduced to 1/2∼1/4 of that of the loss model using the old plateau; the proposed loss model using the new plateau, which also takes the gate current’s variation into account, further reduces the error to around 5%.
    Electronic ISSN: 2079-9292
    Topics: Electrical Engineering, Measurement and Control Technology
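For orientation only, a back-of-the-envelope calculation with the classic single-plateau approximation and made-up component values; the paper’s contribution is precisely a corrected model with separate turn-on and turn-off plateau voltages, which is not reproduced here:

```python
# Back-of-the-envelope switching-loss estimate using the CLASSIC (single)
# Miller-plateau approximation, shown only to make the quantities in the
# abstract concrete. All component values below are illustrative, not taken
# from the paper or the NCE2030K datasheet.
V_th, g_fs = 2.0, 10.0          # threshold voltage [V], transconductance [S]
I_D, V_DS = 5.0, 30.0           # switched current [A] and voltage [V]
Q_gd = 4e-9                     # gate-drain (Miller) charge [C]
V_drv, R_g = 10.0, 10.0         # gate-drive voltage [V], gate resistance [ohm]
f_sw = 100e3                    # switching frequency [Hz]

V_plateau = V_th + I_D / g_fs                 # classic plateau voltage
i_g_on  = (V_drv - V_plateau) / R_g           # gate current during turn-on plateau
i_g_off = V_plateau / R_g                     # gate current during turn-off plateau
t_on, t_off = Q_gd / i_g_on, Q_gd / i_g_off   # voltage transition times [s]

E_sw = 0.5 * V_DS * I_D * (t_on + t_off)      # energy lost per switching cycle [J]
print(f"V_plateau = {V_plateau:.2f} V, P_sw = {E_sw * f_sw:.3f} W")
```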
  • 7
    Publication Date: 2021-08-18
    Description: The Society of Automotive Engineers (SAE) defines six levels of driving automation, ranging from Level 0 to Level 5. Automated driving systems perform the entire dynamic driving task for Level 3–5 automated vehicles. Delegating dynamic driving tasks from the driver to automated driving systems can eliminate crashes attributed to driver errors. Sharing status, sharing intent, seeking agreement, or sharing prescriptive information between road users and vehicles dedicated to automated driving systems can further enhance dynamic driving task performance, safety, and traffic operations. Extensive simulation is required to reduce operating costs and achieve an acceptable risk level before testing cooperative automated driving systems in laboratory environments, on test tracks, or on public roads. Cooperative automated driving systems can be simulated using a vehicle dynamics simulation tool (e.g., CarMaker and CarSim) or a traffic microsimulation tool (e.g., Vissim and Aimsun). Vehicle dynamics simulation tools are mainly used for verification and validation purposes on a small scale, while traffic microsimulation tools are mainly used for verification purposes on a large scale. Vehicle dynamics simulation tools can simulate longitudinal, lateral, and vertical dynamics for only a few vehicles in each scenario (e.g., up to ten vehicles in CarMaker and up to twenty vehicles in CarSim). Conventional traffic microsimulation tools can simulate vehicle-following, lane-changing, and gap-acceptance behaviors for many vehicles in each scenario without simulating the vehicle powertrain. Vehicle dynamics simulation tools are more compute-intensive but more accurate than traffic microsimulation tools. Due to software architecture or computing power limitations, the simplifying assumptions underlying conventional traffic microsimulation tools may have been a necessary compromise long ago. There is, therefore, a need for a simulation tool that balances computational complexity and accuracy in order to simulate many vehicles in each scenario with reasonable accuracy. This research proposes a traffic microsimulation tool that employs a simplified vehicle powertrain model and a model-based fault detection method to simulate many vehicles with reasonable accuracy at each simulation time step under noise and unknown inputs. Our traffic microsimulation tool considers driver characteristics, vehicle model, grade, pavement conditions, operating mode, vehicle-to-vehicle communication vulnerabilities, and traffic conditions to estimate longitudinal control variables with reasonable accuracy at each simulation time step for many conventional vehicles, vehicles dedicated to automated driving systems, and vehicles equipped with cooperative automated driving systems. The proposed vehicle-following model and longitudinal control functions are verified for fourteen vehicle models, operating in manual, automated, and cooperative automated modes, over two driving schedules and under three malicious fault magnitudes on transmitted accelerations.
    Electronic ISSN: 2079-9292
    Topics: Electrical Engineering, Measurement and Control Technology
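To make the per-time-step longitudinal update discussed above concrete, here is a generic vehicle-following step using the well-known Intelligent Driver Model (IDM); it is a stand-in for illustration only, not the vehicle-following model proposed in the paper, and all parameter values are illustrative:

```python
# Generic longitudinal (vehicle-following) update of the kind a traffic
# microsimulation evaluates at every time step. The IDM is used here purely
# as a stand-in; it is NOT the model proposed in the paper.
import math

def idm_acceleration(v, v_lead, gap, v0=30.0, T=1.5, a_max=1.5, b=2.0, s0=2.0):
    """IDM acceleration [m/s^2] of a follower with speed v behind a leader."""
    dv = v - v_lead
    s_star = s0 + v * T + v * dv / (2.0 * math.sqrt(a_max * b))
    return a_max * (1.0 - (v / v0) ** 4 - (max(s_star, 0.0) / gap) ** 2)

# One simulated vehicle approaching a slower leader, Euler-integrated at 0.1 s.
v, x, dt = 25.0, 0.0, 0.1
x_lead, v_lead = 80.0, 15.0
for _ in range(100):
    a = idm_acceleration(v, v_lead, x_lead - x)
    v = max(v + a * dt, 0.0)
    x += v * dt
    x_lead += v_lead * dt
print(f"follower speed after 10 s: {v:.1f} m/s, gap: {x_lead - x:.1f} m")
```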
  • 8
    Publication Date: 2021-08-18
    Description: In machine learning and data science, feature selection is considered a crucial step of data preprocessing. When raw data are applied directly for classification or clustering purposes, the learning algorithms sometimes do not perform well. One possible reason for this is the presence of redundant, noisy, and non-informative features or attributes in the datasets. Hence, feature selection methods are used to identify the subset of relevant features that can maximize the model performance. Moreover, reducing the feature dimension also reduces both the training time and the storage required by the model. In this paper, we present a tri-stage wrapper-filter-based feature selection framework for medical report-based disease detection. In the first stage, an ensemble was formed by four filter methods—Mutual Information, ReliefF, Chi Square, and Xvariance—and then each feature from the union set was assessed by three classification algorithms—support vector machine, naïve Bayes, and k-nearest neighbors—and an average accuracy was calculated. The features with higher accuracy were selected to obtain a preliminary subset of optimal features. In the second stage, Pearson correlation was used to discard highly correlated features. In these two stages, the XGBoost classification algorithm was applied to obtain the most contributing features, which in turn provide the best optimal subset. Then, in the final stage, we fed the obtained feature subset to a meta-heuristic algorithm, the whale optimization algorithm, in order to further reduce the feature set and to achieve higher accuracy. We evaluated the proposed feature selection framework on four publicly available disease datasets taken from the UCI machine learning repository, namely arrhythmia, leukemia, DLBCL, and prostate cancer. The results confirm that the proposed method can perform better than many state-of-the-art methods and can detect important features as well. Fewer features also mean fewer medical tests for a correct diagnosis, saving both time and cost.
    Electronic ISSN: 1424-8220
    Topics: Chemistry and Pharmacology, Electrical Engineering, Measurement and Control Technology
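A sketch of the first two stages described above (filter ensemble, then Pearson-correlation pruning) on synthetic data; the exact filter set, the classifier-based scoring, the XGBoost step, and the whale-optimization wrapper stage are omitted or replaced, so this only approximates the proposed pipeline:

```python
# Sketch of a filter ensemble followed by correlation pruning. ReliefF and
# Xvariance are not included; mutual information, chi-square, and an ANOVA
# F-test stand in as the filters. Data are synthetic, not a medical dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif, chi2, f_classif
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=200, n_features=40, n_informative=8,
                           random_state=0)
X = MinMaxScaler().fit_transform(X)          # chi2 needs non-negative features

# Stage 1: rank features with several filter methods and take the union
# of each method's top-k features.
k = 10
scores = [mutual_info_classif(X, y, random_state=0), chi2(X, y)[0],
          f_classif(X, y)[0]]
selected = set()
for s in scores:
    selected |= set(np.argsort(s)[-k:])
selected = sorted(selected)

# Stage 2: drop one feature from every highly correlated pair (|r| > 0.9).
corr = np.corrcoef(X[:, selected], rowvar=False)
keep = []
for i in range(len(selected)):
    if all(abs(corr[i, j]) <= 0.9 for j in keep):
        keep.append(i)
final_features = [selected[i] for i in keep]
print("features kept after two stages:", final_features)
```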
  • 9
    Publication Date: 2021-08-19
    Description: A mathematical model of diffusion in powder oxide catalysts during temperature-programmed oxygen isotopic exchange is proposed. Diffusion is considered together with the homogeneous and heterogeneous oxygen isotopic exchange processes. The matrix forms of the exchange rate equations for simple and complex heteroexchange and for homoexchange reactions, which take symmetrical forms, are analyzed. The quantitative values of the model parameters are found by fitting experimental data on temperature-programmed oxygen isotopic exchange in ZrO2 and CeO2 catalysts taken from the literature. The fits show good agreement between the model results and the experimental data. The shapes of the kinetic curves registered during temperature-programmed oxygen isotopic exchange are analyzed, and the influence of various process parameters is considered, such as the activation energies of simple and complex heteroexchange, the oxygen surface concentration of the catalyst, the ratio of catalyst surface to reactor volume, and the diffusion activation energy. The depth profiles of the diffusing oxygen species in the oxide catalyst powder are calculated.
    Electronic ISSN: 2073-8994
    Topics: Mathematics
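As a generic illustration of fitting a rate model to temperature-programmed exchange data (not the paper’s coupled diffusion/heteroexchange/homoexchange model), a single Arrhenius-type first-order exchange step fitted to a synthetic curve:

```python
# Minimal illustration of fitting a temperature-programmed exchange curve.
# A single Arrhenius-type first-order step stands in for the paper's model,
# and the "experimental" data are synthetic with added noise.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

R = 8.314
def gas_fraction(T, k0, Ea, beta=0.1667):     # beta: heating rate [K/s]
    """Fraction of labeled oxygen remaining in the gas phase vs. temperature T."""
    def rhs(t, x):
        temp = T[0] + beta * t
        return [-k0 * np.exp(-Ea / (R * temp)) * x[0]]
    t_span = (0.0, (T[-1] - T[0]) / beta)
    sol = solve_ivp(rhs, t_span, [1.0], t_eval=(T - T[0]) / beta)
    return sol.y[0]

T = np.linspace(400.0, 1100.0, 80)            # temperature grid [K]
data = gas_fraction(T, 5e4, 9e4) + np.random.default_rng(1).normal(0, 0.01, T.size)

popt, _ = curve_fit(gas_fraction, T, data, p0=(1e4, 8e4), maxfev=5000)
print("fitted pre-exponential factor and activation energy:", popt)
```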
  • 10
    Publication Date: 2021-08-19
    Description: With the rapid development of deep learning, computer vision has helped solve a variety of problems in engineering construction. However, very few computer vision-based approaches have been proposed for evaluating work productivity. Therefore, taking a super high-rise project as a case study, a computer vision-based method for evaluating the productivity of reinforcement assembly is proposed, using the object information detected by a deep learning algorithm. Firstly, a CenterNet-based detector that can accurately distinguish the various entities involved in assembling reinforcement is established, with DLA34 selected as the backbone. The mAP reaches 0.9682, and detecting a single image takes as little as 0.076 s. Secondly, the trained detector is applied to the video frames, yielding images with detection boxes and files with their coordinates. The position relationship between the detected work objects and the detected workers is used to determine how many workers (N) have participated in the task. The time (T) taken to perform the process is obtained from the change in the coordinates of the work object. Finally, the productivity is evaluated according to N and T. The authors use four actual construction videos for validation, and the results show that the productivity evaluation is generally consistent with the actual conditions. The contribution of this research to construction management is twofold: on the one hand, without affecting the normal behavior of workers, a connection between individual workers and the work object is established, and work productivity evaluation is realized; on the other hand, the proposed method has a positive effect on improving the efficiency of construction management.
    Electronic ISSN: 1424-8220
    Topics: Chemistry and Pharmacology, Electrical Engineering, Measurement and Control Technology
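A sketch of how per-frame detections could be turned into the crew size N and duration T used for the productivity figure above; the detection format, the distance threshold, and the hard-coded boxes are assumptions for illustration, and real detections would come from the trained CenterNet detector rather than be hard-coded:

```python
# Turn per-frame detections into N (workers participating) and T (duration).
# Box format, threshold, and frame data are illustrative assumptions only.
import math

def center(box):                       # box = (x1, y1, x2, y2) in pixels
    return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)

def near(a, b, max_dist=150.0):        # worker counted as participating
    return math.dist(center(a), center(b)) <= max_dist

# frames: detected worker boxes and the work-object box per video frame
frames = [
    {"workers": [(10, 10, 60, 120), (400, 20, 450, 130)], "object": (30, 100, 200, 180)},
    {"workers": [(15, 12, 65, 122), (390, 25, 440, 135)], "object": (30, 100, 200, 180)},
    {"workers": [(20, 15, 70, 125)],                      "object": (30, 100, 200, 180)},
]
fps = 25.0

participating = [sum(near(w, f["object"]) for w in f["workers"]) for f in frames]
N = max(participating)                 # workers involved in the task
T = len(frames) / fps                  # task duration from the frame count [s]
print(f"N = {N} worker(s), T = {T:.2f} s, productivity scales with 1 / (N * T)")
```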