ALBERT

All Library Books, journals and Electronic Records Telegrafenberg

Filter
  • Articles  (5,753)
  • Algorithms  (1,162)
  • BMC Medical Informatics and Decision Making  (757)
  • Computer science  (5,753)
  • 1
    Publication date: 2021-08-20
    Description: This paper combines interval analysis tools with nonlinear model predictive control (NMPC). The NMPC strategy is formulated based on an uncertain dynamic model expressed as nonlinear ordinary differential equations (ODEs). All the dynamic parameters are identified in a guaranteed way, accounting for the various uncertainties in the embedded sensors and the system’s design. The NMPC problem is solved at each time step using validated simulation and interval analysis methods to compute the optimal and safe control inputs over a finite prediction horizon. This approach considers several constraints that are crucial for the system’s safety and stability, namely the state and control limits. The proposed controller consists of two steps: filtering and branching procedures that find the input intervals fulfilling the state constraints and ensuring convergence to the reference set. An optimization procedure then computes the optimal point-valued control input to be sent to the system’s actuators for pendulum stabilization. The validated NMPC capabilities are illustrated through several simulations with the DynIbex library and experiments on an inverted pendulum.
    Digital ISSN: 1999-4893
    Subject: Computer science
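The filtering step this abstract describes keeps only those candidate input intervals whose interval-simulated successor state stays within the state bounds. A minimal sketch of that idea on a made-up 1-D system x' = a·x + u with an uncertain parameter a; the dynamics, bounds, and all names are illustrative, not the paper's model:

```python
# Toy interval filtering step for interval-based NMPC (illustrative only).
# Intervals are (lo, hi) tuples; the expression is linear in each variable,
# so enumerating interval endpoints yields the exact reachable bounds.

def step_interval(x, a, u, dt=0.1):
    """One Euler step of x' = a*x + u with interval-valued x, a, u."""
    candidates = [xi + dt * (ai * xi + ui)
                  for xi in x for ai in a for ui in u]
    return (min(candidates), max(candidates))

def filter_inputs(x0, a, u_candidates, x_bounds):
    """Keep input intervals whose one-step reachable set stays in bounds."""
    lo, hi = x_bounds
    safe = []
    for u in u_candidates:
        x_lo, x_hi = step_interval(x0, a, u)
        if lo <= x_lo and x_hi <= hi:
            safe.append(u)
    return safe

# Uncertain state and parameter; three candidate input intervals.
safe = filter_inputs(x0=(0.9, 1.1), a=(-0.5, -0.3),
                     u_candidates=[(-2.0, -1.0), (-0.5, 0.5), (1.0, 2.0)],
                     x_bounds=(0.0, 1.2))
```

A real validated NMPC would use rigorous interval ODE integration (as DynIbex provides) rather than a corner-enumerated Euler step.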
  • 2
    Publication date: 2021-08-20
    Description: The characteristics of a bridge pile-group foundation have a significant influence on the dynamic performance of the superstructure. Most existing analysis methods for pile-group foundation impedance are highly specialized and cannot be generalized to practical projects. Therefore, a project-oriented numerical solution algorithm is proposed to compute the dynamic impedance of bridge pile-group foundations. Based on the theory of the viscous-spring artificial boundary, the derivation and solution of the impedance function are transferred to numerical modeling and harmonic analysis, which can be carried out through the finite element method. Taking a typical pile-group foundation as a case study, the results of the algorithm are compared with those from the existing literature. Moreover, an impact experiment on a real pile-group foundation was carried out, and its results are also compared with those of the proposed numerical algorithm. Both comparisons show that the proposed numerical algorithm satisfies engineering precision and is therefore effective in application.
    Digital ISSN: 1999-4893
    Subject: Computer science
  • 3
    Publication date: 2021-08-21
    Description: Background Significant investments have been made towards the implementation of mHealth applications and eRecord systems globally. However, fragmentation of these technologies remains a big challenge, often unresolved in developing countries. In particular, evidence shows little consideration for linking mHealth applications and eRecord systems. Botswana is a typical developing country in sub-Saharan Africa that has explored mHealth applications, but the solutions are not interoperable with existing eRecord systems. This paper describes Botswana’s eRecord systems interoperability landscape and provides guidance for linking mHealth applications to eRecord systems, both for Botswana and for developing countries using Botswana as an exemplar. Methods A survey and interviews of health ICT workers and a review of the Botswana National eHealth Strategy were completed. Perceived interoperability benefits, opportunities and challenges were charted and analysed, and future guidance derived. Results Survey and interview responses showed the need for interoperable mHealth applications and eRecord systems within the health sector of Botswana and within the context of the National eHealth Strategy. However, the current Strategy does not address linking mHealth applications to eRecord systems. Across Botswana’s health sectors, global interoperability standards and Application Programming Interfaces are widely used, with some level of interoperability within, but not between, public and private facilities. Further, a mix of open-source and commercial eRecord systems utilising relational database systems and similar data formats is supported. Challenges for linking mHealth applications and eRecord systems in Botswana were identified and categorised into themes, which led to the development of guidance to enhance the National eHealth Strategy. Conclusion Interoperability between mHealth applications and eRecord systems is needed and is feasible. Opportunities and challenges for linking mHealth applications to eRecord systems were identified, and future guidance stemming from this insight is presented. The findings will aid Botswana, and other developing countries, in resolving the pervasive disconnect between mHealth applications and eRecord systems.
    Digital ISSN: 1472-6947
    Subject: Computer science, Medicine
    Published by BioMed Central
  • 4
    Publication date: 2021-08-19
    Description: Many smart city and society applications such as smart health (elderly care, medical applications), smart surveillance, sports, and robotics require the recognition of user activities, an important class of problems known as human activity recognition (HAR). Several issues have hindered progress in HAR research, particularly with the emergence of fog and edge computing, which brings many new opportunities (low latency, dynamic and real-time decision making, etc.) but also comes with its own challenges. This paper focuses on addressing two important research gaps in HAR research: (i) improving HAR prediction accuracy and (ii) managing the frequent changes in the environment and data related to user activities. To address these, we propose an HAR method based on Soft-Voting and Self-Learning (SVSL). SVSL uses two strategies. First, to enhance accuracy, it combines the capabilities of Deep Learning (DL), Generalized Linear Model (GLM), Random Forest (RF), and AdaBoost classifiers using soft voting. Second, to classify the most challenging data instances, the SVSL method is equipped with a self-training mechanism that generates training data and retrains itself. We investigate the performance of our proposed SVSL method using two publicly available datasets on six human activities related to lying, sitting, and walking positions. The first dataset consists of 562 features and the second of five features. The data are collected using the accelerometer and gyroscope smartphone sensors. The results show that the proposed method provides 6.26%, 1.75%, 1.51%, and 4.40% better prediction accuracy (averaged over the two datasets) compared to GLM, DL, RF, and AdaBoost, respectively. We also analyze and compare the class-wise performance of the SVSL method with that of DL, GLM, RF, and AdaBoost.
    Digital ISSN: 1999-4893
    Subject: Computer science
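The soft-voting step this abstract describes averages the class-probability vectors of the base classifiers and predicts the class with the highest average. A minimal sketch with hard-coded stand-in probabilities; the real DL/GLM/RF/AdaBoost learners and the self-training loop are not reproduced:

```python
# Soft voting: average per-classifier probability vectors, take the argmax.

def soft_vote(prob_lists):
    """prob_lists: one probability vector per base classifier."""
    n = len(prob_lists)
    avg = [sum(p[i] for p in prob_lists) / n
           for i in range(len(prob_lists[0]))]
    return max(range(len(avg)), key=avg.__getitem__), avg

# Three classes (e.g. lying, sitting, walking); four base classifiers.
probs = [
    [0.6, 0.3, 0.1],   # stand-in for "DL"
    [0.2, 0.5, 0.3],   # stand-in for "GLM"
    [0.4, 0.4, 0.2],   # stand-in for "RF"
    [0.5, 0.2, 0.3],   # stand-in for "AdaBoost"
]
label, avg = soft_vote(probs)
```

Here class 0 wins even though two of the four classifiers prefer other classes, which is exactly the averaging effect soft voting exploits.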
  • 5
    Publication date: 2021-08-21
    Description: Background To enhance teleconsultation management, demands can be classified into different patterns, and the service for each demand pattern can be improved. Methods For effective teleconsultation classification, a novel ensemble hierarchical clustering method is proposed in this study. In the proposed method, individual clustering results are first obtained by different hierarchical clustering methods, and then ensembled by one-hot encoding, the calculation and thresholding of cosine similarity, and network graph representation. In the network graph built from high cosine similarities, the connected demand series can be categorized into one pattern. For verification, 43 teleconsultation demand series are used as sample data, and the efficiency and quality of teleconsultation services are analyzed before and after the demand classification. Results The teleconsultation demands are classified into three categories: erratic, lumpy, and slow. Under fixed strategies, the service analysis after demand classification reveals deficiencies in teleconsultation services that the analysis before demand classification cannot. Conclusion The proposed ensemble hierarchical clustering method can effectively categorize teleconsultation demands, and effective demand categorization can enhance teleconsultation management.
    Digital ISSN: 1472-6947
    Subject: Computer science, Medicine
    Published by BioMed Central
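The ensembling pipeline named in the abstract (one-hot encoding of individual clusterings, pairwise cosine similarity, and connected components of the high-similarity graph) can be sketched on made-up labelings; the 0.9 similarity threshold is an assumption, not the paper's value:

```python
# Ensemble of clusterings: one-hot encode each item's labels across
# methods, link items with high cosine similarity, and read the final
# patterns off as connected components (via union-find).
from math import sqrt

def one_hot(labelings):
    """labelings: list of label lists, one per clustering method."""
    n = len(labelings[0])
    vecs = [[] for _ in range(n)]
    for labels in labelings:
        classes = sorted(set(labels))
        for i, lab in enumerate(labels):
            vecs[i].extend(1.0 if lab == c else 0.0 for c in classes)
    return vecs

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def components(n, edges):
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Two toy hierarchical clusterings of 5 demand series.
labelings = [[0, 0, 1, 1, 2], [0, 0, 1, 2, 2]]
vecs = one_hot(labelings)
edges = [(i, j) for i in range(5) for j in range(i + 1, 5)
         if cosine(vecs[i], vecs[j]) > 0.9]
patterns = components(5, edges)
```

Only the two series that every clustering agrees on end up in the same pattern; series the methods disagree about stay separate.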
  • 6
    Publication date: 2021-02-25
    Description: Genetic algorithms (GAs) are mostly used as an offline optimisation method to discover a suitable solution to a complex problem prior to implementation. In this paper, we present a different application in which a GA is used to progressively adapt the collective performance of an ad hoc collection of devices that are being integrated post-deployment. Adaptive behaviour in the context of this article refers to two dynamic aspects of the problem: (a) the availability of individual devices and (b) the objective functions for the performance of the entire population. We illustrate this concept in a video surveillance scenario in which already installed cameras are retrofitted with networking capabilities to form a coherent closed-circuit television (CCTV) system. We show that this can be conceived as a multi-objective optimisation problem which can be solved at run-time, with the added benefit that solutions can be refined or modified in response to changing priorities or even unpredictable events such as faults. We present the results of a detailed simulation study, whose implications are discussed from both a theoretical and a practical viewpoint (the trade-off between saving computational resources and surveillance coverage).
    Digital ISSN: 1999-4893
    Subject: Computer science
  • 7
    Publication date: 2021-02-25
    Description: Collision between rigid three-dimensional objects is a very common modelling problem in a wide spectrum of scientific disciplines, including Computer Science and Physics. It spans from realistic animation of polyhedral shapes for computer vision to the description of thermodynamic and dynamic properties in simple and complex fluids. For instance, colloidal particles of especially exotic shapes are commonly modelled as hard-core objects, whose collision test is key to correctly determining their phase and aggregation behaviour. In this work, we propose the Oriented Cuboid Sphere Intersection (OCSI) algorithm to detect collisions between prolate or oblate cuboids and spheres. We investigate OCSI’s performance by benchmarking it against a number of algorithms commonly employed in computer graphics and colloidal science: Quick Rejection First (QRF), Quick Rejection Intertwined (QRI) and a vectorized version of the OBB-sphere collision detection algorithm that explicitly uses SIMD Streaming Extension (SSE) intrinsics, here referred to as SSE-intr. We observed that QRI and QRF depend significantly on the specific cuboid anisotropy and sphere radius, while SSE-intr and OCSI maintain their speed independently of the objects’ geometry. While OCSI and SSE-intr, both based on SIMD parallelization, show excellent and very similar performance, the former provides more accessible coding and a user-friendly implementation, as it exploits OpenMP directives for automatic vectorization.
    Digital ISSN: 1999-4893
    Subject: Computer science
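The cuboid-sphere overlap problem that OCSI and its competitors solve reduces, once the sphere centre is expressed in the cuboid's local frame, to the textbook clamped closest-point test sketched below. This is the generic box-sphere test, not the OCSI code itself:

```python
# Sphere vs axis-aligned box: clamp the sphere centre to the box to get
# the closest point, then compare squared distance with squared radius.

def sphere_box_collide(center, radius, box_min, box_max):
    """True if the sphere overlaps the box [box_min, box_max]."""
    d2 = 0.0
    for c, lo, hi in zip(center, box_min, box_max):
        nearest = min(max(c, lo), hi)   # clamp centre coordinate to the box
        d2 += (c - nearest) ** 2
    return d2 <= radius * radius

hit = sphere_box_collide((2.0, 0.0, 0.0), 1.1, (-1, -1, -1), (1, 1, 1))
miss = sphere_box_collide((3.0, 0.0, 0.0), 1.5, (-1, -1, -1), (1, 1, 1))
```

For an oriented cuboid (OBB), the sphere centre is first rotated and translated into the cuboid's frame, after which this axis-aligned test applies unchanged.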
  • 8
    Publication date: 2021-02-25
    Description: The Dynamic Facility Location problem is a generalization of the classic Facility Location problem, in which the distance metric between clients and facilities changes over time. Such metrics that develop as a function of time are usually called "evolving metrics", so Dynamic Facility Location can alternatively be interpreted as a Facility Location problem in evolving metrics. The objective in this time-dependent variant is to balance the trade-off between optimizing the classic objective function and the stability of the solution, which is modeled by charging a switching cost when a client’s assignment changes from one facility to another. In this paper, we study the online variant of Dynamic Facility Location. We present a randomized O(log m + log n)-competitive algorithm, where m is the number of facilities and n is the number of clients. In the first step, our algorithm produces, in each timestep, a fractional solution to the objective of Dynamic Facility Location involving a regularization function. This step is an adaptation of the generic algorithm proposed by Buchbinder et al. in their work "Competitive Analysis via Regularization." Then, our algorithm rounds the fractional solution of this timestep to an integral one with the use of exponential clocks. We complement our result by proving a lower bound of Ω(m) for deterministic algorithms and a lower bound of Ω(log m) for randomized algorithms. To the best of our knowledge, these are the first results for the online variant of the Dynamic Facility Location problem.
    Digital ISSN: 1999-4893
    Subject: Computer science
  • 9
    Publication date: 2021-02-25
    Description: We study the algorithmic complexity of solving subtraction games in a fixed dimension with a finite difference set. We prove that there exists a game in this class such that solving the game is EXP-complete and requires time 2^Ω(n), where n is the input size. This bound is optimal up to a polynomial speed-up. The results are based on a construction introduced by Larsson and Wästlund, which relates subtraction games and cellular automata.
    Digital ISSN: 1999-4893
    Subject: Computer science
  • 10
    Publication date: 2021-03-30
    Description: This paper proposes a model to evaluate business projects applying to an incubator, allowing them to be ranked in order of selection priority. The model combines the Momentum method, to build prospective scenarios, with the AHP-TOPSIS-2N Multiple Criteria Decision Making (MCDM) method, to rank the alternatives. Six business projects were evaluated for incubation. The Momentum method made it possible to create an initial core of criteria for the evaluation of incubation projects. The AHP-TOPSIS-2N method supported the decision on which company to incubate by ranking the alternatives in order of relevance. Our evaluation model improves on the existing models used by incubators and can be used and/or adapted by any incubator to evaluate the business projects to be incubated. The set of criteria for the evaluation of incubation projects is original, and the use of prospective scenarios with an MCDM method to evaluate companies to be incubated does not exist in the literature.
    Digital ISSN: 1999-4893
    Subject: Computer science
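The TOPSIS half of AHP-TOPSIS-2N ranks alternatives by their relative closeness to an ideal solution. A plain TOPSIS sketch on made-up project scores; the AHP-derived weights and the two normalisation variants ("2N") of the paper are not reproduced:

```python
# Plain TOPSIS: vector-normalise, weight, then score each alternative by
# closeness to the ideal and distance from the anti-ideal solution.
from math import sqrt

def topsis(matrix, weights, benefit):
    """matrix[i][j]: score of alternative i on criterion j.
    benefit[j]: True if higher is better for criterion j."""
    m, n = len(matrix), len(matrix[0])
    norms = [sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col)
             for j, col in enumerate(zip(*v))]
    def dist(row, ref):
        return sqrt(sum((a - b) ** 2 for a, b in zip(row, ref)))
    return [dist(v[i], worst) / (dist(v[i], ideal) + dist(v[i], worst))
            for i in range(m)]

# Three hypothetical incubation projects scored on two benefit criteria.
scores = topsis([[7, 9], [8, 7], [6, 6]], [0.5, 0.5], [True, True])
best = max(range(3), key=scores.__getitem__)
```

The closeness coefficient lies in [0, 1]; an alternative equal to the anti-ideal on every criterion scores exactly 0.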
  • 11
    Publication date: 2021-03-30
    Description: This paper presents an effective stochastic algorithm that embeds a large neighborhood decomposition technique into a variable neighborhood search for solving the permutation flow-shop scheduling problem. The algorithm first constructs a permutation as a seed using a recursive application of the extended two-machine problem. In this method, the jobs are recursively decomposed into two separate groups, and, for each group, an optimal permutation is calculated based on the extended two-machine problem. The overall permutation, obtained by integrating the sub-solutions, is then improved through the application of a variable neighborhood search technique. Like the first technique, this one is also based on the decomposition paradigm and can find an optimal arrangement for a subset of jobs. In the employed large neighborhood search, the concept of the critical path is used to help the decomposition process avoid unfruitful computation and rearrange only promising contiguous parts of the permutation. In this fashion, the algorithm leaves alone those parts of the permutation which already have high-quality arrangements and concentrates on modifying other parts. The results of computational experiments on the benchmark instances indicate that the procedure works effectively, demonstrating that solutions within a very short distance of the best-known solutions are calculated within seconds on a typical personal computer. In terms of the running time required to reach a high-quality solution, the procedure outperforms some well-known metaheuristic algorithms in the literature.
    Digital ISSN: 1999-4893
    Subject: Computer science
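The seed construction above builds on the classic two-machine flow-shop subproblem, which Johnson's rule solves optimally. A sketch of that rule with made-up processing times; the paper's extended two-machine formulation and its recursive decomposition are not reproduced:

```python
# Johnson's rule for the 2-machine flow shop: schedule jobs with
# p1 < p2 first in ascending p1, then the rest in descending p2.

def johnson(jobs):
    """jobs: list of (p1, p2) processing times; returns an optimal order."""
    front = sorted((i for i, (p1, p2) in enumerate(jobs) if p1 < p2),
                   key=lambda i: jobs[i][0])
    back = sorted((i for i, (p1, p2) in enumerate(jobs) if p1 >= p2),
                  key=lambda i: jobs[i][1], reverse=True)
    return front + back

def makespan(jobs, order):
    """Completion time of the last job on machine 2 for a given order."""
    t1 = t2 = 0
    for i in order:
        t1 += jobs[i][0]                 # machine 1 finishes job i
        t2 = max(t1, t2) + jobs[i][1]    # machine 2 starts when both free
    return t2

jobs = [(3, 6), (5, 2), (1, 2), (6, 6), (7, 5)]
order = johnson(jobs)
```

For these five jobs the rule yields the order [2, 0, 3, 4, 1] with makespan 24, which no permutation can beat on two machines.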
  • 12
    Publication date: 2021-02-01
    Description: Background and objectives Internet-based technologies play an increasingly important role in the management and outcome of patients with chronic kidney disease (CKD). The healthcare system is currently flooded with digital innovations and internet-based technologies as a consequence of the coronavirus disease 2019 (COVID-19) pandemic. However, information about the attitude of German CKD patients with access to online tools towards the use of remote, internet-based interactions such as video conferencing, email, electronic medical records and apps in general, and for health issues in particular, is missing. Design, setting, participants, and measurements To address the use, habits and willingness of CKD patients in handling internet-based technologies, we conducted a nationwide cross-sectional questionnaire survey in adults with CKD. Results We used 380 questionnaires from adult CKD patients (47.6% on dialysis, 43.7% transplanted and 8.7% CKD before renal replacement therapy) for analysis. Of these, 18.9% denied using the internet at all (nonusers). Nonusers were significantly older (74.4 years, SD 11.4) than users (54.5 years, SD 14.5, p
    Digital ISSN: 1472-6947
    Subject: Computer science, Medicine
    Published by BioMed Central
  • 13
    Publication date: 2021-03-29
    Description: Background Inguinal hernia repair, gallbladder removal, and knee and hip replacements are among the most commonly performed surgical procedures, but all are subject to practice variation and variable patient-reported outcomes. Shared decision-making (SDM) has the potential to reduce surgery rates and increase patient satisfaction. This study aims to evaluate the effectiveness of an SDM strategy with online decision aids for surgical and orthopaedic practice in terms of impact on surgery rates, patient-reported outcomes, and cost-effectiveness. Methods The E-valuAID study is designed as a multicentre, non-randomized stepped-wedge study in patients with an inguinal hernia, gallstones, or knee or hip osteoarthritis in six surgical and six orthopaedic departments. The primary outcome is the surgery rate before and after implementation of the SDM strategy. Secondary outcomes are patient-reported outcomes and cost-effectiveness. Patients in the usual-care cluster, prior to implementation of the SDM strategy, will be treated in accordance with the best available clinical evidence, the physician’s knowledge and preference, and the patient’s preference. The intervention consists of the implementation of the SDM strategy and the provision of disease-specific online decision aids. Decision aids will be provided to the patients before the consultation in which the treatment decision is made. During this consultation, treatment preferences are discussed and the final treatment decision is confirmed. Surgery rates will be extracted from hospital files. Secondary outcomes will be evaluated using questionnaires at baseline, 3 and 6 months. Discussion The E-valuAID study will examine the cost-effectiveness of an SDM strategy with online decision aids in patients with an inguinal hernia, gallstones, or knee or hip osteoarthritis. This study will show whether decision aids reduce operation rates while improving patient-reported outcomes. We hypothesize that the SDM strategy will lead to lower surgery rates and better patient-reported outcomes, and will be cost-effective. Trial registration: The Netherlands Trial Register, Trial NL8318, registered 22 January 2020. URL: https://www.trialregister.nl/trial/8318.
    Digital ISSN: 1472-6947
    Subject: Computer science, Medicine
    Published by BioMed Central
  • 14
    Publication date: 2021-02-01
    Description: Background This study developed a diagnostic tool to automatically detect normal, unclear and tumor images from colonoscopy videos using artificial intelligence. Methods For the creation of training and validation sets, 47,555 images in JPG format were extracted from colonoscopy videos of 24 patients at Korea University Anam Hospital. A gastroenterologist with 15 years of clinical experience divided the 47,555 images into three classes: Normal (25,895), Unclear (2038) and Tumor (19,622). A single shot detector, a deep learning framework designed for object detection, was trained using the 47,555 images and validated with two validation sets totalling 300 images, each set including 150 images (50 normal, 50 unclear and 50 tumor cases). Half of the 47,555 images were used for building the model and the other half for testing the model. The learning rate of the model was 0.0001 during 250 epochs (training cycles). Results The average accuracy, precision, recall, and F1 score over the categories were 0.9067, 0.9744, 0.9067 and 0.9393, respectively. These performance measures did not change with respect to the intersection-over-union threshold (0.45, 0.50, and 0.55), which suggests the stability of the model. Conclusion Automated detection of normal, unclear and tumor images from colonoscopy videos is possible by using a deep learning framework. This is expected to provide an invaluable decision-support system for clinical experts.
    Digital ISSN: 1472-6947
    Subject: Computer science, Medicine
    Published by BioMed Central
  • 15
    Publication date: 2021-03-31
    Description: Background Diabetes is a medical and economic burden in the United States. In this study, a machine learning predictive model was developed to predict unplanned medical visits among patients with diabetes, and the findings were used to design a clinical intervention in the sponsoring healthcare organization. The study presents a case study of how predictive analytics can inform clinical actions, and describes practical factors that must be incorporated in order to translate research into clinical practice. Methods Data were drawn from electronic medical records (EMRs) of a large healthcare organization in the Northern Plains region of the US, from adult (≥ 18 years old) patients with type 1 or type 2 diabetes who received care at least once during the 3-year study period. A variety of machine-learning classification models were run using standard EMR variables as predictors: age, body mass index (BMI), systolic blood pressure (BP), diastolic BP, low-density lipoprotein, high-density lipoprotein (HDL), glycohemoglobin (A1C), smoking status, number of diagnoses and number of prescriptions. The best-performing model after cross-validation testing was analyzed to identify the strongest predictors. Results The best-performing model was a linear-basis support vector machine, which achieved a balanced accuracy (the average of sensitivity and specificity) of 65.7%. This model outperformed a conventional logistic regression by 0.4 percentage points. A sensitivity analysis identified BP and HDL as the strongest predictors: disrupting these variables with random noise decreased the model’s overall balanced accuracy by 1.3 and 1.4 percentage points, respectively. These findings, along with stakeholder engagement, behavioral economics strategies, and implementation science principles, helped to inform the design of a clinical intervention targeting behavioral changes. Conclusion Our machine-learning predictive model predicted unplanned medical visits among patients with diabetes more accurately than conventional models. Post-hoc analysis of the model was used for hypothesis generation, namely that HDL and BP are the strongest contributors to unplanned medical visits among patients with diabetes. These findings were translated into a clinical intervention now being piloted at the sponsoring healthcare organization. In this way, the predictive model can be used to move from prediction to implementation and to improved diabetes care management in clinical settings.
    Digital ISSN: 1472-6947
    Subject: Computer science, Medicine
    Published by BioMed Central
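Balanced accuracy, the headline metric in this abstract, is the average of sensitivity and specificity, which makes it robust to the class imbalance typical of unplanned-visit data. A small helper with made-up labels:

```python
# Balanced accuracy = (sensitivity + specificity) / 2, computed from the
# confusion-matrix counts of a binary prediction.

def balanced_accuracy(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)   # recall on the positive class
    specificity = tn / (tn + fp)   # recall on the negative class
    return (sensitivity + specificity) / 2

# Imbalanced toy data: 4 positives, 6 negatives.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 1]
ba = balanced_accuracy(y_true, y_pred)
```

Here plain accuracy would be 0.7 as well, but on heavily skewed data the two metrics diverge, which is why the study reports the balanced form.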
  • 16
    Publication date: 2021-03-25
    Description: We study cost-sharing games in real-time scheduling systems where the server’s activation cost in every time slot is a function of its load. We focus on monomial cost functions and consider both the case where the degree is less than one (inducing a positive congestion effect for the jobs) and where it is greater than one (inducing a negative congestion effect for the jobs). For the former case, we provide tight bounds on the price of anarchy, and show that the price of anarchy grows to infinity as a polynomial of the number of jobs in the game. For the latter, we observe that existing results provide constant and tight (asymptotically in the degree of the monomial) bounds on the price of anarchy. We then turn to analyze a payment mechanism with arbitrary cost-sharing, that is, where the strategy of a player also includes its payment. We show that our mechanism reduces the price of anarchy of games with n jobs and unit server costs from Θ(n) to 2. We also show that, for a restricted class of instances, a similar improvement is achieved for monomial server costs. This is not the case, however, for unrestricted instances of monomial costs, for which we prove that the price of anarchy remains super-constant under our mechanism. For systems with load-independent activation costs, we show that our mechanism can produce an optimal solution which is stable against coordinated deviations.
    Digital ISSN: 1999-4893
    Subject: Computer science
  • 17
    Publication date: 2021-03-22
    Description: This study presents a novel performance metric for feature selection algorithms that is unbiased and can be used for comparative analysis across feature selection problems. The baseline fitness improvement (BFI) measure quantifies the potential value gained by applying feature selection. The BFI measure can be used to compare the performance of feature selection algorithms across datasets by measuring the change in classifier performance as a result of feature selection, with respect to the baseline in which all features are included. Empirical results are presented to show that there is performance complementarity for a suite of feature selection algorithms on a variety of real-world datasets. The BFI measure is a normalised performance metric that can be used to correlate problem characteristics with feature selection algorithm performance across multiple datasets. This ability paves the way towards describing the performance space of the per-instance algorithm selection problem for feature selection algorithms.
    Digital ISSN: 1999-4893
    Subject: Computer science
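As described, the BFI measure captures the change in classifier performance after feature selection relative to the all-features baseline. A minimal reading of that definition on made-up accuracies; the paper's exact normalisation may differ from this plain difference:

```python
# Baseline fitness improvement: change in classifier performance due to
# feature selection, relative to using all features (illustrative form).

def bfi(perf_selected, perf_baseline):
    """Positive when feature selection beat the all-features baseline."""
    return perf_selected - perf_baseline

# Made-up classifier accuracies on three datasets, with all features
# (baseline) and after some feature selection algorithm (selected).
baseline = [0.80, 0.91, 0.66]
selected = [0.85, 0.90, 0.74]
per_dataset = [bfi(s, b) for s, b in zip(selected, baseline)]
mean_bfi = sum(per_dataset) / len(per_dataset)
```

Because the metric is anchored to each dataset's own baseline, per-dataset values are comparable across problems, which is what enables the cross-dataset analysis the abstract describes.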
  • 18
    Publication date: 2021-02-02
    Description: Background Given the need for informatics competencies in the field of nursing, the present study was conducted to design a psychometric instrument to assess the informatics competencies of nurses employed in educational care centers. Methods The questionnaire was constructed by reviewing existing scientific resources and assessment tools. Two hundred nurses were selected using simple random sampling. Structural equation modeling was applied using the measurement model technique, and the average variance extracted was calculated. Linear structural relations (LISREL) software was used to test the assumptions and correlations of the model. Results Findings showed a relatively good fit of the first-order measurement model. The informatics knowledge subscale, with a determining rate of 0.90, had the greatest explanatory effect among the subscales, followed by informatics skill (0.67) and basic computer skill (0.60). The fit indicators of the second-order measurement model showed that the three factors can explain the multidimensional construct of informatics competency well. Conclusions The designed tool can be used to develop educational strategies for nursing students in the field of informatics and prepare them for the rich environment of information technology, which can be helpful in training nursing instructors.
    Digital ISSN: 1472-6947
    Subject: Computer science, Medicine
    Published by BioMed Central
  • 19
    Publication date: 2021-02-02
    Description: Background Questionnaires are commonly used tools in telemedicine services that can help to evaluate different aspects. Selecting the ideal questionnaire for this purpose may be challenging for researchers. This study aims to review which questionnaires are used to evaluate telemedicine services in the literature, which are most common, and what aspects of telemedicine evaluation they capture. Methods The PubMed database was searched in August 2020 to retrieve articles. Data extracted from the final list of articles included author/year of publication, journal of publication, type of evaluation, and evaluation questionnaire. Data were analyzed using descriptive statistics. Results Fifty-three articles were included in this study. Questionnaires were used for evaluating the satisfaction (49%), usability (34%), acceptance (11.5%), and implementation (2%) of telemedicine services. Among telemedicine-specific questionnaires, the Telehealth Usability Questionnaire (TUQ) (19%), Telemedicine Satisfaction Questionnaire (TSQ) (13%), and Service User Technology Acceptability Questionnaire (SUTAQ) (5.5%) were the most frequently used in the collected articles. Other questionnaires, generally used for evaluating users’ satisfaction, usability, and acceptance of technology, were the Client Satisfaction Questionnaire (CSQ) (5.5%), Questionnaire for User Interaction Satisfaction (QUIS) (5.5%), System Usability Scale (SUS) (5.5%), Patient Satisfaction Questionnaire (PSQ) (5.5%), and Technology Acceptance Model (TAM) (3.5%). Conclusion Employing specifically designed questionnaires, or designing a new questionnaire with fewer questions and more comprehensive coverage of the issues studied, provides a better evaluation. Attention to user needs, end-user acceptance, and implementation processes, along with evaluation of users’ satisfaction and usability, may optimize telemedicine efforts in the future.
    Digital ISSN: 1472-6947
    Subject: Computer science, Medicine
    Published by BioMed Central
  • 20
    Publikationsdatum: 2021-03-22
    Beschreibung: With the widespread success of deep learning in the two-dimensional domain, how to extend deep learning methods from two to three dimensions has become a current research hotspot. Among three-dimensional representations, the polygon mesh, a complex data structure, provides an effective approximate shape representation for three-dimensional objects. Although traditional methods can extract the characteristics of a three-dimensional object through graphical techniques, they cannot be applied to more complex objects. Moreover, due to the complexity and irregularity of mesh data, it is difficult to apply convolutional neural networks directly to 3D mesh data processing. Considering this problem, we propose a deep learning method based on a capsule network to effectively classify mesh data. We first design a polynomial convolution template. Through a sliding operation similar to a two-dimensional image convolution window, we sample directly on the mesh surface and use the sampled surface patch as the minimum unit of calculation. Because a high-order polynomial can effectively represent a surface, we fit the approximate shape of the surface with a polynomial, use the polynomial parameters as the shape feature of the surface, and add the center point coordinates and normal vector of the surface as its pose feature; together these form the feature vector of the surface. At the same time, to avoid the large number of pooling layers introduced in traditional convolutional neural networks, the capsule network is adopted. To handle input mesh models of nonuniform size, the pose-parameter learning of the capsule network is improved by sharing the weights of the pose matrix. The number of model parameters is reduced, and the training efficiency on 3D mesh models is further improved.
The method is compared with the traditional method and two recent methods on the SHREC15 data set. Compared with MeshNet and MeshCNN, the average recognition accuracy on the original test set is improved by 3.4% and 2.1%, respectively, and after feature fusion the average accuracy reaches 93.8%. At the same time, experimental verification shows that the method achieves considerable recognition results even with a short training time. The three-dimensional mesh classification method proposed in this paper combines the advantages of graphics and deep learning methods, and effectively improves the classification of 3D mesh models.
    Digitale ISSN: 1999-4893
    Thema: Informatik
  • 21
    Publikationsdatum: 2021-03-29
    Beschreibung: Best match graphs (BMGs) are vertex-colored digraphs that naturally arise in mathematical phylogenetics to formalize the notion of evolutionary closest genes w.r.t. an a priori unknown phylogenetic tree. BMGs are explained by unique least resolved trees. We prove that the property of a rooted, leaf-colored tree to be least resolved for some BMG is preserved by the contraction of inner edges. For the special case of two-colored BMGs, this leads to a characterization of the least resolved trees (LRTs) of binary-explainable trees and a simple, polynomial-time algorithm for the minimum cardinality completion of the arc set of a BMG to reach a BMG that can be explained by a binary tree.
    Digitale ISSN: 1999-4893
    Thema: Informatik
  • 22
    Publikationsdatum: 2021-03-19
    Beschreibung: Various criteria and algorithms can be used for clustering, leading to very distinct outcomes and potential biases towards datasets with certain structures. More generally, the selection of the most effective algorithm for a given dataset, based on its characteristics, is a problem that has been widely studied in the field of meta-learning. Recent advances in the form of a new methodology known as Instance Space Analysis provide an opportunity to extend such meta-analyses to gain greater visual insight into the relationship between datasets' characteristics and the performance of different algorithms. The aim of this study is to perform an Instance Space Analysis for the first time for clustering problems and algorithms. As a result, we are able to analyze the impact of the choice of the test instances employed, as well as the strengths and weaknesses of some popular clustering algorithms, for datasets with different structures.
    Digitale ISSN: 1999-4893
    Thema: Informatik
  • 23
    Publikationsdatum: 2021-03-19
    Beschreibung: A novel coronavirus (COVID-19), which has become a great concern for the world, was first identified in Wuhan city in China. Its rapid spread throughout the world was accompanied by an alarming number of infected patients and a gradually increasing number of deaths. If the number of infected cases can be predicted in advance, it would contribute greatly to controlling this pandemic in any area. Therefore, this study introduces an integrated model for predicting the number of confirmed cases from the perspective of Bangladesh. Moreover, the number of quarantined patients and the change in the basic reproduction rate (the R0-value) can also be evaluated using this model. This integrated model combines the SEIR (Susceptible, Exposed, Infected, Removed) epidemiological model and neural networks. The model was trained using available data from 250 days. The accuracy of the prediction of confirmed cases is approximately between 90% and 99%. The performance of this integrated model was evaluated by showing the difference in accuracy between the integrated model and the general SEIR model. The result shows that the integrated model is more accurate than the general SEIR model when predicting the number of confirmed cases in Bangladesh.
    Digitale ISSN: 1999-4893
    Thema: Informatik
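The SEIR dynamics underlying the integrated model above can be sketched directly; the sketch below uses simple forward-Euler integration, and the parameter values and population size are illustrative assumptions, not the values fitted in the study.

```python
# Forward-Euler integration of the SEIR compartments.
# beta: transmission rate, sigma: 1/incubation period, gamma: 1/infectious period.
# All parameter values below are illustrative, not those fitted in the study.
def seir_step(s, e, i, r, beta, sigma, gamma, n, dt):
    new_exposed = beta * s * i / n          # susceptibles becoming exposed
    ds = -new_exposed
    de = new_exposed - sigma * e            # exposed progressing to infectious
    di = sigma * e - gamma * i              # infectious being removed
    dr = gamma * i
    return s + ds * dt, e + de * dt, i + di * dt, r + dr * dt

def simulate(days, n=1e7, i0=10.0, beta=0.35, sigma=1 / 5.2, gamma=0.1, dt=0.1):
    s, e, i, r = n - i0, 0.0, i0, 0.0
    for _ in range(int(days / dt)):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, n, dt)
    return s, e, i, r
```

Coupling such a simulator with a neural network, as the abstract describes, would amount to letting the network adjust quantities such as the effective transmission rate over time.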
  • 24
    Publikationsdatum: 2021-03-24
    Beschreibung: We used fuzzy entropy as a feature to optimize the intrinsically disordered protein prediction scheme. The optimization scheme requires computing only five features for each residue of a protein sequence, namely the fuzzy entropy, the Shannon entropy, the topological entropy, and the weighted average values of two propensities. Notably, this is the first time that fuzzy entropy has been applied to the field of protein sequencing. In addition, we used three machine learning algorithms to examine the prediction results before and after optimization. The results show that the use of fuzzy entropy leads to an improvement in the performance of different algorithms, demonstrating the generality of its application. Finally, we compare the simulation results of our scheme with those of some existing schemes to demonstrate its effectiveness.
    Digitale ISSN: 1999-4893
    Thema: Informatik
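Of the per-residue features listed above, the Shannon entropy of a residue window is the most standard; a minimal sketch is below (the window size and the end-clipping behavior are illustrative assumptions, not the study's exact settings).

```python
import math
from collections import Counter

def shannon_entropy(seq):
    """Shannon entropy (bits) of the residue composition of a window."""
    counts = Counter(seq)
    total = len(seq)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def window_entropies(sequence, w=7):
    """One entropy feature per residue, from a centered window clipped at the ends."""
    half = w // 2
    return [
        shannon_entropy(sequence[max(0, i - half): i + half + 1])
        for i in range(len(sequence))
    ]
```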
  • 25
    Publikationsdatum: 2021-03-10
    Beschreibung: Background Clinical Decision Support Systems (CDSSs) for prescribing are among the innovations designed to improve physician practice performance and patient outcomes by reducing prescription errors. This study was therefore conducted to examine the effects of various CDSSs on physician practice performance and patient outcomes. Methods This systematic review was carried out by searching PubMed, Embase, Web of Science, Scopus, and the Cochrane Library from 2005 to 2019. The studies were independently reviewed by two researchers. Any discrepancies in the eligibility of the studies between the two researchers were resolved by consulting a third researcher. In the next step, we performed a meta-analysis based on medication subgroups, CDSS-type subgroups, and outcome categories, and we also provide a narrative account of the findings. We used a random-effects model to estimate the effects of CDSS on patient outcomes and physician practice performance with a 95% confidence interval. Q statistics and I2 were then used to calculate heterogeneity. Results On the basis of the inclusion criteria, 45 studies qualified for analysis in this study. CDSS for prescription drugs/CPOE has been used for various diseases such as cardiovascular diseases, hypertension, diabetes, gastrointestinal and respiratory diseases, AIDS, appendicitis, kidney disease, malaria, high blood potassium, and mental diseases. In addition, cases such as concurrent prescribing of multiple medications for patients and their effects on the above-mentioned results have been analyzed. The study shows that in some cases the use of CDSS has beneficial effects on patient outcomes and physician practice performance (std diff in means = 0.084, 95% CI 0.067 to 0.102). The effect was also statistically significant for outcome categories demonstrating better results for physician practice performance, patient outcomes, or both.
However, there was no significant difference between some other cases and traditional approaches. We assume that this may be due to the disease type, the quantity, and the type of CDSS criteria that affected the comparison. Overall, the results of this study show positive effects on performance for all forms of CDSSs. Conclusions Our results indicate that the positive effects of the CDSS can be due to factors such as user-friendliness, compliance with clinical guidelines, patient and physician cooperation, integration of electronic health records, CDSS, and pharmaceutical systems, consideration of the views of physicians in assessing the importance of CDSS alerts, and the real-time alerts in the prescription.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
  • 26
    Publikationsdatum: 2021-03-18
    Beschreibung: Background Patients with complex health care needs may suffer adverse outcomes from fragmented and delayed care, reducing well-being and increasing health care costs. Health reform efforts, especially those in primary care, attempt to mitigate risk of adverse outcomes by better targeting resources to those most in need. However, predicting who is susceptible to adverse outcomes, such as unplanned hospitalizations, ED visits, or other potentially avoidable expenditures, can be difficult, and providing intensive levels of resources to all patients is neither wanted nor efficient. Our objective was to understand if primary care teams can predict patient risk better than standard risk scores. Methods Six primary care practices risk stratified their entire patient population over a 2-year period, and worked to mitigate risk for those at high risk through care management and coordination. Individual patient risk scores created by the practices were collected and compared to a common risk score (Hierarchical Condition Categories) in their ability to predict future expenditures, ED visits, and hospitalizations. Accuracy of predictions, sensitivity, positive predictive values (PPV), and c-statistics were calculated for each risk scoring type. Analyses were stratified by whether the practice used intuition alone, an algorithm alone, or adjudicated an algorithmic risk score. Results In all, 40,342 patients were risk stratified. Practice scores had 38.6% agreement with HCC scores on identification of high-risk patients. For the 3,381 patients with reliable outcomes data, accuracy was high (0.71–0.88) but sensitivity and PPV were low (0.16–0.40). Practice-created scores had 0.02–0.14 lower sensitivity, specificity and PPV compared to HCC in prediction of outcomes. Practices using adjudication had, on average, 0.16 higher sensitivity.
Conclusions Practices using simple risk stratification techniques had slightly worse accuracy in predicting common outcomes than HCC, but adjudication improved prediction.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
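The accuracy, sensitivity, and PPV figures reported above are standard confusion-matrix quantities; for reference, a minimal sketch of how such figures are computed from binary high-risk labels (1 = predicted/observed high risk, 0 = not):

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity, and positive predictive value from binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn) if tp + fn else 0.0   # share of events caught
    ppv = tp / (tp + fp) if tp + fp else 0.0           # share of flags that were right
    return accuracy, sensitivity, ppv
```

Note how accuracy can stay high while sensitivity and PPV are low when events are rare, which matches the pattern in the results above.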
  • 27
    Publikationsdatum: 2021-03-18
    Beschreibung: Background The Ministry of Health in Saudi Arabia is expanding the country’s telemedicine services by using advanced technology in health services. In doing so, an e-health application (app), Seha, was introduced in 2018 that allows individuals to have face-to-face visual medical consultations with their doctors on their smartphones. Objective This study evaluated the effectiveness of the app in improving healthcare delivery by ensuring patient satisfaction with the care given, increasing access to care, and improving efficiency in the healthcare system. Methods A cross-sectional study design was used to assess the perceptions of users of the Seha app and non-users who continued with traditional health services. The data were collected using an online survey via Google Forms between June 2020 and September 2020. Independent t tests and chi-square (χ2) tests were conducted to answer the research questions. Results There was a significant difference between users and non-users in terms of ease of access to health services (t =  − 9.38, p 
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
  • 28
    Publikationsdatum: 2021-03-14
    Beschreibung: Thick ellipsoids were recently introduced by the authors to represent uncertainty in state variables of dynamic systems, not only in terms of guaranteed outer bounds but also in terms of an inner enclosure that belongs to the true solution set with certainty. While previous work has focused on the definition and computationally efficient implementation of arithmetic operations and extensions of nonlinear standard functions, where all arguments are replaced by thick ellipsoids, this paper introduces novel operators for specifically evaluating quasi-linear system models with bounded parameters as well as for the union and intersection of thick ellipsoids. These techniques are combined in such a way that a discrete-time state observer can be designed in a predictor-corrector framework. Estimation results are presented for a combined observer-based estimation of state variables as well as disturbance forces and torques in the sense of an unknown-input estimator for a hovercraft.
    Digitale ISSN: 1999-4893
    Thema: Informatik
  • 29
    Publikationsdatum: 2021-03-25
    Beschreibung: Advances in computers and communications have significantly changed almost every aspect of our daily activity. In this maze of change, governments around the world cannot remain indifferent. Public administration is evolving and taking on a new form through e-government. A large number of organizations have set up websites, establishing an online interface with the citizens and businesses with which they interact. However, most organizations, especially the decentralized agencies of the ministries and local authorities, do not offer their information electronically, despite the fact that they provide many information services that are not integrated with other e-government services. Moreover, these services are mainly focused on serving citizens and businesses and less on providing services to employees. In this paper, we describe the process of developing an ontology to support the administrative procedures of decentralized government organizations. Finally, we describe the development of an e-government portal that provides employees with services that are processed online, using the above ontology for modeling and data management.
    Digitale ISSN: 1999-4893
    Thema: Informatik
  • 30
    Publikationsdatum: 2021-03-25
    Beschreibung: Background Poor balance has been cited as one of the key causal factors of falls. Timely detection of balance impairment can help identify the elderly prone to falls and also trigger early interventions to prevent them. The goal of this study was to develop a surrogate approach for assessing elderly people's functional balance based on the Short Form Berg Balance Scale (SFBBS) score. Methods Data were collected from a waist-mounted tri-axial accelerometer while participants performed a timed up and go test. Clinically relevant variables were extracted from the segmented accelerometer signals for fitting SFBBS predictive models. Regularized regression together with random-shuffle-split cross-validation was used to facilitate the development of the predictive models for automatic balance estimation. Results Eighty-five community-dwelling older adults (72.12 ± 6.99 years) participated in our study. Our results demonstrated that combined clinical and sensor-based variables, together with regularized regression and cross-validation, achieved moderate-high predictive accuracy of SFBBS scores (mean MAE = 2.01 and mean RMSE = 2.55). Step length, gender, gait speed, and linear acceleration variables describing motor coordination were identified as significant contributors to balance estimation. The predictive model also showed moderate-high discrimination in classifying risk levels for the three balance assessment motions, with AUC values of 0.72, 0.79 and 0.76, respectively. Conclusions The study presented a feasible option for quantitatively accurate, objectively measured, and unobtrusively collected functional balance assessment at the point of care or in the home environment. It also provides clinicians and the elderly with stable and sensitive biomarkers for long-term monitoring of functional balance.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
  • 31
    Publikationsdatum: 2021-02-17
    Beschreibung: Background The electronic health record (EHR) holds the prospect of providing more complete and timely access to clinical information for biomedical research, quality assessments, and quality improvement compared to other data sources, such as administrative claims. In this study, we sought to assess the completeness and timeliness of structured diagnoses in the EHR compared to computed diagnoses for hypertension (HTN), hyperlipidemia (HLD), and diabetes mellitus (DM). Methods We determined the amount of time for a structured diagnosis to be recorded in the EHR from when an equivalent diagnosis could be computed from other structured data elements, such as vital signs and laboratory results. We used EHR data for encounters from January 1, 2012 through February 10, 2019 from an academic health system. Diagnoses for HTN, HLD, and DM were computed for patients with at least two observations above threshold separated by at least 30 days, where the thresholds were outpatient blood pressure of ≥ 140/90 mmHg, any low-density lipoprotein ≥ 130 mg/dl, or any hemoglobin A1c ≥ 6.5%, respectively. The primary measure was the length of time between the computed diagnosis and the time at which a structured diagnosis could be identified within the EHR history or problem list. Results We found that 39.8% of those with HTN, 21.6% with HLD, and 5.2% with DM did not receive a corresponding structured diagnosis recorded in the EHR. For those who received a structured diagnosis, a mean of 389, 198, and 166 days elapsed before the patient had the corresponding diagnosis of HTN, HLD, or DM, respectively, recorded in the EHR. Conclusions We found a marked temporal delay between when a diagnosis can be computed or inferred and when an equivalent structured diagnosis is recorded within the EHR. 
These findings demonstrate the continued need for additional study of the EHR to avoid bias when using observational data and reinforce the need for computational approaches to identify clinical phenotypes.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
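The computed-diagnosis rule described in the Methods above (two above-threshold observations at least 30 days apart; the 6.5% A1c threshold is the one stated in the abstract) can be sketched directly. The function name and the input format are illustrative assumptions:

```python
from datetime import date

def computed_diagnosis_date(observations, threshold, min_gap_days=30):
    """Date on which a diagnosis becomes computable under the two-reading rule:
    two above-threshold observations at least `min_gap_days` apart.
    `observations` is a list of (date, value) pairs; returns None otherwise."""
    above = sorted(d for d, value in observations if value >= threshold)
    if not above:
        return None
    # Comparing against the earliest above-threshold date maximizes the gap,
    # so the first later date that clears it is the earliest qualifying one.
    for d in above[1:]:
        if (d - above[0]).days >= min_gap_days:
            return d
    return None

# Example hemoglobin A1c readings (illustrative data, 6.5% threshold).
a1c = [(date(2018, 1, 1), 6.7), (date(2018, 1, 10), 6.4), (date(2018, 3, 1), 7.0)]
```

The study's delay measurement then reduces to the difference between this computed date and the date a structured diagnosis appears in the EHR.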
  • 32
    Publikationsdatum: 2021-02-02
    Beschreibung: The Disjoint Connecting Paths problem and its capacitated generalization, called the Unsplittable Flow problem, play an important role in practical applications such as communication network design and routing. These tasks are NP-hard in general, but various polynomial-time approximations are known. Nevertheless, the approximations tend to be either too loose (allowing large deviation from the optimum), or too complicated, often rendering them impractical in large, complex networks. Therefore, our goal is to present a solution that provides a relatively simple, efficient algorithm for the unsplittable flow problem in large directed graphs, where the task is NP-hard, and is known to remain NP-hard even to approximate up to a large factor. The efficiency of our algorithm is achieved by sacrificing a small part of the solution space. This also represents a novel paradigm for approximation. Rather than giving up the search for an exact solution, we restrict the solution space to a subset that is the most important for applications, excluding only a small part that is marginal in some well-defined sense. Specifically, the sacrificed part only contains scenarios where some edges are very close to saturation. Since nearly saturated links are undesirable in practical applications, excluding near saturation is quite reasonable from the practical point of view. We refer to the solutions that contain no nearly saturated edges as safe solutions, and call the approach safe approximation. We prove that this safe approximation can be carried out efficiently. That is, once we restrict ourselves to safe solutions, we can find the exact optimum by a randomized polynomial-time algorithm.
    Digitale ISSN: 1999-4893
    Thema: Informatik
  • 33
    Publikationsdatum: 2021-02-02
    Beschreibung: Background Data from clinical registries may be linked to gain additional insights into disease processes, risk factors and outcomes. Identifying information varies from full names, addresses and unique identification codes to statistical linkage keys to no direct identifying information at all. A number of databases in Australia contain the statistical linkage key 581 (SLK-581). Our aim was to investigate the ability to link data using SLK-581 between two national databases, and to compare this linkage to that achieved with direct identifiers or other non-identifying variables. Methods The Australian and New Zealand Society of Cardiothoracic Surgeons database (ANZSCTS-CSD) contains fully identified data. The Australian and New Zealand Intensive Care Society database (ANZICS-APD) contains non-identified data together with SLK-581. Identifying data is removed at participating hospitals prior to central collation and storage. We used the local hospital ANZICS-APD data at a large single tertiary centre prior to deidentification and linked this to ANZSCTS-CSD data. We compared linkage using SLK-581 to linkage using non-identifying variables (dates of admission and discharge, age and sex) and linkage using a complete set of unique identifiers. We compared the rate of match, rate of mismatch and clinical characteristics between unmatched patients using the different methods. Results There were 1283 patients eligible for matching in the ANZSCTS-CSD. 1242 were matched using unique identifiers. Using non-identifying variables 1151/1242 (92.6%) patients were matched. Using SLK-581, 1202/1242 (96.7%) patients were matched. The addition of non-identifying data to SLK-581 provided few additional patients (1211/1242, 97.5%). Patients who did not match were younger, had a higher mortality risk and more non-standard procedures vs matched patients. The differences between unmatched patients using different matching strategies were small. 
Conclusion All strategies provided an acceptable linkage. SLK-581 improved the linkage compared to non-identifying variables, but was not as successful as direct identifiers. SLK-581 may be used to improve linkage between national registries where identifying information is not available or cannot be released.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
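For orientation, SLK-581 is commonly documented as a 14-character key built from name letters, date of birth, and a sex code. The sketch below follows that commonly cited construction (letters 2, 3, 5 of the family name and 2, 3 of the given name, with '2' padding for missing positions); the exact letter-selection and padding rules should be checked against the AIHW specification before any real use.

```python
def slk581(family_name, given_name, dob_ddmmyyyy, sex_code):
    """Sketch of the commonly documented SLK-581 construction; verify against
    the AIHW data element specification before using on real records."""
    def pick(name, positions):
        # Keep letters only (dropping hyphens, apostrophes, spaces),
        # padding missing positions with the conventional '2'.
        letters = [c for c in name.upper() if c.isalpha()]
        return "".join(letters[p - 1] if p <= len(letters) else "2" for p in positions)
    return pick(family_name, (2, 3, 5)) + pick(given_name, (2, 3)) + dob_ddmmyyyy + str(sex_code)
```

Because the key discards most of the name, distinct people can share a key; that is one reason the linkage above is less exact than direct identifiers.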
  • 34
    Publikationsdatum: 2021-03-28
    Beschreibung: The accurate identification of intrinsically disordered proteins or protein regions is of great importance, as they are involved in critical biological processes and related to various human diseases. In this paper, we develop a deep neural network that is based on the well-known VGG16. Our deep neural network is trained using 1450 proteins from the dataset DIS1616 and the trained neural network is tested on the remaining 166 proteins. Our trained neural network is also tested on the blind test sets R80 and MXD494 to further demonstrate the performance of our model. The MCC value of our trained deep neural network is 0.5132 on the test set DIS166, 0.5270 on the blind test set R80 and 0.4577 on the blind test set MXD494. All of these MCC values exceed the corresponding values of existing prediction methods.
    Digitale ISSN: 1999-4893
    Thema: Informatik
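The MCC values quoted above are Matthews correlation coefficients; for reference, the standard formula from confusion-matrix counts:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from confusion-matrix counts;
    ranges from -1 (total disagreement) to +1 (perfect prediction)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

MCC is well suited to disorder prediction because disordered residues are a minority class, where plain accuracy would be misleading.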
  • 35
    Publikationsdatum: 2021-03-28
    Beschreibung: Image denoising is a challenging research problem that aims to recover noise-free images from those that are contaminated with noise. In this paper, we focus on the denoising of images that are contaminated with additive white Gaussian noise. For this purpose, we propose an ensemble learning model that uses the output of three image denoising models, namely ADNet, IRCNN, and DnCNN, in the ratio of 2:3:6, respectively. The first model (ADNet) consists of convolutional neural networks with attention, along with median filter layers after every convolutional layer and a dilation rate of 8. The second model is a feed-forward denoising CNN (DnCNN) with median filter layers after half of the convolutional layers. The third model, the Deep CNN Denoiser Prior (IRCNN), contains dilated convolutional layers and median filter layers up to the dilated convolutional layers, with a dilation rate of 6. Quantitative analysis shows that our model performs notably well when tested on the BSD500 and Set12 datasets.
    Digitale ISSN: 1999-4893
    Thema: Informatik
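At its core, the 2:3:6 ensemble described above is a pixel-wise weighted average of the three denoisers' outputs; a minimal sketch, with flattened images represented as plain lists for illustration:

```python
def ensemble_denoise(adnet_out, ircnn_out, dncnn_out, weights=(2, 3, 6)):
    """Pixel-wise weighted average of three denoiser outputs in the stated
    2:3:6 (ADNet:IRCNN:DnCNN) ratio; weights are normalized to sum to one."""
    total = sum(weights)
    return [
        (weights[0] * a + weights[1] * b + weights[2] * c) / total
        for a, b, c in zip(adnet_out, ircnn_out, dncnn_out)
    ]
```

Giving the largest weight to the strongest individual model (here DnCNN, per the stated ratio) is the usual rationale for such fixed-ratio ensembles.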
  • 36
    Publikationsdatum: 2021-03-28
    Beschreibung: The optimization of bus scheduling is a key method for improving bus service. The purpose of this paper is therefore to address the regional public transportation dispatching problem while taking into account the association between the departure times of buses and the waiting times of passengers. A bi-objective optimization model for regional public transportation scheduling is established to minimize the total waiting cost of passengers and to maximize the comprehensive service rate of buses. Moreover, an NSGA-II algorithm with adaptively adjusted crossover and mutation probabilities is designed to obtain the Pareto solution set of this problem, and the entropy weight-TOPSIS method is utilized to make a decision. The algorithms are then compared on examples, and the results show that the model is feasible and that the proposed algorithms are effective in solving the regional public transportation scheduling problem.
    Digitale ISSN: 1999-4893
    Thema: Informatik
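The entropy weight-TOPSIS decision step mentioned above is a standard procedure for picking one solution from a Pareto set, and can be sketched as follows. For simplicity this sketch assumes all criteria are positive-valued benefit criteria (cost criteria would first need inverting):

```python
import math

def entropy_weights(matrix):
    """Entropy weights: criteria whose values differ more across alternatives
    receive larger weights. Assumes positive-valued benefit criteria."""
    m, n = len(matrix), len(matrix[0])
    k = 1.0 / math.log(m)
    raw = []
    for j in range(n):
        col = [row[j] for row in matrix]
        s = sum(col)
        e = -k * sum((v / s) * math.log(v / s) for v in col if v > 0)
        raw.append(1.0 - e)                      # degree of diversification
    total = sum(raw)
    return [w / total for w in raw]

def topsis(matrix, weights):
    """Closeness of each alternative to the ideal solution (higher is better)."""
    n = len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n)] for row in matrix]
    best = [max(col) for col in zip(*v)]
    worst = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_best = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_worst = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_worst / (d_best + d_worst))
    return scores
```

A criterion on which all Pareto solutions agree gets (near-)zero weight, so the final choice is driven by the objectives that actually discriminate between solutions.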
  • 37
    Publikationsdatum: 2021-03-25
    Beschreibung: Cicerone and Di Stefano defined and studied the class of k-distance-hereditary graphs, i.e., graphs where the distance in each connected induced subgraph is at most k times the distance in the whole graph. The defined graphs represent a generalization of the well known distance-hereditary graphs, which actually correspond to 1-distance-hereditary graphs. In this paper we make a step forward in the study of these new graphs by providing characterizations for the class of all the k-distance-hereditary graphs such that k
    Digitale ISSN: 1999-4893
    Thema: Informatik
  • 38
    Publikationsdatum: 2021-03-26
    Beschreibung: The multi-facility location problem is a type of task often solved (not only) in logistics. It consists of finding the optimal location of the required number of centers for a given number of points. One of the possible solutions is to use the principle of the genetic algorithm. The Solver add-in, which uses the evolutionary method, is available in the Excel office software. It was used to solve the benchmark at 4 levels of difficulty (from 5 centers for 25 points to 20 centers for 100 points), and one task from practice. The obtained results were compared with the results obtained by the metaheuristic simulated annealing method. It was found that the results obtained by the evolutionary method are sufficiently accurate. Their accuracy depends on the complexity of the task and the performance of the hardware used. The advantage of the proposed solution is its easy availability and minimal requirements for user knowledge.
    Digitale ISSN: 1999-4893
    Thema: Informatik
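As a reference for the objective being minimized, and as a toy stand-in for the evolutionary and simulated-annealing solvers compared above, here is a sketch that evaluates a candidate set of centers and improves it by random search (purely illustrative; this is not the Excel Solver method):

```python
import random

def total_distance(points, centers):
    """Each point is served by its nearest center; return the summed
    Euclidean distances (the objective minimized in the location problem)."""
    return sum(
        min(((px - cx) ** 2 + (py - cy) ** 2) ** 0.5 for cx, cy in centers)
        for px, py in points
    )

def random_search(points, k, iters=100, seed=0):
    """Toy solver: repeatedly draw k candidate centers from the points
    themselves and keep the cheapest assignment found."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iters):
        centers = rng.sample(points, k)
        cost = total_distance(points, centers)
        if cost < best_cost:
            best, best_cost = centers, cost
    return best, best_cost
```

Metaheuristics like simulated annealing replace the blind re-draw with guided perturbation of the current centers, which is what makes them competitive on the larger benchmark sizes.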
  • 39
    Publikationsdatum: 2021-03-16
    Beschreibung: We study the problem of quickly computing point-to-point shortest paths in massive road networks with traffic predictions. Incorporating traffic predictions into routing makes it possible, for example, to avoid commuter traffic congestion. Existing techniques follow a two-phase approach: in a preprocessing step, an index is built. The index depends on the road network and the traffic patterns but not on the path start and end. The latter are the input of the query phase, in which shortest paths are computed. All existing techniques have large index sizes, slow query running times, or may compute suboptimal paths. In this work, we introduce CATCHUp (Customizable Approximated Time-dependent Contraction Hierarchies through Unpacking), the first algorithm that simultaneously achieves all three objectives. The core idea of CATCHUp is to store paths instead of travel times at shortcuts. Shortcut travel times are derived lazily from the stored paths. We perform an experimental study on a set of real-world instances and compare our approach with state-of-the-art techniques. Our approach achieves the fastest preprocessing, competitive query running times and up to 38 times smaller indexes than competing approaches.
    Digitale ISSN: 1999-4893
    Thema: Informatik
  • 40
    Publikationsdatum: 2021-03-14
    Beschreibung: The widespread use of automated decision processes in many areas of our society raises serious ethical issues with respect to the fairness of the process and the possible resulting discrimination. To address this issue, we propose a novel adversarial training approach called GANSan for learning a sanitizer whose objective is to prevent the possibility of any discrimination (i.e., direct and indirect) based on a sensitive attribute by removing the attribute itself as well as its existing correlations with the remaining attributes. Our method GANSan is partially inspired by the powerful framework of generative adversarial networks (in particular Cycle-GANs), which offers a flexible way to learn a distribution empirically or to translate between two different distributions. In contrast to prior work, one of the strengths of our approach is that the sanitization is performed in the same space as the original data, modifying the other attributes as little as possible and thus preserving the interpretability of the sanitized data. Consequently, once the sanitizer is trained, it can be applied locally by an individual to their profile before releasing it. Finally, experiments on real datasets demonstrate the effectiveness of the approach as well as the achievable trade-off between fairness and utility.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 41
    Publikationsdatum: 2021-03-24
    Beschreibung: In this paper, we present a new parametric family of three-step iterative methods for solving nonlinear equations. First, we design a fourth-order triparametric family and, by keeping only one of its parameters free, we accelerate its convergence and finally obtain a sixth-order uniparametric family. For this last family, we study its convergence, its complex dynamics (stability), and its numerical behavior. The parameter spaces and dynamical planes are presented, showing the complexity of the family. From the parameter spaces, we have been able to determine different members of the family that have bad convergence properties, since attracting periodic orbits and attracting strange fixed points appear in their dynamical planes. Moreover, this same study has allowed us to detect family members with especially stable behavior that are suitable for solving practical problems. Several numerical tests are performed to illustrate the efficiency and stability of the presented family.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
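The sixth-order family itself is not specified in the abstract, so it is not reproduced here. As a stand-in for the same style of multi-step scheme, the classical Ostrowski method (a well-known two-step, fourth-order iteration) can be sketched as:

```python
def ostrowski(f, df, x0, tol=1e-12, max_iter=50):
    """Ostrowski's two-step method: a Newton predictor plus a weighted corrector.
    It reaches fourth-order convergence with two f- and one f'-evaluation per step."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx                        # Newton (second-order) predictor
        fy = f(y)
        x = y - fy / dfx * fx / (fx - 2 * fy)   # corrector lifts the order to four
    return x
```

Higher-order families of the kind studied in the paper add further corrector steps and free parameters on top of this structure.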
  • 42
    Molecular Diversity Preservation International
    Publikationsdatum: 2021-03-19
    Beschreibung: Color coding is an algorithmic technique used in parameterized complexity theory to detect “small” structures inside graphs. The idea is to derandomize algorithms that first randomly color a graph and then search for an easily-detectable, small color pattern. We transfer color coding to the world of descriptive complexity theory by characterizing—purely in terms of the syntactic structure of describing formulas—when the powerful second-order quantifiers representing a random coloring can be replaced by equivalent, simple first-order formulas. Building on this result, we identify syntactic properties of first-order quantifiers that can be eliminated from formulas describing parameterized problems. The result applies to many packing and embedding problems, but also to the long path problem. Together with a new result on the parameterized complexity of formula families involving only a fixed number of variables, we get that many problems lie in FPT just because of the way they are commonly described using logical formulas.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
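The randomized half of color coding that the paper derandomizes can be sketched directly. The following is a minimal, generic implementation of color coding for detecting a simple path on k vertices (graph encoding and trial count are illustrative choices, not from the paper):

```python
import random

def has_colorful_path(adj, colors, k):
    # Dynamic programming: reach[v] = set of color masks of colorful paths ending at v.
    reach = {v: {1 << colors[v]} for v in adj}
    for _ in range(k - 1):                       # extend paths by one vertex, k-1 times
        step = {v: set() for v in adj}
        for u in adj:
            for mask in reach[u]:
                for v in adj[u]:
                    if not mask & (1 << colors[v]):      # v's color unused so far
                        step[v].add(mask | (1 << colors[v]))
        reach = step
    return any(reach[v] for v in adj)

def k_path_exists(adj, k, trials=300, seed=0):
    """Color coding: color vertices with k colors at random; any fixed simple
    k-vertex path becomes colorful with probability k!/k^k, so repeat trials."""
    rng = random.Random(seed)
    return any(
        has_colorful_path(adj, {v: rng.randrange(k) for v in adj}, k)
        for _ in range(trials)
    )
```

The randomized version is one-sided: a "no" answer can in principle be wrong with exponentially small probability, which is exactly what the derandomization studied in the paper removes.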
  • 43
    Publikationsdatum: 2021-03-02
    Beschreibung: Device-to-Device (D2D) communications, which enable direct communication between nearby user devices over the licensed spectrum, have been considered a key technique to improve spectral efficiency and system throughput in cellular networks (CNs). However, the limited spectrum resources are not sufficient to support more cellular users (CUs) and D2D users as traffic data grow in future wireless networks. Therefore, Long-Term Evolution-Unlicensed (LTE-U) and D2D-Unlicensed (D2D-U) technologies have been proposed to further enhance system capacity by extending the CUs and D2D users onto the unlicensed spectrum for communications. In this paper, we consider an LTE network where the CUs and D2D users are allowed to share the unlicensed spectrum with Wi-Fi users. To maximize the sum rate of all users while guaranteeing each user’s quality of service (QoS), we jointly consider user access and resource allocation. To tackle the formulated problem, we propose a matching-iteration-based joint user access and resource allocation algorithm. Simulation results show that the proposed algorithm can significantly improve system throughput compared to the other benchmark algorithms.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 44
    Publikationsdatum: 2021-03-17
    Beschreibung: Background Studies that examine the adoption of clinical decision support (CDS) by healthcare providers have generally lacked a theoretical underpinning. The Unified Theory of Acceptance and Use of Technology (UTAUT) model may provide such a theory-based explanation; however, it is unknown if the model can be applied to the CDS literature. Objective Our overall goal was to develop a taxonomy based on UTAUT constructs that could reliably characterize CDS interventions. Methods We used a two-step process: (1) identified randomized controlled trials meeting comparative effectiveness criteria, e.g., evaluating the impact of CDS interventions with and without specific features or implementation strategies; (2) iteratively developed and validated a taxonomy for characterizing differential CDS features or implementation strategies using three raters. Results Twenty-five studies with 48 comparison arms were identified. We applied three constructs from the UTAUT model and added motivational control to characterize CDS interventions. Inter-rater reliability was as follows for model constructs: performance expectancy (κ = 0.79), effort expectancy (κ = 0.85), social influence (κ = 0.71), and motivational control (κ = 0.87). Conclusion We found that constructs from the UTAUT model and motivational control can reliably characterize features and associated implementation strategies. Our next step is to examine the quantitative relationships between constructs and CDS adoption.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
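The inter-rater reliability figures above are Cohen's kappa scores. Assuming the standard two-rater formulation (the study used three raters, whose pairwise treatment is not detailed in the abstract), the statistic can be computed as:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Expected agreement if both raters labelled independently at their own rates.
    expected = sum((c1[lab] / n) * (c2[lab] / n) for lab in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)
```

Kappa is 1 for perfect agreement and 0 when observed agreement equals what chance alone would produce; the reported values of 0.71-0.87 indicate substantial to near-perfect agreement.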
  • 45
    Publikationsdatum: 2021-03-12
    Beschreibung: The authors wish to make the following corrections to their paper [...]
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 46
    Publikationsdatum: 2021-03-09
    Beschreibung: Background We developed transformer-based deep learning models based on natural language processing for early risk assessment of Alzheimer’s disease from the picture description test. Methods The lack of large datasets poses the most important limitation for using complex models that do not require feature engineering. Transformer-based pre-trained deep language models have recently made a large leap in NLP research and application. These models are pre-trained on available large datasets to understand natural language texts appropriately, and are shown to subsequently perform well on classification tasks with small training sets. The overall classification model is a simple classifier on top of the pre-trained deep language model. Results The models are evaluated on picture description test transcripts of the Pitt corpus, which contains data of 170 AD patients with 257 interviews and 99 healthy controls with 243 interviews. The large bidirectional encoder representations from transformers (BERTLarge) embedding with logistic regression classifier achieves classification accuracy of 88.08%, which improves the state-of-the-art by 2.48%. Conclusions Using pre-trained language models can improve AD prediction. This not only solves the problem of lack of sufficiently large datasets, but also reduces the need for expert-defined features.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
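The paper's overall classifier is "a simple classifier on top of the pre-trained deep language model". As a sketch only, with random vectors standing in for BERT embeddings (no transformer library is used here, and the dimensions are illustrative), a from-scratch logistic-regression head looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for pre-trained sentence embeddings of two groups of transcripts
# (hypothetical 16-dimensional vectors; real BERT-Large embeddings have 1024 dims).
X = np.vstack([rng.normal(-1.0, 0.5, (50, 16)),   # class 0: "control-like"
               rng.normal(+1.0, 0.5, (50, 16))])  # class 1: "patient-like"
y = np.array([0] * 50 + [1] * 50)

# Logistic-regression head trained with plain batch gradient descent.
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
    w -= 0.5 * (X.T @ (p - y)) / len(y)     # gradient of the log-loss
    b -= 0.5 * np.mean(p - y)

train_accuracy = float(np.mean(((X @ w + b) > 0) == y))
```

The design point the abstract makes is that all representational capacity lives in the frozen pre-trained embedding, so the trainable head can stay this small even with few labelled interviews.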
  • 47
    Publikationsdatum: 2021-03-09
    Beschreibung: Background Assessing the quality of healthcare data is a complex task including the selection of suitable measurement methods (MM) and adequately assessing their results. Objectives To present an interoperable data quality (DQ) assessment method that formalizes MMs based on standardized data definitions and intends to support collaborative governance of DQ-assessment knowledge, e.g. which MMs to apply and how to assess their results in different situations. Methods We describe and explain central concepts of our method using the example of its first real world application in a study on predictive biomarkers for rejection and other injuries of kidney transplants. We applied our open source tool—openCQA—that implements our method utilizing the openEHR specifications. Means to support collaborative governance of DQ-assessment knowledge are the version-control system git and openEHR clinical information models. Results Applying the method on the study’s dataset showed satisfactory practicability of the described concepts and produced useful results for DQ-assessment. Conclusions The main contribution of our work is to provide applicable concepts and a tested exemplary open source implementation for interoperable and knowledge-based DQ-assessment in healthcare that considers the need for flexible task and domain specific requirements.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 48
    Publikationsdatum: 2021-03-05
    Beschreibung: Background Cost control and usage regulation of medical materials (MMs) are practical issues that the government pays close attention to. Although it is well established that there is great potential in mobilizing doctors and patients to participate in MMs-related clinical decisions, few interventions adopt effective measures against specific behavioral deficiencies. This study aims at developing and validating an independent consultation and feedback system (ICFS) for optimizing clinical decisions on the use of MMs for inpatients needing joint replacement surgeries. Methods Development of the research protocol is based on a problem and deficiency list derived from a trans-theoretical framework which incorporates mainly soft systems thinking, information asymmetry, crisis coping, dual delegation and planned behavior. The intervention consists of two main components targeting patients and doctors, respectively. Each of the intervention ingredients is designed to tackle the doctor- and patient-side problems with the use of MMs in joint replacement surgeries. The intervention arm receives an 18-month ICFS intervention program on top of routine medical services, while the control arm receives only the routine medical services. Implementation of the intervention is supported by an online platform established and maintained by the Quality Assurance Center for Medical Care in Anhui Province, a smartphone-based application program (APP) and a web-based clinical support system. Discussion The implementation of this study is expected to significantly reduce the deficiencies and moral hazards in decision-making on the use of MMs through the output of economic, efficient, sustainable and easy-to-promote cooperative intervention programs, thus greatly reducing medical costs and standardizing medical behaviors. Trial registration number ISRCTN10152297.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 49
    Publikationsdatum: 2021-03-08
    Beschreibung: Background There have been few studies describing how production EMR systems can be systematically queried to identify clinically-defined populations, and limited studies utilising free-text in this process. The aim of this study is to provide a generalisable methodology for constructing clinically-defined EMR-derived patient cohorts using structured and unstructured data in EMRs. Methods Patients with possible acute coronary syndrome (ACS) were used as an exemplar. Cardiologists defined clinical criteria for patients presenting with possible ACS. These were mapped to data tables within the production EMR system, creating seven inclusion criteria comprised of structured data fields (orders and investigations, procedures, scanned electrocardiogram (ECG) images, and diagnostic codes) and unstructured clinical documentation. Data were extracted from two local health districts (LHD) in Sydney, Australia. Outcome measures included examination of the relative contribution of individual inclusion criteria to the identification of eligible encounters, comparisons between inclusion criteria, and evaluation of consistency of data extracts across years and LHDs. Results Among 802,742 encounters in a 5-year dataset (1/1/13–30/12/17), the presence of an ECG image (54.8% of encounters) and symptoms and keywords in clinical documentation (41.4–64.0%) were used most often to identify presentations of possible ACS. Orders and investigations (27.3%) and procedures (1.4%) were less often present for identified presentations. Relevant ICD-10/SNOMED CT codes were present for 3.7% of identified encounters. Similar trends were seen when the two LHDs were examined separately, and across years. Conclusions Combining structured and unstructured data to construct clinically-defined EMR-derived cohorts is a necessary prerequisite for the critical validation work required for development of real-time clinical decision support and learning health systems.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 50
    Publikationsdatum: 2021-03-09
    Beschreibung: Background In the intensive care unit (ICU), delirium is a common, acute, confusional state associated with high risk for short- and long-term morbidity and mortality. Machine learning (ML) has promise to address research priorities and improve delirium outcomes. However, due to clinical and billing conventions, delirium is often inconsistently or incompletely labeled in electronic health record (EHR) datasets. Here, we identify clinical actions abstracted from clinical guidelines in EHR data that indicate risk of delirium among ICU patients. We develop a novel prediction model to label patients with delirium based on a large data set and assess model performance. Methods EHR data on 48,451 admissions from 2001 to 2012, available through the Medical Information Mart for Intensive Care-III database (MIMIC-III), were used to identify features to develop our prediction models. Five binary ML classification models (Logistic Regression; Classification and Regression Trees; Random Forests; Naïve Bayes; and Support Vector Machines) were fit and ranked by Area Under the Curve (AUC) scores. We compared our best model with two models previously proposed in the literature for goodness of fit, precision, and through biological validation. Results Our best performing model with threshold reclassification for predicting delirium was based on a multiple logistic regression using the 31 clinical actions (AUC 0.83). Our model outperformed the other proposed models in biological validation on clinically meaningful, delirium-associated outcomes. Conclusions Hurdles in identifying accurate labels in large-scale datasets limit clinical applications of ML in delirium. We developed a novel labeling model for delirium in the ICU using a large, public data set. By using guideline-directed clinical actions independent from risk factors, treatments, and outcomes as model predictors, our classifier could be used as a delirium label for future clinically targeted models.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
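The candidate models above are ranked by AUC. Assuming the usual rank-based definition (the probability that a randomly chosen positive case scores above a randomly chosen negative one), AUC can be computed without any library as:

```python
def auc(labels, scores):
    """AUC as P(random positive outranks random negative); ties count half.
    Equivalent to the normalized Mann-Whitney U statistic."""
    pos = [s for lab, s in zip(labels, scores) if lab == 1]
    neg = [s for lab, s in zip(labels, scores) if lab == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.83, as reported for the logistic-regression labeler, means a delirium admission receives a higher score than a non-delirium admission 83% of the time.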
  • 51
    Publikationsdatum: 2021-03-06
    Beschreibung: A choice to use a seat belt is largely dependent on the psychology of the vehicles’ occupants, and thus those decisions are expected to be characterized by preference heterogeneity. Despite the importance of seat belt use to roadway safety, the majority of existing studies ignored the heterogeneity in the data and used very standard statistical or descriptive methods to identify the factors behind seat belt use. Application of the right statistical method is of crucial importance to unlock the underlying factors of the choice being made by vehicles’ occupants. Thus, this study was conducted to identify the factors contributing to front-seat passengers’ choice of seat belt usage, while accounting for preference heterogeneity in that choice. The latent class model has been offered as a replacement for the mixed logit model, replacing a continuous distribution with a discrete one. However, one of the shortcomings of the latent class model is that homogeneity is assumed within a class. A further extension is to relax this assumption by allowing some parameters to vary across the same group. The model can be extended further by considering attribute non-attendance (ANA) and aggregation of common-metric attributes (ACMA). Thus, this study compared the goodness of fit of the discussed models. Besides the goodness-of-fit comparison, the share of individuals in each class was used to see how it changes under various model specifications. In summary, the results indicated that adding another layer to account for heterogeneity within the same class of the latent class (LC) model, and accounting for ANA and ACMA, improves the model fit. As discussed in the manuscript, accounting for ANA, ACMA and an extra layer of heterogeneity does not just improve the model’s goodness of fit, but also largely impacts the share of class allocation across the models.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 52
    Publikationsdatum: 2021-03-04
    Beschreibung: A typhoon is an extreme weather event with strong destructive force, which can bring huge losses of life and economic damage to people. Thus, it is meaningful to reduce the prediction errors of typhoon intensity forecasting. Artificial and deep neural networks have recently become widely used for typhoon forecasting in order to ensure typhoon intensity forecasting is accurate and timely. Typhoon intensity forecasting models based on long short-term memory (LSTM) are proposed herein, which forecast typhoon intensity as a time-series problem based on historical typhoon data. First, the typhoon intensity forecasting models are trained and tested with processed typhoon data from 2000 to 2014 to find the optimal prediction factors. Then, the models are validated using the optimal prediction factors and compared to a feed-forward neural network (FNN). As per the results of the models applied to typhoons Chan-hom and Soudelor in 2015, the model based on LSTM using the optimal prediction factors shows the best performance and lowest prediction errors. Thus, the model based on LSTM is practical and meaningful for predicting typhoon intensity within 120 h.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
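The LSTM architecture itself is too large to sketch here; the following shows only the time-series framing the paper relies on, using a plain least-squares autoregressive baseline on a synthetic "intensity" series (the data and window length are illustrative, and the paper's LSTM would replace the linear map):

```python
import numpy as np

def make_windows(series, w):
    """Frame a series as supervised pairs: w past values -> next value."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    return X, np.array(series[w:])

# Synthetic smooth 'intensity' record standing in for historical typhoon data.
t = np.arange(200)
series = 50 + 20 * np.sin(t / 10.0)

X, y = make_windows(series, w=6)
X1 = np.hstack([X, np.ones((len(X), 1))])      # append a bias column
coef, *_ = np.linalg.lstsq(X1, y, rcond=None)  # linear least-squares AR fit

rmse = float(np.sqrt(np.mean((X1 @ coef - y) ** 2)))
```

An LSTM or FNN consumes the same windows but learns a nonlinear map; the point here is only the supervised windowed framing of intensity forecasting.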
  • 53
    Publikationsdatum: 2021-03-26
    Beschreibung: Background Strabismus is a complex disease that has various treatment approaches, each with its own advantages and drawbacks. In this context, shared decision making (SDM) is a communication process in which the provider shares all the relevant treatment alternatives and all the benefits and risks of each procedure, while the patient shares all preferences and values regarding his/her choices. In that way, SDM is a bidirectional process that goes beyond the typical informed consent. However, little is known of the extent to which SDM influences satisfaction with the treatment outcome among strabismus patients. To study this correlation, the SDM-Q-9 questionnaire was administered within surgical consultations where treatment decisions were made; the SDM-Q-9 aims to assess the relationship between post-operative patient satisfaction and the SDM score. Methods The study is a prospective observational pilot study. Eligible patients were adults diagnosed with strabismus who had multiple treatment options and were given the right of choice without being driven toward a physician’s preference. Ninety-three strabismus patients were asked to fill out the SDM-Q-9 questionnaire regarding their perception of SDM during the entire period of strabismus treatment. After the treatment, patients were asked to rate their satisfaction with the surgical outcome as excellent, good, fair, or poor. Descriptive statistics, linear regression and statistical tests (Spearman, Mann-Whitney U, and Kruskal–Wallis) were used as analysis tools. Results The average age of the participants was 24, and 50.6% were women. The mean SDM-Q-9 score among patients was 32 (IQR = 3). The postoperative patient satisfaction was rated as excellent by 16 (17.2%) patients, good by 38 (40.9%), fair by 32 (34.4%), and poor by 7 patients (7.5%). Data analysis by linear regression and statistical tests showed a positive correlation between the SDM-Q-9 score and patient satisfaction related to the surgery outcome (B = 0.005, p
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 54
    Publikationsdatum: 2021-03-18
    Beschreibung: This paper studies a novel intelligent motion control algorithm for Autonomous Underwater Vehicles (AUV) and develops a virtual reality system for a new interactive experimental platform. The paper designs a robust neuro-fuzzy controller to tackle system uncertainties and external disturbances. Fuzzy control can solve the uncertainty problem of control systems. The neural network model self-tunes the controller parameters to improve the anti-interference ability. The designed control algorithm is verified using a MATLAB implementation and a virtual reality system. The virtual reality system developed in this paper can be used to debug the control algorithm, simulate the marine environment, and establish an ocean current interference model. The paper uses the MATLAB engine to realize the data communication between the MATLAB and the AUV virtual reality system. This allows the output order of the controller in MATLAB to drive the AUV in a virtual simulation system to simulate the 3D space motion.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 55
    Publikationsdatum: 2021-03-15
    Beschreibung: To reconstruct point geometry from multiple images, computation of the fundamental matrix is always necessary. With a new optimization criterion, i.e., the re-projective 3D metric geometric distance rather than a distance in projective space under the RANSAC (Random Sample And Consensus) framework, our method can reveal the quality of the fundamental matrix visually through 3D reconstruction. The geometric distance is the projection error of 3D points onto the corresponding image pixel coordinates in metric space. We show reasonable visual figures of the reconstructed scenes, rather than comparing only numerical results as is standard practice. This criterion can lead to a better 3D reconstruction result, especially in 3D metric space. Our experiments validate the new error criterion and the quality of the fundamental matrix under it.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
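The paper's re-projective metric criterion is not reproduced here, but the object it evaluates can be: below is a sketch of the classical normalized eight-point estimate of the fundamental matrix, checked on noise-free synthetic correspondences (the cameras and points are illustrative, and RANSAC around this estimator is omitted):

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: center points, scale mean distance to sqrt(2)."""
    c = pts.mean(axis=0)
    s = np.sqrt(2) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1]])
    return (T @ np.column_stack([pts, np.ones(len(pts))]).T).T, T

def eight_point(p1, p2):
    """Normalized eight-point estimate of F, satisfying x2^T F x1 = 0."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1)),
    ])
    F = np.linalg.svd(A)[2][-1].reshape(3, 3)   # null vector of A
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt     # enforce rank 2
    return T2.T @ F @ T1

# Noise-free synthetic correspondences from two illustrative cameras.
rng = np.random.default_rng(1)
Xw = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 6.0])   # points in front
th = 0.1
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([R, np.array([[1.0], [0.0], [0.0]])])

def project(P, X):
    h = (P @ np.column_stack([X, np.ones(len(X))]).T).T
    return h[:, :2] / h[:, 2:]

p1, p2 = project(P1, Xw), project(P2, Xw)
F = eight_point(p1, p2)
h1 = np.column_stack([p1, np.ones(20)])
h2 = np.column_stack([p2, np.ones(20)])
# Algebraic epipolar residual, scale-normalized; ~0 for a good F.
residual = float(np.max(np.abs(np.sum(h2 * (h1 @ F.T), axis=1))) / np.linalg.norm(F))
```

The paper's contribution replaces the algebraic/projective residual used to score such estimates inside RANSAC with a 3D metric re-projection distance.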
  • 56
    Publikationsdatum: 2021-03-17
    Beschreibung: In this paper, we present a decentralized unmanned aerial vehicle (UAV) swarm formation control approach based on a decision theoretic approach. Specifically, we pose the UAV swarm motion control problem as a decentralized Markov decision process (Dec-MDP). Here, the goal is to drive the UAV swarm from an initial geographical region to another geographical region where the swarm must form a three-dimensional shape (e.g., surface of a sphere). As most decision-theoretic formulations suffer from the curse of dimensionality, we adapt an existing fast approximate dynamic programming method called nominal belief-state optimization (NBO) to approximately solve the formation control problem. We perform numerical studies in MATLAB to validate the performance of the above control algorithms.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 57
    Publikationsdatum: 2021-03-18
    Beschreibung: Topic Detection and Tracking (TDT) on Twitter emulates human identifying developments in events from a stream of tweets, but while event participants are important for humans to understand what happens during events, machines have no knowledge of them. Our evaluation on football matches and basketball games shows that identifying event participants from tweets is a difficult problem exacerbated by Twitter’s noise and bias. As a result, traditional Named Entity Recognition (NER) approaches struggle to identify participants from the pre-event Twitter stream. To overcome these challenges, we describe Automatic Participant Detection (APD) to detect an event’s participants before the event starts and improve the machine understanding of events. We propose a six-step framework to identify participants and present our implementation, which combines information from Twitter’s pre-event stream and Wikipedia. In spite of the difficulties associated with Twitter and NER in the challenging context of events, our approach manages to restrict noise and consistently detects the majority of the participants. By empowering machines with some of the knowledge that humans have about events, APD lays the foundation not just for improved TDT systems, but also for a future where machines can model and mine events for themselves.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 58
    Publikationsdatum: 2021-03-20
    Beschreibung: Background Diabetes Mellitus (DM) has become the third chronic non-communicable disease that hits patients, after tumors and cardiovascular and cerebrovascular diseases, and has become one of the major public health problems in the world. Therefore, it is of great importance to identify individuals at high risk for DM in order to establish prevention strategies. Methods This study addresses the high-dimensional feature space and high feature redundancy of medical data, as well as the frequently encountered problem of data imbalance. It explores different supervised classifiers, combined with SVM-SMOTE and two feature dimensionality reduction methods (logistic stepwise regression and LASSO), to classify diabetes survey sample data with unbalanced categories and complex related factors. The classification results of 4 supervised classifiers based on the 4 data processing methods are analyzed and discussed. Five indicators, including Accuracy, Precision, Recall, F1-Score and AUC, are selected as the key metrics to evaluate the performance of the classification models. Results According to the results, the Random Forest classifier combining SVM-SMOTE resampling technology and the LASSO feature screening method (Accuracy = 0.890, Precision = 0.869, Recall = 0.919, F1-Score = 0.893, AUC = 0.948) proved the best way to identify those at high risk of DM. Besides, the combined algorithm helps enhance the classification performance for prediction of high-risk people of DM. Also, age, region, heart rate, hypertension, hyperlipidemia and BMI are the top six most critical characteristic variables affecting diabetes. Conclusions The Random Forest classifier combined with SVM-SMOTE and the LASSO feature reduction method performs best in identifying high-risk people of DM, and the combined method proposed in the study would be a good tool for early screening of DM.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 59
    Publikationsdatum: 2021-03-20
    Beschreibung: Background A central goal among researchers and policy makers seeking to implement clinical interventions is to identify key facilitators and barriers that contribute to implementation success. Despite calls from a number of scholars, empirical insight into the complex structural and cultural predictors of why decision aids (DAs) become routinely embedded in health care settings remains limited and highly variable across implementation contexts. Methods We examined associations between “reach”, a widely used indicator (from the RE-AIM model) of implementation success, and multi-level site characteristics of nine LVAD clinics engaged over 18 months in implementation and dissemination of a decision aid for left ventricular assist device (LVAD) treatment. Based on data collected from nurse coordinators, we explored factors at the level of the organization (e.g. patient volume), patient population (e.g. health literacy; average sickness level), clinician characteristics (e.g. attitudes towards the decision aid; readiness for change) and process (how the aid was administered). We generated descriptive statistics for each site and calculated zero-order correlations (Pearson’s r) between all multi-level site variables, including cumulative reach at 12 months and 18 months, for all sites. We used principal components analysis (PCA) to examine any latent factors governing relationships between and among all site characteristics, including reach. Results We observed the steepest increases in reach of our decision aid across the first year, with uptake fluctuating over the second year. Average reach across sites was 63% (s.d. = 19.56) at 12 months and 66% (s.d. = 19.39) at 18 months. Our PCA revealed that site characteristics positively associated with reach on two distinct dimensions: a first dimension reflecting greater organizational infrastructure and standardization (characteristic of larger, more established clinics) and a second dimension reflecting positive attitudinal orientations, specifically openness and capacity to give and receive decision support among coordinators and patients. Conclusions Successful implementation plans should incorporate specific efforts to promote supportive and mutually informative interactions between clinical staff members and to institute systematic and standardized protocols to enhance the availability, convenience and salience of the intervention tool in routine practice. Further research is needed to understand whether “core predictors” of success vary across different intervention types.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 60
    Publikationsdatum: 2021-02-18
    Beschreibung: For a given positive integer k, the k-circle formation problem asks a set of autonomous, asynchronous robots to form disjoint circles having k robots each at distinct locations, centered at a set of fixed points in the Euclidean plane. The robots are identical, anonymous, oblivious, and they operate in Look–Compute–Move cycles. This paper studies the k-circle formation problem and its relationship with the k-epf problem, a generalized version of the embedded pattern formation problem, which asks exactly k robots to reach and remain at each fixed point. First, the k-circle formation problem is studied in a setting where the robots have an agreement on the common direction and orientation of one of the axes. We have characterized all the configurations and the values of k, for which the k-circle formation problem is deterministically unsolvable in this setting. For the remaining configurations and the values of k, a deterministic distributed algorithm has been proposed, in order to solve the problem. It has been proved that for the initial configurations with distinct robot positions, if the k-circle formation problem is deterministically solvable then the k-epf problem is also deterministically solvable. It has been shown that by modifying the proposed algorithm, the k-epf problem can be solved deterministically.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 61
    Publikationsdatum: 2021-02-08
    Beschreibung: Background The coronavirus disease 2019 (COVID-19) pandemic has caused health concerns worldwide since December 2019. From the beginning of infection, patients progress through different symptom stages, such as fever, dyspnea or even death. Identifying disease progression and predicting patient outcomes at an early stage helps target treatment and resource allocation. However, there is no clear COVID-19 stage definition, and few studies have addressed characterizing COVID-19 progression, making the need for this study evident. Methods We proposed a temporal deep learning method, based on a time-aware long short-term memory (T-LSTM) neural network, and used an online open dataset, including blood samples of 485 patients from Wuhan, China, to train the model. Our method can capture the dynamic relations in irregularly sampled time series, which existing works ignore. Specifically, our method predicted the outcome of COVID-19 patients by considering both the biomarkers and the irregular time intervals. Then, we used the patient representations, extracted from T-LSTM units, to subtype the patient stages and describe the disease progression of COVID-19. Results Using our method, the accuracy of outcome prediction was more than 90% at 12 days and 98%, 95% and 93% at 3, 6, and 9 days, respectively. Most importantly, we found 4 stages of COVID-19 progression with different patient statuses and mortality risks. We ranked 40 disease-related biomarkers and gave reference values for each stage; the top 5 were Lymph, LDH, hs-CRP, indirect bilirubin, and creatinine. In addition, we identified 3 complications: myocardial injury, liver function injury and renal function injury. Predicting which of the 4 stages the patient is currently in can help doctors better assess and cure the patient.
Conclusions To combat the COVID-19 epidemic, this paper aims to help clinicians better assess and treat infected patients, provide relevant researchers with potential disease progression patterns, and enable more effective use of medical resources. Our method predicted patient outcomes with high accuracy and identified a four-stage disease progression. We hope that the obtained results and patterns will aid in fighting the disease.
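The distinguishing element of a T-LSTM is its time-aware adjustment of the cell state between visits. The numpy sketch below is an illustrative assumption, not the authors' implementation: the decay function and the placeholder decomposition weights `W_d`, `b_d` follow the commonly used T-LSTM formulation, where only the short-term part of the memory is discounted by the elapsed interval.

```python
import numpy as np

def time_decay(delta_t):
    # Monotonically decreasing discount; g(0) = 1 and g(dt) -> 0 as dt grows.
    return 1.0 / np.log(np.e + delta_t)

def adjust_cell_state(c_prev, delta_t, W_d, b_d):
    # Split the previous cell state into a learned short-term component and
    # the long-term remainder; only the short-term part is discounted by
    # the elapsed time between two irregularly spaced observations.
    c_short = np.tanh(W_d @ c_prev + b_d)
    c_long = c_prev - c_short
    return c_long + c_short * time_decay(delta_t)
```

With `delta_t = 0` the state passes through unchanged; longer gaps shrink the short-term memory, which is how irregular sampling intervals enter the model.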
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 62
    Publikationsdatum: 2021-02-04
    Beschreibung: The sharing mode of the logistics industry can effectively solve the new problems arising from the rapid development of the express industry. However, the sharing mode can only be sustained over the long term if profits are distributed fairly. This paper discusses the connotation of unified warehouse and distribution, designs the operation mode of a unified warehouse and distribution, and solves the profit distribution problem of a unified warehouse and distribution alliance based on an improved Shapley value method. Firstly, the traditional Shapley value method is improved by using a comprehensive correction factor, including the proportions of investment, risk, and innovative research contributions. Secondly, each factor’s weight is determined by the analytic hierarchy process (AHP), and the profits are distributed according to the contribution of each express enterprise to the alliance. Finally, an example verifies the validity of the modified algorithm and shows that the modified Shapley value method can effectively solve the profit distribution problem.
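The baseline that the correction factors modify is the classical Shapley value, which averages each player's marginal contribution over all join orders. A minimal sketch (the three-firm characteristic function is hypothetical, chosen only to illustrate the computation):

```python
from itertools import permutations

def shapley_values(players, v):
    """Exact Shapley values: average each player's marginal contribution
    v(S + p) - v(S) over every ordering in which coalitions can form."""
    phi = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    return {p: phi[p] / len(orders) for p in phi}

# Hypothetical characteristic function for three express firms A, B, C
# (profit in units of 10,000 yuan); the numbers are illustrative only.
profit = {frozenset(): 0, frozenset('A'): 10, frozenset('B'): 12,
          frozenset('C'): 8, frozenset('AB'): 30, frozenset('AC'): 24,
          frozenset('BC'): 26, frozenset('ABC'): 48}
v = lambda s: profit[frozenset(s)]
```

The paper's modification then rescales these baseline shares by an AHP-weighted correction factor for investment, risk, and innovation contributions; the efficiency property (shares summing to the grand-coalition profit) must be preserved by any such rescaling.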
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 63
    Publikationsdatum: 2021-02-04
    Beschreibung: Cluster analysis is widely applied in the neuropsychological field for exploring patterns in cognitive profiles, but traditional hierarchical and non-hierarchical approaches are often ineffective or even inapplicable for certain types of data. Moreover, these traditional approaches require the initial specification of the number of clusters, based on a priori knowledge that is not always available. For this reason, we proposed a novel method for cognitive clustering through the affinity propagation (AP) algorithm. In particular, we applied AP clustering to the regression residuals of the Mini Mental State Examination scores—a commonly used screening tool for cognitive impairment—of a cohort of 49 Parkinson’s disease, 48 Progressive Supranuclear Palsy and 44 healthy control participants. We found four clusters: two clusters (68 and 30 participants) showed almost intact cognitive performance, one cluster had moderate cognitive impairment (34 participants), and the last cluster had a more extensive cognitive deficit (8 participants). The findings showed, for the first time, an intra- and inter-diagnostic heterogeneity in the cognitive profile of patients with Parkinsonism. Our novel method of unsupervised learning could represent a reliable tool for supporting neuropsychologists in understanding the natural structure of cognitive performance in neurodegenerative diseases.
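Affinity propagation selects exemplars automatically by exchanging "responsibility" and "availability" messages between points, so the number of clusters need not be fixed in advance. A minimal numpy sketch (the toy data, the negative-squared-distance similarity, and the preference value on the diagonal are illustrative assumptions, not the paper's settings):

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iterations=200):
    """Minimal affinity propagation on a similarity matrix S,
    whose diagonal holds each point's exemplar preference."""
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibility: how well k suits as exemplar for i
    A = np.zeros((n, n))  # availability: how appropriate it is for i to pick k
    idx = np.arange(n)
    for _ in range(iterations):
        # r(i,k) = s(i,k) - max_{k' != k} (a(i,k') + s(i,k'))
        AS = A + S
        first = AS.max(axis=1)
        kmax = AS.argmax(axis=1)
        AS[idx, kmax] = -np.inf
        second = AS.max(axis=1)
        R_new = S - first[:, None]
        R_new[idx, kmax] = S[idx, kmax] - second
        R = damping * R + (1 - damping) * R_new
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        Rp[idx, idx] = R[idx, idx]
        A_new = Rp.sum(axis=0)[None, :] - Rp
        diag = A_new[idx, idx].copy()
        A_new = np.minimum(A_new, 0)
        A_new[idx, idx] = diag
        A = damping * A + (1 - damping) * A_new
    exemplars = np.where(np.diag(R) + np.diag(A) > 0)[0]
    labels = np.argmax(S[:, exemplars], axis=1)
    return exemplars, labels
```

In the paper's setting, the rows clustered this way would be the MMSE regression residuals rather than raw coordinates; the message-passing step is unchanged.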
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 64
    Publikationsdatum: 2021-02-10
    Beschreibung: Background Rheumatoid arthritis (RA) is an autoimmune disorder with systemic inflammation and may be induced by oxidative stress that affects an inflamed joint. Our objectives were to examine isotypes of autoantibodies against 4-hydroxy-2-nonenal (HNE) modifications in RA and to associate their increased levels with RA. Methods Serum samples from 155 female patients [60 with RA, 35 with osteoarthritis (OA), and 60 healthy controls (HCs)] were obtained. Four novel differential HNE-modified peptide adducts, complement factor H (CFAH)1211–1230, haptoglobin (HPT)78–108, immunoglobulin (Ig) kappa chain C region (IGKC)2–19, and prothrombin (THRB)328–345, were re-analyzed using tandem mass spectrometric (MS/MS) spectra (ProteomeXchange: PXD004546) from RA patients vs. HCs. Further, we determined serum protein levels of CFAH, HPT, IGKC and THRB, HNE-protein adducts, and autoantibodies against unmodified and HNE-modified peptides. Significant correlations and odds ratios (ORs) were calculated. Results Levels of HPT in RA patients were significantly higher than those in HCs. Levels of HNE-protein adducts and autoantibodies in RA patients were significantly greater than those of HCs. IgM anti-HPT78−108 HNE, IgM anti-IGKC2−19, and IgM anti-IGKC2−19 HNE may be considered as diagnostic biomarkers for RA. Importantly, elevated levels of IgM anti-HPT78−108 HNE, IgM anti-IGKC2−19, and IgG anti-THRB328−345 were positively correlated with the disease activity score in 28 joints for C-reactive protein (DAS28-CRP). Further, the ORs of RA development through IgM anti-HPT78−108 HNE (OR 5.235, p 
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 65
    Publikationsdatum: 2021-02-06
    Beschreibung: Background Researchers developing prediction models are faced with numerous design choices that may impact model performance. One key decision is how to include patients who are lost to follow-up. In this paper we perform a large-scale empirical evaluation investigating the impact of this decision. In addition, we aim to provide guidelines for how to deal with loss to follow-up. Methods We generate a partially synthetic dataset with complete follow-up and simulate loss to follow-up based either on random selection or on selection based on comorbidity. In addition to our synthetic data study we investigate 21 real-world data prediction problems. We compare four simple strategies for developing models when using a cohort design that encounters loss to follow-up. Three strategies employ a binary classifier with data that: (1) include all patients (including those lost to follow-up), (2) exclude all patients lost to follow-up or (3) only exclude patients lost to follow-up who do not have the outcome before being lost to follow-up. The fourth strategy uses a survival model with data that include all patients. We empirically evaluate the discrimination and calibration performance. Results The partially synthetic data study results show that excluding patients who are lost to follow-up can introduce bias when loss to follow-up is common and does not occur at random. However, when loss to follow-up was completely at random, the choice of addressing it had negligible impact on model discrimination performance. Our empirical real-world data results showed that the four design choices investigated to deal with loss to follow-up resulted in comparable performance when the time-at-risk was 1-year but demonstrated differential bias when we looked into 3-year time-at-risk. Removing patients who are lost to follow-up before experiencing the outcome but keeping patients who are lost to follow-up after the outcome can bias a model and should be avoided. 
Conclusion Based on this study we therefore recommend (1) developing models using data that include patients lost to follow-up and (2) evaluating the discrimination and calibration of models twice: on a test set including patients lost to follow-up and on a test set excluding them.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 66
    Publikationsdatum: 2021-02-08
    Beschreibung: Following publication of the original article [1], it was reported that the contents of Additional file 2 were a duplicate of the files for Additional file 1.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 67
    Publikationsdatum: 2021-02-09
    Beschreibung: Background Information literacy competency is one of the requirements for implementing Evidence-Based Practice (EBP) in nursing. It is necessary to pay attention to curricular development and to use new educational methods such as virtual education to strengthen information literacy competency in nursing students. Given the scarcity of studies on the effectiveness of virtual education in nursing, particularly in Iran, and the positive university atmosphere regarding the use of virtual education, this study investigated the effect of virtual education on undergraduate nursing students’ information literacy competency for EBP. Methods This interventional study was performed with an intervention group and a control group in a pretest-posttest design. Seventy-nine nursing students were selected and assigned to the intervention or control group by random sampling. The virtual information literacy course was uploaded to a website as six modules delivered over four weeks. Questionnaires on demographic information and information literacy for EBP were used to collect data before and one month after the virtual education. Results The results showed no significant difference between the control and intervention groups in any dimension of information literacy competency at the pre-test stage. In the post-test, the virtual education improved the dimensions of information seeking skills (t = 3.14, p = 0.002) and knowledge about search operators (t = 39.84, p = 0.001) in the intervention group compared with the control group. The virtual education had no significant effect on the use of different information resources or the development of search strategies, as assessed by the frequency of selecting the most appropriate search statement, in the intervention group. Conclusion Virtual education had a significant effect on information seeking skills and knowledge about search operators in nursing students.
Nurse educators can draw on our experience when designing virtual education programs for nursing schools. Given the lack of effectiveness of this program regarding the use of different information resources and the development of search strategies, nurse educators are recommended to teach information literacy for EBP by integrating several approaches, such as virtual (online and offline) and face-to-face education.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 68
    Publikationsdatum: 2021-02-09
    Beschreibung: Background U.S. hospitals and dialysis centers are penalized for 30-day hospital readmissions of dialysis patients, despite little infrastructure to facilitate care transitions between these settings. We are developing a third-party web-based information exchange platform, DialysisConnect, to enable clinicians to view and exchange information about dialysis patients during admission, hospitalization, and discharge. This health information technology solution could serve as a flexible and relatively affordable solution for dialysis facilities and hospitals across the nation who are seeking to serve as true partners in the improved care of dialysis patients. The purpose of this study was to evaluate the perceived coherence of DialysisConnect to key clinical stakeholders, to prepare messaging for implementation. Methods As part of a hybrid effectiveness-implementation study guided by Normalization Process Theory, we collected data on stakeholder perceptions of continuity of care for patients receiving maintenance dialysis and a DialysisConnect prototype before completing development and piloting the system. We conducted four focus groups with stakeholders from one academic hospital and associated dialysis centers [hospitalists (n = 5), hospital staff (social workers, nurses, pharmacists; n = 9), nephrologists (n = 7), and dialysis clinic staff (social workers, nurses; n = 10)]. Transcriptions were analyzed thematically within each component of the construct of coherence (differentiation, communal specification, individual specification, and internalization). Results Participants differentiated DialysisConnect from usual care variously as an information dashboard, a quick-exchange communication channel, and improved discharge information delivery; some could not differentiate it in terms of workflow. The purpose of DialysisConnect (communal specification) was viewed as fully coherent only for communicating outside of the same healthcare system. 
Current system workarounds were acknowledged as deterrents to practice change. All groups delegated DialysisConnect tasks (individual specification) to personnel other than themselves. Partial internalization of DialysisConnect was achieved only by dialysis clinic staff, based on experience with similar technology. Conclusions Implementing DialysisConnect for clinical users in both settings will require presenting a composite picture of current communication processes from all stakeholder groups to correct single-group misunderstandings, as well as providing data about care-transitions communication beyond the local context to ease resistance to practice change.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 69
    Publikationsdatum: 2021-02-06
    Beschreibung: Background Researchers and policy makers have long suspected that people have differing, and potentially nefarious, motivations for participating in stated-preference studies such as discrete-choice experiments (DCE). While anecdotes and theories exist on why people participate in surveys, there is a paucity of evidence exploring variation in preferences for participating in stated-preference studies. Methods We used a DCE to estimate preferences for participating in preference research among an online survey panel sample. Preferences for the characteristics of a study to be conducted at a local hospital were assessed across five attributes (validity, relevance, bias, burden, time and payment) and described across three levels using a starring system. A D-efficient experimental design was used to construct three blocks of 12 choice tasks with two profiles each. Respondents were also asked about factors that motivated their choices. Mixed logistic regression was used to analyze the aggregate sample and latent class analysis identified segments of respondents. Results 629 respondents completed the experiment. In aggregate “study validity” was most important. Latent class results identified two segments based on underlying motivations: a quality-focused segment (76%) who focused most on validity, relevance, and bias and a convenience-focused segment (24%) who focused most on reimbursement and time. Quality-focused respondents spent more time completing the survey (p 
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 70
    Publikationsdatum: 2021-02-08
    Beschreibung: This work studies clustering algorithms that operate with ordinal or comparison-based queries (operations), a situation that arises in many active-learning applications where “dissimilarities” between data points are evaluated by humans. Typically, exact answers are costly (or difficult to obtain in large amounts) while possibly erroneous answers have low cost. Motivated by these considerations, we study algorithms with non-trivial trade-offs between the number of exact (high-cost) operations and noisy (low-cost) operations, with provable performance guarantees. Specifically, we study a class of polynomial-time graph-based clustering algorithms (termed Single-Linkage) that are widely used in practice and that guarantee exact solutions for stable instances of several clustering problems (these problems are NP-hard in the worst case). We provide several variants of these algorithms using ordinal operations and, in particular, non-trivial trade-offs between the number of high-cost and low-cost operations that are used. Our algorithms still guarantee exact solutions for stable instances of k-medoids clustering, and they use a rather small number of high-cost operations without increasing the low-cost operations too much.
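With exact distance queries, Single-Linkage clustering amounts to running Kruskal's algorithm on the pairwise-distance graph and stopping once k components remain; the comparison-based variants in the paper replace the exact edge ordering with ordinal queries. A minimal exact-query sketch on 1-D points (the data are illustrative):

```python
def single_linkage(points, k):
    """Single-linkage into k clusters: repeatedly merge the two closest
    clusters, i.e. Kruskal's algorithm stopped at k components."""
    n = len(points)
    parent = list(range(n))

    def find(i):  # union-find root with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Sort all pairwise distances once (the "exact operations").
    edges = sorted((abs(points[i] - points[j]), i, j)
                   for i in range(n) for j in range(i + 1, n))
    components = n
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            components -= 1
            if components == k:
                break
    return [find(i) for i in range(n)]
```

The paper's contribution lies in replacing most of the sorted exact comparisons above with cheap noisy ones while preserving exactness on stable instances; this sketch shows only the baseline.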
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 71
    Publikationsdatum: 2021-02-09
    Beschreibung: Autonomous vehicles require fleet-wide data collection for continuous algorithm development and validation. The smart black box (SBB) intelligent event data recorder has been proposed as a system for prioritized high-bandwidth data capture. This paper extends the SBB by applying anomaly detection and action detection methods for generalized event-of-interest (EOI) detection. An updated SBB pipeline is proposed for the real-time capture of driving video data. A video dataset is constructed to evaluate the SBB on real-world data for the first time. SBB performance is assessed by comparing the compression of normal and anomalous data and by comparing our prioritized data recording with a FIFO strategy. The results show that SBB data compression can increase the anomalous-to-normal memory ratio by ∼25%, while the prioritized recording strategy increases the anomalous-to-normal count ratio when compared to a FIFO strategy. We compare the real-world dataset SBB results to a baseline SBB given ground-truth anomaly labels and conclude that improved general EOI detection methods will greatly improve SBB performance.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 72
    Publikationsdatum: 2021-02-07
    Beschreibung: K-Means Clustering is a popular technique in data analysis and data mining. To remedy the K-Means Clustering (KMC) algorithm’s dependence on initialization and its tendency to converge to local minima, a chaotic adaptive artificial bee colony (CAABC) clustering algorithm is presented to optimally partition objects into K clusters in this study. The algorithm adopts the max–min distance product method for initialization. In addition, a new fitness function is adapted to the KMC algorithm. The iteration follows an adaptive search strategy, and Fuch chaotic disturbance is added to avoid convergence to local optima. The step length decreases linearly during the iteration. To overcome the shortcomings of the classic ABC algorithm, the simulated annealing criterion is introduced into the CAABC. Finally, the combined algorithm is compared with other stochastic heuristic algorithms on 20 standard test functions and 11 datasets. The results demonstrate that CAABC-K-means has an advantage in speed and accuracy of convergence over some conventional algorithms for solving clustering problems.
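The initialization idea can be illustrated with the standard farthest-first (max-min) seeding, which spreads the initial centers apart instead of drawing them uniformly at random; note this is a simplified assumption, as the paper's max–min distance *product* rule differs in detail.

```python
import numpy as np

def maxmin_init(X, k, seed=0):
    """Farthest-first seeding: pick the first center at random, then pick
    each next center to maximize its distance to the nearest chosen center."""
    rng = np.random.default_rng(seed)
    centers = [X[rng.integers(len(X))]]
    for _ in range(k - 1):
        # Distance from every point to its nearest already-chosen center.
        d = np.min([np.linalg.norm(X - c, axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    return np.array(centers)
```

Seeding like this makes it likely that each true cluster contributes one initial center, which is precisely the sensitivity to initialization that the CAABC algorithm targets.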
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 73
    Publikationsdatum: 2021-02-10
    Beschreibung: Background This study aims to explore an information chain management model for the interworking of large instruments and equipment in the operating room (OR), led by information nurses. Methods Based on information nurses, a chain management process for large instruments and equipment in the OR was established: an interworking, integrated information chain management model was built, key links were controlled, and whole-life-cycle management of instruments and equipment, from planned procurement to disposal, was realized. Using the cluster sampling method, 1562 surgical patients were selected. Among these patients, 749 were assigned to the control group before the running mode, and 813 were assigned to the observation group after the running mode. The related indexes for large instrument and equipment management in the department before and after the running mode were compared. Results In the observation group, the average time of equipment registration was (22.05 ± 2.36), the cost was reduced by 2220 yuan/year, and the satisfaction rate of the nursing staff was 97.62%. These results were significantly better than those of the control group (P 
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 74
    Publikationsdatum: 2021-02-11
    Beschreibung: Mass spectrometers enable identifying proteins in biological samples, leading to biomarkers for biological process parameters and diseases. However, bioinformatic evaluation of mass spectrometer data needs a standardized workflow and a system that stores the protein sequences. Due to their standardization and maturity, relational systems are a good fit for storing protein sequences. Hence, in this work, we present a schema for distributed column-based database management systems using a column-oriented index to store sequence data. In order to achieve high storage performance, it was necessary to choose a well-performing strategy for transforming the protein sequence data from the FASTA format to the new schema. Therefore, we evaluated the performance of an in-memory map, the HDDmap database engine, and an extended radix tree. The results show that our proposed extended radix tree performs best regarding memory consumption and runtime. Hence, the radix tree is a suitable data structure for transforming protein sequences into the indexed schema.
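A radix tree is a prefix trie in which chains of single-child nodes are compressed into one edge, which is what saves memory on long, similar sequences. The uncompressed trie below sketches the underlying prefix-sharing idea (the amino-acid strings are illustrative; the paper's extended radix tree additionally performs the edge compression):

```python
class Trie:
    """Simplified (uncompressed) prefix tree over amino-acid letters.
    Each node is a dict mapping a letter to the child node."""

    def __init__(self):
        self.root = {}

    def insert(self, seq):
        node = self.root
        for ch in seq:
            node = node.setdefault(ch, {})  # shared prefixes reuse nodes
        node['$'] = True  # end-of-sequence marker

    def contains(self, seq):
        node = self.root
        for ch in seq:
            if ch not in node:
                return False
            node = node[ch]
        return '$' in node
```

Sequences sharing a prefix (e.g. "MKV" and "MKT") store that prefix once; a radix tree would further collapse the unbranched "MK" chain into a single edge.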
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 75
    Publikationsdatum: 2021-02-09
    Beschreibung: We consider the distributed setting of N autonomous mobile robots that operate in Look-Compute-Move (LCM) cycles and use colored lights (the robots with lights model). We assume obstructed visibility where a robot cannot see another robot if a third robot is positioned between them on the straight line segment connecting them. In this paper, we consider the problem of positioning N autonomous robots on a plane so that every robot is visible to all others (this is called the Complete Visibility problem). This problem is fundamental, as it provides a basis to solve many other problems under obstructed visibility. In this paper, we provide the first, asymptotically optimal, O(1) time, O(1) color algorithm for Complete Visibility in the asynchronous setting. This significantly improves on an O(N)-time translation of the existing O(1) time, O(1) color semi-synchronous algorithm to the asynchronous setting. The proposed algorithm is collision-free, i.e., robots do not share positions, and their paths do not cross. We also introduce a new technique for moving robots in an asynchronous setting that may be of independent interest, called Beacon-Directed Curve Positioning.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 76
    Publikationsdatum: 2021-02-15
    Beschreibung: The reconstruction of gene regulatory networks based on gene expression data can effectively uncover regulatory relationships between genes and provide a deeper understanding of biological control processes. Non-linear dependence is a common problem in the regulatory mechanisms of gene regulatory networks. Various methods based on information theory have been developed to infer networks, but they introduce many redundant regulatory relationships during network inference. A more recent measure called distance correlation has, in many cases, proven both effective at detecting non-linear correlations and computationally efficient. In this paper, we propose a novel regulatory network inference method called the distance-correlation and network topology centrality network (DCNTC) method. The method is based on and extends the Local Density Measurement of Network Node Centrality (LDCNET) algorithm: it makes the same choice of network centrality ranking as LDCNET but uses a simpler and more efficient distance correlation measure of association between genes. In this work, we integrate distance correlation and network topological centrality into the reasoning about the structure of gene regulatory networks. We select optimal thresholds based on the characteristics of the distribution of each gene pair with respect to distance correlation. Experiments were carried out on four network datasets and their performance was compared.
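Distance correlation double-centers the pairwise distance matrix of each variable and then correlates the two centered matrices; unlike Pearson's r, it vanishes only under independence, which is why it can pick up non-linear gene-gene dependence. A minimal numpy sketch for 1-D samples, following the standard sample definition of Székely et al.:

```python
import numpy as np

def distance_correlation(x, y):
    """Sample distance correlation of two equal-length 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])  # pairwise distances within x
    b = np.abs(y[:, None] - y[None, :])  # pairwise distances within y
    # Double-center each distance matrix (subtract row/column means, add grand mean).
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()               # squared distance covariance
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))
```

For gene pairs, `x` and `y` would be the expression profiles of two genes across samples; thresholding these scores is the step the DCNTC method tunes per pair.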
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 77
    Publikationsdatum: 2021-02-13
    Beschreibung: We consider the facility layout problem (FLP), in which we find the arrangement of departments with the smallest material handling cost, expressed as the product of distances and flows between departments. It is known that FLP can be formulated as a linear programming problem if the relative positioning of departments is specified, and thus can be solved to optimality. In this paper, we describe a custom interior-point algorithm for solving FLP with relative positioning constraints (FLPRC) that is much faster than the standard methods used in general-purpose solvers. We build a compact formulation of FLPRC and its dual, which enables us to establish the optimality condition very quickly. We use this optimality condition to implement the primal-dual interior-point method with an efficient Newton step computation that exploits the special structure of the Hessian. We confirm the effectiveness of our proposed model through applications to several well-known benchmark data sets. Our algorithm finds the optimal solution considerably faster.
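The objective being minimized is the sum over department pairs of flow times distance. A minimal sketch of that objective (the rectilinear distance and the toy data are assumptions; FLP formulations also commonly use Euclidean distances):

```python
def handling_cost(flow, centers):
    """Material-handling objective: sum of flow(i, j) times the
    rectilinear (L1) distance between department centers i and j."""
    n = len(centers)
    cost = 0.0
    for i in range(n):
        for j in range(n):
            d = (abs(centers[i][0] - centers[j][0])
                 + abs(centers[i][1] - centers[j][1]))
            cost += flow[i][j] * d
    return cost
```

The layout algorithm searches over feasible center positions (subject to department sizes and, in FLPRC, relative-positioning constraints) to minimize this quantity.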
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 78
    Publikationsdatum: 2021-02-07
    Beschreibung: The problem of classification for imbalanced datasets is frequently encountered in practical applications. The data to be classified in this problem are skewed, i.e., the samples of one class (the minority class) are much fewer than those of the other classes (the majority classes). When dealing with imbalanced datasets, most classifiers encounter a common limitation: they often obtain better classification performance on the majority classes than on the minority class. To alleviate this limitation, in this study, a fuzzy rule-based modeling approach using information granules is proposed. Information granules, as entities derived and abstracted from data, can be used to describe and capture the characteristics (distribution and structure) of data from both majority and minority classes. Since the geometric characteristics of information granules depend on the distance measures used in the granulation process, the main idea of this study is to construct information granules on each class of imbalanced data using Minkowski distance measures and then to establish classification models using “If-Then” rules. The experimental results on synthetic and publicly available datasets show that the proposed Minkowski distance-based method can produce information granules with a range of geometric shapes and construct granular models with satisfactory classification performance for imbalanced datasets.
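The geometric flexibility comes from the Minkowski distance itself: varying the order p changes the shape of the unit ball (a diamond for p = 1, a circle for p = 2, a square as p grows), and hence the shape of the granules built around each class. A minimal sketch:

```python
import numpy as np

def minkowski(u, v, p):
    """Minkowski distance of order p between two vectors:
    p=1 gives Manhattan, p=2 Euclidean, p -> inf approaches Chebyshev."""
    u, v = np.asarray(u, float), np.asarray(v, float)
    return np.sum(np.abs(u - v) ** p) ** (1.0 / p)
```

A granule of radius r around a prototype c under order p is then the set of points x with `minkowski(x, c, p) <= r`, which is how the choice of p controls the granule's geometry.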
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 79
    Publikationsdatum: 2021-02-18
    Beschreibung: Background Social networking sites such as Facebook® can contribute to health promotion and behaviour change activities, but are currently underused for this purpose. In Germany, health insurance companies are relevant public health agencies that are responsible for health promotion, primary prevention, and health education. We intended to analyse the Facebook® accounts of health insurance providers to explore the range of prevention topics addressed, identify the communication formats used, and analyse user activity stimulated by prevention-related posts. Methods We performed a quantitative content analysis of text and picture data on Facebook® accounts (9 months in retrospect) in a cross-sectional study design. 64/159 German health insurance providers hosted a Facebook® page, and 25/64 posted ≥ 10 posts/month. Among those 25, we selected 17 health insurance companies (12 public, 5 private) for analysis. All posts were categorized according to domains in the classification system that was developed for this study, and the number of likes and comments was counted. The data were analysed using descriptive statistics. Results We collected 3,763 Facebook® posts, 32% of which had a focus on prevention. The frequency of prevention-related posts varied among health insurance providers (1–25 per month). The behaviours addressed most frequently were healthy nutrition, physical activity, and stress/anxiety relief, often in combination with each other. All these topics yielded moderate user engagement (30–120 likes, 2–10 comments per post). User engagement was highest when a competition or quiz was posted (11% of posts). The predominant communication pattern was health education, often supplemented by photos or links, or information about offline events (e.g. a public run). Some providers regularly engaged in two-way communication with users, inviting tips, stories or recipes, or responding to individual comments. Still, the interactive potential offered by Facebook® was only partly exploited. Conclusions Those few health insurance companies that regularly post content about prevention or healthy lifestyles on their Facebook® accounts comply with suggestions given for social media communication. Still, many health insurance providers fail to actively interact with wider audiences. Whether health communication on Facebook® can actually increase health literacy and lead to behaviour changes still needs to be evaluated.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 80
    Publikationsdatum: 2021-02-18
    Beschreibung: Background Systemic inflammatory response syndrome (SIRS) is defined as a non-specific inflammatory process in the absence of infection. SIRS increases susceptibility for organ dysfunction, and frequently affects the clinical outcome of affected patients. We evaluated a knowledge-based, interoperable clinical decision-support system (CDSS) for SIRS detection on a pediatric intensive care unit (PICU). Methods The CDSS developed retrieves routine data, previously transformed into an interoperable format, by using model-based queries and guideline- and knowledge-based rules. We evaluated the CDSS in a prospective diagnostic study from 08/2018–03/2019. 168 patients from a pediatric intensive care unit of a tertiary university hospital, aged 0 to 18 years, were assessed for SIRS by the CDSS and by physicians during clinical routine. Sensitivity and specificity (when compared to the reference standard) with 95% Wald confidence intervals (CI) were estimated on the level of patients and patient-days. Results Sensitivity and specificity were 91.7% (95% CI 85.5–95.4%) and 54.1% (95% CI 45.4–62.5%) on the patient level, and 97.5% (95% CI 95.1–98.7%) and 91.5% (95% CI 89.3–93.3%) on the level of patient-days. Physicians’ SIRS recognition during clinical routine was considerably less accurate (sensitivity of 62.0% (95% CI 56.8–66.9%), specificity of 83.3% (95% CI 80.4–85.9%)) when measured on the level of patient-days. Evaluation revealed valuable insights for the general design of the CDSS as well as specific rule modifications. Despite a lower than expected specificity, diagnostic accuracy was higher than that of the daily routine ratings, demonstrating the high potential of using our CDSS to help detect SIRS in clinical routine. Conclusions We successfully evaluated an interoperable CDSS for SIRS detection in the PICU. Our study demonstrated the general feasibility and potential of the implemented algorithms, but also revealed some limitations. In the next step, the CDSS will be optimized to overcome these limitations and will be evaluated in a multi-center study. Trial registration: NCT03661450 (ClinicalTrials.gov); registered September 7, 2018.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
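The Wald confidence intervals reported above follow the standard normal-approximation formula p̂ ± z·√(p̂(1−p̂)/n). A minimal sketch (the sample size in the example is made up for illustration):

```python
# Wald 95% CI for a proportion: p_hat +/- 1.96 * sqrt(p_hat*(1-p_hat)/n).

import math

def wald_ci(successes, n, z=1.96):
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    # clip to [0, 1], since a proportion cannot leave that range
    return max(0.0, p - half), min(1.0, p + half)

# e.g. a sensitivity of ~0.917 observed on 120 positive cases (made-up n)
lo, hi = wald_ci(110, 120)
print(round(lo, 3), round(hi, 3))
```

Note that the Wald interval is known to behave poorly for proportions near 0 or 1 or for small n, which is why the interval width in the study shrinks markedly when moving from the patient level to the much larger patient-day denominator.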
  • 81
    Publikationsdatum: 2021-02-24
    Beschreibung: Custodian capture occurs when a player has placed two of his pieces on opposite sides of an orthogonal line of the opponent’s men. Each piece moves like the rook in Chess. Games with this rule have been played since pre-modern times in two-player strategy board games across different cultures: Ludus Latrunculorum (Kowalski’s reconstruction), Hasami shogi in Japan, Mak-yek in Thailand and Myanmar, Ming Mang in Tibet, and so on. We prove that a custodian capture game on an n×n board is EXPTIME-hard if the first player to capture five or more men in total wins.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
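The capture rule above is easy to state operationally: after a move, any maximal orthogonal run of opponent pieces adjacent to the placed piece is captured if its far end is closed off by another of the mover's pieces. A small sketch (the board encoding is this sketch's assumption):

```python
# Custodian capture check: a contiguous orthogonal line of opponent pieces
# is captured when flanked on both ends by the mover's pieces.
# Encoding (an assumption of this sketch): board[y][x], 0 empty, players 1 and 2.

def captures(board, x, y, player):
    """Return opponent squares captured by the piece just placed at (x, y)."""
    n, opp = len(board), 3 - player
    taken = []
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        line, cx, cy = [], x + dx, y + dy
        while 0 <= cx < n and 0 <= cy < n and board[cy][cx] == opp:
            line.append((cx, cy))
            cx, cy = cx + dx, cy + dy
        # capture only if the run is closed off by one of the mover's pieces
        if line and 0 <= cx < n and 0 <= cy < n and board[cy][cx] == player:
            taken.extend(line)
    return taken

# player 1 just moved to (0, 0), flanking two of player 2's men against
# the piece already standing at (3, 0)
board = [[1, 2, 2, 1],
         [0, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
print(captures(board, 0, 0, 1))  # [(1, 0), (2, 0)]
```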
  • 82
    Publikationsdatum: 2021-02-25
    Beschreibung: Background The Ministry of Health of Malaysia has invested significant resources to implement an electronic health record (EHR) system to ensure the full automation of hospitals for coordinated care delivery. Thus, evaluating whether the system has been effectively utilized is necessary, particularly regarding its post-implementation impact on primary care providers’ performance. Methods Convenience sampling was employed for data collection in three government hospitals for 7 months. A standardized effectiveness survey for EHR systems was administered to primary health care providers (specialists, medical officers, and nurses) as they participated in medical education programs. Empirical data were assessed by employing partial least squares-structural equation modeling for hypothesis testing. Results The results demonstrated that knowledge quality had the highest score for predicting performance and had a large effect size, whereas system compatibility was the most substantial system quality component. The findings indicated that EHR systems supported the clinical tasks and workflows of care providers, which increased system quality, whereas the increased quality of knowledge improved user performance. Conclusion Given these findings, knowledge quality and effective use should be incorporated into evaluating EHR system effectiveness in health institutions. Data mining features can be integrated into current systems for efficiently and systematically generating health-population and disease-trend analyses, improving the clinical knowledge of care providers, and increasing their productivity. The validated survey instrument can be further tested with empirical surveys in other public and private hospitals with different interoperable EHR systems.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 83
    Publikationsdatum: 2021-02-19
    Beschreibung: Recent discoveries of distinct molecular subtypes have led to remarkable advances in treatment for a variety of diseases. While subtyping via unsupervised clustering has received a great deal of interest, most methods rely on basic statistical or machine learning methods. At the same time, techniques based on graph clustering, particularly clique-based strategies, have been successfully used to identify disease biomarkers and gene networks. A graph theoretical approach based on the paraclique algorithm is described that can easily be employed to identify putative disease subtypes and serve as an aid in outlier detection as well. The feasibility and potential effectiveness of this method is demonstrated on publicly available gene co-expression data derived from patient samples covering twelve different disease families.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 84
    Publikationsdatum: 2021-02-17
    Beschreibung: Background We know little about the best approaches to design training for healthcare professionals. We thus studied how user-centered and theory-based design contribute to the development of a distance learning program for professionals, to increase their shared decision-making (SDM) with older adults living with neurocognitive disorders and their caregivers. Methods In this mixed-methods study, healthcare professionals who worked in family medicine clinics and homecare services evaluated a training program in a user-centered approach with several iterative phases of quantitative and qualitative evaluation, each followed by modifications. The program comprised an e-learning activity and five evidence summaries. A subsample assessed the e-learning activity during semi-structured think-aloud sessions. A second subsample assessed the evidence summaries they received by email. All participants completed a theory-based questionnaire to assess their intention to adopt SDM. Descriptive statistical analyses and qualitative thematic analyses were integrated at each round to prioritize training improvements with regard to the determinants most likely to influence participants’ intention. Results Of 106 participants, 98 completed their evaluations of either the e-learning activity or evidence summary (93%). The professions most represented were physicians (60%) and nurses (15%). Professionals valued the e-learning component to gain knowledge on the theory and practice of SDM, and the evidence summaries to apply the knowledge gained through the e-learning activity to diverse clinical contexts. The iterative design process allowed addressing most weaknesses reported. Participants’ intentions to adopt SDM and to use the summaries were high at baseline and remained positive as the rounds progressed. Attitude and social influence significantly influenced participants' intention to use the evidence summaries (P 
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 85
    Publikationsdatum: 2021-02-17
    Beschreibung: Background Summative eHealth evaluations frequently lack quality, which affects the generalizability of the evidence, and its use in practice and further research. To guarantee quality, a number of activities are recommended in the guidelines for evaluation planning. This study aimed to examine a case of an eHealth evaluation planning in a multi-national and interdisciplinary setting and to provide recommendations for eHealth evaluation planning guidelines. Methods An empirical eHealth evaluation process was developed through a case study. The empirical process was compared with selected guidelines for eHealth evaluation planning using a pattern-matching technique. Results Planning in the interdisciplinary and multi-national team demanded extensive negotiation and alignment to support the future use of the evidence created. The evaluation planning guidelines did not provide specific strategies for different set-ups of the evaluation teams. Further, they did not address important aspects of quality evaluation, such as feasibility analysis of the outcome measures and data collection, monitoring of data quality, and consideration of the methods and measures employed in similar evaluations. Conclusions Activities to prevent quality problems need to be incorporated in the guidelines for evaluation planning. Additionally, evaluators could benefit from guidance in evaluation planning related to the different set-ups of the evaluation teams.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 86
    Publikationsdatum: 2021-02-11
    Beschreibung: Background No case definition of Type 1 diabetes (T1D) for the claims data has been proposed in Japan yet. This study aimed to evaluate the performance of candidate case definitions for T1D using Electronic health care records (EHR) and claims data in a University Hospital in Japan. Methods The EHR and claims data for all the visiting patients in a University Hospital were used. As the candidate case definitions for claims data, we constructed 11 definitions by combinations of International Statistical Classification of Diseases and Related Health Problems, Tenth Revision. (ICD 10) code of T1D, the claims code of insulin needles for T1D patients, basal insulin, and syringe pump for continuous subcutaneous insulin infusion (CSII). We constructed a predictive model for T1D patients using disease names, medical practices, and medications as explanatory variables. The predictive model was applied to patients of test group (validation data), and performances of candidate case definitions were evaluated. Results As a result of performance evaluation, the sensitivity of the confirmed disease name of T1D was 32.9 (95% CI: 28.4, 37.2), and positive predictive value (PPV) was 33.3 (95% CI: 38.0, 38.4). By using the case definition of both the confirmed diagnosis of T1D and either of the claims code of the two insulin treatment methods (i.e., syringe pump for CSII and insulin needles), PPV improved to 90.2 (95% CI: 85.2, 94.4). Conclusions We have established a case definition with high PPV, and the case definition can be used for precisely detecting T1D patients from claims data in Japan.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 87
    Publikationsdatum: 2021-02-11
    Beschreibung: Data streams are ubiquitous and related to the proliferation of low-cost mobile devices, sensors, wireless networks and the Internet of Things. While it is well known that complex phenomena are not stationary and exhibit a concept drift when observed for a sufficiently long time, relatively few studies have addressed the related problem of feature drift. In this paper, a variation of the QuickReduct algorithm suitable for processing data streams is proposed and tested: it builds an evolving reduct that dynamically selects the relevant features in the stream, removing redundant features and adding newly relevant ones as soon as they emerge. Tests on five publicly available datasets with an artificially injected drift have confirmed the effectiveness of the proposed method.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
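For readers unfamiliar with the base algorithm that the streaming variant above extends, here is a sketch of the classic (static) QuickReduct: greedily add the feature that most increases the rough-set dependency degree until it matches that of the full feature set. This is the textbook algorithm, not the paper's evolving version.

```python
# Static QuickReduct sketch: greedy forward selection driven by the
# rough-set dependency degree (fraction of label-consistent objects).

from collections import defaultdict

def dependency(rows, labels, attrs):
    """Fraction of rows whose attribute-value class is label-consistent."""
    groups = defaultdict(set)
    for row, lab in zip(rows, labels):
        groups[tuple(row[a] for a in attrs)].add(lab)
    consistent = {k for k, labs in groups.items() if len(labs) == 1}
    return sum(tuple(r[a] for a in attrs) in consistent for r in rows) / len(rows)

def quickreduct(rows, labels):
    all_attrs = list(range(len(rows[0])))
    target, reduct = dependency(rows, labels, all_attrs), []
    while dependency(rows, labels, reduct) < target:
        best = max((a for a in all_attrs if a not in reduct),
                   key=lambda a: dependency(rows, labels, reduct + [a]))
        reduct.append(best)
    return reduct

rows = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
labels = [0, 1, 0, 1]
print(quickreduct(rows, labels))  # [1] — feature 1 alone determines the label
```

The streaming variant in the article maintains such a reduct incrementally as the relevance of features drifts over time, rather than recomputing it from scratch.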
  • 88
    Publikationsdatum: 2021-04-18
    Beschreibung: We propose and validate a method to find an implicit representation of a surface placed at a distance h from another implicit surface. With two such surfaces on either side of the original surface, a volumetric shell of predefined thickness can be obtained. The usability of the proposed method is demonstrated through providing solid models of triply periodic minimal surface (TPMS) geometries with a predefined constant and variable thickness. The method has an adjustable order of convergence. If applied to surfaces with spatially varying thickness, the convergence order is limited to second order. This accuracy is still substantially higher than the accuracy of any contemporary 3D printer that could benefit from the function as an infill volume for shells with predefined thicknesses.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
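A minimal special case of the offset-surface idea above (an assumption of this sketch, not the paper's general method): when f is an exact signed distance function, the implicit surface at distance h is simply the level set f(x) = h, so the offset function is f(x) − h.

```python
# Offset of an implicit surface, sketched for an exact signed distance
# function (SDF): the surface at distance h from {f = 0} is {f - h = 0}.

def sphere_sdf(x, y, z, r=1.0):
    """Exact SDF of a sphere of radius r centered at the origin."""
    return (x * x + y * y + z * z) ** 0.5 - r

def offset(f, h):
    """Implicit function whose zero level set lies at distance h from f's."""
    return lambda *p: f(*p) - h

shell_outer = offset(sphere_sdf, 0.1)
print(round(shell_outer(1.1, 0.0, 0.0), 6))  # 0.0: on the offset surface
```

The article's contribution is precisely that general implicit functions (such as common TPMS expressions) are not exact distance functions, so a naive shift like this is inaccurate and a dedicated construction with controlled convergence order is needed.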
  • 89
    Publikationsdatum: 2021-04-13
    Beschreibung: Objective To explore an effective algorithm based on an artificial neural network to correctly pick the minority of pregnant women with SLE suffering fetal loss outcomes from the majority with live births, and to train a well-behaved model as a clinical decision assistant. Methods We integrated the ideas of comparative and focused study into the artificial neural network and presented an effective algorithm aimed at imbalanced learning in small datasets. Results We collected 469 non-trivial pregnant patients with SLE, of whom 420 had live-birth outcomes and the other 49 ended in fetal loss. A well-trained imbalanced-learning model had a high sensitivity of 19/21 (90.8%) for the identification of patients with fetal loss outcomes. Discussion The misprediction of the two patients was explainable. Algorithmic improvements within the artificial neural network framework enhanced identification in imbalanced learning problems, and external validation increased the reliability of the algorithm. Conclusion The well-trained model was fully qualified to assist healthcare providers in making timely and accurate decisions.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 90
    Publikationsdatum: 2021-04-16
    Beschreibung: Cryptography is the science and study of protecting data in computer and communication systems from unauthorized disclosure and modification. An ordinary difference equation (a map) can be used in encryption–decryption algorithms. In particular, the Arnold’s cat and the sine-Gordon linear maps can be used in cryptographic algorithms for encoding digital images. In this article, a two-dimensional linear mKdV map derived from an ordinary difference mKdV equation will be used in a cryptographic encoding algorithm. The proposed encoding algorithm will be compared with those generated using sine-Gordon and Arnold’s cat maps via the correlations between adjacent pixels in the encrypted image and the uniformity of the pixel distribution. Note that the mKdV map is derived from the partial discrete mKdV equation with Consistency Around the Cube (CAC) properties, whereas the sine-Gordon map is derived from the partial discrete sine-Gordon equation, which does not have CAC properties.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
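As background for the comparison described above, here is a minimal sketch of the Arnold's cat map (not the article's mKdV map): the linear map (x, y) → (x + y, x + 2y) mod n applied as a pixel permutation on an n×n image. Because the map is invertible modulo n, iterating it eventually returns the original image, which is what makes it usable as a shuffling step in image encryption.

```python
# Arnold's cat map as a pixel shuffle: (x, y) -> (x + y, x + 2y) mod n.

def cat_map(img):
    n = len(img)
    out = [[0] * n for _ in range(n)]
    for y in range(n):
        for x in range(n):
            # new position of pixel (x, y) under the cat map
            out[(x + 2 * y) % n][(x + y) % n] = img[y][x]
    return out

img = [[0, 1], [2, 3]]
print(cat_map(img))  # [[0, 2], [3, 1]]
# the map is periodic: for n = 2 three applications restore the image
print(cat_map(cat_map(cat_map(img))) == img)  # True
```

Schemes like those compared in the article measure how well such shuffles destroy the correlation between adjacent pixels and flatten the pixel histogram.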
  • 91
    Publikationsdatum: 2021-04-15
    Beschreibung: Background Semantic categorization analysis of clinical trial eligibility criteria based on natural language processing technology is crucial for optimizing clinical trial design and building automated patient recruitment systems. However, most related research has focused on English eligibility criteria, and to the best of our knowledge, no studies have examined Chinese eligibility criteria. Thus, in this study, we aimed to explore the semantic categories of Chinese eligibility criteria. Methods We downloaded the clinical trial registration files from the website of the Chinese Clinical Trial Registry (ChiCTR) and extracted both the Chinese eligibility criteria and the corresponding English eligibility criteria. We represented the criteria sentences based on the Unified Medical Language System semantic types and applied a hierarchical clustering algorithm for the induction of semantic categories. Furthermore, in order to explore the classification performance of Chinese eligibility criteria with our developed semantic categories, we implemented multiple classification algorithms, including four baseline machine learning algorithms (LR, NB, kNN, SVM), three deep learning algorithms (CNN, RNN, FastText) and two pre-trained language models (BERT, ERNIE). Results We developed 44 semantic categories in total, summarized them into 8 topic groups, and investigated the average incidence and prevalence in 272 hepatocellular carcinoma related Chinese clinical trials. Compared with the categories previously proposed for English eligibility criteria, 13 novel categories were identified in Chinese eligibility criteria. The classification results show that most semantic categories performed quite well; the pre-trained language model ERNIE achieved the best performance, with a macro-average F1 score of 0.7980 and a micro-average F1 score of 0.8484. Conclusion As a pilot study of Chinese eligibility criteria analysis, we developed the 44 semantic categories by hierarchical clustering for the first time, and validated the classification capacity with multiple classification algorithms.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 92
    Publikationsdatum: 2021-04-29
    Beschreibung: Background Robust, flexible, and integrated health information systems (HIS) are essential to achieving national and international goals in health and development. Such systems are still uncommon in most low and middle income countries. This article describes a first-phase activity in Tanzania to integrate the country’s vertical health management information systems with the help of an interoperability layer that enables cross-program data exchange. Methods From 2014 to 2019, the Tanzanian government and partners implemented a five-step procedure based on the “Mind the GAPS” (governance, architecture, program management, and standards) framework and using both proprietary and open-source tools. In collaboration with multiple stakeholders, the team developed the system to address major data challenges via four fully documented “use case scenarios” addressing data exchange among hospitals, between services and the supply chain, across digital data systems, and within the supply chain reporting system. This work included developing the architecture for health system data exchange, putting a middleware interoperability layer in place to facilitate the exchange, and training to support use of the system and the data it generates. Results Tanzania successfully completed the five-step procedure for all four use cases. Data exchange is currently enabled among 15 separate information systems, and has resulted in improved data availability and significant time savings. The government has adopted the health information exchange within the national strategy for health care information, and the system is being operated and managed by Tanzanian officials. Conclusion Developing an integrated HIS requires a significant time investment, but it ultimately benefits both programs and patients. Tanzania’s experience may interest countries that are developing their HIS programs.
    Digitale ISSN: 1472-6947
    Thema: Informatik , Medizin
    Publiziert von BioMed Central
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 93
    Publikationsdatum: 2021-04-22
    Beschreibung: Insertion-deletion systems have been introduced as a formalism to model operations that find their counterparts in ideas of bio-computing, more specifically, when using DNA or RNA strings and biological mechanisms that work on these strings. So-called matrix control has been introduced to insertion-deletion systems in order to enable writing short program fragments. We discuss substitutions as a further type of operation, added to matrix insertion-deletion systems. For such systems, we additionally discuss the effect of appearance checking. This way, we obtain new characterizations of the family of context-sensitive and the family of recursively enumerable languages. Not much context is needed for systems with appearance checking to reach computational completeness. This also suggests that bio-computers may run rather traditionally written programs, as our simulations also show how Turing machines, like any other computational device, can be simulated by certain matrix insertion-deletion-substitution systems.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
  • 94
    Publikationsdatum: 2021-04-23
    Beschreibung: In the computational biology community there are many biological cases that are considered as multi-one-class classification problems. Examples include the classification of multiple tumor types, protein fold recognition and the molecular classification of multiple cancer types. In all of these cases, appropriately characterized real-world negative cases or outliers are impractical to obtain, and the positive cases might consist of different clusters, which in turn might lead to accuracy degradation. In this paper we present a novel algorithm named MultiKOC, a K-means-based multi-one-class classifier, to deal with this problem. The main idea is to execute a clustering algorithm over the positive samples to capture the hidden sub-structure of the given positive data, and then to build a one-class classifier for each cluster’s examples separately: in other words, to train one OC classifier on each piece of the sub-data. For a given new sample, the generated classifiers are applied. If it is rejected by all of those classifiers, the given sample is considered a negative sample; otherwise, it is a positive sample. The results of MultiKOC are compared with the traditional one-class, multi-one-class, ensemble one-class and two-class methods, yielding a significant improvement over the one-class methods and performance comparable to that of the two-class methods.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
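The MultiKOC idea above can be sketched compactly. The simple distance-threshold one-class model below is an assumption of this sketch (the paper builds on standard one-class classifiers); the clusters stand in for a k-means result.

```python
# MultiKOC sketch: one one-class model per positive cluster; a sample is
# positive iff at least one per-cluster model accepts it.

import math

def one_class(cluster):
    """Toy one-class model: cluster center plus maximal member distance."""
    center = tuple(sum(c) / len(cluster) for c in zip(*cluster))
    radius = max(math.dist(s, center) for s in cluster) or 1e-9
    return center, radius

def multikoc_predict(x, models):
    # rejected by ALL per-cluster models -> negative; otherwise positive
    return any(math.dist(x, c) <= r for c, r in models)

# two well-separated positive clusters (as if found by k-means)
clusters = [[(0.0, 0.0), (0.2, 0.1), (0.1, 0.3)],
            [(9.0, 9.0), (9.2, 8.8), (8.9, 9.1)]]
models = [one_class(c) for c in clusters]
print(multikoc_predict((0.1, 0.1), models))  # True  (near the first cluster)
print(multikoc_predict((5.0, 5.0), models))  # False (rejected by both)
```

A single global one-class model covering both clusters would have to accept the whole region between them, which is exactly the accuracy degradation the per-cluster decomposition avoids.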
  • 95
    Publikationsdatum: 2021-04-22
    Beschreibung: MicroRNAs (miRNAs) are short RNA sequences that are actively involved in gene regulation. These regulators on the post-transcriptional level have been discovered in virtually all eukaryotic organisms. Additionally, miRNAs seem to exist in viruses and might also be produced in microbial pathogens. Initially, transcribed RNA is cleaved by Drosha, producing precursor miRNAs. We have previously shown that it is possible to distinguish between microRNA precursors of different clades by representing the sequences in a k-mer feature space. The k-mer representation considers the frequency of a k-mer in the given sequence. We further hypothesized that the relationship between k-mers (e.g., distance between k-mers) could be useful for classification. Three different distance-based features were created, tested, and compared. The three feature sets were entitled inter k-mer distance, k-mer location distance, and k-mer first–last distance. Here, we show that classification performance above 80% (depending on the evolutionary distance) is possible with a combination of distance-based and regular k-mer features. With these novel features, classification at closer evolutionary distances is better than using k-mers alone. Combining the features leads to accurate classification for larger evolutionary distances. For example, categorizing Homo sapiens versus Brassicaceae leads to an accuracy of 93%. When considering average accuracy, the novel distance-based features lead to an overall increase in effectiveness. In contrast, secondary-structure-based features did not lead to any effective separation among clades in this study. With this line of research, we support the differentiation between true and false miRNAs detected from next-generation sequencing data, provide an additional viewpoint for confirming miRNAs when the species of origin is known, and open up a new strategy for analyzing miRNA evolution.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
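Two of the feature types named above are easy to sketch: plain k-mer frequencies, and a positional feature in the spirit of the "first–last distance" (the exact definition below, distance between a k-mer's first and last occurrence, is this sketch's assumption; the article's definitions may differ in detail).

```python
# k-mer frequency features plus a simple positional (first-last) feature.

from collections import Counter

def kmer_counts(seq, k):
    """Frequency of every k-mer in the sequence (the regular k-mer features)."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def first_last_distance(seq, kmer):
    """Distance between the first and last occurrence of kmer; -1 if absent."""
    hits = [i for i in range(len(seq) - len(kmer) + 1)
            if seq.startswith(kmer, i)]
    return hits[-1] - hits[0] if hits else -1

seq = "ACGTACGTTT"
print(kmer_counts(seq, 2)["AC"])       # 2
print(first_last_distance(seq, "AC"))  # 4
```

Concatenating such count and distance values over all k-mers of a fixed k yields the feature vector on which a standard classifier can then separate precursor sequences by clade.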
  • 96
    Publikationsdatum: 2021-04-23
    Beschreibung: Finding substrings of a text T that match a regular expression p is a fundamental problem. Despite being the subject of extensive research, no solution with a time complexity significantly better than O(|T||p|) has been found. Backurs and Indyk in FOCS 2016 established conditional lower bounds for the algorithmic problem based on the Strong Exponential Time Hypothesis that help explain this difficulty. A natural question is whether we can improve the time complexity for matching the regular expression by preprocessing the text T. We show that, conditioned on the Online Matrix–Vector Multiplication (OMv) conjecture, even with arbitrary polynomial preprocessing time, a regular expression query on a text cannot be answered in strongly sublinear time, i.e., O(|T|^(1−ε)) for any ε > 0. Furthermore, if we extend the OMv conjecture to a plausible conjecture regarding Boolean matrix multiplication with polynomial preprocessing time, which we call Online Matrix–Matrix Multiplication (OMM), we can strengthen this hardness result to there being no solution with a query time that is O(|T|^(3/2−ε)). These results hold for alphabet sizes three or greater. We then provide data structures that answer queries in O(|T||p|/τ) time, where τ ∈ [1, |T|] is fixed at construction. These include a solution that works for all regular expressions with Exp(τ)·|T| preprocessing time and space. For patterns containing only ‘concatenation’ and ‘or’ operators (the same type used in the hardness result), we provide (1) a deterministic solution which requires Exp(τ)·|T|·log²|T| preprocessing time and space, and (2) when |p| ≤ |T|^z for z = 2^(o(log|T|)), a randomized solution with amortized query time which answers queries correctly with high probability, requiring Exp(τ)·|T|·2^(Ω(log|T|)) preprocessing time and space.
    Digitale ISSN: 1999-4893
    Thema: Informatik
    Standort Signatur Erwartet Verfügbarkeit
    BibTip Andere fanden auch interessant ...
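The O(|T||p|) baseline mentioned above is the classic NFA state-set simulation. Here is a sketch for exactly the restricted pattern class from the hardness result, concatenation and 'or' only (alternatives are assumed to be non-empty literal strings): at each text position a match may start, and a set of (alternative, prefix-length) states is advanced one character at a time.

```python
# O(|T||p|) substring matching for patterns of the form "lit1|lit2|...":
# simulate the pattern's NFA state set over the text, allowing a match
# attempt to start at every position.

def matches_somewhere(text, pattern):
    alts = pattern.split("|")          # non-empty literal alternatives assumed
    active = set()                     # states: (alt index, chars matched)
    for ch in text:
        active.add(None)               # a fresh match may start at this char
        nxt = set()
        for state in active:
            cands = ([(i, 0) for i in range(len(alts))]
                     if state is None else [state])
            for i, j in cands:
                if alts[i][j] == ch:
                    if j + 1 == len(alts[i]):
                        return True    # some alternative fully matched
                    nxt.add((i, j + 1))
        active = nxt
    return False

print(matches_somewhere("xxabcyy", "abd|bcy"))  # True ("bcy" occurs)
print(matches_somewhere("xxabcyy", "abd|bcz"))  # False
```

Each character touches at most O(|p|) states, giving the O(|T||p|) bound; the article's point is that, under the OMv/OMM conjectures, no amount of polynomial preprocessing of T can beat this by a polynomial factor.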
  • 97
    Publikationsdatum: 2021-04-22
    Beschreibung: The continuous p-median problem (CPMP) is one of the most popular and widely used models in location theory that minimizes the sum of distances from known demand points to the sought points called centers or medians. This NP-hard location problem is also useful for clustering (automatic grouping). In this case, the sought points are considered as cluster centers. Unlike the similar k-means model, p-median clustering is less sensitive to noisy data and to the appearance of outliers (separately located demand points that do not belong to any cluster). Local search algorithms, including Variable Neighborhood Search, as well as evolutionary algorithms demonstrate rather precise results. Various algorithms based on the use of greedy agglomerative procedures are capable of obtaining very accurate results that are difficult to improve on with other methods. The computational complexity of such procedures limits their use for large problems, although computations on massively parallel systems significantly expand their capabilities. In addition, the efficiency of agglomerative procedures is highly dependent on the setting of their parameters. For the majority of practically important p-median problems, one can choose a very efficient algorithm based on the agglomerative procedures. However, the parameters of such algorithms, which ensure their high efficiency, are difficult to predict. We introduce the concept of the AGGLr neighborhood based on the application of the agglomerative procedure, and investigate the search efficiency in such a neighborhood depending on its parameter r. Using the similarities between local search algorithms and (1 + 1)-evolutionary algorithms, as well as the ability of the latter to adapt their search parameters, we propose a new algorithm based on a greedy agglomerative procedure with the automatically tuned parameter r.
Our new algorithm does not require preliminary tuning of the parameter r of the agglomerative procedure, adjusting this parameter online, thus representing a more versatile computational tool. The advantages of the new algorithm are shown experimentally on problems with a data volume of up to 2,000,000 demand points.
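    The greedy agglomerative step behind an AGGLr-style neighborhood can be sketched on a toy one-dimensional instance: temporarily add r candidate centers, then greedily drop the least useful centers until p remain. The function names, the 1-D distances, and the random choice of candidates are illustrative assumptions, not the authors' implementation:

    ```python
    import random

    def pmedian_cost(points, centers):
        # p-median objective: sum of distances from each demand point
        # to its nearest center (1-D distances for simplicity).
        return sum(min(abs(p - c) for c in centers) for p in points)

    def agglomerative_improve(points, centers, r, candidates):
        # AGGLr-style step (sketch): add r candidate centers to the current
        # solution, then greedily remove centers one at a time until only
        # p remain, each time dropping the center whose removal hurts least.
        p = len(centers)
        pool = list(centers) + random.sample(candidates, r)
        while len(pool) > p:
            worst = min(range(len(pool)),
                        key=lambda i: pmedian_cost(points, pool[:i] + pool[i + 1:]))
            pool.pop(worst)
        # Keep the old solution if the agglomerative step did not improve it.
        return pool if pmedian_cost(points, pool) < pmedian_cost(points, centers) else centers
    ```

    An adaptive variant in the spirit of a (1 + 1)-evolutionary algorithm would adjust r online, e.g. increasing it after failed steps and decreasing it after successes.
    
    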
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 98
    Publication date: 2021-04-27
    Description: In the field of Artificial Intelligence, Bayesian Networks (BN) [...]
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 99
    Publication date: 2021-04-27
    Description: Schizophrenia is a serious mental illness associated with neurobiological deficits. Although brain activity during tasks (i.e., P300 activity) is considered a biomarker for diagnosing schizophrenia, brain activity at rest has the potential to reveal an inherent dysfunction in schizophrenia and can be used to understand the cognitive deficits in these patients. In this study, we developed a machine learning algorithm (MLA) based on eyes-closed resting-state electroencephalogram (EEG) datasets, which record neural activity in the absence of any task or external stimulus, with the aim of distinguishing schizophrenic patients (SCZs) from healthy controls (HCs). The MLA has two steps. In the first step, symbolic transfer entropy (STE), a measure of effective connectivity, is applied to the resting-state EEG data. In the second step, the MLA uses the STE matrix to find a set of features that can successfully discriminate SCZs from HCs. We found that the MLA could achieve a total accuracy of 96.92%, with a sensitivity of 95%, a specificity of 98.57%, a precision of 98.33%, an F1-score of 0.97, and a Matthews correlation coefficient (MCC) of 0.94 using only 10 out of 1900 STE features, which implies that the STE matrix extracted from resting-state EEG data may be a promising tool for the clinical diagnosis of schizophrenia.
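    The classification metrics reported above all derive from a single confusion matrix. The sketch below computes them from hypothetical true/false positive and negative counts; the counts in the usage example are illustrative, not the study's actual data:

    ```python
    import math

    def binary_metrics(tp, fp, tn, fn):
        # Standard confusion-matrix metrics for a binary classifier.
        acc  = (tp + tn) / (tp + fp + tn + fn)
        sens = tp / (tp + fn)          # sensitivity (recall)
        spec = tn / (tn + fp)          # specificity
        prec = tp / (tp + fp)          # precision
        f1   = 2 * prec * sens / (prec + sens)
        mcc  = ((tp * tn - fp * fn) /
                math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
        return dict(accuracy=acc, sensitivity=sens, specificity=spec,
                    precision=prec, f1=f1, mcc=mcc)

    # Hypothetical example: 60 patients (57 detected), 70 controls (69 correct).
    metrics = binary_metrics(tp=57, fp=1, tn=69, fn=3)
    ```
    
    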
    Digital ISSN: 1999-4893
    Subject: Computer Science
  • 100
    Publication date: 2021-04-26
    Description: Person re-identification (Re-ID) based on deep convolutional neural networks (CNNs) has achieved remarkable success, in part thanks to its speed. However, prevailing Re-ID models are usually built upon backbones manually designed for classification. To automatically design an effective Re-ID architecture, we propose a pedestrian re-identification algorithm based on knowledge distillation, called KDAS-ReID. As the teacher model's knowledge is transferred to the student model, the importance of that knowledge gradually decreases while the student model's performance improves. Therefore, instead of applying the distillation loss function directly, we use dynamic temperatures during the search stage and the training stage. Specifically, we start searching and training at a high temperature and gradually reduce the temperature to 1, so that the student model can better learn from the teacher model through soft targets. Extensive experiments demonstrate that KDAS-ReID outperforms not only other state-of-the-art Re-ID models on three benchmarks, but also the teacher model based on the ResNet-50 backbone.
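    The dynamic-temperature distillation idea can be sketched in plain Python: a temperature-softened KL divergence between teacher and student logits, with the temperature annealed toward 1 over training. The starting temperature `t0` and the linear schedule are our own illustrative assumptions, not necessarily the schedule used in KDAS-ReID:

    ```python
    import math

    def softmax(logits, t):
        # Temperature-scaled softmax: higher t yields softer targets.
        exps = [math.exp(z / t) for z in logits]
        total = sum(exps)
        return [e / total for e in exps]

    def distill_loss(student_logits, teacher_logits, t):
        # KL(teacher || student) on temperature-softened distributions,
        # scaled by t^2 as is conventional in knowledge distillation.
        p = softmax(teacher_logits, t)
        q = softmax(student_logits, t)
        kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
        return t * t * kl

    def temperature(step, total_steps, t0=4.0):
        # Linear anneal from t0 down to 1 over training (a sketch of the
        # dynamic-temperature idea described in the abstract).
        frac = step / max(total_steps - 1, 1)
        return t0 + (1.0 - t0) * frac
    ```

    At early steps the high temperature exposes the teacher's full soft-target distribution; as the temperature approaches 1, the loss converges to the ordinary KL between unscaled distributions.
    
    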
    Digital ISSN: 1999-4893
    Subject: Computer Science