ALBERT

All Library Books, journals and Electronic Records Telegrafenberg


  • 101
    Publication Date: 2021-04-27
    Description: Background The uptake of complex clinical decision support systems (CDSS) in daily practice remains low, despite their proven potential to reduce medical errors and to improve the quality of care. To improve the successful implementation of a complex CDSS, this study aims to identify the factors that hinder or facilitate clinicians' acceptance of a complex CDSS for treatment allocation of patients with chronic low back pain. Methods We tested a research model in which the intention of clinicians to use a CDSS is influenced by its perceived usefulness; this usefulness, in turn, is influenced by the perceived service benefits and perceived service risks. An online survey was created to test our research model, and the data were analysed using Partial Least Squares Structural Equation Modelling. The study population consisted of clinicians. The online questionnaire started with demographic questions and continued with a video animation of the complex CDSS followed by the set of measurement items. The online questionnaire ended with two open questions enquiring about the reasons to use, or not to use, a complex CDSS. Results Ninety-eight participants (46% general practitioners, 25% primary care physical therapists, and 29% clinicians at a rehabilitation centre) fully completed the questionnaire. Fifty-two percent of the respondents were male. The average age was 48 years (SD ± 12.2). The causal model suggests that perceived usefulness is the main factor contributing to the intention to use a complex CDSS. Perceived service benefits and risks are both significant antecedents of perceived usefulness, and perceived service risks are affected by the perceived threat to autonomy and trusting beliefs, particularly benevolence and competence.
Conclusions To improve the acceptance of complex CDSSs, it is important to address the risks, but the main focus during the implementation phase should be on the expected improvements in patient outcomes and the overall gain for clinicians. Our results will help the development of complex CDSSs that fit better into the daily clinical practice of clinicians.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 102
    Publication Date: 2021-04-27
    Description: Background This paper describes a model for estimating COVID-19 related excess deaths that are a direct consequence of insufficient hospital ward bed and intensive care unit (ICU) capacity. Methods Compartmental models were used to estimate deaths under different combinations of ICU and ward care required and received in England up to late April 2021. Model parameters were sourced from publicly available government information and organisations collating COVID-19 data. A sub-model was used to estimate the mortality scalars that represent increased mortality due to insufficient ICU or general ward bed capacity. Three illustrative scenarios for admissions numbers, ‘Optimistic’, ‘Middling’ and ‘Pessimistic’, were modelled and compared with the subsequent observations to the 3rd February. Results The key output was the demand and capacity model described. There were no excess deaths from a lack of capacity in the ‘Optimistic’ scenario. Several of the ‘Middling’ scenario applications resulted in excess deaths—up to 597 deaths (0.6% increase) with a 20% reduction compared to best estimate ICU capacity. All the ‘Pessimistic’ scenario applications resulted in excess deaths, ranging from 49,178 (17.0% increase) for a 20% increase in ward bed availability, to 103,735 (35.8% increase) for a 20% shortfall in ward bed availability. These scenarios took no account of the emergence of the new, more transmissible, variant of concern (b.1.1.7). Conclusions Mortality is increased when hospital demand exceeds available capacity. No excess deaths from breaching capacity would be expected under the ‘Optimistic’ scenario. The ‘Middling’ scenario could result in some excess deaths—up to a 0.7% increase relative to the total number of deaths. The ‘Pessimistic’ scenario would have resulted in significant excess deaths. Our sensitivity analysis indicated a range between 49,178 (17% increase) and 103,735 (35.8% increase). 
Given the new variant, the pessimistic scenario appeared increasingly likely and could have resulted in a substantial increase in the number of COVID-19 deaths. In the event, it would appear that capacity was not breached at any stage at a national level, with no excess deaths. It will remain unclear whether minor local capacity breaches resulted in any small number of excess deaths.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
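The core capacity-and-mortality-scalar logic described in the abstract above can be sketched in a few lines. This is a hypothetical toy model for illustration only, not the authors' compartmental implementation: patients admitted beyond capacity incur a higher mortality scalar, and an 'Optimistic'-style case with demand below capacity yields zero excess deaths.

```python
def excess_deaths(admissions, capacity, base_mortality, scalar):
    """Toy capacity model: patients within capacity die at the base rate;
    patients who cannot be accommodated die at base_mortality * scalar.
    Returns (total_deaths, excess_vs_unconstrained)."""
    treated = min(admissions, capacity)
    untreated = admissions - treated
    deaths = treated * base_mortality + untreated * base_mortality * scalar
    return deaths, deaths - admissions * base_mortality

# Demand below capacity ('Optimistic'-style scenario): no excess deaths.
print(excess_deaths(admissions=80, capacity=100, base_mortality=0.2, scalar=2.0))   # (16.0, 0.0)
# Demand 20% above capacity: the overflow suffers doubled mortality.
print(excess_deaths(admissions=120, capacity=100, base_mortality=0.2, scalar=2.0))  # (28.0, 4.0)
```

All numbers (capacity, mortality rate, scalar) are invented; the paper estimates its mortality scalars from a dedicated sub-model.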
  • 103
    Publication Date: 2021-04-25
    Description: As studies move into deeper characterization of the impact of selection through non-neutral mutations in whole genome population genetics, modeling for selection becomes crucial. Moreover, epistasis has long been recognized as a significant component in understanding the evolution of complex genetic systems. We present a backward coalescent model, EpiSimRA, that accommodates multiple loci selection, with multi-way (k-way) epistasis for any arbitrary k. Starting from arbitrary extant populations with epistatic sites, we trace the Ancestral Recombination Graph (ARG), sampling relevant recombination and coalescent events. Our framework allows for studying different complex evolutionary scenarios in the presence of selective sweeps, positive and negative selection with multiway epistasis. We also present a forward counterpart of the coalescent model based on a Wright-Fisher (WF) process, which we use as a validation framework, comparing the hallmarks of the ARG between the two. We provide the first framework that allows a nose-to-nose comparison of multiway epistasis in a coalescent simulator with its forward counterpart with respect to the hallmarks of the ARG. We demonstrate, through extensive experiments, that EpiSimRA is consistently superior in terms of performance (seconds vs. hours) in comparison to the forward model without compromising on its accuracy.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
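The forward validation counterpart described above is a Wright-Fisher (WF) process. A minimal single-locus sketch of such a forward model (assuming one biallelic site and a simple selection coefficient, far simpler than the paper's multi-locus epistatic setting) looks like this:

```python
import random

def wright_fisher(n, p0, s, generations, rng):
    """Forward Wright-Fisher simulation at one biallelic locus.
    n: haploid population size, p0: initial allele frequency,
    s: selection coefficient of the focal allele (s = 0 is pure drift)."""
    p = p0
    for _ in range(generations):
        # Selection biases the sampling probability; resampling n offspring is drift.
        w = p * (1 + s) / (p * (1 + s) + (1 - p))
        p = sum(rng.random() < w for _ in range(n)) / n
    return p

rng = random.Random(42)
p_final = wright_fisher(n=500, p0=0.5, s=0.1, generations=100, rng=rng)
# A beneficial allele (s > 0) starting at frequency 0.5 is very likely near fixation.
```

The backward coalescent model in the paper avoids simulating every generation forward like this, which is why it runs in seconds rather than hours.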
  • 104
    Publication Date: 2021-04-25
    Description: The problem of the analysis of datasets formed by the results of group expert assessment of objects by a certain set of features is considered. Such datasets may contain mismatched, including conflicting, values of object evaluations for the analyzed features. In addition, the values of the assessments for the features can be not only point-valued but also interval-valued due to the incompleteness and inaccuracy of the experts’ knowledge. Taking into account all the results of group expert assessment of objects for a certain set of features, estimated pointwise, can be carried out using the multiset toolkit. To process interval values of assessments, it is proposed to use a linguistic approach, which involves the use of a linguistic scale in order to describe various strategies for evaluating objects (conservative, neutral and risky) and to implement various decision-making strategies in the problems of clustering, classification, and ordering of objects. The linguistic approach to working with objects assessed by a group of experts with interval values of assessments has been successfully applied to the analysis of a dataset of competitive projects. For the dataset under consideration, using various assessment strategies, solutions of the clustering, classification, and ordering problems were obtained, with a study of the influence of the chosen assessment strategy on the results of solving the corresponding problem.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 105
    Publication Date: 2021-04-27
    Description: This paper presents a new approach based on Multi-Criteria Decision Analysis (MCDA), named PROMETHEE-SAPEVO-M1, through its implementation and feasibility in the decision-making process for the evaluation of attack helicopters for the Brazilian Navy. The proposed methodology integrates ordinal evaluation into the cardinal procedure of the PROMETHEE method, making it possible to handle qualitative and quantitative data and to generate criteria weights by pairwise evaluation, transparently. The modeling provides three models of preference analysis (partial, complete, and outranking by intervals), along with an intra-criterion analysis by veto threshold, enabling the analysis of the performance of an alternative in a specific criterion. As a demonstration of the application, a case study is carried out on the PROMETHEE-SAPEVO-M1 web platform, addressing a strategic analysis of attack helicopters to be acquired by the Brazilian Navy, given the need to evaluate multiple specifications with different levels of importance within the problem context. The modeling implementation in the case study is described in detail, first evaluating the alternatives in each criterion and then presenting the results of the three models of preference analysis, along with the intra-criterion analysis and a rank reversal procedure. Moreover, a comparative analysis with the PROMETHEE method is performed, exploring the main features of PROMETHEE-SAPEVO-M1, and a discussion section presents the main points of the proposal. Therefore, this paper provides a valuable contribution to academia and society, since it represents the application of a state-of-the-art MCDA method, contributing to the resolution of diverse real decision-making problems.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 106
    Publication Date: 2021-03-02
    Description: Background Retrieving gene and disease information from a vast collection of biomedical abstracts to provide doctors with clinical decision support is one of the important research directions of Precision Medicine. Method We propose a novel article retrieval method based on expanded-word and co-word analyses, also applying Cuckoo Search to optimize the parameters of the retrieval function. The main goal is to retrieve the abstracts of biomedical articles that refer to treatments. The method adopts the BM25 algorithm to calculate the score of abstracts; we, however, propose an improved version of BM25 that computes the scores of expanded words and co-words, leading to a composite retrieval function, which is then optimized using Cuckoo Search. The proposed method aims to find both disease and gene information in the abstract of the same biomedical article, in order to achieve higher relevance and hence a higher article score. Besides, we investigate the influence of different parameters on the retrieval algorithm and summarize how they meet various retrieval needs. Results The data used in this manuscript are sourced from medical articles presented in the Text Retrieval Conference (TREC): Clinical Decision Support (CDS) Tracks of 2017, 2018, and 2019 in Precision Medicine. A total of 120 topics are tested. Three indicators are employed to compare the utilized methods, selected among those based only on the BM25 algorithm and its improved version, in order to conduct comparable experiments. The results showed that the proposed algorithm achieves better results. Conclusion The proposed method, an improved version of the BM25 algorithm, utilizes both co-word implementation and Cuckoo Search, and has been verified to achieve better results on a large number of experimental sets. Besides, a relatively simple query expansion method is implemented in this manuscript.
Future research will focus on ontology and semantic networks to expand the query vocabulary.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
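The baseline the authors improve on is classical BM25 term scoring. A minimal sketch of that baseline follows (plain BM25 only; the expanded-word/co-word extensions and the Cuckoo Search parameter tuning from the paper are not reproduced, and the tiny corpus is invented for illustration):

```python
import math
from collections import Counter

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Score one document (a list of tokens) against a query with the
    classical BM25 formula: idf * saturated term frequency with
    document-length normalization."""
    n_docs = len(corpus)
    avg_len = sum(len(d) for d in corpus) / n_docs
    tf = Counter(doc)
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)           # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1)  # smoothed idf
        f = tf[term]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc) / avg_len))
    return score

corpus = [["gene", "mutation", "therapy"],
          ["gene", "expression"],
          ["clinical", "trial", "design"]]
s = bm25_score(["gene", "therapy"], corpus[0], corpus)
```

The free parameters k1 (term-frequency saturation) and b (length normalization) are exactly the kind of retrieval-function parameters that the paper tunes with Cuckoo Search.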
  • 107
    Publication Date: 2021-03-02
    Description: Identifying and ranking node influence in complex networks is an important issue. It helps to understand the dynamics of the spreading process for designing efficient strategies to hinder or accelerate information spreading. The idea of decomposing a network to rank node influence is widely adopted because of its low computational complexity. In this type of method, decomposition is a dynamic process, and each iteration can be regarded as an inverse process of spreading. In this paper, we propose a new ranking method, Dynamic Node Strength Decomposition, based on network decomposition. The spreading paths are distinguished by weighting the edges according to the nodes at both ends. The change of local structure in the process of decomposition is considered. Our experimental results on four real networks with different sizes show that the proposed method can generate a more monotonic ranking list and identify node influence more effectively.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
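Decomposition-based ranking methods of this family build on the classical k-core peeling idea: repeatedly remove a minimum-degree node, treating each removal as an inverse spreading step. A plain unweighted sketch is below; the paper's method additionally weights edges and tracks node strength, which is not reproduced here.

```python
def core_numbers(adj):
    """Classical k-core decomposition: peel minimum-degree nodes; each
    node's core number is the largest k such that it survives in the k-core."""
    deg = {v: len(adj[v]) for v in adj}
    alive = set(adj)
    core = {}
    k = 0
    while alive:
        v = min(alive, key=deg.get)      # peel a current minimum-degree node
        k = max(k, deg[v])               # core number never decreases
        core[v] = k
        alive.remove(v)
        for w in adj[v]:
            if w in alive:
                deg[w] -= 1
    return core

# Triangle {0, 1, 2} with a pendant node 3 attached to node 2:
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(core_numbers(adj))  # node 3 sits in the 1-core, the triangle nodes in the 2-core
```

Because all ties in the triangle resolve to the same core number, the result is deterministic even though the peeling order is not.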
  • 108
    Publication Date: 2021-02-28
    Description: This paper presents a performance comparison of greedy heuristics for a recent variant of the dominating set problem known as the minimum positive influence dominating set (MPIDS) problem. This APX-hard combinatorial optimization problem has applications in social networks. Its aim is to identify a small subset of key influential individuals in order to facilitate the spread of positive influence in the whole network. In this paper, we focus on the development of a fast and effective greedy heuristic for the MPIDS problem, because greedy heuristics are an essential component of more sophisticated metaheuristics. Thus, the development of well-working greedy heuristics supports the development of efficient metaheuristics. Extensive experiments conducted on a wide range of social networks and complex networks confirm the overall superiority of our greedy algorithm over its competitors, especially when the problem size becomes large. Moreover, we compare our algorithm with the integer linear programming solver CPLEX. While the performance of CPLEX is very strong for small and medium-sized networks, it reaches its limits when being applied to the largest networks. However, even in the context of small and medium-sized networks, our greedy algorithm is only 2.53% worse than CPLEX.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
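A bare-bones version of such a greedy heuristic can be sketched as follows. This is a generic max-gain greedy for the positive-influence condition (every node must end up with at least half of its neighbours in the set), not the specific heuristic proposed in the paper:

```python
from math import ceil

def greedy_mpids(adj):
    """Greedy heuristic for a minimum positive influence dominating set:
    repeatedly add the node that currently helps the most deficient
    neighbours, until every node has >= ceil(deg/2) neighbours in the set."""
    need = {v: ceil(len(adj[v]) / 2) for v in adj}
    dom = set()

    def deficit(v):
        return need[v] - len(adj[v] & dom)

    while any(deficit(v) > 0 for v in adj):
        # Any deficient node has a neighbour outside dom, so max gain >= 1
        # and the loop always terminates.
        best = max((u for u in adj if u not in dom),
                   key=lambda u: sum(1 for w in adj[u] if deficit(w) > 0))
        dom.add(best)
    return dom

# Star graph: centre "c" with leaves "a", "b", "d".
adj = {"c": {"a", "b", "d"}, "a": {"c"}, "b": {"c"}, "d": {"c"}}
dom = greedy_mpids(adj)
```

On the star, the centre is picked first (it satisfies all three leaves at once), then two leaves are added to cover the centre's own requirement.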
  • 109
    Publication Date: 2021-02-27
    Description: Background Malignant brain tumor diseases exhibit differences within molecular features depending on the patient’s age. Methods In this work, we use gene mutation data from public resources to explore age specifics about glioma. We use both an explainable clustering as well as classification approach to find and interpret age-based differences in brain tumor diseases. We estimate age clusters and correlate age specific biomarkers. Results Age group classification shows known age specifics but also points out several genes which, so far, have not been associated with glioma classification. Conclusions We highlight mutated genes to be characteristic for certain age groups and suggest novel age-based biomarkers and targets.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 110
    Publication Date: 2021-03-02
    Description: Efficient exact parameterized algorithms are an active research area and attract broad interest in the theoretical community. In the last few years, implementations for computing various parameters (parameter detection) have been established in parameterized challenges, such as treewidth, treedepth, hypertree width, feedback vertex set, or vertex cover. In theory, instances for which the considered parameter is small can be solved fast (problem evaluation), i.e., the runtime is bounded by an exponential function of the parameter. While such favorable theoretical guarantees exist, it is often unclear whether one can successfully implement these algorithms under practical considerations. In other words, can we design and construct implementations of parameterized algorithms such that they perform similarly to, or even better than, well-established problem solvers on instances where the parameter is small? Indeed, we can build an implementation that performs well under the theoretical assumptions. However, it could also well be that an existing solver implicitly takes advantage of a structure, which is often claimed for solvers that build on Sat-solving. In this paper, we consider finding one solution to instances of answer set programming (ASP), which is a logic-based declarative modeling and solving framework. Solutions for ASP instances are so-called answer sets. Interestingly, the problem of deciding whether an instance has an answer set is already located on the second level of the polynomial hierarchy. An ASP solver that employs treewidth as parameter and runs dynamic programming on tree decompositions is DynASP2. Empirical experiments show that this solver is fast on instances of small treewidth and can outperform modern ASP solvers when one counts answer sets. It remains open whether one can improve the solver such that it also finds one answer set fast and shows competitive behavior to modern ASP solvers on instances of low treewidth.
Unfortunately, theoretical models of modern ASP solvers already indicate that these solvers can solve instances of low treewidth fast, since they are based on Sat-solving algorithms. In this paper, we improve DynASP2 and construct the solver DynASP2.5, which uses a different approach. The new solver shows competitive behavior to state-of-the-art ASP solvers even for finding just one solution. We present empirical experiments showing that our new implementation quickly solves ASP instances that encode the Steiner tree problem on graphs with low treewidth. Our implementation is based on a novel approach that we call multi-pass dynamic programming (M-DPSINC). In the paper, we describe the underlying concepts of our implementation (DynASP2.5) and we argue why the techniques still yield correct algorithms.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 111
    Publication Date: 2021-03-21
    Description: Multiagent cooperation is one of the most attractive research fields in multiagent systems. Many attempts have been made by researchers in this field to promote cooperative behavior. However, several issues still exist, such as complex interactions among different groups of agents and redundant communication with irrelevant agents, which prevent the learning and convergence of agent cooperation behaviors. To address these limitations, a novel method called multiagent hierarchical cognition difference policy (MA-HCDP) is proposed in this paper. It includes a hierarchical group network (HGN), a cognition difference network (CDN), and a soft communication network (SCN). HGN is designed to distinguish the underlying information of diverse groups’ observations (including a friendly group, an enemy group, and an object group) and extract different high-dimensional state representations of different groups. CDN is designed based on a variational auto-encoder to allow each agent to choose its neighbors (communication targets) adaptively according to its cognition difference with respect to the environment. SCN is designed to handle the complex interactions among the agents with a soft attention mechanism. The results of simulations demonstrate the superior effectiveness of our method compared with existing methods.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 112
    Publication Date: 2021-03-19
    Description: In the context of combinatorial sampling, the so-called “unranking method” can be seen as a link between a total order over the objects and an effective way to construct an object of a given rank. The most classical order used in this context is the lexicographic order, which corresponds to the familiar word ordering in the dictionary. In this article, we propose a comparative study of four algorithms dedicated to the lexicographic unranking of combinations, including three algorithms that were introduced decades ago. We start the paper with the introduction of our new algorithm, based on a new strategy of computations using the classical factorial numeral system (or factoradics). Then, we present, at a high level, the three other algorithms. For each case, we analyze its time complexity on average, within a uniform framework, and describe its strengths and weaknesses. For about 20 years, such algorithms have been implemented using big-integer arithmetic rather than bounded-integer arithmetic, which makes the cost of computing some coefficients higher than previously stated. We propose improvements for all implementations that take this fact into account, and we give a detailed complexity analysis, which is validated by an experimental analysis. Finally, we show that, even if the algorithms are based on different strategies, all are doing very similar computations, and we extend our approach to the unranking of other classical combinatorial objects such as families counted by multinomial coefficients and k-permutations.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
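Lexicographic unranking of combinations is the building block the compared algorithms implement. A compact version based on the combinatorial number system (a generic textbook formulation, not any of the paper's four specific algorithms) skips whole blocks of combinations by counting them with binomial coefficients:

```python
from math import comb

def unrank_combination(rank, n, k):
    """Return the rank-th (0-based) k-combination of {0, ..., n-1}
    in lexicographic order."""
    result = []
    x = 0
    for i in range(k, 0, -1):
        # comb(n-x-1, i-1) combinations start with element x; skip whole
        # blocks until the target rank falls inside the current block.
        while comb(n - x - 1, i - 1) <= rank:
            rank -= comb(n - x - 1, i - 1)
            x += 1
        result.append(x)
        x += 1
    return result

print(unrank_combination(0, 5, 3))  # [0, 1, 2]
print(unrank_combination(9, 5, 3))  # [2, 3, 4]
```

Each binomial coefficient here is a big-integer computation for large n, which is exactly the cost issue the paper revisits for bounded-integer implementations.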
  • 113
    Publication Date: 2021-03-06
    Description: Background Colorectal cancer (CRC) is a common malignancy worldwide. Despite being the most common cancer in Singapore, the CRC screening rate remains low due to knowledge deficits and social reasons such as inconvenience and a lack of reminders or recommendations. A decision aid (DA) may facilitate an individual’s decision-making to undertake CRC screening by addressing misconceptions and barriers. We postulate that a more person-centred and culturally adapted DA will better serve the local population. The views of the target users are thus needed to develop such a DA. A CRC screening DA prototype has been adapted from an American DA to cater to Asian users. This study aimed to explore user perspectives on the adapted CRC screening DA prototype in terms of its design, content and perceived utility. Methods The study used in-depth interviews (IDIs) and focus group discussions (FGDs) to gather qualitative data from English-literate, multi-ethnic Asian adults aged 50 years and above. They had not yet been screened for CRC and were recruited from a public primary care clinic in Singapore. The interviews were audio-recorded, transcribed and analysed to identify emergent themes via thematic analysis. Results This study included 27 participants involved in 5 IDIs and 5 FGDs. Participants found the DA easily comprehensible and of appropriate length. They appreciated information about the options and proposed having multi-lingual DAs. The design, in terms of the layout, size and font, was well accepted, but there were suggestions to digitalize the DA. Participants felt that the visuals were useful, but there were concerns about modesty due to the realism of the illustrations. They would use the DA to share information with their family and to discuss CRC screening with their doctor when making a decision. They preferred the doctor to recommend CRC screening and to initiate the use of the DA.
Conclusions Participants generally had favourable perceptions of the DA prototype. A revised DA will be developed based on their feedback. Further input from doctors on the revised DA will be obtained before assessing its effectiveness in increasing the CRC screening rate in a randomized controlled trial.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 114
    Publication Date: 2021-03-06
    Description: Over the last decades, the face of health care has changed dramatically, with big improvements in what is technically feasible. However, there are indicators that the current approach to evaluating evidence in health care is not holistic and hence in the long run, health care will not be sustainable. New conceptual and normative frameworks for the evaluation of health care need to be developed and investigated. The current paper presents a novel framework of justifiable health care and explores how the use of artificial intelligence and big data can contribute to achieving the goals of this framework.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 115
    Publication Date: 2021-02-05
    Description: Transit network optimization can effectively improve transit efficiency and traffic conditions and reduce environmental pollution. In order to better meet the travel demands of passengers, the factors influencing passengers’ satisfaction with a customized bus are fully analyzed. Taking the minimum operating cost of the enterprise as the objective and considering the random travel time constraints of passengers, the customized bus routes are optimized. K-means clustering analysis is used to classify the passengers’ needs based on an analysis of the passenger travel demand of the customized shuttle bus, and the stochastic uncertainty of time under the operating environment of the customized shuttle bus line is fully considered. On the basis of meeting the passenger travel time requirements and minimizing the cost of service operation, an optimization model that maximizes the overall satisfaction of passengers and public transit enterprises is constructed. The smaller the value of the objective function, the lower the operating cost; a negative value indicates a profit. The model is processed by the deterministic processing method of random constraints, and a hybrid intelligent algorithm is then used to solve it. A stochastic simulation technique is used to train the stochastic constraints to approximate uncertain functions. Then, an improved immune clonal algorithm is used to solve the vehicle routing problem. Finally, a case study demonstrates that the method can reasonably and efficiently optimize the customized shuttle bus lines in the region.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
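The K-means step used above to group passenger travel demands can be sketched with plain Lloyd iterations. The 2-D points below are invented stand-ins for pickup coordinates; a real pipeline would also cluster on time windows and use a proper initialization such as k-means++ rather than the deterministic first-k-points initialization used here for reproducibility:

```python
def kmeans(points, k, iters=20):
    """Plain Lloyd's k-means on 2-D points: alternate between assigning
    each point to its nearest centroid and recomputing centroid means."""
    centroids = list(points[:k])  # deterministic init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                          + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        centroids = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centroids[j]          # keep old centroid if cluster empties
            for j, c in enumerate(clusters)
        ]
    return centroids, clusters

# Two well-separated "pickup" blobs:
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
cents, cls = kmeans(pts, 2)
```

On this toy input the algorithm converges after two iterations to one cluster per blob.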
  • 116
    Publication Date: 2021-02-15
    Description: Background Machine learning is a broad term encompassing a number of methods that allow the investigator to learn from the data. These methods may permit large real-world databases to be more rapidly translated into applications that inform patient-provider decision making. Methods This systematic literature review was conducted to identify published observational research that employed machine learning to inform decision making at the patient-provider level. The search strategy was implemented, and studies meeting the eligibility criteria were evaluated by two independent reviewers. Relevant data related to study design, statistical methods, and strengths and limitations were identified; study quality was assessed using a modified version of the Luo checklist. Results A total of 34 publications from January 2014 to September 2020 were identified and evaluated for this review. Diverse methods, statistical packages and approaches were used across the identified studies. The most common methods included decision tree and random forest approaches. Most studies applied internal validation, but only two conducted external validation. Most studies utilized one algorithm, and only eight studies applied multiple machine learning algorithms to the data. Seven items on the Luo checklist were met by fewer than 50% of the published studies. Conclusions A wide variety of approaches, algorithms, statistical software, and validation strategies were employed in the application of machine learning methods to inform patient-provider decision making. There is a need to ensure that multiple machine learning approaches are used, that the model selection strategy is clearly defined, and that both internal and external validation are performed, so that decisions for patient care are made with the highest-quality evidence. Future work should routinely employ ensemble methods incorporating multiple machine learning algorithms.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 117
    Publication Date: 2021-02-27
    Description: Recently, with the development of mobile devices and crowdsourcing platforms, spatial crowdsourcing (SC) has become more widespread. In SC, workers need to physically travel to complete spatial–temporal tasks during a certain period of time. The main problem in SC platforms is scheduling a set of suitable workers to achieve a set of spatial tasks based on different objectives. In practice, real-world applications of SC need to optimize multiple objectives together, and these objectives may sometimes conflict with one another. Furthermore, there is a lack of research dealing with the multi-objective optimization (MOO) problem within an SC environment. Thus, in this work we focused on task scheduling based on multi-objective optimization (TS-MOO) in SC, which is based on maximizing the number of completed tasks, minimizing the total travel costs, and ensuring the balance of the workload between workers. To solve this problem, we developed a new method, the multi-objective task scheduling optimization (MOTSO) model, consisting of two algorithms: the multi-objective particle swarm optimization (MOPSO) algorithm with the fitness function of Alabbadi et al., and a ranking strategy algorithm based on the task entropy concept and task execution duration. The main purpose of our ranking strategy is to improve and enhance the performance of our MOPSO. The primary goal of the proposed MOTSO model is to find an optimal solution based on the multiple objectives that conflict with one another. We conducted our experiments with both synthetic and real datasets; the experimental results and statistical analysis showed that our proposed model is effective in terms of maximizing the number of completed tasks, minimizing the total travel costs, and balancing the workload between workers.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
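The three conflicting objectives named in the abstract above (completed tasks, total travel cost, workload balance) can be made concrete with a small evaluation function for a candidate task-to-worker assignment. This is an illustrative sketch with invented coordinates, not the authors' MOTSO/MOPSO implementation.

```python
import math

# Hypothetical worker and task locations; an assignment maps task -> worker.
workers = {"w1": (0.0, 0.0), "w2": (5.0, 5.0)}
tasks = {"t1": (1.0, 0.0), "t2": (4.0, 5.0), "t3": (0.0, 2.0)}

def evaluate(assignment):
    """Score an assignment on the three objectives from the abstract:
    completed tasks (maximize), total travel cost (minimize),
    and workload balance between workers (minimize the spread)."""
    completed = len(assignment)
    travel = 0.0
    load = {w: 0 for w in workers}
    for task, worker in assignment.items():
        tx, ty = tasks[task]
        wx, wy = workers[worker]
        travel += math.hypot(tx - wx, ty - wy)  # Euclidean travel distance
        load[worker] += 1
    counts = list(load.values())
    balance = max(counts) - min(counts)  # 0 means perfectly balanced
    return completed, travel, balance

print(evaluate({"t1": "w1", "t2": "w2", "t3": "w1"}))  # → (3, 4.0, 1)
```

A multi-objective optimizer such as MOPSO would search over many such assignments, keeping the non-dominated ones rather than collapsing the three scores into a single number.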
  • 118
    Publication Date: 2021-02-27
    Description: The choice of which objective functions, or benchmark problems, should be used to test an optimization algorithm is a crucial part of the algorithm selection framework. Benchmark suites that are often used in the literature have been shown to exhibit poor coverage of the problem space. Exploratory landscape analysis can be used to quantify characteristics of objective functions. However, exploratory landscape analysis measures are based on samples of the objective function, and there is a lack of work on the appropriate choice of sample size needed to produce reliable measures. This study presents an approach to determine the minimum sample size needed to obtain robust exploratory landscape analysis measures. Based on reliable exploratory landscape analysis measures, a self-organizing feature map is used to cluster a comprehensive set of benchmark functions. From this, a benchmark suite that has better coverage of the single-objective, boundary-constrained problem space is proposed.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
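To make the sample-size question above concrete, here is one classic sample-based landscape measure, fitness-distance correlation, estimated from random samples of a 2-D sphere function. The function and sample sizes are illustrative only; the paper works with a full suite of exploratory landscape analysis measures.

```python
import random

def sphere(x):
    # Simple benchmark objective: f(x) = sum of squares.
    return sum(v * v for v in x)

def fdc(samples):
    """Fitness-distance correlation: Pearson correlation between fitness
    and distance to the best sampled point (a classic landscape measure)."""
    fitness = [sphere(x) for x in samples]
    best = samples[fitness.index(min(fitness))]
    dist = [sum((a - b) ** 2 for a, b in zip(x, best)) ** 0.5 for x in samples]
    n = len(samples)
    mf, md = sum(fitness) / n, sum(dist) / n
    cov = sum((f - mf) * (d - md) for f, d in zip(fitness, dist))
    sf = sum((f - mf) ** 2 for f in fitness) ** 0.5
    sd = sum((d - md) ** 2 for d in dist) ** 0.5
    return cov / (sf * sd)

# The measure is itself a random variable: small samples give noisy values,
# which is exactly why a minimum reliable sample size matters.
random.seed(0)
for n in (30, 3000):
    pts = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(n)]
    print(n, round(fdc(pts), 3))
```

Repeating the small-sample run with different seeds shows much larger variance than the large-sample run, which is the instability the study's minimum-sample-size procedure is designed to bound.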
  • 119
    Publication Date: 2021-03-30
    Description: Three-component systems of diffusion–reaction equations play a central role in the modelling and simulation of chemical processes in engineering, electro-chemistry, physical chemistry, biology, population dynamics, etc. A major question in the simulation of three-component systems is how to guarantee non-negative species distributions in the model and how to calculate them effectively. Current numerical methods to enforce non-negative species distributions tend to be cost-intensive in terms of computation time, and they are not robust for large rate constants of the considered reaction. In this article, a method combining homotopy methods, modern augmented Lagrangian methods, and adaptive FEMs is outlined to obtain a robust and efficient method to simulate diffusion–reaction models with non-negative concentrations. Although the convergence analysis is not described rigorously in this paper, multiple numerical examples as well as an application to electro-deposition from an aqueous Cu2+-(β-alanine) electrolyte are presented.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
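Why non-negativity is delicate: a plain explicit Euler step of a reaction-diffusion system can drive concentrations negative when the rate constant k is large relative to the time step. The paper's homotopy / augmented-Lagrangian / adaptive-FEM combination is designed to avoid exactly this; the scheme below is the naive two-component baseline (with invented parameters), not the authors' method.

```python
def euler_step(u, v, k, dt, dx):
    """One explicit Euler step of u_t = u_xx - k*u*v, v_t = v_xx - k*u*v
    on a 1D grid with fixed zero boundary values."""
    def lap(w, i):
        # Second-order central difference for the 1D Laplacian.
        return (w[i - 1] - 2 * w[i] + w[i + 1]) / dx ** 2
    un, vn = u[:], v[:]
    for i in range(1, len(u) - 1):
        r = k * u[i] * v[i]  # bilinear reaction term
        un[i] = u[i] + dt * (lap(u, i) - r)
        vn[i] = v[i] + dt * (lap(v, i) - r)
    return un, vn

u = [0.0, 1.0, 1.0, 1.0, 0.0]
v = [0.0, 1.0, 1.0, 1.0, 0.0]
u, v = euler_step(u, v, k=100.0, dt=0.05, dx=1.0)
print(min(u))  # negative: the step overshoots for this large rate constant
```

For this k and dt, the reaction term removes far more mass in one step than the cell contains, so the concentration goes negative, illustrating the robustness problem the abstract refers to.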
  • 120
    Publication Date: 2021-04-03
    Description: Background Many patients with atrial fibrillation (AF) remain undiagnosed despite availability of interventions to reduce stroke risk. Predictive models to date are limited by data requirements and theoretical usage. We aimed to develop a model for predicting the 2-year probability of AF diagnosis and implement it as proof-of-concept (POC) in a production electronic health record (EHR). Methods We used a nested case–control design using data from the Indiana Network for Patient Care. The development cohort came from 2016 to 2017 (outcome period) and 2014 to 2015 (baseline). A separate validation cohort used outcome and baseline periods shifted 2 years before the respective development cohort times. Machine learning approaches were used to build the predictive model. Patients ≥ 18 years, later restricted to age ≥ 40 years, with at least two encounters and no AF during baseline, were included. In the 6-week EHR prospective pilot, the model was silently implemented in the production system at a large safety-net urban hospital. Three new and two previous logistic regression models were evaluated using receiver-operating characteristics. The number, characteristics, and CHA2DS2-VASc scores of patients identified by the model in the pilot are presented. Results After restricting age to ≥ 40 years, 31,474 AF cases (mean age, 71.5 years; female 49%) and 22,078 controls (mean age, 59.5 years; female 61%) comprised the development cohort. A 10-variable model using age, acute heart disease, albumin, body mass index, chronic obstructive pulmonary disease, gender, heart failure, insurance, kidney disease, and shock yielded the best performance (C-statistic, 0.80 [95% CI 0.79–0.80]). The model performed well in the validation cohort (C-statistic, 0.81 [95% CI 0.8–0.81]). In the EHR pilot, 7916/22,272 (35.5%; mean age, 66 years; female 50%) were identified as higher risk for AF; 5582 (70%) had CHA2DS2-VASc score ≥ 2. 
Conclusions Using variables commonly available in the EHR, we created a predictive model to identify 2-year risk of developing AF in those previously without diagnosed AF. Successful POC implementation of the model in an EHR provided a practical strategy to identify patients who may benefit from interventions to reduce their stroke risk.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
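The pilot above characterizes flagged patients by their CHA2DS2-VASc stroke-risk score. The standard scoring rule is simple enough to sketch directly; the example patient at the end is invented.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """Standard CHA2DS2-VASc stroke-risk score (0-9), as used in the
    abstract to characterize patients flagged by the AF model."""
    score = 0
    score += 1 if chf else 0                                     # C: congestive heart failure
    score += 1 if hypertension else 0                            # H: hypertension
    score += 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)   # A2 / A: age bands
    score += 1 if diabetes else 0                                # D: diabetes mellitus
    score += 2 if stroke_or_tia else 0                           # S2: prior stroke/TIA
    score += 1 if vascular_disease else 0                        # V: vascular disease
    score += 1 if female else 0                                  # Sc: sex category (female)
    return score

# A 70-year-old woman with hypertension: 1 (age 65-74) + 1 (HTN) + 1 (sex) = 3.
print(cha2ds2_vasc(70, True, False, True, False, False, False))  # → 3
```

A score ≥ 2, the threshold reported for 70% of the flagged pilot patients, is the conventional point at which anticoagulation is typically considered.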
  • 121
    Publication Date: 2021-04-03
    Description: Background Artificial intelligence (AI) research is highly dependent on the nature of the data available. With the steady increase of AI applications in the medical field, the demand for quality medical data is increasing significantly. We here describe the development of a platform for providing and sharing digital pathology data to AI researchers, and highlight challenges to overcome in operating a sustainable platform in conjunction with pathologists. Methods Over 3000 pathological slides from five organs (liver, colon, prostate, pancreas and biliary tract, and kidney) in histologically confirmed tumor cases by pathology departments at three hospitals were selected for the dataset. After digitalizing the slides, tumor areas were annotated and overlaid onto the images by pathologists as the ground truth for AI training. To reduce the pathologists’ workload, AI-assisted annotation was established in collaboration with university AI teams. Results A web-based data sharing platform was developed to share massive pathological image data in 2019. This platform includes 3100 images, and 5 pre-processing algorithms for AI researchers to easily load images into their learning models. Discussion Due to different regulations among countries for privacy protection, when releasing internationally shared learning platforms, it is considered to be most prudent to obtain consent from patients during data acquisition. Conclusions Despite limitations encountered during platform development and model training, the present medical image sharing platform can steadily fulfill the high demand of AI developers for quality data. This study is expected to help other researchers intending to generate similar platforms that are more effective and accessible in the future.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 122
    Publication Date: 2021-04-03
    Description: Background Ensuring data is of appropriate quality is essential for the secondary use of electronic health records (EHRs) in research and clinical decision support. An effective method of data quality assessment (DQA) is automating data quality rules (DQRs) to replace the time-consuming, labor-intensive manual process of creating DQRs, which makes it difficult to guarantee standardized and comparable DQA results. This paper presents a case study of automatically creating DQRs based on openEHR archetypes in a Chinese hospital to investigate the feasibility and challenges of automating DQA for EHR data. Methods The clinical data repository (CDR) of the Shanxi Dayi Hospital is an archetype-based relational database. Four steps are undertaken to automatically create DQRs in this CDR database. First, the keywords and features of archetypes relevant to DQA were identified by mapping them to Kahn’s well-established DQA framework. Second, the templates of DQRs corresponding to these identified keywords and features were created in the structured query language (SQL). Third, the quality constraints were retrieved from archetypes. Fourth, these quality constraints were automatically converted to DQRs according to the pre-designed templates and the mapping relationships between archetypes and data tables. We utilized the archetypes of the CDR to automatically create DQRs to meet quality requirements of the Chinese Application-Level Ranking Standard for EHR Systems (CARSES) and evaluated their coverage by comparing them with expert-created DQRs. Results We used 27 archetypes to automatically create 359 DQRs, 319 of which agree with the expert-created DQRs, covering 84.97% (311/366) of the requirements of the CARSES. The auto-created DQRs had varying levels of coverage of the four quality domains mandated by the CARSES: 100% (45/45) of consistency, 98.11% (208/212) of completeness, 54.02% (57/87) of conformity, and 50% (11/22) of timeliness. 
Conclusion It’s feasible to create DQRs automatically based on openEHR archetypes. This study evaluated the coverage of the auto-created DQRs for a typical DQA task of Chinese hospitals, the CARSES. Challenges of automating DQR creation were identified, such as quality requirements based on semantics and complex constraints involving multiple elements. This research can inform the exploration of automatic DQR creation and contribute to automatic DQA.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
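The template step described above (quality constraints filled into per-domain SQL templates) might look like the following sketch. The table names, column names, and constraint encoding are invented for illustration; the paper derives them from openEHR archetype definitions rather than a hand-written dict.

```python
# Hypothetical archetype-derived constraints (illustrative, not openEHR syntax).
constraints = [
    {"table": "patient", "column": "birth_date", "rule": "not_null"},
    {"table": "lab_result", "column": "value", "rule": "range", "min": 0, "max": 1000},
]

TEMPLATES = {
    # Completeness check: count rows where a mandatory field is missing.
    "not_null": "SELECT COUNT(*) FROM {table} WHERE {column} IS NULL",
    # Conformity check: count rows outside the allowed value range.
    "range": ("SELECT COUNT(*) FROM {table} "
              "WHERE {column} NOT BETWEEN {min} AND {max}"),
}

def build_dqrs(constraints):
    """Fill each constraint into the SQL template matching its rule type."""
    return [TEMPLATES[c["rule"]].format(**c) for c in constraints]

for sql in build_dqrs(constraints):
    print(sql)
```

Each generated query returns a violation count, so a result of zero means the rule's quality requirement is satisfied; this is what makes the resulting DQA results standardized and comparable across runs.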
  • 123
    Publication Date: 2021-04-12
    Description: We construct empirically based regression models for estimating the tour length in the Close Enough Traveling Salesman Problem (CETSP). In the CETSP, a customer is considered visited when the salesman visits any point in the customer’s service region. We build our models using as many as 14 independent variables on a set of 780 benchmark instances of the CETSP and compare the estimated tour lengths to the results from a Steiner zone heuristic. We validate our results on a new set of 234 instances that are similar to the 780 benchmark instances. We also generate results for a new set of 72 larger instances. Overall, our models fit the data well and do a very good job of estimating the tour length. In addition, we show that our modeling approach can be used to accurately estimate the optimal tour lengths for the CETSP.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
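A scaled-down version of the regression idea above: fit a square-root-law model for tour length by ordinary least squares. The data points are synthetic (chosen so the fit is exact) and the single predictor stands in for the paper's models with up to 14 independent variables.

```python
import math

# Synthetic (number of customers, tour length) pairs following L = 1 + 3*sqrt(n).
data = [(4, 7.0), (16, 13.0), (36, 19.0), (64, 25.0), (100, 31.0)]

def fit_sqrt_law(data):
    """Ordinary least squares for L ≈ b0 + b1 * sqrt(n)."""
    xs = [math.sqrt(n) for n, _ in data]
    ys = [y for _, y in data]
    m = len(data)
    mx, my = sum(xs) / m, sum(ys) / m
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    b0 = my - b1 * mx
    return b0, b1

b0, b1 = fit_sqrt_law(data)
print(round(b0, 2), round(b1, 2))  # → 1.0 3.0
```

In the paper's setting the response is the Steiner zone heuristic's tour length and the predictors also capture instance features such as the service-region radii; the estimation machinery is the same.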
  • 124
    Publication Date: 2021-04-08
    Description: Aiming at the problem of fault diagnosis in continuous-time systems, a fault diagnosis algorithm based on an adaptive nonlinear proportional integral (PI) observer, which can realize effective fault identification, is studied in this paper. Firstly, the stability and stability conditions of the fault diagnosis method based on the PI observer are analyzed, and the upper bound of the fault estimation error is given. Secondly, the fault diagnosis algorithm based on an adjustable nonlinear PI observer is designed and constructed, and it is proved that the upper bound of the fault estimation error under this algorithm is tighter than that of the traditional method. Finally, the L-1011 unmanned aerial vehicle (UAV) is taken as the experimental object for numerical simulation, and the fault diagnosis method based on the adaptive observer factor achieves faster response speed and more accurate fault identification results.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
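A minimal sketch of the PI-observer idea for a scalar system x' = -x + u + f with a constant actuator fault f: the proportional term drives the state estimate toward the measurement, while the integral of the output error accumulates into a fault estimate. The plant, gains L and K, and time grid are illustrative fixed choices, not the adaptive nonlinear gains of the paper.

```python
dt, steps = 0.01, 2000
a, u, f = -1.0, 1.0, 0.7     # plant dynamics, input, true constant fault
L, K = 5.0, 10.0             # proportional and integral observer gains

x, xh, fh = 0.0, 0.0, 0.0    # true state, state estimate, fault estimate
for _ in range(steps):
    e = x - xh                              # output estimation error (y = x)
    x += dt * (a * x + u + f)               # plant (explicit Euler)
    xh += dt * (a * xh + u + fh + L * e)    # proportional correction
    fh += dt * K * e                        # integral action estimates the fault

print(round(fh, 3))  # converges to the true fault, ≈ 0.7
```

The error dynamics (e, f - fh) here satisfy a linear system with characteristic polynomial s^2 + (L - a)s + K, so for these gains the estimate converges; the paper's contribution is a sharper error bound via adaptive nonlinear gains.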
  • 125
    Publication Date: 2021-02-01
    Description: Background In this work, we aimed to demonstrate how to utilize lab test results and other clinical information to support precision medicine research and clinical decisions on complex diseases, with the support of electronic medical record facilities. We defined “clinotypes” as clinical information that could be observed and measured objectively using biomedical instruments. From well-known ‘omic’ problem definitions, we defined problems using clinotype information, including stratifying patients (identifying sub-cohorts of interest for future studies), mining significant associations between clinotypes and specific phenotypes-diseases, and discovering potential linkages between clinotype and genomic information. We solved these problems by integrating public omic databases and applying advanced machine learning and visual analytic techniques on two-year health exam records from a large population of healthy southern Chinese individuals (n = 91,354). When developing the solution, we carefully addressed the missing information, imbalance and non-uniform data annotation issues. Results We organized the techniques and solutions addressing the problems and issues above into the CPA framework (Clinotype Prediction and Association-finding). At the data preprocessing step, we handled the missing value issue with a prediction accuracy of 0.760. We curated 12,635 clinotype-gene associations. We found 147 associations between 147 chronic disease phenotypes and clinotypes, which improved the disease predictive performance to an average AUC of 0.967. We mined 182 significant clinotype-clinotype associations among 69 clinotypes. Conclusions Our results showed strong potential connectivity between omic information and clinical lab test information. The results further emphasize the need to utilize and integrate clinical information, especially lab test results, in future PheWAS and omic studies. Furthermore, they show that clinotype information could initiate an alternative research direction and serve as an independent field of data to support the well-known ‘phenome’ and ‘genome’ research.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 126
    Publication Date: 2021-02-24
    Description: Background Fatigue is a non-specific symptom that occurs widely in sub-health and various diseases. It is closely related to people's physical and mental health. Due to the lack of objective diagnostic criteria, it is often neglected in clinical diagnosis, especially in the early stage of disease. Many clinical practices and studies have shown that tongue and pulse conditions reflect the body's overall state. Establishing an objective evaluation method for diagnosing disease fatigue and non-disease fatigue by combining clinical symptom, index, and tongue and pulse data is of great significance for timely and effective clinical treatment. Methods In this study, 2632 physical examination participants were divided into healthy controls, a sub-health fatigue group, and a disease fatigue group. Complex network technology was used to screen out core symptoms and Western medicine indexes of the sub-health fatigue and disease fatigue populations. Pajek software was used to construct the core symptom/index network and the core symptom-index combined network. Simultaneously, canonical correlation analysis was used to analyze the objective tongue and pulse data between the two groups of fatigue population and analyze the distribution of tongue and pulse data. Results Some similarities were found in the core symptoms of the sub-health fatigue and disease fatigue populations, but with different node importance. The node-importance difference indicated that the diagnostic contribution rate of the same symptom to the two groups was different. The canonical correlation coefficient of tongue and pulse data in the disease fatigue group was 0.42 (P 
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
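For intuition about the canonical correlation coefficient reported above: in the one-variable special case, canonical correlation reduces to the absolute Pearson correlation, and full CCA finds the weighted combinations of several tongue and pulse measures that maximize exactly this quantity. The data below are made up for illustration.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented one-dimensional tongue and pulse measurements for five subjects.
tongue = [1.0, 2.0, 3.0, 4.0, 5.0]
pulse = [2.1, 2.0, 3.9, 4.2, 4.8]
print(round(pearson(tongue, pulse), 3))
```

With multivariate tongue and pulse data, CCA additionally solves for the projection weights on each side, which is why the reported coefficient (0.42 in the disease fatigue group) summarizes whole blocks of measurements rather than a single pair.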
  • 127
    Publication Date: 2021-02-24
    Description: A novel design method for time series modeling and prediction with fuzzy cognitive maps (FCM) is proposed in this paper. The developed model exploits the least square method to learn the weight matrix of the FCM derived from the given historical time series data. A fuzzy c-means clustering algorithm is used to construct the concepts of the FCM. Compared with the traditional FCM, the least square fuzzy cognitive map (LSFCM) is a direct solution procedure without iterative calculations, making it a straightforward, robust and rapid learning method. In addition, the structure of the LSFCM can be further optimized by refining the positions of the concepts for higher prediction precision, where an evolutionary optimization algorithm is used to find the optimal concepts. We also discuss in detail the impact of the number of concepts and the parameters of the activation function on FCM models. Publicly available time series data sets with different statistical characteristics from different areas are applied to evaluate the proposed modeling approach. The obtained results clearly show the effectiveness of the approach.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
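The core of the least-squares idea can be shown on a toy 2-concept map: because the FCM update is x(t+1) = sigmoid(W·x(t)), applying the logit to the targets turns each weight row into an ordinary linear regression with a closed-form solution, with no iterative training. This sketch uses noise-free synthetic single-step pairs rather than a real time series, and a hand-rolled 2x2 solver to stay self-contained.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def logit(p):
    return math.log(p / (1.0 - p))

# A known 2-concept map used to generate noise-free training pairs.
W_true = [[0.5, -0.8], [0.9, 0.2]]
random.seed(3)
pairs = []
for _ in range(30):
    x = [random.uniform(0.1, 0.9), random.uniform(0.1, 0.9)]
    y = [sigmoid(sum(w * v for w, v in zip(row, x))) for row in W_true]
    pairs.append((x, y))

def solve2(A, b):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - A[0][1] * b[1]) / det,
            (A[0][0] * b[1] - b[0] * A[1][0]) / det]

# Each weight row satisfies logit(y_i) = W_i . x, so it is recovered by
# ordinary least squares via the 2x2 normal equations.
X = [x for x, _ in pairs]
W_est = []
for i in range(2):
    z = [logit(y[i]) for _, y in pairs]
    A = [[sum(a[p] * a[q] for a in X) for q in range(2)] for p in range(2)]
    b = [sum(a[p] * zi for a, zi in zip(X, z)) for p in range(2)]
    W_est.append(solve2(A, b))

print([[round(w, 3) for w in row] for row in W_est])  # recovers W_true
```

With noisy real data the same normal equations give the least-squares fit rather than an exact recovery, which is the direct, non-iterative procedure the abstract contrasts with traditional FCM training.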
  • 128
    Publication Date: 2021-02-25
    Description: Background Heart disease is the primary cause of morbidity and mortality in the world. It includes numerous problems and symptoms. The diagnosis of heart disease is difficult because there are too many factors to analyze. What’s more, the misclassification cost could be very high. Methods A cost-sensitive ensemble method was proposed to improve the efficiency of diagnosis and reduce the misclassification cost. The proposed method contains five heterogeneous classifiers: random forest, logistic regression, support vector machine, extreme learning machine and k-nearest neighbor. T-test was used to investigate if the performance of the ensemble was better than individual classifiers and the contribution of Relief algorithm. Results The best performance was achieved by the proposed method according to ten-fold cross validation. The statistical tests demonstrated that the performance of the proposed ensemble was significantly superior to individual classifiers, and the efficiency of classification was distinctively improved by Relief algorithm. Conclusions The proposed ensemble gained significantly better results compared with individual classifiers and previous studies, which implies that it can be used as a promising alternative tool in medical decision making for heart disease diagnosis.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
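A hedged sketch of the cost-sensitive decision rule behind such an ensemble: average the member classifiers' probability estimates, then pick the class with minimum expected misclassification cost. The cost matrix is illustrative only; it encodes the abstract's point that missing a diseased patient (false negative) is costlier than a false alarm.

```python
COST = {  # COST[true_class][predicted_class], illustrative values
    "disease": {"disease": 0, "healthy": 5},   # false negative costs 5
    "healthy": {"disease": 1, "healthy": 0},   # false positive costs 1
}

def ensemble_predict(member_probs):
    """member_probs: P(disease) from each base classifier (e.g. random
    forest, logistic regression, SVM, ELM, k-NN). Averages them, then
    chooses the class minimizing expected misclassification cost."""
    p_disease = sum(member_probs) / len(member_probs)
    p = {"disease": p_disease, "healthy": 1.0 - p_disease}
    def expected_cost(pred):
        return sum(p[true] * COST[true][pred] for true in p)
    return min(("disease", "healthy"), key=expected_cost)

# Even a modest 25% average disease probability triggers the costly class,
# because 0.25 * 5 > 0.75 * 1.
print(ensemble_predict([0.2, 0.3, 0.25]))  # → disease
```

The effect is a decision threshold below 0.5, which is exactly how cost sensitivity reduces expensive false negatives at the price of more false positives.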
  • 129
    Publication Date: 2021-02-18
    Description: Background Rare Diseases (RDs) are difficult to diagnose. Clinical Decision Support Systems (CDSS) could support the diagnosis of RDs. The Medical Informatics in Research and Medicine (MIRACUM) consortium developed a CDSS for RDs based on distributed clinical data from eight German university hospitals. To support the diagnosis of difficult patient cases, the CDSS uses data from the different hospitals to perform a patient similarity analysis to obtain an indication of a diagnosis. To optimize our CDSS, we conducted a qualitative study to investigate the usability and functionality of our designed CDSS. Methods We performed a Thinking Aloud Test (TA-Test) with RD experts working in Rare Diseases Centers (RDCs) at MIRACUM locations, which are specialized in the diagnosis and treatment of RDs. An instruction sheet with tasks was prepared for the participants to perform with the CDSS during the study. The TA-Test was recorded on audio and video, and the resulting transcripts were analysed with qualitative content analysis, a rule-guided fixed procedure for analysing text-based data. Furthermore, a questionnaire including the System Usability Scale (SUS) was handed out at the end of the study. Results A total of eight experts from eight MIRACUM locations with an established RDC were included in the study. Results indicate that more detailed information about patients, such as descriptive attributes or findings, can help the system perform better. The system was rated positively in terms of functionality, such as functions that enable the user to obtain an overview of similar patients or the medical history of a patient. However, there is a lack of transparency in the results of the CDSS patient similarity analysis. The study participants often stated that the system should present the user with an overview of the exact symptoms, diagnoses, and other characteristics that define two patients as similar. 
In the usability section, the CDSS received a score of 73.21 points, which corresponds to good usability. Conclusions This qualitative study investigated the usability and functionality of a CDSS for RDs. Despite positive feedback about the functionality of the system, the CDSS still requires some revisions and improvement in the transparency of the patient similarity analysis.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 130
    Publication Date: 2021-02-20
    Description: A grammar-based compressor is an algorithm that receives a word and outputs a context-free grammar that only produces this word. The approximation ratio for a single input word is the size of the grammar produced for this word divided by the size of a smallest grammar for this word. The worst-case approximation ratio of a grammar-based compressor for a given word length is the largest approximation ratio over all input words of that length. In this work, we study the worst-case approximation ratio of the algorithms Greedy, RePair and LongestMatch on unary strings, i.e., strings that only make use of a single symbol. Our main contribution is to show the improved upper bound of O((log n)^8 · (log log n)^3) for the worst-case approximation ratio of Greedy. In addition, we also show the lower bound of 1.34847194⋯ for the worst-case approximation ratio of Greedy, and that RePair and LongestMatch have a worst-case approximation ratio of log_2(3).
    Electronic ISSN: 1999-4893
    Topics: Computer Science
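To make the unary-string setting concrete: a small grammar for a^n can be built with the binary method (akin to fast exponentiation), giving size O(log n) versus the trivial size-n grammar. This sketch measures grammar size as the total length of all right-hand sides, the measure the approximation ratios above refer to; it is a baseline construction, not the Greedy or RePair algorithms analysed in the paper.

```python
def unary_grammar_size(n):
    """Size (total right-hand-side length) of a straight-line grammar
    producing a^n via the binary method: scan the bits of n after the
    leading 1, doubling for every bit and appending an 'a' for each 1."""
    if n == 1:
        return 1                      # S -> a
    size = 1                          # X1 -> a
    for bit in bin(n)[3:]:            # remaining bits after the leading 1
        size += 2                     # X_{k+1} -> X_k X_k    (doubling)
        if bit == "1":
            size += 2                 # X_{k+2} -> X_{k+1} a  (add one)
    return size

for n in (1, 13, 1000):
    print(n, unary_grammar_size(n))   # e.g. a^1000 needs only 29 symbols
```

Since a smallest grammar for a^n also has size Θ(log n), the approximation ratio of any compressor on unary strings is governed by such logarithmic constructions, which is why bounds like 1.34847194⋯ and log_2(3) are meaningful constants here.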
  • 131
    Publication Date: 2021-02-22
    Description: Background Burn is one of the most brutal harms to the human body and mind, and its wide-ranging complications have many adverse effects on patients’ quality of life. The present study was conducted to investigate the effect of rehabilitation education through social media on burn patients’ quality of life. Methods The present randomized, controlled clinical trial was conducted on 60 patients admitted to the Imam Reza Hospital Burn Center in the city of Mashhad, Iran, who were randomly assigned to either the intervention or control group (n = 30 per group). The researcher then created a WhatsApp channel to provide educational content and a WhatsApp group for burn patients to join and get their questions answered. The intervention group patients pursued their post-discharge education through social media for a month. The control group patients received their discharge education according to the ward’s routine procedures through pamphlets and face-to-face training by the personnel. As the study’s main variable, the Burn Specific Health Scale-Brief was completed by both groups before and 1 and 2 months after the intervention. Data were analyzed using ANCOVA and repeated-measures ANOVA. Results There were no significant differences between the intervention and control groups in terms of the QOL score and any of the domains at baseline. The results indicated the significant effect of the intervention both 1 and 2 months post-intervention on the QOL score and all the domains (P 
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 132
    Publication Date: 2021-02-19
    Description: Multi-criteria analysis is a decision-making and efficiency assessment tool for application in both the private and public sectors. Its application is preceded by the selection of suitable indicators and a homogenous set of variants, as well as suitable methods based on the nature of the input data. The goal of the submitted research is to highlight the importance of selecting suitable indicators using a case study assessment of the financial health of a municipality—more precisely, the efficiency of management of this municipality. Four key indicators, thirty-two homogenous subjects, and one multi-criteria analysis method were identified in this study based on the theoretical foundations of the specific issue. These elements were processed into a total of 14 variants depending on the number of assessed indicators. Then, these results were subjected to statistical verification alongside verification using the Jaccard index. Based on the acquired results, we highlight the need for correct and expert identification of the relevant sets of alternatives (the criteria matrix) and expert discussion, which should precede the selection of the assessed indicators and objectify this selection process as much as possible. Assessment based on a low number of indicators was shown to be insufficient, highly variable, and diverse, and these differences were partially eliminated as the number of assessed indicators increased.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
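The stability check described above compares results produced under different indicator subsets; the Jaccard index of the top-ranked sets is one simple way to quantify that agreement. Municipality names below are invented for illustration.

```python
def jaccard(a, b):
    """Jaccard index: |intersection| / |union| of two sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

# Hypothetical top-4 municipalities under all four indicators vs. only two.
rank_4_indicators = ["town_a", "town_b", "town_c", "town_d"]
rank_2_indicators = ["town_b", "town_e", "town_a", "town_f"]

print(jaccard(rank_4_indicators, rank_2_indicators))  # → 0.3333333333333333
```

A low index like this one signals exactly the finding reported above: assessments based on few indicators are highly variable, and the disagreement shrinks as more indicators enter the criteria matrix.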
  • 133
    Publication Date: 2021-02-18
    Description: Background Mining the massive prescriptions in Traditional Chinese Medicine (TCM), accumulated over several thousand years, to discover essential herbal groups for distinct efficacies is of significance for TCM modernization, and has thus started to draw attention recently. However, most existing methods for the task treat herbs with different surface forms orthogonally and determine efficacy-specific herbal groups based on the raw frequencies with which an herbal group occurs in a collection of prescriptions. Such methods entirely overlook the fact that prescriptions in TCM were formed empirically by different people at different historical stages, and are thus full of herbs with different surface forms expressing the same material, or even noisy and redundant herbs. Methods We propose a two-stage approach for efficacy-specific herbal group detection from prescriptions in TCM. In the first stage, we devise a hierarchical attentive neural network model to capture the herbs in a prescription essential to its efficacy, where herbs are encoded with dense real-valued vectors learned automatically to identify their differences on the semantic level. In the second stage, frequent patterns are mined to discover essential herbal groups for an efficacy from the distilled prescriptions obtained in the first stage. Results We verify the effectiveness of our proposed approach from two aspects: first, the ability of the hierarchical attentive neural network model to distill a prescription, and second, the accuracy in discovering efficacy-specific herbal groups. Conclusion The experimental results demonstrate that the hierarchical attentive neural network model is capable of capturing the herbs in a prescription essential to its efficacy, and that the distilled prescriptions could significantly improve the performance of efficacy-specific herbal group detection.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
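The second stage above is classic frequent-pattern mining over the distilled prescriptions. A toy version of the counting step, with invented herb names, looks like this:

```python
from collections import Counter
from itertools import combinations

# Toy "distilled" prescriptions, each a set of herbs (names invented).
prescriptions = [
    {"ginseng", "licorice", "ginger"},
    {"ginseng", "licorice", "angelica"},
    {"licorice", "ginger", "ginseng"},
    {"angelica", "peony"},
]

def frequent_pairs(prescriptions, min_support=2):
    """Count herb pairs across prescriptions and keep those meeting the
    support threshold -- candidate efficacy-specific herbal groups."""
    counts = Counter()
    for p in prescriptions:
        for pair in combinations(sorted(p), 2):
            counts[pair] += 1
    return {pair: c for pair, c in counts.items() if c >= min_support}

print(frequent_pairs(prescriptions))
```

The paper's point is that this counting step only becomes reliable after the first-stage neural distillation has merged different surface forms of the same herb and removed noisy or redundant ones; run on raw prescriptions, the raw frequencies are misleading.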
  • 134
    Publication Date: 2021-02-19
    Description: Background The Thai medical application for patient triage, namely Triagist, is an mHealth application designed to support the pre-hospital process. However, since the application functions necessary for the pre-hospital process were found not to be fully developed, the addition of a back-end system was considered to increase its performance and usability. Objective To determine the ability of the previous version to effectively manage the pre-hospital process and analyse the current problems with pre-hospital operation. The new system was therefore developed to support the connection of dispatch centres or operational centres to the Triagist mobile application, and was evaluated. Method Design thinking methodology was used to analyse, design and develop a patient triage system to support the pre-hospital process in Thailand based on users’ requirements. 68 active members of the rescue teams and emergency medical staff in Chiang Mai and Lampang provinces were recruited to test the reliability of the system based on a prototype application. Results The new medical mobile application for patient triage in Thailand was validated for use as it contains the two essential functions of Initial Dispatch Code (IDC) geolocation and IDC management. When the system was tested by the emergency staff responsible for using it, those with the least experience were found to use it better than their highly experienced colleagues. Moreover, in cases where the system had been implemented, it was found to determine the frequency of symptoms, the time period during which cases occurred, and the density of cases in each area. Conclusion This system, which has been developed based on the use of smart technology, will play an important role in supporting emergency services in Thailand by enhancing the efficiency of the pre-hospital process. 
Emergency centres will receive IDC information from the geolocation system so that they can determine patients’ location without undue delay. Emergency services will be able to rapidly prepare the necessary resources and administrative tasks will be supported by linking the dispatch centre to central rescue teams.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 135
    Publication Date: 2021-02-19
    Description: Background Despite a substantial increase in the adoption of electronic medical records (EMRs) in primary health care settings, the use of advanced EMR features is limited. Several studies have identified both barriers and facilitating factors that influence primary care physicians’ (PCPs) use of advanced EMR features and the maturation of their EMR use. The purpose of this study is to explore and identify the factors that impact PCPs’ mature use of EMRs. Methods A systematic review was conducted in accordance with the Cochrane Handbook. The MEDLINE, Embase, and PsycINFO electronic databases were searched from 1946 to June 13, 2019. Two independent reviewers screened the studies for eligibility; to be included, studies had to address factors influencing PCPs’ mature use of EMRs. A narrative synthesis was conducted to collate study findings and to report on patterns identified across studies. The quality of the studies was also appraised. Results Of the 1893 studies identified, 14 were included in this study. Reported factors that influenced PCPs’ mature use of EMRs fell into one of the following 5 categories: technology, people, organization, resources, and policy. Concerns about the EMR system’s functionality, lack of physician awareness of EMR functionality, limited physician availability to learn more about EMRs, the habitual use of successfully completing clinical tasks using only basic EMR features, business-oriented organizational objectives, lack of vendor training, limited resource availability, and lack of physician readiness were reported as barriers to PCPs’ mature use of EMRs. 
The motivation of physicians, user satisfaction, coaching and peer mentoring, EMR experience, gender, physician perception, transition planning for changes in roles and work processes, team-based care, adequate technical support and training, sharing resources, practices affiliated with an integrated delivery system, financial incentives, and policies to increase EMR use all had a favorable impact on PCPs’ use of advanced EMR features. Conclusions By using a narrative synthesis to synthesize the evidence, we identified interrelated factors influencing the mature use of EMRs by PCPs. The findings underline the need to provide adequate training and policies that facilitate the mature use of EMRs by PCPs. Trial registration: PROSPERO CRD42019137526.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 136
    Publication Date: 2021-02-23
    Description: As more users cook their own food, there is growing demand for an optimal cooking procedure covering multiple dishes; the optimal procedure, however, varies from user to user owing to differences in each user’s cooking skill and environment. In this paper, we propose a system for presenting optimal cooking procedures that enables parallel cooking of multiple recipes. We formulate the problem of deciding optimal cooking procedures as a task scheduling problem by creating a task graph for each recipe. To reduce execution time, we propose two extensions to the preprocessing and bounding operation of PDF/IHS, a sequential optimization algorithm for the task scheduling problem, each taking the characteristics of cooking into account. We confirmed that the proposed algorithm can reduce execution time by up to 44% compared to the base PDF/IHS, and limits the growth in execution time to about 900 times even when the number of required searches increases 10,000-fold. In addition, in an experiment in which each of three recipes was cooked by 10 participants, we confirmed that following the optimal cooking procedure for a given menu reduced the actual cooking time by up to 13 min (14.8%) compared to when users cooked freely.
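The task-graph formulation above can be sketched with a toy greedy list scheduler. This is an illustrative sketch, not the paper's PDF/IHS optimizer; the recipe tasks, durations, and two-worker setup are invented for the example.

```python
# Toy greedy list scheduler for recipe task graphs (illustrative sketch only).
# Tasks run on a fixed number of parallel "workers" (e.g. the cook plus an
# unattended stove).

def schedule(tasks, deps, workers=2):
    """tasks: {name: duration}; deps: {name: set of prerequisites}.
    Returns (finish_times, makespan)."""
    finish = {}                      # task -> completion time
    free_at = [0.0] * workers        # time at which each worker becomes free
    remaining = set(tasks)
    while remaining:
        # consider tasks whose prerequisites are all finished
        ready = [t for t in remaining if deps.get(t, set()) <= set(finish)]
        best = None
        for t in ready:
            prereq_done = max((finish[d] for d in deps.get(t, set())), default=0.0)
            w = min(range(workers), key=lambda i: free_at[i])
            start = max(prereq_done, free_at[w])
            key = (start, -tasks[t])  # earliest start, then longest duration
            if best is None or key < best[0]:
                best = (key, t, w, start)
        _, t, w, start = best
        finish[t] = start + tasks[t]
        free_at[w] = finish[t]
        remaining.remove(t)
    return finish, max(finish.values())

# Two-recipe example (durations in minutes, purely hypothetical):
tasks = {"boil_water": 5, "chop": 4, "cook_pasta": 8, "make_sauce": 6}
deps = {"cook_pasta": {"boil_water"}, "make_sauce": {"chop"}}
finish, makespan = schedule(tasks, deps)
```

With two workers the four tasks finish in 13 min, against the 23 min a strictly sequential cook would need; the paper's branch-and-bound extensions search for the true optimum rather than this greedy choice.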
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 137
    Publication Date: 2021-02-24
    Description: We consider the problem of determinizing and minimizing automata for nested words in practice. For this we compile the nested regular expressions (NREs) from the usual XPath benchmark to nested word automata (NWAs). The determinization of these NWAs, however, fails to produce reasonably small automata. In the best case, huge deterministic NWAs are produced after a few hours, even for relatively small NREs of the benchmark. We propose a different approach to the determinization of automata for nested words. For this, we introduce stepwise hedge automata (SHAs) that naturally generalize both (stepwise) tree automata and finite word automata. We then show how to determinize SHAs, yielding reasonably small deterministic automata for the NREs from the XPath benchmark. The size of deterministic SHAs can be reduced further by a novel minimization algorithm for a subclass of SHAs. In order to understand why the new approach to determinization and minimization works so nicely, we investigate the relationship between NWAs and SHAs further. Clearly, deterministic SHAs can be compiled to deterministic NWAs in linear time, and conversely NWAs can be compiled to nondeterministic SHAs in polynomial time. Therefore, we can use SHAs as intermediates for determinizing NWAs, while avoiding the huge size increase of the usual determinization algorithm for NWAs. Notably, the NWAs obtained from the SHAs perform bottom-up and left-to-right computations only, but no top-down computations. This NWA behavior can be distinguished syntactically by the (weak) single-entry property, suggesting a close relationship between SHAs and single-entry NWAs. In particular, it turns out that the usual determinization algorithm for NWAs behaves well for single-entry NWAs, while it quickly explodes without the single-entry property. Furthermore, it is known that the class of deterministic multi-module single-entry NWAs enjoys unique minimization. 
The subclass of deterministic SHAs to which our novel minimization algorithm applies is different though, in that we do not impose multiple modules. As further optimizations for reducing the sizes of the constructed SHAs, we propose schema-based cleaning and symbolic representations based on apply-else rules that can be maintained by determinization. We implemented the optimizations and report the experimental results for the automata constructed for the XPathMark benchmark.
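SHAs generalize finite word automata, and the determinization in the abstract extends the classical subset construction. A minimal sketch of that construction for plain NFAs (the automaton below is an illustrative assumption, not one of the benchmark NREs):

```python
def determinize(alphabet, delta, init, finals):
    """Classical subset construction: delta maps (state, letter) to a set of
    successor states; returns the reachable DFA over frozensets of NFA states."""
    start = frozenset(init)
    dstates, worklist, dtrans = {start}, [start], {}
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            dtrans[(S, a)] = T
            if T not in dstates:
                dstates.add(T)
                worklist.append(T)
    dfinals = {S for S in dstates if S & finals}
    return dstates, dtrans, start, dfinals

# NFA accepting words over {a, b} that end in "ab" (illustrative):
delta = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
dstates, dtrans, start, dfinals = determinize("ab", delta, {0}, {2})

def accepts(word):
    """Run the determinized automaton on a word."""
    S = start
    for a in word:
        S = dtrans[(S, a)]
    return S in dfinals
```

The paper applies the analogous worklist scheme to SHAs over hedges rather than words.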
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 138
    Publication Date: 2021-02-25
    Description: Background Taking care of chronic or long-term patients at home is an arduous task. Non-professional caregivers suffer the consequences of doing so, especially in terms of their mental health. Performing some simple activities through a mobile phone app may improve their mindset and consequently increase their positivity. However, each caregiver may need support in different aspects of positive mental health. In this paper, a method is defined to calculate the utility of a set of activities for a particular caregiver in order to personalize the intervention plan proposed in the app. Methods Based on the caregivers’ answers to a questionnaire, a modular averaging method is used to calculate the personal level of competence in each positive mental health factor. A reward-penalty scoring procedure then assigns an overall impact value to each activity. Finally, the app ranks the activities using this impact value. Results The results of this new personalization method are provided based on a pilot test conducted on 111 caregivers. The results indicate that a conjunctive average is appropriate at the first stage and that reward should be greater than penalty in the second stage. Conclusions The method presented is able to personalize the intervention plan by determining the best order of carrying out the activities for each caregiver, with the aim of avoiding a high level of deterioration in any factor.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 139
    Publication Date: 2021-02-25
    Description: Background Radiation Therapy (RT) is a common treatment after breast cancer surgery and a complex process using high-energy X-rays to eradicate cancer cells, important in reducing the risk of local recurrence. The high-tech environment and unfamiliar nature of RT can affect the patient’s experience of the treatment. Misconceptions or lack of knowledge about RT processes can increase levels of anxiety and enhance feelings of being unprepared at the beginning of treatment. Moreover, the waiting time is often quite long. The primary aim of this study will be to evaluate whether a digital information tool with VR technology and preparatory information can decrease distress as well as enhance the self-efficacy and health literacy of patients affected by breast cancer before, during, and after RT. A secondary aim will be to explore whether the digital information tool increases patient flow while maintaining or increasing the quality of care. Method The study is a prospective and longitudinal RCT study with an Action Research participatory design approach including mixed-methods data collection, i.e., standardised instruments, qualitative interviews (face-to-face and telephone) with a phenomenological hermeneutical approach, diaries, observations, and time measurements, and is scheduled to take place from autumn 2020 to spring 2022. The intervention group (n = 80) will receive standard care and information (oral and written) and the digital information tool; the control group (n = 80) will receive standard care and information (oral and written). Study recruitment and randomisation will be completed at two centres in the west of Sweden. Discussion Research in this area is scarce and, to our knowledge, only a few previous studies have examined VR as a tool for increasing preparedness of patients with breast cancer about to undergo RT that also include follow-ups six months after completed treatment. 
The participatory approach and design will safeguard the possibility of capturing the patient perspective throughout the development process, and the RCT design supports high research quality. Digitalisation brings new possibilities to provide safe, person-centred information that also displays a realistic picture of RT treatment and its contexts. The planned study will generate generalisable knowledge of relevance in similar health care contexts. Trial registration: ClinicalTrials.gov Identifier: NCT04394325. Registered May 19, 2020. Prospectively registered.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 140
    Publication Date: 2021-02-27
    Description: Background Currently the diagnosis of shoulder instability, particularly in children, is difficult and can take time. These diagnostic delays can lead to poorer outcomes and long-term complications. A Diagnostic Decision Support System (DDSS) has the potential to reduce time to diagnosis and improve outcomes for patients. The aim of this study was to develop a concept map for a future DDSS in shoulder instability. Methods A modified nominal focus group technique, involving three clinical vignettes, was used to elicit physiotherapists’ decision-making processes. Results Twenty-five physiotherapists (18F:7M) from four separate clinical sites participated. The themes identified related to ‘Variability in diagnostic processes and lack of standardised practice’ and ‘Knowledge and attitudes towards novel technologies for facilitating assessment and clinical decision making’. Conclusion No common structured approach towards assessment and diagnosis was identified. Lack of knowledge, perceived usefulness, access and cost were identified as barriers to adoption of new technology. Based on the information elicited, a conceptual design of a future DDSS has been proposed. Work to develop a systematic approach to assessment, classification and diagnosis is now proposed. Trial registry: This was not a clinical trial, so no clinical trial registration is needed.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 141
    Publication Date: 2021-04-19
    Description: Background Previous studies have shown that transitional care reduces the complication rate and readmission rate and improves quality of life in kidney transplant recipients; nevertheless, there are no standard evaluation indexes, and the scientific basis of existing indexes for kidney transplant recipients is debatable. Therefore, the aim of this study was to construct an evaluation index system to assess the effects of transitional care in kidney transplant recipients. Methods Based on the Omaha system, an initial evaluation index system for the effects of transitional care in kidney transplant recipients was drafted through a literature review and semi-structured interviews. Two rounds of consultation were conducted with 19 experts, and the analytic hierarchy process (AHP) was used to calculate the weights of all indexes. Results Five first-level indexes, sixteen second-level indexes, and forty-eight third-level indexes were selected for the initial evaluation index system. The authority coefficient of the two rounds of expert consultation was 0.90 and the coordination coefficients of the indexes ranged from 0.24 to 0.34. Conclusion The established evaluation index system for the effectiveness of transitional care for kidney transplant recipients is scientific and reliable. Furthermore, it could be a useful method to evaluate the effects of transitional care in kidney transplant recipients after further examination.
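The AHP weighting step mentioned above can be sketched with the common column-normalization approximation of the priority vector. The 3×3 pairwise comparison matrix below is illustrative, not the study's expert data.

```python
def ahp_weights(M):
    """Approximate AHP priority vector: normalize each column of the
    reciprocal pairwise comparison matrix, then average across each row."""
    n = len(M)
    col_sums = [sum(M[i][j] for i in range(n)) for j in range(n)]
    normalized = [[M[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in normalized]

# Hypothetical comparisons among three first-level indexes: index 0 is
# moderately more important than index 1 and strongly more than index 2.
M = [[1.0, 3.0, 5.0],
     [1 / 3, 1.0, 3.0],
     [1 / 5, 1 / 3, 1.0]]
weights = ahp_weights(M)
```

The resulting weights sum to one and preserve the expert-stated ordering of importance; a full AHP workflow would also check the matrix's consistency ratio.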
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 142
    Publication Date: 2021-04-28
    Description: Re-entry winged body vehicles have several advantages with respect to capsules, such as maneuverability and the opportunity for a controlled landing. On the other hand, they show an increase in design complexity, especially from an aerodynamic, aero-thermodynamic, and structural point of view, and in the difficulty of housing them in existing operational launchers. In this framework, the idea of designing unmanned vehicles equipped with deployable wings for suborbital flight was born. This work details a preliminary study for identifying the best configuration for the hinge system aimed at the in-orbit deployment of an unmanned re-entry vehicle’s wings. In particular, the adopted optimization methodology is described. The adopted approach uses a genetic algorithm available in commercial software in conjunction with fully parametric models created in FEM environments and, in particular, it can optimize the hinge position considering both the deployed and folded configurations. The results identify the best hinge configuration that minimizes interface loads, thus realizing a lighter and more efficient deployment system. Indeed, for such a category of vehicle, it is mandatory to reduce the structural mass as much as possible in order to increase the payload and reduce service costs.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 143
    Publication Date: 2021-04-29
    Description: The extended Kalman filter has been shown to be a precise method for nonlinear state estimation and is the de facto standard in navigation systems. However, if the initial estimated state is far from the true one, the filter may diverge, mainly due to an inconsistent linearization. In contrast, interval filters guarantee a robust and reliable, yet imprecise and discontinuous, localization. This paper proposes to choose a point estimated by an interval method as the linearization point of the extended Kalman filter. We will show that this combination allows us to achieve a higher level of integrity for the extended Kalman filter.
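The idea can be sketched in one dimension: an interval method (assumed here to bound the true state) supplies the linearization point for the EKF measurement update. The measurement model, interval, and numbers below are illustrative assumptions, not the paper's navigation setup.

```python
def ekf_update(x, P, z, R, h, h_prime):
    """Scalar EKF measurement update, linearized at the estimate x."""
    H = h_prime(x)                  # Jacobian at the linearization point
    S = H * P * H + R               # innovation covariance
    K = P * H / S                   # Kalman gain
    return x + K * (z - h(x)), (1.0 - K * H) * P

# Nonlinear measurement h(x) = x^2 (illustrative).
h = lambda x: x * x
h_prime = lambda x: 2.0 * x

# Suppose an interval method guarantees the true state lies in [1.8, 2.2]:
# its midpoint gives a consistent linearization/initial point for the EKF.
x0 = (1.8 + 2.2) / 2.0
x1, P1 = ekf_update(x0, P=1.0, z=4.41, R=0.01, h=h, h_prime=h_prime)
```

Starting from the interval midpoint 2.0, a single update moves the estimate close to the true state sqrt(4.41) = 2.1, whereas a far-off initial point (say x0 = 10) would linearize inconsistently and correct only slowly.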
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 144
    Publication Date: 2021-04-19
    Description: Background Prediction of neonatal deaths in NICUs is important for benchmarking and evaluating healthcare services in NICUs. Application of machine learning techniques can improve physicians’ ability to predict neonatal deaths. The aim of this study was to present a neonatal death risk prediction model using machine learning techniques. Methods This study was conducted in Tehran, Iran in two phases. Initially, important risk factors in neonatal death were identified; then several machine learning models, including Artificial Neural Network (ANN), decision trees (Random Forest (RF), C5.0 and CHAID tree), Support Vector Machine (SVM), Bayesian Network and Ensemble models, were developed. Finally, we prospectively applied these models to predict neonatal death in a NICU and followed up the neonates to compare the models’ predictions with the real outcomes. Results 17 factors were considered important in neonatal mortality prediction. The highest Area Under the Curve (AUC) was achieved for the SVM and Ensemble models with 0.98. The best precision and specificity were 0.98 and 0.94, respectively, for the RF model. The highest accuracy, sensitivity and F-score were achieved for the SVM model with 0.94, 0.95 and 0.96, respectively. The best performance of the models in the prospective evaluation was for the ANN, C5.0 and CHAID tree models. Conclusion Using the developed machine learning models can help physicians predict neonatal deaths in NICUs.
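The models above are ranked by AUC, which can be sketched as the probability that a randomly chosen positive case outranks a randomly chosen negative one. The toy risk scores below are illustrative, not the study's data.

```python
def auc(scores, labels):
    """Rank-based AUC: fraction of (positive, negative) pairs where the
    positive example gets the higher score (ties count one half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for four neonates (1 = death, 0 = survival):
labels = [1, 1, 0, 0]
model_a = [0.9, 0.8, 0.4, 0.3]   # ranks all positives above all negatives
model_b = [0.9, 0.2, 0.8, 0.1]   # one positive outranked by one negative
```

Model A achieves a perfect AUC of 1.0, while model B loses one of the four positive-negative comparisons and scores 0.75.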
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 145
    Publication Date: 2021-04-10
    Description: In recent years, both the Internet of Things (IoT) and blockchain technologies have been highly influential and revolutionary. IoT enables companies to embrace Industry 4.0, the Fourth Industrial Revolution, which benefits from communication and connectivity to reduce cost and to increase productivity through sensor-based autonomy. These automated systems can be further refined with smart contracts that are executed within a blockchain, thereby increasing transparency through continuous and indisputable logging. Ideally, the level of security for these IoT devices should be very high, as they are specifically designed for this autonomous and networked environment. This paper discusses a use case of a company with legacy devices that wants to benefit from the features and functionality of blockchain technology. In particular, the implications of retrofit solutions are analyzed. The use of the BISS:4.0 platform is proposed as the underlying infrastructure. BISS:4.0 is intended to integrate blockchain technologies into existing enterprise environments. Furthermore, a security analysis of IoT and blockchain is presented, in which attacks and countermeasures are identified and applied to the mentioned use case.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 146
    Publication Date: 2021-04-05
    Description: White noise generators can use uniform random sequences as a basis. However, such a technology may lead to deficient results if the original sequences have insufficient uniformity or omissions of random variables. This article offers a new approach for creating a phase signal generator with an improved matrix of autocorrelation coefficients. As a result, the generated signals of the white noise process have absolutely uniform intensities at the eigen Fourier frequencies. The simulation results confirm that the received signals have an adequate approximation of uniform white noise.
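The claimed property, exactly uniform magnitude at every Fourier frequency, holds for any signal synthesized as the inverse DFT of unit-magnitude, random-phase coefficients. A small pure-Python sketch (complex-valued for brevity; the article's autocorrelation-matrix construction is not reproduced):

```python
import cmath
import random

def random_phase_signal(N, seed=0):
    """Inverse DFT of a spectrum with unit magnitudes and random phases:
    every Fourier coefficient of the result has magnitude exactly 1."""
    rng = random.Random(seed)
    spectrum = [cmath.exp(1j * rng.uniform(0.0, 2.0 * cmath.pi))
                for _ in range(N)]
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N
            for n in range(N)]

def dft(x):
    """Naive forward DFT, O(N^2), sufficient for a demonstration."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N))
            for k in range(N)]

x = random_phase_signal(8)
X = dft(x)   # every |X[k]| is 1 up to floating-point roundoff
```

A real-valued generator would additionally impose conjugate-symmetric phases on the spectrum.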
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 147
    Publication Date: 2021-04-03
    Description: We consider the communication complexity of the Hamming distance of two strings. Bille et al. [SPIRE 2018] considered the communication complexity of the longest common prefix (LCP) problem in the setting where the two parties have their strings in a compressed form, i.e., represented by the Lempel-Ziv 77 factorization (LZ77) with/without self-references. We present a randomized public-coin protocol for the joint computation of the Hamming distance of two strings represented by LZ77 without self-references. Although our scheme is heavily based on Bille et al.’s LCP protocol, our complexity analysis is original, using Crochemore’s C-factorization and Rytter’s AVL-grammar. As a byproduct, we also show that LZ77 with/without self-references is not monotonic, in the sense that its size can increase by a factor of 4/3 when a prefix of the string is removed.
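For concreteness, a naive quadratic-time LZ77 factorization without self-references (each factor must occur entirely in the already-processed prefix) and the plain Hamming distance the protocol computes can be sketched; the communication protocol itself is not reproduced here.

```python
def lz77_no_self(s):
    """Greedy LZ77 without self-references: each factor is the longest prefix
    of the remaining suffix that occurs entirely in the preceding text,
    or a single fresh character. O(n^2), for illustration only."""
    factors, i, n = [], 0, len(s)
    while i < n:
        length = 0
        # extend while s[i:i+length+1] occurs somewhere in s[:i]
        while i + length + 1 <= n and s[:i].find(s[i:i + length + 1]) != -1:
            length += 1
        factors.append(s[i:i + max(length, 1)])
        i += max(length, 1)
    return factors

def hamming(u, v):
    """Hamming distance of two equal-length strings."""
    assert len(u) == len(v)
    return sum(a != b for a, b in zip(u, v))
```

For example, "abababa" factorizes into a, b, ab, aba; the protocol's point is to compute the Hamming distance while exchanging data proportional to such factorizations rather than to the raw strings.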
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 148
    Publication Date: 2021-03-31
    Description: The proper planning of rest periods in response to the availability of parking spaces at rest areas is an important issue for haulage companies as well as traffic and road administrations. We present a case study of how You Only Look Once (YOLO)v5 can be implemented to detect heavy goods vehicles at rest areas during winter to allow for the real-time prediction of parking spot occupancy. Snowy conditions and the polar night in winter typically pose some challenges for image recognition, hence we use thermal network cameras. As these images typically have a high number of overlaps and cut-offs of vehicles, we applied transfer learning to YOLOv5 to investigate whether the front cabin and the rear are suitable features for heavy goods vehicle recognition. Our results show that the trained algorithm can detect the front cabin of heavy goods vehicles with high confidence, while detecting the rear seems more difficult, especially when located far away from the camera. In conclusion, we firstly show an improvement in detecting heavy goods vehicles using their front and rear instead of the whole vehicle, when winter conditions result in challenging images with a high number of overlaps and cut-offs, and secondly, we show thermal network imaging to be promising in vehicle detection.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 149
    Publication Date: 2021-03-31
    Description: The Maximum Satisfiability (MaxSAT) approach is the choice, and perhaps the only one, for dealing with most real-world problems, as most of them are unsatisfiable: the search for a complete and consistent solution to a real-world problem is impractical due to computational and time constraints. As a result, MaxSAT problems and solving techniques are of exceptional interest in the domain of Satisfiability (SAT). Our research experimentally investigated the performance gains of extending the most recently developed dynamic initial weight assignment technique for SAT (InitWeight) to handle MaxSAT problems. Specifically, we first investigated the performance gains of dynamically assigning the initial weights in the Divide and Distribute Fixed Weights solver with initial weights for MaxSAT (DDFW+InitMaxSAT) over the base Divide and Distribute Fixed Weights solver (DDFW) when applied to solve a wide range of well-known unweighted MaxSAT problems obtained from DIMACS. Secondly, we compared DDFW+InitMaxSAT’s performance against three known state-of-the-art SAT solving techniques: YalSAT, ProbSAT, and Sparrow. We showed that the assignment of dynamic initial weights increased the performance of DDFW+InitMaxSAT over DDFW by an order of magnitude on the majority of problems, and performed similarly otherwise. Furthermore, we showed that the performance of DDFW+InitMaxSAT was superior to the other state-of-the-art algorithms. Finally, we showed that the InitWeight technique can be extended to handle partial MaxSAT with minor modifications.
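Clause-weighting local search like DDFW works by flipping variables to increase the number (or weight) of satisfied clauses. A heavily simplified, unweighted sketch on DIMACS-style literals follows; the formula and parameters are illustrative, and neither DDFW's weight-transfer scheme nor the InitWeight assignment is reproduced.

```python
import random

def sat_count(clauses, assign):
    """Number of clauses satisfied by assign (1-based DIMACS literals)."""
    return sum(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

def local_search_maxsat(clauses, n_vars, steps=1000, seed=0):
    """Toy MaxSAT local search: flip a random variable, keep any flip that
    does not decrease the satisfied-clause count, revert the rest.
    Returns the best count reached."""
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars)]
    best = sat_count(clauses, assign)
    for _ in range(steps):
        v = rng.randrange(n_vars)
        assign[v] = not assign[v]
        s = sat_count(clauses, assign)
        if s >= best:
            best = s
        else:
            assign[v] = not assign[v]   # revert worsening flips
    return best

# Unsatisfiable formula: (x1) and (not x1) and (x2); at most 2 clauses hold.
clauses = [[1], [-1], [2]]
best = local_search_maxsat(clauses, n_vars=2)
```

Even on this unsatisfiable formula the search settles at the MaxSAT optimum of two satisfied clauses; DDFW additionally moves weight from satisfied to unsatisfied clauses to escape local optima.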
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 150
    Publication Date: 2021-04-05
    Description: Background Screening carotid B-mode ultrasonography is a frequently used method to detect subjects with carotid atherosclerosis (CAS). Due to the asymptomatic progression of most CAS patients, early identification is challenging for clinicians, and it may trigger ischemic stroke. Recently, machine learning has shown a strong ability to classify data and a potential for prediction in the medical field. The combined use of machine learning and the electronic health records of patients could provide clinicians with a more convenient and precise method to identify asymptomatic CAS. Methods Retrospective cohort study using routine clinical data of medical check-up subjects from April 19, 2010 to November 15, 2019. Six machine learning models (logistic regression [LR], random forest [RF], decision tree [DT], eXtreme Gradient Boosting [XGB], Gaussian Naïve Bayes [GNB], and K-Nearest Neighbour [KNN]) were used to predict asymptomatic CAS and compared their predictability in terms of the area under the receiver operating characteristic curve (AUCROC), accuracy (ACC), and F1 score (F1). Results Of the 18,441 subjects, 6553 were diagnosed with asymptomatic CAS. Compared to DT (AUCROC 0.628, ACC 65.4%, and F1 52.5%), the other five models improved prediction: KNN + 7.6% (0.704, 68.8%, and 50.9%, respectively), GNB + 12.5% (0.753, 67.0%, and 46.8%, respectively), XGB + 16.0% (0.788, 73.4%, and 55.7%, respectively), RF + 16.6% (0.794, 74.5%, and 56.8%, respectively) and LR + 18.1% (0.809, 74.7%, and 59.9%, respectively). The highest achieving model, LR predicted 1045/1966 cases (sensitivity 53.2%) and 3088/3566 non-cases (specificity 86.6%). A tenfold cross-validation scheme further verified the predictive ability of the LR. Conclusions Among machine learning models, LR showed optimal performance in predicting asymptomatic CAS. 
Our findings set the stage for an early automatic alarm system, allowing a more precise allocation of CAS prevention measures to the individuals most likely to benefit.
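The reported sensitivity and specificity follow directly from the case counts stated above; a quick check:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# LR model: 1045 of 1966 cases detected, 3088 of 3566 non-cases ruled out.
sensitivity, specificity = sens_spec(tp=1045, fn=1966 - 1045,
                                     tn=3088, fp=3566 - 3088)
```

This reproduces the quoted 53.2% sensitivity and 86.6% specificity.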
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 151
    Publication Date: 2021-04-04
    Description: This paper presents a filter generating method that modifies sensor signals using genetic network programming (GNP) for automatic calibration to absorb individual differences. In our earlier study, we developed a prototype that incorporates bed-leaving detection sensors using piezoelectric films and a machine-learning-based behavior recognition method using counter-propagation networks (CPNs). Our method learns topology and relations between input features and teaching signals. Nevertheless, CPNs have been insufficient to address individual differences in parameters such as weight and height used for bed-leaving behavior recognition. For this study, we actualize automatic calibration of sensor signals for invariance relative to these body parameters. This paper presents two sets of experimentally obtained results, based on low-accuracy sensor signals from our earlier study. In the preliminary experiment, we optimized the original sensor signals to approximate high-accuracy ideal sensor signals using generated filters, assessing fitness by the differences between the original signal patterns and ideal signal patterns. In the application experiments, we used fitness calculated from the recognition accuracy obtained using CPNs. The experimentally obtained results reveal that our method improved the mean accuracies for three datasets.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 152
    Publication Date: 2021-04-05
    Description: Background Despite growing evidence that deprescribing can improve clinical outcomes, quality of life and reduce the likelihood of adverse drug events, the practice is not widespread, particularly in hospital settings. Clinical risk assessment tools, like the Drug Burden Index (DBI), can help prioritise patients for medication review and prioritise medications to deprescribe, but are not integrated within routine care. The aim of this study was to conduct formative usability testing of a computerised decision support (CDS) tool, based on DBI, to identify modifications required to the tool prior to trialling in practice. Methods Our CDS tool comprised a DBI MPage in the electronic medical record (clinical workspace) that facilitated review of a patient’s DBI and medication list, access to deprescribing resources, and the ability to deprescribe. Two rounds of scenario-based formative usability testing with think-aloud protocol were used. Seventeen end-users participated in the testing, including junior and senior doctors, and pharmacists. Results Participants expressed positive views about the DBI CDS tool but testing revealed a number of clear areas for improvement. These primarily related to terminology used (i.e. what is a DBI and how is it calculated?), and consistency of functionality and display. A key finding was that users wanted the CDS tool to look and function in a similar way to other decision support tools in the electronic medical record. Modifications were made to the CDS tool in response to user feedback. Conclusion Usability testing proved extremely useful for identifying components of our CDS tool that were confusing, difficult to locate or to understand. We recommend usability testing be adopted prior to implementation of any digital health intervention. We hope our revised CDS tool equips clinicians with the knowledge and confidence to consider discontinuation of inappropriate medications in routine care of hospitalised patients. 
In the next phase of our project, we plan to pilot test the tool in practice to evaluate its uptake and effectiveness in supporting deprescribing in routine hospital care.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 153
    Publication Date: 2021-04-09
    Description: Quantitative Structure–Activity Relationship (QSAR) aims to correlate molecular structure properties with corresponding bioactivity. Chance correlations and multicollinearity are two major problems often encountered when generating QSAR models. Feature selection can significantly improve the accuracy and interpretability of QSAR by removing redundant or irrelevant molecular descriptors. The artificial bee colony algorithm (ABC), which mimics the foraging behavior of a honey bee colony, was originally proposed for continuous optimization problems. It has been applied to feature selection for classification but seldom for regression analysis and prediction. In this paper, a binary ABC algorithm is used to select features (molecular descriptors) in QSAR. Furthermore, we propose an improved ABC-based algorithm for feature selection in QSAR, namely ABC-PLS-1. Crossover and mutation operators are introduced into the employed bee and onlooker bee phases to modify several dimensions of each solution, which not only avoids the need to convert continuous values into discrete ones but also reduces the computational cost. In addition, a novel greedy selection strategy, which selects the feature subsets with higher accuracy and fewer features, helps the algorithm converge quickly. Three QSAR datasets are used for the evaluation of the proposed algorithm. Experimental results show that ABC-PLS-1 outperforms PSO-PLS, WS-PSO-PLS, and BFDE-PLS in accuracy, root mean square error, and the number of selected features. Moreover, we also study whether the scout bee phase should be implemented when tackling regression problems, and draw the interesting conclusion that the scout bee phase is redundant for feature selection in low-dimensional and medium-dimensional regression problems.
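The core loop of a binary ABC, bit-mask food sources improved by single-bit neighbors under greedy selection, can be sketched as follows. The toy fitness stands in for the paper's PLS cross-validation accuracy, the crossover operator of ABC-PLS-1 is omitted, and all names and parameters are illustrative.

```python
import random

def binary_abc(fitness, n_bits, n_sources=6, iters=60, seed=0):
    """Minimal binary ABC-style search: each food source is a bit mask
    (selected descriptors); a neighbor flips one random bit and replaces
    the source only if it is fitter (greedy selection)."""
    rng = random.Random(seed)
    sources = [[rng.random() < 0.5 for _ in range(n_bits)]
               for _ in range(n_sources)]
    for _ in range(iters):
        for s in sources:
            j = rng.randrange(n_bits)
            neighbor = s.copy()
            neighbor[j] = not neighbor[j]
            if fitness(neighbor) > fitness(s):
                s[:] = neighbor      # greedy replacement
    return max(sources, key=fitness)

# Toy fitness: descriptors 0 and 2 are "informative"; subset size is penalized,
# mimicking the preference for accurate subsets with fewer features.
RELEVANT = {0, 2}
def toy_fitness(mask):
    chosen = {i for i, b in enumerate(mask) if b}
    return len(chosen & RELEVANT) - 0.1 * len(chosen)

best = binary_abc(toy_fitness, n_bits=5)
```

On this landscape the search converges to a mask containing exactly the informative descriptors, illustrating why the greedy strategy favors small, accurate subsets.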
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 154
    Publication Date: 2021-02-22
    Description: Background The large volume of medical literature makes it difficult for healthcare professionals to keep abreast of the latest studies that support Evidence-Based Medicine. Natural language processing enhances access to relevant information, and gold standard corpora are required to improve systems. To contribute a new dataset for this domain, we collected the Clinical Trials for Evidence-Based Medicine in Spanish (CT-EBM-SP) corpus. Methods We annotated 1200 texts about clinical trials with entities from the Unified Medical Language System semantic groups: anatomy (ANAT), pharmacological and chemical substances (CHEM), pathologies (DISO), and lab tests, diagnostic or therapeutic procedures (PROC). We doubly annotated 10% of the corpus and measured inter-annotator agreement (IAA) using F-measure. As a use case, we ran medical entity recognition experiments with neural network models. Results This resource contains 500 abstracts of journal articles about clinical trials and 700 announcements of trial protocols (292 173 tokens). We annotated 46 699 entities (13.98% are nested entities). Regarding IAA, we obtained an average F-measure of 85.65% (±4.79, strict match) and 93.94% (±3.31, relaxed match). In the use case experiments, we achieved recognition results ranging from 80.28% (±00.99) to 86.74% (±00.19) average F-measure. Conclusions Our results show that this resource is adequate for experiments with state-of-the-art approaches to biomedical named entity recognition. It is freely distributed at: http://www.lllf.uam.es/ESP/nlpmedterm_en.html. The methods are generalizable to other languages with similar available sources.
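The strict-match IAA figure is a pairwise F-measure over annotation triples; a minimal sketch (the example spans are invented, using the corpus's semantic groups):

```python
def iaa_f_measure(ann_a, ann_b):
    """F-measure between two annotators' entity sets under strict match:
    a match requires an identical (start, end, semantic_group) triple."""
    a, b = set(ann_a), set(ann_b)
    tp = len(a & b)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(b), tp / len(a)
    return 2 * precision * recall / (precision + recall)

# Hypothetical annotations of the same text by two annotators:
annotator_1 = {(0, 8, "DISO"), (15, 22, "CHEM"), (30, 38, "PROC")}
annotator_2 = {(0, 8, "DISO"), (15, 22, "CHEM"), (40, 47, "ANAT")}
f = iaa_f_measure(annotator_1, annotator_2)
```

Here two of three entities agree exactly, giving an F-measure of 2/3; the relaxed-match variant would instead credit overlapping spans.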
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 155
    Publication Date: 2021-04-07
    Description: Background Implementation of evidence-based interventions often involves strategies to engage diverse populations while also attempting to maintain external validity. When using health IT tools to deliver patient-centered health messages, systems-level requirements are often at odds with ‘on-the-ground’ tailoring approaches for patient-centered care or for ensuring equity among linguistically diverse populations. Methods We conducted a fidelity- and acceptability-focused evaluation of the STAR MAMA Program, a 5-month bilingual (English and Spanish) intervention for reducing diabetes risk factors among 181 post-partum women with recent gestational diabetes. The study’s purpose was to explore fidelity to pre-determined ‘core’ (e.g. systems integration) and ‘modifiable’ equity components (e.g. health coaching responsiveness, and variation by language) using an adapted implementation fidelity framework. Participant-level surveys and systems-level databases of message delivery, call completion, and coaching notes were included. Results 96.6% of participants are Latina and 80.9% were born outside the US. Among those receiving the STAR MAMA intervention, 55 received the calls in Spanish (61%) and 35 in English (39%). 90% (n = 81) completed ≥ one week. Initially, systems errors were common and increased triggers for health coach call-backs. Although Spanish speakers had more triggers over the intervention period, the difference was not statistically significant. Of the calls triggering a health coach follow-up, attempts were made for 85.4% (n = 152) of the English call triggers and for 80.0% (n = 279) of the Spanish call triggers (NS). Of attempted calls, health coaching calls were completed for 55.6% (n = 85) of English-language call triggers and for 56.6% of Spanish-language call triggers (NS). Some differences in acceptability were noted by language, with Spanish-speakers reporting higher satisfaction with prevention content (p = 
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 156
    Publication Date: 2021-04-08
    Description: Background Motion capture is the usual method for measuring human movement parameters, and most measurement data are processed partly by hand using commercial software. In this paper, an automatic kinematics data processing method was developed by programming in MATLAB. Methods Motion capture measurements of healthy volunteers were carried out, and the MATLAB program was used for data processing. First, the coordinate data of markers and anatomical points on the human lower limb, measured by the motion capture system, were read and repaired through the usual program and the patch program. Meanwhile, the local coordinate systems of the human femur and tibia were established from anatomical points. Then the flexion/extension, abduction/adduction and internal/external rotation of the knee tibiofemoral joint were obtained by a special coordinate transformation program. Results Using the above methods, motion capture measurements and batch data processing were carried out for squatting and stair climbing in 29 healthy volunteers, and the motion characteristics (flexion/extension, internal/external rotation and adduction/abduction) of the knee joint were obtained. For example, the maximum internal/external rotation was 30.5 degrees in squatting and 14 degrees in stair climbing. The results were also compared with results processed by other research methods and found to be basically consistent, verifying the reliability of our method. After calibration processing, the compiled MATLAB program can be used directly for efficient batch processing, avoiding manual modeling one by one. Conclusion A novel patch program has been developed that reasonably compensates for missing and noisy signals to obtain more complete motion data. 
At the same time, a universal data processing program has been developed for obtaining the relative movement of various components of the human body, and the program can be modified for detailed special analyses. These motion capture technologies can be used to judge whether human body functions are abnormal, provide a reference for rehabilitation treatment and the design of rehabilitation equipment, and evaluate effectiveness before and after surgery.
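The core of such a pipeline, building local segment frames from marker coordinates and extracting joint angles via a coordinate transformation, can be sketched in plain Python. The frame construction and the Euler decomposition below are generic illustrations, not the paper's exact anatomical definitions or MATLAB implementation.

```python
import math

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def cross(a, b): return [a[1] * b[2] - a[2] * b[1],
                         a[2] * b[0] - a[0] * b[2],
                         a[0] * b[1] - a[1] * b[0]]
def unit(a):
    n = math.sqrt(sum(x * x for x in a))
    return [x / n for x in a]

def segment_frame(proximal, distal, medial, lateral):
    """Right-handed anatomical frame from marker positions: z along the
    long axis, x roughly medio-lateral, y completes the set. Rows are the
    axes expressed in world coordinates."""
    z = unit(sub(proximal, distal))
    y = unit(cross(z, unit(sub(lateral, medial))))
    x = cross(y, z)
    return [x, y, z]

def relative_rotation(femur, tibia):
    """R[i][j] = dot(femur axis i, tibia axis j): tibia orientation
    expressed in the femur frame."""
    return [[sum(f[k] * t[k] for k in range(3)) for t in tibia] for f in femur]

def flexion_deg(R):
    """Flexion/extension read off as rotation about the femur x axis
    (illustrative Euler convention; biomechanical conventions vary)."""
    return math.degrees(math.atan2(R[2][1], R[2][2]))

# Synthetic check: tibia frame = femur frame rotated 30 deg about femur x.
femur = segment_frame([0, 0, 1], [0, 0, 0], [-0.1, 0, 0.5], [0.1, 0, 0.5])
c, s = math.cos(math.radians(30)), math.sin(math.radians(30))
tibia = [femur[0],
         [c * femur[1][i] + s * femur[2][i] for i in range(3)],
         [-s * femur[1][i] + c * femur[2][i] for i in range(3)]]
print(round(flexion_deg(relative_rotation(femur, tibia)), 1))  # 30.0
```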
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 157
    Publication Date: 2021-04-07
    Description: An amendment to this paper has been published and can be accessed via the original article.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 158
    Publication Date: 2021-04-07
    Description: Background Accurate, coded problem lists are valuable for data reuse, including clinical decision support and research. However, healthcare providers frequently modify coded diagnoses by including or removing common contextual properties in free-text diagnosis descriptions: uncertainty (suspected glaucoma), laterality (left glaucoma) and temporality (glaucoma 2002). These contextual properties can cause a difference in meaning between the underlying diagnosis codes and the modified descriptions, inhibiting data reuse. We therefore aimed to develop and evaluate an algorithm to identify these contextual properties. Methods A rule-based algorithm called UnLaTem (Uncertainty, Laterality, Temporality) was developed using a single-center dataset including 288,935 diagnosis descriptions, of which 73,280 (25.4%) were modified by healthcare providers. Internal validation of the algorithm was conducted with an independent sample of 980 unique records. A second validation was conducted with 996 records from a Dutch multicenter dataset including 175,210 modified descriptions from five hospitals. Two researchers independently annotated the two validation samples. Performance of the algorithm was determined using the mean recall and precision over the validation samples. The algorithm was applied to the multicenter dataset to determine the actual prevalence of the contextual properties within the modified descriptions per specialty. Results For the single-center dataset, recall (and precision) for removal of uncertainty, uncertainty, laterality and temporality were 100 (60.0), 99.1 (89.9), 100 (97.3) and 97.6 (97.6), respectively. For the multicenter dataset these were 57.1 (88.9), 86.3 (88.9), 99.7 (93.5) and 96.8 (90.1). Within the modified descriptions of the multicenter dataset, 1.3% contained removal of uncertainty, 9.9% uncertainty, 31.4% laterality and 9.8% temporality. 
Conclusions We successfully developed a rule-based algorithm named UnLaTem to identify contextual properties in Dutch modified diagnosis descriptions. UnLaTem could be extended with more trigger terms, new rules and the recognition of term order to increase the performance even further. The algorithm’s rules are available as additional file 2. Implementing UnLaTem in Dutch hospital systems can improve precision of information retrieval and extraction from diagnosis descriptions, which can be used for data reuse purposes such as decision support and research.
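A rule-based detector of this kind reduces to pattern matching over the free-text description. A minimal sketch with hypothetical English trigger terms; UnLaTem's actual rules are Dutch and are published in the paper's additional file 2.

```python
import re

# Hypothetical English trigger terms standing in for UnLaTem's Dutch rules.
RULES = {
    "uncertainty": r"\b(suspected|probable|possible)\b",
    "removal_of_uncertainty": r"\b(confirmed|proven|definite)\b",
    "laterality": r"\b(left|right|bilateral)\b",
    "temporality": r"\b(19|20)\d{2}\b|\b(history of|status post)\b",
}

def contextual_properties(description):
    """Return the set of contextual properties flagged in a free-text
    diagnosis description by simple rule matching."""
    text = description.lower()
    return {name for name, pattern in RULES.items()
            if re.search(pattern, text)}

print(contextual_properties("Suspected glaucoma"))  # {'uncertainty'}
print(contextual_properties("Left glaucoma 2002"))  # laterality + temporality
```

The paper notes that trigger-term coverage and term order matter; a production version would extend the rule set and handle negated or compound triggers.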
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 159
    Publication Date: 2021-04-08
    Description: Background Telerehabilitation has been considered a suitable alternative healthcare delivery system during the COVID-19 outbreak, and many studies have promoted its feasibility in delivering physical care to patients who live with pain and disability. Physiotherapists’ perceptions and willingness are two key factors that influence the provision of remote physiotherapy. Aim To investigate physiotherapists’ perceptions of and willingness to use telerehabilitation in Kuwait during the COVID-19 pandemic and to explore the barriers that may hinder the use of telerehabilitation in this sector. Methods The following methods were used: (1) a cross-sectional survey and (2) face-to-face semi-structured interviews. In the cross-sectional survey, an electronic questionnaire was sent to 747 physiotherapists who were working in the governmental health sector. The questionnaire included four sections: perceptions of telerehabilitation, comfort with technology, willingness to use telerehabilitation, and barriers to using telerehabilitation. Six interviews were conducted with physiotherapy managers to explore the barriers and facilitators of telerehabilitation practice. Data analysis In this study, descriptive data analysis was conducted, and a cross-tabulation technique was used to find the associations between the variables, in which chi-square tests were used to identify the significance of the results, where p 
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 160
    Publication Date: 2021-04-07
    Description: Background Passive sensor data from mobile devices can shed light on daily activities, social behavior, and maternal-child interactions to improve maternal and child health services including mental healthcare. We assessed feasibility and acceptability of the Sensing Technologies for Maternal Depression Treatment in Low Resource Settings (StandStrong) platform. The StandStrong passive data collection platform was piloted with adolescent and young mothers, including mothers experiencing postpartum depression, in Nepal. Methods Mothers (15–25 years old) with infants (
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 161
    Publication Date: 2021-04-20
    Description: Binary cube-free language and ternary square-free language are two “canonical” representatives of a wide class of languages defined by avoidance properties. Each of these two languages can be viewed as an infinite binary tree reflecting the prefix order of its elements. We study how “homogeneous” these trees are, analysing the following parameter: the density of branching nodes along infinite paths. We present combinatorial results and an efficient search algorithm, which together allowed us to obtain the following numerical results for the cube-free language: the minimal density of branching points is between 3509/9120≈0.38476 and 13/29≈0.44828, and the maximal density is between 0.72 and 67/93≈0.72043. We also prove the lower bound 223/868≈0.25691 on the density of branching points in the tree of the ternary square-free language.
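The prefix tree in question can be explored directly: a word is a node of the tree iff it is cube-free, and it branches iff both one-letter extensions remain cube-free. A brute-force sketch (nothing like the paper's efficient search algorithm, but enough to see the tree's shape near the root):

```python
def is_cube_free(word):
    """True iff no factor of the form xxx (x non-empty) occurs in word."""
    n = len(word)
    return not any(word[i:i + L] == word[i + L:i + 2 * L] == word[i + 2 * L:i + 3 * L]
                   for i in range(n) for L in range(1, (n - i) // 3 + 1))

# Grow the prefix tree of the binary cube-free language level by level;
# a node branches iff both one-letter extensions stay cube-free.
words = ["0", "1"]
sizes, branching = [2], []
for _ in range(7):
    branching.append(sum(is_cube_free(w + "0") and is_cube_free(w + "1")
                         for w in words))
    words = [w + c for w in words for c in "01" if is_cube_free(w + c)]
    sizes.append(len(words))
print(sizes)      # level sizes, starting [2, 4, 6, 10, 16, 24, ...]
print(branching)  # branching nodes per level
```

Note that the tree also has dead ends (cube-free words neither of whose extensions is cube-free), which is why branching density along infinite paths needs the combinatorial care the paper describes.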
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 162
    Publication Date: 2021-03-08
    Description: Continuous-time linear systems with uncertain parameters are widely used for modeling real-life processes. The uncertain parameters, contained in the system and input matrices, can be constant or time-varying. In the latter case, they may represent state dependencies of these matrices. Assuming bounded uncertainties, interval methods become applicable for a verified reachability analysis, for feasibility analysis of feedback controllers, or for the design of robust set-valued state estimators. The evaluation of these system models becomes computationally efficient after a transformation into a cooperative state-space representation, where the dynamics satisfy certain monotonicity properties with respect to the initial conditions. To obtain such representations, similarity transformations are required which are not trivial to find for sufficiently wide a-priori bounds of the uncertain parameters. This paper deals with the derivation and algorithmic comparison of two different transformation techniques for which their applicability to processes with constant and time-varying parameters has to be distinguished. An interval-based reachability analysis of the states of a simple electric step-down converter concludes this paper.
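The interval evaluation underlying such a reachability analysis can be illustrated with a minimal (non-verified) interval class and a naive Euler enclosure of a scalar system with an uncertain parameter. A rigorous implementation would additionally need directed rounding and the tighter, cooperativity-exploiting enclosures the abstract describes.

```python
class Interval:
    """Toy interval arithmetic: endpoints are ordinary floats, so this is
    not a verified computation, only an illustration of the idea."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)
    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)
    def __mul__(self, o):
        p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def euler_step(A, x, h):
    """One explicit Euler step x + h*A*x with an interval matrix A and
    interval state x: a naive (wrapping-prone) reachability enclosure."""
    hI = Interval(h, h)
    Ax = [sum((A[i][j] * x[j] for j in range(len(x))), Interval(0, 0))
          for i in range(len(A))]
    return [x[i] + hI * Ax[i] for i in range(len(x))]

# Scalar decay x' = a*x with uncertain a in [-1.2, -0.8], x(0) in [0.9, 1.1].
A = [[Interval(-1.2, -0.8)]]
x = [Interval(0.9, 1.1)]
for _ in range(10):
    x = euler_step(A, x, 0.05)
print(x[0])  # enclosure of x(0.5); stays positive and below the initial bound
```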
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 163
    Publication Date: 2021-04-20
    Description: This study investigates the use of machine-learning approaches to interpret Dissolved Gas Analysis (DGA) data to find incipient faults early in oil-impregnated transformers. Transformers are critical pieces of equipment for transmitting and distributing electrical energy. The failure of a single unit disturbs a huge number of consumers and suppresses economic activities in the vicinity. Because of this, it is important that power utility companies accord high priority to condition monitoring of critical assets. Dissolved gas analysis is a technique popularly used for monitoring the condition of oil-immersed transformers. However, the interpretation of DGA data is inconclusive as far as the determination of incipient faults is concerned and depends largely on the expertise of technical personnel. To achieve a coherent, accurate, and clear interpretation of DGA, this study proposes a novel multinomial classification model, christened KosaNet, that is based on decision trees. Actual DGA data with 2912 entries were used to compare the performance of KosaNet against other algorithms with multiclass classification ability, namely the decision tree, k-NN, Random Forest, Naïve Bayes, and Gradient Boost. Investigative results show that KosaNet demonstrated an improved DGA classification ability, particularly when classifying multinomial data.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 164
    Publication Date: 2021-04-21
    Description: Compared with continuous elements, discontinuous elements advance in processing the discontinuity of physical variables at corner points and discretized models with complex boundaries. However, the computational accuracy of discontinuous elements is sensitive to the positions of element nodes. To reduce the side effect of the node position on the results, this paper proposes employing partially discontinuous elements to compute the time-domain boundary integral equation of 3D elastodynamics. Using the partially discontinuous element, the nodes located at the corner points will be shrunk into the element, whereas the nodes at the non-corner points remain unchanged. As such, a discrete model that is continuous on surfaces and discontinuous between adjacent surfaces can be generated. First, we present a numerical integration scheme of the partially discontinuous element. For the singular integral, an improved element subdivision method is proposed to reduce the side effect of the time step on the integral accuracy. Then, the effectiveness of the proposed method is verified by two numerical examples. Meanwhile, we study the influence of the positions of the nodes on the stability and accuracy of the computation results by cases. Finally, the recommended value range of the inward shrink ratio of the element nodes is provided.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 165
    Publication Date: 2021-04-21
    Description: In this paper, a novel search operation is proposed for the neuroevolution of augmented topologies, namely the difference-based mutation. This operator uses the differences between individuals in the population to perform a more efficient search for the optimal weights and structure of the model. The difference is determined according to the innovation numbers assigned to each node and connection, allowing the changes to be tracked. The implemented neuroevolution algorithm allows backward connections and loops in the topology, and uses a set of mutation operators, including connection merging and deletion. The algorithm is tested on a set of classification problems and the rotary inverted pendulum control problem. The comparison is performed between the basic approach and modified versions, and the sensitivity to parameter values is examined. The experimental results show that the newly developed operator delivers significant improvements to classification quality in several cases and allows finding better control algorithms.
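Innovation numbers make genes comparable across individuals, so a difference-based move can align connections by innovation number before taking differences. The sketch below uses a differential-evolution-style update as an illustration; the paper's exact operator may differ, and the genome encoding here is hypothetical.

```python
# Genomes as {innovation_number: weight}: innovation numbers identify the
# same structural gene across individuals, NEAT-style.
def difference_mutation(target, a, b, f=0.5):
    """Illustrative difference-based mutation: for genes present in the
    target and both donors, shift the target's weight by f times the
    donors' weight difference. Genes absent from a donor are left alone."""
    child = dict(target)
    for innov in a.keys() & b.keys() & target.keys():
        child[innov] = target[innov] + f * (a[innov] - b[innov])
    return child

# Hypothetical genomes; gene 7 is unique to the target, gene 5 to donor a.
target = {1: 0.2, 2: -0.5, 7: 1.0}
a = {1: 0.6, 2: 0.1, 5: 0.3}
b = {1: 0.4, 2: 0.5, 7: -0.2}
print(difference_mutation(target, a, b))
```

Only the shared genes 1 and 2 move; the target's unique recurrent gene 7 is untouched, which is exactly the alignment role innovation numbers play.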
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 166
    Publication Date: 2021-04-01
    Description: Background Deep learning algorithms significantly improve the accuracy of pathological image classification, but the accuracy of breast cancer classification using only single-mode pathological images still cannot meet the needs of clinical practice. Inspired by the real scenario of pathologists reading pathological images for diagnosis, we integrate pathological images with structured data extracted from clinical electronic medical records (EMR) to further improve the accuracy of breast cancer classification. Methods In this paper, we propose a new richer fusion network for the classification of benign and malignant breast cancer based on multimodal data. To allow pathological images to be integrated more fully with structured EMR data, we propose a method to extract a richer multilevel feature representation of the pathological image from multiple convolutional layers. Meanwhile, to minimize the information loss for each modality before data fusion, we use a denoising autoencoder to map the low-dimensional structured EMR data to a high-dimensional representation, instead of reducing the high-dimensional image data to low dimensions before fusion. In addition, the denoising autoencoder naturally generalizes our method to make accurate predictions with partially missing structured EMR data. Results The experimental results show that the proposed method is superior to state-of-the-art methods in terms of average classification accuracy (92.9%). In addition, we have released a dataset containing structured data from 185 patients extracted from EMR and 3764 paired pathological images of breast cancer, which can be publicly downloaded from http://ear.ict.ac.cn/?page_id=1663. Conclusions We utilized a new richer fusion network to integrate highly heterogeneous data, leveraging the structured EMR data to improve the accuracy of pathological image classification. 
Therefore, the application of automatic breast cancer classification algorithms in clinical practice becomes possible. Due to the generality of the proposed fusion method, it can be straightforwardly extended to the fusion of other structured data and unstructured data.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 167
    Publication Date: 2021-04-10
    Description: Background/Introduction Artificial intelligence (AI) in the healthcare sector is receiving attention from researchers and health professionals. Few previous studies have investigated this topic from a multi-disciplinary perspective, including accounting, business and management, decision sciences and health professions. Methods The structured literature review with its reliable and replicable research protocol allowed the researchers to extract 288 peer-reviewed papers from Scopus. The authors used qualitative and quantitative variables to analyse authors, journals, keywords, and collaboration networks among researchers. Additionally, the paper benefited from the Bibliometrix R software package. Results The investigation showed that the literature in this field is emerging. It focuses on health services management, predictive medicine, patient data and diagnostics, and clinical decision-making. The United States, China, and the United Kingdom contributed the highest number of studies. Keyword analysis revealed that AI can support physicians in making a diagnosis, predicting the spread of diseases and customising treatment paths. Conclusions The literature reveals several AI applications for health services and a stream of research that has not fully been covered. For instance, AI projects require skills and data quality awareness for data-intensive analysis and knowledge-based management. Insights can help researchers and health professionals understand and address future research on AI in the healthcare field.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 168
    Publication Date: 2021-04-12
    Description: In recent years, new metaheuristic algorithms have been developed taking inspiration from biological and natural phenomena. This nature-inspired approach to algorithm development has been widely used by many researchers in solving optimization problems. These algorithms have been compared with traditional ones and have been shown to be superior on many complex problems. This paper attempts to describe the nature-based algorithms used in optimizing fuzzy clustering in real-world applications. We briefly describe the optimization methods and the most-cited nature-inspired algorithms published in recent years, along with their authors and the networks and relationships among the works. We believe the paper can serve as a basis for analysis of the new area of nature- and bio-inspired optimization of fuzzy clustering.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 169
    Publication Date: 2021-04-01
    Description: Background Sepsis is a highly lethal and heterogeneous disease. Utilization of an unsupervised method may identify novel clinical phenotypes that lead to targeted therapies and improved care. Methods Our objective was to derive clinically relevant sepsis phenotypes from a multivariate panel of physiological data using subgraph-augmented nonnegative matrix factorization. We utilized data from the Medical Information Mart for Intensive Care III database of patients who were admitted to the intensive care unit with sepsis. The extracted data contained patient demographics, physiological records, sequential organ failure assessment scores, and comorbidities. We applied frequent subgraph mining to extract subgraphs from physiological time series and performed nonnegative matrix factorization over the subgraphs to derive patient clusters as phenotypes. Finally, we profiled these phenotypes based on demographics, physiological patterns, disease trajectories, comorbidities and outcomes, and performed functional validation of their clinical implications. Results We analyzed a cohort of 5782 patients, derived three novel phenotypes of distinct clinical characteristics and demonstrated their prognostic implications on patient outcome. Subgroup 1 included relatively less severe/deadly patients (30-day mortality, 17%) and was the smallest-in-size group (n = 1218, 21%). It was characterized by old age (mean age, 73 years), a male majority (male-to-female ratio, 59-to-41), and complex chronic conditions. Subgroup 2 included the most severe/deadliest patients (30-day mortality, 28%) and was the second-in-size group (n = 2036, 35%). It was characterized by a male majority (male-to-female ratio, 60-to-40), severe organ dysfunction or failure compounded by a wide range of comorbidities, and uniquely high incidences of coagulopathy and liver disease. Subgroup 3 included the least severe/deadly patients (30-day mortality, 10%) and was the largest group (n = 2528, 44%). 
It was characterized by low age (mean age, 60 years), a balanced gender ratio (male-to-female ratio, 50-to-50), the least complicated conditions, and a uniquely high incidence of neurologic disease. These phenotypes were validated to be prognostic factors of mortality for sepsis patients. Conclusions Our results suggest that these phenotypes can be used to develop targeted therapies based on phenotypic heterogeneity and algorithms designed for monitoring, validating and intervening clinical decisions for sepsis patients.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 170
    Publication Date: 2021-03-16
    Description: Background Blood glucose (BG) management is crucial for type-1 diabetes patients, resulting in the necessity of reliable artificial pancreas or insulin infusion systems. In recent years, deep learning techniques have been utilized for more accurate BG level prediction systems. However, continuous glucose monitoring (CGM) readings are susceptible to sensor errors. As a result, inaccurate CGM readings would affect BG prediction and make it unreliable, even if the most optimal machine learning model is used. Methods In this work, we propose a novel approach to predicting blood glucose level with a stacked Long Short-Term Memory (LSTM) based deep recurrent neural network (RNN) model, considering sensor fault. We use the Kalman smoothing technique to correct inaccurate CGM readings due to sensor error. Results For the OhioT1DM (2018) dataset, containing eight weeks’ data from six different patients, we achieve an average RMSE of 6.45 and 17.24 mg/dl for 30 min and 60 min of prediction horizon (PH), respectively. Conclusions To the best of our knowledge, this is the leading average prediction accuracy for the OhioT1DM dataset. Different physiological information, e.g., Kalman-smoothed CGM data, carbohydrates from meals, bolus insulin, and cumulative step counts in a fixed time interval, is crafted to represent meaningful features used as input to the model. The goal of our approach is to lower the difference between the predicted CGM values and the fingerstick blood glucose readings (the ground truth). Our results indicate that the proposed approach is feasible for more reliable BG forecasting that might improve the performance of the artificial pancreas and insulin infusion system for T1D management.
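The Kalman-smoothing step for correcting noisy CGM readings can be illustrated with a 1-D random-walk state model and a fixed-interval (Rauch-Tung-Striebel) smoother. The noise variances `q` and `r` and the sample readings below are assumed values for illustration, not the paper's tuning.

```python
def kalman_smooth(z, q=0.5, r=4.0):
    """Fixed-interval (RTS) smoothing of a 1-D random-walk state: forward
    Kalman filter, then a backward pass blending in future measurements.
    q = assumed process noise variance, r = measurement noise variance."""
    n = len(z)
    xf, pf, xp, pp = [0.0] * n, [0.0] * n, [0.0] * n, [0.0] * n
    x, p = z[0], r
    for k in range(n):                       # forward Kalman filter
        xp[k], pp[k] = x, p + q              # predict (identity dynamics)
        gain = pp[k] / (pp[k] + r)
        x = xp[k] + gain * (z[k] - xp[k])    # update with measurement z[k]
        p = (1 - gain) * pp[k]
        xf[k], pf[k] = x, p
    xs = xf[:]                               # backward RTS pass
    for k in range(n - 2, -1, -1):
        c = pf[k] / pp[k + 1]
        xs[k] = xf[k] + c * (xs[k + 1] - xp[k + 1])
    return xs

readings = [110, 112, 180, 115, 117, 119, 121]  # 180 mimics a sensor spike
smoothed = kalman_smooth(readings)
print([round(v, 1) for v in smoothed])
```

The spike at 180 is pulled sharply toward its neighbors because the smoother sees that subsequent readings return to trend, which is what makes the corrected series a better LSTM input.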
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 171
    Publication Date: 2021-04-09
    Description: Background Transitioning from an old medical coding system to a new one can be challenging, especially when the two coding systems are significantly different. The US experienced such a transition in 2015. Objective This research aims to introduce entropic measures to help users prepare for the migration to a new medical coding system by identifying and focusing preparation initiatives on clinical concepts with more likelihood of adoption challenges. Methods Two entropic measures of coding complexity are introduced. The first measure is a function of the variation in the alphabets of new codes. The second measure is based on the possible number of valid representations of an old code. Results A demonstration of how to implement the proposed techniques is carried out using the 2015 mappings between ICD-9-CM and ICD-10-CM/PCS. The significance of the resulting entropic measures is discussed in the context of clinical concepts that were likely to pose challenges regarding documentation, coding errors, and longitudinal data comparisons. Conclusion The proposed entropic techniques are suitable to assess the complexity between any two medical coding systems where mappings or crosswalks exist. The more the entropy, the more likelihood of adoption challenges. Users can utilize the suggested techniques as a guide to prioritize training efforts to improve documentation and increase the chances of accurate coding, code validity, and longitudinal data comparisons.
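One way to realize the two entropic measures, as we read their descriptions, is Shannon entropy over the character distribution of the new codes mapped to an old code, and log2 of the number of valid representations. A sketch with a hypothetical ICD-9-to-ICD-10 mapping; the paper's exact formulations may differ.

```python
import math
from collections import Counter

def char_entropy(codes):
    """First measure (as we read it): Shannon entropy of the character
    distribution over the new codes mapped to one old code."""
    counts = Counter("".join(codes))
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def mapping_entropy(n_targets):
    """Second measure: entropy of a uniform choice among the n valid
    new-code representations of an old code, i.e. log2(n)."""
    return math.log2(n_targets)

# Hypothetical ICD-10-CM targets for one ICD-9-CM code (illustrative only).
targets = ["S52501A", "S52502A", "S52509A"]
print(round(char_entropy(targets), 3), round(mapping_entropy(len(targets)), 3))
```

Codes with many, highly varied target representations score high on both measures and would be flagged for extra documentation and coder training.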
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 172
    Publication Date: 2021-04-09
    Description: Background Since decision making about treatment with disease-modifying drugs (DMDs) for multiple sclerosis (MS) is preference sensitive, shared decision making between patient and healthcare professional should take place. Patient decision aids could support this shared decision making process by providing information about the disease and the treatment options, eliciting the patient’s preferences, and supporting patients and healthcare professionals in discussing these preferences and matching them with a treatment. Therefore, a prototype of a patient decision aid for MS patients in the Netherlands, based on the principles of multi-criteria decision analysis (MCDA) and following the recommendations of the International Patient Decision Aid Standards, was developed. MCDA was chosen as it might reduce the cognitive burden of considering treatment options and matching patient preferences with the treatment options. Results After determining the scope to include DMDs labelled for relapsing-remitting MS and clinically isolated syndrome, users’ informational needs were assessed using focus groups (N = 19 patients) and best-worst scaling surveys with patients (N = 185) and neurologists and nurses (N = 60) to determine which information about DMDs should be included in the patient decision aid. Next, an online format and computer-based delivery of the patient decision aid were chosen to enable embedding of MCDA. A literature review was conducted to collect evidence on the effectiveness and burden of use of the DMDs. A prototype was developed next, and alpha testing with a total of thirteen patients and four healthcare professionals to evaluate its comprehensibility and usability identified several issues regarding content and framing, the methods for weighting the importance of criteria in the MCDA structure, and the presentation of the conclusions of the patient decision aid ranking the treatment options according to the patient’s preferences. 
Adaptations were made accordingly, but verification of the rankings provided, validation of the patient decision aid, evaluation of the feasibility of implementation and assessing its value for supporting shared decision making should be addressed in further development of the patient decision aid. Conclusion This paper aimed to provide more transparency regarding the developmental process of an MCDA-based patient decision aid for treatment decisions for MS and the challenges faced during this process. Issues identified in the prototype were resolved as much as possible, though some issues remain. Further development is needed to overcome these issues before beta pilot testing with patients and healthcare professionals at the point of clinical decision-making can take place to ultimately enable making conclusions about the value of the MCDA-based patient decision aid for MS patients, healthcare professionals and the quality of care.
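At its simplest, an MCDA ranking of treatment options is a preference-weighted sum over criterion scores. A sketch with hypothetical drugs, criteria, scores and weights; the decision aid's actual MCDA model, criteria and weighting method are more elaborate.

```python
def rank_treatments(scores, weights):
    """Weighted-sum MCDA (one simple MCDA form): rank options by the
    preference-weighted sum of their normalized criterion scores."""
    def value(option):
        return sum(weights[c] * s for c, s in scores[option].items())
    return sorted(scores, key=value, reverse=True)

# Hypothetical DMD options and criteria; scores normalized to [0, 1],
# higher is better (so 'side effects' here means tolerability).
scores = {
    "drug A": {"efficacy": 0.9, "side effects": 0.3, "convenience": 0.4},
    "drug B": {"efficacy": 0.6, "side effects": 0.8, "convenience": 0.7},
    "drug C": {"efficacy": 0.4, "side effects": 0.9, "convenience": 0.9},
}
# Patient-elicited criterion weights; must sum to 1.
weights = {"efficacy": 0.5, "side effects": 0.3, "convenience": 0.2}
print(rank_treatments(scores, weights))  # ['drug B', 'drug C', 'drug A']
```

Changing the weights re-ranks the options, which is precisely why eliciting and verifying patient weights (and how the resulting ranking is framed) was a central issue in the alpha testing described above.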
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 173
    Publication Date: 2021-02-04
    Description: Background Little data exists regarding decision-making preferences for parents and surgeons in pediatric surgery. Here we investigate whether parents and surgeons have similar decision-making preferences as well as which factors influence those preferences. Specifically, we compare parents’ and surgeons’ assessments of the urgency and complexity of pediatric surgical scenarios and the impact of their assessments on decision-making preferences. Methods A survey was emailed to parents of patients evaluated in a university-based pediatric surgery clinic and surgeons belonging to the American Pediatric Surgical Association. The survey asked respondents to rate 6 clinical vignettes for urgency, complexity, and desired level of surgeon guidance using the Controlled Preferences Scale (CPS). Results Regarding urgency, parents were more likely than surgeons to rate scenarios as emergent when cancer was involved (parents: 68.8% cancer vs. 29.5% non-cancer, p 
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 174
    Publication Date: 2021-02-05
    Description: The nonlinearity of activation functions used in deep learning models is crucial for the success of predictive models. Several simple nonlinear functions, including the Rectified Linear Unit (ReLU) and Leaky-ReLU (L-ReLU), are commonly used in neural networks to impose the nonlinearity. In practice, these functions remarkably enhance the model accuracy. However, there is limited insight into the effects of nonlinearity in neural networks on their performance. Here, we investigate the performance of neural network models as a function of nonlinearity using ReLU and L-ReLU activation functions in the context of different model architectures and data domains. We use entropy as a measure of randomness to quantify the effects of nonlinearity in different architecture shapes on the performance of neural networks. We show that the ReLU nonlinearity is mostly a better choice for the activation function when the network has a sufficient number of parameters. However, we found that image classification models with transfer learning seem to perform well with L-ReLU in fully connected layers. We show that the entropy of hidden layer outputs in neural networks can fairly represent the fluctuations in information loss as a function of nonlinearity. Furthermore, we investigate the entropy profile of shallow neural networks as a way of representing their hidden layer dynamics.
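The entropy measurement described above can be illustrated with a short sketch: Gaussian pre-activations stand in for a real hidden layer, and the histogram entropy of the post-activation values is compared for ReLU and L-ReLU (the binning scheme and the leak factor 0.3 are illustrative assumptions, not the authors' setup).

```python
import math, random

def l_relu(x, alpha=0.0):
    # alpha = 0 gives ReLU; alpha > 0 gives Leaky-ReLU
    return x if x > 0 else alpha * x

def histogram_entropy(values, bins=10):
    # Shannon entropy (bits) of a fixed-bin histogram of activation values
    lo, hi = min(values), max(values)
    if hi == lo:
        return 0.0
    counts = [0] * bins
    for v in values:
        idx = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        counts[idx] += 1
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

random.seed(0)
pre_acts = [random.gauss(0, 1) for _ in range(10000)]  # stand-in hidden layer
h_relu = histogram_entropy([l_relu(v) for v in pre_acts])
h_lrelu = histogram_entropy([l_relu(v, alpha=0.3) for v in pre_acts])
print(f"entropy  ReLU: {h_relu:.2f} bits   L-ReLU: {h_lrelu:.2f} bits")
```

Because ReLU collapses all negative pre-activations to zero, its output histogram concentrates in one bin and yields lower entropy than L-ReLU; this is the kind of information-loss difference an entropy profile tracks.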
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 175
    Publication Date: 2021-02-04
    Description: Background The provision of unnecessary Emergency Medical Services (EMS) care remains a challenge throughout the US and contributes to Emergency Department overcrowding, delayed services and lower quality of care. New EMS models of care have shown promise in improving access to health services for patients who do not need urgent care. The goals of this study were (1) to identify factors associated with EMS (911) utilization and (2) to estimate their effects on total EMS calls and transports in a mobile integrated healthcare (MIH) program. Methods The study sample included 110 MIH patients referred to the program or considered high users of EMS services between November 2016 and September 2018. The study employed descriptive statistics and Poisson regressions to estimate the effects of covariates on total EMS calls and transports. Results The typical enrollee is a 60-year-old single Black male living with two other individuals. He has a PCP, takes 12 medications and is compliant with his treatment. The likelihood of calling and/or being transported by EMS was higher for males, patients at high risk for falls, patients with asthma/COPD, psychiatric or behavioral illnesses, and longer travel times to a PCP. Each prescribed medication increased the risk for EMS calls or transports by 4%. The program achieved clear reductions in 911 calls and transports and savings of more than 140,000 USD in the first month. Conclusions This study shows that age, marital status, high fall risk scores, the number of medications, psychiatric/behavioral illness, asthma/COPD, CHF, CVA/stroke and medication compliance may be good predictors of EMS use in an MIH setting. MIH programs can help control utilization of EMS care and reduce both EMS calls and transports.
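Poisson regression, the estimation method named above, models the log of the expected count as linear in the covariates, so a coefficient of 0.04 corresponds to the reported ~4% increase in rate per medication. A minimal sketch on simulated data (the single covariate, sample size and fitting schedule are illustrative assumptions, not the study's analysis):

```python
import math, random

def rpois(lam):
    # Knuth's multiplicative method for sampling a Poisson variate
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def fit_poisson(x, y, lr=1e-4, steps=3000):
    """Batch gradient ascent on the Poisson log-likelihood for
    log E[y] = b0 + b1*x; the gradient is sum((y - mu) * (1, x))."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)
            g0 += yi - mu
            g1 += (yi - mu) * xi
        b0 += lr * g0
        b1 += lr * g1
    return b0, b1

# Hypothetical data: call counts rising ~4% per prescribed medication
random.seed(2)
meds = [random.randint(0, 20) for _ in range(100)]
calls = [rpois(math.exp(0.5 + 0.04 * m)) for m in meds]
centered = [m - 10 for m in meds]  # centering stabilizes the fit
b0, b1 = fit_poisson(centered, calls)
print(f"estimated rate ratio per medication: {math.exp(b1):.3f}")
```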
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 176
    Publication Date: 2021-02-04
    Description: Background Currently, the Internet seems to be a helpful tool for obtaining information about almost any topic we can think of, including diseases, their prevention and treatment approaches. However, doubts exist regarding the quality and readability of such information. This study sought to assess the quality and readability of web-based Arabic information on periodontal disease. Methods In this infodemiological study, the Google, Yahoo!, and Bing search engines were searched using specific Arabic terms on periodontal disease. The first 100 consecutive websites from each engine were obtained. The eligible websites were categorized as commercial, health/professional, journalism, and other. The following tools were applied to assess the quality of the information on the included websites: the Health on the Net Foundation Code of Conduct (HONcode), the Journal of the American Medical Association (JAMA) benchmarks, and the DISCERN tool. The readability was assessed using an online readability tool. Results Of the 300 websites, 89 were eligible for quality and readability analyses. Only two websites (2.3%) were HONcode certified. Based on the DISCERN tool, 43 (48.3%) websites had low scores. The mean score of the JAMA benchmarks was 1.6 ± 1.0, but only 3 (3.4%) websites achieved “yes” responses for all four JAMA criteria. Based on the DISCERN tool, health/professional websites revealed the highest quality of information compared to other website categories. Most of the health/professional websites revealed moderate-quality information, while 55% of the commercial websites, 66% of journalism websites, and 43% of other websites showed poor quality information. Regarding readability, most of the analyzed websites presented simple and readable written content. Conclusions Despite their readable content, the Arabic health information on periodontal disease provided by the analyzed websites falls below the required level of quality.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 177
    Publication Date: 2021-02-04
    Description: Background Adverse drug reactions (ADRs) are an important concern in the medication process and can pose a substantial economic burden for patients and hospitals. Because of the limitations of clinical trials, it is difficult to identify all possible ADRs of a drug before it is marketed. We developed a new model based on data mining technology to predict potential ADRs based on available drug data. Method Based on the Word2Vec model from Natural Language Processing, we propose a new knowledge graph embedding method that embeds drugs and ADRs into their respective vectors and builds a logistic regression classification model to predict whether a given drug will have ADRs. Result First, a new knowledge graph embedding method was proposed, and comparison with similar studies showed that our model not only had high prediction accuracy but also had a simpler model structure. In our experiments, the AUC of the classification model reached a maximum of 0.87, and the mean AUC was 0.863. Conclusion In this paper, we introduce a new knowledge graph embedding method to vectorize drugs and ADRs, and then use a logistic regression classification model to predict whether there is a causal relationship between them. The experiments showed that knowledge graph embedding can effectively encode drugs and ADRs, and that the proposed ADR prediction system is effective.
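A toy sketch of the final classification stage: drug and ADR vectors are paired, combined element-wise, and scored with a logistic regression classifier. The embeddings below are random stand-ins (the paper derives them from a Word2Vec-style knowledge graph embedding), and the element-wise-product feature is an illustrative choice:

```python
import math, random

random.seed(0)
DIM = 8

def sigmoid(z):
    z = max(min(z, 30.0), -30.0)  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(-z))

# Random stand-ins for learned drug/ADR embeddings
drugs = {f"drug{i}": [random.gauss(0, 1) for _ in range(DIM)] for i in range(30)}
adrs = {f"adr{j}": [random.gauss(0, 1) for _ in range(DIM)] for j in range(10)}

def pair_features(d, a):
    # element-wise product lets a linear model score embedding alignment
    return [x * y for x, y in zip(drugs[d], adrs[a])]

# Synthetic causal links: a drug "causes" an ADR when their embeddings align
pairs = [(d, a) for d in drugs for a in adrs]
labels = [1 if sum(pair_features(d, a)) > 0 else 0 for d, a in pairs]

# Logistic regression trained with stochastic gradient descent
w, b, lr = [0.0] * DIM, 0.0, 0.1
for _ in range(200):
    for (d, a), y in zip(pairs, labels):
        f = pair_features(d, a)
        err = y - sigmoid(b + sum(wi * fi for wi, fi in zip(w, f)))
        b += lr * err
        for j in range(DIM):
            w[j] += lr * err * f[j]

def predict(d, a):
    return sigmoid(b + sum(wi * fi for wi, fi in zip(w, pair_features(d, a)))) > 0.5

acc = sum(predict(d, a) == bool(y) for (d, a), y in zip(pairs, labels)) / len(pairs)
print(f"training accuracy: {acc:.2f}")
```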
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 178
    Publication Date: 2021-02-04
    Description: Background Helping Babies Breathe (HBB) is a life-saving program that has helped reduce neonatal morbidity and mortality, but knowledge and skills retention after training remains a significant challenge for sustainability of impact. User-centred design (UCD) can be used to develop solutions to target knowledge and skills maintenance. Methods We applied a process of UCD beginning with understanding the facilitators of, and barriers to, learning and retaining HBB knowledge and skills. HBB Master Trainers and frontline HBB providers participated in a series of focus group discussions (FGDs) to uncover the processes of skills acquisition and maintenance to develop a mobile application called “HBB Prompt”. Themes derived from each FGD were identified and implications for development of the HBB Prompt app were explored, including feasibility of incorporating strategies into the format of an app. Data analysis took place after each iteration in Phase 1 to incorporate feedback and improve subsequent versions of HBB Prompt. Results Six HBB trainers and seven frontline HBB providers participated in a series of FGDs in Phase 1 of this study. Common themes included lack of motivation to practise, improving confidence in ventilation skills, ability to achieve the Golden Minute, fear of forgetting knowledge or skills, importance of feedback, and peer-to-peer learning. Themes identified that were not feasible to address pertained to health system challenges. Feedback about HBB Prompt was generally positive. Based on initial and iterative feedback, HBB Prompt was created with four primary functions: Training Mode, Simulation Mode, Quizzes, and Dashboard/Scoreboard. Conclusions Developing HBB Prompt with UCD to help improve knowledge and skills retention was feasible and revealed key concepts, including drivers for successes and challenges faced for learning and maintaining HBB skills. 
HBB Prompt will be piloted in Phase 2 of this study, where knowledge and skills retention after HBB training will be compared between an intervention group with HBB Prompt and a control group without the app. Trial registration Clinicaltrials.gov (NCT03577054). Retrospectively registered July 5, 2018, https://clinicaltrials.gov/ct2/show/study/NCT03577054.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 179
    Publication Date: 2021-02-22
    Description: In this paper, we present a new framework dedicated to the robust detection of representative variables in high dimensional spaces with a potentially limited number of observations. Representative variables are selected by using an original regularization strategy: they are the centers of specific variable clusters, denoted CORE-clusters, which respect fully interpretable constraints. Each CORE-cluster indeed contains more than a predefined number of variables, and each pair of its variables has a coherent behavior in the observed data. The key advantage of our regularization strategy is therefore that it only requires tuning two intuitive parameters: the minimal dimension of the CORE-clusters and the minimum level of similarity which gathers their variables. Interpreting the role played by a selected representative variable is additionally straightforward, as it behaves similarly to a controlled number of other variables. After introducing and justifying this variable selection formalism, we propose two algorithmic strategies to detect the CORE-clusters, one of them scaling particularly well to high-dimensional data. Results obtained on synthetic as well as real data are finally presented.
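A greedy sketch of the CORE-cluster idea (not the paper's two algorithms): grow groups in which every pair of variables meets a similarity threshold, and keep only groups reaching the minimal dimension; the only two parameters are exactly the ones the abstract highlights.

```python
def core_clusters(sim, min_size, min_sim):
    """Greedy sketch: grow clusters in which every pair of variables
    has similarity >= min_sim; keep clusters with >= min_size members."""
    n = len(sim)
    unassigned = set(range(n))
    clusters = []
    for seed in range(n):
        if seed not in unassigned:
            continue
        cluster = {seed}
        for v in sorted(unassigned - {seed}):
            if all(sim[v][u] >= min_sim for u in cluster):
                cluster.add(v)
        if len(cluster) >= min_size:
            clusters.append(sorted(cluster))
            unassigned -= cluster
    return clusters

# Toy similarity matrix: variables {0,1,2} and {3,4,5} form coherent blocks
S = [[1.0 if i == j else (0.9 if i // 3 == j // 3 else 0.1)
      for j in range(6)] for i in range(6)]
clusters = core_clusters(S, min_size=3, min_sim=0.8)
print(clusters)
```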
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 180
    Publication Date: 2021-02-26
    Description: Air quality modelling that relates meteorological, car traffic, and pollution data is a fundamental problem, approached in several different ways in the recent literature. In particular, a set of such data sampled at a specific location and during a specific period of time can be seen as a multivariate time series, and modelling the values of the pollutant concentrations can be seen as a multivariate temporal regression problem. In this paper, we propose a new method for symbolic multivariate temporal regression, and we apply it to several data sets that contain real air quality data from the city of Wrocław (Poland). Our experiments show that our approach is superior to classical, especially symbolic, ones, both in statistical performance and in the interpretability of the results.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 181
    Publication Date: 2021-02-26
    Description: Functional connectivity (FC) studies have demonstrated the overarching value of studying the brain and its disorders through the undirected weighted graph of the functional magnetic resonance imaging (fMRI) correlation matrix. However, most of the work with the FC depends on the way the connectivity is computed, and it further depends on the manual post-hoc analysis of the FC matrices. In this work, we propose a deep learning architecture, BrainGNN, that learns the connectivity structure as part of learning to classify subjects. It simultaneously applies a graphical neural network to this learned graph and learns to select a sparse subset of brain regions important to the prediction task. We demonstrate the model’s state-of-the-art classification performance on a schizophrenia fMRI dataset and show how introspection leads to disorder-relevant findings. The graphs learned by the model exhibit strong class discrimination, and the sparse subset of relevant regions is consistent with the schizophrenia literature.
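The classical FC pipeline that the paper moves beyond, computing a correlation matrix and thresholding it into an undirected weighted graph, can be sketched in a few lines (toy signals and a fixed threshold; BrainGNN instead learns the graph end-to-end):

```python
import math, random

def correlation_matrix(series):
    """Pearson correlation between every pair of regional time series."""
    def corr(a, b):
        ma, mb = sum(a) / len(a), sum(b) / len(b)
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        sa = math.sqrt(sum((x - ma) ** 2 for x in a))
        sb = math.sqrt(sum((y - mb) ** 2 for y in b))
        return cov / (sa * sb)
    n = len(series)
    return [[corr(series[i], series[j]) for j in range(n)] for i in range(n)]

def fc_graph(C, threshold=0.5):
    """Undirected weighted edges kept after thresholding the FC matrix."""
    n = len(C)
    return [(i, j, C[i][j]) for i in range(n) for j in range(i + 1, n)
            if abs(C[i][j]) >= threshold]

# Toy "fMRI" signals: regions 0 and 1 share a common driver, region 2 does not
random.seed(0)
driver = [random.gauss(0, 1) for _ in range(100)]
series = [
    [d + random.gauss(0, 0.3) for d in driver],
    [d + random.gauss(0, 0.3) for d in driver],
    [random.gauss(0, 1) for _ in range(100)],
]
edges = fc_graph(correlation_matrix(series), threshold=0.5)
print(edges)
```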
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 182
    Publication Date: 2021-08-09
    Description: Background This article investigates the research problem of digital solutions to overcome the pandemic, more closely examining the limited effectiveness and scope of the governmental COVID-19 tracing apps, using the German COVID-19 tracing app (Corona-Warn-App) as an example. A well-designed and effective instrument in the technological toolbox is of utmost importance to overcome the pandemic. Method A multi-methodological design science research approach was applied. In three development and evaluation cycles, we presented, prototyped, and tested user-centered ideas of functional and design improvement. The applied procedure contains (1) a survey featuring 1993 participants from Germany for evaluating the current app, (2) a gathering of recommendations from epidemiologists and from a focus group discussion with IT and health experts identifying relevant functional requirements, and (3) an online survey combined with testing our prototype with 53 participants to evaluate the enhanced tracing app. Results This contribution presents 14 identified issues of the German COVID-19 tracing app, six meta-requirements, and three design principles for COVID-19 tracing apps and future pandemic apps (e.g., more user involvement and transparency). Using an interactive prototype, this study presents an extended pandemic app, containing 13 potential front-end (i.e., information on the regional infection situation, education and health literacy, crowd and event notification) and six potential back-end functional requirements (i.e., ongoing modification of risk score calculation, indoor versus outdoor). In addition, a user story approach for the COVID-19 tracing app was derived from the findings, supporting a holistic development approach. Conclusion Throughout this study, practical relevant findings can be directly transferred to the German and other international COVID-19 tracing applications. 
Moreover, we apply our findings to crisis management theory—particularly pandemic-related apps—and derive interdisciplinary learnings. It might be recommendable for the involved decision-makers and stakeholders to forego classic application management and switch to using an agile setup, which allows for a more flexible reaction to upcoming changes. It is even more important for governments to have a well-established, flexible, design-oriented process for creating and adapting technology to handle a crisis, as this pandemic will not be the last one.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 183
    Publication Date: 2021-08-08
    Description: The current Special Issue contains six papers focused on Multiple Criteria Decision Making (MCDM) problems and the formal techniques applied to derive consistent rankings of them [...]
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 184
    Publication Date: 2021-08-11
    Description: Background Data sparsity is a major limitation to estimating national and global dementia burden. Surveys with full diagnostic evaluations of dementia prevalence are prohibitively resource-intensive in many settings. However, validation samples from nationally representative surveys allow for the development of algorithms for the prediction of dementia prevalence nationally. Methods Using cognitive testing data and data on functional limitations from Wave A (2001–2003) of the ADAMS study (n = 744) and the 2000 wave of the HRS study (n = 6358), we estimated a two-dimensional item response theory model to calculate cognition and function scores for all individuals over 70. Based on diagnostic information from the formal clinical adjudication in ADAMS, we fit a logistic regression model for the classification of dementia status using cognition and function scores and applied this algorithm to the full HRS sample to calculate dementia prevalence by age and sex. Results Our algorithm had a cross-validated predictive accuracy of 88% (86–90) and an area under the curve of 0.97 (0.97–0.98) in ADAMS. Prevalence was higher in females than males and increased with age, with a prevalence of 4% (3–4) in individuals 70–79, 11% (9–12) in individuals 80–89 years old, and 28% (22–35) in those 90 and older. Conclusions Our model had similar or better accuracy compared with previously reviewed algorithms for the prediction of dementia prevalence in HRS, while utilizing more flexible methods. These methods could be more easily generalized and utilized to estimate dementia prevalence in other national surveys.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 185
    Publication Date: 2021-08-06
    Description: Background Advanced analytics, such as artificial intelligence (AI), increasingly gain relevance in medicine. However, patients’ responses to the involvement of AI in the care process remain largely unclear. The study aims to explore whether individuals were more likely to follow a recommendation when a physician used AI in the diagnostic process for a highly (vs. less) severe disease compared to when the physician did not use AI or when AI fully replaced the physician. Methods Participants from the USA (n = 452) were randomly assigned to a hypothetical scenario where they imagined that they received a treatment recommendation after a skin cancer diagnosis (high vs. low severity) from a physician, a physician using AI, or an automated AI tool. They then indicated their intention to follow the recommendation. Regression analyses were used to test hypotheses. Beta coefficients (ß) describe the nature and strength of relationships between predictors and outcome variables; confidence intervals [CI] excluding zero indicate significant mediation effects. Results The total effects reveal the inferiority of automated AI (ß = .47, p = .001 vs. physician; ß = .49, p = .001 vs. physician using AI). Two pathways increase intention to follow the recommendation. When a physician performs the assessment (vs. automated AI), the perception that the physician is real and present (a concept called social presence) is high, which increases intention to follow the recommendation (ß = .22, 95% CI [.09; 0.39]). When AI performs the assessment (vs. physician only), perceived innovativeness of the method is high, which increases intention to follow the recommendation (ß = .15, 95% CI [− .28; − .04]). When physicians use AI, social presence does not decrease and perceived innovativeness increases. 
Conclusion Pairing AI with a physician in medical diagnosis and treatment in a hypothetical scenario using topical therapy and oral medication as treatment recommendations leads to a higher intention to follow the recommendation than AI on its own. The findings might help develop practice guidelines for cases where AI involvement benefits outweigh risks, such as using AI in pathology and radiology, to enable augmented human intelligence and inform physicians about diagnoses and treatments.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 186
    Publication Date: 2021-08-09
    Description: Background Rhinosinusitis is an inflammation of the sinonasal cavity which affects roughly one in seven people per year. Acute rhinosinusitis (ARS) is mostly, apart from allergic etiology, caused by a viral infection and, in some cases (30–50%), by a bacterial superinfection. Antibiotics, indicated only in rare cases according to the EPOS guidelines, are nevertheless prescribed in more than 80% of ARS cases, which increases the prevalence of resistant bacterial strains in the population. Methods We have designed a clinical decision support system (CDSS), RHINA, based on a web application created in HTML5 using JavaScript, jQuery, CSS3 and the PHP scripting language. The presented CDSS RHINA helps general physicians decide whether or not to prescribe antibiotics to patients with rhinosinusitis. Results In a retrospective study of a total of 1465 patients with rhinosinusitis, the CDSS RHINA showed 90.2% consistency with the diagnosis and treatment made by the ENT specialist. Conclusion Patients assessed with the assistance of our CDSS RHINA would decrease the over-prescription of antibiotics, which in turn would help to reduce bacterial resistance to the most commonly prescribed antibiotics.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
  • 187
    Publication Date: 2021-08-06
    Description: This article presents a cooperative optimization approach (COA) for distributing service points for mobility applications, which generalizes and refines a previously proposed method. COA is an iterative framework for optimizing service point locations, combining an optimization component with user interaction on a large scale and a machine learning component that learns user needs and provides the objective function for the optimization. The previously proposed COA was designed for mobility applications in which single service points are sufficient for satisfying individual user demand. This framework is generalized here for applications in which the satisfaction of demand relies on the existence of two or more suitably located service stations, such as in the case of bike/car sharing systems. A new matrix factorization model is used as surrogate objective function for the optimization, allowing us to learn and exploit similar preferences among users w.r.t. service point locations. Based on this surrogate objective function, a mixed integer linear program is solved to generate an optimized solution to the problem w.r.t. the currently known user information. User interaction, refinement of the matrix factorization, and optimization are iterated. An experimental evaluation analyzes the performance of COA with special consideration of the number of user interactions required to find near optimal solutions. The algorithm is tested on artificial instances, as well as instances derived from real-world taxi data from Manhattan. Results show that the approach can effectively solve instances with hundreds of potential service point locations and thousands of users, while keeping the user interactions reasonably low. 
A bound on the number of user interactions required to obtain full knowledge of user preferences is derived, and results show that with 50% of performed user interactions the solutions generated by COA feature optimality gaps of only 1.45% on average.
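The surrogate objective described above is a matrix factorization over users and candidate service point locations; a minimal SGD sketch with one withheld preference to be predicted (the toy ratings and latent dimension 2 are illustrative assumptions, not the paper's model):

```python
import random

def factorize(R, mask, k=2, lr=0.02, steps=5000):
    """SGD matrix factorization: approximate the observed entries of R
    (mask[u][i] == 1) by dot(U[u], V[i]); missing entries get predicted."""
    random.seed(0)
    n, m = len(R), len(R[0])
    U = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(n)]
    V = [[random.uniform(-0.1, 0.1) for _ in range(k)] for _ in range(m)]
    for _ in range(steps):
        for u in range(n):
            for i in range(m):
                if not mask[u][i]:
                    continue
                err = R[u][i] - sum(U[u][f] * V[i][f] for f in range(k))
                for f in range(k):
                    u_f = U[u][f]
                    U[u][f] += lr * err * V[i][f]
                    V[i][f] += lr * err * u_f
    return U, V

# Toy preference matrix: two user groups, one withheld entry
R = [[5, 4, 1, 1],
     [4, 5, 1, 2],
     [1, 1, 5, 4],
     [2, 1, 4, 5]]
mask = [[1] * 4 for _ in range(4)]
mask[0][3] = 0  # user 0 was never asked about location 3
U, V = factorize(R, mask)
pred = sum(U[0][f] * V[3][f] for f in range(2))
print(f"predicted preference of user 0 for location 3: {pred:.2f}")
```

The learned factors exploit similar preferences among users, which is how the surrogate lets the optimizer score locations that a given user never rated.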
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 188
    Publication Date: 2021-08-05
    Description: Duplication-Transfer-Loss (DTL) reconciliation is a widely used computational technique for understanding gene family evolution and inferring horizontal gene transfer (transfer for short) in microbes. However, most existing models and implementations of DTL reconciliation cannot account for the effect of unsampled or extinct species lineages on the evolution of gene families, likely affecting their accuracy. Accounting for the presence and possible impact of any unsampled species lineages, including those that are extinct, is especially important for inferring and studying horizontal transfer since many genes in the species lineages represented in the reconciliation analysis are likely to have been acquired through horizontal transfer from unsampled lineages. While models of DTL reconciliation that account for transfer from unsampled lineages have already been proposed, they use a relatively simple framework for transfer from unsampled lineages and cannot explicitly infer the location on the species tree of each unsampled or extinct lineage associated with an identified transfer event. Furthermore, there does not yet exist any systematic studies to assess the impact of accounting for unsampled lineages on the accuracy of DTL reconciliation. 
In this work, we address these deficiencies by (i) introducing an extended DTL reconciliation model, called the DTLx reconciliation model, that accounts for unsampled and extinct species lineages in a new, more functional manner compared to existing models, (ii) showing that optimal reconciliations under the new DTLx reconciliation model can be computed just as efficiently as under the fastest DTL reconciliation model, (iii) providing an efficient algorithm for sampling optimal DTLx reconciliations uniformly at random, (iv) performing the first systematic simulation study to assess the impact of accounting for unsampled lineages on the accuracy of DTL reconciliation, and (v) comparing the accuracies of inferring transfers from unsampled lineages under our new model and the only other previously proposed parsimony-based model for this problem.
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 189
    Publication Date: 2021-08-15
    Description: Adaptive and highly synchronized supply chains can avoid a cascading rise-and-fall inventory dynamic and mitigate ripple effects caused by operational failures. This paper aims to demonstrate how a deep reinforcement learning agent can synchronize inbound and outbound flows and support business continuity operating in a stochastic and nonstationary environment if end-to-end visibility is provided. The agent is built upon the Proximal Policy Optimization (PPO) algorithm, which does not require a hardcoded action space or exhaustive hyperparameter tuning. These features, complemented with a straightforward supply chain environment, give rise to a general and task-agnostic approach to adaptive control in multi-echelon supply chains. The proposed approach is compared with the base-stock policy, a well-known method in classic operations research and inventory control theory that is prevalent in continuous-review inventory systems. The paper concludes with the statement that the proposed solution can perform adaptive control in complex supply chains. The paper also postulates fully fledged supply chain digital twins as a necessary infrastructural condition for scalable real-world applications.
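The base-stock benchmark mentioned above can be stated in a few lines: each review period, order just enough to restore the inventory position (on hand + on order - backorders) to a fixed level S. A minimal simulation sketch (the demand distribution, lead time and cost proxies are illustrative assumptions):

```python
import random

def simulate_base_stock(S, demand, lead_time=2):
    """Each period: receive the oldest outstanding order, serve demand
    (and any backlog), then order up to base-stock level S."""
    on_hand, pipeline, backlog = S, [0] * lead_time, 0
    holding = backordered = 0
    for d in demand:
        on_hand += pipeline.pop(0)              # receive the oldest order
        served = min(on_hand, d + backlog)
        on_hand -= served
        backlog = backlog + d - served
        position = on_hand + sum(pipeline) - backlog
        pipeline.append(max(S - position, 0))   # order-up-to-S rule
        holding += on_hand                      # proxy for holding cost
        backordered += backlog                  # proxy for backlog cost
    return holding, backordered

random.seed(0)
demand = [random.randint(0, 4) for _ in range(200)]
results = {S: simulate_base_stock(S, demand) for S in (4, 8, 12)}
for S, (h, b) in results.items():
    print(f"S={S:2d}  holding={h:4d}  backlog={b:4d}")
```

Raising S trades backlog for holding stock, which is the trade-off a learned PPO policy must beat.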
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 190
    Publication Date: 2021-08-13
    Description: Brain-storm optimization (BSO), a population-based optimization algorithm, exhibits poor search performance, premature convergence, and a high probability of falling into local optima. To address these problems, in this study we developed the adaptive mechanism-based BSO (ABSO) algorithm based on chaotic local search. The adjustment of the search space using a local search method based on an adaptive self-scaling mechanism balances the global exploration and local exploitation of the ABSO algorithm, effectively preventing the algorithm from falling into local optima and improving its convergence accuracy. To verify the stability and effectiveness of the proposed ABSO algorithm, its performance was tested using 29 benchmark test functions, and the mean and standard deviation were compared with those of five other optimization algorithms. The results showed that ABSO outperforms the other algorithms in terms of stability and convergence accuracy. In addition, the performance of ABSO was further verified through a nonparametric statistical test.
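The chaotic local search at the heart of ABSO can be sketched with the logistic map as chaos generator: perturb the incumbent with a chaotic sequence while adaptively shrinking the search radius (one-dimensional toy objective; the shrink factor and map choice are illustrative, not the exact ABSO mechanism):

```python
import math

def chaotic_local_search(f, x0, lo, hi, iters=200):
    """Perturb the incumbent with a logistic-map chaotic sequence while
    shrinking the search radius, keeping only improving moves."""
    z = 0.7                      # chaotic seed in (0, 1), not a fixed point
    best, fbest = x0, f(x0)
    radius = (hi - lo) / 2
    for _ in range(iters):
        z = 4.0 * z * (1.0 - z)  # logistic map at r = 4 (fully chaotic)
        cand = min(max(best + radius * (2 * z - 1), lo), hi)
        fc = f(cand)
        if fc < fbest:
            best, fbest = cand, fc
        radius *= 0.98           # self-scaling: gradually localize the search
    return best, fbest

# Toy multimodal objective; global minimum f(0) = 0
f = lambda x: x * x + 0.3 * (1 - math.cos(8 * x))
best, fbest = chaotic_local_search(f, x0=2.5, lo=-4, hi=4)
print(f"best x = {best:.3f}, f(best) = {fbest:.3f}")
```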
    Electronic ISSN: 1999-4893
    Topics: Computer Science
  • 191
    Publication Date: 2021-08-12
    Description: Background Data sharing is considered a crucial part of modern medical research. Unfortunately, despite its advantages, it often faces obstacles, especially data privacy challenges. As a result, various approaches and infrastructures have been developed that aim to ensure that patients and research participants remain anonymous when data is shared. However, privacy protection typically comes at a cost, e.g. restrictions regarding the types of analyses that can be performed on shared data. What is lacking is a systematization making the trade-offs taken by different approaches transparent. The aim of the work described in this paper was to develop a systematization for the degree of privacy protection provided and the trade-offs taken by different data sharing methods. Based on this contribution, we categorized popular data sharing approaches and identified research gaps by analyzing combinations of promising properties and features that are not yet supported by existing approaches. Methods The systematization consists of different axes. Three axes relate to privacy protection aspects and were adopted from the popular Five Safes Framework: (1) safe data, addressing privacy at the input level, (2) safe settings, addressing privacy during shared processing, and (3) safe outputs, addressing privacy protection of analysis results. Three additional axes address the usefulness of approaches: (4) support for de-duplication, to enable the reconciliation of data belonging to the same individuals, (5) flexibility, to be able to adapt to different data analysis requirements, and (6) scalability, to maintain performance with increasing complexity of shared data or common analysis processes. 
Results Using the systematization, we identified three different categories of approaches: distributed data analyses, which exchange anonymous aggregated data; secure multi-party computation protocols, which exchange encrypted data; and data enclaves, which store pooled individual-level data in secure environments that can be accessed for analysis. We identified important research gaps, including a lack of approaches enabling the de-duplication of horizontally distributed data or providing a high degree of flexibility. Conclusions There are fundamental differences between data sharing approaches and several gaps in their functionality that may be interesting to investigate in future work. Our systematization can make the properties of privacy-preserving data sharing infrastructures more transparent and help decision makers and regulatory authorities better understand the trade-offs taken.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 192
    Publication Date: 2021-08-14
    Description: Background It is encouraging to see a substantial increase in individuals surviving cancer, even more so since most of them will have a positive effect on society by returning to work. However, many cancer survivors have unmet needs, especially when it comes to improving their quality of life (QoL). Only a few survivors are able to meet all of the recommendations regarding well-being, and there is a body of evidence that cancer survivors’ needs often remain neglected in health policy and national cancer control plans. This increases the impact of inequalities in cancer care and adds a dangerous component to it. The inequalities affect the individual survivors, their careers, their relatives, and society as a whole. The current study will evaluate the impact of the use of big data analytics and artificial intelligence on the self-efficacy of participants following an intervention supported by digital tools. The secondary endpoints include evaluation of the impact of patient trajectories (from retrospective data) and patient-gathered health data on prediction and improved intervention against possible secondary disease or negative outcomes (e.g. late toxicities, fatal events). Methods/design The study is designed as a single-case experimental prospective study in which each individual serves as their own control, with baseline measurements obtained at recruitment and subsequent measurements performed every 6 months during follow-up. The measurements will involve CASE-cancer, the Patient Activation Measure and the System Usability Scale. The study will involve 160 survivors (80 survivors of breast cancer and 80 survivors of colorectal cancer) from four countries: Belgium, Latvia, Slovenia, and Spain. The intervention will be implemented via a digital tool (mHealthApplication), collecting objective biomarkers (vital signs) and subjective biomarkers (PROs) with the support of an (embodied) conversational agent. 
Additionally, the Clinical Decision Support System (CDSS), including visualization of cohorts and trajectories, will enable oncologists to personalize treatment for an efficient care plan and follow-up management. Discussion We expect that cancer survivors will significantly increase their self-efficacy following the personalized intervention supported by the mHealthApplication, compared with control measurements at recruitment. We expect to observe improvement in healthy habits, disease self-management and self-perceived QoL. Trial registration ISRCTN97617326. https://doi.org/10.1186/ISRCTN97617326. Original Registration Date: 26/03/2021.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 193
    Publication Date: 2021-08-11
    Description: This paper describes a fast algorithm for constructing the equation automaton directly from the well-known Thompson automaton associated with a regular expression. Allauzen and Mohri presented a unified construction of small automata and gave a construction of the equation automaton with time and space complexity in O(m log m + m²), where m denotes the number of Thompson automaton transitions. It is based on two classical automata operations, namely epsilon-removal and Hopcroft’s algorithm for Deterministic Finite Automata (DFA) minimization. Using the notion of c-continuation, Ziadi et al. presented a fast computation of the equation automaton in O(m²) time. In this paper, we design an output-sensitive algorithm combining the advantages of the previous algorithms and show that its computational complexity can be reduced to O(m×|Q≡e|), where |Q≡e| denotes the number of states of the equation automaton, using epsilon-removal and Bubenzer’s minimization algorithm for Acyclic Deterministic Finite Automata (ADFA).
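Epsilon-removal, one of the two classical automata operations mentioned in the abstract, starts from the epsilon-closure of each state. A minimal sketch of that first step (the adjacency-dict representation is an assumption for illustration, not the paper's data structure):

```python
def epsilon_closure(state, eps):
    """Set of states reachable from `state` via epsilon-transitions only.
    `eps` maps a state to the list of its epsilon-successors; this is a
    generic illustration of the first step of epsilon-removal."""
    stack, seen = [state], {state}
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):       # follow each epsilon edge once
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen
```

Full epsilon-removal would then redirect each labelled transition to act from every state in the closure; that rewiring is omitted here.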
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 194
    Publication Date: 2021-08-09
    Description: Background A clinical librarian is a member of the medical team in many countries. To strengthen this new role, librarians need to acquire professional skills in order to provide information services to medical staff. In this study, we aimed to explore the skills required for the presence of a clinical librarian in the treatment team. Methods We conducted a qualitative study in which 15 experienced librarians were interviewed about information services. A treatment team was also involved in this study, recruited using purposive-convenience and snowball sampling methods. The data collection tool was a semi-structured interview that continued until the data were saturated; the data were then analysed using thematic analysis. Results From the interviews, 158 primary codes and 107 main codes were extracted into 25 subclasses. After careful evaluation and integration of subclasses and classes, they were finally classified into 13 categories and four main themes, namely the clinical librarian’s role, professional and specialized skills, communication skills, and training programs. Conclusion The results showed that specialized skills and training programs for the clinical librarian are defined based on his/her duties in the treatment team. We also identified the most important key skills for the clinical librarian in two categories, professional and communication skills, such as specialized information search, content production, resource management, familiarity with various sources related to evidence-based medicine, teamwork, and effective communication. To acquire these skills, officials and policy-makers should develop and implement related educational programs at medical universities and colleges.
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 195
    Publication Date: 2021-08-09
    Description: Fuzzy goal programming has important applications in many areas of supply chain, logistics, transportation and shipping business. Business management is complex, with many interactions between the factors of its components. Maritime transport is the locomotive of world trade: approximately 90% of the world’s products are transported by sea. Optimizing maritime operations is a challenge that offers technical, operational and financial benefits. Fuzzy goal programming models have attracted the interest of many scholars; the objective of this paper is therefore to investigate the problem of minimizing total cost and minimizing the loss or damage of containers returned from the destination port. There are various types of fuzzy goal programming problems based on models and solution methods. This paper employs fuzzy goal programming with triangular fuzzy numbers, membership functions, constraints and assumptions, as well as the variables and parameters, for optimizing the solution of the model problem. The proposed model presents the mathematical algorithm and reveals the optimal solution according to a satisfaction rank from 0 to 1. Providing a theoretical background, this study offers novel ideas to researchers, decision makers and authorities.
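The membership functions attached to triangular fuzzy numbers take a standard textbook form, which is what yields satisfaction degrees between 0 and 1. The sketch below shows the generic triangle (a, b, c), not the paper's specific goals or parameter values:

```python
def triangular_membership(x, a, b, c):
    """Membership degree of x in the triangular fuzzy number (a, b, c):
    0 outside [a, c], rising linearly to 1 at the peak b, then falling.
    Generic textbook definition, not the paper's specific model."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    if x < b:
        return (x - a) / (b - a)       # rising edge
    return (c - x) / (c - b)           # falling edge
```

In a fuzzy goal program, each goal's achievement is scored by such a membership value, and the solver maximizes the overall satisfaction rank.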
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 196
    Publication Date: 2021-08-10
    Description: Recently, deep learning has enabled a huge leap forward in image inpainting. However, due to memory and computational limitations, most existing methods can handle only low-resolution inputs, typically less than 1 K. With the improvement of Internet transmission capacity and mobile device cameras, the resolution of image and video sources available to users via the cloud or locally is increasing. For high-resolution images, common inpainting methods simply upsample the inpainted result of the downscaled image, yielding a blurry result. There is therefore an urgent need to reconstruct the missing high-frequency information in high-resolution images and generate sharp texture details. Hence, we propose a general deep learning framework for high-resolution image inpainting, which first hallucinates a semantically continuous blurred result using low-resolution inpainting, suppressing computational overhead. The sharp high-frequency details at the original resolution are then reconstructed using super-resolution refinement. Experimentally, our method achieves inspiring inpainting quality on 2K and 4K resolution images, ahead of state-of-the-art high-resolution inpainting techniques. This framework is expected to be popularized for high-resolution image editing tasks on personal computers and mobile devices in the future.
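The coarse-to-fine composition described above (downscale, inpaint at low resolution, restore resolution, keep known pixels) can be sketched structurally. The paper's components are learned networks; the functions below are crude NumPy stand-ins that only illustrate how the stages compose, and all names are illustrative:

```python
import numpy as np

def downsample(img, factor):
    # stride-based downsampling (stand-in for a proper resizing step)
    return img[::factor, ::factor]

def upsample(img, factor):
    # nearest-neighbour upsampling (stand-in for super-resolution refinement)
    return np.kron(img, np.ones((factor, factor)))

def two_stage_inpaint(img, mask, inpaint_lowres, factor=4):
    """Sketch of the two-stage idea: inpaint a downscaled copy, restore the
    original resolution, then keep known pixels from the input and take
    filled pixels from the coarse result. `inpaint_lowres` stands in for
    the learned low-resolution inpainting network."""
    low = inpaint_lowres(downsample(img, factor), downsample(mask, factor))
    coarse = upsample(low, factor)
    return np.where(mask, coarse, img)   # mask == True marks missing pixels
```

In the actual framework, the upsampling stage is replaced by a super-resolution model that regenerates high-frequency texture rather than merely enlarging pixels.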
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 197
    Publication Date: 2021-07-27
    Description: Subgraph and supergraph search methods are promising techniques for the development of new drugs. For example, the chemical structure of favipiravir—an antiviral treatment for influenza—resembles the structure of some components of RNA. Represented as graphs, such compounds are similar to a subgraph of favipiravir. However, the existing supergraph search methods can only discover compounds that match exactly. We propose a novel problem, called similar supergraph search, and design an efficient algorithm to solve it. The problem is to identify all graphs in a database that are similar to any subgraph of a query graph, where similarity is defined as edit distance. Our algorithm represents the set of candidate subgraphs by a code tree, which it uses to efficiently compute edit distance. With a distance threshold of zero, our algorithm is equivalent to an existing efficient algorithm for exact supergraph search. Our experiments show that the computation time increased exponentially as the distance threshold increased, but increased sublinearly with the number of graphs in the database.
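The similarity criterion above is graph edit distance, whose computation requires the paper's code-tree search. As a much simpler analogue of edit-distance thresholding, the classic Levenshtein distance on sequences can be sketched; this only illustrates the thresholding idea, not the graph algorithm itself:

```python
def edit_distance(a, b):
    """Classic Levenshtein (sequence edit) distance via dynamic
    programming; a simpler analogue of the graph edit distance used in
    the paper, shown only to illustrate distance-threshold filtering."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i                    # deletions
    for j in range(n + 1):
        d[0][j] = j                    # insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # delete
                          d[i][j - 1] + 1,        # insert
                          d[i - 1][j - 1] + cost) # substitute
    return d[m][n]

def within_threshold(a, b, k):
    # a threshold of 0 corresponds to exact matching, as in the abstract
    return edit_distance(a, b) <= k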
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 198
    Publication Date: 2021-08-09
    Description: Logical Analysis of Data is a procedure aimed at identifying relevant features in data sets with both positive and negative samples. The goal is to build Boolean formulas, represented by strings over {0,1,-} called patterns, which can be used to classify new samples as positive or negative. Since a data set can be explained in alternative ways, many computational problems arise related to the choice of a particular set of patterns. In this paper we study the computational complexity of several of these pattern problems (showing that they are, in general, computationally hard) and we propose some integer programming models that appear to be effective. We describe an ILP model for finding the minimum-size set of patterns explaining a given set of samples and another one for the problem of determining whether two sets of patterns are equivalent, i.e., they explain exactly the same samples. We base our first model on a polynomial procedure that computes all patterns compatible with a given set of samples. Computational experiments substantiate the effectiveness of our models on fairly large instances. Finally, we conjecture that the existence of an effective ILP model for finding a minimum-size set of patterns equivalent to a given set of patterns is unlikely, due to the problem being NP-hard and co-NP-hard at the same time.
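The pattern semantics described above are standard: a pattern over {0,1,-} covers a binary sample when every non-'-' position agrees. A minimal sketch of coverage and of a toy classifier built from positive and negative pattern sets (the tie-handling rule here is an assumption for illustration):

```python
def matches(pattern, sample):
    """True when the pattern over {0,1,-} covers the binary sample:
    every position that is not '-' must agree (standard LAD semantics)."""
    return all(p == '-' or p == s for p, s in zip(pattern, sample))

def classify(sample, pos_patterns, neg_patterns):
    """Toy LAD-style classifier: '+' if only positive patterns cover the
    sample, '-' if only negative ones do, '?' otherwise (the handling of
    ties and uncovered samples is an illustrative assumption)."""
    pos = any(matches(p, sample) for p in pos_patterns)
    neg = any(matches(p, sample) for p in neg_patterns)
    if pos and not neg:
        return '+'
    if neg and not pos:
        return '-'
    return '?'
```

The ILP models in the paper operate over such pattern sets, e.g. selecting a minimum-size subset that still covers all samples of one class and none of the other.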
    Electronic ISSN: 1999-4893
    Topics: Computer Science
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 199
    Publication Date: 2021-04-26
    Description: Background A range of factors can reduce the effectiveness of treatment prescribed for the long-term management of chronic health conditions, such as growth disorders. In particular, prescription medications may not achieve the positive outcomes expected because approximately half of patients adhere poorly to the prescribed treatment regimen. Methods Adherence to treatment has previously been assessed using relatively unreliable subjective methods, such as patient self-reporting during clinical follow-up, or counting prescriptions filled or vials returned by patients. Here, we report on a new approach, the use of electronically recorded objective evidence of date, time, and dose taken which was obtained through a comprehensive eHealth ecosystem, based around the easypod™ electromechanical auto-injection device and web-based connect software. The benefits of this eHealth approach are also illustrated here by two case studies, selected from the Finnish cohort of the easypod™ Connect Observational Study (ECOS), a 5-year, open-label, observational study that enrolled children from 24 countries who were being treated with growth hormone (GH) via the auto-injection device. Results Analyses of data from 9314 records from the easypod™ connect database showed that, at each time point studied, a significantly greater proportion of female patients had high adherence (≥ 85%) than male patients (2849/3867 [74%] vs 3879/5447 [71%]; P 
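The reported adherence proportions can be reproduced directly from the counts quoted above (2849/3867 female and 3879/5447 male patients with adherence ≥ 85%):

```python
def high_adherence_rate(high, total):
    # percentage of patients with adherence >= 85%, rounded to a whole percent
    return round(100 * high / total)
```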
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...
  • 200
    Publication Date: 2021-03-21
    Description: Background In recent decades, the use of computerized decision support software (CDSS)-integrated telephone triage (TT) has become an important tool for managing rising healthcare demands and overcrowding in the emergency department. Though these services have generally been shown to be effective, large gaps in the literature exist with regard to the overall quality of these systems. In the current systematic review, we aim to document the consistency of decisions that are generated in CDSS-integrated TT. Furthermore, we also seek to map those factors in the literature that have been identified to have an impact on the consistency of generated triage decisions. Methods As part of the TRANS-SENIOR international training and research network, a systematic review of the literature was conducted in November 2019. PubMed, Web of Science, CENTRAL, and the CINAHL database were searched. Quantitative articles including a CDSS component and addressing consistency of triage decisions and/or factors associated with triage decisions were eligible for inclusion in the current review. Studies exploring the use of other types of digital support systems for triage (i.e. web chat, video conferencing) were excluded. Quality appraisal of included studies was performed independently by two authors using the Methodological Index for Non-Randomized Studies. Results From a total of 1551 records that were identified, 39 full-texts were assessed for eligibility and seven studies were included in the review. All of the studies (n = 7) identified as part of our search were observational and were based on nurse-led telephone triage. Scientific efforts investigating our first aim were very limited. In total, two articles were found to investigate the consistency of decisions that are generated in CDSS-integrated TT. 
Research efforts were targeted largely towards the second aim of our study: all of the included articles reported operator- (n = 6), patient- (n = 1), and/or CDSS-related (n = 2) characteristics as influencing the consistency of CDSS-integrated TT decisions. Conclusion To date, some efforts have been made to better understand how the use of CDSS-integrated TT systems may vary across settings. In general, however, the evidence base surrounding this field of literature is largely inconclusive. Further evaluations are needed to better understand this area of research. Protocol registration The protocol for this study is registered in the PROSPERO database (registration number: CRD42020146323).
    Electronic ISSN: 1472-6947
    Topics: Computer Science , Medicine
    Published by BioMed Central
    Location Call Number Expected Availability
    BibTip Others were also interested in ...