In this paper, we consider the Cauchy problem for a three-component Novikov system on the line. We give a construction of the initial data with , such that the corresponding solution to the three-component Novikov system starting from is discontinuous at in the metric of , which implies the ill-posedness for this system in .
Nano-hydroxyapatite (n-HA) is the main inorganic component of natural bone and has been widely used as a reinforcing filler for polymers in bone materials; it can promote cell adhesion, proliferation, and differentiation. It can also mediate interactions between cells and material surfaces through selective protein adsorption and has therefore long been a research hotspot in orthopedic materials. However, n-HA nano-particles inherently tend to agglomerate and are difficult to disperse evenly in a polymer. In addition, n-HA nano-particles differ from biological apatite in their trace elements, so their biological activity needs improvement, and their slow degradation in vivo has seriously hindered the application of n-HA in bone repair. Therefore, the modification of n-HA has been extensively reported in the literature. This article reviews the physical modification and various chemical modification methods of n-HA in recent years, as well as their effects; the chemical modification methods in particular are reviewed in detail. Finally, a summary and suggestions for the modification of n-HA are proposed, providing a useful reference for achieving high-performance n-HA in biomedical applications.
This paper explores the application of Random Matrix Theory (RMT) as a methodological enhancement for portfolio selection within financial markets. Traditional approaches to portfolio optimization often rely on historical estimates of correlation matrices, which are particularly susceptible to instabilities. To address this challenge, we combine a data preprocessing technique based on the Hilbert transformation of returns with RMT to refine the accuracy and robustness of correlation matrix estimation. By comparing empirical correlations with those generated through RMT, we reveal non-random properties and uncover underlying relationships within financial data. We then utilize this methodology to construct the correlation network dependence structure used in portfolio optimization. The empirical analysis presented in this paper validates the effectiveness of RMT in enhancing portfolio diversification and risk management strategies. This research contributes by offering investors and portfolio managers methodological insights for constructing portfolios that are more stable, robust, and diversified. At the same time, it advances our comprehension of the intricate statistical principles underlying multivariate financial data.
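The eigenvalue-filtering idea behind RMT cleaning of a correlation matrix can be sketched in a few lines. This is a minimal illustration assuming standardized returns in a NumPy array; the Hilbert-transform preprocessing and the paper's exact filtering rule are not reproduced here.

```python
import numpy as np

def mp_filter(returns: np.ndarray) -> np.ndarray:
    """Filter a correlation matrix with the Marchenko-Pastur bound.

    returns: T x N array of (already preprocessed) asset returns.
    Eigenvalues below the MP upper edge are treated as estimation
    noise and flattened to their average before reconstruction.
    """
    T, N = returns.shape
    q = N / T                            # aspect ratio, assumed < 1
    corr = np.corrcoef(returns, rowvar=False)
    lam_plus = (1 + np.sqrt(q)) ** 2     # MP upper edge for unit variance
    vals, vecs = np.linalg.eigh(corr)
    noise = vals < lam_plus
    if noise.any():
        vals[noise] = vals[noise].mean()  # flatten the noise bulk
    filtered = (vecs * vals) @ vecs.T     # V diag(vals) V^T
    # renormalize so the result is again a correlation matrix
    d = np.sqrt(np.diag(filtered))
    return filtered / np.outer(d, d)
```

Eigenvalues above the Marchenko–Pastur edge are kept as genuine structure, while the noise bulk below it is flattened, which is the basic mechanism by which RMT stabilizes estimated correlations.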
Background: Given the prevalence of ADHD and the gaps in ADHD care in Australia, this study investigates the critical barriers and driving forces for innovation. It does so by conducting a preparatory evaluation of an ADHD prototype digital service innovation designed to help streamline ADHD care and empower individual self-management. Methods: Semi-structured interviews with ADHD care consumers/participants and practitioners explored their experiences and provided feedback on a mobile self-monitoring app and related service innovations. Interview transcripts were double coded to explore thematic barriers and enablers for better ADHD care. Results: Fifteen interviews (9 consumers, 6 practitioners) revealed barriers to better ADHD care for consumers (ignorance and prejudice, trust, impatience) and for practitioners (complexity, sustainability). Enablers for consumers included validation/empowerment, privacy and security frameworks, tailoring, and access. Practitioners highlighted the value of transparency, privacy and security frameworks, streamlined content, connected care between services, and the tailoring of broader metrics. Conclusions: A consumer-centred approach to digital health service innovation, featuring streamlined, private, and secure solutions with enhanced mobile tools, proves instrumental in bridging gaps in ADHD care in Australia. These innovations should encompass integrated care, targeted treatment outcome data, and additional lifestyle support, whilst recognising the tensions between customised functionalities and streamlined displays.
To ensure safe and stable operation, accurate fault localization within active distribution networks is required, and this has attracted much attention. Influenced by many factors such as the control strategy, control performance, initial state of the distributed generators, and distribution network topology, it is still difficult to reliably locate complex and variable single-phase short-circuit faults relying only on a single feature quantity, while localization methods incorporating intelligent algorithms are affected by the choice of a priori samples and the fact that the solution process is a black-box model. To address this challenge, in this work, an expression for the single-phase short-circuit current vector of a distribution network containing distributed generators is derived, and the differences in magnitude and phase angle of the short-circuit current vectors upstream and downstream of the fault point are analyzed. Based on measurement theory, a fault confidence distribution function that reflects the relative size of the current magnitude difference and phase angle difference is established, and the fusion fault confidence of the short-circuit current vector is constructed with the help of evidence theory. Finally, a method of locating single-phase short-circuit faults in distribution networks that contain distributed generators is proposed. The simulation results show that the ratio of the fusion fault confidence of the short-circuit current vector between faulted and non-faulted sections under the influence of different distributed generator capacities, fault locations, and transition resistances differs significantly. The proposed single-phase short-circuit fault localization method is both adaptive and physically interpretable and has clear boundaries, sound sensitivity, and engineering practicability.
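The evidence-theory fusion step can be illustrated with Dempster's rule of combination on a two-element frame. The mass assignments below are hypothetical placeholders, not the paper's actual confidence distribution functions.

```python
def dempster_combine(m1, m2):
    """Fuse two basic mass assignments with Dempster's rule.

    Frame: {'F': fault, 'N': normal}, with 'U' the ignorance mass
    assigned to the whole frame. m1 and m2 are dicts summing to 1;
    illustratively, m1 could come from the current-magnitude
    difference and m2 from the phase-angle difference.
    """
    k = m1['F'] * m2['N'] + m1['N'] * m2['F']   # conflicting mass
    s = 1.0 - k                                  # normalization factor
    fused = {
        'F': (m1['F'] * m2['F'] + m1['F'] * m2['U'] + m1['U'] * m2['F']) / s,
        'N': (m1['N'] * m2['N'] + m1['N'] * m2['U'] + m1['U'] * m2['N']) / s,
    }
    fused['U'] = m1['U'] * m2['U'] / s           # remaining ignorance
    return fused
```

Applying the rule to one mass from the magnitude difference and one from the phase-angle difference yields a fused fault confidence of the kind whose faulted/non-faulted ratio the localization method relies on.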
Generative adversarial networks (GANs) and diffusion models (DMs) have revolutionized the creation of synthetically generated but realistic-looking images. Distinguishing such generated images from real camera captures is one of the key tasks in current multimedia forensics research. One particular challenge is the generalization to unseen generators or post-processing. This can be viewed as an issue of handling out-of-distribution inputs. Forensic detectors can be hardened by the extensive augmentation of the training data or specifically tailored networks. Nevertheless, such precautions only manage but do not remove the risk of prediction failures on inputs that look reasonable to an analyst but in fact are out of the training distribution of the network. With this work, we aim to close this gap with a Bayesian Neural Network (BNN) that provides an additional uncertainty measure to warn an analyst of difficult decisions. More specifically, the BNN learns the task at hand and also detects potential confusion between post-processing and image generator artifacts. Our experiments show that the BNN achieves on-par performance with the state-of-the-art detectors while producing more reliable predictions on out-of-distribution examples.
With the rapidly growing interest in using structural timber, a need exists to inspect and assess these structures using non-destructive testing (NDT). This review article summarizes NDT methods for wood inspection. After an overview of the most important NDT methods currently used, a detailed review of Ground Penetrating Radar (GPR) and Ultrasonic Testing (UST) is presented. These two techniques can be applied in situ and produce useful visual representations for quantitative assessments and damage detection. With its commercial availability and portability, GPR can help rapidly identify critical features such as moisture, voids, and metal connectors in wood structures. UST, which effectively detects deep cracks, delaminations, and variations in ultrasonic wave velocity related to moisture content, complements GPR’s capabilities. The non-destructive nature of both techniques preserves the structural integrity of timber, enabling thorough assessments without compromising integrity or durability. Techniques such as the Synthetic Aperture Focusing Technique (SAFT) and Total Focusing Method (TFM) allow for reconstructing images that an inspector can readily interpret for quantitative assessment. The development of new sensors, instruments, and analysis techniques has continued to improve the application of GPR and UST on wood. However, due to the non-homogeneous anisotropic properties of this complex material, challenges remain to quantify defects and characterize inclusions reliably and accurately. By integrating advanced imaging algorithms that consider the material’s complex properties, combining measurements with simulations, and employing machine learning techniques, the implementation and application of GPR and UST imaging and damage detection for wood structures can be further advanced.
by Ontlametse T. Bareng, Sikhulile Moyo, Mbatshi Mudanga, Kagiso Sebina, Catherine K. Koofhethile, Wonderful T. Choga, Natasha O. Moraka, Dorcas Maruapula, Irene Gobe, Modisa S. Motswaledi, Rosemary Musonda, Bornapate Nkomo, Dinah Ramaabya, Tony Chebani, Penny Makuruetsa, Joseph Makhema, Roger Shapiro, Shahin Lockman and Simani Gaseitsiwe
Viruses 2024, 16(5), 720; https://doi.org/10.3390/v16050720 - 01 May 2024
We evaluated subsequent virologic outcomes in individuals experiencing low-level viremia (LLV) on dolutegravir (DTG)-based first-line antiretroviral therapy (ART) in Botswana. We used a national dataset from 50,742 adults who initiated DTG-based first-line ART from June 2016 to December 2022. Individuals with at least two viral load (VL) measurements after three months on DTG-based first-line ART were evaluated for first and subsequent episodes of LLV (VL: 51–999 copies/mL). LLV was sub-categorized as low-LLV (51–200 copies/mL), medium-LLV (201–400 copies/mL) and high-LLV (401–999 copies/mL). The study outcome was virologic failure (VF) (VL ≥ 1000 copies/mL): virologic non-suppression was defined as a single VF measurement, and confirmed VF as two consecutive VF measurements after an initial VL < 1000 copies/mL. Cox regression analysis identified predictive factors of subsequent VF. The prevalence of LLV was only statistically different at timepoints >6–12 (2.8%) and >12–24 (3.9%) (p-value < 0.01). LLV was strongly associated with both virologic non-suppression (adjusted hazard ratio [aHR] = 2.6; 95% CI: 2.2–3.3, p-value ≤ 0.001) and confirmed VF (aHR = 2.5; 95% CI: 2.4–2.7, p-value ≤ 0.001) compared to initially virally suppressed PLWH. High-LLV (HR = 3.3; 95% CI: 2.9–3.6) and persistent LLV (HR = 6.6; 95% CI: 4.9–8.9) were associated with a higher hazard of virologic non-suppression than low-LLV and a single LLV episode, respectively. In a national cohort of PLWH on DTG-based first-line ART, LLV > 400 copies/mL and persistent LLV had a stronger association with VF. Frequent VL testing and adherence support are warranted for individuals with VL > 50 copies/mL.
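The viral-load categories used in the study translate directly into code; the helper below simply encodes the stated cut-offs (suppression at ≤ 50 copies/mL, LLV between 51 and 999 copies/mL split at 200 and 400, and VF at ≥ 1000 copies/mL).

```python
def classify_vl(copies_per_ml: int) -> str:
    """Categorize a viral load (VL) measurement using the study's cut-offs.

    <= 50: suppressed; 51-200: low-LLV; 201-400: medium-LLV;
    401-999: high-LLV; >= 1000: virologic failure (VF).
    """
    if copies_per_ml <= 50:
        return "suppressed"
    if copies_per_ml <= 200:
        return "low-LLV"
    if copies_per_ml <= 400:
        return "medium-LLV"
    if copies_per_ml <= 999:
        return "high-LLV"
    return "VF"
```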
Maritime traffic is steadily increasing, creating ever more complex navigation environments for ships. Ship trajectory prediction based on historical AIS data is a vital method of reducing navigation risks and enhancing the efficiency of maritime traffic control. At present, employing machine learning or deep learning techniques to construct predictive models based on AIS data has become a focal point in ship trajectory prediction research. This paper systematically evaluates various trajectory prediction methods, spanning classical machine learning approaches and emerging deep learning techniques, to uncover their respective merits and drawbacks. A variety of studies applying different algorithms to ship trajectory prediction were investigated, including regression models (RMs), artificial neural networks (ANNs), Kalman filtering (KF), and random forests (RFs) in machine learning, along with deep learning methods such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), long short-term memory (LSTM), gated recurrent unit (GRU) networks, and sequence-to-sequence (Seq2seq) networks. The performance of predictive models based on these algorithms in trajectory prediction tasks was graded and analyzed. Among the existing studies, deep learning methods exhibit significant performance and considerable application potential for maritime traffic systems, providing a basis for future work on ship trajectory prediction.
Since it is difficult to study the influence of different defect characteristics on the stress intensity factor of B-type sleeve fillet welds via experiments, this paper adopts ABAQUS finite element analysis software (Version 2019) to model the B-type sleeve fillet welds and studies the stress and stress intensity factor under different crack lengths, heights, and angles. The simulation results showed that with the increase in crack length and depth, the maximum stress intensity factor gradually increased, and with the increase in the crack inclination angle, the maximum stress intensity factor first increased and then decreased.
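For orientation, the reported growth of the stress intensity factor with crack length is consistent with the textbook mode-I relation K_I = Y·σ·√(πa). The helper below is a generic illustration with a placeholder geometry factor Y, not the paper's finite element model.

```python
import math

def mode_i_sif(stress, crack_length, geometry_factor=1.0):
    """Mode-I stress intensity factor K_I = Y * sigma * sqrt(pi * a).

    stress in MPa, crack_length (a) in metres; returns MPa*sqrt(m).
    K_I grows with crack length, matching the simulated trend.
    """
    return geometry_factor * stress * math.sqrt(math.pi * crack_length)
```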
Fuel level gauging in aircraft presents a significant flight mechanics challenge due to the influence of aircraft movements on measurements. Moreover, it constitutes a multidimensional problem where various sensors distributed within the tank must converge to yield a precise and single measurement, independent of the aircraft’s attitude. Furthermore, fuel distribution across multiple tanks of irregular geometries complicates the readings even further. These issues critically impact safety and economy, as gauging errors may compromise flight security and lead to carrying excess weight. In response to these challenges, this research introduces a multi-stage project in aircraft fuel gauging systems, as part of a continuing series of studies, where this first article presents a computational tool designed to simulate aircraft fuel sensor data readings as a function of fuel level, fuel tank geometry, sensor location, and aircraft attitude. Developed in an open-source environment, the tool aims to support the statistical inference required for accurate modeling, in which synthetic data generation becomes a crucial component. A discretization procedure accurately maps fuel tank geometries and their mass properties. The tool then intersects these geometries with fuel-level planes and calculates each new volume. It integrates descriptive geometry to intersect these fuel planes with representative capacitive level-sensing probes and computes the sensor readings for the simulated flight conditions. The method is validated against geometries with analytical solutions. This process yields detailed fuel measurement responses for each sensor inside the tank, and for different analyzed fuel levels, providing insights into the non-linear behavior of the sensor signals at each analyzed aircraft attitude. The non-linear behavior is also shaped by sensor saturation: a reading of 0 when the probe is above the fuel level and 1 when fully submerged.
The synthetic fuel sensor readings lay the baseline for a better understanding of how to compute the true fuel level from multiple sensor readings and, ultimately, for optimizing the number of sensors used and their placement. The tool’s design offers significant improvements in aircraft fuel gauging accuracy, directly impacting aerostructures and instrumentation, and it is a key aspect of flight safety, fuel management, and navigation in aerospace technology.
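The saturating capacitive-probe reading described above can be sketched as a plane/segment intersection. This is an illustrative model, assuming a straight probe and a planar fuel surface given by a normal vector and an offset; it is not the article's actual implementation.

```python
import numpy as np

def probe_reading(a, b, n, d):
    """Fractional wetted length of a straight capacitive probe.

    a, b : 3-vectors, bottom and top ends of the probe.
    n, d : the fuel surface is the plane n.x = d, with n pointing
           out of the fuel (points with n.x < d are submerged).
    The reading saturates at 0 (dry) and 1 (fully submerged),
    reproducing the non-linearity discussed in the abstract.
    """
    a, b, n = map(np.asarray, (a, b, n))
    denom = n @ (b - a)
    if np.isclose(denom, 0.0):          # probe parallel to the surface
        return 1.0 if n @ a < d else 0.0
    frac = (d - (n @ a)) / denom        # wetted fraction along the probe
    return float(np.clip(frac, 0.0, 1.0))
```

A vertical probe in a level tank reads exactly the fill fraction; tilting the surface normal reproduces the attitude-dependent, saturating response the tool simulates.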
The rising global prevalence of diabetes mellitus, a chronic metabolic disorder, poses significant challenges to healthcare systems worldwide. This study examined in-hospital mortality among patients diagnosed with non-insulin-dependent diabetes mellitus (NIDDM) of ICD-10, or Type 2 Diabetes Mellitus (T2DM), in Indonesia, utilizing hospital claims data spanning from 2017 to 2022 obtained from the Indonesia Health Social Security Agency or Badan Penyelenggara Jaminan Sosial (BPJS) Kesehatan. The analysis, which included 610,809 hospitalized T2DM patients, revealed an in-hospital mortality rate of 6.6%. Factors contributing to an elevated risk of mortality included advanced age, the presence of comorbidities, and severe complications. Additionally, patients receiving health subsidies and those treated in government hospitals were found to have higher mortality risks. Geographic disparities were observed, highlighting variations in healthcare outcomes across different regions. Notably, the complication of ketoacidosis emerged as the most significant risk factor for in-hospital mortality, with an odds ratio (OR) of 10.86, underscoring the critical need for prompt intervention and thorough management of complications to improve patient outcomes.
Indoor wireless positioning has long been a dynamic field of research due to its broad application range. While many commercial products have been developed, they are often not open source or require substantial and costly infrastructure. Academically, research has extensively explored Bluetooth Low Energy (BLE) for positioning, yet there is a noticeable lack of studies that comprehensively compare traditional algorithms under these conditions. This research aims to fill this gap by evaluating classical positioning algorithms such as K-Nearest Neighbor (KNN), Weighted K-Nearest Neighbor (WKNN), Naïve Bayes (NB), and a Received Signal Strength-based Neural Network (RSS-NN) using BLE technology. We also introduce a novel method using Convolutional Neural Networks (CNNs), specifically tailored to process RSS data structured in an image-like format. This approach helps overcome the limitations of traditional RSS fingerprinting by effectively managing the environmental dynamics within indoor settings. In our tests, all algorithms performed well, consistently achieving an average accuracy of less than two meters. Remarkably, the CNN method outperformed the others, achieving an accuracy of 1.22 m. These results establish a solid basis for future research, particularly towards enhancing the precision of indoor positioning systems using deep learning for cost-effective, easy-to-set-up applications.
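As an illustration of the classical fingerprinting baselines compared in the study, a WKNN position estimate can be written in a few lines of NumPy; the data layout and the inverse-distance weighting are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def wknn_position(fingerprints, positions, rss, k=3):
    """Weighted K-Nearest-Neighbor position estimate from RSS.

    fingerprints : M x B array of reference RSS vectors (B beacons).
    positions    : M x 2 array of the reference (x, y) coordinates.
    rss          : length-B query vector of measured signal strengths.
    Weights are inverse Euclidean distances in signal space.
    """
    fingerprints = np.asarray(fingerprints, dtype=float)
    positions = np.asarray(positions, dtype=float)
    d = np.linalg.norm(fingerprints - np.asarray(rss, dtype=float), axis=1)
    idx = np.argsort(d)[:k]             # k closest fingerprints
    w = 1.0 / (d[idx] + 1e-9)           # avoid division by zero
    return (positions[idx] * w[:, None]).sum(axis=0) / w.sum()
```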
by Guilherme R. Guimaraes, Ricardo G. Figueiredo, Caroline Santos Silva, Vanessa Arata, Jean Carlos Z. Contreras, Cristiano M. Gomes, Ricardo B. Tiraboschi and José Bessa Junior
Int. J. Environ. Res. Public Health 2024, 21(5), 580; https://doi.org/10.3390/ijerph21050580 - 01 May 2024
Background: Adopting advanced digital technologies as diagnostic support tools in healthcare is an unquestionable trend accelerated by the COVID-19 pandemic. However, their accuracy in suggesting diagnoses remains controversial and needs to be explored. We aimed to evaluate and compare the diagnostic accuracy of two freely accessible internet search tools: Google and ChatGPT 3.5. Methods: To assess the effectiveness of both medical platforms, we conducted evaluations using a sample of 60 clinical cases related to urological pathologies. We organized the urological cases into two distinct categories for our analysis: (i) prevalent conditions, which were compiled using the most common symptoms, as outlined by EAU and UpToDate guidelines, and (ii) unusual disorders, identified through case reports published in the ‘Urology Case Reports’ journal from 2022 to 2023. The outcomes were meticulously classified into three categories to determine the accuracy of each platform: “correct diagnosis”, “likely differential diagnosis”, and “incorrect diagnosis”. A group of experts evaluated the responses blindly and randomly. Results: For commonly encountered urological conditions, Google’s accuracy was 53.3%, with an additional 23.3% of its results falling within a plausible range of differential diagnoses, and the remaining outcomes were incorrect. ChatGPT 3.5 outperformed Google with an accuracy of 86.6%, provided a likely differential diagnosis in 13.3% of cases, and made no unsuitable diagnoses. In evaluating unusual disorders, Google failed to deliver any correct diagnoses but proposed a likely differential diagnosis in 20% of cases. ChatGPT 3.5 identified the proper diagnosis in 16.6% of rare cases and offered a reasonable differential diagnosis in half of the cases. Conclusion: ChatGPT 3.5 demonstrated higher diagnostic accuracy than Google in both contexts.
The platform showed satisfactory accuracy when diagnosing common cases, yet its performance in identifying rare conditions remains limited.
The Least Absolute Shrinkage and Selection Operator (LASSO) regression technique has proven to be a valuable tool for fitting and reducing linear models. The trend of applying LASSO to compositional data is growing, thereby expanding its applicability to diverse scientific domains. This paper aims to contribute to this evolving landscape by undertaking a comprehensive exploration of the -norm for the penalty term of a LASSO regression in a compositional context. This implies first introducing a rigorous definition of the compositional -norm, as the particular geometric structure of the compositional sample space needs to be taken into account. The focus is subsequently extended to a meticulous data-driven analysis of the dimension reduction effects on linear models, providing valuable insights into the interplay between penalty term norms and model performance. An analysis of a microbial dataset illustrates the proposed approach.
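The compositional sample space mentioned above is usually handled by mapping compositions out of the simplex before applying standard regression machinery. The centered log-ratio (clr) transform sketched below is the standard device for this; the paper's specific compositional norm and penalty construction are not reproduced here.

```python
import numpy as np

def clr(composition):
    """Centered log-ratio transform of a composition (positive parts).

    Maps a point of the simplex into real space by taking logs
    relative to the geometric mean; clr coordinates sum to zero,
    reflecting the geometric structure of compositional data.
    """
    x = np.asarray(composition, dtype=float)
    g = np.exp(np.log(x).mean(axis=-1, keepdims=True))  # geometric mean
    return np.log(x / g)
```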
Aim: The purpose of the present study is the three-dimensional (3D) analysis of molar and incisor movements that occur during the correction of the upper midline deviation by using the Mesial-Distalslider appliance. Materials and Methods: A total of 20 consecutive patients (12 women and 8 men; mean age 19.6 ± 11.1 years) were selected from the Orthodontic Department of Heinrich-Heine University of Düsseldorf. To correct the upper midline deviation (>2 mm), the patients were treated with asymmetric mechanics (mesialization on one side and distalization on the contralateral side) with the aid of the Mesial-Distalslider. Dental casts were taken for each patient before (T0) and after the treatment (T1). The casts were 3D digitized and the models were superimposed on the palatal anterior region. Three-dimensional molar movements and sagittal incisor movements (proclination and retroclination) were assessed for T0 and T1. Results: At the end of the treatment, the total movements of the molars amounted to 4.5 ± 2.2 mm (antero-posterior direction), −0.4 ± 2.4 mm (transverse direction) and 0.3 ± 0.9 mm (vertical direction) on the mesialization side, and −2.4 ± 1.7 mm (antero-posterior direction), −0.5 ± 1.5 mm (transverse direction) and 0.2 ± 1.4 mm (vertical direction) on the distalization side. Incisor displacement was 0.9 ± 1.7 mm (mesialization side) and 0.6 ± 0.7 mm (distalization side). Conclusion: The Mesial-Distalslider appliance could be considered a valuable tool in orthodontic treatment for upper midline correction. Within the limits of a retrospective study, asymmetric molar movements appeared possible without clinically relevant anchorage loss.
Traditional treatments of cancer have faced various challenges, including toxicity, medication resistance, and financial burdens. On the other hand, bioactive phytochemicals employed in complementary and alternative medicine have recently gained interest due to their ability to control a wide range of molecular pathways while being less harmful. As a result, we used a network pharmacology approach to study the possible regulatory mechanisms of active constituents of Cordia myxa for the treatment of liver cancer (LC). Active constituents were retrieved from the IMPPAT database and a literature review, and their targets were retrieved from the STITCH and Swiss Target Prediction databases. LC-related targets were retrieved from expression datasets (GSE39791, GSE76427, GSE22058, GSE87630, and GSE112790) through the Gene Expression Omnibus (GEO). The DAVID Gene Ontology (GO) database was used to annotate target proteins, while the Kyoto Encyclopedia of Genes and Genomes (KEGG) was used to analyze signaling pathway enrichment. STRING and Cytoscape were used to create protein–protein interaction (PPI) networks, while the degree scoring algorithm of CytoHubba was used to identify hub genes. The GEPIA2 server was used for survival analysis, and PyRx was used for molecular docking analysis. Survival and network analysis revealed that five genes, heat shock protein 90AA1 (HSP90AA1), estrogen receptor 1 (ESR1), cytochrome P450 3A4 (CYP3A4), cyclin-dependent kinase 1 (CDK1), and matrix metalloproteinase-9 (MMP9), are linked with the survival of LC patients. Finally, we conclude that four highly active constituents, namely cosmosiin, rosmarinic acid, quercetin, and rubinin, influence the expression of HSP90AA1, which may serve as a potential therapeutic target for LC. These results were further validated by molecular dynamics simulation analysis, which predicted highly stable dynamics for the complexes. The residues of the targeted protein remained highly stable, except for the N-terminal domain, without affecting drug binding. This integrated network pharmacology and docking study demonstrated that C. myxa has a promising preventative effect on LC by acting on cancer-related signaling pathways.
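The hub-gene step described above ranks nodes of the PPI network by connectivity. A minimal sketch of degree-based hub ranking, analogous to CytoHubba's degree algorithm, using networkx on a hypothetical toy edge list (not data from the study):

```python
# Sketch: identifying "hub" genes in a PPI network by degree score.
# The edge list below is a hypothetical toy example, not the study's network.
import networkx as nx

edges = [
    ("HSP90AA1", "ESR1"), ("HSP90AA1", "CDK1"), ("HSP90AA1", "MMP9"),
    ("HSP90AA1", "CYP3A4"), ("ESR1", "CYP3A4"), ("CDK1", "MMP9"),
]

G = nx.Graph(edges)

# Rank nodes by degree (number of interaction partners); the top-ranked
# nodes are the candidate hub genes.
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
for gene, degree in hubs:
    print(gene, degree)
```

On this toy network, HSP90AA1 comes out on top with four interaction partners, mirroring how a densely connected gene surfaces as a hub in the real analysis.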
Clinicians often face infections caused by microorganisms that are difficult to eradicate due to their resistance and/or tolerance to antimicrobials. Among these pathogens, Pseudomonas aeruginosa causes chronic infections due to its ability to form biofilms on medical devices, skin wounds, ulcers, and the lungs of patients with cystic fibrosis. In this scenario, the plant world represents an important reservoir of natural compounds with antimicrobial and/or antibiofilm properties. In this study, an extract from the leaves of Combretum micranthum G. Don, named Cm4-p, which was previously investigated for its antimicrobial activities, was assayed for its capacity to inhibit biofilm formation and/or to eradicate preformed biofilms. The model strain P. aeruginosa PAO1 and its isogenic biofilm-hyperproducing derivative B13 were treated with Cm4-p. Preliminary IR, UV-vis, NMR, and mass spectrometry analyses showed that the extract was mainly composed of catechins bearing different sugar moieties. The phytocomplex (3 g/L) significantly inhibited biofilm formation by both the PAO1 and B13 strains. In light of these results, Cm4-p deserves deeper investigation of its potential in the antimicrobial field.
Background: LC-MS is an increasingly used methodology for clinical applications. Due to its superior selectivity and sensitivity, in certain situations it can offer an advantage or be the only option for diagnostics and biomonitoring applications. Methods: A highly selective and sensitive LC-MS/MS method was developed for the direct quantification of free plasma polyunsaturated fatty acids as well as conjugated membrane polyunsaturated fatty acids, using isocratic reversed-phase elution. A quick and simple sample purification method was used to ensure high-throughput analysis of biological samples. The method was validated with regard to selectivity, sensitivity, linearity, accuracy, precision, carryover, and recovery, as well as other relevant parameters. Results and Conclusions: The method was developed and validated with respect to all relevant parameters and was successfully used in a number of clinical diagnostics and biomonitoring applications. The simple sample purification process made the method easy for analysts and other users to learn, while ensuring a low chance of systematic or random errors and thus reliable results usable in a clinical setting.
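Validation of accuracy and precision typically reduces to simple statistics over replicate QC measurements. A minimal sketch, with hypothetical concentrations and acceptance limits following common bioanalytical conventions, not values from the study:

```python
# Sketch: computing two standard validation metrics for an LC-MS/MS assay,
# accuracy (% of nominal concentration) and precision (%CV), from replicate
# QC measurements. All numbers are hypothetical, not from the study.
from statistics import mean, stdev

nominal = 50.0  # nominal QC concentration, ng/mL (hypothetical)
replicates = [48.9, 51.2, 49.5, 50.8, 49.9]  # measured values (hypothetical)

accuracy_pct = 100.0 * mean(replicates) / nominal
cv_pct = 100.0 * stdev(replicates) / mean(replicates)

print(f"accuracy: {accuracy_pct:.1f}%")   # common acceptance: 85-115% of nominal
print(f"precision (CV): {cv_pct:.1f}%")   # common acceptance: CV <= 15%
```

The same two figures are usually tabulated per QC level (low, mid, high) for both within-run and between-run replicates.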
This study introduces a novel approach for analyzing thin film interference spectra by employing a micro-spectrometer equipped with a spectral chip. Focusing on zinc oxide (ZnO) thin films prepared via the sol–gel method, this research aims to explore the films’ physical properties through spectral analysis. After obtaining the interference spectrum of the ZnO thin films, the peak positions within the spectrum were cataloged. Mathematical simulation was used to adjust the refractive index and thickness of the films to match the simulated interference peak positions with the observed peak positions. The thickness of the prepared ZnO film was estimated to be 4.9 μm and its refractive index at 80 °C was estimated to be 1.96. In addition, the measurement system was used to detect environmental changes, including temperature changes and gas exposure. It was observed that the optical characteristics of ZnO films exhibit marked variations with temperature shifts, enabling the establishment of a temperature calibration curve based on spectral feature displacement. In addition, experiments using a variety of gases showed that NO2 and gaseous isopropanol significantly affect the interference spectrum of ZnO, with the peak of the interference spectrum shifted by 2.3 nm and 5.2 nm, respectively, after injection of the two gases. This indicates that interferometric spectroscopy can serve as an effective tool for ZnO monitoring, capable of selectively detecting specific gases.
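The peak-matching step can be illustrated with the standard constructive-interference condition for a thin film at normal incidence, 2nd = mλ. A minimal sketch estimating thickness from two adjacent interference maxima; the wavelengths and refractive index are illustrative values chosen to be consistent with the reported d ≈ 4.9 μm and n ≈ 1.96, not the measured spectrum:

```python
# Sketch: estimating thin-film thickness from two adjacent interference
# maxima, assuming normal incidence and the constructive-interference
# condition 2*n*d = m*lambda. The peak wavelengths and refractive index
# below are illustrative, not the measured ZnO spectrum.
n = 1.96          # assumed film refractive index
lam_a = 686.00    # adjacent interference maxima, nm (hypothetical)
lam_b = 662.34

# Adjacent maxima differ by one interference order:
#   2*n*d = m*lam_a = (m + 1)*lam_b
#   =>  d = lam_a * lam_b / (2 * n * (lam_a - lam_b))
d_nm = lam_a * lam_b / (2 * n * (lam_a - lam_b))
print(f"estimated thickness: {d_nm / 1000:.2f} um")
```

In practice the paper fits many peak positions at once by adjusting both d and n in the simulation, which is more robust than this two-peak estimate, but the underlying relation is the same.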
This paper focuses on the digital closed-loop design for a step-up converter with hybrid switching. For this purpose, the control-to-output small-signal transfer function of a hybrid switching converter is determined in rational form for the first time. Based on this transfer function, a type 3 analog controller is designed; its digitized counterpart is then derived and implemented on a digital signal processor. The closed-loop operation is validated through both simulation and practical implementation.
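One common route from an analog type 3 design to a digital controller is discretization with the bilinear (Tustin) transform. A minimal sketch using scipy; the gain, zero/pole frequencies, and sampling period are hypothetical, not the values used in the paper:

```python
# Sketch: discretizing an analog type 3 controller with the bilinear
# (Tustin) transform, one common path from an analog design to a DSP
# implementation. All controller parameters below are hypothetical.
import numpy as np
from scipy import signal

kc = 1.0e4              # controller gain (hypothetical)
wz1, wz2 = 2e3, 2.5e3   # zero frequencies, rad/s (hypothetical)
wp1, wp2 = 5e4, 6e4     # pole frequencies, rad/s; plus an integrator at s = 0

# Type 3 controller: Gc(s) = kc*(s + wz1)*(s + wz2) / (s*(s + wp1)*(s + wp2))
num = kc * np.polymul([1, wz1], [1, wz2])
den = np.polymul([1, 0], np.polymul([1, wp1], [1, wp2]))

Ts = 1.0 / 100e3        # 100 kHz sampling period (hypothetical)
num_d, den_d, _ = signal.cont2discrete((num, den), Ts, method="bilinear")

# num_d/den_d are the z-domain coefficients that parameterize the
# difference equation programmed into the DSP.
print("numerator:", num_d.ravel())
print("denominator:", den_d)
```

The Tustin method preserves stability (left-half-plane poles map inside the unit circle, the integrator maps to z = 1) and introduces only frequency warping, which matters mainly near the Nyquist frequency.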
Lung adenocarcinoma (LUAD) is the most prevalent and aggressive subtype of lung cancer, exhibiting a dismal prognosis with a five-year survival rate below 5%. DEAD-box RNA helicase 18 (DDX18), a crucial regulator of RNA metabolism, has been implicated in various cellular processes, including cell cycle control and tumorigenesis. However, its role in LUAD pathogenesis remains elusive. This study demonstrates, using public databases, the significant upregulation of DDX18 in LUAD tissues and its association with poor patient survival. Functional in vivo and in vitro assays revealed that DDX18 knockdown potently suppresses LUAD progression. RNA sequencing and chromatin immunoprecipitation experiments identified cyclin-dependent kinase 4 (CDK4), a cell cycle regulator, as a direct transcriptional target of DDX18. Notably, DDX18 depletion induced G1 cell cycle arrest, while its overexpression promoted cell cycle progression even in normal lung cells. Interestingly, while the oncogenic protein c-Myc bound to the DDX18 promoter, it did not influence DDX18 expression. Collectively, these findings establish DDX18 as a potential oncogene in LUAD, functioning through the CDK4-mediated cell cycle pathway. DDX18 may represent a promising therapeutic target for LUAD intervention.
In order to obtain more data concerning the influence of organic and mineral fertilizers on different forage plants in northeastern Romanian grasslands, the mass fractions of 14 essential, enzymatic, or toxic elements were determined by instrumental neutron activation analysis, together with the amounts of crude protein, ash, fibers, and fat (ether extract). The results showed significant variance in the content of the analyzed elements on both the organically and the mineral-fertilized experimental plots. At the same time, increased crude protein and ether-extract fat content was evident in the fertilized grasses for all applied fertilizers, while other quality indicators, such as neutral and acid detergent fiber and sulfuric acid lignin content, decreased, suggesting significantly higher nutritional values for the fertilized forage plants.