journal-title (string) | pmid (string, nullable) | pmc (string) | doi (string, nullable) | article-title (string) | abstract (string, nullable) | related-work (string) | references (list of PMIDs) | reference_info (list of {pmid, title, abstract})
---|---|---|---|---|---|---|---|---|
Frontiers in Bioengineering and Biotechnology | null | PMC8977526 | 10.3389/fbioe.2022.841958 | Combining Radiology and Pathology for Automatic Glioma Classification | Subtype classification is critical in the treatment of gliomas because different subtypes lead to different treatment options and postoperative care. Although many radiological- or histological-based glioma classification algorithms have been developed, most of them focus on single-modality data. In this paper, we propose an innovative two-stage model to classify gliomas into three subtypes (i.e., glioblastoma, oligodendroglioma, and astrocytoma) based on radiology and histology data. In the first stage, our model classifies each image as having glioblastoma or not. Based on the obtained non-glioblastoma images, the second stage aims to accurately distinguish astrocytoma and oligodendroglioma. The radiological images and histological images pass through the two-stage design with 3D and 2D models, respectively. Then, an ensemble classification network is designed to automatically integrate the features of the two modalities. We have verified our method by participating in the MICCAI 2020 CPM-RadPath Challenge and won 1st place. Our proposed model achieves high performance on the validation set with a balanced accuracy of 0.889, Cohen’s Kappa of 0.903, and an F1-score of 0.943. Our model could advance multimodal-based glioma research and provide assistance to pathologists and neurologists in diagnosing glioma subtypes. The code has been publicly available online at https://github.com/Xiyue-Wang/1st-in-MICCAI2020-CPM. | 4.6 Comparison With Related Works
Table 4 summarizes relevant work on the MICCAI 2019 and 2020 CPM-RadPath Challenges. These results are all obtained on the validation set, and the compared methods all follow a one-stage classification framework.

TABLE 4 Comparison with related works on CPM-RadPath validation data.

Studies | Methods | Data | Balanced accuracy | Kappa | F1 score
---|---|---|---|---|---
Pei et al. (Pei et al., 2019) | U-Net model to segment tumors, and 3D CNN model for classification | CPM-RadPath 2019 data set | 0.749 | 0.715 | 0.829
Chan et al. (Chan et al., 2019) | VGG16 and ResNet50 models for image feature extraction, and k-means clustering model for classification | CPM-RadPath 2019 data set | — | — | 0.780
Hamidinekoo et al. (Hamidinekoo et al., 2020) | DCN model for classification | CPM-RadPath 2020 data set | 0.723 | 0.554 | 0.714
Yin et al. (Yin et al., 2020) | Cell kernel segmentation and noise reduction, then a 3D DenseNet model for classification | CPM-RadPath 2020 data set | **0.944** | **0.971** | **0.952**
Lerousseau et al. (Lerousseau et al., 2020) | 3D DenseNet for MRI, and EfficientNet-B0 for WSI | CPM-RadPath 2020 data set | 0.911 | 0.904 | 0.943
Pei et al. (Pei et al., 2020) | 3D CNN for segmentation and classification of MRI, and 2D CNN model for WSI classification | CPM-RadPath 2020 data set | 0.800 | 0.801 | 0.886
Zhao et al. (Zhao et al., 2020) | VGG16 model for WSI, and segmentation-free self-supervised feature extraction model for MRI | CPM-RadPath 2020 data set | 0.889 | 0.903 | 0.943
Ours | Two-stage multimodal model for classification | CPM-RadPath 2020 data set | 0.889 | 0.903 | 0.943

Scores in the table are all obtained on the validation set. The bold values in the table represent the maximum value of each column.
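To make the three scores in Table 4 concrete, the short sketch below shows how they can be computed with scikit-learn. The label encoding, the toy predictions, and the choice of micro-averaged F1 are assumptions for illustration only; they are not taken from the challenge data or the cited papers.

```python
# Hedged sketch: the three evaluation metrics reported in Table 4, computed on
# toy labels. The values below are illustrative only, not CPM-RadPath results.
from sklearn.metrics import balanced_accuracy_score, cohen_kappa_score, f1_score

# Assumed encoding: 0 = glioblastoma, 1 = oligodendroglioma, 2 = astrocytoma
y_true = [0, 0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 0, 1, 1, 2, 2, 2, 0]

print("Balanced accuracy:", round(balanced_accuracy_score(y_true, y_pred), 3))
print("Cohen's kappa:", round(cohen_kappa_score(y_true, y_pred), 3))
# The averaging scheme for the multi-class F1 is not specified here; micro is one option.
print("F1 score (micro):", round(f1_score(y_true, y_pred, average="micro"), 3))
```

Balanced accuracy is less sensitive to class imbalance than plain accuracy, which makes it a natural companion to kappa and F1 when the three glioma subtypes are unevenly represented.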
The early works (Chan et al., 2019; Pei et al., 2019) from the 2019 challenge first segmented the tumors before classifying them; however, the final classification results were not satisfactory. In the MICCAI 2020 CPM-RadPath Challenge, (Pei et al., 2020; Yin et al., 2020; Zhao et al., 2020) still used a segmentation-before-classification framework, with a significant improvement over the previous year's results. The performance of these methods is therefore affected by the quality of the segmentation results. The methods in (Hamidinekoo et al., 2020; Lerousseau et al., 2020) and our solution do not require a segmentation step.

Among the segmentation-free methods, a direct comparison between our method and (Lerousseau et al., 2020) is feasible because of their similar performance on the validation set. However, in the MICCAI 2020 CPM-RadPath Challenge, the method in (Lerousseau et al., 2020) did not perform well on the test set, suggesting limited generalization ability. Similarly, although the methods in (Yin et al., 2020; Zhao et al., 2020) performed equal to or better than our method on the validation set, our method obtained first place on the final competition test set. | [
"31973941",
"28060704",
"32203040",
"26958289",
"25359109",
"30131513",
"31155342",
"26017442",
"17618441",
"27157931",
"29531073",
"15466178",
"15118874",
"29931168",
"33123732",
"34277415",
"29279403",
"28031556",
"31037246",
"28622680",
"30315419",
"20644945",
"32642698",
"28815663",
"29065648",
"30498429",
"26207249",
"32040669",
"32937104"
] | [
{
"pmid": "31973941",
"title": "The diagnostic value of quantitative texture analysis of conventional MRI sequences using artificial neural networks in grading gliomas.",
"abstract": "AIM\nTo explore the value of quantitative texture analysis of conventional magnetic resonance imaging (MRI) sequences using artificial neural networks (ANN) for the differentiation of high-grade gliomas (HGG) and low-grade gliomas (LGG).\n\n\nMATERIALS AND METHODS\nA total of 181 patients, 97 with HGG (53.5%) and 84 with LGG (46.5%) with brain MRI having T2-weighted (W) fluid attenuation inversion recovery (FLAIR), and contrast-enhanced T1W images were enrolled in the present study. Histogram parameters and high-order texture features were extracted using manually placed regions of interest (ROIs) on T2W-FLAIR and contrast-enhanced T1W images covering the whole volume of the tumours. The reproducibility of the features was assessed by interobserver reliability analyses. The cohort was divided into training (n=121) and test partitions (n=60). The training set was used for attribute selection and model development, and the test set was used to evaluate the diagnostic performance of the pre-trained ANNs in discriminating HGG and LGG.\n\n\nRESULTS\nIn the test cohort, the ANN models using texture data of T2W-FLAIR and contrast-enhanced T1W images achieved an area under the receiver operating characteristic curve (AUC) of 0.87 and 0.86, respectively. The combined ANN model with selected texture features achieved the highest diagnostic accuracy equating 88.3% with an AUC of 0.92.\n\n\nCONCLUSIONS\nQuantitative texture analysis of T2W-FLAIR and contrast-enhanced T1W enhanced by ANN can accurately discriminate HGG from LGG and might be of clinical value in tailoring the management strategies in patients with gliomas."
},
{
"pmid": "28060704",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.",
"abstract": "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet."
},
{
"pmid": "32203040",
"title": "Automated Brain Metastases Detection Framework for T1-Weighted Contrast-Enhanced 3D MRI.",
"abstract": "Brain Metastases (BM) complicate 20-40% of cancer cases. BM lesions can present as punctate (1 mm) foci, requiring high-precision Magnetic Resonance Imaging (MRI) in order to prevent inadequate or delayed BM treatment. However, BM lesion detection remains challenging partly due to their structural similarities to normal structures (e.g., vasculature). We propose a BM-detection framework using a single-sequence gadolinium-enhanced T1-weighted 3D MRI dataset. The framework focuses on the detection of smaller (<15 mm) BM lesions and consists of: (1) candidate-selection stage, using Laplacian of Gaussian approach for highlighting parts of an MRI volume holding higher BM occurrence probabilities, and (2) detection stage that iteratively processes cropped region-of-interest volumes centered by candidates using a custom-built 3D convolutional neural network (\"CropNet\"). Data is augmented extensively during training via a pipeline consisting of random ga mma correction and elastic deformation stages; the framework thereby maintains its invariance for a plausible range of BM shape and intensity representations. This approach is tested using five-fold cross-validation on 217 datasets from 158 patients, with training and testing groups randomized per patient to eliminate learning bias. The BM database included lesions with a mean diameter of ∼5.4 mm and a mean volume of ∼160 mm3. For 90% BM-detection sensitivity, the framework produced on average 9.12 false-positive BM detections per patient (standard deviation of 3.49); for 85% sensitivity, the average number of false-positives declined to 5.85. Comparative analysis showed that the framework produces comparable BM-detection accuracy with the state-of-art approaches validated for significantly larger lesions."
},
{
"pmid": "26958289",
"title": "Automated Grading of Gliomas using Deep Learning in Digital Pathology Images: A modular approach with ensemble of convolutional neural networks.",
"abstract": "Brain glioma is the most common primary malignant brain tumors in adults with different pathologic subtypes: Lower Grade Glioma (LGG) Grade II, Lower Grade Glioma (LGG) Grade III, and Glioblastoma Multiforme (GBM) Grade IV. The survival and treatment options are highly dependent of this glioma grade. We propose a deep learning-based, modular classification pipeline for automated grading of gliomas using digital pathology images. Whole tissue digitized images of pathology slides obtained from The Cancer Genome Atlas (TCGA) were used to train our deep learning modules. Our modular pipeline provides diagnostic quality statistics, such as precision, sensitivity and specificity, of the individual deep learning modules, and (1) facilitates training given the limited data in this domain, (2) enables exploration of different deep learning structures for each module, (3) leads to developing less complex modules that are simpler to analyze, and (4) provides flexibility, permitting use of single modules within the framework or use of other modeling or machine learning applications, such as probabilistic graphical models or support vector machines. Our modular approach helps us meet the requirements of minimum accuracy levels that are demanded by the context of different decision points within a multi-class classification scheme. Convolutional Neural Networks are trained for each module for each sub-task with more than 90% classification accuracies on validation data set, and achieved classification accuracy of 96% for the task of GBM vs LGG classification, 71% for further identifying the grade of LGG into Grade II or Grade III on independent data set coming from new patients from the multi-institutional repository."
},
{
"pmid": "30131513",
"title": "Prediction of Pseudoprogression versus Progression using Machine Learning Algorithm in Glioblastoma.",
"abstract": "We aimed to investigate the feasibility of machine learning (ML) algorithm to distinguish pseudoprogression (PsPD) from progression (PD) in patients with glioblastoma (GBM). We recruited the patients diagnosed as primary GBM who received gross total resection (GTR) and concurrent chemoradiotherapy in two institutions from April 2010 to April 2017 and presented suspicious contrast-enhanced lesion on brain magnetic resonance imaging (MRI) during follow-up. Patients from two institutions were allocated to training (N = 59) and testing (N = 19) datasets, respectively. We developed a convolutional neural network combined with a long short-term memory ML structure. MRI data, which was 9 axial post-contrast T1-weighted images in our study, and clinical features were incorporated (Model 1). In the testing set, the trained Model 1 resulted in AUC of 0.83, AUPRC of 0.87, and F1-score of 0.74 using optimal threshold. The performance was superior to that of Model 2 (CNN-LSTM model with MRI data alone) and Model 3 (random forest model with clinical feature alone). The developed algorithm involving MRI data and clinical features could help making decision during follow-up of patients with GBM treated with GTR and concurrent CCRT."
},
{
"pmid": "31155342",
"title": "Automated brain histology classification using machine learning.",
"abstract": "Brain and breast tumors cause significant morbidity and mortality worldwide. Accurate and expedient histological diagnosis of patients' tumor specimens is required for subsequent treatment and prognostication. Currently, histology slides are visually inspected by trained pathologists, but this process is both time and labor-intensive. In this paper, we propose an automated process to classify histology slides of both brain and breast tissues using the Google Inception V3 convolutional neural network (CNN). We report successful automated classification of brain histology specimens into normal, low grade glioma (LGG) or high grade glioma (HGG). We also report for the first time the benefit of transfer learning across different tissue types. Pre-training on a brain tumor classification task improved CNN performance accuracy in a separate breast tumor classification task, with the F1 score improving from 0.547 to 0.913. We constructed a dataset using brain histology images from our own hospital and a public breast histology image dataset. Our proposed method can assist human pathologists in the triage and inspection of histology slides to expedite medical care. It can also improve CNN performance in cases where the training data is limited, for example in rare tumors, by applying the learned model weights from a more common tissue type."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "17618441",
"title": "The 2007 WHO classification of tumours of the central nervous system.",
"abstract": "The fourth edition of the World Health Organization (WHO) classification of tumours of the central nervous system, published in 2007, lists several new entities, including angiocentric glioma, papillary glioneuronal tumour, rosette-forming glioneuronal tumour of the fourth ventricle, papillary tumour of the pineal region, pituicytoma and spindle cell oncocytoma of the adenohypophysis. Histological variants were added if there was evidence of a different age distribution, location, genetic profile or clinical behaviour; these included pilomyxoid astrocytoma, anaplastic medulloblastoma and medulloblastoma with extensive nodularity. The WHO grading scheme and the sections on genetic profiles were updated and the rhabdoid tumour predisposition syndrome was added to the list of familial tumour syndromes typically involving the nervous system. As in the previous, 2000 edition of the WHO 'Blue Book', the classification is accompanied by a concise commentary on clinico-pathological characteristics of each tumour type. The 2007 WHO classification is based on the consensus of an international Working Group of 25 pathologists and geneticists, as well as contributions from more than 70 international experts overall, and is presented as the standard for the definition of brain tumours to the clinical oncology and cancer research communities world-wide."
},
{
"pmid": "27157931",
"title": "The 2016 World Health Organization Classification of Tumors of the Central Nervous System: a summary.",
"abstract": "The 2016 World Health Organization Classification of Tumors of the Central Nervous System is both a conceptual and practical advance over its 2007 predecessor. For the first time, the WHO classification of CNS tumors uses molecular parameters in addition to histology to define many tumor entities, thus formulating a concept for how CNS tumor diagnoses should be structured in the molecular era. As such, the 2016 CNS WHO presents major restructuring of the diffuse gliomas, medulloblastomas and other embryonal tumors, and incorporates new entities that are defined by both histology and molecular features, including glioblastoma, IDH-wildtype and glioblastoma, IDH-mutant; diffuse midline glioma, H3 K27M-mutant; RELA fusion-positive ependymoma; medulloblastoma, WNT-activated and medulloblastoma, SHH-activated; and embryonal tumour with multilayered rosettes, C19MC-altered. The 2016 edition has added newly recognized neoplasms, and has deleted some entities, variants and patterns that no longer have diagnostic and/or biological relevance. Other notable changes include the addition of brain invasion as a criterion for atypical meningioma and the introduction of a soft tissue-type grading system for the now combined entity of solitary fibrous tumor / hemangiopericytoma-a departure from the manner by which other CNS tumors are graded. Overall, it is hoped that the 2016 CNS WHO will facilitate clinical, experimental and epidemiological studies that will lead to improvements in the lives of patients with brain tumors."
},
{
"pmid": "29531073",
"title": "Predicting cancer outcomes from histology and genomics using convolutional networks.",
"abstract": "Cancer histology reflects underlying molecular processes and disease progression and contains rich phenotypic information that is predictive of patient outcomes. In this study, we show a computational approach for learning patient outcomes from digital pathology images using deep learning to combine the power of adaptive machine learning algorithms with traditional survival models. We illustrate how these survival convolutional neural networks (SCNNs) can integrate information from both histology images and genomic biomarkers into a single unified framework to predict time-to-event outcomes and show prediction accuracy that surpasses the current clinical paradigm for predicting the overall survival of patients diagnosed with glioma. We use statistical sampling techniques to address challenges in learning survival from histology images, including tumor heterogeneity and the need for large training cohorts. We also provide insights into the prediction mechanisms of SCNNs, using heat map visualization to show that SCNNs recognize important structures, like microvascular proliferation, that are related to prognosis and that are used by pathologists in grading. These results highlight the emerging role of deep learning in precision medicine and suggest an expanding utility for computational analysis of histology in the future practice of pathology."
},
{
"pmid": "15466178",
"title": "Genetic pathways to glioblastoma: a population-based study.",
"abstract": "We conducted a population-based study on glioblastomas in the Canton of Zurich, Switzerland (population, 1.16 million) to determine the frequency of major genetic alterations and their effect on patient survival. Between 1980 and 1994, 715 glioblastomas were diagnosed. The incidence rate per 100,000 population/year, adjusted to the World Standard Population, was 3.32 in males and 2.24 in females. Observed survival rates were 42.4% at 6 months, 17.7% at 1 year, and 3.3% at 2 years. For all of the age groups, younger patients survived significantly longer, ranging from a median of 8.8 months (<50 years) to 1.6 months (>80 years). Loss of heterozygosity (LOH) 10q was the most frequent genetic alteration (69%), followed by EGFR amplification (34%), TP53 mutations (31%), p16(INK4a) deletion (31%), and PTEN mutations (24%). LOH 10q occurred in association with any of the other genetic alterations and was predictive of shorter survival. Primary (de novo) glioblastomas prevailed (95%), whereas secondary glioblastomas that progressed from low-grade or anaplastic gliomas were rare (5%). Secondary glioblastomas were characterized by frequent LOH 10q (63%) and TP53 mutations (65%). Of the TP53 mutations in secondary glioblastomas, 57% were in hotspot codons 248 and 273, whereas in primary glioblastomas, mutations were more equally distributed. G:C-->A:T mutations at CpG sites were more frequent in secondary than primary glioblastomas (56% versus 30%; P = 0.0208). This suggests that the acquisition of TP53 mutations in these glioblastoma subtypes occurs through different mechanisms."
},
{
"pmid": "15118874",
"title": "Population-based study on incidence, survival rates, and genetic alterations of low-grade diffuse astrocytomas and oligodendrogliomas.",
"abstract": "We carried out a population-based study on low-grade diffuse gliomas in the Canton of Zurich, Switzerland (population 1.16 million). From 1980 to 1994, 987 astrocytic and oligodendroglial tumors were diagnosed, of which 122 (12.4%) were low-grade (WHO grade II). The incidence rates adjusted to the World Standard Population, per million population per year, were 2.28 for low-grade diffuse astrocytomas, 0.89 for oligoastrocytomas, and 2.45 for oligodendrogliomas. The survival rate (mean follow-up 7.5+/-4.8 years) was highest for patients with oligodendroglioma (78% at 5 years, 51% at 10 years), followed by those with oligoastrocytoma (70% at 5 years, 49% at 10 years) and fibrillary astrocytoma (65% at 5 years, 31% at 10 years). Survival of patients with gemistocytic astrocytoma was poor, with survival rates of 16% at 5 years and 0% at 10 years. Younger patients (<50 years) survived significantly longer than older patients (>50 years; P=0.013). DNA sequencing, performed in 84% of cases, revealed that TP53 mutations were most frequent in gemistocytic astrocytomas (88%), followed by fibrillary astrocytomas (53%) and oligoastrocytomas (44%), but were infrequent (13%) in oligodendrogliomas. The presence of TP53 mutations was associated with shorter survival of patients with low-grade diffuse gliomas (log-rank test; P=0.047), but when each histological type was analyzed separately, an association was observed only for oligoastrocytoma ( P=0.05). Loss on 1p and 19q were assessed by quantitative microsatellite analysis in 67% of cases. These alterations were frequent in oligodendrogliomas (1p, 57%; 19q, 69%), less common in oligoastrocytomas (1p, 27%; 19q, 45%), rare in fibrillary astrocytomas (1p, 7%; 19q, 7%), and absent in gemistocytic astrocytomas. None of these alterations were predictive of survival. These results establish the frequency of key genetic alterations in low-grade diffuse gliomas at a population-based level. Multivariate Cox's regression analysis indicates that only age and histological type, but not genetic alterations, are significant predictive factors."
},
{
"pmid": "29931168",
"title": "Adult Glioma Incidence and Survival by Race or Ethnicity in the United States From 2000 to 2014.",
"abstract": "Importance\nGlioma is the most commonly occurring malignant brain tumor in the United States, and its incidence varies by age, sex, and race or ethnicity. Survival after brain tumor diagnosis has been shown to vary by these factors.\n\n\nObjective\nTo quantify the differences in incidence and survival rates of glioma in adults by race or ethnicity.\n\n\nDesign, Setting, and Participants\nThis population-based study obtained incidence data from the Central Brain Tumor Registry of the United States and survival data from Surveillance, Epidemiology, and End Results registries, covering the period January 1, 2000, to December 31, 2014. Average annual age-adjusted incidence rates with 95% CIs were generated by glioma histologic groups, race, Hispanic ethnicity, sex, and age groups. One-year and 5-year relative survival rates were generated by glioma histologic groups, race, Hispanic ethnicity, and insurance status. The analysis included 244 808 patients with glioma diagnosed in adults aged 18 years or older. Data were collected from January 1, 2000, to December 31, 2014. Data analysis took place from December 11, 2017, to January 31, 2018.\n\n\nResults\nOverall, 244 808 patients with glioma were analyzed. Of these, 150 631 (61.5%) were glioblastomas, 46 002 (18.8%) were non-glioblastoma astrocytomas, 26 068 (10.7%) were oligodendroglial tumors, 8816 (3.6%) were ependymomas, and 13 291 (5.4%) were other glioma diagnoses in adults. The data set included 137 733 males (56.3%) and 107 075 (43.7%) females. There were 204 580 non-Hispanic whites (83.6%), 17 321 Hispanic whites (7.08%), 14 566 blacks (6.0%), 1070 American Indians or Alaska Natives (0.4%), and 5947 Asians or Pacific Islanders (2.4%). Incidences of glioblastoma, non-glioblastoma astrocytoma, and oligodendroglial tumors were higher among non-Hispanic whites than among Hispanic whites (30% lower overall), blacks (52% lower overall), American Indians or Alaska Natives (58% lower overall), or Asians or Pacific Islanders (52% lower overall). Most tumors were more common in males than in females across all race or ethnicity groups, with the great difference in glioblastoma where the incidence was 60% higher overall in males. Most tumors (193 329 [79.9%]) occurred in those aged 45 years or older, with differences in incidence by race or ethnicity appearing in all age groups. Survival after diagnosis of glioma of different subtypes was generally comparable among Hispanic whites, blacks, and Asians or Pacific Islanders but was lower among non-Hispanic whites for many tumor types, including glioblastoma, irrespective of treatment type.\n\n\nConclusions and Relevance\nIncidence of glioma and 1-year and 5-year survival rates after diagnosis vary significantly by race or ethnicity, with non-Hispanic whites having higher incidence and lower survival rates compared with individuals of other racial or ethnic groups. These findings can inform future discovery of risk factors and reveal unaddressed health disparities."
},
{
"pmid": "33123732",
"title": "CBTRUS Statistical Report: Primary Brain and Other Central Nervous System Tumors Diagnosed in the United States in 2013-2017.",
"abstract": "The Central Brain Tumor Registry of the United States (CBTRUS), in collaboration with the Centers for Disease Control (CDC) and National Cancer Institute (NCI), is the largest population-based registry focused exclusively on primary brain and other central nervous system (CNS) tumors in the United States (US) and represents the entire US population. This report contains the most up-to-date population-based data on primary brain tumors (malignant and non-malignant) and supersedes all previous CBTRUS reports in terms of completeness and accuracy. All rates (incidence and mortality) are age-adjusted using the 2000 US standard population and presented per 100,000 population. The average annual age-adjusted incidence rate (AAAIR) of all malignant and non-malignant brain and other CNS tumors was 23.79 (Malignant AAAIR=7.08, non-Malignant AAAIR=16.71). This rate was higher in females compared to males (26.31 versus 21.09), Blacks compared to Whites (23.88 versus 23.83), and non-Hispanics compared to Hispanics (24.23 versus 21.48). The most commonly occurring malignant brain and other CNS tumor was glioblastoma (14.5% of all tumors), and the most common non-malignant tumor was meningioma (38.3% of all tumors). Glioblastoma was more common in males, and meningioma was more common in females. In children and adolescents (age 0-19 years), the incidence rate of all primary brain and other CNS tumors was 6.14. An estimated 83,830 new cases of malignant and non-malignant brain and other CNS tumors are expected to be diagnosed in the US in 2020 (24,970 malignant and 58,860 non-malignant). There were 81,246 deaths attributed to malignant brain and other CNS tumors between 2013 and 2017. This represents an average annual mortality rate of 4.42. The 5-year relative survival rate following diagnosis of a malignant brain and other CNS tumor was 23.5% and for a non-malignant brain and other CNS tumor was 82.4%."
},
{
"pmid": "34277415",
"title": "Deep Neural Network Analysis of Pathology Images With Integrated Molecular Data for Enhanced Glioma Classification and Grading.",
"abstract": "Gliomas are primary brain tumors that originate from glial cells. Classification and grading of these tumors is critical to prognosis and treatment planning. The current criteria for glioma classification in central nervous system (CNS) was introduced by World Health Organization (WHO) in 2016. This criteria for glioma classification requires the integration of histology with genomics. In 2017, the Consortium to Inform Molecular and Practical Approaches to CNS Tumor Taxonomy (cIMPACT-NOW) was established to provide up-to-date recommendations for CNS tumor classification, which in turn the WHO is expected to adopt in its upcoming edition. In this work, we propose a novel glioma analytical method that, for the first time in the literature, integrates a cellularity feature derived from the digital analysis of brain histopathology images integrated with molecular features following the latest WHO criteria. We first propose a novel over-segmentation strategy for region-of-interest (ROI) selection in large histopathology whole slide images (WSIs). A Deep Neural Network (DNN)-based classification method then fuses molecular features with cellularity features to improve tumor classification performance. We evaluate the proposed method with 549 patient cases from The Cancer Genome Atlas (TCGA) dataset for evaluation. The cross validated classification accuracies are 93.81% for lower-grade glioma (LGG) and high-grade glioma (HGG) using a regular DNN, and 73.95% for LGG II and LGG III using a residual neural network (ResNet) DNN, respectively. Our experiments suggest that the type of deep learning has a significant impact on tumor subtype discrimination between LGG II vs. LGG III. These results outperform state-of-the-art methods in classifying LGG II vs. LGG III and offer competitive performance in distinguishing LGG vs. HGG in the literature. In addition, we also investigate molecular subtype classification using pathology images and cellularity information. Finally, for the first time in literature this work shows promise for cellularity quantification to predict brain tumor grading for LGGs with IDH mutations."
},
{
"pmid": "29279403",
"title": "A mixed-scale dense convolutional neural network for image analysis.",
"abstract": "Deep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results in practice, a large number of trainable parameters are often required. Here, we introduce a network architecture based on using dilated convolutions to capture features at different image scales and densely connecting all feature maps with each other. The resulting architecture is able to achieve accurate results with relatively few parameters and consists of a single set of operations, making it easier to implement, train, and apply in practice, and automatically adapts to different problems. We compare results of the proposed network architecture with popular existing architectures for several segmentation problems, showing that the proposed architecture is able to achieve accurate results with fewer parameters, with a reduced risk of overfitting the training data."
},
{
"pmid": "28031556",
"title": "Advances in the molecular genetics of gliomas - implications for classification and therapy.",
"abstract": "Genome-wide molecular-profiling studies have revealed the characteristic genetic alterations and epigenetic profiles associated with different types of gliomas. These molecular characteristics can be used to refine glioma classification, to improve prediction of patient outcomes, and to guide individualized treatment. Thus, the WHO Classification of Tumours of the Central Nervous System was revised in 2016 to incorporate molecular biomarkers - together with classic histological features - in an integrated diagnosis, in order to define distinct glioma entities as precisely as possible. This paradigm shift is markedly changing how glioma is diagnosed, and has important implications for future clinical trials and patient management in daily practice. Herein, we highlight the developments in our understanding of the molecular genetics of gliomas, and review the current landscape of clinically relevant molecular biomarkers for use in classification of the disease subtypes. Novel approaches to the genetic characterization of gliomas based on large-scale DNA-methylation profiling and next-generation sequencing are also discussed. In addition, we illustrate how advances in the molecular genetics of gliomas can promote the development and clinical translation of novel pathogenesis-based therapeutic approaches, thereby paving the way towards precision medicine in neuro-oncology."
},
{
"pmid": "31037246",
"title": "Glioma grading using structural magnetic resonance imaging and molecular data.",
"abstract": "A glioma grading method using conventional structural magnetic resonance image (MRI) and molecular data from patients is proposed. The noninvasive grading of glioma tumors is obtained using multiple radiomic texture features including dynamic texture analysis, multifractal detrended fluctuation analysis, and multiresolution fractal Brownian motion in structural MRI. The proposed method is evaluated using two multicenter MRI datasets: (1) the brain tumor segmentation (BRATS-2017) challenge for high-grade versus low-grade (LG) and (2) the cancer imaging archive (TCIA) repository for glioblastoma (GBM) versus LG glioma grading. The grading performance using MRI is compared with that of digital pathology (DP) images in the cancer genome atlas (TCGA) data repository. The results show that the mean area under the receiver operating characteristic curve (AUC) is 0.88 for the BRATS dataset. The classification of tumor grades using MRI and DP images in TCIA/TCGA yields mean AUC of 0.90 and 0.93, respectively. This work further proposes and compares tumor grading performance using molecular alterations (IDH1/2 mutations) along with MRI and DP data, following the most recent World Health Organization grading criteria, respectively. The overall grading performance demonstrates the efficacy of the proposed noninvasive glioma grading approach using structural MRI."
},
{
"pmid": "28622680",
"title": "Optimized Multistable Stochastic Resonance for the Enhancement of Pituitary Microadenoma in MRI.",
"abstract": "Magnetic resonance imaging (MRI) is the modality of choice as far as imaging diagnosis of pathologies in the pituitary gland is concerned. Furthermore, the advent of dynamic contrast enhanced (DCE) has enhanced the capability of this modality in detecting minute benign but endocrinologically significant tumors called microadenoma. These lesions are visible with difficulty and a low confidence level in routine MRI sequences, even after administration of intravenous gadolinium. Techniques to enhance the visualization of such foci would be an asset in improving the overall accuracy of DCE-MRI for detection of pituitary microadenomas. The present study proposes an algorithm for postprocessing DCE-MRI data using multistable stochastic resonance (MSSR) technique. Multiobjective ant lion optimization optimizes the contrast enhancement factor (CEF) and anisotropy of an image by varying the parameters associated with the dynamics of MSSR. The marked regions of interest (ROIs) are labeled as normal and microadenoma of pituitary obtained with increased level of accuracy and confidence using proposed algorithm. The increased difference between the mean intensity curves obtained using these ROIs validated the obtained subjective results. Furthermore, the proposed MSSR-based algorithm has been evaluated on standard T1 and T2 weighted BrainWeb dataset images and quantified in terms of CEF, peak signal to noise ratio (PSNR), structure similarity index measure (SSIM), and universal quality index (UQI). The obtained mean values of CEF 1.22, PSNR 27.68, SSIM 0.75, UQI 0.83 for twenty dataset images were highest among considered contrast enhancement algorithms for the comparison."
},
{
"pmid": "30315419",
"title": "Radiomics based on multicontrast MRI can precisely differentiate among glioma subtypes and predict tumour-proliferative behaviour.",
"abstract": "PURPOSE\nTo explore the feasibility and diagnostic performance of radiomics based on anatomical, diffusion and perfusion MRI in differentiating among glioma subtypes and predicting tumour proliferation.\n\n\nMETHODS\n220 pathology-confirmed gliomas and ten contrasts were included in the retrospective analysis. After being registered to T2FLAIR images and resampling to 1 mm3 isotropically, 431 radiomics features were extracted from each contrast map within a semi-automatic defined tumour volume. For single-contrast and the combination of all contrasts, correlations between the radiomics features and pathological biomarkers were revealed by partial correlation analysis, and multivariate models were built to identify the best predictive models with adjusted 0.632+ bootstrap AUC.\n\n\nRESULTS\nIn univariate analysis, both non-wavelet and wavelet radiomics features were correlated significantly with tumour grade and the Ki-67 labelling index. The max R was 0.557 (p = 2.04E-14) in T1C for tumour grade and 0.395 (p = 2.33E-07) in ADC for Ki-67. In the multivariate analysis, the combination of all-contrast radiomics features had the highest AUCs in both differentiating among glioma subtypes and predicting proliferation compared with those in single-contrast images. For low-/high-grade gliomas, the best AUC was 0.911. In differentiating among glioma subtypes, the best AUC was 0.896 for grades II-III, 0.997 for grades II-IV, and 0.881 for grades III-IV. In predicting proliferation levels, multicontrast features led to an AUC of 0.936.\n\n\nCONCLUSION\nMulticontrast radiomics supplies complementary information on both geometric characters and molecular biological traits, which correlated significantly with tumour grade and proliferation. Combining all-contrast radiomics models might precisely predict glioma biological behaviour, which may be attributed to presurgical personal diagnosis.\n\n\nKEY POINTS\n• Multicontrast MRI radiomics features are significantly correlated with tumour grade and Ki-67 LI. • Multimodality MRI provides independent but supplemental information in assessing glioma pathological behaviour. • Combined multicontrast MRI radiomics can precisely predict glioma subtypes and proliferation levels."
},
{
"pmid": "20644945",
"title": "Interobserver variation of the histopathological diagnosis in clinical trials on glioma: a clinician's perspective.",
"abstract": "Several studies have provided ample evidence of a clinically significant interobserver variation of the histological diagnosis of glioma. This interobserver variation has an effect on both the typing and grading of glial tumors. Since treatment decisions are based on histological diagnosis and grading, this affects patient care: erroneous classification and grading may result in both over- and undertreatment. In particular, the radiotherapy dosage and the use of chemotherapy are affected by tumor grade and lineage. It also affects the conduct and interpretation of clinical trials on glioma, in particular of studies into grade II and grade III gliomas. Although trials with central pathology review prior to inclusion will result in a more homogeneous patient population, the interpretation and external validity of such trials are still affected by this, and the question whether results of such trials can be generalized to patients diagnosed and treated elsewhere remains to be answered. Although molecular classification may help in typing and grading tumors, as of today this is still in its infancy and unlikely to completely replace histological classification. Routine pathology review in everyday clinical practice should be considered. More objective histological criteria for the grade and lineage of gliomas are urgently needed."
},
{
"pmid": "32642698",
"title": "Radiological differences between subtypes of WHO 2016 grade II-III gliomas: a systematic review and meta-analysis.",
"abstract": "BACKGROUND\nIsocitrate dehydrogenase (IDH) mutation and 1p/19q-codeletion are oncogenetic alterations with a positive prognostic value for diffuse gliomas, especially grade II and III. Some studies have suggested differences in biological behavior as reflected by radiological characteristics. In this paper, the literature regarding radiological characteristics in grade II and III glioma subtypes was systematically evaluated and a meta-analysis was performed.\n\n\nMETHODS\nStudies that addressed the relationship between conventional radiological characteristics and IDH mutations and/or 1p/19q-codeletions in newly diagnosed, grade II and III gliomas of adult patients were included. The \"3-group analysis\" compared radiological characteristics between the WHO 2016 glioma subtypes (IDH-mutant astrocytoma, IDH-wildtype astrocytoma, and oligodendroglioma), and the \"2-group analysis\" compared radiological characteristics between 1p/19q-codeleted gliomas and 1p/19q-intact gliomas.\n\n\nRESULTS\nFourteen studies (3-group analysis: 670 cases, 2-group analysis: 1042 cases) were included. IDH-mutated astrocytomas showed more often sharp borders and less frequently contrast enhancement compared to IDH-wildtype astrocytomas. 1p/19q-codeleted gliomas had less frequently sharp borders, but showed a heterogeneous aspect, calcification, cysts, and edema more frequently. For the 1p/19q-codeleted gliomas, a sensitivity of 96% was found for heterogeneity and a specificity of 88.1% for calcification.\n\n\nCONCLUSIONS\nSignificant differences in conventional radiological characteristics exist between the WHO 2016 glioma subtypes, which may reflect differences in biological behavior. However, the diagnostic value of the independent radiological characteristics is insufficient to reliably predict the molecular genetic subtype."
},
{
"pmid": "28815663",
"title": "WHO 2016 Classification of gliomas.",
"abstract": "Gliomas are the most frequent intrinsic tumours of the central nervous system and encompass two principle subgroups: diffuse gliomas and gliomas showing a more circumscribed growth pattern ('nondiffuse gliomas'). In the revised fourth edition of the WHO Classification of CNS tumours published in 2016, classification of especially diffuse gliomas has fundamentally changed: for the first time, a large subset of these tumours is now defined based on presence/absence of IDH mutation and 1p/19q codeletion. Following this approach, the diagnosis of (anaplastic) oligoastrocytoma can be expected to largely disappear. Furthermore, in the WHO 2016 Classification gliomatosis cerebri is not an entity anymore but is now considered as a growth pattern. The most important changes in the very diverse group of 'nondiffuse' gliomas and neuronal-glial tumours are the introduction of anaplastic pleomorphic xanthoastrocytoma, of diffuse leptomeningeal glioneuronal tumour and of RELA fusion-positive ependymoma as entities. In the last part of this review, after very briefly touching upon classification of neuronal, choroid plexus and pineal region tumours, some practical implications and challenges associated with the WHO 2016 Classification of gliomas are discussed."
},
{
"pmid": "29065648",
"title": "Semiautomatic Segmentation of Glioma on Mobile Devices.",
"abstract": "Brain tumor segmentation is the first and the most critical step in clinical applications of radiomics. However, segmenting brain images by radiologists is labor intense and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. This paper uses hard edge multiplicative intrinsic component optimization to preprocess glioma medical image on the server side, and then, the doctors could supervise the segmentation process on mobile devices in their convenient time. Since the preprocessed images have the same brightness for the same tissue voxels, they have small data size (typically 1/10 of the original image size) and simple structure of 4 types of intensity value. This observation thus allows follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the sample can reach 90% similarity; over 60% of the samples can reach 85% similarity, and more than 80% of the sample could reach 75% similarity. The comparisons with other segmentation methods also demonstrate both efficiency and stability of the proposed approach."
},
{
"pmid": "30498429",
"title": "Glioma Grading on Conventional MR Images: A Deep Learning Study With Transfer Learning.",
"abstract": "Background: Accurate glioma grading before surgery is of the utmost importance in treatment planning and prognosis prediction. But previous studies on magnetic resonance imaging (MRI) images were not effective enough. According to the remarkable performance of convolutional neural network (CNN) in medical domain, we hypothesized that a deep learning algorithm can achieve high accuracy in distinguishing the World Health Organization (WHO) low grade and high grade gliomas. Methods: One hundred and thirteen glioma patients were retrospectively included. Tumor images were segmented with a rectangular region of interest (ROI), which contained about 80% of the tumor. Then, 20% data were randomly selected and leaved out at patient-level as test dataset. AlexNet and GoogLeNet were both trained from scratch and fine-tuned from models that pre-trained on the large scale natural image database, ImageNet, to magnetic resonance images. The classification task was evaluated with five-fold cross-validation (CV) on patient-level split. Results: The performance measures, including validation accuracy, test accuracy and test area under curve (AUC), averaged from five-fold CV of GoogLeNet which trained from scratch were 0.867, 0.909, and 0.939, respectively. With transfer learning and fine-tuning, better performances were obtained for both AlexNet and GoogLeNet, especially for AlexNet. Meanwhile, GoogLeNet performed better than AlexNet no matter trained from scratch or learned from pre-trained model. Conclusion: In conclusion, we demonstrated that the application of CNN, especially trained with transfer learning and fine-tuning, to preoperative glioma grading improves the performance, compared with either the performance of traditional machine learning method based on hand-crafted features, or even the CNNs trained from scratch."
},
{
"pmid": "26207249",
"title": "Current trends in the surgical management and treatment of adult glioblastoma.",
"abstract": "This manuscript discusses the current surgical management of glioblastoma. This paper highlights the common pathophysiology attributes of glioblastoma, surgical options for diagnosis/treatment, current thoughts of extent of resection (EOR) of tumor, and post-operative (neo)adjuvant treatment. Glioblastoma is not a disease that can be cured with surgery alone, however safely performed maximal surgical resection is shown to significantly increase progression free and overall survival while maximizing quality of life. Upon invariable tumor recurrence, re-resection also is shown to impact survival in a select group of patients. As adjuvant therapy continues to improve survival, the role of surgical resection in the treatment of glioblastoma looks to be further defined."
},
{
"pmid": "32040669",
"title": "Deep Convolutional Radiomic Features on Diffusion Tensor Images for Classification of Glioma Grades.",
"abstract": "The grading of glioma has clinical significance in determining a treatment strategy and evaluating prognosis to investigate a novel set of radiomic features extracted from the fractional anisotropy (FA) and mean diffusivity (MD) maps of brain diffusion tensor imaging (DTI) sequences for computer-aided grading of gliomas. This retrospective study included 108 patients who had pathologically confirmed brain gliomas and DTI scanned during 2012-2018. This cohort included 43 low-grade gliomas (LGGs; all grade II) and 65 high-grade gliomas (HGGs; grade III or IV). We extracted a set of radiomic features, including traditional texture, morphological, and novel deep features derived from pre-trained convolutional neural network models, in the manually-delineated tumor regions. We employed support vector machine and these radiomic features for two classification tasks: LGGs vs HGGs, and grade III vs IV. The area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, and specificity was reported as the performance metrics using the leave-one-out cross-validation method. When combining FA+MD, AUC = 0.93, accuracy = 0.94, sensitivity = 0.98, and specificity = 0.86 in classifying LGGs from HGGs, while AUC = 0.99, accuracy = 0.98, sensitivity = 0.98, and specificity = 1.00 in classifying grade III from IV. The AUC and accuracy remain close when features were extracted from only the solid tumor or additionally including necrosis, cyst, and peritumoral edema. Still, the effects in terms of sensitivity and specificity are mixed. Deep radiomic features derived from pre-trained convolutional neural networks showed higher prediction ability than the traditional texture and shape features in both classification experiments. Radiomic features extracted on the FA and MD maps of brain DTI images are useful for noninvasively classification/grading of LGGs vs HGGs, and grade III vs IV."
},
{
"pmid": "32937104",
"title": "MetNet: Computer-aided segmentation of brain metastases in post-contrast T1-weighted magnetic resonance imaging.",
"abstract": "PURPOSE\nBrain metastases are manually contoured during stereotactic radiosurgery (SRS) treatment planning, which is time-consuming, potentially challenging, and laborious. The purpose of this study was to develop and investigate a 2-stage deep learning (DL) approach (MetNet) for brain metastasis segmentation in pre-treatment magnetic resonance imaging (MRI).\n\n\nMATERIALS AND METHODS\nWe retrospectively analyzed postcontrast 3D T1-weighted spoiled gradient echo MRIs from 934 patients who underwent SRS between August 2009 and August 2018. Neuroradiologists manually identified brain metastases in the MRIs. The treating radiation oncologist or physicist contoured the brain metastases. We constructed a 2-stage DL ensemble consisting of detection and segmentation models to segment the brain metastases on the MRIs. We evaluated the performance of MetNet by computing sensitivity, positive predictive value (PPV), and Dice similarity coefficient (DSC) with respect to metastasis size, as well as free-response receiver operating characteristics.\n\n\nRESULTS\nThe 934 patients (mean [±standard deviation] age 59 ± 13 years, 474 women) were randomly split into 80% training and 20% testing groups (748:186). For patients with metastases 1-52 mm (n = 766), 648 (85%) were detected and segmented with a mean segmentation DSC of 81% ± 15%. Patient-averaged sensitivity was 88% ± 19%, PPV was 58% ± 25%, and DSC was 85% ± 13% with 3 ± 3 false positives (FPs) per patient. When considering only metastases ≥6 mm, patient-averaged sensitivity was 99% ± 5%, PPV was 67% ± 28%, and DSC was 87% ± 13% with 1 ± 2 FPs per patient.\n\n\nCONCLUSION\nMetNet can segment brain metastases across a broad range of metastasis sizes with high sensitivity, low FPs, and high segmentation accuracy in postcontrast T1-weighted MRI, potentially aiding treatment planning for SRS."
}
] |
Scientific Reports | null | PMC8978501 | 10.1038/s41598-022-09356-w | Tracking and predicting COVID-19 radiological trajectory on chest X-rays using deep learning | Radiological findings on chest X-ray (CXR) have shown to be essential for the proper management of COVID-19 patients as the maximum severity over the course of the disease is closely linked to the outcome. As such, evaluation of future severity from current CXR would be highly desirable. We trained a repurposed deep learning algorithm on the CheXnet open dataset (224,316 chest X-ray images of 65,240 unique patients) to extract features that mapped to radiological labels. We collected CXRs of COVID-19-positive patients from an open-source dataset (COVID-19 image data collection) and from a multi-institutional local ICU dataset. The data was grouped into pairs of sequential CXRs and were categorized into three categories: ‘Worse’, ‘Stable’, or ‘Improved’ on the basis of radiological evolution ascertained from images and reports. Classical machine-learning algorithms were trained on the deep learning extracted features to perform immediate severity evaluation and prediction of future radiological trajectory. Receiver operating characteristic analyses and Mann-Whitney tests were performed. Deep learning predictions between “Worse” and “Improved” outcome categories and for severity stratification were significantly different for three radiological signs and one diagnostic (‘Consolidation’, ‘Lung Lesion’, ‘Pleural effusion’ and ‘Pneumonia’; all P < 0.05). Features from the first CXR of each pair could correctly predict the outcome category between ‘Worse’ and ‘Improved’ cases with a 0.81 (0.74–0.83 95% CI) AUC in the open-access dataset and with a 0.66 (0.67–0.64 95% CI) AUC in the ICU dataset. Features extracted from the CXR could predict disease severity with a 52.3% accuracy in a 4-way classification. Severity evaluation trained on the COVID-19 image data collection had good out-of-distribution generalization when testing on the local dataset, with 81.6% of intubated ICU patients being classified as critically ill, and the predicted severity was correlated with the clinical outcome with a 0.639 AUC. CXR deep learning features show promise for classifying disease severity and trajectory. Once validated in studies incorporating clinical data and with larger sample sizes, this information may be considered to inform triage decisions. | Related Works
Notably, studies of prognosis16 and diagnosis17–19 for COVID-19 patients using CT scans have reported good results in terms of accuracy and patient stratification. However, in most of the world, CT scans are not part of the standard of care in COVID-19 patients—especially in ICUs—and CXR is the modality of choice. As such, many authors have also developed applications using CXR. For instance, differential diagnosis (Healthy, pneumonia or COVID-19) of COVID-19-positive patients can be achieved20–22 with accuracies reaching over 90% using deep networks such as the Xception architecture. Disease severity can be assessed in an objective way using radiology23–25, which is useful to quickly assess the pulmonary involvement of the disease; and the potential for adverse events can be predicted (such as death, intubation or need for oxygenation) to help direct future treatments26. Finally, ventilation need can be predicted in the near future for hospital-admitted patients27 with over 90% accuracy.
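To make the feature-extraction strategy summarized in the abstract above more concrete, the sketch below pairs a frozen convolutional backbone with a classical classifier, the general approach discussed further in the next paragraph. It is only a hedged approximation: an ImageNet-pretrained DenseNet-121 from torchvision stands in for a CheXpert/CheXNet-trained network, and the file names and labels are hypothetical rather than taken from the study.

```python
# Hedged sketch (not the authors' implementation): a frozen CNN backbone as a
# CXR feature extractor, with a classical classifier trained on its features.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import LogisticRegression

# ImageNet weights are a stand-in here; the paper instead relies on a
# CheXnet-style network pretrained on a large chest X-ray dataset.
backbone = models.densenet121(weights=models.DenseNet121_Weights.DEFAULT)
backbone.classifier = torch.nn.Identity()  # expose the 1024-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(paths):
    """Return one frozen-feature vector per image path."""
    feats = []
    for p in paths:
        x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(backbone(x).squeeze(0).numpy())
    return feats

# Hypothetical file names and labels (0 = 'Improved', 1 = 'Worse'), for illustration only.
train_paths, train_labels = ["cxr_pair1_first.png", "cxr_pair2_first.png"], [0, 1]
clf = LogisticRegression(max_iter=1000).fit(extract_features(train_paths), train_labels)
```

Any classical model (logistic regression, SVM, random forest) could then be trained on these frozen features, without re-training the network on COVID-19 images.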
These cases demonstrate that CXRs possess enough information to predict clinical developments in the near future (around three days ahead of the event). It should be noted, however, that in these works adverse event prediction and radiological severity are always treated as two separate endpoints. Yet they are likely to be correlated, as the clinical degradation of the patient should be reflected in the radiological signs on CXR; this forms the first contribution of this article.
Most attempts at COVID-19 diagnosis or prognosis have used end-to-end deep learning methods. However, this comes at a high risk of overfitting when small COVID-19 datasets are used. Hence, alternatives to end-to-end methods are desired. Of particular interest is the combination of transfer learning and classical machine learning algorithms18, where machine learning algorithms were trained on the output of a convolutional network trained on COVID-19 images, or the use of radiomics as a feature extractor19. Another interesting avenue to limit overfitting is deep transfer learning on CXR datasets such as the CheXpert dataset28 to create a CXR-specific feature extractor, using supervised or contrastive learning26. The classifier layer is then fine-tuned jointly with the feature extractor to approach a prediction task with a dataset of >5000 images. Considering that fine-tuning deep networks is still at risk of overfitting on smaller datasets, an approach using deep transfer learning to extract general CXR pathological features that are not specific to COVID-19 would be valuable. Classical machine learning can then approach the task using these features without re-training on COVID-19 patients. This is the second major contribution of our article. | [
"32215647",
"32234725",
"32187463",
"32105641",
"32174129",
"32031570",
"32125873",
"32216717",
"33778544",
"32216962",
"33414495",
"33166898",
"33033861",
"32437939",
"32202721",
"32416069",
"32524445",
"32864270",
"32722697",
"32815519",
"26510957",
"32265220",
"32640463"
] | [
{
"pmid": "32174129",
"title": "Coronavirus Disease 2019 (COVID-19): A Systematic Review of Imaging Findings in 919 Patients.",
"abstract": "OBJECTIVE. Available information on CT features of the 2019 novel coronavirus disease (COVID-19) is scattered in different publications, and a cohesive literature review has yet to be compiled. MATERIALS AND METHODS. This article includes a systematic literature search of PubMed, Embase (Elsevier), Google Scholar, and the World Health Organization database. RESULTS. Known features of COVID-19 on initial CT include bilateral multilobar ground-glass opacification (GGO) with a peripheral or posterior distribution, mainly in the lower lobes and less frequently within the right middle lobe. Atypical initial imaging presentation of consolidative opacities superimposed on GGO may be found in a smaller number of cases, mainly in the elderly population. Septal thickening, bronchiectasis, pleural thickening, and subpleural involvement are some of the less common findings, mainly in the later stages of the disease. Pleural effusion, pericardial effusion, lymphadenopathy, cavitation, CT halo sign, and pneumothorax are uncommon but may be seen with disease progression. Follow-up CT in the intermediate stage of disease shows an increase in the number and size of GGOs and progressive transformation of GGO into multifocal consolidative opacities, septal thickening, and development of a crazy paving pattern, with the greatest severity of CT findings visible around day 10 after the symptom onset. Acute respiratory distress syndrome is the most common indication for transferring patients with COVID-19 to the ICU and the major cause of death in this patient population. Imaging patterns corresponding to clinical improvement usually occur after week 2 of the disease and include gradual resolution of consolidative opacities and decrease in the number of lesions and involved lobes. CONCLUSION. This systematic review of current literature on COVID-19 provides insight into the initial and follow-up CT characteristics of the disease."
},
{
"pmid": "32031570",
"title": "Clinical Characteristics of 138 Hospitalized Patients With 2019 Novel Coronavirus-Infected Pneumonia in Wuhan, China.",
"abstract": "Importance\nIn December 2019, novel coronavirus (2019-nCoV)-infected pneumonia (NCIP) occurred in Wuhan, China. The number of cases has increased rapidly but information on the clinical characteristics of affected patients is limited.\n\n\nObjective\nTo describe the epidemiological and clinical characteristics of NCIP.\n\n\nDesign, Setting, and Participants\nRetrospective, single-center case series of the 138 consecutive hospitalized patients with confirmed NCIP at Zhongnan Hospital of Wuhan University in Wuhan, China, from January 1 to January 28, 2020; final date of follow-up was February 3, 2020.\n\n\nExposures\nDocumented NCIP.\n\n\nMain Outcomes and Measures\nEpidemiological, demographic, clinical, laboratory, radiological, and treatment data were collected and analyzed. Outcomes of critically ill patients and noncritically ill patients were compared. Presumed hospital-related transmission was suspected if a cluster of health professionals or hospitalized patients in the same wards became infected and a possible source of infection could be tracked.\n\n\nResults\nOf 138 hospitalized patients with NCIP, the median age was 56 years (interquartile range, 42-68; range, 22-92 years) and 75 (54.3%) were men. Hospital-associated transmission was suspected as the presumed mechanism of infection for affected health professionals (40 [29%]) and hospitalized patients (17 [12.3%]). Common symptoms included fever (136 [98.6%]), fatigue (96 [69.6%]), and dry cough (82 [59.4%]). Lymphopenia (lymphocyte count, 0.8 × 109/L [interquartile range {IQR}, 0.6-1.1]) occurred in 97 patients (70.3%), prolonged prothrombin time (13.0 seconds [IQR, 12.3-13.7]) in 80 patients (58%), and elevated lactate dehydrogenase (261 U/L [IQR, 182-403]) in 55 patients (39.9%). Chest computed tomographic scans showed bilateral patchy shadows or ground glass opacity in the lungs of all patients. Most patients received antiviral therapy (oseltamivir, 124 [89.9%]), and many received antibacterial therapy (moxifloxacin, 89 [64.4%]; ceftriaxone, 34 [24.6%]; azithromycin, 25 [18.1%]) and glucocorticoid therapy (62 [44.9%]). Thirty-six patients (26.1%) were transferred to the intensive care unit (ICU) because of complications, including acute respiratory distress syndrome (22 [61.1%]), arrhythmia (16 [44.4%]), and shock (11 [30.6%]). The median time from first symptom to dyspnea was 5.0 days, to hospital admission was 7.0 days, and to ARDS was 8.0 days. Patients treated in the ICU (n = 36), compared with patients not treated in the ICU (n = 102), were older (median age, 66 years vs 51 years), were more likely to have underlying comorbidities (26 [72.2%] vs 38 [37.3%]), and were more likely to have dyspnea (23 [63.9%] vs 20 [19.6%]), and anorexia (24 [66.7%] vs 31 [30.4%]). Of the 36 cases in the ICU, 4 (11.1%) received high-flow oxygen therapy, 15 (41.7%) received noninvasive ventilation, and 17 (47.2%) received invasive ventilation (4 were switched to extracorporeal membrane oxygenation). As of February 3, 47 patients (34.1%) were discharged and 6 died (overall mortality, 4.3%), but the remaining patients are still hospitalized. Among those discharged alive (n = 47), the median hospital stay was 10 days (IQR, 7.0-14.0).\n\n\nConclusions and Relevance\nIn this single-center case series of 138 hospitalized patients with confirmed NCIP in Wuhan, China, presumed hospital-related transmission of 2019-nCoV was suspected in 41% of patients, 26% of patients received ICU care, and mortality was 4.3%."
},
{
"pmid": "32125873",
"title": "Relation Between Chest CT Findings and Clinical Conditions of Coronavirus Disease (COVID-19) Pneumonia: A Multicenter Study.",
"abstract": "OBJECTIVE. The increasing number of cases of confirmed coronavirus disease (COVID-19) in China is striking. The purpose of this study was to investigate the relation between chest CT findings and the clinical conditions of COVID-19 pneumonia. MATERIALS AND METHODS. Data on 101 cases of COVID-19 pneumonia were retrospectively collected from four institutions in Hunan, China. Basic clinical characteristics and detailed imaging features were evaluated and compared between two groups on the basis of clinical status: nonemergency (mild or common disease) and emergency (severe or fatal disease). RESULTS. Patients 21-50 years old accounted for most (70.2%) of the cohort, and five (5.0%) patients had disease associated with a family outbreak. Most patients (78.2%) had fever as the onset symptom. Most patients with COVID-19 pneumonia had typical imaging features, such as ground-glass opacities (GGO) (87 [86.1%]) or mixed GGO and consolidation (65 [64.4%]), vascular enlargement in the lesion (72 [71.3%]), and traction bronchiectasis (53 [52.5%]). Lesions present on CT images were more likely to have a peripheral distribution (88 [87.1%]) and bilateral involvement (83 [82.2%]) and be lower lung predominant (55 [54.5%]) and multifocal (55 [54.5%]). Patients in the emergency group were older than those in the non-emergency group. Architectural distortion, traction bronchiectasis, and CT involvement score aided in evaluation of the severity and extent of the disease. CONCLUSION. Patients with confirmed COVID-19 pneumonia have typical imaging features that can be helpful in early screening of highly suspected cases and in evaluation of the severity and extent of disease. Most patients with COVID-19 pneumonia have GGO or mixed GGO and consolidation and vascular enlargement in the lesion. Lesions are more likely to have peripheral distribution and bilateral involvement and be lower lung predominant and multifocal. CT involvement score can help in evaluation of the severity and extent of the disease."
},
{
"pmid": "32216717",
"title": "Frequency and Distribution of Chest Radiographic Findings in Patients Positive for COVID-19.",
"abstract": "Background Current coronavirus disease 2019 (COVID-19) radiologic literature is dominated by CT, and a detailed description of chest radiography appearances in relation to the disease time course is lacking. Purpose To describe the time course and severity of findings of COVID-19 at chest radiography and correlate these with real-time reverse transcription polymerase chain reaction (RT-PCR) testing for severe acute respiratory syndrome coronavirus 2, or SARS-CoV-2, nucleic acid. Materials and Methods This is a retrospective study of patients with COVID-19 confirmed by using RT-PCR and chest radiographic examinations who were admitted across four hospitals and evaluated between January and March 2020. Baseline and serial chest radiographs (n = 255) were reviewed with RT-PCR. Correlation with concurrent CT examinations (n = 28) was performed when available. Two radiologists scored each chest radiograph in consensus for consolidation, ground-glass opacity, location, and pleural fluid. A severity index was determined for each lung. The lung scores were summed to produce the final severity score. Results The study was composed of 64 patients (26 men; mean age, 56 years ± 19 [standard deviation]). Of these, 58 patients had initial positive findings with RT-PCR (91%; 95% confidence interval: 81%, 96%), 44 patients had abnormal findings at baseline chest radiography (69%; 95% confidence interval: 56%, 80%), and 38 patients had initial positive findings with RT-PCR testing and abnormal findings at baseline chest radiography (59%; 95% confidence interval: 46%, 71%). Six patients (9%) showed abnormalities at chest radiography before eventually testing positive for COVID-19 with RT-PCR. Sensitivity of initial RT-PCR (91%; 95% confidence interval: 83%, 97%) was higher than that of baseline chest radiography (69%; 95% confidence interval: 56%, 80%) (P = .009). Radiographic recovery (mean, 6 days ± 5) and virologic recovery (mean, 8 days ± 6) were not significantly different (P = .33). Consolidation was the most common finding (30 of 64; 47%) followed by ground-glass opacities (21 of 64; 33%). Abnormalities at chest radiography had a peripheral distribution (26 of 64; 41%) and lower zone distribution (32 of 64; 50%) with bilateral involvement (32 of 64; 50%). Pleural effusion was uncommon (two of 64; 3%). The severity of findings at chest radiography peaked at 10-12 days from the date of symptom onset. Conclusion Findings at chest radiography in patients with coronavirus disease 2019 frequently showed bilateral lower zone consolidation, which peaked at 10-12 days from symptom onset. © RSNA, 2020."
},
{
"pmid": "33778544",
"title": "Chest Imaging Appearance of COVID-19 Infection.",
"abstract": "Coronavirus disease 2019 (COVID-19) (previously known as novel coronavirus [2019-nCoV]), first reported in China, has now been declared a global health emergency by the World Health Organization. As confirmed cases are being reported in several countries from all over the world, it becomes important for all radiologists to be aware of the imaging spectrum of the disease and contribute to effective surveillance and response measures. © RSNA, 2020 See editorial by Kay and Abbara in this issue."
},
{
"pmid": "33414495",
"title": "Plasma Hsp90 levels in patients with systemic sclerosis and relation to lung and skin involvement: a cross-sectional and longitudinal study.",
"abstract": "Our previous study demonstrated increased expression of Heat shock protein (Hsp) 90 in the skin of patients with systemic sclerosis (SSc). We aimed to evaluate plasma Hsp90 in SSc and characterize its association with SSc-related features. Ninety-two SSc patients and 92 age-/sex-matched healthy controls were recruited for the cross-sectional analysis. The longitudinal analysis comprised 30 patients with SSc associated interstitial lung disease (ILD) routinely treated with cyclophosphamide. Hsp90 was increased in SSc compared to healthy controls. Hsp90 correlated positively with C-reactive protein and negatively with pulmonary function tests: forced vital capacity and diffusing capacity for carbon monoxide (DLCO). In patients with diffuse cutaneous (dc) SSc, Hsp90 positively correlated with the modified Rodnan skin score. In SSc-ILD patients treated with cyclophosphamide, no differences in Hsp90 were found between baseline and after 1, 6, or 12 months of therapy. However, baseline Hsp90 predicts the 12-month change in DLCO. This study shows that Hsp90 plasma levels are increased in SSc patients compared to age-/sex-matched healthy controls. Elevated Hsp90 in SSc is associated with increased inflammatory activity, worse lung functions, and in dcSSc, with the extent of skin involvement. Baseline plasma Hsp90 predicts the 12-month change in DLCO in SSc-ILD patients treated with cyclophosphamide."
},
{
"pmid": "33166898",
"title": "Correlation of chest radiography findings with the severity and progression of COVID-19 pneumonia.",
"abstract": "PURPOSE\nAim is to assess the temporal changes and prognostic value of chest radiograph (CXR) in COVID-19 patients.\n\n\nMATERIAL AND METHODS\nWe performed a retrospective study of confirmed COVID-19 patients presented to the emergency between March 07-17, 2020. Clinical & radiological findings were reviewed. Clinical outcomes were classified into critical & non-critical based on severity. Two independent radiologists graded frontal view CXRs into COVID-19 pneumonia category 1 (CoV-P1) with <4 zones and CoV-P2 with ≥4 zones involvement. Interobserver agreement of CoV-P category for the CXR preceding the clinical outcome was assessed using Kendall's τ coefficient. Association between CXR findings and clinical deterioration was calculated along with temporal changes of CXR findings with disease progression.\n\n\nRESULTS\nSixty-two patients were evaluated for clinical features. 56 of these (total: 325 CXRs) were evaluated for radiological findings. Common patterns were progression from lower to upper zones, peripheral to diffuse involvement, & from ground glass opacities to consolidation. Consolidations starting peripherally were noted in 76%, 93% and 48% with critical outcomes, respectively. The interobserver agreement of the CoV-P category of CXRs in the critical and non-critical outcome groups were good and excellent, respectively (τ coefficient = 0.6 & 1.0). Significant association was observed between CoV-P2 and clinical deterioration into a critical status (χ2 = 27.7, p = 0.0001) with high sensitivity (95%) and specificity (71%) within a median interval time of 2 days (range: 0-4 days).\n\n\nCONCLUSION\nInvolvement of predominantly 4 or more zones on frontal chest radiograph can be used as predictive prognostic indicator of poorer outcome in COVID-19 patients."
},
{
"pmid": "33033861",
"title": "Chest X-ray for predicting mortality and the need for ventilatory support in COVID-19 patients presenting to the emergency department.",
"abstract": "OBJECTIVES\nTo evaluate the inter-rater agreement of chest X-ray (CXR) findings in coronavirus disease 2019 (COVID-19) and to determine the value of initial CXR along with demographic, clinical, and laboratory data at emergency department (ED) presentation for predicting mortality and the need for ventilatory support.\n\n\nMETHODS\nA total of 340 COVID-19 patients who underwent CXR in the ED setting (March 1-13, 2020) were retrospectively included. Two reviewers independently assessed CXR abnormalities, including ground-glass opacities (GGOs) and consolidation. Two scoring systems (Brixia score and percentage of lung involvement) were applied. Inter-rater agreement was assessed by weighted Cohen's kappa (κ) or intraclass correlation coefficient (ICC). Predictors of death and respiratory support were identified by logistic or Poisson regression.\n\n\nRESULTS\nGGO admixed with consolidation (n = 235, 69%) was the most common CXR finding. The inter-rater agreement was almost perfect for type of parenchymal opacity (κ = 0.90), Brixia score (ICC = 0.91), and percentage of lung involvement (ICC = 0.95). The Brixia score (OR: 1.19; 95% CI: 1.06, 1.34; p = 0.003), age (OR: 1.16; 95% CI: 1.11, 1.22; p < 0.001), PaO2/FiO2 ratio (OR: 0.99; 95% CI: 0.98, 1; p = 0.002), and cardiovascular diseases (OR: 3.21; 95% CI: 1.28, 8.39; p = 0.014) predicted death. Percentage of lung involvement (OR: 1.02; 95% CI: 1.01, 1.03; p = 0.001) and PaO2/FiO2 ratio (OR: 0.99; 95% CI: 0.99, 1.00; p < 0.001) were significant predictors of the need for ventilatory support.\n\n\nCONCLUSIONS\nCXR is a reproducible tool for assessing COVID-19 and integrates with patient history, PaO2/FiO2 ratio, and SpO2 values to early predict mortality and the need for ventilatory support.\n\n\nKEY POINTS\n• Chest X-ray is a reproducible tool for assessing COVID-19 pneumonia. • The Brixia score and percentage of lung involvement on chest X-ray integrate with patient history, PaO2/FIO2 ratio, and SpO2 values to early predict mortality and the need for ventilatory support in COVID-19 patients presenting to the emergency department."
},
{
"pmid": "32437939",
"title": "Chest X-ray severity index as a predictor of in-hospital mortality in coronavirus disease 2019: A study of 302 patients from Italy.",
"abstract": "OBJECTIVES\nThis study aimed to assess the usefulness of a new chest X-ray scoring system - the Brixia score - to predict the risk of in-hospital mortality in hospitalized patients with coronavirus disease 2019 (COVID-19).\n\n\nMETHODS\nBetween March 4, 2020 and March 24, 2020, all CXR reports including the Brixia score were retrieved. We enrolled only hospitalized Caucasian patients with COVID-19 for whom the final outcome was available. For each patient, age, sex, underlying comorbidities, immunosuppressive therapies, and the CXR report containing the highest score were considered for analysis. These independent variables were analyzed using a multivariable logistic regression model to extract the predictive factors for in-hospital mortality.\n\n\nRESULTS\n302 Caucasian patients who were hospitalized for COVID-19 were enrolled. In the multivariable logistic regression model, only Brixia score, patient age, and conditions that induced immunosuppression were the significant predictive factors for in-hospital mortality. According to receiver operating characteristic curve analyses, the optimal cutoff values for Brixia score and patient age were 8 points and 71 years, respectively. Three different models that included the Brixia score showed excellent predictive power.\n\n\nCONCLUSIONS\nPatients with a high Brixia score and at least one other predictive factor had the highest risk of in-hospital death."
},
{
"pmid": "32416069",
"title": "Clinically Applicable AI System for Accurate Diagnosis, Quantitative Measurements, and Prognosis of COVID-19 Pneumonia Using Computed Tomography.",
"abstract": "Many COVID-19 patients infected by SARS-CoV-2 virus develop pneumonia (called novel coronavirus pneumonia, NCP) and rapidly progress to respiratory failure. However, rapid diagnosis and identification of high-risk patients for early intervention are challenging. Using a large computed tomography (CT) database from 3,777 patients, we developed an AI system that can diagnose NCP and differentiate it from other common pneumonia and normal controls. The AI system can assist radiologists and physicians in performing a quick diagnosis especially when the health system is overloaded. Significantly, our AI system identified important clinical markers that correlated with the NCP lesion properties. Together with the clinical data, our AI system was able to provide accurate clinical prognosis that can aid clinicians to consider appropriate early clinical management and allocate resources appropriately. We have made this AI system available globally to assist the clinicians to combat COVID-19."
},
{
"pmid": "32524445",
"title": "Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks.",
"abstract": "In this study, a dataset of X-ray images from patients with common bacterial pneumonia, confirmed Covid-19 disease, and normal incidents, was utilized for the automatic detection of the Coronavirus disease. The aim of the study is to evaluate the performance of state-of-the-art convolutional neural network architectures proposed over the recent years for medical image classification. Specifically, the procedure called Transfer Learning was adopted. With transfer learning, the detection of various abnormalities in small medical image datasets is an achievable target, often yielding remarkable results. The datasets utilized in this experiment are two. Firstly, a collection of 1427 X-ray images including 224 images with confirmed Covid-19 disease, 700 images with confirmed common bacterial pneumonia, and 504 images of normal conditions. Secondly, a dataset including 224 images with confirmed Covid-19 disease, 714 images with confirmed bacterial and viral pneumonia, and 504 images of normal conditions. The data was collected from the available X-ray images on public medical repositories. The results suggest that Deep Learning with X-ray imaging may extract significant biomarkers related to the Covid-19 disease, while the best accuracy, sensitivity, and specificity obtained is 96.78%, 98.66%, and 96.46% respectively. Since by now, all diagnostic tests show failure rates such as to raise concerns, the probability of incorporating X-rays into the diagnosis of the disease could be assessed by the medical community, based on the findings, while more research to evaluate the X-ray approach from different aspects may be conducted."
},
{
"pmid": "32864270",
"title": "Predicting COVID-19 Pneumonia Severity on Chest X-ray With Deep Learning.",
"abstract": "Introduction The need to streamline patient management for coronavirus disease-19 (COVID-19) has become more pressing than ever. Chest X-rays (CXRs) provide a non-invasive (potentially bedside) tool to monitor the progression of the disease. In this study, we present a severity score prediction model for COVID-19 pneumonia for frontal chest X-ray images. Such a tool can gauge the severity of COVID-19 lung infections (and pneumonia in general) that can be used for escalation or de-escalation of care as well as monitoring treatment efficacy, especially in the ICU. Methods Images from a public COVID-19 database were scored retrospectively by three blinded experts in terms of the extent of lung involvement as well as the degree of opacity. A neural network model that was pre-trained on large (non-COVID-19) chest X-ray datasets is used to construct features for COVID-19 images which are predictive for our task. Results This study finds that training a regression model on a subset of the outputs from this pre-trained chest X-ray model predicts our geographic extent score (range 0-8) with 1.14 mean absolute error (MAE) and our lung opacity score (range 0-6) with 0.78 MAE. Conclusions These results indicate that our model's ability to gauge the severity of COVID-19 lung infections could be used for escalation or de-escalation of care as well as monitoring treatment efficacy, especially in the ICU. To enable follow up work, we make our code, labels, and data available online."
},
{
"pmid": "32722697",
"title": "Deep transfer learning artificial intelligence accurately stages COVID-19 lung disease severity on portable chest radiographs.",
"abstract": "This study employed deep-learning convolutional neural networks to stage lung disease severity of Coronavirus Disease 2019 (COVID-19) infection on portable chest x-ray (CXR) with radiologist score of disease severity as ground truth. This study consisted of 131 portable CXR from 84 COVID-19 patients (51M 55.1±14.9yo; 29F 60.1±14.3yo; 4 missing information). Three expert chest radiologists scored the left and right lung separately based on the degree of opacity (0-3) and geographic extent (0-4). Deep-learning convolutional neural network (CNN) was used to predict lung disease severity scores. Data were split into 80% training and 20% testing datasets. Correlation analysis between AI-predicted versus radiologist scores were analyzed. Comparison was made with traditional and transfer learning. The average opacity score was 2.52 (range: 0-6) with a standard deviation of 0.25 (9.9%) across three readers. The average geographic extent score was 3.42 (range: 0-8) with a standard deviation of 0.57 (16.7%) across three readers. The inter-rater agreement yielded a Fleiss' Kappa of 0.45 for opacity score and 0.71 for extent score. AI-predicted scores strongly correlated with radiologist scores, with the top model yielding a correlation coefficient (R2) of 0.90 (range: 0.73-0.90 for traditional learning and 0.83-0.90 for transfer learning) and a mean absolute error of 8.5% (ranges: 17.2-21.0% and 8.5%-15.5, respectively). Transfer learning generally performed better. In conclusion, deep-learning CNN accurately stages disease severity on portable chest x-ray of COVID-19 lung infection. This approach may prove useful to stage lung disease severity, prognosticate, and predict treatment response and survival, thereby informing risk management and resource allocation."
},
{
"pmid": "32815519",
"title": "Determination of disease severity in COVID-19 patients using deep learning in chest X-ray images.",
"abstract": "PURPOSE\nChest X-ray plays a key role in diagnosis and management of COVID-19 patients and imaging features associated with clinical elements may assist with the development or validation of automated image analysis tools. We aimed to identify associations between clinical and radiographic features as well as to assess the feasibility of deep learning applied to chest X-rays in the setting of an acute COVID-19 outbreak.\n\n\nMETHODS\nA retrospective study of X-rays, clinical, and laboratory data was performed from 48 SARS-CoV-2 RT-PCR positive patients (age 60±17 years, 15 women) between February 22 and March 6, 2020 from a tertiary care hospital in Milan, Italy. Sixty-five chest X-rays were reviewed by two radiologists for alveolar and interstitial opacities and classified by severity on a scale from 0 to 3. Clinical factors (age, symptoms, comorbidities) were investigated for association with opacity severity and also with placement of central line or endotracheal tube. Deep learning models were then trained for two tasks: lung segmentation and opacity detection. Imaging characteristics were compared to clinical datapoints using the unpaired student's t-test or Mann-Whitney U test. Cohen's kappa analysis was used to evaluate the concordance of deep learning to conventional radiologist interpretation.\n\n\nRESULTS\nFifty-six percent of patients presented with alveolar opacities, 73% had interstitial opacities, and 23% had normal X-rays. The presence of alveolar or interstitial opacities was statistically correlated with age (P = 0.008) and comorbidities (P = 0.005). The extent of alveolar or interstitial opacities on baseline X-ray was significantly associated with the presence of endotracheal tube (P = 0.0008 and P = 0.049) or central line (P = 0.003 and P = 0.007). In comparison to human interpretation, the deep learning model achieved a kappa concordance of 0.51 for alveolar opacities and 0.71 for interstitial opacities.\n\n\nCONCLUSION\nChest X-ray analysis in an acute COVID-19 outbreak showed that the severity of opacities was associated with advanced age, comorbidities, as well as acuity of care. Artificial intelligence tools based upon deep learning of COVID-19 chest X-rays are feasible in the acute outbreak setting."
},
{
"pmid": "26510957",
"title": "STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies.",
"abstract": "Incomplete reporting has been identified as a major source of avoidable waste in biomedical research. Essential information is often not provided in study reports, impeding the identification, critical appraisal, and replication of studies. To improve the quality of reporting of diagnostic accuracy studies, the Standards for Reporting of Diagnostic Accuracy Studies (STARD) statement was developed. Here we present STARD 2015, an updated list of 30 essential items that should be included in every report of a diagnostic accuracy study. This update incorporates recent evidence about sources of bias and variability in diagnostic accuracy and is intended to facilitate the use of STARD. As such, STARD 2015 may help to improve completeness and transparency in reporting of diagnostic accuracy studies."
},
{
"pmid": "32265220",
"title": "Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal",
"abstract": "OBJECTIVE\nTo review and appraise the validity and usefulness of published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of covid-19 infection or being admitted to hospital with the disease.\n\n\nDESIGN\nLiving systematic review and critical appraisal by the COVID-PRECISE (Precise Risk Estimation to optimise covid-19 Care for Infected or Suspected patients in diverse sEttings) group.\n\n\nDATA SOURCES\nPubMed and Embase through Ovid, up to 1 July 2020, supplemented with arXiv, medRxiv, and bioRxiv up to 5 May 2020.\n\n\nSTUDY SELECTION\nStudies that developed or validated a multivariable covid-19 related prediction model.\n\n\nDATA EXTRACTION\nAt least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool).\n\n\nRESULTS\n37 421 titles were screened, and 169 studies describing 232 prediction models were included. The review identified seven models for identifying people at risk in the general population; 118 diagnostic models for detecting covid-19 (75 were based on medical imaging, 10 to diagnose disease severity); and 107 prognostic models for predicting mortality risk, progression to severe disease, intensive care unit admission, ventilation, intubation, or length of hospital stay. The most frequent types of predictors included in the covid-19 prediction models are vital signs, age, comorbidities, and image features. Flu-like symptoms are frequently predictive in diagnostic models, while sex, C reactive protein, and lymphocyte counts are frequent prognostic factors. Reported C index estimates from the strongest form of validation available per model ranged from 0.71 to 0.99 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.54 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and unclear reporting. Many models did not include a description of the target population (n=27, 12%) or care setting (n=75, 32%), and only 11 (5%) were externally validated by a calibration plot. The Jehi diagnostic model and the 4C mortality score were identified as promising models.\n\n\nCONCLUSION\nPrediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that almost all pubished prediction models are poorly reported, and at high risk of bias such that their reported predictive performance is probably optimistic. However, we have identified two (one diagnostic and one prognostic) promising models that should soon be validated in multiple cohorts, preferably through collaborative efforts and data sharing to also allow an investigation of the stability and heterogeneity in their performance across populations and settings. Details on all reviewed models are publicly available at https://www.covprecise.org/. 
Methodological guidance as provided in this paper should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, prediction model authors should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.\n\n\nSYSTEMATIC REVIEW REGISTRATION\nProtocol https://osf.io/ehc47/, registration https://osf.io/wy245.\n\n\nREADERS' NOTE\nThis article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 3 of the original article published on 7 April 2020 (BMJ 2020;369:m1328). Previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp). When citing this paper please consider adding the update number and date of access for clarity."
},
{
"pmid": "32640463",
"title": "Factors associated with COVID-19-related death using OpenSAFELY.",
"abstract": "Coronavirus disease 2019 (COVID-19) has rapidly affected mortality worldwide1. There is unprecedented urgency to understand who is most at risk of severe outcomes, and this requires new approaches for the timely analysis of large datasets. Working on behalf of NHS England, we created OpenSAFELY-a secure health analytics platform that covers 40% of all patients in England and holds patient data within the existing data centre of a major vendor of primary care electronic health records. Here we used OpenSAFELY to examine factors associated with COVID-19-related death. Primary care records of 17,278,392 adults were pseudonymously linked to 10,926 COVID-19-related deaths. COVID-19-related death was associated with: being male (hazard ratio (HR) 1.59 (95% confidence interval 1.53-1.65)); greater age and deprivation (both with a strong gradient); diabetes; severe asthma; and various other medical conditions. Compared with people of white ethnicity, Black and South Asian people were at higher risk, even after adjustment for other factors (HR 1.48 (1.29-1.69) and 1.45 (1.32-1.58), respectively). We have quantified a range of clinical factors associated with COVID-19-related death in one of the largest cohort studies on this topic so far. More patient records are rapidly being added to OpenSAFELY, we will update and extend our results regularly."
}
] |
Frontiers in Plant Science | null | PMC8978562 | 10.3389/fpls.2022.839044 | CRIA: An Interactive Gene Selection Algorithm for Cancers Prediction Based on Copy Number Variations | Genomic copy number variations (CNVs) are among the most important structural variations of genes found to be related to the risk of individual cancer and therefore they can be utilized to provide a clue to the research on the formation and progression of cancer. In this paper, an improved computational gene selection algorithm called CRIA (correlation-redundancy and interaction analysis based on gene selection algorithm) is introduced to screen genes that are closely related to cancer from the whole genome based on the value of gene CNVs. The CRIA algorithm mainly consists of two parts. Firstly, the main effect feature is selected out from the original feature set that has the largest correlation with the class label. Secondly, after the analysis involving correlation, redundancy and interaction for each feature in the candidate feature set, we choose the feature that maximizes the value of the custom selection criterion and add it into the selected feature set and then remove it from the candidate feature set in each selection round. Based on the real datasets, CRIA selects the top 200 genes to predict the type of cancer. The experiments' results of our research show that, compared with the state-of-the-art related methods, the CRIA algorithm can extract the key features of CNVs and a better classification performance can be achieved based on them. In addition, the interpretable genes highly related to cancer can be known, which may provide new clues at the genetic level for the treatment of the cancer. | Related Work
Irrelevant and redundant features in high-dimensional data degrade both the performance and the efficiency of a learning algorithm. Therefore, dimensionality reduction of features is one of the most common data preprocessing methods (Orsenigo and Vercellis, 2013); its purpose is to reduce the training time of the algorithm and improve the accuracy of the final results (Bennasar et al., 2015). In recent years, gene selection methods based on mutual information have received wide attention. Best individual gene selection (BIF) (Chandrashekar and Sahin, 2014) is the simplest and fastest filtering gene selection algorithm and is especially suitable for high-dimensional data.
Battiti utilized the mutual information (MI) between features and the class label [I(f_i; c)] to measure relevance and the mutual information between features [I(f_i; f_s)] to measure redundancy (Battiti, 1994). He proposed the Mutual Information Gene selection (MIFS) criterion, which is defined as:
(8) $J_{\mathrm{MIFS}}(f_i) = I(f_i; c) - \beta \sum_{f_s \in \Omega_S} I(f_i; f_s), \quad f_i \in F - \Omega_S$
where F is the original feature set, Ω_S is the selected feature subset, F − Ω_S is the candidate feature subset and c is the class label. β is a configurable parameter that determines the trade-off between relevance and redundancy. However, β is set experimentally, which results in unstable performance.
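To make the greedy search behind criteria of this family concrete, here is a minimal sketch (our own illustration, not code from the cited papers) of forward selection with the MIFS score of Eq. (8). It assumes discrete-valued features (e.g., discretized CNV states) and uses scikit-learn's discrete mutual information estimate.

```python
# Illustrative sketch of greedy forward selection with the MIFS score of Eq. (8).
# Assumes X holds discrete-valued features (columns) and y holds class labels.
import numpy as np
from sklearn.metrics import mutual_info_score  # discrete I(X; Y), in nats


def mifs_select(X, y, k, beta=0.5):
    """Return the indices of k features chosen greedily by the J_MIFS score."""
    n_features = X.shape[1]
    relevance = np.array([mutual_info_score(X[:, j], y) for j in range(n_features)])
    candidates, selected = set(range(n_features)), []

    def j_mifs(j):  # Eq. (8): relevance minus beta-weighted redundancy
        redundancy = sum(mutual_info_score(X[:, j], X[:, s]) for s in selected)
        return relevance[j] - beta * redundancy

    while candidates and len(selected) < k:
        best = max(candidates, key=j_mifs)
        selected.append(best)
        candidates.remove(best)
    return selected
```

The criteria reviewed below differ mainly in the scoring expression used inside such a loop.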
Peng et al. (2005) proposed the Minimum-Redundancy Maximum-Relevance (MRMR) criterion, whose evaluation function is defined as:
(9) $J_{\mathrm{mRMR}}(f_i) = I(f_i; c) - \frac{1}{|n_s|} \sum_{f_s \in \Omega_S} I(f_i; f_s), \quad f_i \in F - \Omega_S$
where |n_s| is the number of selected features.
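As a usage note, the mRMR score of Eq. (9) drops into the same greedy loop sketched above; a hedged one-function version (again illustrative, not the authors' code) could look like this:

```python
# Illustrative drop-in scoring function for Eq. (9): average redundancy instead of
# a beta-weighted sum. Plugs into the greedy loop shown earlier.
import numpy as np
from sklearn.metrics import mutual_info_score


def mrmr_score(X, y, j, selected):
    relevance = mutual_info_score(X[:, j], y)
    if not selected:
        return relevance
    redundancy = np.mean([mutual_info_score(X[:, j], X[:, s]) for s in selected])
    return relevance - redundancy
```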
Similarly, other gene selection methods that consider both the relevance between features and the class label and the redundancy between features have been proposed, such as Normalized Mutual Information Gene selection (NMIFS) and Conditional Mutual Information (CMI), introduced by Estévez et al. (2009) and Liang et al. (2019), respectively. Their evaluation functions are defined as follows:
(10) $J_{\mathrm{NMIFS}}(f_i) = I(f_i; c) - \frac{1}{|n_s|} \sum_{f_s \in \Omega_S} \frac{I(f_i; f_s)}{\min(H(f_i), H(f_s))}, \quad f_i \in F - \Omega_S$
(11) $J_{\mathrm{CMI}}(f_i) = I(f_i; c) - \frac{H(f_i \mid c)}{H(f_i)} \sum_{f_s \in \Omega_S} \frac{I(f_s; c)\, I(f_i; f_s)}{H(f_s)\, H(c)}, \quad f_i \in F - \Omega_S$
where H(f_i) is the information entropy and H(f_i | c) is the conditional entropy.
Many gene selection algorithms based on information theory use mutual information as the relevance measure, which has the disadvantage that mutual information tends to favor features with more discrete values (Foithong et al., 2012). Thus, the symmetrical uncertainty (Witten and Frank, 2002), a normalized form of mutual information (SU), is adopted to solve this problem. The symmetrical uncertainty is defined as:
(12) $SU(f_i; c) = \frac{2\, I(f_i; c)}{H(f_i) + H(c)}$
The SU redresses the bias of mutual information as far as possible and scales its values to [0, 1] by penalizing inputs with large entropies, which improves the performance of gene selection. As with MI, for any two features f_i1 and f_i2, if SU(f_i1; c) > SU(f_i2; c), then f_i1 provides more information about the class and is therefore more relevant to c. If SU(f_i1; f_s) > SU(f_i2; f_s), then f_i1 shares more information with f_s and thus contributes less new information, so f_i1 and f_s have greater redundancy.
Additionally, the gene selection algorithms mentioned above fail to take feature interaction into consideration. After relevance and redundancy analysis, a feature deemed useless may still interact with other features to provide more useful information. This is especially true in complex biological systems, where molecules interact with each other and work together to produce physiological and pathological changes. If we only consider relevance and redundancy but ignore feature interaction in data analysis, we may miss some useful features and affect the analysis results (Chen et al., 2015).
Sun et al. (2013), Zeng et al. (2015), and Gu et al. (2020) respectively proposed gene selection methods using dynamic feature weights: the Dynamic Weighting-based Gene selection algorithm (DWFS), the Interaction Weight based Gene selection algorithm (IWFS) and the Redundancy Analysis and Interaction Weight-based gene selection algorithm (RAIW). All of them employ the symmetrical uncertainty to measure the relevance between features and the class label, and exploit the three-dimensional interaction information (mentioned in Information Theory Definition 4) to measure the interaction between two features and the class label. The evaluation functions are defined as follows:
(13) $J_{\mathrm{DWFS}}(f_i) = SU(f_i; c) \times w_{\mathrm{DWFS}}(f_i), \quad f_i \in F - \Omega_S$
(14) $J_{\mathrm{IWFS}}(f_i) = w_{\mathrm{IWFS}}(f_i) \times [1 + SU(f_i; c)], \quad f_i \in F - \Omega_S$
(15) $J_{\mathrm{RAIW}}(f_i) = SU(f_i; c) \times [1 - \alpha\, SU(f_i; f_s)] \times w_{\mathrm{RAIW}}(f_i), \quad f_i \in F - \Omega_S$
where w(f_i) is the weight of each feature, initialized to 1; α is a redundancy coefficient whose value depends on the number of features in the dataset; and f_s is a feature in the selected feature subset. In each round, the feature weights w(f_i) are updated by their interaction weight factors:
(16) $w_{\mathrm{DWFS}}(f_i) = w_{\mathrm{DWFS}}(f_i') \times [1 + CR(f_i, f_s)] = w_{\mathrm{DWFS}}(f_i') \times \left[ 1 + \frac{2\,[I(f_i; c \mid f_s) - I(f_i; c)]}{H(f_i) + H(c)} \right] = w_{\mathrm{DWFS}}(f_i') \times \left[ 1 + \frac{2\, I(f_i; f_s; c)}{H(f_i) + H(c)} \right]$
(17) $w_{\mathrm{IWFS}}(f_i) = w_{\mathrm{IWFS}}(f_i') \times IW(f_i, f_s) = w_{\mathrm{IWFS}}(f_i') \times \left[ 1 + \frac{I(f_i; f_s; c)}{H(f_i) + H(f_s)} \right]$
(18) $w_{\mathrm{RAIW}}(f_i) = w_{\mathrm{RAIW}}(f_i') \times [1 + I_f(f_i, f_s, c)] = w_{\mathrm{RAIW}}(f_i') \times \left[ 1 + \frac{2\, I(f_i; f_s; c)}{H(f_i) + H(f_s) + H(c)} \right]$
where w(f_i') denotes the feature weight from the previous round, I(f_i; c | f_s) is the conditional mutual information between f_i and c given f_s, and I(f_i; f_s; c) is the three-dimensional interaction information. However, although DWFS and IWFS take relevance and interaction into account, they ignore the redundancy between features. Correlation, redundancy and interaction are all taken into account by RAIW, but there is no reasonable way to set the value of α for a specific dataset.
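Since the weight updates of Eqs. (16)-(18) all hinge on the interaction term I(f_i; f_s; c), the following sketch (our own illustration, not the cited authors' code; discrete features and the IWFS variant are assumptions) shows how the quantities behind Eqs. (12)-(18) and one IWFS-style selection round can be computed:

```python
# Illustrative sketch for discrete features: SU (Eq. 12), interaction information,
# and one IWFS-style round (Eqs. 14 and 17). Not code from the cited papers.
import numpy as np
from sklearn.metrics import mutual_info_score  # discrete I(X; Y), in nats


def entropy(x):
    """Shannon entropy of a discrete variable, in nats."""
    _, counts = np.unique(x, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))


def su(x, y):
    """Symmetrical uncertainty of Eq. (12)."""
    denom = entropy(x) + entropy(y)
    return 2.0 * mutual_info_score(x, y) / denom if denom > 0 else 0.0


def conditional_mi(x, y, z):
    """I(X; Y | Z) = sum over z of p(z) * I(X; Y | Z = z), discrete variables."""
    n = len(z)
    return sum((np.sum(z == v) / n) * mutual_info_score(x[z == v], y[z == v])
               for v in np.unique(z))


def interaction_information(f_i, f_s, c):
    """Three-way interaction I(f_i; f_s; c) = I(f_i; c | f_s) - I(f_i; c)."""
    return conditional_mi(f_i, c, f_s) - mutual_info_score(f_i, c)


def iwfs_round(X, y, weights, selected, candidates):
    """One IWFS-style round: update each candidate's weight with respect to the
    last selected feature (Eq. 17), then pick the candidate maximizing Eq. (14).
    `weights` maps feature index to its multiplicative weight (initialized to 1)."""
    f_s = X[:, selected[-1]]
    for j in candidates:
        factor = 1.0 + interaction_information(X[:, j], f_s, y) / (
            entropy(X[:, j]) + entropy(f_s))
        weights[j] *= factor
    best = max(candidates, key=lambda j: weights[j] * (1.0 + su(X[:, j], y)))
    selected.append(best)
    candidates.remove(best)
    return best
```

Because the weights accumulate multiplicatively across rounds, a candidate that repeatedly interacts with already-selected features is progressively favored.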
Furthermore, several other gene selection methods based on three-way mutual information have been proposed, such as Composition of Feature Relevance (CFR) (Gao et al., 2018a), Joint Mutual Information Maximization (JMIM) (Bennasar et al., 2015), Dynamic Change of Selected Feature with the class (DCSF) (Gao et al., 2018b) and Max-Relevance and Max-Independence (MRI) (Wang et al., 2017). Their evaluation functions are defined as follows:
(19) $J_{\mathrm{CFR}}(f_i) = \sum_{f_s \in \Omega_S} I(f_i; c \mid f_s) + \sum_{f_s \in \Omega_S} I(f_i; f_s; c), \quad f_i \in F - \Omega_S$
(20) $J_{\mathrm{JMIM}}(f_i) = \max\left[ \min_{f_s \in \Omega_S} \big( I(f_i, f_s; c) \big) \right], \quad f_i \in F - \Omega_S$
(21) $J_{\mathrm{DCSF}}(f_i) = \sum_{f_s \in \Omega_S} I(f_i; c \mid f_s) + \sum_{f_s \in \Omega_S} I(f_s; c \mid f_i) - \sum_{f_s \in \Omega_S} I(f_i; f_s), \quad f_i \in F - \Omega_S$
(22) $J_{\mathrm{MRI}}(f_i) = I(f_i; c) + \sum_{f_s \in \Omega_S} I(f_i; c \mid f_s) + \sum_{f_s \in \Omega_S} I(f_s; c \mid f_i), \quad f_i \in F - \Omega_S$
where I(f_i, f_s; c) is the joint mutual information of f_i, f_s and c, and I(f_s; c | f_i) is the conditional mutual information between f_s and c given f_i. However, these algorithms only take the three-way mutual information among features and the class label into account; none of them simultaneously considers relevance, redundancy and the three-dimensional interaction information between features, which affects their performance. | [
"18267827",
"18077431",
"17827395",
"31262163",
"22588877",
"24071851",
"25220419",
"19546859",
"19150792",
"25390032",
"17301065",
"23550210",
"33144283",
"34532568",
"33307925",
"32850687",
"21527027",
"25041379",
"16119262",
"17122850",
"30712080",
"28176296",
"12424115",
"32364424",
"32058674",
"27266344",
"29104657"
] | [
{
"pmid": "18267827",
"title": "Using mutual information for selecting features in supervised neural net learning.",
"abstract": "This paper investigates the application of the mutual information criterion to evaluate a set of candidate features and to select an informative subset to be used as input data for a neural network classifier. Because the mutual information measures arbitrary dependencies between random variables, it is suitable for assessing the \"information content\" of features in complex classification tasks, where methods bases on linear relations (like the correlation) are prone to mistakes. The fact that the mutual information is independent of the coordinates chosen permits a robust estimation. Nonetheless, the use of the mutual information for tasks characterized by high input dimensionality requires suitable approximations because of the prohibitive demands on computation and samples. An algorithm is proposed that is based on a \"greedy\" selection of the features and that takes both the mutual information with respect to the output class and with respect to the already-selected features into account. Finally the results of a series of experiments are discussed."
},
{
"pmid": "18077431",
"title": "Assessing the significance of chromosomal aberrations in cancer: methodology and application to glioma.",
"abstract": "Comprehensive knowledge of the genomic alterations that underlie cancer is a critical foundation for diagnostics, prognostics, and targeted therapeutics. Systematic efforts to analyze cancer genomes are underway, but the analysis is hampered by the lack of a statistical framework to distinguish meaningful events from random background aberrations. Here we describe a systematic method, called Genomic Identification of Significant Targets in Cancer (GISTIC), designed for analyzing chromosomal aberrations in cancer. We use it to study chromosomal aberrations in 141 gliomas and compare the results with two prior studies. Traditional methods highlight hundreds of altered regions with little concordance between studies. The new approach reveals a highly concordant picture involving approximately 35 significant events, including 16-18 broad events near chromosome-arm size and 16-21 focal events. Approximately half of these events correspond to known cancer-related genes, only some of which have been previously tied to glioma. We also show that superimposed broad and focal events may have different biological consequences. Specifically, gliomas with broad amplification of chromosome 7 have properties different from those with overlapping focalEGFR amplification: the broad events act in part through effects on MET and its ligand HGF and correlate with MET dependence in vitro. Our results support the feasibility and utility of systematic characterization of the cancer genome."
},
{
"pmid": "17827395",
"title": "Copy number variation of the activating FCGR2C gene predisposes to idiopathic thrombocytopenic purpura.",
"abstract": "Gene copy number variation (CNV) and single nucleotide polymorphisms (SNPs) count as important sources for interindividual differences, including differential responsiveness to infection or predisposition to autoimmune disease as a result of unbalanced immunity. By developing an FCGR-specific multiplex ligation-dependent probe amplification assay, we were able to study a notoriously complex and highly homologous region in the human genome and demonstrate extensive variation in the FCGR2 and FCGR3 gene clusters, including previously unrecognized CNV. As indicated by the prevalence of an open reading frame of FCGR2C, Fcgamma receptor (FcgammaR) type IIc is expressed in 18% of healthy individuals and is strongly associated with the hematological autoimmune disease idiopathic thrombocytopenic purpura (ITP) (present in 34.4% of ITP patients; OR 2.4 (1.3-4.5), P < .009). FcgammaRIIc acts as an activating IgG receptor that exerts antibody-mediated cellular cytotoxicity by immune cells. Therefore, we propose that the activating FCGR2C-ORF genotype predisposes to ITP by altering the balance of activating and inhibitory FcgammaR on immune cells."
},
{
"pmid": "31262163",
"title": "Assessment of HER-2/neu, с-MYC and CCNE1 gene copy number variations and protein expression in endometrial carcinomas.",
"abstract": "AIM\nTo analyze copy number variations of HER-2/neu, c-MYC and CCNE1 oncogenes and their protein expression in endometrioid endometrial carcinomas in relation to the degree of tumor progression and presence of a family history of cancer in cancer patients.\n\n\nMATERIALS AND METHODS\nThe study was conducted on endometrial cancer (EC) samples from 68 patients with I-II FIGO stages of disease. Copy number analysis of HER-2/neu, c-MYC and CCNE1 genes was performed by quantitative PCR. Protein expression was analyzed using immunohistochemistry.\n\n\nRESULTS\nAssessment of copy number variations of HER-2/neu, c-MYC and CCNE1 genes revealed their amplification in the tumors of 18.8, 25.0 and 14.3% of EC patients, respectively. High expression of corresponding proteins was detected in 14.6, 23.5 and 65.6% of patients, respectively. It was established that HER-2/neu gene amplification is more common in the group of tumors of low differentiation grade than in moderate grade EC (35.7 and 5.5% of cases, respectively, p < 0.05). Also, high expression of c-Myc protein was more frequently observed in low differentiated tumors compared to the moderately differentiated EC (36.6 and 13.2% of cases, respectively, p < 0.05). Expression of HER-2/neu and cyclin E proteins was found to be dependent on the depth of tumor invasion into the myometrium. High expression of HER-2/neu protein was observed in 25.0 and 4.1% of EC patients with tumor invasion > ½ and < ½ of the myometrium, respectively, and cyclin E - in 86.7 and 46.6% of cases, respectively, p < 0.05. It was shown that among patients with a family history of cancer, a larger proportion of cases with high expression of c-Myc protein was observed compared to the group of patients with sporadic tumors (43.8 and 17.3%, respectively; p < 0.05).\n\n\nCONCLUSIONS\nAmplification of HER-2/neu gene, along with high expression of c-Myc, HER-2/neu and cyclin E proteins, are associated with such indices of tumor progression as a low differentiation grade and deep myometrial invasion, suggesting the potential possibility of including these markers in the panel for determining the molecular EC subtype associated with an aggressive course of the disease. In a certain category of EC patients, there is a relationship between a family history of cancer and high expression of c-Myc protein."
},
{
"pmid": "22588877",
"title": "The cBio cancer genomics portal: an open platform for exploring multidimensional cancer genomics data.",
"abstract": "The cBio Cancer Genomics Portal (http://cbioportal.org) is an open-access resource for interactive exploration of multidimensional cancer genomics data sets, currently providing access to data from more than 5,000 tumor samples from 20 cancer studies. The cBio Cancer Genomics Portal significantly lowers the barriers between complex genomic data and cancer researchers who want rapid, intuitive, and high-quality access to molecular profiles and clinical attributes from large-scale cancer genomics projects and empowers researchers to translate these rich data sets into biologic insights and clinical applications."
},
{
"pmid": "24071851",
"title": "Emerging landscape of oncogenic signatures across human cancers.",
"abstract": "Cancer therapy is challenged by the diversity of molecular implementations of oncogenic processes and by the resulting variation in therapeutic responses. Projects such as The Cancer Genome Atlas (TCGA) provide molecular tumor maps in unprecedented detail. The interpretation of these maps remains a major challenge. Here we distilled thousands of genetic and epigenetic features altered in cancers to ∼500 selected functional events (SFEs). Using this simplified description, we derived a hierarchical classification of 3,299 TCGA tumors from 12 cancer types. The top classes are dominated by either mutations (M class) or copy number changes (C class). This distinction is clearest at the extremes of genomic instability, indicating the presence of different oncogenic processes. The full hierarchy shows functional event patterns characteristic of multiple cross-tissue groups of tumors, termed oncogenic signature classes. Targetable functional events in a tumor class are suggestive of class-specific combination therapy. These results may assist in the definition of clinical trials to match actionable oncogenic signatures with personalized therapies."
},
{
"pmid": "25220419",
"title": "Cancer systems biology: embracing complexity to develop better anticancer therapeutic strategies.",
"abstract": "The transformation of normal cells into cancer cells and maintenance of the malignant state and phenotypes are associated with genetic and epigenetic deregulations, altered cellular signaling responses and aberrant interactions with the microenvironment. These alterations are constantly evolving as tumor cells face changing selective pressures induced by the cells themselves, the microenvironment and drug treatments. Tumors are also complex ecosystems where different, sometime heterogeneous, subclonal tumor populations and a variety of nontumor cells coexist in a constantly evolving manner. The interactions between molecules and between cells that arise as a result of these alterations and ecosystems are even more complex. The cancer research community is increasingly embracing this complexity and adopting a combination of systems biology methods and integrated analyses to understand and predictively model the activity of cancer cells. Systems biology approaches are helping to understand the mechanisms of tumor progression and design more effective cancer therapies. These approaches work in tandem with rapid technological advancements that enable data acquisition on a broader scale, with finer accuracy, higher dimensionality and higher throughput than ever. Using such data, computational and mathematical models help identify key deregulated functions and processes, establish predictive biomarkers and optimize therapeutic strategies. Moving forward, implementing patient-specific computational and mathematical models of cancer will significantly improve the specificity and efficacy of targeted therapy, and will accelerate the adoption of personalized and precision cancer medicine."
},
{
"pmid": "19546859",
"title": "Rare structural variants found in attention-deficit hyperactivity disorder are preferentially associated with neurodevelopmental genes.",
"abstract": "Attention-deficit/hyperactivity disorder (ADHD) is a common and highly heritable disorder, but specific genetic factors underlying risk remain elusive. To assess the role of structural variation in ADHD, we identified 222 inherited copy number variations (CNVs) within 335 ADHD patients and their parents that were not detected in 2026 unrelated healthy individuals. Although no excess CNVs, either deletions or duplications, were found in the ADHD cohort relative to controls, the inherited rare CNV-associated gene set was significantly enriched for genes reported as candidates in studies of autism, schizophrenia and Tourette syndrome, including A2BP1, AUTS2, CNTNAP2 and IMMP2L. The ADHD CNV gene set was also significantly enriched for genes known to be important for psychological and neurological functions, including learning, behavior, synaptic transmission and central nervous system development. Four independent deletions were located within the protein tyrosine phosphatase gene, PTPRD, recently implicated as a candidate gene for restless legs syndrome, which frequently presents with ADHD. A deletion within the glutamate receptor gene, GRM5, was found in an affected parent and all three affected offspring whose ADHD phenotypes closely resembled those of the GRM5 null mouse. Together, these results suggest that rare inherited structural variations play an important role in ADHD development and indicate a set of putative candidate genes for further study in the etiology of ADHD."
},
{
"pmid": "19150792",
"title": "Normalized mutual information feature selection.",
"abstract": "A filter method of feature selection based on mutual information, called normalized mutual information feature selection (NMIFS), is presented. NMIFS is an enhancement over Battiti's MIFS, MIFS-U, and mRMR methods. The average normalized mutual information is proposed as a measure of redundancy among features. NMIFS outperformed MIFS, MIFS-U, and mRMR on several artificial and benchmark data sets without requiring a user-defined parameter. In addition, NMIFS is combined with a genetic algorithm to form a hybrid filter/wrapper method called GAMIFS. This includes an initialization procedure and a mutation operator based on NMIFS to speed up the convergence of the genetic algorithm. GAMIFS overcomes the limitations of incremental search algorithms that are unable to find dependencies between groups of features."
},
{
"pmid": "25390032",
"title": "Higher vulnerability and stress sensitivity of neuronal precursor cells carrying an alpha-synuclein gene triplication.",
"abstract": "Parkinson disease (PD) is a multi-factorial neurodegenerative disorder with loss of dopaminergic neurons in the substantia nigra and characteristic intracellular inclusions, called Lewy bodies. Genetic predisposition, such as point mutations and copy number variants of the SNCA gene locus can cause very similar PD-like neurodegeneration. The impact of altered α-synuclein protein expression on integrity and developmental potential of neuronal stem cells is largely unexplored, but may have wide ranging implications for PD manifestation and disease progression. Here, we investigated if induced pluripotent stem cell-derived neuronal precursor cells (NPCs) from a patient with Parkinson's disease carrying a genomic triplication of the SNCA gene (SNCA-Tri). Our goal was to determine if these cells these neuronal precursor cells already display pathological changes and impaired cellular function that would likely predispose them when differentiated to neurodegeneration. To achieve this aim, we assessed viability and cellular physiology in human SNCA-Tri NPCs both under normal and environmentally stressed conditions to model in vitro gene-environment interactions which may play a role in the initiation and progression of PD. Human SNCA-Tri NPCs displayed overall normal cellular and mitochondrial morphology, but showed substantial changes in growth, viability, cellular energy metabolism and stress resistance especially when challenged by starvation or toxicant challenge. Knockdown of α-synuclein in the SNCA-Tri NPCs by stably expressed short hairpin RNA (shRNA) resulted in reversal of the observed phenotypic changes. These data show for the first time that genetic alterations such as the SNCA gene triplication set the stage for decreased developmental fitness, accelerated aging, and increased neuronal cell loss. The observation of this \"stem cell pathology\" could have a great impact on both quality and quantity of neuronal networks and could provide a powerful new tool for development of neuroprotective strategies for PD."
},
{
"pmid": "17301065",
"title": "Copy number variant in the candidate tumor suppressor gene MTUS1 and familial breast cancer risk.",
"abstract": "Copy number variants (CNVs), insertions, deletions and duplications, contribute considerably to human genetic variation and disease development. A recent study has characterized 100 CNVs including a deletion in the mitochondrial tumor suppressor gene 1 (MTUS1) lacking the coding exon 4. MTUS1 maps to chromosome 8p, a region frequently deleted and associated with disease progression in human cancers, including breast cancer (BC). To investigate the effect of the MTUS1 CNV on familial BC risk, we analyzed 593 BC patients and 732 control individuals using a case-control study design. We found a significant association of the deletion variant with a decreased risk for both familial and high-risk familial BC (odds ratio (OR) = 0.58, 95% confidence interval (CI) = 0.37-0.90, P = 0.01 and OR = 0.41, 95% CI = 0.23-0.74, P = 0.003), supporting its role in human cancer. To our knowledge, the present study is the first to determine the impact of a CNV in a tumor suppressor gene on cancer risk."
},
{
"pmid": "23550210",
"title": "Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal.",
"abstract": "The cBioPortal for Cancer Genomics (http://cbioportal.org) provides a Web resource for exploring, visualizing, and analyzing multidimensional cancer genomics data. The portal reduces molecular profiling data from cancer tissues and cell lines into readily understandable genetic, epigenetic, gene expression, and proteomic events. The query interface combined with customized data storage enables researchers to interactively explore genetic alterations across samples, genes, and pathways and, when available in the underlying data, to link these to clinical outcomes. The portal provides graphical summaries of gene-level data from multiple platforms, network visualization and analysis, survival analysis, patient-centric queries, and software programmatic access. The intuitive Web interface of the portal makes complex cancer genomics profiles accessible to researchers and clinicians without requiring bioinformatics expertise, thus facilitating biological discoveries. Here, we provide a practical guide to the analysis and visualization features of the cBioPortal for Cancer Genomics."
},
{
"pmid": "33144283",
"title": "Cross-Cancer Genome-Wide Association Study of Endometrial Cancer and Epithelial Ovarian Cancer Identifies Genetic Risk Regions Associated with Risk of Both Cancers.",
"abstract": "BACKGROUND\nAccumulating evidence suggests a relationship between endometrial cancer and ovarian cancer. Independent genome-wide association studies (GWAS) for endometrial cancer and ovarian cancer have identified 16 and 27 risk regions, respectively, four of which overlap between the two cancers. We aimed to identify joint endometrial and ovarian cancer risk loci by performing a meta-analysis of GWAS summary statistics from these two cancers.\n\n\nMETHODS\nUsing LDScore regression, we explored the genetic correlation between endometrial cancer and ovarian cancer. To identify loci associated with the risk of both cancers, we implemented a pipeline of statistical genetic analyses (i.e., inverse-variance meta-analysis, colocalization, and M-values) and performed analyses stratified by subtype. Candidate target genes were then prioritized using functional genomic data.\n\n\nRESULTS\nGenetic correlation analysis revealed significant genetic correlation between the two cancers (r = 0.43, P = 2.66 × 10-5). We found seven loci associated with risk for both cancers (P Bonferroni < 2.4 × 10-9). In addition, four novel subgenome-wide regions at 7p22.2, 7q22.1, 9p12, and 11q13.3 were identified (P < 5 × 10-7). Promoter-associated HiChIP chromatin loops from immortalized endometrium and ovarian cell lines and expression quantitative trait loci data highlighted candidate target genes for further investigation.\n\n\nCONCLUSIONS\nUsing cross-cancer GWAS meta-analysis, we have identified several joint endometrial and ovarian cancer risk loci and candidate target genes for future functional analysis.\n\n\nIMPACT\nOur research highlights the shared genetic relationship between endometrial cancer and ovarian cancer. Further studies in larger sample sets are required to confirm our findings."
},
{
"pmid": "34532568",
"title": "Early-Onset Cerebral Amyloid Angiopathy and Alzheimer Disease Related to an APP Locus Triplication.",
"abstract": "BACKGROUND AND OBJECTIVE\nTo report a triplication of the amyloid-β precursor protein (APP) locus along with relative messenger RNA (mRNA) expression in a family with autosomal dominant early-onset cerebral amyloid angiopathy (CAA) and Alzheimer disease (AD).\n\n\nMETHODS\nFour copies of the APP gene were identified by quantitative multiplex PCR of short fluorescent fragments, fluorescent in situ hybridization (FISH), and array comparative genomic hybridization. APP mRNA levels were assessed using reverse-transcription-digital droplet PCR in the proband's whole blood and compared with 10 controls and 9 APP duplication carriers.\n\n\nRESULTS\nBeginning at age 39 years, the proband developed severe episodic memory deficits with a CSF biomarker profile typical of AD and multiple lobar microbleeds in the posterior regions on brain MRI. His father had seizures and recurrent cerebral hemorrhage since the age of 37 years. His cerebral biopsy showed abundant perivascular amyloid deposits, leading to a diagnosis of CAA. In the proband, we identified 4 copies of a 506-kb region located on chromosome 21q21.3 and encompassing the whole APP gene without any other gene. FISH suggested that the genotype of the proband was 3 copies/1 copy corresponding to an APP locus triplication, which was consistent with the presence of 2 APP copies in the healthy mother and with the paternal medical history. Analysis of the APP mRNA level showed a 2-fold increase in the proband and a 1.8 fold increase in APP duplication carriers compared with controls.\n\n\nDISCUSSION\nIncreased copy number of APP is sufficient to cause AD and CAA, with likely earlier onset in case of triplication compared with duplication."
},
{
"pmid": "33307925",
"title": "Difference of copy number variation in blood of patients with lung cancer.",
"abstract": "BACKGROUND\nLung cancer is the leading cause of cancer-related deaths worldwide. Copy number variation (CNV) in several genetic regions correlate with cancer susceptibility. Hence, this study evaluated the association between CNV and non-small cell lung cancer (NSCLC) in the peripheral blood.\n\n\nMETHODS\nBlood samples of 150 patients with NSCLC and 150 normal controls were obtained from a bioresource center (Seoul, Korea). Through an epigenome-wide analysis using the MethylationEPIC BeadChip method, we extracted CNVs by using an SVS8 software-supplied multivariate method. We compared CNV frequencies between the NSCLC and controls, and then performed stratification analyses according to smoking status.\n\n\nRESULTS\nWe acquired 979 CNVs, with 582 and 967 copy-number gains and losses, respectively. We identified five nominally significant associations (ACOT1, NAA60, GSDMD, HLA-DPA1, and SLC35B3 genes). Among the current smokers, the NSCLC group had more CNV losses and gains at the GSDMD gene in chromosome 8 (P=0.02) and at the ACOT1 gene in chromosome 14 (P=0.03) than the control group. It also had more CNV losses at the NAA60 gene in chromosome 16 (P=0.03) among non-smokers. In the NSCLC group, current smokers had more CNV gains and losses at the ACOT1 gene in chromosome 14 (P=0.003) and at HLA-DPA1 gene in chromosome 6 (P=0.02), respectively, than non-smokers.\n\n\nCONCLUSION\nFive nominally significant associations were found between the NSCLC and CNVs. CNVs are associated with the mechanism of lung cancer development. However, the role of CNVs in lung cancer development needs further investigation."
},
{
"pmid": "32850687",
"title": "A Deep Learning Framework to Predict Tumor Tissue-of-Origin Based on Copy Number Alteration.",
"abstract": "Cancer of unknown primary site (CUPS) is a type of metastatic tumor for which the sites of tumor origin cannot be determined. Precise diagnosis of the tissue origin for metastatic CUPS is crucial for developing treatment schemes to improve patient prognosis. Recently, there have been many studies using various cancer biomarkers to predict the tissue-of-origin (TOO) of CUPS. However, only a very few of them use copy number alteration (CNA) to trance TOO. In this paper, a two-step computational framework called CNA_origin is introduced to predict the tissue-of-origin of a tumor from its gene CNA levels. CNA_origin set up an intellectual deep-learning network mainly composed of an autoencoder and a convolution neural network (CNN). Based on real datasets released from the public database, CNA_origin had an overall accuracy of 83.81% on 10-fold cross-validation and 79% on independent datasets for predicting tumor origin, which improved the accuracy by 7.75 and 9.72% compared with the method published in a previous paper. Our results suggested that the autoencoder model can extract key characteristics of CNA and that the CNN classifier model developed in this study can predict the origin of tumors robustly and effectively. CNA_origin was written in Python and can be downloaded from https://github.com/YingLianghnu/CNA_origin."
},
{
"pmid": "21527027",
"title": "GISTIC2.0 facilitates sensitive and confident localization of the targets of focal somatic copy-number alteration in human cancers.",
"abstract": "We describe methods with enhanced power and specificity to identify genes targeted by somatic copy-number alterations (SCNAs) that drive cancer growth. By separating SCNA profiles into underlying arm-level and focal alterations, we improve the estimation of background rates for each category. We additionally describe a probabilistic method for defining the boundaries of selected-for SCNA regions with user-defined confidence. Here we detail this revised computational approach, GISTIC2.0, and validate its performance in real and simulated datasets."
},
{
"pmid": "25041379",
"title": "Region-specific dysregulation of glycogen synthase kinase-3β and β-catenin in the postmortem brains of subjects with bipolar disorder and schizophrenia.",
"abstract": "OBJECTIVES\nThere is both direct and indirect evidence suggesting abnormalities of glycogen synthase kinase (GSK)-3β and β-catenin, two important components of the Wingless-type (Wnt) signaling pathway, in the pathophysiology of bipolar illness and possibly schizophrenia (SZ). In order to further clarify the role of the Wnt signaling pathway in the pathophysiology of bipolar disorder (BP) and SZ, we studied GSK-3β and β-catenin in the postmortem brains of subjects with these disorders.\n\n\nMETHODS\nWe determined the protein expression of GSK-3β, phosphorylated form at serine 9 position (pGSK-3-ser-9), and β-catenin using the western blot technique, and mRNA using the quantitative polymerase chain reaction (qPCR) method, in the dorsolateral prefrontal cortex (DLPFC), cingulate gyrus (CG), and temporal cortex (TEMP) obtained from 19 subjects with BP, 20 subjects with SZ, and 20 normal control (NC) subjects.\n\n\nRESULTS\nWe found that the protein expression of GSK-3β, pGSK-3β-ser-9, and β-catenin was significantly decreased in the DLPFC and TEMP, but not in the CG, of subjects with BP compared with NC subjects. The mRNA expression of GSK-3β and β-catenin was significantly decreased in the DLPFC and TEMP, but not in the CG, of subjects with BP compared with NC subjects. There were no significant differences in the protein or mRNA expression of GSK-3β, pGSK-3β-ser-9, or β-catenin between subjects with SZ and NC subjects in any of the brain areas studied.\n\n\nCONCLUSIONS\nThese studies show region-specific abnormalities of both protein and mRNA expression of GSK-3β and β-catenin in postmortem brains of subjects with BP but not subjects with SZ. Thus, abnormalities of the Wnt signaling pathway may be associated with the pathophysiology of bipolar illness."
},
{
"pmid": "16119262",
"title": "Feature selection based on mutual information: criteria of max-dependency, max-relevance, and min-redundancy.",
"abstract": "Feature selection is an important problem for pattern classification systems. We study how to select good features according to the maximal statistical dependency criterion based on mutual information. Because of the difficulty in directly implementing the maximal dependency condition, we first derive an equivalent form, called minimal-redundancy-maximal-relevance criterion (mRMR), for first-order incremental feature selection. Then, we present a two-stage feature selection algorithm by combining mRMR and other more sophisticated feature selectors (e.g., wrappers). This allows us to select a compact set of superior features at very low cost. We perform extensive experimental comparison of our algorithm and other methods using three different classifiers (naive Bayes, support vector machine, and linear discriminate analysis) and four different data sets (handwritten digits, arrhythmia, NCI cancer cell lines, and lymphoma tissues). The results confirm that mRMR leads to promising improvement on feature selection and classification accuracy."
},
{
"pmid": "17122850",
"title": "Global variation in copy number in the human genome.",
"abstract": "Copy number variation (CNV) of DNA sequences is functionally significant but has yet to be fully ascertained. We have constructed a first-generation CNV map of the human genome through the study of 270 individuals from four populations with ancestry in Europe, Africa or Asia (the HapMap collection). DNA from these individuals was screened for CNV using two complementary technologies: single-nucleotide polymorphism (SNP) genotyping arrays, and clone-based comparative genomic hybridization. A total of 1,447 copy number variable regions (CNVRs), which can encompass overlapping or adjacent gains or losses, covering 360 megabases (12% of the genome) were identified in these populations. These CNVRs contained hundreds of genes, disease loci, functional elements and segmental duplications. Notably, the CNVRs encompassed more nucleotide content per genome than SNPs, underscoring the importance of CNV in genetic diversity and evolution. The data obtained delineate linkage disequilibrium patterns for many CNVs, and reveal marked variation in copy number among populations. We also demonstrate the utility of this resource for genetic disease studies."
},
{
"pmid": "30712080",
"title": "Estrogen Signaling in Endometrial Cancer: a Key Oncogenic Pathway with Several Open Questions.",
"abstract": "Endometrial cancer is the most common gynecological cancer in the developed world, and it is one of the few cancer types that is becoming more prevalent and leading to more deaths in the USA each year. The majority of endometrial tumors are considered to be hormonally driven, where estrogen signaling through estrogen receptor α (ER) acts as an oncogenic signal. The major risk factors and some treatment options for endometrial cancer patients emphasize a key role for estrogen signaling in the disease. Despite the strong connections between estrogen signaling and endometrial cancer, important molecular aspects of ER function remain poorly understood; however, progress is being made in our understanding of estrogen signaling in endometrial cancer. Here, we discuss the evidence for the importance of estrogen signaling in endometrial cancer, details of the endometrial cancer-specific actions of ER, and open questions surrounding estrogen signaling in endometrial cancer."
},
{
"pmid": "28176296",
"title": "BRCA1 and BRCA2 mutations in ovarian cancer patients from China: ethnic-related mutations in BRCA1 associated with an increased risk of ovarian cancer.",
"abstract": "BRCA1/2 are cancer predisposition genes involved in hereditary breast and ovarian cancer (HBOC). Mutation carriers display an increased sensitivity to inhibitors of poly(ADP-ribose) polymerase (PARP). Despite a number of small-size hospital-based studies being previously reported, there is not yet, to our knowledge, precise data of BRCA1/2 mutations among Chinese ovarian cancer patients. We performed a multicenter cohort study including 916 unselected consecutive epithelial ovarian cancer (EOC) patients from eastern China to screen for BRCA1/2 mutations using the next-generation sequencing approach. A total of 153 EOC patients were found to carry pathogenic germline mutations in BRCA1/2, accounting for an overall mutation incidence of 16.7% with the predominance in BRCA1 (13.1%) compared with BRCA2 (3.9%). We identified 53 novel pathogenic mutations, among which the c.283_286delCTTG and the c.4573C > T of BRCA1 were both found in two unrelated patients. More importantly, the most common mutation found in this study, c.5470_5477del8 was most likely to be Chinese population-related without an apparent founder origin. This hot-spot mutation was presumably associated with an increased risk of ovarian cancer. Taken together, germline BRCA1/2 mutations were common in Chinese EOC patients with distinct mutational spectrum compared to Western populations. Our study contributes to the current understanding of BRCA1/2 mutation prevalence worldwide. We recommend BRCA1/2 genetic testing to all Chinese women diagnosed with EOC to identify HBOC families, to provide genetic counseling and clinical management for at-risk relatives. Mutation carriers may also benefit from PARP-targeted therapies."
},
{
"pmid": "12424115",
"title": "Gene expression data analysis with a dynamically extended self-organized map that exploits class information.",
"abstract": "MOTIVATION\nCurrently the most popular approach to analyze genome-wide expression data is clustering. One of the major drawbacks of most of the existing clustering methods is that the number of clusters has to be specified a priori. Furthermore, by using pure unsupervised algorithms prior biological knowledge is totally ignored Moreover, most current tools lack an effective framework for tight integration of unsupervised and supervised learning for the analysis of high-dimensional expression data and only very few multi-class supervised approaches are designed with the provision for effectively utilizing multiple functional class labeling.\n\n\nRESULTS\nThe paper adapts a novel Self-Organizing map called supervised Network Self-Organized Map (sNet-SOM) to the peculiarities of multi-labeled gene expression data. The sNet-SOM determines adaptively the number of clusters with a dynamic extension process. This process is driven by an inhomogeneous measure that tries to balance unsupervised, supervised and model complexity criteria. Nodes within a rectangular grid are grown at the boundary nodes, weights rippled from the internal nodes towards the outer nodes of the grid, and whole columns inserted within the map The appropriate level of expansion is determined automatically. Multiple sNet-SOM models are constructed dynamically each for a different unsupervised/supervised balance and model selection criteria are used to select the one optimum one. The results indicate that sNet-SOM yields competitive performance to other recently proposed approaches for supervised classification at a significantly reduced computational cost and it provides extensive exploratory analysis potentiality within the analysis framework. Furthermore, it explores simple design decisions that are easier to comprehend and computationally efficient."
},
{
"pmid": "32364424",
"title": "Copy number variation of ubiquitin- specific proteases genes in blood leukocytes and colorectal cancer.",
"abstract": "Ubiquitin-specific proteases (USPs) play important roles in the regulation of many cancer-related biological processes. USPs copy number variation (CNVs) may affect the risk and prognosis of colorectal cancer (CRC). We detected CNVs of USPs genes in 468 matched CRC patients and controls, estimated the associations between the USPs genes CNVs and CRC risk and prognosis and their interactions with environmental factors on CRC risk. Finally, we generated five CRC risk predictive models with different CNVs patterns combining with environmental factors (EF). We identified significant association between CYLD deletion and CRC risk (ORadj = 4.18, 95% CI: 2.03-8.62), significant association between USP9X amplification and CRC risk (ORadj = 2.30, 95% CI: 1.48-3.57), and significant association between USP11 deletion and CRC risk (ORadj = 3.49, 95% CI: 1.49-8.64). There were significant gene-environment and gene-gene interactions on CRC risk. The area under the receiver operating characteristic curve (AUC) of EF + SIG (deletion of CYLD and USP11, amplification of USP9X) model was significantly larger than any other models (AUC = 0.75, 95% CI: 0.74-0.77). We did not identify significant associations between CNVs of the three genes and CRC prognosis. CNVs of CYLD, USP9X, and USP11 are significantly associated with the risk of CRC. Gene-gene and gene-environment interactions might also play an important role in the development of CRC."
},
{
"pmid": "32058674",
"title": "Somatic mutations and copy number variations in breast cancers with heterogeneous HER2 amplification.",
"abstract": "Intratumour heterogeneity fuels carcinogenesis and allows circumventing specific targeted therapies. HER2 gene amplification is associated with poor outcome in invasive breast cancer. Heterogeneous HER2 amplification has been described in 5-41% of breast cancers. Here, we investigated the genetic differences between HER2-positive and HER2-negative admixed breast cancer components. We performed an in-depth analysis to explore the potential heterogeneity in the somatic mutational landscape of each individual tumour component. Formalin-fixed, paraffin-embedded breast cancer tissue of ten patients with at least one HER2-negative and at least one HER2-positive component was microdissected. Targeted next-generation sequencing was performed using a customized 53-gene panel. Somatic mutations and copy number variations were analysed. Overall, the tumours showed a heterogeneous distribution of 12 deletions, 9 insertions, 32 missense variants and 7 nonsense variants in 26 different genes, which are (likely) pathogenic. Three splice site alterations were identified. One patient had an EGFR copy number gain restricted to a HER2-negative in situ component, resulting in EGFR protein overexpression. Two patients had FGFR1 copy number gains in at least one tumour component. Two patients had an 8q24 gain in at least one tumour component, resulting in a copy number increase in MYC and PVT1. One patient had a CCND1 copy number gain restricted to a HER2-negative tumour component. No common alternative drivers were identified in the HER2-negative tumour components. This series of 10 breast cancers with heterogeneous HER2 gene amplification illustrates that HER2 positivity is not an unconditional prerequisite for the maintenance of tumour growth. Many other molecular aberrations are likely to act as alternative or collaborative drivers. This study demonstrates that breast carcinogenesis is a dynamically evolving process characterized by a versatile somatic mutational profile, of which some genetic aberrations will be crucial for cancer progression, and others will be mere 'passenger' molecular anomalies."
},
{
"pmid": "27266344",
"title": "Classification of cancers based on copy number variation landscapes.",
"abstract": "Genomic alterations in DNA can cause human cancer. DNA copy number variants (CNV), as one of the types of DNA mutations, have been considered to be associated with various human cancers. CNVs vary in size from 1bp up to one complete chromosome arm. In order to understand the difference between different human cancers on CNVs, in this study, we developed a method to computationally classify six human cancer types by using only CNV level values. The CNVs of 23,082 genes were used as features to construct the classifier. Then the features are carefully selected by mRMR (minimum Redundancy Maximum Relevance Feature Selection) and IFS (Incremental Feature Selection) methods. An accuracy of over 0.75 was reached by using only the CNVs of 19 genes based on Dagging method in 10-fold cross validation. It was indicated that these 19 genes may play important roles in differentiating cancer types. We also analyzed the biological functions of several top genes within the 19 gene list. The statistical results and biological analysis of these genes from this work might further help understand different human cancer types and provide guidance for related validation experiments. This article is part of a Special Issue entitled \"System Genetics\" Guest Editor: Dr. Yudong Cai and Dr. Tao Huang."
},
{
"pmid": "29104657",
"title": "Low copy number of FCGR3B is associated with lupus nephritis in a Chinese population.",
"abstract": "Lupus nephritis (LN) is a polygenic disease caused by an interaction between hereditary and environmental factors. Numerous gene copy number variations have been identified to contribute to this disease. Previously, immunoglobulin (Ig)G Fcγ receptor 3B (FCGR3B) copy number variation (CNV) was reported to be associated with LN in the Caucasian population. However, the effect of FCGR3B CNV on LN in the Chinese population remains unknown. The present study aimed to investigate whether CNVs of FCGR3B are associated with LN in the Henan Chinese population. FCGR3B CNVs were determined in 142 LN patients and 328 healthy controls. A modified methodology based on competitive polymerase chain reaction, a Multiplex AccuCopy™ kit was used to detect FCGR3B copy number. Clinical and laboratory data was collected retrospectively from medical records. To evaluate associations between FCGR3B CNVs and LN susceptibility, the present study calculated the odds ratios using a logistic regression analysis. The current study identified that the distribution of FCGR3B copy number was significantly different between LN and healthy controls (P=0.031). A low copy number (<2) of FCGR3B was significantly enriched in LN patients (P=0.042), and was a risk factor for LN (odds ratio=2.059; 95% confidence interval, 1.081-3.921; P=0.028). However, a high copy number (>2) had no effect on LN. There were no associations between FCGR3B CNV and clinical phenotypes of LN. The results from the present study demonstrate that a low copy number of FCGR3B is a risk factor for LN in a Chinese population."
}
] |
Frontiers in Bioengineering and Biotechnology | null | PMC8978563 | 10.3389/fbioe.2022.818112 | Photoelastic Stress Field Recovery Using Deep Convolutional Neural Network | Recent work has shown that deep convolutional neural network is capable of solving inverse problems in computational imaging, and recovering the stress field of the loaded object from the photoelastic fringe pattern can also be regarded as an inverse problem solving process. However, the formation of the fringe pattern is affected by the geometry of the specimen and experimental configuration. When the loaded object produces complex fringe distribution, the traditional stress analysis methods still face difficulty in unwrapping. In this study, a deep convolutional neural network based on the encoder–decoder structure is proposed, which can accurately decode stress distribution information from complex photoelastic fringe images generated under different experimental configurations. The proposed method is validated on a synthetic dataset, and the quality of stress distribution images generated by the network model is evaluated using mean squared error (MSE), structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and other evaluation indexes. The results show that the proposed stress recovery network can achieve an average performance of more than 0.99 on the SSIM. | 2 Related WorkIt is a challenging task to recover the stress field of the loaded object from a single photoelastic fringe pattern. Most of the traditional methods are limited by different experimental conditions and calculation methods when dealing with complex fringe patterns. Recently reported methods based on deep learning provide new ideas to solve these shortcomings. Feng et al. (2019) proposed a fringe pattern analysis method based on deep learning. They collected phase-shifted fringe patterns in different scenes to generate training data and then trained neural networks to predict some intermediate results. Finally, combining these intermediate results, the high-precision phase image is recovered by using arc tangent function. The results show that this method can significantly improve the quality of phase recovery. Sergazinov and Kramar (2021) use the CNN to solve the problem of force reconstruction of photoelastic materials. They use the synthetic dataset obtained by theoretical calculation for training and then use the transfer learning to fine-tune a small amount of real experimental data, which shows good force reconstruction results.For the estimation of the photoelastic stress field under a single experimental condition, a dynamic photoelastic experimental method based on pattern recognition was proposed (Briñez-de León et al., 2020a). The ANN was used to process the color fringe patterns that changed with time so as to classify the stress of different sizes, isotropic points, and inconsistent information. In order to make the deep learning method suitable for a wider range of experimental conditions, Briñez-de León et al. (2020b) reported a powerful synthetic dataset which covered photoelastic fringe patterns and the corresponding stress field distribution patterns under various experimental conditions with highly diversified spatial fringe distribution. At the same time, a neural network structure based on VGG16 (Simonyan and Zisserman, 2014) is proposed to recover the stress field from the isochromatic pattern. 
However, the predictions of this network model deviate from the true maximum stress difference, and the stress estimates on rounded surfaces are not very accurate. In a later report (Briñez-de León et al., 2020c), stress recovery was cast as an image translation problem directly related to spatial transformation and addressed with a generative adversarial network (GAN) model. This method performed well in terms of SSIM, but the recovered stress shows supersaturation in some specimens. In addition, GANs are not easy to train to convergence (Sun et al., 2021; Ying Liu et al., 2021; Wu et al., 2022). Recently, the same group proposed a new neural network model, named PhotoelastNet, to evaluate the stress field (Briñez-de León et al., 2022). To account for noise and complex stress distribution patterns, the synthetic dataset was further expanded and a lighter network structure was designed, achieving better performance on both synthetic and experimental images. However, a gap remains between the estimated stress distributions and the ground truth. Our study improves on these methods by proposing a simpler and more reasonable network structure and designing more effective loss functions to address these problems.
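The passage above evaluates recovered stress fields with image-quality metrics such as MSE, SSIM, and PSNR. Purely as an illustration (not the evaluation code of this study or of any cited work), the following minimal Python sketch shows how a predicted stress map might be scored against a ground-truth map with these metrics; it assumes NumPy and scikit-image are available, and the arrays `pred` and `gt` are hypothetical stand-ins for a network prediction and its ground truth.

```python
# Illustrative sketch only; assumes NumPy and scikit-image are installed.
import numpy as np
from skimage.metrics import (
    mean_squared_error,
    peak_signal_noise_ratio,
    structural_similarity,
)

def score_stress_map(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Return MSE, SSIM, and PSNR of a predicted 2-D stress field vs. ground truth."""
    data_range = float(gt.max() - gt.min())  # intensity range required by SSIM/PSNR
    return {
        "mse": mean_squared_error(gt, pred),
        "ssim": structural_similarity(gt, pred, data_range=data_range),
        "psnr": peak_signal_noise_ratio(gt, pred, data_range=data_range),
    }

# Toy example with synthetic maps standing in for real network output.
rng = np.random.default_rng(0)
gt = rng.random((128, 128)).astype(np.float32)
pred = gt + 0.01 * rng.standard_normal((128, 128)).astype(np.float32)
print(score_stress_map(pred, gt))
```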
"35297828",
"29652377",
"35096796",
"13602029",
"34502791",
"31163947",
"15376593",
"35083202",
"34746114",
"35223822"
] | [
{
"pmid": "35297828",
"title": "PhotoelastNet: a deep convolutional neural network for evaluating the stress field by using a single color photoelasticity image.",
"abstract": "Quantifying the stress field induced into a piece when it is loaded is important for engineering areas since it allows the possibility to characterize mechanical behaviors and fails caused by stress. For this task, digital photoelasticity has been highlighted by its visual capability of representing the stress information through images with isochromatic fringe patterns. Unfortunately, demodulating such fringes remains a complicated process that, in some cases, depends on several acquisitions, e.g., pixel-by-pixel comparisons, dynamic conditions of load applications, inconsistence corrections, dependence of users, fringe unwrapping processes, etc. Under these drawbacks and taking advantage of the power results reported on deep learning, such as the fringe unwrapping process, this paper develops a deep convolutional neural network for recovering the stress field wrapped into color fringe patterns acquired through digital photoelasticity studies. Our model relies on an untrained convolutional neural network to accurately demodulate the stress maps by inputting only one single photoelasticity image. We demonstrate that the proposed method faithfully recovers the stress field of complex fringe distributions on simulated images with an averaged performance of 92.41% according to the SSIM metric. With this, experimental cases of a disk and ring under compression were evaluated, achieving an averaged performance of 85% in the SSIM metric. These results, on the one hand, are in concordance with new tendencies in the optic community to deal with complicated problems through machine-learning strategies; on the other hand, it creates a new perspective in digital photoelasticity toward demodulating the stress field for a wider quantity of fringe distributions by requiring one single acquisition."
},
{
"pmid": "29652377",
"title": "Single shot multi-wavelength phase retrieval with coherent modulation imaging.",
"abstract": "A single shot multi-wavelength phase retrieval method is proposed by combining common coherent modulation imaging (CMI) and a low rank mixed-state algorithm together. A radiation beam consisting of multi-wavelength is illuminated on the sample to be observed, and the exiting field is incident on a random phase plate to form speckle patterns, which is the incoherent superposition of diffraction patterns of each wavelength. The exiting complex amplitude of the sample including both the modulus and phase of each wavelength can be reconstructed simultaneously from the recorded diffraction intensity using a low rank mixed-state algorithm. The feasibility of this proposed method was verified with visible light experimentally. This proposed method not only makes CMI realizable with partially coherent illumination but also can extend its application to various traditionally unrelated fields, where several wavelengths should be considered simultaneously."
},
{
"pmid": "35096796",
"title": "Intelligent Detection of Steel Defects Based on Improved Split Attention Networks.",
"abstract": "The intelligent monitoring and diagnosis of steel defects plays an important role in improving steel quality, production efficiency, and associated smart manufacturing. The application of the bio-inspired algorithms to mechanical engineering problems is of great significance. The split attention network is an improvement of the residual network, and it is an improvement of the visual attention mechanism in the bionic algorithm. In this paper, based on the feature pyramid network and split attention network, the network is improved and optimised in terms of data enhancement, multi-scale feature fusion and network structure optimisation. The DF-ResNeSt50 network model is proposed, which introduces a simple modularized split attention block, which can improve the attention mechanism of cross-feature graph groups. Finally, experimental validation proves that the proposed network model has good performance and application prospects in the intelligent detection of steel defects."
},
{
"pmid": "34502791",
"title": "A Time Sequence Images Matching Method Based on the Siamese Network.",
"abstract": "The similar analysis of time sequence images to achieve image matching is a foundation of tasks in dynamic environments, such as multi-object tracking and dynamic gesture recognition. Therefore, we propose a matching method of time sequence images based on the Siamese network. Inspired by comparative learning, two different comparative parts are designed and embedded in the network. The first part makes a comparison between the input image pairs to generate the correlation matrix. The second part compares the correlation matrix, which is the output of the first comparison part, with a template, in order to calculate the similarity. The improved loss function is used to constrain the image matching and similarity calculation. After experimental verification, we found that it not only performs better, but also has some ability to estimate the camera pose."
},
{
"pmid": "31163947",
"title": "One-step robust deep learning phase unwrapping.",
"abstract": "Phase unwrapping is an important but challenging issue in phase measurement. Even with the research efforts of a few decades, unfortunately, the problem remains not well solved, especially when heavy noise and aliasing (undersampling) are present. We propose a database generation method for phase-type objects and a one-step deep learning phase unwrapping method. With a trained deep neural network, the unseen phase fields of living mouse osteoblasts and dynamic candle flame are successfully unwrapped, demonstrating that the complicated nonlinear phase unwrapping task can be directly fulfilled in one step by a single deep neural network. Excellent anti-noise and anti-aliasing performances outperforming classical methods are highlighted in this paper."
},
{
"pmid": "15376593",
"title": "Image quality assessment: from error visibility to structural similarity.",
"abstract": "Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000."
},
{
"pmid": "35083202",
"title": "Genetic Algorithm-Based Trajectory Optimization for Digital Twin Robots.",
"abstract": "Mobile robots have an important role in material handling in manufacturing and can be used for a variety of automated tasks. The accuracy of the robot's moving trajectory has become a key issue affecting its work efficiency. This paper presents a method for optimizing the trajectory of the mobile robot based on the digital twin of the robot. The digital twin of the mobile robot is created by Unity, and the trajectory of the mobile robot is trained in the virtual environment and applied to the physical space. The simulation training in the virtual environment provides schemes for the actual movement of the robot. Based on the actual movement data returned by the physical robot, the preset trajectory of the virtual robot is dynamically adjusted, which in turn enables the correction of the movement trajectory of the physical robot. The contribution of this work is the use of genetic algorithms for path planning of robots, which enables trajectory optimization of mobile robots by reducing the error in the movement trajectory of physical robots through the interaction of virtual and real data. It provides a method to map learning in the virtual domain to the physical robot."
},
{
"pmid": "34746114",
"title": "Dynamic Gesture Recognition Using Surface EMG Signals Based on Multi-Stream Residual Network.",
"abstract": "Gesture recognition technology is widely used in the flexible and precise control of manipulators in the assisted medical field. Our MResLSTM algorithm can effectively perform dynamic gesture recognition. The result of surface EMG signal decoding is applied to the controller, which can improve the fluency of artificial hand control. Much current gesture recognition research using sEMG has focused on static gestures. In addition, the accuracy of recognition depends on the extraction and selection of features. However, Static gesture research cannot meet the requirements of natural human-computer interaction and dexterous control of manipulators. Therefore, a multi-stream residual network (MResLSTM) is proposed for dynamic hand movement recognition. This study aims to improve the accuracy and stability of dynamic gesture recognition. Simultaneously, it can also advance the research on the smooth control of the Manipulator. We combine the residual model and the convolutional short-term memory model into a unified framework. The architecture extracts spatiotemporal features from two aspects: global and deep, and combines feature fusion to retain essential information. The strategy of pointwise group convolution and channel shuffle is used to reduce the number of network calculations. A dataset is constructed containing six dynamic gestures for model training. The experimental results show that on the same recognition model, the gesture recognition effect of fusion of sEMG signal and acceleration signal is better than that of only using sEMG signal. The proposed approach obtains competitive performance on our dataset with the recognition accuracies of 93.52%, achieving state-of-the-art performance with 89.65% precision on the Ninapro DB1 dataset. Our bionic calculation method is applied to the controller, which can realize the continuity of human-computer interaction and the flexibility of manipulator control."
},
{
"pmid": "35223822",
"title": "Self-Tuning Control of Manipulator Positioning Based on Fuzzy PID and PSO Algorithm.",
"abstract": "With the manipulator performs fixed-point tasks, it becomes adversely affected by external disturbances, parameter variations, and random noise. Therefore, it is essential to improve the robust and accuracy of the controller. In this article, a self-tuning particle swarm optimization (PSO) fuzzy PID positioning controller is designed based on fuzzy PID control. The quantization and scaling factors in the fuzzy PID algorithm are optimized by PSO in order to achieve high robustness and high accuracy of the manipulator. First of all, a mathematical model of the manipulator is developed, and the manipulator positioning controller is designed. A PD control strategy with compensation for gravity is used for the positioning control system. Then, the PID controller parameters dynamically are minute-tuned by the fuzzy controller 1. Through a closed-loop control loop to adjust the magnitude of the quantization factors-proportionality factors online. Correction values are outputted by the modified fuzzy controller 2. A quantization factor-proportion factor online self-tuning strategy is achieved to find the optimal parameters for the controller. Finally, the control performance of the improved controller is verified by the simulation environment. The results show that the transient response speed, tracking accuracy, and follower characteristics of the system are significantly improved."
}
] |
Frontiers in Psychology | null | PMC8979791 | 10.3389/fpsyg.2022.778722 | Social Relationship Prediction Integrating Personality Traits and Asymmetric Interactions | Social networks have become an important way for users to find friends and expand their social circle. Social networks can improve users’ experience by recommending more suitable friends to them. The key lies in improving the accuracy of link prediction, which is also the main research issue of this study. In the study of personality traits, some scholars have proved that personality can be used to predict users’ behavior in social networks. Based on these studies, this study aims to improve the accuracy of link prediction in directed social networks. Considering the integration of personality link preference and asymmetric interaction into the link prediction model of social networks, a four-dimensional link prediction model is proposed. Through comparative experiments, it is proved that the four-dimensional social relationship prediction model proposed in this study is more accurate than the model only based on similarity. At the same time, it is also verified that the matching degree of personality link preference and asymmetric interaction intensity in the model can help improve the accuracy of link prediction. | Related WorksLink PredictionLink prediction is regarded as a basic problem of the social network’s evolution in time by Liben-Nowell and Kleinberg (2007), and they have proposed some classical prediction methods based on network topology information. It is common to measure the possibility of link generation by calculating the similarity between nodes since people usually establish new relationships with people who have certain similarities with them in topological or non-topological features (Bhattacharyya et al., 2011).Topology-based measurement is defined by using various topological information of the network. Indicators, such as Common Neighbors (Lorrain et al., 1971) and Jaccard Coefficient (Jaccard, 1912), are generated by defining neighbor nodes as neighbors, which can indirectly reflect users’ social behaviors and directly affect users’ choices. Besides, there are other indicators. For example, Hu Ma et al. (2019) calculated the number of all paths between two nodes, and Friend Link (Chen et al., 2016) considered the path with length of L between nodes, which are all measuring indexes based on paths between nodes. There are also some measurements based on random walks, including Hitting Time (Fouss et al., 2007), an asymmetric measurement of the expected number of steps required for a random walk between nodes, as well as Prop Flow (Lichtenwalter et al., 2010), which is a more localized measurement.Non-topological measurement focuses on information outside the network structure, such as the profile of users in social networks, including age, interests, geographic location and so on. Aiello et al. (2012) found that users’ tags could reflect their interests, so they finally proposed a method for link prediction based on tag similarity. In addition to the topological and non-topological measurements described above, link prediction can be viewed as a binary classification problem, where each pair of nodes is an instance, and positive and negative category labels indicate whether the node pair is connected. Many classification models have been applied to link prediction, such as the support vector machine (SVM) (Li and Chen, 2013) and k-nearest neighbor algorithm (KNN) (Zhu et al., 2017). 
Classification methods can also be regarded as learning-based methods, in which the most critical step is feature selection. Common neighbors or paths between two nodes can be used to construct topological features, and a large number of experiments have shown that such topological features are effective for link prediction (Chiang et al., 2011). Non-topological features can also be constructed to improve link prediction (Scellato et al., 2016). Moradabadi and Meybodi (2018) proposed a learning-automata strategy for link prediction in weighted social networks. Aziz et al. (2020) proposed a novel link prediction method that aims to improve the accuracy of existing path-based methods by incorporating information about the nodes along local paths. Tang R. et al. (2021) proposed a framework based on multiple types of consistency between embedding vectors (MulCEV). In MulCEV, the traditional embedding-based method is applied to obtain the degree of consistency between the vectors representing the unmatched nodes, and a proposed distance consistency index based on the positions of nodes in each latent space provides additional clues for prediction. Mo et al. (2022) proposed a deep learning framework for temporal network link prediction. Bao et al. (2022) proposed an improved evaluation methodology for association rules and link prediction. Wei et al. (2022) proposed a novel time series-based graph model with text, called the text with time series for graph (TT-Graph) model, which explicitly considers user similarity and time series similarity. Link prediction applications, namely recommendation systems, anomaly detection, influence analysis, and community detection, become more challenging due to network diversity and complex, dynamic network contexts (Daud et al., 2020).
To sum up, topological information between nodes is the key to topological measurements and topological feature-based learning models, whereas the validity of non-topological measurements and features depends on the external information available for the specific domain and network. For Weibo, however, the large number of users and the complexity of their relationships can leave the network structure data incomplete, and, to protect privacy, non-topological information such as user profiles is also incomplete. All these factors directly affect the above methods, so analyzing latent features from the information that is available has become a trend in current research. This study attempts to add more such supplementary factors to the link prediction model.
Personality Prediction
Personality traits are defined as endogenous, stable, and hierarchical, and they are influenced by biological factors such as genes and brain structure (Romero et al., 2009). The most commonly used model to describe personality is the Five Factor Model proposed by Goldberg (1990) and Costa and McCrae (1992). It holds that personality is mainly determined by physiology and consists of five basic tendencies: openness to experience, extraversion, agreeableness, conscientiousness, and neuroticism. These traits are relatively stable throughout a person's life cycle and across different situations, which is why users' personality traits can serve as a starting point for predicting their behavior. Ngai et al. (2015) emphasized that personality traits are generally regarded as one of the basic theories for explaining users' subsequent behavior. In recent years, scholars have begun to focus on the connection between personality and online social network behavior. Studies have shown that personality can be used to predict many aspects of life, including academic achievement (Komarraju et al., 2009), job performance (Neal et al., 2012), health status (Soldz and Vaillant, 1999), and social network behavior (Wang, 2013). McElroy et al. (2007) tested the general influence of personality on Internet use, and their results supported treating personality as an explanatory factor: the big five personality traits could explain some of the differences in Internet use. Some scholars have preliminarily outlined a personalization-based approach. Hu and Pu (2011) attempted to solve the cold start problem by integrating collaborative filtering methods with personality traits; the so-called cold start problem refers to the dilemma of having no basic information on which to base recommendations (Ju et al., 2015).
In earlier studies, users' personalities were obtained through questionnaires. In recent years, it has been shown that the big five personality traits are significantly correlated with behaviors in social networks. For example, people with high extroversion are more active in social networks and have more friends (Blackwell et al., 2017), while people with high neuroticism tend to hide themselves, try to understand others in a passive way, and use more negative words in their published content (Liu et al., 2016). Based on these correlations, some scholars have tried to extract users' personality traits directly from social networks. Kosinski et al. (2013) showed that users' private attributes, including personality traits, could be predicted from digital records of their behaviors in online social networks, and they also demonstrated the correlation between Facebook likes and personality traits.
Based on the above, the link between personality and social behavior has been demonstrated, and personality can be predicted from social data. According to the Report on the Development of Weibo Users in 2020¹, the number of daily active users of Weibo has reached 224 million, and a large amount of user-generated content, such as blog posts and interaction data, is created every day. All of this is important unstructured information that has not been fully exploited. This study aims to mine users' latent personality characteristics from these data and then perform link prediction based on topological features, non-topological features, and personality traits of the network.
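The Link Prediction discussion above describes neighbour-based similarity indices such as Common Neighbors and the Jaccard Coefficient. Purely as an illustrative sketch (a hypothetical toy graph in plain Python, not code from the study), the snippet below computes both scores for candidate node pairs; higher scores are taken to suggest a higher likelihood that a link will form.

```python
# Illustrative sketch of the neighbour-based indices mentioned above.
# The adjacency sets describe a hypothetical toy graph.
adjacency = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "C"},
    "E": {"C"},
}

def common_neighbors(u: str, v: str) -> int:
    """Number of neighbours shared by nodes u and v."""
    return len(adjacency[u] & adjacency[v])

def jaccard_coefficient(u: str, v: str) -> float:
    """Shared neighbours normalised by the size of the combined neighbourhood."""
    union = adjacency[u] | adjacency[v]
    return len(adjacency[u] & adjacency[v]) / len(union) if union else 0.0

# Score two unlinked candidate pairs; (B, D) shares more neighbours than (E, B).
for u, v in [("B", "D"), ("E", "B")]:
    print(u, v, common_neighbors(u, v), round(jaccard_coefficient(u, v), 3))
```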
"2283588",
"28702594",
"35155348",
"23479631",
"25552558",
"34735350",
"34501625",
"35009967",
"23992473",
"33889035",
"28771201"
] | [
{
"pmid": "2283588",
"title": "An alternative \"description of personality\": the big-five factor structure.",
"abstract": "In the 45 years since Cattell used English trait terms to begin the formulation of his \"description of personality,\" a number of investigators have proposed an alternative structure based on 5 orthogonal factors. The generality of this 5-factor model is here demonstrated across unusually comprehensive sets of trait terms. In the first of 3 studies, 1,431 trait adjectives grouped into 75 clusters were analyzed; virtually identical structures emerged in 10 replications, each based on a different factor-analytic procedure. A 2nd study of 479 common terms grouped into 133 synonym clusters revealed the same structure in 2 samples of self-ratings and in 2 samples of peer ratings. None of the factors beyond the 5th generalized across the samples. In the 3rd study, analyses of 100 clusters derived from 339 trait terms suggest their potential utility as Big-Five markers in future studies."
},
{
"pmid": "28702594",
"title": "LPI-ETSLP: lncRNA-protein interaction prediction using eigenvalue transformation-based semi-supervised link prediction.",
"abstract": "RNA-protein interactions are essential for understanding many important cellular processes. In particular, lncRNA-protein interactions play important roles in post-transcriptional gene regulation, such as splicing, translation, signaling and even the progression of complex diseases. However, the experimental validation of lncRNA-protein interactions remains time-consuming and expensive, and only a few theoretical approaches are available for predicting potential lncRNA-protein associations. Here, we presented eigenvalue transformation-based semi-supervised link prediction (LPI-ETSLP) to uncover the relationship between lncRNAs and proteins. Moreover, it is semi-supervised and does not need negative samples. Based on 5-fold cross validation, an AUC of 0.8876 and an AUPR of 0.6438 have demonstrated its reliable performance compared with three other computational models. Furthermore, the case study demonstrated that many lncRNA-protein interactions predicted by our method can be successfully confirmed by experiments. It is indicated that LPI-ETSLP would be a useful bioinformatics resource for biomedical research studies."
},
{
"pmid": "35155348",
"title": "Online Rumor Diffusion Model Based on Variation and Silence Phenomenon in the Context of COVID-19.",
"abstract": "In the era of mobile internet, information dissemination has made a new leap in speed and in breadth. With the outbreak of the coronavirus disease 2019 (COVID-19), the COVID-19 rumor diffusion that is not limited by time and by space often becomes extremely complex and fickle. It is also normal that a piece of unsubstantiated news about COVID-19 could develop to many versions. We focus on the stagnant role and information variants in the process of rumor diffusion about COVID-19, and through the study of variability and silence in the dissemination, which combines the effects of stagnation phenomenon and information variation on the whole communication system in the circulation of rumors about COVID-19, based on the classic rumor SIR (Susceptible Infected Recovered) model, we introduce a new concept of \"variation\" and \"oyster\". The stability of the new model is analyzed by the mean field equation, and the threshold of COVID-19 rumor propagation is obtained later. According to the results of the simulation experiment, whether in the small world network or in the scale-free network, the increase of the immure and the silent probability of the variation can effectively reduce the speed of rumor diffusion about COVID-19 and is conducive to the dissemination of the truth in the whole population. Studies have also shown that increasing the silence rate of variation can reduce COVID-19 rumor transmission more quickly than the immunization rate. The interesting discovery is that at the same time, a higher rumor infection rate can bring more rumors about COVID-19 but does not always maintain a high number of the variation which could reduce variant tendency of rumors. The more information diffuses in the social group, the more consistent the version and content of the information will be, which proves that the more adequate each individual information is, the slower and less likely rumors about COVID-19 spread. This consequence tells us that the government needs to guide the public to the truth. Announcing the true information publicly could instantly contain the COVID-19 rumor diffusion well rather than making them hidden or voiceless."
},
{
"pmid": "23479631",
"title": "Private traits and attributes are predictable from digital records of human behavior.",
"abstract": "We show that easily accessible digital records of behavior, Facebook Likes, can be used to automatically and accurately predict a range of highly sensitive personal attributes including: sexual orientation, ethnicity, religious and political views, personality traits, intelligence, happiness, use of addictive substances, parental separation, age, and gender. The analysis presented is based on a dataset of over 58,000 volunteers who provided their Facebook Likes, detailed demographic profiles, and the results of several psychometric tests. The proposed model uses dimensionality reduction for preprocessing the Likes data, which are then entered into logistic/linear regression to predict individual psychodemographic profiles from Likes. The model correctly discriminates between homosexual and heterosexual men in 88% of cases, African Americans and Caucasian Americans in 95% of cases, and between Democrat and Republican in 85% of cases. For the personality trait \"Openness,\" prediction accuracy is close to the test-retest accuracy of a standard personality test. We give examples of associations between attributes and Likes and discuss implications for online personalization and privacy."
},
{
"pmid": "25552558",
"title": "Percolation transition in dynamical traffic network with evolving critical bottlenecks.",
"abstract": "A critical phenomenon is an intrinsic feature of traffic dynamics, during which transition between isolated local flows and global flows occurs. However, very little attention has been given to the question of how the local flows in the roads are organized collectively into a global city flow. Here we characterize this organization process of traffic as \"traffic percolation,\" where the giant cluster of local flows disintegrates when the second largest cluster reaches its maximum. We find in real-time data of city road traffic that global traffic is dynamically composed of clusters of local flows, which are connected by bottleneck links. This organization evolves during a day with different bottleneck links appearing in different hours, but similar in the same hours in different days. A small improvement of critical bottleneck roads is found to benefit significantly the global traffic, providing a method to improve city traffic with low cost. Our results may provide insights on the relation between traffic dynamics and percolation, which can be useful for efficient transportation, epidemic control, and emergency evacuation."
},
{
"pmid": "34735350",
"title": "Interlayer Link Prediction in Multiplex Social Networks Based on Multiple Types of Consistency Between Embedding Vectors.",
"abstract": "Online users are typically active on multiple social media networks (SMNs), which constitute a multiplex social network. With improvements in cybersecurity awareness, users increasingly choose different usernames and provide different profiles on different SMNs. Thus, it is becoming increasingly challenging to determine whether given accounts on different SMNs belong to the same user; this can be expressed as an interlayer link prediction problem in a multiplex network. To address the challenge of predicting interlayer links, feature or structure information is leveraged. Existing methods that use network embedding techniques to address this problem focus on learning a mapping function to unify all nodes into a common latent representation space for prediction; positional relationships between unmatched nodes and their common matched neighbors (CMNs) are not utilized. Furthermore, the layers are often modeled as unweighted graphs, ignoring the strengths of the relationships between nodes. To address these limitations, we propose a framework based on multiple types of consistency between embedding vectors (MulCEVs). In MulCEV, the traditional embedding-based method is applied to obtain the degree of consistency between the vectors representing the unmatched nodes, and a proposed distance consistency index based on the positions of nodes in each latent space provides additional clues for prediction. By associating these two types of consistency, the effective information in the latent spaces is fully utilized. In addition, MulCEV models the layers as weighted graphs to obtain representation. In this way, the higher the strength of the relationship between nodes, the more similar their embedding vectors in the latent representation space will be. The results of our experiments on several real-world and synthetic datasets demonstrate that the proposed MulCEV framework markedly outperforms current embedding-based methods, especially when the number of training iterations is small."
},
{
"pmid": "34501625",
"title": "Exploring an Efficient Remote Biomedical Signal Monitoring Framework for Personal Health in the COVID-19 Pandemic.",
"abstract": "Nowadays people are mostly focused on their work while ignoring their health which in turn is creating a drastic effect on their health in the long run. Remote health monitoring through telemedicine can help people discover potential health threats in time. In the COVID-19 pandemic, remote health monitoring can help obtain and analyze biomedical signals including human body temperature without direct body contact. This technique is of great significance to achieve safe and efficient health monitoring in the COVID-19 pandemic. Existing remote biomedical signal monitoring methods cannot effectively analyze the time series data. This paper designs a remote biomedical signal monitoring framework combining the Internet of Things (IoT), 5G communication and artificial intelligence techniques. In the constructed framework, IoT devices are used to collect biomedical signals at the perception layer. Subsequently, the biomedical signals are transmitted through the 5G network to the cloud server where the GRU-AE deep learning model is deployed. It is noteworthy that the proposed GRU-AE model can analyze multi-dimensional biomedical signals in time series. Finally, this paper conducts a 24-week monitoring experiment for 2000 subjects of different ages to obtain real data. Compared with the traditional biomedical signal monitoring method based on the AutoEncoder model, the GRU-AE model has better performance. The research has an important role in promoting the development of biomedical signal monitoring techniques, which can be effectively applied to some kinds of remote health monitoring scenario."
},
{
"pmid": "35009967",
"title": "Unravelling Morphological and Topological Energy Contributions of Metal Nanoparticles.",
"abstract": "Metal nanoparticles (NPs) are ubiquitous in many fields, from nanotechnology to heterogeneous catalysis, with properties differing from those of single-crystal surfaces and bulks. A key aspect is the size-dependent evolution of NP properties toward the bulk limit, including the adoption of different NP shapes, which may bias the NP stability based on the NP size. Herein, the stability of different Pd NPs (n = 10-1504 atoms) considering a myriad of shapes is investigated by first-principles energy optimisation, leading to the determination that icosahedron shapes are the most stable up to a size of ca. 4 nm. In NPs larger than that size, truncated octahedron shapes become more stable, yet a presence of larger {001} facets than the Wulff construction is forecasted due to their increased stability, compared with (001) single-crystal surfaces, and the lower stability of {111} facets, compared with (111) single-crystal surfaces. The NP cohesive energy breakdown in terms of coordination numbers is found to be an excellent quantitative tool of the stability assessment, with mean absolute errors of solely 0.01 eV·atom-1, while a geometry breakdown allows only for a qualitative stability screening."
},
{
"pmid": "23992473",
"title": "\"I share, therefore I am\": personality traits, life satisfaction, and Facebook check-ins.",
"abstract": "This study explored whether agreeableness, extraversion, and openness function to influence self-disclosure behavior, which in turn impacts the intensity of checking in on Facebook. A complete path from extraversion to Facebook check-in through self-disclosure and sharing was found. The indirect effect from sharing to check-in intensity through life satisfaction was particularly salient. The central component of check-in is for users to disclose a specific location selectively that has implications on demonstrating their social lives, lifestyles, and tastes, enabling a selective and optimized self-image. Implications on the hyperpersonal model and warranting principle are discussed."
},
{
"pmid": "33889035",
"title": "Understanding the Relationship Between Tourists' Consumption Behavior and Their Consumption Substitution Willingness Under Unusual Environment.",
"abstract": "INTRODUCTION\nUnderstanding the relationship between tourists' consumption behavior and their willingness to substitute consumption in unusual environments can promote tourists' sustainable consumption behavior. This study explores the internal relationship between tourists' willingness to engage in sustainable consumption behavior and the substitution of tourism consumption willingness in an unusual environment and the related factors.\n\n\nMETHODS\nThrough qualitative and quantitative mixed research, this study first invited 32 interviewees related to the tourism industry to conduct in-depth and focus group interviews and extracted a research model based on the push-pull theoretical model (PPM) through three rounds of coding of grounded theory. Then, through questionnaire design, pre-release, and formal release, 268 valid questionnaires were collected using a convenience sampling method, and the hypothesis and its mediating effect were verified using a structural equation model.\n\n\nRESULTS\nFurther quantitative analysis and verification showed that being in an unusual environment had a positive effect on tourists' perception of crisis awareness, safety risk, and willingness to engage in sustainable consumption behavior. However, the results did not support the unusual environment positively affecting the substitution of tourism consumption willingness, the psychological transformation cost, and the fixed consumption habit negatively affecting the substitution of tourism consumption willingness. In this study, two mediating variables were used to verify the indirect effect of being in an unusual environment and the substitution of tourism consumption willingness. The results showed that the mediating effect was significant.\n\n\nCONCLUSION\nThis study explored an action mechanism model aimed at guiding tourists' willingness for sustainable consumption, based on the environment and consumption behavior, and provided relevant countermeasures for the government and business decision-makers, enterprises, and investors in the tourism sector."
},
{
"pmid": "28771201",
"title": "Constrained Active Learning for Anchor Link Prediction Across Multiple Heterogeneous Social Networks.",
"abstract": "Nowadays, people are usually involved in multiple heterogeneous social networks simultaneously. Discovering the anchor links between the accounts owned by the same users across different social networks is crucial for many important inter-network applications, e.g., cross-network link transfer and cross-network recommendation. Many different supervised models have been proposed to predict anchor links so far, but they are effective only when the labeled anchor links are abundant. However, in real scenarios, such a requirement can hardly be met and most anchor links are unlabeled, since manually labeling the inter-network anchor links is quite costly and tedious. To overcome such a problem and utilize the numerous unlabeled anchor links in model building, in this paper, we introduce the active learning based anchor link prediction problem. Different from the traditional active learning problems, due to the one-to-one constraint on anchor links, if an unlabeled anchor link a = ( u , v ) is identified as positive (i.e., existing), all the other unlabeled anchor links incident to account u or account v will be negative (i.e., non-existing) automatically. Viewed in such a perspective, asking for the labels of potential positive anchor links in the unlabeled set will be rewarding in the active anchor link prediction problem. Various novel anchor link information gain measures are defined in this paper, based on which several constraint active anchor link prediction methods are introduced. Extensive experiments have been done on real-world social network datasets to compare the performance of these methods with state-of-art anchor link prediction methods. The experimental results show that the proposed Mean-entropy-based Constrained Active Learning (MC) method can outperform other methods with significant advantages."
}
] |
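To make the link-prediction idea in the record above concrete, the following is a minimal, illustrative Python sketch (not the cited study's actual pipeline) that combines a topological feature (common neighbors) with a non-topological personality feature (cosine similarity of Big Five trait vectors) in a logistic-regression link predictor; the graph, trait scores, and labels are hypothetical placeholders.

import numpy as np
import networkx as nx
from sklearn.linear_model import LogisticRegression

# Hypothetical follower graph and Big Five trait vectors (O, C, E, A, N) per user.
G = nx.Graph([("u1", "u2"), ("u2", "u3"), ("u3", "u4"), ("u1", "u3")])
traits = {"u1": [0.7, 0.5, 0.9, 0.6, 0.2], "u2": [0.6, 0.4, 0.8, 0.7, 0.3],
          "u3": [0.2, 0.8, 0.3, 0.5, 0.7], "u4": [0.3, 0.7, 0.2, 0.6, 0.6]}

def pair_features(u, v):
    # Topological feature: number of common neighbors of u and v.
    cn = len(list(nx.common_neighbors(G, u, v)))
    # Non-topological feature: cosine similarity of the personality vectors.
    a, b = np.array(traits[u]), np.array(traits[v])
    sim = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return [cn, sim]

# Toy training pairs with labels (1 = a link forms later, 0 = it does not).
pairs = [("u1", "u4"), ("u2", "u4"), ("u1", "u2"), ("u2", "u3")]
labels = [0, 1, 1, 1]
X = np.array([pair_features(u, v) for u, v in pairs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba([pair_features("u1", "u4")])[0, 1])  # estimated link probability

In a real study the trait vectors would come from a personality-prediction model over user-generated content rather than being supplied by hand, and the feature set would include many more topological and non-topological signals.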
Scientific Reports | 35379831 | PMC8980017 | 10.1038/s41598-022-08942-2 | Addressing the range anxiety of battery electric vehicles with charging en route | Battery electric vehicles (BEVs) have emerged as a promising alternative to traditional internal combustion engine (ICE) vehicles due to benefits in improved fuel economy, lower operating cost, and reduced emission. BEVs use electric motors rather than fossil fuels for propulsion and typically store electric energy in lithium-ion cells. With rising concerns over fossil fuel depletion and the impact of ICE vehicles on the climate, electric mobility is widely considered as the future of sustainable transportation. BEVs promise to drastically reduce greenhouse gas emissions as a result of the transportation sector. However, mass adoption of BEVs faces major barriers due to consumer worries over several important battery-related issues, such as limited range, long charging time, lack of charging stations, and high initial cost. Existing solutions to overcome these barriers, such as building more charging stations, increasing battery capacity, and stationary vehicle-to-vehicle (V2V) charging, often suffer from prohibitive investment costs, incompatibility to existing BEVs, or long travel delays. In this paper, we propose Peer-to-Peer Car Charging (P2C2), a scalable approach for charging BEVs that alleviates the need for elaborate charging infrastructure. The central idea is to enable BEVs to share charge among each other while in motion through coordination with a cloud-based control system. To re-vitalize a BEV fleet, which is continuously in motion, we introduce Mobile Charging Stations (MoCS), which are high-battery-capacity vehicles used to replenish the overall charge in a vehicle network. Unlike existing V2V charging solutions, the charge sharing in P2C2 takes place while the BEVs are in-motion, which aims at minimizing travel time loss. To reduce BEV-to-BEV contact time without increasing manufacturing costs, we propose to use multiple batteries of varying sizes and charge transfer rates. The faster but smaller batteries are used for charge transfer between vehicles, while the slower but larger ones are used for prolonged charge storage. We have designed the overall P2C2 framework and formalized the decision-making process of the cloud-based control system. We have evaluated the effectiveness of P2C2 using a well-characterized simulation platform and observed dramatic improvement in BEV mobility. Additionally, through statistical analysis, we show that a significant reduction in carbon emission is also possible if MoCS can be powered by renewable energy sources. | Related works and motivationIn this section, we shall look at different issues preventing BEVs from being widely adopted. We will also analyze some of the proposed solutions and qualitatively compare them to P2C2.Impact of battery-related issues on BEV adoptionBEVs have been around since 1823, but despite substantial corporate and government effort, it is still not a viable transport solution for the masses. Several battery-related concerns such as limited range, battery cost, and lack of charging stations have deterred consumers from allowing BEVs to become mainstream.Limited range and lack of charging stationsLong distance driving with a BEV can be difficult due to limited battery range. Detour to reach a charging station, availability of an open charging slot, and charge-up time are the main sources of frustration. 
Lithium-ion batteries remain expensive to build, and greenhouse gas emission from battery manufacturing is becoming a bigger issue3. In Fig. 1 we show the range, charging time, and cost of different high-end electric car models. The values reported are for the 2021 Nissan LEAF SL Plus, the 2021 Volkswagen ID.4, the 2021 Tesla S (Tri-Motor All-Wheel Drive Plaid), and the 2021 Tesla Y (Performance Dual-Motor All-Wheel Drive). These are approximate values based on an internet survey, but they show a clear trend. High-end BEVs such as the Tesla Model S and Model Y suffer from high charging times. Most charging stations are in urban areas, and most rural areas lack even 110 V charging stations, making universal BEV adoption challenging. DCFC (Level-3) stations are scarce, and building more is financially challenging5.
Battery life and high initial purchase cost
The life of a lithium-ion (Li-ion) battery degrades faster if it is subject to complete discharge or inefficient charging cycles. Li-ion batteries are widely used in BEVs13. Hence, completely draining the BEV battery may be undesirable to car owners, and if the user chooses to avoid accelerated battery ageing, this virtually decreases the BEV's range. Also, BEVs are generally more expensive than their traditional ICE vehicle counterparts due to high battery manufacturing cost.
Existing solutions to address charging issues
Issues relating to the battery and charging appear to be the core hurdle preventing full-scale adoption of BEVs. Next, we shall discuss some of the existing solutions proposed to counter battery-related issues in BEVs. Table 1 provides a comparison between existing solutions and P2C2 (proposed).
Table 1. Comparison between P2C2 and other BEV sustainability solutions. Columns: Solution; Cost; Deployability; Mobility; Charge transfer mode; EV-to-EV charging; On-the-go; Multi-level battery.
Dense CS; Very high; Hard; Low; Physical; No; No; No
Improved Battery; Very high; Moderate; –; –; –; –; No
Charging from Road6; Very high; Hard; High; Physical; No; Yes; No
V2V (Hub)8,14–17; High; Moderate; Low; Physical; Yes; No; No
V2V (Direct)7,18; Moderate; Easy; Low; Physical; Yes; No; No
Charging Trucks/Robots19,20; Moderate; Moderate; Low; Physical; No; No; No
Dynamic Charging9; High; Very hard; Low; Wireless; No; Yes; No
Battery Swapping21,22; Very high; Hard; Moderate; –; –; No; No
P2C2 (proposed); Moderate; Moderate; High; Physical/drone; Yes; Yes; Yes
More charging stations, higher battery capacity, and battery swapping
Building a large number of very high-speed (Level-3) charging stations in close proximity can alleviate range anxiety. However, dense and uniformly placed Level-3 stations are not financially feasible. Additionally, even a Level-3 charging station is not fast enough to allow a seamless long-drive experience; hence, even faster charging stations are required. Furthermore, the local power grids must be re-designed to handle the huge load due to these fast BEV charging stations23. Increasing the BEV battery size can enable long-distance travel and in turn reduce range anxiety. However, this solution is expensive and not scalable3. Manufacturing larger batteries will also increase greenhouse gas emissions, making BEVs less attractive. It also does not solve the core battery re-charging problems.
Several research and industry efforts are also being made towards developing battery-swapping techniques21,22. However, such battery-swapping stations are very expensive to build, and a large number of such stations will be required to support a big BEV fleet.
Directly accessing the BEV battery (mostly located at the base of the BEV to lower the center of gravity) is also challenging and will require major changes to the core BEV architecture.
Stationary V2V charge sharing
Several solutions have been proposed around the idea of BEV-to-BEV charge sharing at designated hubs. A hub can be an aggregator or a charging station. In works such as8,22, the BEVs parked at a hub share charge among each other and with the grid to optimize overall charging efficiency. The aggregator can also allow direct V2V charge sharing, bypassing the grid15–17. Such a hub will be less expensive to build than a charging station because no grid connectivity is required. The idea of trucks distributing charge to regions lacking charging stations has been proposed in19,24,25. The trucks initially receive charge at a depot and then travel to a designated spot where this charge can be distributed via stationary V2V charging. Additionally, to counter the lack of BEV charging ports in parking lots, the concept of a robot-like charging entity has been proposed that can move around the parking lot and serve multiple BEVs20. However, relying on designated hubs such as aggregators and charging stations to share charge is both expensive and inconvenient due to significant infrastructure requirements. Hence, in works such as7,18, the authors experiment with V2V charge sharing without any designated hubs. The game-theory-based solution in7 achieved improved charge-sharing efficiency in comparison to other techniques. Yet, for all of these solutions, the BEVs must be parked at equipped parking lots and remain stationary during the entire charging process.
Charging from the road and dynamic charging
Charging BEVs from the road can be an effective solution, but it may not be the most efficient. A road in Normandy, France, was fitted with solar panels to generate electricity in 2018. It produced only a total of 80,000 kWh in that year and about 40,000 kWh by the end of July 20196. The lack of efficiency was due to (1) Normandy's climate (an average of 44 days of sunshine), (2) damaged solar panels, and (3) obstructions from leaves. Converting every major roadway in the world into an electric/solar road is a big financial undertaking, rendering this solution practically infeasible. A wireless charging solution was proposed by Kosmanos et al.9, which involves charging BEVs from a bus or truck. State-of-the-art wireless charge transfer techniques have efficiencies of about 40–60%. A coil of 340 cm or 11.15 feet in diameter has a maximum 60% power transfer efficiency while transmitting across 170 cm or 2.2 feet10. Such a small distance is extremely unsafe for on-the-go charging in most traffic scenarios, and building/hosting such huge coils on both the receiver and the transmitter can be challenging.
Why on-the-go charging?
Refueling an ICE vehicle is so fast and easy that it is not even a concern, no matter how long a trip is. Similarly, if re-charging a BEV can be achieved without long wait times, meticulous planning, and lengthy detours, then ICE vehicle owners may be enticed to make the switch to BEVs. Solutions such as increasing battery size and building faster charging stations only serve as band-aids for the inherent BEV battery-related problems. Although V2V charging schemes can somewhat mitigate the lack of charging stations, they do not eliminate the need to remain stationary while charging and endure long travel time loss.
The only functional on-the-go charging solution, solar road charging, although intriguing, is not financially feasible. We hypothesize that if BEV-to-BEV charge sharing can be done on-the-go (while in motion), then it can (1) eliminate re-charging wait time, (2) increase battery life by avoiding inefficient charging cycles, (3) eliminate range anxiety by reducing reliance on charging stations, (4) reduce BEV cost by eliminating the need to have big batteries, and (5) reduce greenhouse gas emission if MoCS are powered via renewable sources. Based on this hypothesis, we design our peer-to-peer on-the-go charging system called P2C2. | [
"33414495"
] | [
{
"pmid": "33414495",
"title": "Plasma Hsp90 levels in patients with systemic sclerosis and relation to lung and skin involvement: a cross-sectional and longitudinal study.",
"abstract": "Our previous study demonstrated increased expression of Heat shock protein (Hsp) 90 in the skin of patients with systemic sclerosis (SSc). We aimed to evaluate plasma Hsp90 in SSc and characterize its association with SSc-related features. Ninety-two SSc patients and 92 age-/sex-matched healthy controls were recruited for the cross-sectional analysis. The longitudinal analysis comprised 30 patients with SSc associated interstitial lung disease (ILD) routinely treated with cyclophosphamide. Hsp90 was increased in SSc compared to healthy controls. Hsp90 correlated positively with C-reactive protein and negatively with pulmonary function tests: forced vital capacity and diffusing capacity for carbon monoxide (DLCO). In patients with diffuse cutaneous (dc) SSc, Hsp90 positively correlated with the modified Rodnan skin score. In SSc-ILD patients treated with cyclophosphamide, no differences in Hsp90 were found between baseline and after 1, 6, or 12 months of therapy. However, baseline Hsp90 predicts the 12-month change in DLCO. This study shows that Hsp90 plasma levels are increased in SSc patients compared to age-/sex-matched healthy controls. Elevated Hsp90 in SSc is associated with increased inflammatory activity, worse lung functions, and in dcSSc, with the extent of skin involvement. Baseline plasma Hsp90 predicts the 12-month change in DLCO in SSc-ILD patients treated with cyclophosphamide."
}
] |
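The multi-battery, in-motion charge-sharing idea described in the record above can be illustrated with a small numerical sketch. This is not the paper's control algorithm; the capacities, transfer rates, and contact window below are assumed values chosen only to show how a short vehicle-to-vehicle contact could add usable energy without stopping.

# Illustrative sketch of in-motion charge sharing with a two-battery layout:
# a small fast-transfer buffer battery receives charge from a peer or MoCS during
# a short contact window, then drains slowly into the large storage battery.
# All numbers below are assumptions, not values from the paper.

FAST_CAP_KWH = 5.0   # small buffer battery capacity
MAIN_CAP_KWH = 40.0  # large storage battery capacity
TRANSFER_KW = 150.0  # peer-to-peer transfer rate into the buffer
TRICKLE_KW = 10.0    # buffer-to-storage rate (continues after contact ends)

def contact_window(buffer_kwh, main_kwh, contact_s):
    """Charge received during one vehicle-to-vehicle contact of contact_s seconds."""
    hours = contact_s / 3600.0
    received = min(TRANSFER_KW * hours, FAST_CAP_KWH - buffer_kwh)
    return buffer_kwh + received, main_kwh

def drain_buffer(buffer_kwh, main_kwh, drive_s):
    """After contact, move energy from the buffer into the main battery while driving."""
    hours = drive_s / 3600.0
    moved = min(TRICKLE_KW * hours, buffer_kwh, MAIN_CAP_KWH - main_kwh)
    return buffer_kwh - moved, main_kwh + moved

buf, main = 0.0, 12.0                      # start: empty buffer, low main battery
buf, main = contact_window(buf, main, 90)  # 90 s of side-by-side driving
buf, main = drain_buffer(buf, main, 1800)  # next 30 min of normal driving
print(f"buffer={buf:.2f} kWh, main={main:.2f} kWh")

Under these assumed numbers, a 90-second contact moves a few kWh into the buffer, which then tops up the main battery while the vehicle keeps driving; the point of the sketch is only to show why a small, fast-transfer buffer can keep contact times short.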
Frontiers in Robotics and AI | null | PMC8980723 | 10.3389/frobt.2022.843816 | Deep Learning-Based Complete Coverage Path Planning With Re-Joint and Obstacle Fusion Paradigm | With the introduction of autonomy into the precision agriculture process, environmental exploration, disaster response, and other fields, one of the global demands is to navigate autonomous vehicles to completely cover entire unknown environments. In the previous complete coverage path planning (CCPP) research, however, autonomous vehicles need to consider mapping, obstacle avoidance, and route planning simultaneously during operating in the workspace, which results in an extremely complicated and computationally expensive navigation system. In this study, a new framework is developed in light of a hierarchical manner with the obtained environmental information and gradually solving navigation problems layer by layer, consisting of environmental mapping, path generation, CCPP, and dynamic obstacle avoidance. The first layer based on satellite images utilizes a deep learning method to generate the CCPP trajectory through the position of the autonomous vehicle. In the second layer, an obstacle fusion paradigm in the map is developed based on the unmanned aerial vehicle (UAV) onboard sensors. A nature-inspired algorithm is adopted for obstacle avoidance and CCPP re-joint. Equipped with the onboard LIDAR equipment, autonomous vehicles, in the third layer, dynamically avoid moving obstacles. Simulated experiments validate the effectiveness and robustness of the proposed framework. | 1.1 Related WorkFor decades, CCPP has undergone extensive research, and many algorithms have emerged, such as the bio-inspired neural network (BNN) approach, the Boustrophedon Cellular Decomposition (BCD) method, and the deep reinforcement learning approach (DRL). Luo and Yang (2008) developed the bio-inspired neural network (BNN) method to navigate robots to perform CCPP while avoiding obstacles within dynamic environments in real time (Zhu et al., 2017). The robot is attracted to unscanned areas and repelled by the accomplished areas or obstacles based on the neuron activity in the BNN given by the shunting equation (Yang and Luo, 2004; Li et al., 2018). Without any prior knowledge about the environment, the next position of the robot depends on the current position of the robot and neuron activity associated with its current position (Luo et al., 2016). However, it is time- and energy-consuming for the vehicles and requires high computing resources to process fine-resolution mapping (Sun et al., 2018). Unlike the BNN approach, the boundary representation method that defines the workspace is adopted by the Boustrophedon Cellular Decomposition (BCD) method and the deep reinforcement learning approach (DRL). The BCD method is proposed by Acar and Choset (2002), which decomposes the environment into many line scan partitions and is explored through a back-and-forth path (BFP) in the same direction. The BCD is an effective CCPP method with more diverse, non-polygonal obstacles in workspace. In trapezoidal decomposition as a cell, it is covered in back-and-forth patterns. For a complex configuration space with irregular-shaped obstacles, BCD needs to construct a graph that represents the adjacency connections of the cells in the boustrophedon decomposition. Therefore, a deep leaning-based method may promote it to a more efficient CCPP method (Sünderhauf et al., 2018; Valiente et al., 2020; Rawashdeh et al., 2021). Similarly, Nasirian et al. 
(2021) utilized traditional graph theory to segment the workspace and proposed a deep reinforcement learning approach to solve the CCPP problem in complex workspaces. However, the most common representation of the workspace is a polygon. Irregular, non-convex polygonal areas can still be decomposed into multiple convex polygons (Li et al., 2011). Thus, the polygonal representation is also adopted in this study to express most workspaces that need to be explored. Such a method simplifies complex environments and addresses covering irregularity for vehicles (Quin et al., 2021). Faster R-CNN originated from R-CNN and Fast R-CNN, and uses a unified neural network (NN) for object detection, as shown in Figure 4A. Faster R-CNN avoids using selective search, which accelerates region selection and further reduces computational costs. The Faster R-CNN detector is mainly composed of a region proposal network (RPN), which generates region proposals, and a network that uses these generated feature patches (FP) for object detection. The region of interest (ROI) pooling layer is used to resize the feature patch (RFP), which is finally concatenated with a set of fully connected (FC) layers in our study. The two fully connected NN layers are utilized to refine the location of the bounding box and classify the objects. Faster R-CNN effectively uses the bounding box in our studies to identify and locate vehicles and obstacles in the images. This is also applied to the maps obtained from farm, search, and rescue scenes to distinguish the vehicles, machines, and human beings in the image. Although the above-mentioned CCPP approaches have achieved remarkable results, they may still be sub-optimal when the starting and target positions required by the vehicle are included in the path. Especially for the multiple sub-region exploration task shown in Figure 1A, the task is considered a continuous exploration of the four sub-regions, and the starting point of the next sub-region to be explored is the target point of the last sub-region, as shown by the red circles in Figure 1. The selection of intermediate target points for multiple polygonal exploration areas is still an open problem because it needs to consider the shape and relative position of each sub-region, as well as the entrance and exit of the exploration area (Graves and Chakraborty, 2018). For simplicity, the entrances of the next sub-region are selected as target points here. The connection path length from the starting point to the target point should be considered, as shown by the blue lines in Figure 1B. In this case, ignoring the connection path may increase the complete path length of the overall exploration task (Xie et al., 2019). Thus, it is vital to consider the starting and target points of the vehicle included in the exploration task and obtain a shorter path that effectively utilizes the limited onboard resources. Another challenging problem that arises in CCPP is obstacle avoidance (An et al., 2018; Wang et al., 2021). Based on the excellent optimization and search capabilities of nature-inspired algorithms, researchers have recently explored many nature-inspired computational approaches to solve vehicle collision-free navigation problems (Deng et al., 2016; Ewerton et al., 2019; Lei et al., 2019, 2021; Segato et al., 2019). For instance, a hybrid fireworks algorithm with LIDAR-based local navigation was developed by Lei et al. (2020a), capable of generating short collision-free trajectories in unstructured environments.
Zhou et al. (2019) developed a modified firefly algorithm with a self-adaptive step factor to avoid premature convergence and improve the operational efficiency of autonomous vehicles. Lei et al. (2020b) proposed a graph-based model integrated with ant colony optimization (ACO) to navigate the robot under the robot's kinematic constraints. Xiong et al. (2021) further improved ACO using a time taboo strategy to improve the algorithm's convergence speed and global search ability in a dynamic environment. Cèsar-Tondreau et al. (2021) proposed a human-demonstrated navigation system, which integrates a behavioral cloning model into an off-the-shelf navigation stack.
FIGURE 1. (A) Illustration of the multiple sub-regions exploration task. (B) The entire CCPP trajectory of multiple sub-regions with connection paths. | [
"33501221",
"33501104",
"33500906",
"32318314",
"33501085",
"33732132"
] | [
{
"pmid": "33501221",
"title": "Mutual Shaping in Swarm Robotics: User Studies in Fire and Rescue, Storage Organization, and Bridge Inspection.",
"abstract": "Many real-world applications have been suggested in the swarm robotics literature. However, there is a general lack of understanding of what needs to be done for robot swarms to be useful and trusted by users in reality. This paper aims to investigate user perception of robot swarms in the workplace, and inform design principles for the deployment of future swarms in real-world applications. Three qualitative studies with a total of 37 participants were done across three sectors: fire and rescue, storage organization, and bridge inspection. Each study examined the users' perceptions using focus groups and interviews. In this paper, we describe our findings regarding: the current processes and tools used in these professions and their main challenges; attitudes toward robot swarms assisting them; and the requirements that would encourage them to use robot swarms. We found that there was a generally positive reaction to robot swarms for information gathering and automation of simple processes. Furthermore, a human in the loop is preferred when it comes to decision making. Recommendations to increase trust and acceptance are related to transparency, accountability, safety, reliability, ease of maintenance, and ease of use. Finally, we found that mutual shaping, a methodology to create a bidirectional relationship between users and technology developers to incorporate societal choices in all stages of research and development, is a valid approach to increase knowledge and acceptance of swarm robotics. This paper contributes to the creation of such a culture of mutual shaping between researchers and users, toward increasing the chances of a successful deployment of robot swarms in the physical realm."
},
{
"pmid": "33501104",
"title": "Learning Trajectory Distributions for Assisted Teleoperation and Path Planning.",
"abstract": "Several approaches have been proposed to assist humans in co-manipulation and teleoperation tasks given demonstrated trajectories. However, these approaches are not applicable when the demonstrations are suboptimal or when the generalization capabilities of the learned models cannot cope with the changes in the environment. Nevertheless, in real co-manipulation and teleoperation tasks, the original demonstrations will often be suboptimal and a learning system must be able to cope with new situations. This paper presents a reinforcement learning algorithm that can be applied to such problems. The proposed algorithm is initialized with a probability distribution of demonstrated trajectories and is based on the concept of relevance functions. We show in this paper how the relevance of trajectory parameters to optimization objectives is connected with the concept of Pearson correlation. First, we demonstrate the efficacy of our algorithm by addressing the assisted teleoperation of an object in a static virtual environment. Afterward, we extend this algorithm to deal with dynamic environments by utilizing Gaussian Process regression. The full framework is applied to make a point particle and a 7-DoF robot arm autonomously adapt their movements to changes in the environment as well as to assist the teleoperation of a 7-DoF robot arm in a dynamic environment."
},
{
"pmid": "33500906",
"title": "A Linear Objective Function-Based Heuristic for Robotic Exploration of Unknown Polygonal Environments.",
"abstract": "This work presents a heuristic for describing the next best view location for an autonomous agent exploring an unknown environment. The approach considers each robot as a point mass with omnidirectional and unrestricted vision of the environment and line-of-sight communication operating in a polygonal environment which may contain holes. The number of robots in the team is always sufficient for full visual coverage of the space. The technique employed falls in the category of distributed visibility-based deployment algorithms which seek to segment the space based on each agent's field of view with the goal of deploying each agent into the environment to create a visually connected series of agents which fully observe the previously unknown region. The contributions made to this field are a technique for utilizing linear programming methods to determine the solution to the next best observation (NBO) problem as well as a method for calculating multiple NBO points simultaneously. Both contributions are incorporated into an algorithm and deployed in a simulated environment built with MATLAB for testing. The algorithm successfully deployed agents into polygons which may contain holes. The efficiency of the deployment method was compared with random deployment methods to establish a performance metric for the proposed tactic. It was shown that the heuristic presented in this work performs better the other tested strategies."
},
{
"pmid": "32318314",
"title": "Optimizing Motion-Planning Problem Setup via Bounded Evaluation with Application to Following Surgical Trajectories.",
"abstract": "A motion-planning problem's setup can drastically affect the quality of solutions returned by the planner. In this work we consider optimizing these setups, with a focus on doing so in a computationally-efficient fashion. Our approach interleaves optimization with motion planning, which allows us to consider the actual motions required of the robot. Similar prior work has treated the planner as a black box: our key insight is that opening this box in a simple-yet-effective manner enables a more efficient approach, by allowing us to bound the work done by the planner to optimizer-relevant computations. Finally, we apply our approach to a surgically-relevant motion-planning task, where our experiments validate our approach by more-efficiently optimizing the fixed insertion pose of a surgical robot."
},
{
"pmid": "33501085",
"title": "Automated Steerable Path Planning for Deep Brain Stimulation Safeguarding Fiber Tracts and Deep Gray Matter Nuclei.",
"abstract": "Deep Brain Stimulation (DBS) is a neurosurgical procedure consisting in the stereotactic implantation of stimulation electrodes to specific brain targets, such as deep gray matter nuclei. Current solutions to place the electrodes rely on rectilinear stereotactic trajectories (RTs) manually defined by surgeons, based on pre-operative images. An automatic path planner that accurately targets subthalamic nuclei (STN) and safeguards critical surrounding structures is still lacking. Also, robotically-driven curvilinear trajectories (CTs) computed on the basis of state-of-the-art neuroimaging would decrease DBS invasiveness, circumventing patient-specific obstacles. This work presents a new algorithm able to estimate a pool of DBS curvilinear trajectories for reaching a given deep target in the brain, in the context of the EU's Horizon EDEN2020 project. The prospect of automatically computing trajectory plans relying on sophisticated newly engineered steerable devices represents a breakthrough in the field of microsurgical robotics. By tailoring the paths according to single-patient anatomical constraints, as defined by advanced preoperative neuroimaging including diffusion MR tractography, this planner ensures a higher level of safety than the standard rectilinear approach. Ten healthy controls underwent Magnetic Resonance Imaging (MRI) on 3T scanner, including 3DT1-weighted sequences, 3Dhigh-resolution time-of-flight MR angiography (TOF-MRA) and high angular resolution diffusion MR sequences. A probabilistic q-ball residual-bootstrap MR tractography algorithm was used to reconstruct motor fibers, while the other deep gray matter nuclei surrounding STN and vessels were segmented on T1 and TOF-MRA images, respectively. These structures were labeled as obstacles. The reliability of the automated planner was evaluated; CTs were compared to RTs in terms of efficacy and safety. Targeting the anterior STN, CTs performed significantly better in maximizing the minimal distance from critical structures, by finding a tuned balance between all obstacles. Moreover, CTs resulted superior in reaching the center of mass (COM) of STN, as well as in optimizing the entry angle in STN and in the skull surface."
},
{
"pmid": "33732132",
"title": "Mobile Robot Path Planning Based on Time Taboo Ant Colony Optimization in Dynamic Environment.",
"abstract": "This article aims to improve the problem of slow convergence speed, poor global search ability, and unknown time-varying dynamic obstacles in the path planning of ant colony optimization in dynamic environment. An improved ant colony optimization algorithm using time taboo strategy is proposed, namely, time taboo ant colony optimization (TTACO), which uses adaptive initial pheromone distribution, rollback strategy, and pheromone preferential limited update to improve the algorithm's convergence speed and global search ability. For the poor global search ability of the algorithm and the unknown time-varying problem of dynamic obstacles in a dynamic environment, a time taboo strategy is first proposed, based on which a three-step arbitration method is put forward to improve its weakness in global search. For the unknown time-varying dynamic obstacles, an occupancy grid prediction model is proposed based on the time taboo strategy to solve the problem of dynamic obstacle avoidance. In order to improve the algorithm's calculation speed when avoiding obstacles, an ant colony information inheritance mechanism is established. Finally, the algorithm is used to conduct dynamic simulation experiments in a simulated factory environment and is compared with other similar algorithms. The experimental results show that the TTACO can obtain a better path and accelerate the convergence speed of the algorithm in a static environment and can successfully avoid dynamic obstacles in a dynamic environment."
}
] |
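The back-and-forth (BFP) sweep pattern discussed in the record above for Boustrophedon Cellular Decomposition can be illustrated with a short sketch. This is only a toy grid version of the sweep idea, not the deep-learning planner proposed in the paper; the grid size and obstacle cells are arbitrary assumptions, and blocked cells are simply skipped rather than handled by a proper cell decomposition.

# Minimal sketch of a boustrophedon (back-and-forth) coverage path on a grid.
def boustrophedon_path(rows, cols, obstacles=frozenset()):
    """Return a list of (row, col) cells visited in alternating sweep directions."""
    path = []
    for r in range(rows):
        # Sweep left-to-right on even rows, right-to-left on odd rows.
        cols_in_row = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        for c in cols_in_row:
            if (r, c) not in obstacles:
                path.append((r, c))
    return path

if __name__ == "__main__":
    blocked = {(1, 2), (1, 3), (2, 2)}          # hypothetical obstacle cells
    path = boustrophedon_path(4, 6, blocked)
    free_cells = 4 * 6 - len(blocked)
    print(f"visited {len(path)} of {free_cells} free cells")
    print(path[:8], "...")

A real cellular-decomposition planner would split the region into cells around the obstacles and connect the per-cell sweeps through an adjacency graph; the sketch only shows the alternating sweep direction that gives the pattern its name.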
Frontiers in Bioengineering and Biotechnology | null | PMC8980781 | 10.3389/fbioe.2022.827408 | Pursuit and Evasion Strategy of a Differential Game Based on Deep Reinforcement Learning | Since the emergence of deep neural network (DNN), it has achieved excellent performance in various research areas. As the combination of DNN and reinforcement learning, deep reinforcement learning (DRL) becomes a new paradigm for solving differential game problems. In this study, we build up a reinforcement learning environment and apply relevant DRL methods to a specific bio-inspired differential game problem: the dog sheep game. The dog sheep game environment is set on a circle where the dog chases down the sheep attempting to escape. According to some presuppositions, we are able to acquire the kinematic pursuit and evasion strategy. Next, this study implements the value-based deep Q network (DQN) model and the deep deterministic policy gradient (DDPG) model to the dog sheep game, attempting to endow the sheep the ability to escape successfully. To enhance the performance of the DQN model, this study brought up the reward mechanism with a time-out strategy and the game environment with an attenuation mechanism of the steering angle of sheep. These modifications effectively increase the probability of escape for the sheep. Furthermore, the DDPG model is adopted due to its continuous action space. Results show the modifications of the DQN model effectively increase the escape probabilities to the same level as the DDPG model. When it comes to the learning ability under various environment difficulties, the refined DQN and the DDPG models have bigger performance enhancement over the naive evasion model in harsh environments than in loose environments. | 2 Related WorkResearchers have studied the implementations of DRL methods for differential pursuit–evasion games and improved them in many aspects. Isaacs (1965) first proposed the theory of differential games. In his book, Isaacs proposed a classic “homicidal chauffeur” problem which is a classic differential pursuit–evasion game. In this game, a slow but maneuverable pedestrian is against a driver with a faster but less maneuverable vehicle, attempting to run over the pedestrian. Merz (1971) presented the complete solution for the “homicidal chauffeur” game. Another classic game scenario is the game in constrained environments. For example, agents in Sundaram et al.’s (2017) study are constrained to road networks. The control policies of our research are based on the kinematic pursuit and evasion game theory. Apart from differential games, kinematic analysis is applied to various research areas, such as robotic control (2021) (Liu et al., 2021a; Liu et al., 2021b; Xiao et al., 2021; Zhao et al., 2022).Deep neural networks have shown their advantages over traditional methods in multiple research areas. For example, they have been adopted in semantic analysis (2021) (Chen et al., 2021a; Chen et al., 2021b; Jiang et al., 2021b; Chen et al., 2021c) and image recognitions (2021) (Hao et al., 2021; Jiang et al., 2021a; Yang et al., 2021). The DRL method, which combines DNNs and reinforcement learning, was first proposed by DeepMind (2013, 2015) (Mnih et al., 2013; Mnih et al., 2015). Their method called DQN is the combination of Q-learning and deep neural network. It shows excellent performance in the Atari game. The emergence of the DQN leads to a number of similar research studies based on DRL methods. Shao et al. 
(2019) surveyed the application of DRL methods in video games. Value-based, policy-gradient, and model-based DRL methods have been applied to various video games such as Atari, Minecraft, and StarCraft. For example, Wei et al. (2018) employed convolutional neural networks trained with a refined DQN to play the snake game. DRL has been a long-standing research area when it comes to artificial intelligence in differential games. Jiang et al. (2020) introduced an approximate soft policy iteration-based reinforcement learning method which used a value neural network to provide a cooperative policy for two pursuers versus an evader. Lin (1992) showed that experience replay helps the network train faster and more smoothly. To overcome the drawback of a discrete action space, researchers have proposed several methods with a continuous action space. The asynchronous advantage actor-critic (A3C) method is utilized in many related research studies. For example, Perot et al. (2017) adopted A3C with a CNN + LSTM state encoder to play racing games. Lixin Wang et al. (2019) used a fuzzy deterministic policy gradient algorithm to obtain a specific physical meaning for policy learning in a pursuit–evasion game. Lillicrap et al. (2015) first introduced the DDPG method with a continuous action space. Maolin Wang et al. (2019) applied the DDPG model to an open pursuit–evasion environment to learn the control strategy. Several researchers (Lowe et al., 2020; Singh et al., 2020; Wan et al., 2021) proposed actor-critic multi-agent DDPG algorithms to preprocess the actions of multiple agents in the virtual environment. | [
"33562366",
"35096796",
"35083202",
"35223822",
"25719670",
"34828131",
"34746114"
] | [
{
"pmid": "33562366",
"title": "Combining Public Opinion Dissemination with Polarization Process Considering Individual Heterogeneity.",
"abstract": "The wide dissemination of false information and the frequent occurrence of extreme speeches on online social platforms have become increasingly prominent, which impact on the harmony and stability of society. In order to solve the problems in the dissemination and polarization of public opinion over online social platforms, it is necessary to conduct in-depth research on the formation mechanism of the dissemination and polarization of public opinion. This article appends individual communicating willingness and forgetting effects to the Susceptible-Exposed-Infected-Recovered (SEIR) model to describe individual state transitions; secondly, it introduces three heterogeneous factors describing the characteristics of individual differences in the Jager-Amblard (J-A) model, namely: Individual conformity, individual conservative degree, and inter-individual relationship strength in order to reflect the different roles of individual heterogeneity in the opinions interaction; thirdly, it integrates the improved SEIR model and J-A model to construct the SEIR-JA model to study the formation mechanism of public opinion dissemination and polarization. Transmission parameters and polarization parameters are simulated and analyzed. Finally, a public opinion event from the pricing of China's self-developed COVID-19 vaccine are used, and related Weibo comment data about this event are also collected so as to verify the rationality and effectiveness of the proposed model."
},
{
"pmid": "35096796",
"title": "Intelligent Detection of Steel Defects Based on Improved Split Attention Networks.",
"abstract": "The intelligent monitoring and diagnosis of steel defects plays an important role in improving steel quality, production efficiency, and associated smart manufacturing. The application of the bio-inspired algorithms to mechanical engineering problems is of great significance. The split attention network is an improvement of the residual network, and it is an improvement of the visual attention mechanism in the bionic algorithm. In this paper, based on the feature pyramid network and split attention network, the network is improved and optimised in terms of data enhancement, multi-scale feature fusion and network structure optimisation. The DF-ResNeSt50 network model is proposed, which introduces a simple modularized split attention block, which can improve the attention mechanism of cross-feature graph groups. Finally, experimental validation proves that the proposed network model has good performance and application prospects in the intelligent detection of steel defects."
},
{
"pmid": "35083202",
"title": "Genetic Algorithm-Based Trajectory Optimization for Digital Twin Robots.",
"abstract": "Mobile robots have an important role in material handling in manufacturing and can be used for a variety of automated tasks. The accuracy of the robot's moving trajectory has become a key issue affecting its work efficiency. This paper presents a method for optimizing the trajectory of the mobile robot based on the digital twin of the robot. The digital twin of the mobile robot is created by Unity, and the trajectory of the mobile robot is trained in the virtual environment and applied to the physical space. The simulation training in the virtual environment provides schemes for the actual movement of the robot. Based on the actual movement data returned by the physical robot, the preset trajectory of the virtual robot is dynamically adjusted, which in turn enables the correction of the movement trajectory of the physical robot. The contribution of this work is the use of genetic algorithms for path planning of robots, which enables trajectory optimization of mobile robots by reducing the error in the movement trajectory of physical robots through the interaction of virtual and real data. It provides a method to map learning in the virtual domain to the physical robot."
},
{
"pmid": "35223822",
"title": "Self-Tuning Control of Manipulator Positioning Based on Fuzzy PID and PSO Algorithm.",
"abstract": "With the manipulator performs fixed-point tasks, it becomes adversely affected by external disturbances, parameter variations, and random noise. Therefore, it is essential to improve the robust and accuracy of the controller. In this article, a self-tuning particle swarm optimization (PSO) fuzzy PID positioning controller is designed based on fuzzy PID control. The quantization and scaling factors in the fuzzy PID algorithm are optimized by PSO in order to achieve high robustness and high accuracy of the manipulator. First of all, a mathematical model of the manipulator is developed, and the manipulator positioning controller is designed. A PD control strategy with compensation for gravity is used for the positioning control system. Then, the PID controller parameters dynamically are minute-tuned by the fuzzy controller 1. Through a closed-loop control loop to adjust the magnitude of the quantization factors-proportionality factors online. Correction values are outputted by the modified fuzzy controller 2. A quantization factor-proportion factor online self-tuning strategy is achieved to find the optimal parameters for the controller. Finally, the control performance of the improved controller is verified by the simulation environment. The results show that the transient response speed, tracking accuracy, and follower characteristics of the system are significantly improved."
},
{
"pmid": "25719670",
"title": "Human-level control through deep reinforcement learning.",
"abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks."
},
{
"pmid": "34828131",
"title": "An Improved Approach towards Multi-Agent Pursuit-Evasion Game Decision-Making Using Deep Reinforcement Learning.",
"abstract": "A pursuit-evasion game is a classical maneuver confrontation problem in the multi-agent systems (MASs) domain. An online decision technique based on deep reinforcement learning (DRL) was developed in this paper to address the problem of environment sensing and decision-making in pursuit-evasion games. A control-oriented framework developed from the DRL-based multi-agent deep deterministic policy gradient (MADDPG) algorithm was built to implement multi-agent cooperative decision-making to overcome the limitation of the tedious state variables required for the traditionally complicated modeling process. To address the effects of errors between a model and a real scenario, this paper introduces adversarial disturbances. It also proposes a novel adversarial attack trick and adversarial learning MADDPG (A2-MADDPG) algorithm. By introducing an adversarial attack trick for the agents themselves, uncertainties of the real world are modeled, thereby optimizing robust training. During the training process, adversarial learning was incorporated into our algorithm to preprocess the actions of multiple agents, which enabled them to properly respond to uncertain dynamic changes in MASs. Experimental results verified that the proposed approach provides superior performance and effectiveness for pursuers and evaders, and both can learn the corresponding confrontational strategy during training."
},
{
"pmid": "34746114",
"title": "Dynamic Gesture Recognition Using Surface EMG Signals Based on Multi-Stream Residual Network.",
"abstract": "Gesture recognition technology is widely used in the flexible and precise control of manipulators in the assisted medical field. Our MResLSTM algorithm can effectively perform dynamic gesture recognition. The result of surface EMG signal decoding is applied to the controller, which can improve the fluency of artificial hand control. Much current gesture recognition research using sEMG has focused on static gestures. In addition, the accuracy of recognition depends on the extraction and selection of features. However, Static gesture research cannot meet the requirements of natural human-computer interaction and dexterous control of manipulators. Therefore, a multi-stream residual network (MResLSTM) is proposed for dynamic hand movement recognition. This study aims to improve the accuracy and stability of dynamic gesture recognition. Simultaneously, it can also advance the research on the smooth control of the Manipulator. We combine the residual model and the convolutional short-term memory model into a unified framework. The architecture extracts spatiotemporal features from two aspects: global and deep, and combines feature fusion to retain essential information. The strategy of pointwise group convolution and channel shuffle is used to reduce the number of network calculations. A dataset is constructed containing six dynamic gestures for model training. The experimental results show that on the same recognition model, the gesture recognition effect of fusion of sEMG signal and acceleration signal is better than that of only using sEMG signal. The proposed approach obtains competitive performance on our dataset with the recognition accuracies of 93.52%, achieving state-of-the-art performance with 89.65% precision on the Ninapro DB1 dataset. Our bionic calculation method is applied to the controller, which can realize the continuity of human-computer interaction and the flexibility of manipulator control."
}
] |
Frontiers in Bioengineering and Biotechnology | null | PMC8981035 | 10.3389/fbioe.2022.852408 | Time Optimal Trajectory Planing Based on Improved Sparrow Search Algorithm | Complete trajectory planning includes path planning, inverse solution solving and trajectory optimization. In this paper, a highly smooth and time-saving approach to trajectory planning is obtained by improving the kinematic and optimization algorithms for the time-optimal trajectory planning problem. By partitioning the joint space, the paper obtains an inverse solution calculation based on the partitioning of the joint space, saving 40% of the inverse kinematics solution time. This means that a large number of computational resources can be saved in trajectory planning. In addition, an improved sparrow search algorithm (SSA) is proposed to complete the solution of the time-optimal trajectory. A Tent chaotic mapping was used to optimize the way of generating initial populations. The algorithm was further improved by combining it with an adaptive step factor. The experiments demonstrated the performance of the improved SSA. The robot’s trajectory is further optimized in time by an improved sparrow search algorithm. Experimental results show that the method can improve convergence speed and global search capability and ensure smooth trajectories. | Related WorksThe time-optimal problem of trajectories has been actively studied. Earlier proposed algorithms (Bobrow et al., 1985; Shin and McKay, 1985) are based on the position-phase plane. These algorithm’s transform the time-optimal problem into a function of θ and v as parameters to find the optimal problem. In short, for each point of the path, the minimum time to pass the entire path is obtained by passing at the maximum speed. The maximum velocity allowed for each point is found in the plane formed by θ and v, and is made continuous when switching from point to point. These methods do not consider the acceleration continuity, which is not possible for an actuator moving in actual operation to produce discontinuous acceleration. This approach of ignoring actuator dynamics leads to two adverse effects: first, the discontinuous acceleration causes the actuator motion always to be delayed concerning the reference trajectory. This significantly reduces the tracking accuracy of the trajectory. In addition, constant switching can achieve discontinuous acceleration, but this introduces high-frequency oscillations to the actuator.Solutions to these problems may be found in this literature (Shiller, 1996; Constantinescu, 1998; Constantinescu and Croft, 2000). In these methods higher order derivatives are added for finite control, which requires the establishment of third order dynamic equations. However, building accurate kinetic models is often challenging to accomplish.Another approach that does not require a dynamics model is to use a smoothing function to express the trajectory in joint space. The torque of the actuator is directly reflected in the joint variation of the robot so that a smooth trajectory will result in a smooth model. In the usual case, the spline interpolation function is widely used. Three constraints need to be considered after describing the joint trajectory using the spline function.1) Speed limit;2) Acceleration limit;3) Jerk limit.
The cubic spline function takes time as the horizontal axis and the joint values as the vertical axis, and its third-order differentiability ensures that the acceleration is continuous. The calculation of the optimal trajectory time is completed by finding the minimum value of time subject to the constraints. Tandem robots are generally made up of multiple joints, and the motion of each joint is coupled, making it challenging to complete the optimization process through numerical solutions. Swarm intelligence search algorithms have shown great promise in such problems (Tian et al., 2020; Chen et al., 2021a; Chen et al., 2021b; Chen et al., 2022). Swarm search algorithms are widely used in robotics, for example in inverse solution computation (Zhao et al., 2022), control (Liu G et al., 2021; Wu et al., 2022), pose recognition (Li et al., 2019a; Tao et al., 2022a) and other nonlinear problems (Huang et al., 2019; Sun et al., 2020d; Hao et al., 2022). Recently published optimisers (Ghafori and Gharehchopogh, 2012; Abedi and Gharehchopogh, 2020; Abdollahzadeh et al., 2021a; Gharehchopogh et al., 2021a; Abdollahzadeh et al., 2021b; Benyamin et al., 2021; Gharehchopogh et al., 2021b; Gharehchopogh and Abdollahzadeh, 2021; Goldanloo and Gharechophugh, 2021; Mohammadzadeh and Gharehchopogh, 2021; Zaman and Gharehchopogh, 2021; Gharehchopogh, 2022) have achieved good performance but may not suit industrial scenarios with high real-time requirements. The particle swarm optimization (PSO) algorithm has been used to search for the global time-optimal trajectory of a space robot in conjunction with robot dynamics (Huang and Xu, 2006). Huang applied a multi-objective particle swarm optimization method to the multi-objective optimization of the motion trajectory of a space robot (Huang et al., 2008). Liu and Zhang (Zhang et al., 2018; Liu and Rodriguez, 2021) used a quintic polynomial for trajectory planning of the PUMA560 robot and proposed an improved genetic algorithm (GA) to accomplish time-optimal trajectory planning. These works demonstrate the feasibility of swarm search algorithms for this problem, but accuracy and convergence speed remain problematic. The sparrow search algorithm (SSA) was proposed by Jiankai Xue (Xu et al., 2022) in 2020. It outperforms PSO, GA, the grey wolf optimization algorithm (GWO) and other widely used search algorithms on uni-modal and multi-modal test functions, and it is widely used in problems such as path planning for mobile robots (Liu et al., 2022b), control of photovoltaic microgrids (Yuan et al., 2021) and optimization of battery stack model parameters (Liu Y et al., 2022). We find that SSA is suitable for time-optimal trajectory planning problems and improves the initial population generation of the original algorithm through the Tent chaotic mapping method. An adaptive step size factor adjusts the individual update to improve the global search capability. Time-optimal trajectory planning was completed on the UR5 collaborative robot, and experimental results demonstrate the effectiveness of the method. A complete process includes trajectory determination, inverse solution solving, and trajectory optimization in practical motion. Depending on the scenario, the conditions for determining the trajectory are different.
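As an illustrative aside before returning to how trajectories are determined in practice, the Tent-chaotic-map population initialisation mentioned above could be sketched roughly as follows (a minimal sketch only: the map parameters, bounds and function names here are assumptions for illustration, not the authors' implementation):

```python
import numpy as np

def tent_map_init(pop_size, dim, lower, upper, x0=0.37, beta=0.7):
    """Generate an initial population from a Tent chaotic sequence.

    The Tent map x_{k+1} = x_k / beta if x_k < beta, else (1 - x_k) / (1 - beta),
    yields values in (0, 1) that are then scaled into the box [lower, upper].
    """
    n = pop_size * dim
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = x / beta if x < beta else (1.0 - x) / (1.0 - beta)
        seq[i] = x
    chaos = seq.reshape(pop_size, dim)
    return lower + chaos * (upper - lower)

# Hypothetical use: 20 candidate knot-time vectors for a 6-joint trajectory,
# with each time parameter bounded between 0.1 s and 2.0 s (bounds made up).
pop = tent_map_init(20, 6, lower=np.full(6, 0.1), upper=np.full(6, 2.0))
print(pop.shape)  # (20, 6)
```

In an SSA-style optimiser, each row of `pop` would then be scored with the trajectory-time cost function before the producer/scrounger update steps; the chaotic sequence is intended to spread the initial candidates more evenly over the search space than plain uniform sampling.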
Obtaining information about the environment in these scenarios can be done by different sensors, such as myoelectric signals (Li et al., 2019b; Cheng et al., 2020; Cheng et al., 2021; Yang et al., 2021; Liu et al., 2022c), visual sensors (Jiang and Li, 2019; Jiang and Li, 2021; Tan et al., 2020; Huang et al., 2021; Liao et al., 2021; Liu X. et al., 2022; Sun et al., 2021, 2022; Yun et al., 2022b), multi-sensor fusion (Li et al., 2019c; Liao et al., 2020; Tao et al., 2022b), etc. The theoretical time required for the robot to complete the motion of a specified trajectory includes the motor execution time and the kinematic computation time. Therefore, trajectory planning is closely integrated with inverse kinematic solving. However, none of the above methods takes into account the time taken up in the trajectory planning by the inverse solution calculation. Based on the work described above, this paper also combines the unique domain theory to improve the computational efficiency of the algorithm further when the trajectory is in motion. | [
"35096796",
"33572345",
"35083202",
"35155392"
] | [
{
"pmid": "35096796",
"title": "Intelligent Detection of Steel Defects Based on Improved Split Attention Networks.",
"abstract": "The intelligent monitoring and diagnosis of steel defects plays an important role in improving steel quality, production efficiency, and associated smart manufacturing. The application of the bio-inspired algorithms to mechanical engineering problems is of great significance. The split attention network is an improvement of the residual network, and it is an improvement of the visual attention mechanism in the bionic algorithm. In this paper, based on the feature pyramid network and split attention network, the network is improved and optimised in terms of data enhancement, multi-scale feature fusion and network structure optimisation. The DF-ResNeSt50 network model is proposed, which introduces a simple modularized split attention block, which can improve the attention mechanism of cross-feature graph groups. Finally, experimental validation proves that the proposed network model has good performance and application prospects in the intelligent detection of steel defects."
},
{
"pmid": "33572345",
"title": "A Modified Sparrow Search Algorithm with Application in 3d Route Planning for UAV.",
"abstract": "The unmanned aerial vehicle (UAV) route planning problem mainly centralizes on the process of calculating the best route between the departure point and target point as well as avoiding obstructions on route to avoid collisions within a given flight area. A highly efficient route planning approach is required for this complex high dimensional optimization problem. However, many algorithms are infeasible or have low efficiency, particularly in the complex three-dimensional (3d) flight environment. In this paper, a modified sparrow search algorithm named CASSA has been presented to deal with this problem. Firstly, the 3d task space model and the UAV route planning cost functions are established, and the problem of route planning is transformed into a multi-dimensional function optimization problem. Secondly, the chaotic strategy is introduced to enhance the diversity of the population of the algorithm, and an adaptive inertia weight is used to balance the convergence rate and exploration capabilities of the algorithm. Finally, the Cauchy-Gaussian mutation strategy is adopted to enhance the capability of the algorithm to get rid of stagnation. The results of simulation demonstrate that the routes generated by CASSA are preferable to the sparrow search algorithm (SSA), particle swarm optimization (PSO), artificial bee colony (ABC), and whale optimization algorithm (WOA) under the identical environment, which means that CASSA is more efficient for solving UAV route planning problem when taking all kinds of constraints into consideration."
},
{
"pmid": "35083202",
"title": "Genetic Algorithm-Based Trajectory Optimization for Digital Twin Robots.",
"abstract": "Mobile robots have an important role in material handling in manufacturing and can be used for a variety of automated tasks. The accuracy of the robot's moving trajectory has become a key issue affecting its work efficiency. This paper presents a method for optimizing the trajectory of the mobile robot based on the digital twin of the robot. The digital twin of the mobile robot is created by Unity, and the trajectory of the mobile robot is trained in the virtual environment and applied to the physical space. The simulation training in the virtual environment provides schemes for the actual movement of the robot. Based on the actual movement data returned by the physical robot, the preset trajectory of the virtual robot is dynamically adjusted, which in turn enables the correction of the movement trajectory of the physical robot. The contribution of this work is the use of genetic algorithms for path planning of robots, which enables trajectory optimization of mobile robots by reducing the error in the movement trajectory of physical robots through the interaction of virtual and real data. It provides a method to map learning in the virtual domain to the physical robot."
},
{
"pmid": "35155392",
"title": "Genetic-Based Optimization of 3D Burch-Schneider Cage With Functionally Graded Lattice Material.",
"abstract": "A Burch-Schneider (BS) cage is a reinforcement device used in total hip arthroplasty (THA) revision surgeries to bridge areas of acetabular loss. There have been a variety of BS cages in the market, which are made of solid metal. However, significant differences in structural configuration and mechanical behavior between bone and metal implants cause bone resorption and interface loosening, and hence lead to failure of the implant in the long term. To address this issue, an optimal design framework for a cellular BS cage was investigated in this study by genetic algorithm and topology optimization, inspired by porous human bone with variable holes. In this optimization, a BS cage is constructed with functionally graded lattice material which gradually evolves to achieve better mechanical behavior by natural selection and natural genetics. Clinical constraints that allow adequate bone ingrowth and manufacturing constraint that ensures the realization of the optimized implant are considered simultaneously. A homogenization method is introduced to calculate effective mechanical properties of octet-truss lattice material in a given range of relative density. At last, comparison of the optimum lattice BS cage with a fully solid cage and a lattice cage with identical element density indicates the validity of the optimization design strategy proposed in this article."
}
] |
Frontiers in Psychology | null | PMC8982321 | 10.3389/fpsyg.2022.828545 | Study on Smart Home Interface Design Characteristics Considering the Influence of Age Difference: Focusing on Sliders | Smart homes represent an effective approach to improve one’s quality of life. Developing user interfaces that are both comfortable and understandable can assist users, particularly the elderly, embrace smart home technologies. It’s critical to concentrate on the characteristics of smart home interface design and their impact on people of various ages. Since sliders are one of the most common components utilized in the smart home user interface, this article aimed to investigate the effects of slider design characteristics (e.g., button size, track color, and sliding orientation) on user performance and preference. Thirty-four participants were recruited for the experiment (16 for the young group, aged between 18 and 44 years; 18 for the middle-aged and elderly group, aged between 45 years and above). Our results revealed that both groups had shorter task completion time, less fixation time, and saccades on horizontal sliding orientation and larger buttons, which means better user performance. For the older group, the slider with color gradient track led to better user performance, while the track color only had less effect on the performance of the younger group. In terms of user preference, the results and performance of the older group were basically consistent, while the younger group had no significant difference in sliding orientation and track color. | Related WorkTouchscreen InteractionTouchscreen technologies have become increasingly common in personal devices because of their natural and convenient human-machine interaction (Tao et al., 2018). The use of touchscreen technology has many advantages. For example, compared with input devices such as mouse and keyboard, a touchscreen can be easily operated by inexperienced users, which greatly improves user operability (Ahearne et al., 2015). Although, when using touch technology, the performance of an age-related difference is small, making the technology fully accommodate the requirements of users of different ages still needs the effort. As designers, we should focus on the characteristics of users including perceptual, psychomotor, cognitive, and physical changes. Understanding the different age capabilities and limitations can help to create higher usability interface (Niamh et al., 2012). Finding a personalized design approach based on individual preferences can empower the users and mitigate erroneous representations, especially when the elderly are represented by a highly heterogeneous group (Menghi et al., 2017). The development of accessible, ergonomic, and user-friendly interfaces can enable older people to benefit from touchscreen devices, prevent digital exclusion, and improve the quality of life (Lilian et al., 2014).Slider ComponentSliders are widely used in user interfaces for touchscreens. The interaction approach it offers is press-drag, which means the user presses the slider component at the thumb and drags it to the desired release point, mostly for stepless adjustment. But as highlighted in Nielsen Norman Group, sliders are difficult to manipulate. Both the visual style and orientation of the slider will affect the precision of the entered value (Colley et al., 2019). 
Since the area where the finger first touches the slider is covered by the finger itself, the button size of the slider is also crucial to the availability of the slider component. Therefore, our experiment selected button size, track color, and sliding orientation as the design characteristics to be studied.Button SizeButton size is an important factor in interface interaction. Its influence covers the aspects of interaction performance, user mental load, and preference. There are standards for the size of buttons. American standard (ANSI/HFES 100, 2007) recommends a minimum key size of 9.5 mm (ANSI/HFES 100, 2007), while in ISO 9241-9, the recommended button size can be the breadth of the male distal finger joint at the 95th percentile, which is approximately 22–23 mm (Standards, 2000). However, the research results of relevant scholars deviate from the recommended size of the previous standard. The optimal size of the buttons obtained by them through experiments is 19.05 mm (Zhao et al., 2007) and 20 mm (Colle and Hiszem, 2004; Chen et al., 2013), which is closer to the ISO standard. Previous studies have shown that larger button sizes lead to better interaction, and this result was reflected in different task types. For example, users perform better when the button size is large (i.e., 17.5 mm and above) in the task of digit and letter input (Tao et al., 2018). In game tasks, whether physical solid or touch buttons, users performed better using 1.1 cm2 keys than 0.6 cm2 keys (Lin et al., 2019). In the virtual reality environment, the button size of 15 mm will be unavailable. With the increase in the button size, the task completion time and the error time will decrease, and the optimal button size is 25 mm (Park et al., 2020). In the case of one-handed thumb operation of mobile handheld devices, the optimal button size obtained varies. The study of Ouyang XW concluded that increasing the size of the button on a smartphone from 8 to 14 mm can improve the task completion rate and the task efficiency in the screen mirror of older adults when they click with one-handed posture (Ouyang et al., 2021). The results of Parhi showed that the task completion time of the subjects decreased with the increase in the target size, but when the target was larger than a certain size, there was no significant difference in the click operation error rate among different sizes, among which the button size was 9.6 mm for discrete click operation and 7.7 mm for continuous click operation (Parhi et al., 2006). Yong further concluded that the completion time of single-hand operation with 7 and 10 mm key size is the shortest, while the operation error is the least and subjective satisfaction is the highest with 10 mm button size (Yong and Han, 2010). Not only in terms of the interaction effect, the size of the button will also affect the physical health of users by affecting forces, impulses, and dwell times for participants completing tasks on a touch screen (Sesto et al., 2012). However, button function (Park et al., 2020), user posture (Amrish et al., 2013), and screen size (Hancock et al., 2015) all affect the users’ demand for button size, and existing studies are unable to provide a reference for the button size design based on the tablet size in a smart home environment. Moreover, most of the existing research targets are press buttons, and there is a lack of research on sliding buttons. 
In the smart home system, most of the adjustment buttons are in the nonpolar adjustment mode, that is, the sliding buttons are the main.ColorAs a feature, the color will capture attention as a distractor (Snowden, 2002) or guide attention as a target (Biggs et al., 2015). In the design of smart home slider interactive buttons, it is worth exploring whether the color of the slider components can guide users to interact efficiently, and under what conditions the effect is the best. Color will affect the users’ cognitive performance (Bhattacharyya et al., 2014), and the individual colors differed significantly in their level of guidance of attention (Andersen and Maier, 2019). In addition, the combination of the color of the readability of the information display system also has a great influence (Humar et al., 2008). In the design of data visualization, two types of phenomena should be considered, namely, simultaneous color contrast (Mittelstädt et al., 2014) and successive contrast (Geisler, 1978). That is, the interaction between adjacent or sequentially displayed colors will lead to perceptual bias. When the legibility of information is low, the users’ reading time will increase (Naujoks et al., 2019). Existing studies on interface color include the color contrast between buttons and text (Jung et al., 2021) and the influence of interface background color on user emotion (Cheng et al., 2019). As for the color study of buttons, Huang found that the color combination of the graphics and background in the button icon affected the visual search performance. The higher the color contrast, the better the user search performance of the subject (Huang, 2008). Sha confirmed this conclusion in elderly subjects (Sha et al., 2017). However, these studies are all based on touch screen press buttons, and there is still a gap in the study of the track color of slider components.OrientationFor sliding components, the interaction modes mainly include horizontal sliding, vertical sliding, and annular sliding. Poor sliding component design will lead to problems such as mismatch between input results and user intentions, resulting in low user experience. Colley studied the impact of visual style and sliding orientation of touch screen slider on input accuracy and compared the difference between horizontal and vertical sliding (Colley et al., 2019). But most of the research on sliders is on non-touch physical sliders. For example, Scott reported that the input value in the horizontal direction was slightly lower than that in the vertical direction (Scott and Huskisson, 1979), and Paul studied the influence of the end point and direction effect of sliding components (Paul-Dauphin et al., 1999). Which means there are few studies on the sliding components of touch screens.Study HypothesesIn order to explore the influence mechanism of different forms of sliding buttons in the smart home interface on user performance of different age groups, this study carried out experimental research from sliding direction, sliding track color, and slider button size to explore the optimal recognition efficiency and user preference, so as to optimize the design.Hypothesis 1: The larger the button size, the better the task performance.Hypothesis 2: Slide track color has an indicative role for the user.Hypothesis 3: Users will prefer horizontal swiping interaction.Hypothesis 1 was based on the common findings of previous scholars. Hypothesis 2 was proposed according to our expectations on the color that can guide attention. 
Hypothesis 3 was proposed based on Colley’s previous research results (Colley et al., 2019). | [
"26699535",
"23964418",
"31422277",
"30791414",
"23021630",
"15513716",
"664303",
"25600331",
"19615467",
"10568628",
"317239",
"22768644",
"11934005"
] | [
{
"pmid": "26699535",
"title": "Touch-screen technology usage in toddlers.",
"abstract": "OBJECTIVE\nTo establish the prevalence and patterns of use of touch-screen technologies in the toddler population.\n\n\nDESIGN\nParental questionnaires were completed for children aged 12 months to 3 years examining access to touch-screen devices and ability to perform common forms of interaction with touch-screen technologies.\n\n\nRESULTS\nThe 82 questionnaires completed on typically developing children revealed 71% of toddlers had access to touch-screen devices for a median of 15 min (IQR: 9.375-26.25) per day. By parental report, 24 months was the median age of ability to swipe (IQR: 19.5-30.5), unlock (IQR: 20.5-31.5) and active looking for touch-screen features (IQR: 22-30.5), while 25 months (IQR: 21-31.25) was the median age of ability to identify and use specific touch-screen features. Overall, 32.8% of toddlers could perform all four skills.\n\n\nCONCLUSIONS\nFrom 2 years of age toddlers have the ability to interact purposefully with touch-screen devices and demonstrate a variety of common skills required to utilise touch-screen technology."
},
{
"pmid": "23964418",
"title": "Effect of sitting or standing on touch screen performance and touch characteristics.",
"abstract": "OBJECTIVE\nThe aim of this study was to evaluate the effect of sitting and standing on performance and touch characteristics during a digit entry touch screen task in individuals with and without motor-control disabilities.\n\n\nBACKGROUND\nPreviously, researchers of touch screen design have not considered the effect of posture (sitting vs. standing) on touch screen performance (accuracy and timing) and touch characteristics (force and impulse).\n\n\nMETHOD\nParticipants with motor-control disabilities (n = 15) and without (n = 15) completed a four-digit touch screen number entry task in both sitting and standing postures. Button sizes varied from 10 mm to 30 mm (5-mm increments), and button gap was 3 mm or 5 mm.\n\n\nRESULTS\nParticipants had more misses and took longer to complete the task during standing for smaller button sizes (< 20 mm). At larger button sizes, performance was similar for both sitting and standing. In general, misses, time to complete task, and touch characteristics were increased for standing. Although disability affected performance (misses and timing), similar trends were observed for both groups across posture and button size.\n\n\nCONCLUSION\nStanding affects performance at smaller button sizes (< 20 mm). For participants with and without motor-control disabilities, standing led to greater exerted force and impulse.\n\n\nAPPLICATION\nAlong with interface design considerations, environmental conditions should also be considered to improve touch screen accessibility and usability."
},
{
"pmid": "31422277",
"title": "The attentional guidance of individual colours in increasingly complex displays.",
"abstract": "The use of colours is a prevalent and effective tool for improving design. Understanding the effect of colours on attention is crucial for designers that wish to understand how their interfaces will be used. Previous research has consistently shown that attention is biased towards colour. However, despite previous evidence indicating that colours should be treated individually, it has thus far not been investigated whether this difference is reflected in individual effects on attention. To address this, a visual search experiment was conducted that tested the attentional guidance of six individual colours (red,blue, green, yellow, orange, purple) in increasingly complex displays. Results showed that the individual colours differed significantly in their level of guidance of attention, and that these differences increased as the visual complexity of the display increased. Implications for visual design and future research on applying colour in visual attention research and design are discussed."
},
{
"pmid": "30791414",
"title": "A Human⁻Machine Interface Based on Eye Tracking for Controlling and Monitoring a Smart Home Using the Internet of Things.",
"abstract": "People with severe disabilities may have difficulties when interacting with their home devices due to the limitations inherent to their disability. Simple home activities may even be impossible for this group of people. Although much work has been devoted to proposing new assistive technologies to improve the lives of people with disabilities, some studies have found that the abandonment of such technologies is quite high. This work presents a new assistive system based on eye tracking for controlling and monitoring a smart home, based on the Internet of Things, which was developed following concepts of user-centered design and usability. With this system, a person with severe disabilities was able to control everyday equipment in her residence, such as lamps, television, fan, and radio. In addition, her caregiver was able to monitor remotely, by Internet, her use of the system in real time. Additionally, the user interface developed here has some functionalities that allowed improving the usability of the system as a whole. The experiments were divided into two steps. In the first step, the assistive system was assembled in an actual home where tests were conducted with 29 participants without disabilities. In the second step, the system was tested with online monitoring for seven days by a person with severe disability (end-user), in her own home, not only to increase convenience and comfort, but also so that the system could be tested where it would in fact be used. At the end of both steps, all the participants answered the System Usability Scale (SUS) questionnaire, which showed that both the group of participants without disabilities and the person with severe disabilities evaluated the assistive system with mean scores of 89.9 and 92.5, respectively."
},
{
"pmid": "23021630",
"title": "Touch screen performance by individuals with and without motor control disabilities.",
"abstract": "Touch technology is becoming more prevalent as functionality improves and cost decreases. Therefore, it is important that this technology is accessible to users with diverse abilities. The objective of this study was to investigate the effects of button and gap size on performance by individuals with varied motor abilities. Participants with (n = 38) and without (n = 15) a motor control disability completed a digit entry task. Button size ranged from 10 to 30 mm and gap size was either 1 or 3 mm. Results indicated that as button size increased, there was a decrease in misses, errors, and time to complete tasks. Performance for the non-disabled group plateaued at button size 20 mm, with minimal, if any gains observed with larger button sizes. In comparison, the disabled group's performance continued to improve as button size increased. Gap size did not affect user performance. These results may help to improve accessibility of touch technology."
},
{
"pmid": "15513716",
"title": "Standing at a kiosk: effects of key size and spacing on touch screen numeric keypad performance and user preference.",
"abstract": "Touch screen input keys compete with other information for limited screen space. The present study estimated the smallest key size that would not degrade performance or user satisfaction. Twenty participants used finger touches to enter one, four or 10 digits in a numeric keypad displayed on a capacitive touch screen, while standing in front of a touch screen kiosk. Key size (10, 15, 20, 25 mm square) and edge-to-edge key spacing (1, 3 mm) were factorially combined. Performance was evaluated with response time and errors, and user preferences were obtained. Spacing had no measurable effects. Entry times were longer and errors were higher for smaller key sizes, but no significant differences were found between key sizes of 20 and 25 mm. Participants also preferred 20 mm keys to smaller keys, and they were indifferent between 20 and 25 mm keys. Therefore, a key size of 20 mm was found to be sufficiently large for land-on key entry."
},
{
"pmid": "25600331",
"title": "The effects of display size on performance.",
"abstract": "We examined the systematic effects of display size on task performance as derived from a standard perceptual and cognitive test battery. Specifically, three experiments examined the influence of varying viewing conditions on response speed, response accuracy and subjective workload at four differing screen sizes under three different levels of time pressure. Results indicated a ubiquitous effect for time pressure on all facets of response while display size effects were contingent upon the nature of the viewing condition. Thus, performance decrement and workload elevation were evident only with the smallest display size under the two most restrictive levels of time pressure. This outcome generates a lower boundary threshold for display screen size for this order of task demand. Extrapolations to the design and implementation of all display sizes and forms of cognitive and psychomotor demand are considered."
},
{
"pmid": "19615467",
"title": "The technology acceptance model: its past and its future in health care.",
"abstract": "Increasing interest in end users' reactions to health information technology (IT) has elevated the importance of theories that predict and explain health IT acceptance and use. This paper reviews the application of one such theory, the Technology Acceptance Model (TAM), to health care. We reviewed 16 data sets analyzed in over 20 studies of clinicians using health IT for patient care. Studies differed greatly in samples and settings, health ITs studied, research models, relationships tested, and construct operationalization. Certain TAM relationships were consistently found to be significant, whereas others were inconsistent. Several key relationships were infrequently assessed. Findings show that TAM predicts a substantial portion of the use or acceptance of health IT, but that the theory may benefit from several additions and modifications. Aside from improved study quality, standardization, and theoretically motivated additions to the model, an important future direction for TAM is to adapt the model specifically to the health care context, using beliefs elicitation methods."
},
{
"pmid": "10568628",
"title": "Bias and precision in visual analogue scales: a randomized controlled trial.",
"abstract": "Various types of visual analogue scales (VAS) are used in epidemiologic and clinical research. This paper reports on a randomized controlled trial to investigate the effects of variations in the orientation and type of scale on bias and precision in cross-sectional and longitudinal analyses. This trial was included in the pilot study of the SU.VI.MAX (supplementation by antioxidant vitamins and minerals) prevention trial in France in 1994. Six types of VAS (simple, middle-marked, graphic rating, graduated, graduated-numbered, and numerical rating) and two orientations (horizontal and vertical) were used to measure three symptoms of ear, nose, and throat infection at 2-month intervals in 870 subjects. Differences between scales were analyzed by comparing variances (Levene's test) and means (variance-covariance analysis for repeated measures). Scale characteristics were shown to influence the proportion of zero and low values (i.e., there was a floor effect), but not mean scores. The precision of measurements varied cross-sectionally according to the type of scale, but no differences were observed in the precision of measurement of change over time. In conclusion, the characteristics of VAS seem to be important in cross-sectional studies, particularly when symptoms of low or high intensity are being measured. Researchers should try to reach a consensus on what type of VAS to use if studies are to be compared."
},
{
"pmid": "317239",
"title": "Vertical or horizontal visual analogue scales.",
"abstract": "Vertical and horizontal visual analogue scales have been compared in the measurement of pain. There was a good correlation between the 2 scales, but the scores from horizontal scales tended to be slightly lower than those from vertical scales."
},
{
"pmid": "22768644",
"title": "Effect of touch screen button size and spacing on touch characteristics of users with and without disabilities.",
"abstract": "OBJECTIVE\nThe aim of this study was to investigate the effect of button size and spacing on touch characteristics (forces, impulses, and dwell times) during a digit entry touch screen task. A secondary objective was to investigate the effect of disability on touch characteristics.\n\n\nBACKGROUND\nTouch screens are common in public settings and workplaces. Although research has examined the effect of button size and spacing on performance, the effect on touch characteristics is unknown.\n\n\nMETHOD\nA total of 52 participants (n = 23, fine motor control disability; n = 14, gross motor control disability; n = 15, no disability) completed a digit entry task. Button sizes varied from 10 mm to 30 mm, and button spacing was 1 mm or 3 mm.\n\n\nRESULTS\nTouch characteristics were significantly affected by button size. The exerted peak forces increased 17% between the largest and the smallest buttons, whereas impulses decreased 28%. Compared with the fine motor and nondisabled groups, the gross motor group had greater impulses (98% and 167%, respectively) and dwell times (60% and 129%, respectively). Peak forces were similar for all groups.\n\n\nCONCLUSION\nButton size but not spacing influenced touch characteristics during a digit entry task. The gross motor group had significantly greater dwell times and impulses than did the fine motor and nondisabled groups.\n\n\nAPPLICATION\nResearch on touch characteristics, in conjunction with that on user performance, can be used to guide human computer interface design strategies to improve accessibility of touch screen interfaces. Further research is needed to evaluate the effect of the exerted peak forces and impulses on user performance and fatigue."
},
{
"pmid": "11934005",
"title": "Visual attention to color: parvocellular guidance of attentional resources?",
"abstract": "Although transient changes in luminance have been well documented to automatically attract attention to their location, experiments looking at abrupt changes in color have failed to find similar attentional capture. These results are consistent with current theories of the role of the magnocellular (M) and parvocellular (P) streams that postulate that the M stream, which is \"color-blind,\" plays the dominant role in guiding attention and eye movements. The experiment reported here used stimuli that contained only information defined by color, and masked residual luminance information with dynamic noise, to assess the capacity of purely chromatic cues to automatically guide spatial attention. Such stimuli were as effective as those containing large luminance signals in guiding attention. To the extent that these purely chromatic signals isolated the P stream, these results suggest that this stream is also capable of automatic attentional capture. Hence, color vision not only aids target identification but also is a strong aid for target detection and localization."
}
] |
Journal of Healthcare Informatics Research | null | PMC8982732 | 10.1007/s41666-020-00072-6 | Modelling Patient Behaviour Using IoT Sensor Data: a Case Study to Evaluate Techniques for Modelling Domestic Behaviour in Recovery from Total Hip Replacement Surgery | The UK health service sees around 160,000 total hip or knee replacements every year and this number is expected to rise with an ageing population. Expectations of surgical outcomes are changing alongside demographic trends, whilst aftercare may be fractured as a result of resource limitations. Conventional assessments of health outcomes must evolve to keep up with these changing trends. Health outcomes may be assessed largely by self-report using Patient Reported Outcome Measures (PROMs), such as the Oxford Hip or Oxford Knee Score, in the months up to and following surgery. Though widely used, many PROMs have methodological limitations and there is debate about how to interpret results and definitions of clinically meaningful change. With the development of a home-monitoring system, there is opportunity to characterise the relationship between PROMs and behaviour in a natural setting and to develop methods of passive monitoring of outcome and recovery after surgery. In this paper, we discuss the motivation and technology used in long-term continuous observation of movement, sleep and domestic routine for healthcare applications, such as the HEmiSPHERE project for hip and knee replacement patients. In this case study, we evaluate trends evident in data of two patients, collected over a 3-month observation period post-surgery, by comparison with scores from PROMs for sleep and movement quality, and by comparison with a third control home. We find that accelerometer and indoor localisation data correctly highlight long-term trends in sleep and movement quality and can be used to predict sleep and wake times and measure sleep and wake routine variance over time, whilst indoor localisation provides context for the domestic routine and mobility of the patient. Finally, we discuss a visual method of sharing findings with healthcare professionals. | Related WorkKey indicators relevant to PROMS include movement patterns (such as room to room transfers), patterns of improvement (establishment or divergence from routine), high-level activities undertaken (such as cooking or cleaning) and sleep (e.g. hours sleeping, quality of sleep). This study focuses on sleep, movement and domestic routine by analysing and classifying three attributes of patient behaviour—indoor location, movement and activity class. In this section of the paper, the authors briefly introduce literature on the methods selected to perform the analysis.SPHERE: a Sensor Platform for HealthCare in a Residential EnvironmentSPHERE is an interdisciplinary research project which aims to develop sensor technologies capable of supporting a variety of practical use cases, including healthcare- and ambient-assisted living outcomes. An additional goal of SPHERE is to build systems that are considered acceptable by the public and which are flexible and powerful enough to function well in a broad variety of domestic environments [35, 36].‘Smart home’ systems development has primarily taken place in laboratory settings [1], or, as in the SPHERE project, in a customised home [30]. In 2017, the SPHERE project began to deploy a multimodal sensor network (Fig. 1) into dozens of homes in the South West of England. 
In 2018, the HEmiSPHERE project began deploying the SPHERE sensor network in the homes of patients as they underwent total hip or knee replacement surgery.
Fig. 1The SPHERE sensor platform consists of multiple subnetworks of sensors including environmental sensors, smart utility meters, cameras and wearable sensors. Sensors in the network communicate using Bluetooth Low-Energy (BLE) gatewaysAs shown in Fig. 1, the SPHERE sensor network provides an overlapping mesh of sensors within a domestic environment. The network incorporates a number of sensor subsystems, including video systems, environmental sensors, electricity and water meters and wearable sensors communicating over Bluetooth Low-Energy (BLE) connections. In this research, the authors focus on RSSI and accelerometer data collected from the wearable subsystem.Indoor LocalisationIndoor localisation [23, 33] is an important area of research for behavioural analysis in residential healthcare. The ability to predict the location of a patient not only gives insight into domestic routine and habitation but allows other information to be physically contextualised.The SPHERE network provides a mesh of Received Signal Strength Indicator (RSSI) fields. As in literature [23, 33], RSSI has been used to fingerprint locations within a space by learning the discriminant RSSI vectors from a moving average [23].Initial testing of RSSI fingerprinting within the SPHERE sensor network [13], using a multilayer perceptron network for location classification, yielded positive results. On a single sample home, the network achieved above 80% classification accuracy on a limited set of indoor locations.Measuring Movement with Wearable AccelerometersAccelerometers are sensors that measure the rate of change in velocity and can be used to measure movement of a person [22, 37]. The accelerometers used in the SPHERE sensor network (Fig. 1) are tri-axial, meaning they record acceleration in three dimensions, x, y and z. In [22, 37], wrist-worn accelerometers are used to monitor acceleration magnitude (Equation 1), which is the square root of the acceleration vector. The magnitude gives a single signal which describes the magnitude of acceleration regardless of the axis, or direction, of acceleration. Magnitude is useful in modelling the force of movement, when specific orientation information is not necessary.
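To make Equation (1) below concrete, a minimal sketch of how such a magnitude signal could be computed from tri-axial samples (assuming NumPy; the array shapes and sample values are hypothetical, and this is not the SPHERE processing pipeline itself):

```python
import numpy as np

def acceleration_magnitude(xyz):
    """Per-sample magnitude A = sqrt(x^2 + y^2 + z^2) of tri-axial data.

    xyz: array of shape (n_samples, 3) holding x, y and z acceleration.
    Returns a single (n_samples,) signal independent of sensor orientation.
    """
    return np.sqrt(np.sum(np.square(xyz), axis=1))

# Hypothetical example: three synthetic wrist-worn samples (in units of g).
samples = np.array([[0.00, 0.00, 1.00],
                    [0.30, 0.10, 0.95],
                    [0.00, 0.00, 1.02]])
print(acceleration_magnitude(samples))  # approx. [1.000, 1.001, 1.020]
```

Because the axis information is discarded, the same computation applies however the wearable happens to be oriented on the wrist, which is why magnitude is often preferred when only the intensity of movement matters.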
A = \sqrt{x^{2} + y^{2} + z^{2}}    (1)
The wrist-worn sensor will show spikes in magnitude indicative of movement such as ambulation (e.g. walking or running), hand or arm movements (e.g. chopping vegetables) or posture change (e.g. rolling over in bed). In aggregate, movement data can also be used to study activity levels over an extended period of time, such as sleep quality and rhythm [2], using for example sleep quality and consistency measures proposed and used in actigraphy.Classifying Posture and Ambulation ActivityActivity recognition using wearable and mobile devices has been a major focus in recent years [3, 14, 17, 25, 28]. From a device perspective, mobile phones, smart watches and wrist bands have been the dominant source of data, which normally captures the acceleration signal around the body of the users. In this paper, we also focus on the 3-axis acceleration data obtained from a wrist band, which is one of the standard approaches used in the field. | [
"12749557",
"27294030",
"23110821",
"2748771",
"26483921",
"12777182",
"26224824",
"21924751",
"24592460",
"20817472",
"21665167",
"26288278"
] | [
{
"pmid": "12749557",
"title": "The role of actigraphy in the study of sleep and circadian rhythms.",
"abstract": "In summary, although actigraphy is not as accurate as PSG for determining some sleep measurements, studies are in general agreement that actigraphy, with its ability to record continuously for long time periods, is more reliable than sleep logs which rely on the patients' recall of how many times they woke up or how long they slept during the night and is more reliable than observations which only capture short time periods. Actigraphy can provide information obtainable in no other practical way. It can also have a role in the medical care of patients with sleep disorders. However, it should not be held to the same expectations as polysomnography. Actigraphy is one-dimensional, whereas polysomnography comprises at least 3 distinct types of data (EEG, EOG, EMG), which jointly determine whether a person is asleep or awake. It is therefore doubtful whether actigraphic data will ever be informationally equivalent to the PSG, although progress on hardware and data processing software is continuously being made. Although the 1995 practice parameters paper determined that actigraphy was not appropriate for the diagnosis of sleep disorders, more recent studies suggest that for some disorders, actigraphy may be more practical than PSG. While actigraphy is still not appropriate for the diagnosis of sleep disordered breathing or of periodic limb movements in sleep, it is highly appropriate for examining the sleep variability (i.e., night-to-night variability) in patients with insomnia. Actigraphy is also appropriate for the assessment of and stability of treatment effects of anything from hypnotic drugs to light treatment to CPAP, particularly if assessments are done before and after the start of treatment. A recent independent review of the actigraphy literature by Sadeh and Acebo reached many of these same conclusions. Some of the research studies failed to find relationships between sleep measures and health-related symptoms. The interpretation of these data is also not clear-cut. Is it that the actigraph is not reliable enough to the access the relationship between sleep changes and quality of life measures, or, is it that, in fact, there is no relationship between sleep in that population and quality of life measures? Other studies of sleep disordered breathing, where actigraphy was not used and was not an outcome measure also failed to find any relationship with quality of life. Is it then the actigraph that is not reliable or that the associations just do not exist? The one area where actigraphy can be used for clinical diagnosis is in the evaluation of circadian rhythm disorders. Actigraphy has been shown to be very good for identifying rhythms. Results of actigraphic recordings correlate well with measurements of melatonin and of core body temperature rhythms. Activity records also show sleep disturbance when sleep is attempted at an unfavorable phase of the circadian cycle. Actigraphy therefore would be particularly good for aiding in the diagnosis of delayed or advanced sleep phase syndrome, non-24-hour-sleep syndrome and in the evaluation of sleep disturbances in shift workers. It must be remembered, however, that overt rest-activity rhythms are susceptible to various masking effects, so they may not always show the underlying rhythm of the endogenous circadian pacemaker. 
In conclusion, the latest set of research articles suggest that in the clinical setting, actigraphy is reliable for evaluating sleep patterns in patients with insomnia, for studying the effect of treatments designed to improve sleep, in the diagnosis of circadian rhythm disorders (including shift work), and in evaluating sleep in individuals who are less likely to tolerate PSG, such as infants and demented elderly. While actigraphy has been used in research studies for many years, up to now, methodological issues had not been systematically addressed in clinical research and practice. Those issues have now been addressed and actigraphy may now be reaching the maturity needed for application in the clinical arena."
},
{
"pmid": "27294030",
"title": "'nparACT' package for R: A free software tool for the non-parametric analysis of actigraphy data.",
"abstract": "For many studies, participants' sleep-wake patterns are monitored and recorded prior to, during and following an experimental or clinical intervention using actigraphy, i.e. the recording of data generated by movements. Often, these data are merely inspected visually without computation of descriptive parameters, in part due to the lack of user-friendly software. To address this deficit, we developed a package for R Core Team [6], that allows computing several non-parametric measures from actigraphy data. Specifically, it computes the interdaily stability (IS), intradaily variability (IV) and relative amplitude (RA) of activity and gives the start times and average activity values of M10 (i.e. the ten hours with maximal activity) and L5 (i.e. the five hours with least activity). Two functions compute these 'classical' parameters and handle either single or multiple files. Two other functions additionally allow computing an L-value (i.e. the least activity value) for a user-defined time span termed 'Lflex' value. A plotting option is included in all functions. The package can be downloaded from the Comprehensive R Archives Network (CRAN). •The package 'nparACT' for R serves the non-parametric analysis of actigraphy data.•Computed parameters include interdaily stability (IS), intradaily variability (IV) and relative amplitude (RA) as well as start times and average activity during the 10 h with maximal and the 5 h with minimal activity (i.e. M10 and L5)."
},
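The measures listed in the nparACT entry above (IS, IV, RA, M10, L5) follow the standard non-parametric definitions used in the actigraphy literature. The sketch below is a minimal NumPy illustration of those definitions, not the nparACT package itself (which is an R package); the hourly binning, the circular search for the M10/L5 windows, the omission of start times, and the synthetic example series are assumptions of this sketch.

```python
import numpy as np

def nonparametric_measures(activity, bins_per_day=24):
    """Minimal sketch of IS, IV, RA, M10 and L5 for an activity series.

    `activity` is assumed to be a 1-D array of equally spaced, hourly
    aggregated counts covering several consecutive days.
    """
    x = np.asarray(activity, dtype=float)
    p = bins_per_day
    days = x.size // p
    x = x[: days * p]                 # drop a trailing partial day, if any
    n = x.size
    grand_mean = x.mean()
    total_ss = np.sum((x - grand_mean) ** 2)

    # Interdaily stability: variance of the average 24 h profile
    # relative to the total variance of the series.
    profile = x.reshape(days, p).mean(axis=0)
    IS = (n * np.sum((profile - grand_mean) ** 2)) / (p * total_ss)

    # Intradaily variability: mean squared successive difference
    # relative to the total variance.
    IV = (n * np.sum(np.diff(x) ** 2)) / ((n - 1) * total_ss)

    # M10 / L5: most active 10 h and least active 5 h of the average
    # profile, searched over consecutive windows (wrapping past midnight).
    def window_means(width):
        wrapped = np.concatenate([profile, profile[: width - 1]])
        return np.array([wrapped[i:i + width].mean() for i in range(p)])

    M10 = window_means(10).max()
    L5 = window_means(5).min()
    RA = (M10 - L5) / (M10 + L5)
    return {"IS": IS, "IV": IV, "RA": RA, "M10": M10, "L5": L5}

# Example: one week of synthetic hourly counts with a clear day/night pattern.
rng = np.random.default_rng(0)
hours = np.arange(7 * 24)
demo = 50 + 40 * np.sin(2 * np.pi * (hours % 24) / 24) + rng.normal(0, 5, hours.size)
print(nonparametric_measures(np.clip(demo, 0, None)))
```

Interdaily stability compares the variance of the average 24 h profile with the total variance, while intradaily variability compares the mean squared hour-to-hour difference with the total variance, which is why both reduce to simple ratios of sums of squares.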
{
"pmid": "23110821",
"title": "Inertial sensor motion analysis of gait, sit-stand transfers and step-up transfers: differentiating knee patients from healthy controls.",
"abstract": "Patients undergoing total knee replacement for end stage knee osteoarthritis (OA) become increasingly younger and more demanding. Consequently, outcome assessment tools need to evolve toward objective performance-based measures. We applied a novel approach toward ambulatory biomechanical assessment of physical function using a single inertial sensor located at the pelvis to derive various motion parameters during activities of daily living. We investigated the potential of a clinically feasible battery of tests to define relevant parameters of physical function. We compared preoperative measures of end stage knee OA patients to healthy subjects. Our results show that measures of time yield the highest discriminative capacity to differentiate between groups. Additionally we found disease-dependent and task-specific alterations of movement for inertial sensor-derived motion parameters with good discriminative capacity. The inertial sensor's output quantities seem to capture another clinically relevant dimension of physical function that is supplementary to time. This study demonstrates the potential of inertial sensor-based motion analysis and provides a standardized test feasible for a routine clinical application in the longitudinal follow-up."
},
{
"pmid": "2748771",
"title": "The Pittsburgh Sleep Quality Index: a new instrument for psychiatric practice and research.",
"abstract": "Despite the prevalence of sleep complaints among psychiatric patients, few questionnaires have been specifically designed to measure sleep quality in clinical populations. The Pittsburgh Sleep Quality Index (PSQI) is a self-rated questionnaire which assesses sleep quality and disturbances over a 1-month time interval. Nineteen individual items generate seven \"component\" scores: subjective sleep quality, sleep latency, sleep duration, habitual sleep efficiency, sleep disturbances, use of sleeping medication, and daytime dysfunction. The sum of scores for these seven components yields one global score. Clinical and clinimetric properties of the PSQI were assessed over an 18-month period with \"good\" sleepers (healthy subjects, n = 52) and \"poor\" sleepers (depressed patients, n = 54; sleep-disorder patients, n = 62). Acceptable measures of internal homogeneity, consistency (test-retest reliability), and validity were obtained. A global PSQI score greater than 5 yielded a diagnostic sensitivity of 89.6% and specificity of 86.5% (kappa = 0.75, p less than 0.001) in distinguishing good and poor sleepers. The clinimetric and clinical properties of the PSQI suggest its utility both in psychiatric clinical practice and research activities."
},
{
"pmid": "26483921",
"title": "Nonparametric methods in actigraphy: An update.",
"abstract": "Circadian rhythmicity in humans has been well studied using actigraphy, a method of measuring gross motor movement. As actigraphic technology continues to evolve, it is important for data analysis to keep pace with new variables and features. Our objective is to study the behavior of two variables, interdaily stability and intradaily variability, to describe rest activity rhythm. Simulated data and actigraphy data of humans, rats, and marmosets were used in this study. We modified the method of calculation for IV and IS by modifying the time intervals of analysis. For each variable, we calculated the average value (IVm and ISm) results for each time interval. Simulated data showed that (1) synchronization analysis depends on sample size, and (2) fragmentation is independent of the amplitude of the generated noise. We were able to obtain a significant difference in the fragmentation patterns of stroke patients using an IVm variable, while the variable IV60 was not identified. Rhythmic synchronization of activity and rest was significantly higher in young than adults with Parkinson׳s when using the ISM variable; however, this difference was not seen using IS60. We propose an updated format to calculate rhythmic fragmentation, including two additional optional variables. These alternative methods of nonparametric analysis aim to more precisely detect sleep-wake cycle fragmentation and synchronization."
},
{
"pmid": "12777182",
"title": "Hip disability and osteoarthritis outcome score (HOOS)--validity and responsiveness in total hip replacement.",
"abstract": "BACKGROUND\nThe aim of the study was to evaluate if physical functions usually associated with a younger population were of importance for an older population, and to construct an outcome measure for hip osteoarthritis with improved responsiveness compared to the Western Ontario McMaster osteoarthritis score (WOMAC LK 3.0).\n\n\nMETHODS\nA 40 item questionnaire (hip disability and osteoarthritis outcome score, HOOS) was constructed to assess patient-relevant outcomes in five separate subscales (pain, symptoms, activity of daily living, sport and recreation function and hip related quality of life). The HOOS contains all WOMAC LK 3.0 questions in unchanged form. The HOOS was distributed to 90 patients with primary hip osteoarthritis (mean age 71.5, range 49-85, 41 females) assigned for total hip replacement for osteoarthritis preoperatively and at six months follow-up.\n\n\nRESULTS\nThe HOOS met set criteria of validity and responsiveness. It was more responsive than WOMAC regarding the subscales pain (SRM 2.11 vs. 1.83) and other symptoms (SRM 1.83 vs. 1.28). The responsiveness (SRM) for the two added subscales sport and recreation and quality of life were 1.29 and 1.65, respectively. Patients <or= 66 years of age (range 49-66) reported higher responsiveness in all five subscales than patients >66 years of age (range 67-85) (Pain SRM 2.60 vs. 1.97, other symptoms SRM 3.0 vs. 1.60, activity of daily living SRM 2.51 vs. 1.52, sport and recreation function SRM 1.53 vs. 1.21 and hip related quality of life SRM 1.95 vs. 1.57).\n\n\nCONCLUSION\nThe HOOS 2.0 appears to be useful for the evaluation of patient-relevant outcome after THR and is more responsive than the WOMAC LK 3.0. The added subscales sport and recreation function and hip related quality of life were highly responsive for this group of patients, with the responsiveness being highest for those younger than 66."
},
{
"pmid": "26224824",
"title": "The epidemiology of revision total knee and hip arthroplasty in England and Wales: a comparative analysis with projections for the United States. A study using the National Joint Registry dataset.",
"abstract": "Total knee arthroplasty (TKA) and total hip arthroplasty (THA) are recognised and proven interventions for patients with advanced arthritis. Studies to date have demonstrated a steady increase in the requirement for primary and revision procedures. Projected estimates made for the United States show that by 2030 the demand for primary TKA will grow by 673% and for revision TKA by 601% from the level in 2005. For THA the projected estimates are 174% and 137% for primary and revision surgery, respectively. The purpose of this study was to see if those predictions were similar for England and Wales using data from the National Joint Registry and the Office of National Statistics. Analysis of data for England and Wales suggest that by 2030, the volume of primary and revision TKAs will have increased by 117% and 332%, respectively between 2012 and 2030. The data for the United States translates to a 306% cumulative rate of increase between 2012 and 2030 for revision surgery, which is similar to our predictions for England and Wales. The predictions from the United States for primary TKA were similar to our upper limit projections. For THA, we predicted an increase of 134% and 31% for primary and revision hip surgery, respectively. Our model has limitations, however, it highlights the economic burden of arthroplasty in the future in England and Wales as a real and unaddressed problem. This will have significant implications for the provision of health care and the management of orthopaedic services in the future."
},
{
"pmid": "21924751",
"title": "Accelerometer validity and placement for detection of changes in physical activity in dogs under controlled conditions on a treadmill.",
"abstract": "The objective of the research was to determine the optimal location and method of attachment for accelerometer-based motion sensors, and to validate their ability to differentiate rest and increases in speed in healthy dogs moving on a treadmill. Two accelerometers were placed on a harness between the scapulae of dogs with one in a pouch and one directly attached to the harness. Two additional accelerometers were placed (pouched and not pouched) ventrally on the dog's collar. Data were recorded in 1s epochs with dogs moving in stages lasting 3 min each on a treadmill: (1) at rest, lateral recumbency, (2) treadmill at 0% slope, 3 km/h, (3) treadmill at 0% slope, 5 km/h, (4) treadmill at 0% slope, 7 km/h, (5) treadmill at 5% slope, 5 km/h, and (6) treadmill at 5% slope, 7 km/h. Only the harness with the accelerometer in a pouch along the dorsal midline yielded statistically significant increases (P<0.05) in vector magnitude as walking speed of the dogs increased (5-7 km/h) while on the treadmill. Statistically significant increases in vector magnitude were detected in the dogs as the walking speed increased from 5 to 7 km/h, however, changes in vector magnitude were not detected when activity intensity was increased as a result of walking up a 5% grade. Accelerometers are a valid and objective tool able to discriminate between and monitor different levels of activity in dogs in terms of speed of movement but not in energy expenditure that occurs with movement up hill."
},
{
"pmid": "24592460",
"title": "A survey on ambient-assisted living tools for older adults.",
"abstract": "In recent years, we have witnessed a rapid surge in assisted living technologies due to a rapidly aging society. The aging population, the increasing cost of formal health care, the caregiver burden, and the importance that the individuals place on living independently, all motivate development of innovative-assisted living technologies for safe and independent aging. In this survey, we will summarize the emergence of 'ambient-assisted living\" (AAL) tools for older adults based on ambient intelligence paradigm. We will summarize the state-of-the-art AAL technologies, tools, and techniques, and we will look at current and future challenges."
},
{
"pmid": "20817472",
"title": "The importance to including objective functional outcomes in the clinical follow up of total knee arthroplasty patients.",
"abstract": "In clinical practice, it is increasingly important to assess patients' daily functionality routinely and objectively. Acceleration-based gait analysis (AGA) has shown to be reliable and technically suitable for routine clinical use outside the laboratory. This study investigated the suitability of AGA for measuring function in orthopaedic patients with symptomatic gonarthrosis listed for total knee arthroplasty (TKA) by investigating (a) the ability of AGA to distinguish patients from healthy subjects, (b) the sensitivity to gait changes of AGA in assessing recovery following total knee arthroplasty in a subpopulation, and (c) correlations between AGA parameters and clinical scales. Gait was assessed using AGA in 24 patients with symptomatic gonarthrosis listed for TKA, and in 24 healthy subjects. AGA parameters (e.g. speed, asymmetry) and clinical scales (e.g. KSS) were used to monitor progress in 12 patients 3 months after TKA. The Mann-Whitney-U test, Receiver Operating Characteristic (ROC) curves, repeated measurement ANOVA and Pearson correlations were performed. AGA differentiated pathological from healthy gait. The area under the ROC curve, sensitivity and specificity values were high for speed, step frequency and step length. Different recovery profiles were found, with clinical scales showing faster recovery rates. None or only weak correlations were found between AGA and clinical scores. AGA was found to be of clinical relevance in identifying and monitoring patients with symptomatic gonarthrosis in orthopaedic practice, providing objective and additional information about function beyond clinical scales. This, together with the fact that AGA can be applied routinely, suggests the suitability of AGA for use in rehabilitation programs."
},
{
"pmid": "21665167",
"title": "Comparison of self-reported knee injury and osteoarthritis outcome score to performance measures in patients after total knee arthroplasty.",
"abstract": "OBJECTIVE\nTo characterize patient outcomes after total knee arthroplasty (TKA) by (1) examining changes in self-report measures (Knee Injury and Osteoarthritis Outcome Score [KOOS]) and performance measures over the first 6 months after TKA, (2) evaluating correlations between changes in KOOS self-report function (activities of daily living [ADL] subscale) and functional performance (6-minute walk [6MW]), and (3) exploring how changes in pain correlate with KOOS ADL and 6MW outcomes.\n\n\nDESIGN\nRetrospective cohort evaluation.\n\n\nSETTING\nClinical research laboratory.\n\n\nPATIENTS (OR PARTICIPANTS)\nThirty-nine patients scheduled for a unilateral, primary TKA for end-stage unilateral knee osteoarthritis.\n\n\nMETHODS\nPatients were evaluated 2 weeks before surgery and 1, 3, and 6 months after surgery.\n\n\nMAIN OUTCOME MEASUREMENTS\nKOOS, 6MW, timed-up-and-go (TUG), and stair climbing tests (SCT), quadriceps strength.\n\n\nRESULTS\nThree of 5 KOOS subscales significantly improved by 1 month after TKA. All 5 KOOS subscales significantly improved by 3 and 6 months after TKA. In contrast, performance measures (6MW, TUG, SCT, and quadriceps strength) all significantly declined from preoperative values by 1 month after TKA and significantly improved from preoperative values by 3 and 6 months after TKA; yet, improvements from preoperative values were not clinically meaningful. Pearson correlations between changes in the KOOS ADL subscale and 6MW from before surgery were not statistically significant at 1, 3, or 6 months after TKA. In addition, KOOS Pain was strongly correlated with KOOS ADL scores at all times, but KOOS Pain was not correlated with 6MW distance at any time.\n\n\nCONCLUSIONS\nPatient self-report by using the KOOS did not reflect the magnitude of performance deficits present after surgery, especially 1 month after TKA. Self-report KOOS outcomes closely paralleled pain relief after surgery, whereas performance measures were not correlated with pain. These results emphasize the importance of including performance measures when tracking recovery after TKA as opposed to solely relying on self-reported measures."
},
{
"pmid": "26288278",
"title": "Movement prediction using accelerometers in a human population.",
"abstract": "We introduce statistical methods for predicting the types of human activity at sub-second resolution using triaxial accelerometry data. The major innovation is that we use labeled activity data from some subjects to predict the activity labels of other subjects. To achieve this, we normalize the data across subjects by matching the standing up and lying down portions of triaxial accelerometry data. This is necessary to account for differences between the variability in the position of the device relative to gravity, which are induced by body shape and size as well as by the ambiguous definition of device placement. We also normalize the data at the device level to ensure that the magnitude of the signal at rest is similar across devices. After normalization we use overlapping movelets (segments of triaxial accelerometry time series) extracted from some of the subjects to predict the movement type of the other subjects. The problem was motivated by and is applied to a laboratory study of 20 older participants who performed different activities while wearing accelerometers at the hip. Prediction results based on other people's labeled dictionaries of activity performed almost as well as those obtained using their own labeled dictionaries. These findings indicate that prediction of activity types for data collected during natural activities of daily living may actually be possible."
}
] |
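The last entry in the reference list above predicts activity types by matching overlapping "movelets", short segments of triaxial accelerometer data, against a dictionary of labeled segments built from other recordings. The sketch below shows only that matching step; the window length, the plain Euclidean distance, the synthetic signals, and the omission of the paper's subject-level normalization are simplifying assumptions of this sketch.

```python
import numpy as np

def build_dictionary(signal, labels, window=10):
    """Collect overlapping windows ("movelets") of a labeled tri-axial signal.

    `signal` has shape (T, 3) and `labels` has shape (T,); each movelet is the
    flattened window and carries the label of its first sample.
    """
    movelets, movelet_labels = [], []
    for start in range(signal.shape[0] - window + 1):
        movelets.append(signal[start:start + window].ravel())
        movelet_labels.append(labels[start])
    return np.array(movelets), np.array(movelet_labels)

def predict_labels(signal, movelets, movelet_labels, window=10):
    """Label each window of an unlabeled signal by its nearest movelet."""
    predictions = []
    for start in range(signal.shape[0] - window + 1):
        query = signal[start:start + window].ravel()
        distances = np.linalg.norm(movelets - query, axis=1)
        predictions.append(movelet_labels[np.argmin(distances)])
    return np.array(predictions)

# Tiny synthetic example: "rest" (label 0) vs. "walk" (label 1).
rng = np.random.default_rng(1)
rest = rng.normal(0.0, 0.05, size=(100, 3))
walk = rng.normal(0.0, 0.05, size=(100, 3)) + np.sin(np.linspace(0, 40, 100))[:, None]
train = np.vstack([rest, walk])
train_labels = np.array([0] * 100 + [1] * 100)
movelets, movelet_labels = build_dictionary(train, train_labels)

test = rng.normal(0.0, 0.05, size=(50, 3)) + np.sin(np.linspace(0, 20, 50))[:, None]
print(predict_labels(test, movelets, movelet_labels)[:10])  # mostly 1s expected
```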
Journal of Healthcare Informatics Research | null | PMC8982803 | 10.1007/s41666-019-00059-y | Blood Glucose Prediction with Variance Estimation Using Recurrent Neural Networks | Many factors affect blood glucose levels in type 1 diabetics, several of which vary largely both in magnitude and delay of the effect. Modern rapid-acting insulins generally have a peak time after 60–90 min, while carbohydrate intake can affect blood glucose levels more rapidly for high glycemic index foods, or slower for other carbohydrate sources. It is important to have good estimates of the development of glucose levels in the near future both for diabetic patients managing their insulin distribution manually, as well as for closed-loop systems making decisions about the distribution. Modern continuous glucose monitoring systems provide excellent sources of data to train machine learning models to predict future glucose levels. In this paper, we present an approach for predicting blood glucose levels for diabetics up to 1 h into the future. The approach is based on recurrent neural networks trained in an end-to-end fashion, requiring nothing but the glucose level history for the patient. Our approach obtains results that are comparable to the state of the art on the Ohio T1DM dataset for blood glucose level prediction. In addition to predicting the future glucose value, our model provides an estimate of its certainty, helping users to interpret the predicted levels. This is realized by training the recurrent neural network to parameterize a univariate Gaussian distribution over the output. The approach needs no feature engineering or data preprocessing and is computationally inexpensive. We evaluate our method using the standard root-mean-squared error (RMSE) metric, along with a blood glucose-specific metric called the surveillance error grid (SEG). We further study the properties of the distribution that is learned by the model, using experiments that determine the nature of the certainty estimate that the model is able to capture. | Related Work
Early work on predicting blood glucose levels from CGM data includes Bremer et al. [4], who explored the predictability of data from CGM systems and showed how predictions can be made based on autocorrelation functions. Sparacino et al. [23] proposed a first-order auto-regressive model. Wiley [24] proposed using support vector regression (SVR) to predict blood sugar levels from CGM data. They report an RMSE of 4.5 mg/dl, but this is using data that was aggressively smoothed using a regularized cubic spline interpolation. Bunescu et al. [5] extended this work with physiological models for meal absorption dynamics, insulin dynamics, and glucose dynamics to predict blood glucose levels 30 and 60 min into the future. They obtained a relative improvement of about 12% in prediction accuracy over the model proposed by Wiley. The experiments in [5] are performed on non-smoothed data. There have been approaches using neural networks to predict blood glucose levels. Perez et al. [22] presented a feed-forward neural network (FFNN) taking CGM history as input, and predicting the level 15, 30, and 45 min into the future. RMSE accuracy for 30-min predictions is similar to that of [24]. Mougiakakou et al. [20] showed that RNNs can be used to predict blood glucose levels from CGM data. They evaluated their method on four different children with type 1 diabetes, and obtained promising results.
On average, they reported an RMSE accuracy of 24.1 mg/dl. Some papers have incorporated additional information (e.g., carbohydrate/meal intake, insulin injections, etc.). Pappada et al. [21] proposed an FFNN taking as input CGM levels, insulin dosages, metered glucose levels, nutritional intake, lifestyle, and emotional factors. Despite having all this data at its disposal, the model makes predictions 75 min into the future with an RMSE score of 43.9 mg/dl. Zecchin et al. [26] proposed a neural network approach in combination with a first-order polynomial extrapolation algorithm to produce short-term predictions of blood glucose levels, taking into account meal intake information. The approach is evaluated both on simulated data and on real data from 9 patients with Abbott FreeStyle Navigator. None of the above-mentioned approaches has the ability to output a confidence interval. A problem when modeling continuous outputs with least squares as the training criterion is that the model tends to learn a conditional average of the targets. Modeling a distribution over the outputs may limit this problem and make training more stable. Mixture density networks were proposed by [3]. By allowing the output vector from a neural network model to parameterize a mixture of Gaussians, they manage to learn a mapping even when the targets are not unique. Besides enabling learning stability, this also allows the model to visualize the certainty of its predictions. A similar approach was used together with RNNs in [11], to predict the distribution of the next pen position during handwriting. The release of the Ohio dataset [16] in combination with the Blood Glucose Level Prediction challenge (BGLP) at the Workshop on Knowledge Discovery in Healthcare Data (KDH) 2018 spurred further interest in blood glucose prediction models. At the workshop, a preliminary version of this study was presented [17]. While a challenge was formulated, no clear winner could be declared because of differences in the evaluation procedure. The results listed below cannot directly be compared to the results in this paper due to these differences. However, they all refer to predictions made with a 30-min horizon. While our study has focused on predicting the blood glucose levels using only the CGM history as input, all methods below use more features provided in the dataset such as carbohydrate intake and insulin distribution, and none of them gives an estimate of the uncertainty. Chen et al. [6] used a recurrent neural network with dilations to model the data. Dilations allow a network to learn hierarchical structures and the authors chose to use the CGM values, insulin doses, and carbohydrate intake from the data, resulting in an average RMSE of 19.04 mg/dl. Xie et al. [25] compared autoregression with exogenous inputs (ARX) with RNNs and convolutional neural networks (CNNs), and concluded that the simpler ARX models achieved the best scores on the Ohio blood glucose data, with an average RMSE of 19.59 mg/dl. Contreras et al. [8] used grammatical evolution (GE) in combination with feature engineering to search for a predictive model, obtaining an average RMSE of 24.83 mg/dl. Bertachi et al. [2] reported an average RMSE of 19.33 mg/dl by using physiological models for insulin onboard, carbohydrates onboard, and activity onboard, which are fed as features to a feed-forward neural network. Midroni et al. [18] employed XGBoost with a thorough investigation of feature importance and reported an average RMSE of 19.32 mg/dl.
Zhu et al. [27] trained a CNN with CGM data, carbohydrate intake, and insulin distribution used as features and obtained an average RMSE of 21.72 mg/dl. | [
"18267787",
"10078542",
"9377276",
"17518291"
] | [
{
"pmid": "18267787",
"title": "Learning long-term dependencies with gradient descent is difficult.",
"abstract": "Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered."
},
{
"pmid": "10078542",
"title": "Is blood glucose predictable from previous values? A solicitation for data.",
"abstract": "An important question about blood glucose control in diabetes is, Can present and future blood glucose values be predicted from recent blood glucose history? If this is possible, new continuous blood glucose monitoring technologies under development may lead to qualitatively better therapeutic capabilities. Not only could continuous monitoring technologies alert a user when a hypoglycemic episode or other blood glucose excursion is underway, but measurements may also provide sufficient information to predict near-future blood glucose values. A predictive capability based only on recent blood glucose history would be advantageous because there would be no need to involve models of glucose and insulin distribution, with their inherent requirement for detailed accounting of vascular glucose loads and insulin availability. Published data analyzed here indicate that blood glucose dynamics are not random, and that blood glucose values can be predicted, at least for the near future, from frequently sampled previous values. Data useful in further exploring this concept are limited, however, and an appeal is made for collection of more."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "17518291",
"title": "Glucose concentration can be predicted ahead in time from continuous glucose monitoring sensor time-series.",
"abstract": "A clinically important task in diabetes management is the prevention of hypo/hyperglycemic events. In this proof-of-concept paper, we assess the feasibility of approaching the problem with continuous glucose monitoring (CGM) devices. In particular, we study the possibility to predict ahead in time glucose levels by exploiting their recent history monitored every 3 min by a minimally invasive CGM system, the Glucoday, in 28 type 1 diabetic volunteers for 48 h. Simple prediction strategies, based on the description of past glucose data by either a first-order polynomial or a first-order autoregressive (AR) model, both with time-varying parameters determined by weighted least squares, are considered. Results demonstrate that, even by using these simple methods, glucose can be predicted ahead in time, e.g., with a prediction horizon of 30 min crossing of the hypoglycemic threshold can be predicted 20-25 min ahead in time, a sufficient margin to mitigate the event by sugar ingestion."
}
] |
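The blood-glucose record above centers on a recurrent network that parameterizes a univariate Gaussian over the future glucose value and is trained by maximum likelihood. The PyTorch sketch below is a minimal illustration of that idea only; the layer sizes, the single regression target standing in for the prediction horizon, and the softplus parameterization of the standard deviation are assumptions of this sketch, not details taken from the paper.

```python
import torch
import torch.nn as nn

class GaussianGlucoseRNN(nn.Module):
    """LSTM that maps a CGM history to the mean and std of a future value."""

    def __init__(self, hidden_size=64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 2)   # outputs (mu, raw_sigma)

    def forward(self, history):
        # history: (batch, time, 1) past CGM readings
        _, (h_n, _) = self.lstm(history)
        mu, raw_sigma = self.head(h_n[-1]).chunk(2, dim=-1)
        sigma = nn.functional.softplus(raw_sigma) + 1e-3   # keep std positive
        return mu.squeeze(-1), sigma.squeeze(-1)

def gaussian_nll(mu, sigma, target):
    """Negative log-likelihood of `target` under N(mu, sigma^2)."""
    dist = torch.distributions.Normal(mu, sigma)
    return -dist.log_prob(target).mean()

# Toy training step on random data standing in for CGM sequences.
model = GaussianGlucoseRNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
history = torch.randn(32, 36, 1)          # e.g. 36 past samples per sequence
future_value = torch.randn(32)            # value at the prediction horizon
mu, sigma = model(history)
loss = gaussian_nll(mu, sigma, future_value)
loss.backward()
optimizer.step()
print(float(loss))
```

Minimizing the negative log-likelihood rather than a squared error is what lets the network report a per-prediction standard deviation alongside the point estimate.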
Frontiers in Robotics and AI | null | PMC8984145 | 10.3389/frobt.2022.830007 | Learning to Centralize Dual-Arm Assembly | Robotic manipulators are widely used in modern manufacturing processes. However, their deployment in unstructured environments remains an open problem. To deal with the variety, complexity, and uncertainty of real-world manipulation tasks, it is essential to develop a flexible framework with reduced assumptions on the environment characteristics. In recent years, reinforcement learning (RL) has shown great results for single-arm robotic manipulation. However, research focusing on dual-arm manipulation is still rare. From a classical control perspective, solving such tasks often involves complex modeling of interactions between two manipulators and the objects encountered in the tasks, as well as the two robots coupling at a control level. Instead, in this work, we explore the applicability of model-free RL to dual-arm assembly. As we aim to contribute toward an approach that is not limited to dual-arm assembly but dual-arm manipulation in general, we keep modeling efforts at a minimum. Hence, to avoid modeling the interaction between the two robots and the used assembly tools, we present a modular approach with two decentralized single-arm controllers, which are coupled using a single centralized learned policy. We reduce modeling effort to a minimum by using sparse rewards only. Our architecture enables successful assembly and simple transfer from simulation to the real world. We demonstrate the effectiveness of the framework on dual-arm peg-in-hole and analyze sample efficiency and success rates for different action spaces. Moreover, we compare results on different clearances and showcase disturbance recovery and robustness when dealing with position uncertainties. Finally, we zero-shot transfer policies trained in simulation to the real world and evaluate their performance. Videos of the experiments are available at the project website (https://sites.google.com/view/dual-arm-assembly/home). | 2 Related WorkDual-arm manipulation is a challenging area of research, which can be divided into decentralized and centralized approaches. The first one utilizes independent controllers for each robot with explicit (Petitti et al., 2016) or implicit (Wang and Schwager, 2014; Tsiamis et al., 2015) communication channels and is often combined with leader/follower behavior (Suomalainen et al., 2019; Wang and Schwager, 2014). Despite the resulting improvements in scalability and variability, decentralized control hardly reaches the efficiency and precision of centralized control, which integrates the control of both manipulators in a central unit. Among feasible manipulation objectives, peg-in-hole insertion can be seen as a benchmark since it requires accurate positioning, grasping, and handling of objects in contact-rich situations. Therefore, we select the task of dual-arm peg-in-hole to evaluate the performance of our approach.2.1 Single-Arm Peg-in-HoleAs research focusing on dual-arm peg-in-hole assembly is rare and mostly limited to extensive modeling (Suomalainen et al., 2019; Park et al., 2014; Zhang et al., 2017), research on classical peg-in-hole assembly with a single robotic arm provides a perspective on model-free approaches based on reinforcement learning. Vecerík et al. (2017) and Schoettler et al. (2019) show that sparse rewards are sufficient to successfully learn a policy for an insertion task if combined with learning from demonstrations. 
The work in Schoettler et al. (2019) uses residual reinforcement learning to leverage classical control, which performs well given sparse rewards only and provides a hint that the choice of action space can be crucial. An evaluation of action spaces on the task of single-arm peg-in-hole with a clearance of 2 mm and a shaped reward function is presented in Varin et al. (2019), where Cartesian impedance control performs best. Moreover, Beltran-Hernandez et al. (2020) apply position-force control with model-free reinforcement learning for peg-in-hole with a focus on transfer-learning and domain randomization.2.2 Decentralized Dual-Arm ManipulationIn the work by Suomalainen et al. (2019), a decentralized approach for the dual-arm peg-in-hole task is proposed. The method is based on a leader/follower architecture. Hence, no explicit coupling between both manipulators is required. The leader would perform the insertion, and the follower would hold its position and be compliant with the applied forces. Similar to the previously mentioned work, Zhang et al. (2017) utilizes a decentralized approach, where the hole keeps the desired position with low compliance and the peg is steered in a spiral-screw motion toward insertion with high compliance. However, despite reducing the necessity to model their interaction, both approaches lack dexterity, i.e., there is only one robot actively acting in the environment. In a general pipeline, there should be enough flexibility for both arms to be actively contributing toward the objective. Furthermore, Park et al. (2014) present a method based on decomposing the task into phases and utilizing a sophisticated control flow for the whole assembly process. Despite reducing efforts in modeling the interaction explicitly, the control flow is only engineered for one specific task and lacks dexterity as movements are bound to the preprogrammed procedure.2.3 Centralized Dual-Arm ManipulationWork on centralized dual-arm manipulation focuses on cooperative single object manipulation; hence, the applicability is limited to a few use-cases. Pairet et al. (2019) propose a learning-based approach and evaluate their method on a synthetic pick-and-place task. A set of primitive behaviors are demonstrated to the robot by a human; the robot combines those behaviors and tries to solve a given task. Finally, an evaluator measures its performance and decides if further demonstrations are necessary. The approach has promising potential toward more robust and less task-specific dual-arm manipulation. However, besides the required modeling efforts, it is limited by the human teaching process, which introduces an additional set of assumptions, limiting its applicability to semi-structured environments. Besides that, classical methods for cooperative single object manipulation with centralized control highly rely on accurate and complex modeling of the underlying system dynamics (Caccavale and Villani, 2001; Caccavale et al., 2008; Erhart et al., 2013; Heck et al., 2013; Bjerkeng et al., 2014; Ren et al., 2015).2.4 Sim-to-Real TransferSample inefficiency is one of the main challenges of deep RL algorithms. The problem is even worse for robotic tasks, which involve high-dimensional states and actions as well as complex dynamics. This motivates the use of simulation for data collection and training. However, due to the inaccuracies in the physics modeling and image rendering in simulation, policies trained in simulation tend to fail in the real world. 
This is usually referred to as the “reality gap.” The most popular paradigm to approach this problem is domain randomization (Tobin et al., 2017). The main goal of domain randomization is to subject the agent to samples based on diverse simulation parameters concerning the object (Tobin et al., 2017) and the dynamics properties (Peng et al., 2018). By doing so, the learned policy is supposed to be able to generalize to the different physical properties of real-world tasks. Recent work has explored active parameter sampling strategies as to dedicate more training time for troublesome parameter settings (Mehta et al., 2020). Another approach for sim-to-real transfer is system modularity (Clavera et al., 2017). Here, a policy is split into different modules responsible for different objectives such as pose detection, online motion planning, and control. Only components that will not suffer from the reality gap are trained in simulation and the rest is adapted or tailor-made for the real-world setup. This comes in contrast to the most common end-to-end training in deep RL (Levine et al., 2016). In our work, we use a modular architecture to enable zero-shot sim-to-real transfer. Namely, we parameterize the controllers differently in simulation compared with the real world to allow using the same high-level policy network.2.5 Hierarchical Reinforcement LearningHierarchical reinforcement learning (HRL) is very frequently used in robotics (Beyret et al., 2019; Bischoff et al., 2013; Jain et al., 2019). These methods typically introduce policies at different layers of abstractions and different time resolutions to break down the complexity of the overall learned behavior. Model-free HRL approaches can either attempt to formulate joint value functions over all the policies such as in Dietterich (2000) or rely on algorithms for hierarchical policy gradients, such as Ghavamzadeh and Mahadevan (2003). Furthermore, Parr and Russell (1998) illustrate how the modularity of HRL methods could enable transferring learned knowledge through component recombination. End et al. (2017) propose a method for autonomously discovering diverse sets of subpolicies and their activation policies. In this work, we design a method with two levels of abstractions. The first one is a learned model-free policy outputting high-level control commands/targets that are carried out by the low-level policy, which is a well-defined controller. Note that the policy used at the second layer of abstraction is not learned but instead designed based on well-established control methods. This enables sample-efficient learning and simple sim-to-real transfer.Despite various contributions toward a general framework for dual-arm manipulation, we do not know of any work that is task agnostic, does not require explicit modeling of interaction dynamics, and is centralized. Therefore, this work aims at proposing a unified pipeline for dual-arm manipulation based on a minimal set of assumptions. To the best of our knowledge, no prior work exists on contact-rich dual-arm peg-in-hole with model-free reinforcement learning or centralized dual-arm control for non-single object manipulation tasks in general. | [] | [] |
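Domain randomization, as summarized in the sim-to-real paragraph of the record above, amounts to resampling physical parameters of the simulator at the start of every episode so the learned policy cannot overfit to one parameter setting. The wrapper below is a generic sketch of that pattern; the gymnasium wrapper API is real, but `set_body_mass` and `set_friction` stand in for whatever simulator-specific setters a concrete environment actually exposes, and the sampling ranges are arbitrary.

```python
import gymnasium as gym
import numpy as np

class DynamicsRandomizationWrapper(gym.Wrapper):
    """Resample selected dynamics parameters at every reset (sketch only)."""

    def __init__(self, env, mass_range=(0.8, 1.2), friction_range=(0.5, 1.5)):
        super().__init__(env)
        self.mass_range = mass_range
        self.friction_range = friction_range
        self.rng = np.random.default_rng()

    def reset(self, **kwargs):
        # Hypothetical hooks: a concrete environment would expose its own way
        # of writing mass/friction into the underlying physics engine.
        mass_scale = self.rng.uniform(*self.mass_range)
        friction = self.rng.uniform(*self.friction_range)
        if hasattr(self.env.unwrapped, "set_body_mass"):
            self.env.unwrapped.set_body_mass(scale=mass_scale)
        if hasattr(self.env.unwrapped, "set_friction"):
            self.env.unwrapped.set_friction(friction)
        return self.env.reset(**kwargs)

# Usage sketch: wrap any environment before collecting training episodes.
# env = DynamicsRandomizationWrapper(gym.make("SomeManipulationEnv-v0"))
# obs, info = env.reset()
```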
Frontiers in Big Data | null | PMC8984615 | 10.3389/fdata.2022.828666 | Graph Neural Networks for Charged Particle Tracking on FPGAs | The determination of charged particle trajectories in collisions at the CERN Large Hadron Collider (LHC) is an important but challenging problem, especially in the high interaction density conditions expected during the future high-luminosity phase of the LHC (HL-LHC). Graph neural networks (GNNs) are a type of geometric deep learning algorithm that has successfully been applied to this task by embedding tracker data as a graph—nodes represent hits, while edges represent possible track segments—and classifying the edges as true or fake track segments. However, their study in hardware- or software-based trigger applications has been limited due to their large computational cost. In this paper, we introduce an automated translation workflow, integrated into a broader tool called hls4ml, for converting GNNs into firmware for field-programmable gate arrays (FPGAs). We use this translation tool to implement GNNs for charged particle tracking, trained using the TrackML challenge dataset, on FPGAs with designs targeting different graph sizes, task complexities, and latency/throughput requirements. This work could enable the inclusion of charged particle tracking GNNs at the trigger level for HL-LHC experiments. | 2. Related Work
GNNs have been explored for particle physics applications (Duarte and Vlimant, 2020; Shlomi et al., 2020), including jet identification (Moreno et al., 2020a,b; Qu and Gouskos, 2020), pileup mitigation (Arjona Martínez et al., 2019; Li et al., 2021), calorimeter energy measurements (Qasim et al., 2019), particle-flow reconstruction (Kieseler, 2020; Pata et al., 2021), and charged particle tracking (Farrell et al., 2018; DeZoort et al., 2021; Ju et al., 2021). Automatic translation of ML algorithms into FPGA firmware has also been studied for HEP tasks. Using hls4ml, several implementations for HEP-specific tasks have been provided for fully connected neural networks (NNs), autoencoders, boosted decision trees, and convolutional NNs (Duarte et al., 2018; Loncar et al., 2020; Summers et al., 2020; Aarrestad et al., 2021; Coelho et al., 2021; Govorkova et al., 2021). This tool has also been applied extensively for tasks in the HL-LHC upgrade of the CMS L1T system, including anomaly detection, muon energy regression and identification, tau lepton identification, and vector boson fusion event classification (CMS Collaboration, 2020). Moreover, a GNN model known as GarNet was studied for calorimeter energy regression and deployed on FPGAs using hls4ml in Iiyama et al. (2021). Our current work extends this by allowing more generic IN architectures to be converted with hls4ml, permitting the specification of a variable graph adjacency matrix as input, and supporting per-node or per-edge outputs as well as per-graph outputs. This paper also supersedes an earlier version of this work in Heintz et al. (2020). Hardware acceleration of GNN inference, and graph processing in general, has been an active area of study (Besta et al., 2019; Gui et al., 2019; Ming Xiong, 2020). Nurvitadhi et al. (2014); Ozdal et al. (2016); Auten et al. (2020); Geng et al. (2020); Kiningham et al. (2020); Yan et al. (2020), and Zeng and Prasanna (2020) describe various other examples of GNN acceleration architectures.
While these frameworks are applicable to various graph processing tasks, they may not apply to the strict latency requirements of the LHC trigger and they typically require the user to specify the design in a highly specialized format. | [
"29081711",
"33791596",
"32939066",
"34308339",
"33791596",
"33791596"
] | [
{
"pmid": "29081711",
"title": "Performance of the ATLAS track reconstruction algorithms in dense environments in LHC Run 2.",
"abstract": "With the increase in energy of the Large Hadron Collider to a centre-of-mass energy of 13 [Formula: see text] for Run 2, events with dense environments, such as in the cores of high-energy jets, became a focus for new physics searches as well as measurements of the Standard Model. These environments are characterized by charged-particle separations of the order of the tracking detectors sensor granularity. Basic track quantities are compared between 3.2 fb[Formula: see text] of data collected by the ATLAS experiment and simulation of proton-proton collisions producing high-transverse-momentum jets at a centre-of-mass energy of 13 [Formula: see text]. The impact of charged-particle separations and multiplicities on the track reconstruction performance is discussed. The track reconstruction efficiency in the cores of jets with transverse momenta between 200 and 1600 [Formula: see text] is quantified using a novel, data-driven, method. The method uses the energy loss, [Formula: see text], to identify pixel clusters originating from two charged particles. Of the charged particles creating these clusters, the measured fraction that fail to be reconstructed is [Formula: see text] and [Formula: see text] for jet transverse momenta of 200-400 [Formula: see text] and 1400-1600 [Formula: see text], respectively."
},
{
"pmid": "33791596",
"title": "Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics.",
"abstract": "Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the FGPA-based first layer of real-time data filtering at the CERN Large Hadron Collider, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with a latency of less than one μs on an FPGA. To do so, we consider a representative task associated to particle reconstruction and identification in a next-generation calorimeter operating at a particle collider. We use a graph network architecture developed for such purposes, and apply additional simplifications to match the computing constraints of Level-1 trigger systems, including weight quantization. Using the hls4ml library, we convert the compressed models into firmware to be implemented on an FPGA. Performance of the synthesized models is presented both in terms of inference accuracy and resource usage."
},
{
"pmid": "32939066",
"title": "Array programming with NumPy.",
"abstract": "Array programming provides a powerful, compact and expressive syntax for accessing, manipulating and operating on data in vectors, matrices and higher-dimensional arrays. NumPy is the primary array programming library for the Python language. It has an essential role in research analysis pipelines in fields as diverse as physics, chemistry, astronomy, geoscience, biology, psychology, materials science, engineering, finance and economics. For example, in astronomy, NumPy was an important part of the software stack used in the discovery of gravitational waves1 and in the first imaging of a black hole2. Here we review how a few fundamental array concepts lead to a simple and powerful programming paradigm for organizing, exploring and analysing scientific data. NumPy is the foundation upon which the scientific Python ecosystem is constructed. It is so pervasive that several projects, targeting audiences with specialized needs, have developed their own NumPy-like interfaces and array objects. Owing to its central position in the ecosystem, NumPy increasingly acts as an interoperability layer between such array computation libraries and, together with its application programming interface (API), provides a flexible framework to support the next decade of scientific and industrial analysis."
},
{
"pmid": "34308339",
"title": "Ps and Qs: Quantization-Aware Pruning for Efficient Low Latency Neural Network Inference.",
"abstract": "Efficient machine learning implementations optimized for inference in hardware have wide-ranging benefits, depending on the application, from lower inference latency to higher data throughput and reduced energy consumption. Two popular techniques for reducing computation in neural networks are pruning, removing insignificant synapses, and quantization, reducing the precision of the calculations. In this work, we explore the interplay between pruning and quantization during the training of neural networks for ultra low latency applications targeting high energy physics use cases. Techniques developed for this study have potential applications across many other domains. We study various configurations of pruning during quantization-aware training, which we term quantization-aware pruning, and the effect of techniques like regularization, batch normalization, and different pruning schemes on performance, computational complexity, and information content metrics. We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task. Further, quantization-aware pruning typically performs similar to or better in terms of computational efficiency compared to other neural architecture search techniques like Bayesian optimization. Surprisingly, while networks with different training configurations can have similar performance for the benchmark application, the information content in the network can vary significantly, affecting its generalizability."
},
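The two compression operations named in the entry above, pruning of small-magnitude weights and reduction of numerical precision, can be illustrated in isolation. The sketch below applies them to a random weight matrix; the 50% sparsity target, the symmetric uniform quantizer, and the bit width are arbitrary illustrative choices, not the configuration studied in the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.5):
    """Zero out the smallest-magnitude fraction of the weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) < threshold, 0.0, weights)

def uniform_quantize(weights, bits=6):
    """Map weights onto a symmetric uniform grid with 2**bits levels."""
    scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
    return np.round(weights / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 1, size=(16, 16))
w_compressed = uniform_quantize(magnitude_prune(w, sparsity=0.5), bits=6)
print("non-zero fraction:", np.mean(w_compressed != 0))
print("distinct values:  ", len(np.unique(w_compressed)))
```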
{
"pmid": "33791596",
"title": "Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics.",
"abstract": "Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the FGPA-based first layer of real-time data filtering at the CERN Large Hadron Collider, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with a latency of less than one μs on an FPGA. To do so, we consider a representative task associated to particle reconstruction and identification in a next-generation calorimeter operating at a particle collider. We use a graph network architecture developed for such purposes, and apply additional simplifications to match the computing constraints of Level-1 trigger systems, including weight quantization. Using the hls4ml library, we convert the compressed models into firmware to be implemented on an FPGA. Performance of the synthesized models is presented both in terms of inference accuracy and resource usage."
},
{
"pmid": "33791596",
"title": "Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics.",
"abstract": "Graph neural networks have been shown to achieve excellent performance for several crucial tasks in particle physics, such as charged particle tracking, jet tagging, and clustering. An important domain for the application of these networks is the FGPA-based first layer of real-time data filtering at the CERN Large Hadron Collider, which has strict latency and resource constraints. We discuss how to design distance-weighted graph networks that can be executed with a latency of less than one μs on an FPGA. To do so, we consider a representative task associated to particle reconstruction and identification in a next-generation calorimeter operating at a particle collider. We use a graph network architecture developed for such purposes, and apply additional simplifications to match the computing constraints of Level-1 trigger systems, including weight quantization. Using the hls4ml library, we convert the compressed models into firmware to be implemented on an FPGA. Performance of the synthesized models is presented both in terms of inference accuracy and resource usage."
}
] |
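The record above describes tracker data embedded as a graph in which nodes are hits and edges are candidate track segments, with a GNN scoring each edge as true or fake. The PyTorch sketch below shows only that bare edge-classification pattern, using plain tensors and an edge-index array; the three-dimensional hit features, the layer widths, and the single edge MLP without message passing are simplifying assumptions, not the interaction-network architecture or the hls4ml conversion flow of the paper.

```python
import torch
import torch.nn as nn

class EdgeClassifier(nn.Module):
    """Score candidate edges of a hit graph as true/fake track segments."""

    def __init__(self, node_dim=3, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * node_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, node_features, edge_index):
        # node_features: (num_hits, node_dim), e.g. hit coordinates.
        # edge_index: (2, num_edges) with sender/receiver hit indices.
        senders = node_features[edge_index[0]]
        receivers = node_features[edge_index[1]]
        edge_inputs = torch.cat([senders, receivers], dim=-1)
        return torch.sigmoid(self.mlp(edge_inputs)).squeeze(-1)

# Toy graph: 5 hits, 4 candidate segments, binary truth labels.
hits = torch.randn(5, 3)
edge_index = torch.tensor([[0, 1, 2, 3],
                           [1, 2, 3, 4]])
labels = torch.tensor([1.0, 1.0, 0.0, 1.0])

model = EdgeClassifier()
scores = model(hits, edge_index)
loss = nn.functional.binary_cross_entropy(scores, labels)
loss.backward()
print(scores.detach())
```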
PLoS Computational Biology | null | PMC8985953 | 10.1371/journal.pcbi.1009961 | Epigenetic cell memory: The gene’s inner chromatin modification circuit | Epigenetic cell memory allows distinct gene expression patterns to persist in different cell types despite a common genotype. Although different patterns can be maintained by the concerted action of transcription factors (TFs), it was proposed that long-term persistence hinges on chromatin state. Here, we study how the dynamics of chromatin state affect memory, and focus on a biologically motivated circuit motif, among histones and DNA modifications, that mediates the action of TFs on gene expression. Memory arises from time-scale separation among three circuit’s constituent processes: basal erasure, auto and cross-catalysis, and recruited erasure of modifications. When the two latter processes are sufficiently faster than the former, the circuit exhibits bistability and hysteresis, allowing active and repressed gene states to coexist and persist after TF stimulus removal. The duration of memory is stochastic with a mean value that increases as time-scale separation increases, but more so for the repressed state. This asymmetry stems from the cross-catalysis between repressive histone modifications and DNA methylation and is enhanced by the relatively slower decay rate of the latter. Nevertheless, TF-mediated positive autoregulation can rebalance this asymmetry and even confers robustness of active states to repressive stimuli. More generally, by wiring positively autoregulated chromatin modification circuits under time scale separation, long-term distinct gene expression patterns arise, which are also robust to failure in the regulatory links. | Related workModels of chromatin-mediated gene regulation, in which histone modifications, DNA methylation, and TF-mediated regulation are combined together, remain under-represented in the literature [20]. Most existing models that include chromatin modifications into gene expression regulation are based on phenomenological rules rather than on the molecular reactions that regulate chromatin state, and focus on specific biological processes, such as iPSC reprogramming [21–24] or epithelial-mesenchymal transition [25], and most of them are only suitable for computational simulations [21–23]. On the opposite side of the spectrum, highly detailed mechanistic models have appeared for histone modifications alone, in which each nucleosome within a gene is modeled and simulated as a distinct unit [26]. Different from these existing works, we explicitly address how the duration of memory of a chromatin state is modulated by the topological properties of the chromatin modification circuit, by relevant biochemical parameters, and by the interplay between TF-based regulation and chromatin dynamics. | [
"23584020",
"14647386",
"19319911",
"22596319",
"17412320",
"24679529",
"27346641",
"1111098",
"1093816",
"26296162",
"26455413",
"30514178",
"33195816",
"33510302",
"20485562",
"22754535",
"26581803",
"32842865",
"17512413",
"26474904",
"12670868",
"21124070",
"21941617",
"17687327",
"21477851",
"16222338",
"22704655",
"26216216",
"25792596",
"21872465",
"24875481",
"27036965",
"27923996",
"15827124",
"18542062",
"9620804",
"9620779",
"12427740",
"12711675",
"20493208",
"17037977",
"26912859",
"19898493",
"24048479",
"16153702",
"16738015",
"21252997",
"25679502",
"15996204",
"17938240",
"23254757",
"27162367",
"23499384",
"17377526",
"21483788",
"33199860"
] | [
{
"pmid": "14647386",
"title": "A positive-feedback-based bistable 'memory module' that governs a cell fate decision.",
"abstract": "The maturation of Xenopus oocytes can be thought of as a process of cell fate induction, with the immature oocyte representing the default fate and the mature oocyte representing the induced fate. Crucial mediators of Xenopus oocyte maturation, including the p42 mitogen-activated protein kinase (MAPK) and the cell-division cycle protein kinase Cdc2, are known to be organized into positive feedback loops. In principle, such positive feedback loops could produce an actively maintained 'memory' of a transient inductive stimulus and could explain the irreversibility of maturation. Here we show that the p42 MAPK and Cdc2 system normally generates an irreversible biochemical response from a transient stimulus, but the response becomes transient when positive feedback is blocked. Our results explain how a group of intrinsically reversible signal transducers can generate an irreversible response at a systems level, and show how a cell fate can be maintained by a self-sustaining pattern of protein kinase activation."
},
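The entry above, like the main record's discussion of bistability and hysteresis, rests on the textbook picture of a positive-feedback loop that admits two stable steady states, so that a transient stimulus can leave a lasting mark. The sketch below integrates a deliberately generic one-variable model, dx/dt = b + beta*x^n/(K^n + x^n) + s - gamma*x, and shows that the state reached after a transient stimulus s depends on where the system started; all parameter values and the Hill-type feedback form are illustrative assumptions, not the model of either paper.

```python
import numpy as np

def simulate(x0, stimulus=0.0, t_stimulus=0.0, b=0.05, beta=1.0, K=0.5,
             n=4, gamma=1.0, dt=0.01, t_end=60.0):
    """Euler integration of a one-variable positive-feedback switch."""
    x, t = x0, 0.0
    while t < t_end:
        s = stimulus if t < t_stimulus else 0.0       # transient input
        feedback = beta * x**n / (K**n + x**n)        # auto-activation
        x += dt * (b + feedback + s - gamma * x)
        t += dt
    return x

# Starting low with no stimulus stays low; a transient stimulus flips the
# system to the high state, which then persists after the stimulus is removed.
print("no stimulus :", round(simulate(x0=0.05), 3))
print("after pulse :", round(simulate(x0=0.05, stimulus=0.6, t_stimulus=10.0), 3))
```

With these illustrative parameters the model has a low state near 0.05 and a high state near 1, and only the pulsed run ends in the high state, which is the "memory" behavior the abstract describes.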
{
"pmid": "19319911",
"title": "Reprogramming cell fates: reconciling rarity with robustness.",
"abstract": "The stunning possibility of \"reprogramming\" differentiated somatic cells to express a pluripotent stem cell phenotype (iPS, induced pluripotent stem cell) and the \"ground state\" character of pluripotency reveal fundamental features of cell fate regulation that lie beyond existing paradigms. The rarity of reprogramming events appears to contradict the robustness with which the unfathomably complex phenotype of stem cells can reliably be generated. This apparent paradox, however, is naturally explained by the rugged \"epigenetic landscape\" with valleys representing \"preprogrammed\" attractor states that emerge from the dynamical constraints of the gene regulatory network. This article provides a pedagogical primer to the fundamental principles of gene regulatory networks as integrated dynamic systems and reviews recent insights in gene expression noise and fate determination, thereby offering a formal framework that may help us to understand why cell fate reprogramming events are inherently rare and yet so robust."
},
{
"pmid": "22596319",
"title": "Maintaining differentiated cellular identity.",
"abstract": "Various studies have demonstrated that somatic differentiated cells can be reprogrammed into other differentiated states or into pluripotency, thus showing that the differentiated cellular state is not irreversible. These findings have generated intense interest in the process of reprogramming and in mechanisms that govern the pluripotent state. However, the realization that differentiated cells can be triggered to switch to considerably different lineages also emphasizes that we need to understand how the identity of mature cells is normally maintained. Here we review recent studies on how the differentiated state is controlled at the transcriptional level and discuss how new insights have begun to elucidate mechanisms underlying the stable maintenance of mature cell identities."
},
{
"pmid": "17412320",
"title": "Bifurcation dynamics in lineage-commitment in bipotent progenitor cells.",
"abstract": "Lineage specification of multipotent progenitor cells is governed by a balance of lineage-affiliated transcription factors, such as GATA1 and PU.1, which regulate the choice between erythroid and myelomonocytic fates. But how ratios of lineage-determining transcription factors stabilize progenitor cells and resolve their indeterminacy to commit them to discrete, mutually exclusive fates remains unexplained. We used a simple model and experimental measurements to analyze the dynamics of a binary fate decision governed by a gene-circuit containing auto-stimulation and cross-inhibition, as embodied by the GATA1-PU.1 paradigm. This circuit generates stable attractors corresponding to erythroid and myelomonocytic fates, as well as an uncommitted metastable state characterized by coexpression of both regulators, explaining the phenomenon of \"multilineage priming\". GATA1 and PU.1 mRNA and transcriptome dynamics of differentiating progenitor cells confirm that commitment occurs in two stages, as suggested by the model: first, the progenitor state is destabilized in an almost symmetrical bifurcation event, resulting in a poised state at the boundary between the two lineage-specific attractors; second, the cell is driven to the respective, now accessible attractors. This minimal model captures fundamental features of binary cell fate decisions, uniting the concepts of stochastic (selective) and deterministic (instructive) regulation, and hence, may apply to a wider range of binary fate decision points."
},
{
"pmid": "24679529",
"title": "Transgenerational epigenetic inheritance: myths and mechanisms.",
"abstract": "Since the human genome was sequenced, the term \"epigenetics\" is increasingly being associated with the hope that we are more than just the sum of our genes. Might what we eat, the air we breathe, or even the emotions we feel influence not only our genes but those of descendants? The environment can certainly influence gene expression and can lead to disease, but transgenerational consequences are another matter. Although the inheritance of epigenetic characters can certainly occur-particularly in plants-how much is due to the environment and the extent to which it happens in humans remain unclear."
},
{
"pmid": "27346641",
"title": "The molecular hallmarks of epigenetic control.",
"abstract": "Over the past 20 years, breakthrough discoveries of chromatin-modifying enzymes and associated mechanisms that alter chromatin in response to physiological or pathological signals have transformed our knowledge of epigenetics from a collection of curious biological phenomena to a functionally dissected research field. Here, we provide a personal perspective on the development of epigenetics, from its historical origins to what we define as 'the modern era of epigenetic research'. We primarily highlight key molecular mechanisms of and conceptual advances in epigenetic control that have changed our understanding of normal and perturbed development."
},
{
"pmid": "1093816",
"title": "X inactivation, differentiation, and DNA methylation.",
"abstract": "A model based on DNA methylation is proposed to explain the initiation and maintenance of mammalian X inactivation and certain aspects of other permanent events in eukaryotic cell differentiation. A key feature of the model is the proposal of sequence-specific DNA methylases that methylate unmethylated sites with great difficulty but easily methylate half-methylated sites. Although such enzymes have not yet been detected in eukaryotes, they are known in bacteria. An argument is presented, based on recent data on DNA-binding proteins, that DNA methylation should affect the binding of regulatory proteins. In support of the model, short reviews are included covering both mammalian X inactivation and bacterial restriction and modification enzymes."
},
{
"pmid": "26296162",
"title": "DNA methylation pathways and their crosstalk with histone methylation.",
"abstract": "Methylation of DNA and of histone 3 at Lys 9 (H3K9) are highly correlated with gene silencing in eukaryotes from fungi to humans. Both of these epigenetic marks need to be established at specific regions of the genome and then maintained at these sites through cell division. Protein structural domains that specifically recognize methylated DNA and methylated histones are key for targeting enzymes that catalyse these marks to appropriate genome sites. Genetic, genomic, structural and biochemical data reveal connections between these two epigenetic marks, and these domains mediate much of the crosstalk."
},
{
"pmid": "26455413",
"title": "Efficient Recombinase-Mediated Cassette Exchange in hPSCs to Study the Hepatocyte Lineage Reveals AAVS1 Locus-Mediated Transgene Inhibition.",
"abstract": "Tools for rapid and efficient transgenesis in \"safe harbor\" loci in an isogenic context remain important to exploit the possibilities of human pluripotent stem cells (hPSCs). We created hPSC master cell lines suitable for FLPe recombinase-mediated cassette exchange (RMCE) in the AAVS1 locus that allow generation of transgenic lines within 15 days with 100% efficiency and without random integrations. Using RMCE, we successfully incorporated several transgenes useful for lineage identification, cell toxicity studies, and gene overexpression to study the hepatocyte lineage. However, we observed unexpected and variable transgene expression inhibition in vitro, due to DNA methylation and other unknown mechanisms, both in undifferentiated hESC and differentiating hepatocytes. Therefore, the AAVS1 locus cannot be considered a universally safe harbor locus for reliable transgene expression in vitro, and using it for transgenesis in hPSC will require careful assessment of the function of individual transgenes."
},
{
"pmid": "30514178",
"title": "Silencing of transgene expression in mammalian cells by DNA methylation and histone modifications in gene therapy perspective.",
"abstract": "DNA methylation and histone modifications are vital in maintaining genomic stability and modulating cellular functions in mammalian cells. These two epigenetic modifications are the most common gene regulatory systems known to spatially control gene expression. Transgene silencing by these two mechanisms is a major challenge to achieving effective gene therapy for many genetic conditions. The implications of transgene silencing caused by epigenetic modifications have been extensively studied and reported in numerous gene delivery studies. This review highlights instances of transgene silencing by DNA methylation and histone modification with specific focus on the role of these two epigenetic effects on the repression of transgene expression in mammalian cells from integrative and non-integrative based gene delivery systems in the context of gene therapy. It also discusses the prospects of achieving an effective and sustained transgene expression for future gene therapy applications."
},
{
"pmid": "33195816",
"title": "Rosa26 docking sites for investigating genetic circuit silencing in stem cells.",
"abstract": "Approaches in mammalian synthetic biology have transformed how cells can be programmed to have reliable and predictable behavior, however, the majority of mammalian synthetic biology has been accomplished using immortalized cell lines that are easy to grow and easy to transfect. Genetic circuits that integrate into the genome of these immortalized cell lines remain functional for many generations, often for the lifetime of the cells, yet when genetic circuits are integrated into the genome of stem cells gene silencing is observed within a few generations. To investigate the reactivation of silenced genetic circuits in stem cells, the Rosa26 locus of mouse pluripotent stem cells was modified to contain docking sites for site-specific integration of genetic circuits. We show that the silencing of genetic circuits can be reversed with the addition of sodium butyrate, a histone deacetylase inhibitor. These findings demonstrate an approach to reactivate the function of genetic circuits in pluripotent stem cells to ensure robust function over many generations. Altogether, this work introduces an approach to overcome the silencing of genetic circuits in pluripotent stem cells that may enable the use of genetic circuits in pluripotent stem cells for long-term function."
},
{
"pmid": "33510302",
"title": "Epigenetic silencing directs expression heterogeneity of stably integrated multi-transcript unit genetic circuits.",
"abstract": "We report that epigenetic silencing causes the loss of function of multi-transcript unit constructs that are integrated using CRISPR-Cas9. Using a modular two color reporter system flanked by selection markers, we demonstrate that expression heterogeneity does not correlate with sequence alteration but instead correlates with chromosomal accessibility. We partially reverse this epigenetic silencing via small-molecule inhibitors of methylation and histone deacetylation. We then correlate each heterogeneously-expressing phenotype with its expected epigenetic state by employing ATAC-seq. The stability of each expression phenotype is reinforced by selective pressure, which indicates that ongoing epigenetic remodeling can occur for over one month after integration. Collectively, our data suggests that epigenetic silencing limits the utility of multi-transcript unit constructs that are integrated via double-strand repair pathways. Our research implies that mammalian synthetic biologists should consider localized epigenetic outcomes when designing complex genetic circuits."
},
{
"pmid": "20485562",
"title": "A model for genetic and epigenetic regulatory networks identifies rare pathways for transcription factor induced pluripotency.",
"abstract": "With relatively low efficiency, differentiated cells can be reprogrammed to a pluripotent state by ectopic expression of a few transcription factors. An understanding of the mechanisms that underlie data emerging from such experiments can help design optimal strategies for creating pluripotent cells for patient-specific regenerative medicine. We have developed a computational model for the architecture of the epigenetic and genetic regulatory networks which describes transformations resulting from expression of reprogramming factors. Importantly, our studies identify the rare temporal pathways that result in induced pluripotent cells. Further experimental tests of predictions emerging from our model should lead to fundamental advances in our understanding of how cellular identity is maintained and transformed."
},
{
"pmid": "22754535",
"title": "A stochastic model of epigenetic dynamics in somatic cell reprogramming.",
"abstract": "Somatic cell reprogramming has dramatically changed stem cell research in recent years. The high pace of new findings in the field and an ever increasing amount of data from new high throughput techniques make it challenging to isolate core principles of the process. In order to analyze such mechanisms, we developed an abstract mechanistic model of a subset of the known regulatory processes during cell differentiation and production of induced pluripotent stem cells. This probabilistic Boolean network describes the interplay between gene expression, chromatin modifications, and DNA methylation. The model incorporates recent findings in epigenetics and partially reproduces experimentally observed reprogramming efficiencies and changes in methylation and chromatin remodeling. It enables us to investigate, how the temporal progression of the process is regulated. It also explicitly includes the transduction of factors using viral vectors and their silencing in reprogrammed cells, since this is still a standard procedure in somatic cell reprogramming. Based on the model we calculate an epigenetic landscape for probabilities of cell states. Simulation results show good reproduction of experimental observations during reprogramming, despite the simple structure of the model. An extensive analysis and introduced variations hint toward possible optimizations of the process that could push the technique closer to clinical applications. Faster changes in DNA methylation increase the speed of reprogramming at the expense of efficiency, while accelerated chromatin modifications moderately improve efficiency."
},
{
"pmid": "26581803",
"title": "Effects of Collective Histone State Dynamics on Epigenetic Landscape and Kinetics of Cell Reprogramming.",
"abstract": "Cell reprogramming is a process of transitions from differentiated to pluripotent cell states via transient intermediate states. Within the epigenetic landscape framework, such a process is regarded as a sequence of transitions among basins on the landscape; therefore, theoretical construction of a model landscape which exhibits experimentally consistent dynamics can provide clues to understanding epigenetic mechanism of reprogramming. We propose a minimal gene-network model of the landscape, in which each gene is regulated by an integrated mechanism of transcription-factor binding/unbinding and the collective chemical modification of histones. We show that the slow collective variation of many histones around each gene locus alters topology of the landscape and significantly affects transition dynamics between basins. Differentiation and reprogramming follow different transition pathways on the calculated landscape, which should be verified experimentally via single-cell pursuit of the reprogramming process. Effects of modulation in collective histone state kinetics on transition dynamics and pathway are examined in search for an efficient protocol of reprogramming."
},
{
"pmid": "32842865",
"title": "A mathematical model exhibiting the effect of DNA methylation on the stability boundary in cell-fate networks.",
"abstract": "Cell-fate networks are traditionally studied within the framework of gene regulatory networks. This paradigm considers only interactions of genes through expressed transcription factors and does not incorporate chromatin modification processes. This paper introduces a mathematical model that seamlessly combines gene regulatory networks and DNA methylation (DNAm), with the goal of quantitatively characterizing the contribution of epigenetic regulation to gene silencing. The 'Basin of Attraction percentage' is introduced as a metric to quantify gene silencing abilities. As a case study, a computational and theoretical analysis is carried out for a model of the pluripotent stem cell circuit as well as a simplified self-activating gene model. The results confirm that the methodology quantitatively captures the key role that DNAm plays in enhancing the stability of the silenced gene state."
},
{
"pmid": "17512413",
"title": "Theoretical analysis of epigenetic cell memory by nucleosome modification.",
"abstract": "Chromosomal regions can adopt stable and heritable alternative states resulting in bistable gene expression without changes to the DNA sequence. Such epigenetic control is often associated with alternative covalent modifications of histones. The stability and heritability of the states are thought to involve positive feedback where modified nucleosomes recruit enzymes that similarly modify nearby nucleosomes. We developed a simplified stochastic model for dynamic nucleosome modification based on the silent mating-type region of the yeast Schizosaccharomyces pombe. We show that the mechanism can give strong bistability that is resistant both to high noise due to random gain or loss of nucleosome modifications and to random partitioning upon DNA replication. However, robust bistability required: (1) cooperativity, the activity of more than one modified nucleosome, in the modification reactions and (2) that nucleosomes occasionally stimulate modification beyond their neighbor nucleosomes, arguing against a simple continuous spreading of nucleosome modification."
},
{
"pmid": "26474904",
"title": "The interplay of histone modifications - writers that read.",
"abstract": "Histones are subject to a vast array of posttranslational modifications including acetylation, methylation, phosphorylation, and ubiquitylation. The writers of these modifications play important roles in normal development and their mutation or misregulation is linked with both genetic disorders and various cancers. Readers of these marks contain protein domains that allow their recruitment to chromatin. Interestingly, writers often contain domains which can read chromatin marks, allowing the reinforcement of modifications through a positive feedback loop or inhibition of their activity by other modifications. We discuss how such positive reinforcement can result in chromatin states that are robust and can be epigenetically maintained through cell division. We describe the implications of these regulatory systems in relation to modifications including H3K4me3, H3K79me3, and H3K36me3 that are associated with active genes and H3K27me3 and H3K9me3 that have been linked to transcriptional repression. We also review the crosstalk between active and repressive modifications, illustrated by the interplay between the Polycomb and Trithorax histone-modifying proteins, and discuss how this may be important in defining gene expression states during development."
},
{
"pmid": "12670868",
"title": "Human Sin3 deacetylase and trithorax-related Set1/Ash2 histone H3-K4 methyltransferase are tethered together selectively by the cell-proliferation factor HCF-1.",
"abstract": "The abundant and chromatin-associated protein HCF-1 is a critical player in mammalian cell proliferation as well as herpes simplex virus (HSV) transcription. We show here that separate regions of HCF-1 critical for its role in cell proliferation associate with the Sin3 histone deacetylase (HDAC) and a previously uncharacterized human trithorax-related Set1/Ash2 histone methyltransferase (HMT). The Set1/Ash2 HMT methylates histone H3 at Lys 4 (K4), but not if the neighboring K9 residue is already methylated. HCF-1 tethers the Sin3 and Set1/Ash2 transcriptional regulatory complexes together even though they are generally associated with opposite transcriptional outcomes: repression and activation of transcription, respectively. Nevertheless, this tethering is context-dependent because the transcriptional activator VP16 selectively binds HCF-1 associated with the Set1/Ash2 HMT complex in the absence of the Sin3 HDAC complex. These results suggest that HCF-1 can broadly regulate transcription, both positively and negatively, through selective modulation of chromatin structure."
},
{
"pmid": "21124070",
"title": "Trimethylation of histone H3 lysine 4 impairs methylation of histone H3 lysine 9: regulation of lysine methyltransferases by physical interaction with their substrates.",
"abstract": "Chromatin is broadly compartmentalized in two defined states: euchromatin and heterochromatin. Generally, euchromatin is trimethylated on histone H3 lysine 4 (H3K4(me3)) while heterochromatin contains the H3K9(me3) marks. The H3K9(me3) modification is added by lysine methyltransferases (KMTs) such as SETDB1. Herein, we show that SETDB1 interacts with its substrate H3, but only in the absence of the euchromatic mark H3K4(me3). In addition, we show that SETDB1 fails to methylate substrates containing the H3K4(me3) mark. Likewise, the functionally related H3K9 KMTs G9A, GLP, and SUV39H1 also fail to bind and to methylate H3K4(me3) substrates. Accordingly, we provide in vivo evidence that H3K9(me2)-enriched histones are devoid of H3K4(me2/3) and that histones depleted of H3K4(me2/3) have elevated H3K9(me2/3). The correlation between the loss of interaction of these KMTs with H3K4 (me3) and concomitant methylation impairment leads to the postulate that, at least these four KMTs, require stable interaction with their respective substrates for optimal activity. Thus, novel substrates could be discovered via the identification of KMT interacting proteins. Indeed, we find that SETDB1 binds to and methylates a novel substrate, the inhibitor of growth protein ING2, while SUV39H1 binds to and methylates the heterochromatin protein HP1α. Thus, our observations suggest a mechanism of post-translational regulation of lysine methylation and propose a potential mechanism for the segregation of the biologically opposing marks, H3K4(me3) and H3K9(me3). Furthermore, the correlation between H3-KMTs interaction and substrate methylation highlights that the identification of novel KMT substrates may be facilitated by the identification of interaction partners."
},
{
"pmid": "21941617",
"title": "DNA methylation: superior or subordinate in the epigenetic hierarchy?",
"abstract": "Epigenetic modifications are heritable changes in gene expression not encoded by the DNA sequence. In the past decade, great strides have been made in characterizing epigenetic changes during normal development and in disease states like cancer. However, the epigenetic landscape has grown increasingly complicated, encompassing DNA methylation, the histone code, noncoding RNA, and nucleosome positioning, along with DNA sequence. As a stable repressive mark, DNA methylation, catalyzed by the DNA methyltransferases (DNMTs), is regarded as a key player in epigenetic silencing of transcription. DNA methylation may coordinately regulate the chromatin status via the interaction of DNMTs with other modifications and with components of the machinery mediating those marks. In this review, we will comprehensively examine the current understanding of the connections between DNA methylation and other epigenetic marks and discuss molecular mechanisms of transcriptional repression in development and in carcinogenesis."
},
{
"pmid": "17687327",
"title": "DNMT3L connects unmethylated lysine 4 of histone H3 to de novo methylation of DNA.",
"abstract": "Mammals use DNA methylation for the heritable silencing of retrotransposons and imprinted genes and for the inactivation of the X chromosome in females. The establishment of patterns of DNA methylation during gametogenesis depends in part on DNMT3L, an enzymatically inactive regulatory factor that is related in sequence to the DNA methyltransferases DNMT3A and DNMT3B. The main proteins that interact in vivo with the product of an epitope-tagged allele of the endogenous Dnmt3L gene were identified by mass spectrometry as DNMT3A2, DNMT3B and the four core histones. Peptide interaction assays showed that DNMT3L specifically interacts with the extreme amino terminus of histone H3; this interaction was strongly inhibited by methylation at lysine 4 of histone H3 but was insensitive to modifications at other positions. Crystallographic studies of human DNMT3L showed that the protein has a carboxy-terminal methyltransferase-like domain and an N-terminal cysteine-rich domain. Cocrystallization of DNMT3L with the tail of histone H3 revealed that the tail bound to the cysteine-rich domain of DNMT3L, and substitution of key residues in the binding site eliminated the H3 tail-DNMT3L interaction. These data indicate that DNMT3L recognizes histone H3 tails that are unmethylated at lysine 4 and induces de novo DNA methylation by recruitment or activation of DNMT3A2."
},
{
"pmid": "21477851",
"title": "Wdr5 mediates self-renewal and reprogramming via the embryonic stem cell core transcriptional network.",
"abstract": "The embryonic stem (ES) cell transcriptional and chromatin-modifying networks are critical for self-renewal maintenance. However, it remains unclear whether these networks functionally interact and, if so, what factors mediate such interactions. Here, we show that WD repeat domain 5 (Wdr5), a core member of the mammalian Trithorax (trxG) complex, positively correlates with the undifferentiated state and is a regulator of ES cell self-renewal. We demonstrate that Wdr5, an \"effector\" of H3K4 methylation, interacts with the pluripotency transcription factor Oct4. Genome-wide protein localization and transcriptome analyses demonstrate overlapping gene regulatory functions between Oct4 and Wdr5. The Oct4-Sox2-Nanog circuitry and trxG cooperate in activating transcription of key self-renewal regulators, and furthermore, Wdr5 expression is required for the efficient formation of induced pluripotent stem (iPS) cells. We propose an integrated model of transcriptional and epigenetic control, mediated by select trxG members, for the maintenance of ES cell self-renewal and somatic cell reprogramming."
},
{
"pmid": "16222338",
"title": "PU.1 inhibits the erythroid program by binding to GATA-1 on DNA and creating a repressive chromatin structure.",
"abstract": "Transcriptional repression mechanisms are important during differentiation of multipotential hematopoietic progenitors, where they are thought to regulate lineage commitment and to extinguish alternative differentiation programs. PU.1 and GATA-1 are two critical hematopoietic transcription factors that physically interact and mutually antagonize each other's transcriptional activity and ability to promote myeloid and erythroid differentiation, respectively. We find that PU.1 inhibits the erythroid program by binding to GATA-1 on its target genes and organizing a complex of proteins that creates a repressive chromatin structure containing lysine-9 methylated H3 histones and heterochromatin protein 1. Although these features are thought to be stable aspects of repressed chromatin, we find that silencing of PU.1 expression leads to removal of the repression complex, loss of the repressive chromatin marks and reactivation of the erythroid program. This process involves incorporation of the replacement histone variant H3.3 into nucleosomes. Repression of one transcription factor bound to DNA by another transcription factor not on the DNA represents a new mechanism for downregulating an alternative gene expression program during lineage commitment of multipotential hematopoietic progenitors."
},
{
"pmid": "22704655",
"title": "Dynamics and memory of heterochromatin in living cells.",
"abstract": "Posttranslational histone modifications are important for gene regulation, yet the mode of propagation and the contribution to heritable gene expression states remains controversial. To address these questions, we developed a chromatin in vivo assay (CiA) system employing chemically induced proximity to initiate and terminate chromatin modifications in living cells. We selectively recruited HP1α to induce H3K9me3-dependent gene silencing and describe the kinetics and extent of chromatin modifications at the Oct4 locus in fibroblasts and pluripotent cells. H3K9me3 propagated symmetrically and continuously at average rates of ~0.18 nucleosomes/hr to produce domains of up to 10 kb. After removal of the HP1α stimulus, heterochromatic domains were heritably transmitted, undiminished through multiple cell generations. Our data enabled quantitative modeling of reaction kinetics, which revealed that dynamic competition between histone marking and turnover, determines the boundaries and stability of H3K9me3 domains. This framework predicts the steady-state dynamics and spatial features of the majority of euchromatic H3K9me3 domains over the genome."
},
{
"pmid": "26216216",
"title": "Epigenetic inheritance and the missing heritability.",
"abstract": "Genome-wide association studies of complex physiological traits and diseases consistently found that associated genetic factors, such as allelic polymorphisms or DNA mutations, only explained a minority of the expected heritable fraction. This discrepancy is known as \"missing heritability\", and its underlying factors and molecular mechanisms are not established. Epigenetic programs may account for a significant fraction of the \"missing heritability.\" Epigenetic modifications, such as DNA methylation and chromatin assembly states, reflect the high plasticity of the genome and contribute to stably alter gene expression without modifying genomic DNA sequences. Consistent components of complex traits, such as those linked to human stature/height, fertility, and food metabolism or to hereditary defects, have been shown to respond to environmental or nutritional condition and to be epigenetically inherited. The knowledge acquired from epigenetic genome reprogramming during development, stem cell differentiation/de-differentiation, and model organisms is today shedding light on the mechanisms of (a) mitotic inheritance of epigenetic traits from cell to cell, (b) meiotic epigenetic inheritance from generation to generation, and (c) true transgenerational inheritance. Such mechanisms have been shown to include incomplete erasure of DNA methylation, parental effects, transmission of distinct RNA types (mRNA, non-coding RNA, miRNA, siRNA, piRNA), and persistence of subsets of histone marks."
},
{
"pmid": "25792596",
"title": "Two distinct modes for propagation of histone PTMs across the cell cycle.",
"abstract": "Epigenetic states defined by chromatin can be maintained through mitotic cell division. However, it remains unknown how histone-based information is transmitted. Here we combine nascent chromatin capture (NCC) and triple-SILAC (stable isotope labeling with amino acids in cell culture) labeling to track histone modifications and histone variants during DNA replication and across the cell cycle. We show that post-translational modifications (PTMs) are transmitted with parental histones to newly replicated DNA. Di- and trimethylation marks are diluted twofold upon DNA replication, as a consequence of new histone deposition. Importantly, within one cell cycle, all PTMs are restored. In general, new histones are modified to mirror the parental histones. However, H3K9 trimethylation (H3K9me3) and H3K27me3 are propagated by continuous modification of parental and new histones because the establishment of these marks extends over several cell generations. Together, our results reveal how histone marks propagate and demonstrate that chromatin states oscillate within the cell cycle."
},
{
"pmid": "21872465",
"title": "Coordinated methyl-lysine erasure: structural and functional linkage of a Jumonji demethylase domain and a reader domain.",
"abstract": "Both components of chromatin (DNA and histones) are subjected to dynamic postsynthetic covalent modifications. Dynamic histone lysine methylation involves the activities of modifying enzymes (writers), enzymes removing modifications (erasers), and readers of the epigenetic code. Known histone lysine demethylases include flavin-dependent monoamine oxidase lysine-specific demethylase 1 and α-ketoglutarate-Fe(II)-dependent dioxygenases containing Jumonji domains. Importantly, the Jumonji domain often associates with at least one additional recognizable domain (reader) within the same polypeptide that detects the methylation status of histones and/or DNA. Here, we summarize recent developments in characterizing structural and functional properties of various histone lysine demethylases, with emphasis on a mechanism of crosstalk between a Jumonji domain and its associated reader module(s). We further discuss the role of recently identified Tet1 enzyme in oxidizing 5-methylcytosine to 5-hydroxymethylcytosine in DNA."
},
{
"pmid": "24875481",
"title": "TET1 is a maintenance DNA demethylase that prevents methylation spreading in differentiated cells.",
"abstract": "TET1 is a 5-methylcytosine dioxygenase and its DNA demethylating activity has been implicated in pluripotency and reprogramming. However, the precise role of TET1 in DNA methylation regulation outside of developmental reprogramming is still unclear. Here, we show that overexpression of the TET1 catalytic domain but not full length TET1 (TET1-FL) induces massive global DNA demethylation in differentiated cells. Genome-wide mapping reveals that 5-hydroxymethylcytosine production by TET1-FL is inhibited as DNA methylation increases, which can be explained by the preferential binding of TET1-FL to unmethylated CpG islands (CGIs) through its CXXC domain. TET1-FL specifically accumulates 5-hydroxymethylcytosine at the edges of hypomethylated CGIs, while knockdown of endogenous TET1 induces methylation spreading from methylated edges into hypomethylated CGIs. We also found that gene expression changes after TET1-FL overexpression are relatively small and independent of its dioxygenase function. Thus, our results identify TET1 as a maintenance DNA demethylase that does not purposely decrease methylation levels, but specifically prevents aberrant methylation spreading into CGIs in differentiated cells."
},
{
"pmid": "27036965",
"title": "Role of TET enzymes in DNA methylation, development, and cancer.",
"abstract": "The pattern of DNA methylation at cytosine bases in the genome is tightly linked to gene expression, and DNA methylation abnormalities are often observed in diseases. The ten eleven translocation (TET) enzymes oxidize 5-methylcytosines (5mCs) and promote locus-specific reversal of DNA methylation. TET genes, and especially TET2, are frequently mutated in various cancers, but how the TET proteins contribute to prevent the onset and maintenance of these malignancies is largely unknown. Here, we highlight recent advances in understanding the physiological function of the TET proteins and their role in regulating DNA methylation and transcription. In addition, we discuss some of the key outstanding questions in the field."
},
{
"pmid": "27923996",
"title": "Binding of MBD proteins to DNA blocks Tet1 function thereby modulating transcriptional noise.",
"abstract": "Aberrant DNA methylation is a hallmark of various human disorders, indicating that the spatial and temporal regulation of methylation readers and modifiers is imperative for development and differentiation. In particular, the cross-regulation between 5-methylcytosine binders (MBD) and modifiers (Tet) has not been investigated. Here, we show that binding of Mecp2 and Mbd2 to DNA protects 5-methylcytosine from Tet1-mediated oxidation. The mechanism is not based on competition for 5-methylcytosine binding but on Mecp2 and Mbd2 directly restricting Tet1 access to DNA. We demonstrate that the efficiency of this process depends on the number of bound MBDs per DNA molecule. Accordingly, we find 5-hydroxymethylcytosine enriched at heterochromatin of Mecp2-deficient neurons of a mouse model for Rett syndrome and Tet1-induced reexpression of silenced major satellite repeats. These data unveil fundamental regulatory mechanisms of Tet enzymes and their potential pathophysiological role in Rett syndrome. Importantly, it suggests that Mecp2 and Mbd2 have an essential physiological role as guardians of the epigenome."
},
{
"pmid": "15827124",
"title": "A population-epigenetic model to infer site-specific methylation rates from double-stranded DNA methylation patterns.",
"abstract": "Cytosine methylation is an epigenetic mechanism in eukaryotes that is often associated with stable transcriptional silencing, such as in X-chromosome inactivation and genomic imprinting. Aberrant methylation patterns occur in several inherited human diseases and in many cancers. To understand how methylated and unmethylated states of cytosine residues are transmitted during DNA replication, we develop a population-epigenetic model of DNA methylation dynamics. The model is informed by our observation that de novo methylation can occur on the daughter strand while leaving the opposing cytosine unmethylated, as revealed by the patterns of methylation on the two complementary strands of individual DNA molecules. Under our model, we can infer site-specific rates of both maintenance and de novo methylation, values that determine the fidelity of methylation inheritance, from double-stranded methylation data. This approach can be used for populations of cells obtained from individuals without the need for cell culture. We use our method to infer cytosine methylation rates at several sites within the promoter of the human gene FMR1."
},
{
"pmid": "18542062",
"title": "Proteins that bind methylated DNA and human cancer: reading the wrong words.",
"abstract": "DNA methylation and the machinery involved in epigenetic regulation are key elements in the maintenance of cellular homeostasis. Epigenetic mechanisms are involved in embryonic development and the establishment of tissue-specific expression, X-chromosome inactivation and imprinting patterns, and maintenance of chromosome stability. The balance between all the enzymes and factors involved in DNA methylation and its interpretation by different groups of nuclear factors is crucial for normal cell behaviour. In cancer and other diseases, misregulation of epigenetic marks is a common feature, also including DNA methylation and histone post-translational modifications. In this scenario, it is worth mentioning a family of proteins characterized by the presence of a methyl-CpG-binding domain (MBDs) that are involved in interpreting the information encoded by DNA methylation and the recruitment of the enzymes responsible for establishing a silenced state of the chromatin. The generation of novel aberrantly hypermethylated regions during cancer development and progression makes MBD proteins interesting targets for their biological and clinical implications."
},
{
"pmid": "9620804",
"title": "Transcriptional repression by the methyl-CpG-binding protein MeCP2 involves a histone deacetylase complex.",
"abstract": "Cytosine residues in the sequence 5'CpG (cytosine-guanine) are often postsynthetically methylated in animal genomes. CpG methylation is involved in long-term silencing of certain genes during mammalian development and in repression of viral genomes. The methyl-CpG-binding proteins MeCP1 and MeCP2 interact specifically with methylated DNA and mediate transcriptional repression. Here we study the mechanism of repression by MeCP2, an abundant nuclear protein that is essential for mouse embryogenesis. MeCP2 binds tightly to chromosomes in a methylation-dependent manner. It contains a transcriptional-repression domain (TRD) that can function at a distance in vitro and in vivo. We show that a region of MeCP2 that localizes with the TRD associates with a corepressor complex containing the transcriptional repressor mSin3A and histone deacetylases. Transcriptional repression in vivo is relieved by the deacetylase inhibitor trichostatin A, indicating that deacetylation of histones (and/or of other proteins) is an essential component of this repression mechanism. The data suggest that two global mechanisms of gene regulation, DNA methylation and histone deacetylation, can be linked by MeCP2."
},
{
"pmid": "9620779",
"title": "Methylated DNA and MeCP2 recruit histone deacetylase to repress transcription.",
"abstract": "CpG methylation in vertebrates correlates with alterations in chromatin structure and gene silencing. Differences in DNA-methylation status are associated with imprinting phenomena and carcinogenesis. In Xenopus laevis oocytes, DNA methylation dominantly silences transcription through the assembly of a repressive nucleosomal array. Methylated DNA assembled into chromatin binds the transcriptional repressor MeCP2 which cofractionates with Sin3 and histone deacetylase. Silencing conferred by MeCP2 and methylated DNA can be relieved by inhibition of histone deacetylase, facilitating the remodelling of chromatin and transcriptional activation. These results establish a direct causal relationship between DNA methylation-dependent transcriptional silencing and the modification of chromatin."
},
{
"pmid": "12427740",
"title": "The methyl-CpG-binding protein MeCP2 links DNA methylation to histone methylation.",
"abstract": "DNA methylation plays an important role in mammalian development and correlates with chromatin-associated gene silencing. The recruitment of MeCP2 to methylated CpG dinucleotides represents a major mechanism by which DNA methylation can repress transcription. MeCP2 silences gene expression partly by recruiting histone deacetylase (HDAC) activity, resulting in chromatin remodeling. Here, we show that MeCP2 associates with histone methyltransferase activity in vivo and that this activity is directed against Lys(9) of histone H3. Two characterized repression domains of MeCP2 are involved in tethering the histone methyltransferase to MeCP2. We asked if MeCP2 can deliver Lys(9) H3 methylation to the H19 gene, whose activity it represses. We show that the presence of MeCP2 on nucleosomes within the repressor region of the H19 gene (the differentially methylated domain) coincides with an increase in H3 Lys(9) methylation. Our data provide evidence that MeCP2 reinforces a repressive chromatin state by acting as a bridge between two global epigenetic modifications, DNA methylation and histone methylation."
},
{
"pmid": "12711675",
"title": "The DNA methyltransferases associate with HP1 and the SUV39H1 histone methyltransferase.",
"abstract": "The DNA methyltransferases, Dnmts, are the enzymes responsible for methylating DNA in mammals, which leads to gene silencing. Repression by DNA methylation is mediated partly by recruitment of the methyl-CpG-binding protein MeCP2. Recently, MeCP2 was shown to associate and facilitate histone methylation at Lys9 of H3, which is a key epigenetic modification involved in gene silencing. Here, we show that endogenous Dnmt3a associates primarily with histone H3-K9 methyltransferase activity as well as, to a lesser extent, with H3-K4 enzymatic activity. The association with enzymatic activity is mediated by the conserved PHD-like motif of Dnmt3a. The H3-K9 histone methyltransferase that binds Dnmt3a is likely the H3-K9 specific SUV39H1 enzyme since we find that it interacts both in vitro and in vivo with Dnmt3a, using its PHD-like motif. We find that SUV39H1 also binds to Dnmt1 and, consistent with these interactions, SUV39H1 can purify DNA methyltransferase activity from nuclear extracts. In addition, we show that HP1beta, a SUV39H1-interacting partner, binds directly to Dnmt1 and Dnmt3a and that native HP1beta associates with DNA methyltransferase activity. Our data show a direct connection between the enzymes responsible for DNA methylation and histone methylation. These results further substantiate the notion of a self-reinforcing repressive chromatin state through the interplay between these two global epigenetic modifications."
},
{
"pmid": "20493208",
"title": "Structure and function of SWI/SNF chromatin remodeling complexes and mechanistic implications for transcription.",
"abstract": "ATP-dependent chromatin remodeling complexes are specialized protein machinery able to restructure the nucleosome to make its DNA accessible during transcription, replication and DNA repair. During the past few years structural biologists have defined the architecture and dynamics of some of these complexes using electron microscopy, shedding light on the mechanisms of action of these important assemblies. In this paper we review the existing structural information on the SWI/SNF family of the ATP-dependent chromatin remodeling complexes, and discuss their mechanistic implications."
},
{
"pmid": "17037977",
"title": "Stochastic simulation of chemical kinetics.",
"abstract": "Stochastic chemical kinetics describes the time evolution of a well-stirred chemically reacting system in a way that takes into account the fact that molecules come in whole numbers and exhibit some degree of randomness in their dynamical behavior. Researchers are increasingly using this approach to chemical kinetics in the analysis of cellular systems in biology, where the small molecular populations of only a few reactant species can lead to deviations from the predictions of the deterministic differential equations of classical chemical kinetics. After reviewing the supporting theory of stochastic chemical kinetics, I discuss some recent advances in methods for using that theory to make numerical simulations. These include improvements to the exact stochastic simulation algorithm (SSA) and the approximate explicit tau-leaping procedure, as well as the development of two approximate strategies for simulating systems that are dynamically stiff: implicit tau-leaping and the slow-scale SSA."
},
{
"pmid": "26912859",
"title": "Dynamics of epigenetic regulation at the single-cell level.",
"abstract": "Chromatin regulators play a major role in establishing and maintaining gene expression states. Yet how they control gene expression in single cells, quantitatively and over time, remains unclear. We used time-lapse microscopy to analyze the dynamic effects of four silencers associated with diverse modifications: DNA methylation, histone deacetylation, and histone methylation. For all regulators, silencing and reactivation occurred in all-or-none events, enabling the regulators to modulate the fraction of cells silenced rather than the amount of gene expression. These dynamics could be described by a three-state model involving stochastic transitions between active, reversibly silent, and irreversibly silent states. Through their individual transition rates, these regulators operate over different time scales and generate distinct types of epigenetic memory. Our results provide a framework for understanding and engineering mammalian chromatin regulation and epigenetic memory."
},
{
"pmid": "19898493",
"title": "Direct cell reprogramming is a stochastic process amenable to acceleration.",
"abstract": "Direct reprogramming of somatic cells into induced pluripotent stem (iPS) cells can be achieved by overexpression of Oct4, Sox2, Klf4 and c-Myc transcription factors, but only a minority of donor somatic cells can be reprogrammed to pluripotency. Here we demonstrate that reprogramming by these transcription factors is a continuous stochastic process where almost all mouse donor cells eventually give rise to iPS cells on continued growth and transcription factor expression. Additional inhibition of the p53/p21 pathway or overexpression of Lin28 increased the cell division rate and resulted in an accelerated kinetics of iPS cell formation that was directly proportional to the increase in cell proliferation. In contrast, Nanog overexpression accelerated reprogramming in a predominantly cell-division-rate-independent manner. Quantitative analyses define distinct cell-division-rate-dependent and -independent modes for accelerating the stochastic course of reprogramming, and suggest that the number of cell divisions is a key parameter driving epigenetic reprogramming to pluripotency."
},
{
"pmid": "24048479",
"title": "Deterministic direct reprogramming of somatic cells to pluripotency.",
"abstract": "Somatic cells can be inefficiently and stochastically reprogrammed into induced pluripotent stem (iPS) cells by exogenous expression of Oct4 (also called Pou5f1), Sox2, Klf4 and Myc (hereafter referred to as OSKM). The nature of the predominant rate-limiting barrier(s) preventing the majority of cells to successfully and synchronously reprogram remains to be defined. Here we show that depleting Mbd3, a core member of the Mbd3/NuRD (nucleosome remodelling and deacetylation) repressor complex, together with OSKM transduction and reprogramming in naive pluripotency promoting conditions, result in deterministic and synchronized iPS cell reprogramming (near 100% efficiency within seven days from mouse and human cells). Our findings uncover a dichotomous molecular function for the reprogramming factors, serving to reactivate endogenous pluripotency networks while simultaneously directly recruiting the Mbd3/NuRD repressor complex that potently restrains the reactivation of OSKM downstream target genes. Subsequently, the latter interactions, which are largely depleted during early pre-implantation development in vivo, lead to a stochastic and protracted reprogramming trajectory towards pluripotency in vitro. The deterministic reprogramming approach devised here offers a novel platform for the dissection of molecular dynamics leading to establishing pluripotency at unprecedented flexibility and resolution."
},
{
"pmid": "16153702",
"title": "Core transcriptional regulatory circuitry in human embryonic stem cells.",
"abstract": "The transcription factors OCT4, SOX2, and NANOG have essential roles in early development and are required for the propagation of undifferentiated embryonic stem (ES) cells in culture. To gain insights into transcriptional regulation of human ES cells, we have identified OCT4, SOX2, and NANOG target genes using genome-scale location analysis. We found, surprisingly, that OCT4, SOX2, and NANOG co-occupy a substantial portion of their target genes. These target genes frequently encode transcription factors, many of which are developmentally important homeodomain proteins. Our data also indicate that OCT4, SOX2, and NANOG collaborate to form regulatory circuitry consisting of autoregulatory and feedforward loops. These results provide new insights into the transcriptional regulation of stem cells and reveal how OCT4, SOX2, and NANOG contribute to pluripotency and self-renewal."
},
{
"pmid": "21252997",
"title": "Dedifferentiation, transdifferentiation and reprogramming: three routes to regeneration.",
"abstract": "The ultimate goal of regenerative medicine is to replace lost or damaged cells. This can potentially be accomplished using the processes of dedifferentiation, transdifferentiation or reprogramming. Recent advances have shown that the addition of a group of genes can not only restore pluripotency in a fully differentiated cell state (reprogramming) but can also induce the cell to proliferate (dedifferentiation) or even switch to another cell type (transdifferentiation). Current research aims to understand how these processes work and to eventually harness them for use in regenerative medicine."
},
{
"pmid": "25679502",
"title": "Undifferentiated bronchial fibroblasts derived from asthmatic patients display higher elastic modulus than their non-asthmatic counterparts.",
"abstract": "During asthma development, differentiation of epithelial cells and fibroblasts towards the contractile phenotype is associated with bronchial wall remodeling and airway constriction. Pathological fibroblast-to-myofibroblast transition (FMT) can be triggered by local inflammation of bronchial walls. Recently, we have demonstrated that human bronchial fibroblasts (HBFs) derived from asthmatic patients display some inherent features which facilitate their FMT in vitro. In spite of intensive research efforts, these properties remain unknown. Importantly, the role of undifferentiated HBFs in the asthmatic process was systematically omitted. Specifically, biomechanical properties of undifferentiated HBFs have not been considered in either FMT or airway remodeling in vivo. Here, we combine atomic force spectroscopy with fluorescence microscopy to compare mechanical properties and actin cytoskeleton architecture of HBFs derived from asthmatic patients and non-asthmatic donors. Our results demonstrate that asthmatic HBFs form thick and aligned 'ventral' stress fibers accompanied by enlarged focal adhesions. The differences in cytoskeleton architecture between asthmatic and non-asthmatic cells correlate with higher elastic modulus of asthmatic HBFs and their increased predilection to TGF-β-induced FMT. Due to the obvious links between cytoskeleton architecture and mechanical equilibrium, our observations indicate that HBFs derived from asthmatic bronchi can develop considerably higher static tension than non-asthmatic HBFs. This previously unexplored property of asthmatic HBFs may be potentially important for their myofibroblastic differentiation and bronchial wall remodeling during asthma development."
},
{
"pmid": "15996204",
"title": "Genetic regulation of stem cell origins in the mouse embryo.",
"abstract": "'Stem cell' has practically become a household term, but what is a stem cell and where does it come from? Insight into these questions has come from the early mouse embryo, or blastocyst, from which three kinds of stem cells have been derived: embryonic stem (ES) cells, trophoblast stem (TS) cells, and extraembryonic endoderm (XEN) cells. These stem cells appear to derive from three distinct tissue lineages within the blastocyst: the epiblast, the trophectoderm, and the extraembryonic endoderm. Understanding how these lineages arise during development will illuminate efforts to understand the establishment and maintenance of the stem cell state and the mechanisms that restrict stem cell potency. Genetic analysis has enabled the identification of several genes important for lineage decisions in the mouse blastocyst. Among these, Oct4, Nanog, Cdx2, and Gata6 encode transcription factors required for the three lineages of the blastocyst and for the maintenance their respective stem cell types. Interestingly, genetic manipulation of several of these factors can cause lineage switching among these stem cells, suggesting that knowledge of key lineage-determining genes could help control differentiation of stem cells more generally. Pluripotent stem cells have also been isolated from the human blastocyst, but the relationship between these cells and stem cells of the mouse blastocyst remains to be explored. This review describes the genetic regulation of lineage allocation during blastocyst formation and discusses similarities and differences between mouse and human ES cells."
},
{
"pmid": "17938240",
"title": "Jmjd1a and Jmjd2c histone H3 Lys 9 demethylases regulate self-renewal in embryonic stem cells.",
"abstract": "Embryonic stem (ES) cells are pluripotent cells with the ability to self-renew indefinitely. These unique properties are controlled by genetic factors and chromatin structure. The exit from the self-renewing state is accompanied by changes in epigenetic chromatin modifications such as an induction in the silencing-associated histone H3 Lys 9 dimethylation and trimethylation (H3K9Me2/Me3) marks. Here, we show that the H3K9Me2 and H3K9Me3 demethylase genes, Jmjd1a and Jmjd2c, are positively regulated by the ES cell transcription factor Oct4. Interestingly, Jmjd1a or Jmjd2c depletion leads to ES cell differentiation, which is accompanied by a reduction in the expression of ES cell-specific genes and an induction of lineage marker genes. Jmjd1a demethylates H3K9Me2 at the promoter regions of Tcl1, Tcfcp2l1, and Zfp57 and positively regulates the expression of these pluripotency-associated genes. Jmjd2c acts as a positive regulator for Nanog, which encodes for a key transcription factor for self-renewal in ES cells. We further demonstrate that Jmjd2c is required to reverse the H3K9Me3 marks at the Nanog promoter region and consequently prevents transcriptional repressors HP1 and KAP1 from binding. Our results connect the ES cell transcription circuitry to chromatin modulation through H3K9 demethylation in pluripotent cells."
},
{
"pmid": "23254757",
"title": "Oct4 and the small molecule inhibitor, SC1, regulates Tet2 expression in mouse embryonic stem cells.",
"abstract": "The ten eleven translocation (Tet) family of proteins includes three members (Tet1-3), all of which have the capacity to convert 5-methylcytosine to 5-hydroxymethylcytosine in a 2-oxoglutarate- and Fe(II)-dependent manner. Tet1 and Tet2 are highly expressed in undifferentiated embryonic stem cells (ESCs), and this expression decreases upon differentiation. Notably, the expression patterns of Tet1 and Tet2 in ESCs parallels that of pluripotency genes. To date, however, the mechanisms underlying the regulation of Tet gene expression in ESCs remain largely unexplored. Here we report that the pluripotency transcription factor, Oct4, directly regulates the expression of Tet2. Using RNAi, real time quantitative PCR, dual-luciferase reporter assays and electrophoretic mobility shift assays, we show that Oct4 promotes Tet2 transcription by binding to consensus sites in the proximal promoter region. Furthermore, we explored the role of the small molecule inhibitor, SC1 (pluripotin) on Tet gene expression. We show that SC1 promotes Tet3 expression, but represses Tet1 and Tet2 expression. Our findings indicate that Tet2 are crucial downstream targets of the pluripotency factor Oct4, and highlight a role for Oct4 in the regulation of DNA methylation in ESCs. In addition, these findings also provide a new insight into drug-mediated gene regulation."
},
{
"pmid": "27162367",
"title": "Achieving diverse and monoallelic olfactory receptor selection through dual-objective optimization design.",
"abstract": "Multiple-objective optimization is common in biological systems. In the mammalian olfactory system, each sensory neuron stochastically expresses only one out of up to thousands of olfactory receptor (OR) gene alleles; at the organism level, the types of expressed ORs need to be maximized. Existing models focus only on monoallele activation, and cannot explain recent observations in mutants, especially the reduced global diversity of expressed ORs in G9a/GLP knockouts. In this work we integrated existing information on OR expression, and constructed a comprehensive model that has all its components based on physical interactions. Analyzing the model reveals an evolutionarily optimized three-layer regulation mechanism, which includes zonal segregation, epigenetic barrier crossing coupled to a negative feedback loop that mechanistically differs from previous theoretical proposals, and a previously unidentified enhancer competition step. This model not only recapitulates monoallelic OR expression, but also elucidates how the olfactory system maximizes and maintains the diversity of OR expression, and has multiple predictions validated by existing experimental results. Through making an analogy to a physical system with thermally activated barrier crossing and comparative reverse engineering analyses, the study reveals that the olfactory receptor selection system is optimally designed, and particularly underscores cooperativity and synergy as a general design principle for multiobjective optimization in biology."
},
{
"pmid": "23499384",
"title": "Replacement of Oct4 by Tet1 during iPSC induction reveals an important role of DNA methylation and hydroxymethylation in reprogramming.",
"abstract": "DNA methylation and demethylation have been proposed to play an important role in somatic cell reprogramming. Here, we demonstrate that the DNA hydroxylase Tet1 facilitates pluripotent stem cell induction by promoting Oct4 demethylation and reactivation. Moreover, Tet1 (T) can replace Oct4 and initiate somatic cell reprogramming in conjunction with Sox2 (S), Klf4 (K), and c-Myc (M). We established an efficient TSKM secondary reprogramming system and used it to characterize the dynamic profiles of 5-methylcytosine (5mC), 5-hydroxymethylcytosine (5hmC), and gene expression during reprogramming. Our analysis revealed that both 5mC and 5hmC modifications increased at an intermediate stage of the process, correlating with a transition in the transcriptional profile. We also found that 5hmC enrichment is involved in the demethylation and reactivation of genes and regulatory regions that are important for pluripotency. Our data indicate that changes in DNA methylation and hydroxymethylation play important roles in genome-wide epigenetic remodeling during reprogramming."
},
{
"pmid": "17377526",
"title": "Metaplasia and transdifferentiation: from pure biology to the clinic.",
"abstract": "Transformations from one tissue type to another make up a well established set of phenomena that can be explained by the principles of developmental biology. Although these phenomena might be rare in nature, we can now imagine the possibility of deliberately reprogramming cells from one tissue type to another by manipulating the expression of transcription factors. This approach could generate new therapies for many human diseases."
},
{
"pmid": "21483788",
"title": "Activation of neural and pluripotent stem cell signatures correlates with increased malignancy in human glioma.",
"abstract": "The presence of stem cell characteristics in glioma cells raises the possibility that mechanisms promoting the maintenance and self-renewal of tissue specific stem cells have a similar function in tumor cells. Here we characterized human gliomas of various malignancy grades for the expression of stem cell regulatory proteins. We show that cells in high grade glioma co-express an array of markers defining neural stem cells (NSCs) and that these proteins can fulfill similar functions in tumor cells as in NSCs. However, in contrast to NSCs glioma cells co-express neural proteins together with pluripotent stem cell markers, including the transcription factors Oct4, Sox2, Nanog and Klf4. In line with this finding, in high grade gliomas mesodermal- and endodermal-specific transcription factors were detected together with neural proteins, a combination of lineage markers not normally present in the central nervous system. Persistent presence of pluripotent stem cell traits could only be detected in solid tumors, and observations based on in vitro studies and xenograft transplantations in mice imply that this presence is dependent on the combined activity of intrinsic and extrinsic regulatory cues. Together these results demonstrate a general deregulated expression of neural and pluripotent stem cell traits in malignant human gliomas, and indicate that stem cell regulatory factors may provide significant targets for therapeutic strategies."
},
{
"pmid": "33199860",
"title": "Rethinking organoid technology through bioengineering.",
"abstract": "In recent years considerable progress has been made in the development of faithful procedures for the differentiation of human pluripotent stem cells (hPSCs). An important step in this direction has also been the derivation of organoids. This technology generally relies on traditional three-dimensional culture techniques that exploit cell-autonomous self-organization responses of hPSCs with minimal control over the external inputs supplied to the system. The convergence of stem cell biology and bioengineering offers the possibility to provide these stimuli in a controlled fashion, resulting in the development of naturally inspired approaches to overcome major limitations of this nascent technology. Based on the current developments, we emphasize the achievements and ongoing challenges of bringing together hPSC organoid differentiation, bioengineering and ethics. This Review underlines the need for providing engineering solutions to gain control of self-organization and functionality of hPSC-derived organoids. We expect that this knowledge will guide the community to generate higher-grade hPSC-derived organoids for further applications in developmental biology, drug screening, disease modelling and personalized medicine."
}
] |
Scientific Reports | 35388054 | PMC8986811 | 10.1038/s41598-022-09685-w | An effective modular approach for crowd counting in an image using convolutional neural networks | The abrupt and continuous scale variation in a crowded scene makes it challenging to enhance crowd counting accuracy in an image. Existing crowd counting techniques generally use multi-column or single-column dilated convolution to tackle scale variation due to perspective distortion. However, because of their multi-column nature, they obtain largely identical features, whereas the standard dilated convolution (SDC) with an expanded receptive field has a sparse pixel sampling rate. Due to this sparse nature of SDC, it is highly challenging to obtain relevant contextual information. Further, features at multiple scales are not extracted unless some inception-based module is used, which is not cost-effective. To mitigate these drawbacks of SDC, we therefore propose a hierarchical dense dilated deep pyramid feature extraction convolutional neural network (CNN) for single-image crowd counting (HDPF). It comprises three modules: a general feature extraction module (GFEM), a deep pyramid feature extraction module (PFEM), and a fusion module (FM). The GFEM is responsible for obtaining task-independent general features. The PFEM plays a vital role in obtaining relevant contextual information thanks to the dense pixel sampling rate produced by densely connected dense stacked dilated convolutional modules (DSDCs). Further, due to the dense connections among DSDCs, the final feature map acquires multi-scale information with an expanded receptive field compared with SDC. Owing to its dense pyramid nature, it effectively propagates the features extracted by the lower dilated convolutional layers (DCLs) to the middle and higher DCLs, which results in better estimation accuracy. The FM fuses the incoming features extracted by the other modules. The proposed technique is tested through simulations on three well-known datasets: Shanghaitech (Part-A), Shanghaitech (Part-B), and Venice. The results justify its relative effectiveness in terms of the selected performance measures. | Related work: With the rapid growth of CNN-based methods in classification and segmentation tasks, CNN-based techniques have also shown promising results in density estimation and crowd counting (CC). CNN-based CC methods face many challenges, such as perspective distortion, density-level variation, and non-uniform crowd distribution, and researchers continue to address these challenges in pursuit of state-of-the-art CC methods. Detection-based techniques for crowd counting utilize a moving-window detector to identify objects and count the number of people in an image17. They extract common appearance-based features from crowd images to count people; however, they have limited recognition performance in dense crowded scenes. To overcome this issue, researchers used part-based methods that detect specific body parts, such as the head or the shoulder, to count pedestrians18,19. However, these detection-based methods are only suitable for counting sparse crowds because they are affected by severe occlusions. To address the problems faced by detection-based techniques, regression-based methods have been introduced for crowd counting. The main idea of regression-based methods is to learn a mapping from low-level features extracted from local image patches to the crowd count20,21. These extracted low-level features include edge, texture, foreground, and gradient features such as local binary patterns and histograms of oriented gradients.
Authors in Ref.22 proposed a new and accurate counting model based on YOLO_v3 to automatically and efficiently count dense steel pipes from images. To promote the development and verification of counting models, a large-scale steel pipe image dataset covering various on-site conditions was constructed and made publicly available. The proposed model was observed to be superior to the original YOLO_v3 detector in terms of average precision, mean absolute error, and root-mean-square error on the steel pipe dataset. The authors in Ref.23 employ 11 well-known CNN models as the feature extractor of YOLO_v2 for crack detection; their results confirm that different feature extractor models for the YOLO_v2 network lead to different detection results. The regression approaches include linear regression24, piece-wise linear regression25, ridge regression26, and Gaussian process regression. These methods refine the earlier detection-based ones; however, they ignore the spatial distribution information of crowds. To utilize spatial distribution information, the method proposed by Lempitsky and Zisserman27 regresses a density map rather than the crowd count. The method learns a linear mapping between local patch features and density maps, then estimates the total number of objects by integrating over the whole density map (a minimal Python sketch of this density-map counting idea is given after this record's reference entries). The method proposed by Pham et al.28 instead learns a non-linear mapping between local patch features and density maps by using a random forest. Benefiting from the strong representation ability of CNNs, a wide variety of CNN-based crowd counting techniques have been proposed. A pioneering work for CNN-based crowd counting proposed by Wang et al.29 used multiple convolutional layers to extract features and fed these features into a fully connected layer to estimate density in dense crowded environments. Another work by the authors in Ref.30 fine-tuned a pre-trained network on specific scenes by selecting similar image patches from the training data; its main drawback is that the approach requires perspective information, which is not always available. Due to the variation in densities and appearance of a crowded image, the authors in Refs.7,8 proposed a multi-column network for density estimation, in which different columns are explicitly designed for learning density variations across different feature resolutions. Despite the different filter sizes, it is difficult for the different columns to recognize crowds of varying density, and this lack of recognition results in some ineffective branches. Sindagi and Patel13 proposed a multi-task framework to simultaneously predict the density class and generate the density map based on high-level prior information. The authors in Ref.31 proposed a method of fusing multi-scale density predictions of corresponding multi-scale inputs, while Ref.32 designed an aggregated multi-column dilated convolution network for perspective-free counting. However, none of these works consider local information. To avoid the issues of ineffective branches and expensive computation in previous multi-column networks, Li et al.10 introduced a deeper single-column dilated convolutional network called CSRNet. Reference16 developed an encoder-decoder-based scale aggregation network for crowd counting. | [
"34067707",
"28463186"
] | [
{
"pmid": "34067707",
"title": "HADF-Crowd: A Hierarchical Attention-Based Dense Feature Extraction Network for Single-Image Crowd Counting.",
"abstract": "Crowd counting is a challenging task due to large perspective, density, and scale variations. CNN-based crowd counting techniques have achieved significant performance in sparse to dense environments. However, crowd counting in high perspective-varying scenes (images) is getting harder due to different density levels occupied by the same number of pixels. In this way large variations for objects in the same spatial area make it difficult to count accurately. Further, existing CNN-based crowd counting methods are used to extract rich deep features; however, these features are used locally and disseminated while propagating through intermediate layers. This results in high counting errors, especially in dense and high perspective-variation scenes. Further, class-specific responses along channel dimensions are underestimated. To address these above mentioned issues, we therefore propose a CNN-based dense feature extraction network for accurate crowd counting. Our proposed model comprises three main modules: (1) backbone network, (2) dense feature extraction modules (DFEMs), and (3) channel attention module (CAM). The backbone network is used to obtain general features with strong transfer learning ability. The DFEM is composed of multiple sub-modules called dense stacked convolution modules (DSCMs), densely connected with each other. In this way features extracted from lower and middle-lower layers are propagated to higher layers through dense connections. In addition, combinations of task independent general features obtained by the former modules and task-specific features obtained by later ones are incorporated to obtain high counting accuracy in large perspective-varying scenes. Further, to exploit the class-specific response between background and foreground, CAM is incorporated at the end to obtain high-level features along channel dimensions for better counting accuracy. Moreover, we have evaluated the proposed method on three well known datasets: Shanghaitech (Part-A), Shanghaitech (Part-B), and Venice. The performance of the proposed technique justifies its relative effectiveness in terms of selected performance compared to state-of-the-art techniques."
},
{
"pmid": "28463186",
"title": "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.",
"abstract": "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online."
}
] |
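The related-work passage in the crowd-counting record above notes that density-regression methods estimate the total count by integrating over a predicted density map. As a purely illustrative aid (not part of the original record, and not the HDPF implementation), the following sketch shows how such a ground-truth density map is typically built from head annotations; the kernel width, image size, and point list are assumptions.

```python
# Minimal illustrative sketch of density-map-based counting (not the HDPF model):
# each annotated head is placed as a unit impulse and smoothed with a Gaussian
# kernel, so the ground-truth count is recovered by integrating (summing) the map.
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map(head_points, height, width, sigma=4.0):
    """Build a density map from (row, col) head annotations; sigma is assumed."""
    dots = np.zeros((height, width), dtype=np.float64)
    for r, c in head_points:
        if 0 <= r < height and 0 <= c < width:
            dots[int(r), int(c)] += 1.0
    return gaussian_filter(dots, sigma=sigma)  # smoothing preserves the total mass

heads = [(40, 60), (42, 75), (100, 180)]          # hypothetical annotations
dmap = density_map(heads, height=240, width=320)
print(f"estimated count = {dmap.sum():.2f}")      # ~3.00: count = integral of the map
```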
Frontiers in Public Health | null | PMC8987163 | 10.3389/fpubh.2022.819865 | A Multistage Heterogeneous Stacking Ensemble Model for Augmented Infant Cry Classification | Understanding the reason for an infant's cry is the most difficult thing for parents. There might be various reasons behind the baby's cry: it may be due to hunger, pain, sleep, or diaper-related problems. The key to identifying the reason behind the infant's cry lies mainly in the varying patterns of the crying audio. The audio file comprises many features that are highly important for classification, so it is important to convert the audio signals into the required spectrograms. In this article, we try to find efficient solutions to the problem of predicting the reason behind an infant's cry. We use the Mel-frequency cepstral coefficients algorithm to generate the spectrograms and analyze the varying feature vectors. We then develop two approaches to obtain the experimental results. In the first approach, we use convolutional neural network (CNN) variants such as VGG16 and YOLOv4 to classify the infant cry signals. In the second approach, a multistage heterogeneous stacking ensemble model is used for infant cry classification; its major advantage is the inclusion of various advanced boosting algorithms at various levels. The proposed multistage heterogeneous stacking ensemble model has the edge over the other neural network models, especially in terms of overall performance and computing power. Finally, after many comparisons, the proposed model revealed virtuoso performance, with a mean classification accuracy of up to 93.7%. | Related Work: During the 2000s, most techniques employed in infant cry research were related to neural networks, including scaled variants (5). That review covered the application of many neural network models and traditional machine learning algorithms such as KNN and SVM to predict the reason for an infant's cry. In a related context, the work in (28) focused on building an automatic system that could recognize diverse infant needs based on crying. It extracted different sets of paralinguistic features from the infant cry audio signals and trained various rule-based or statistical classifiers. The work in (29) notes that a non-negative latent factor (NLF) model adopts the Euclidean distance for its objective function, which is only a special case of divergence, and that it often suffers from slow convergence. That study proposes a generalized and fast-converging non-negative latent factor (GFNLF) model to resolve these issues. Its main idea is two-fold: (a) adopting a more general divergence as its objective function, thereby improving its representation ability for high-dimensional and sparse (HiDS) data; (b) deriving a momentum-incorporated non-negative multiplicative update algorithm, thereby achieving fast convergence. Empirical investigations on two HiDS matrices emerging from real recommender systems (RSs) show that, with carefully tuned hyperparameters, the GFNLF model outperforms state-of-the-art models in both computational efficiency and prediction accuracy for missing data in a HiDS matrix. The research in (4) developed a time-frequency-based analysis called STFT, in which 256 discrete Fourier transform points were used to compute the Fourier transform.
It employed a deep convolutional neural network, AlexNet, with a few improvements to classify the recorded infant cries, and stochastic gradient descent with momentum (SGDM) was used to optimize the network. The authors in (6) acquired and analyzed audio features of infant cry signals in the time and frequency domains. Based on these features, cry signals can be mapped to specific cry meanings for cry language recognition. The features extracted from the audio feature space include linear predictive coding, linear prediction cepstral coefficients, Bark frequency cepstral coefficients, and MFCCs. A compressed sensing technique was used for classification, and real data were used to design and validate the proposed approaches. Tests showed that the proposed infant cry recognition approaches offer accurate and promising results. The work in (7) observes that, with the advancement of big data technology, predicting customers' purchase intentions from accurate records of their purchasing behavior has become a fundamental strategy for companies to perform precision marketing and increase sales volume. Customer purchasing behavior data are characterized by large volume, high variability, and long-term dependence. Accordingly, a bidirectional long short-term memory (BiLSTM) model is used in that article to analyze customers' purchasing behavior. First, the model takes the customer ID as the grouping key, capturing the fluctuation pattern of the purchase volume and fully mining the long-term dependence in purchasing behavior. Second, the BiLSTM model adaptively extracts features, realizes end-to-end prediction of purchasing behavior, and reduces design subjectivity. The article verifies the effectiveness of this strategy on real customer purchasing behavior datasets, and the results show that the BiLSTM method analyzes purchasing behavior with high precision. The main goal of the research work in (18) was to introduce a new procedure for tumor recognition. The proposed architecture accurately segments and classifies benign and malignant tumor cases. Various spatial-domain techniques were applied to enhance and accurately segment the input images. In addition, AlexNet and GoogLeNet were used for classification, from which two score vectors were obtained. Both score vectors were then fused and supplied to several classifiers alongside the softmax layer. The model was evaluated on the leading medical image computing and computer-assisted intervention (MICCAI) challenge datasets, i.e., multimodal brain tumor image segmentation 2013, 2014, 2015, and 2016 and ischemic stroke lesion segmentation 2018, respectively. The work in (30) aims to classify infants' cries according to their behavioral characteristics using objective and intelligent machine learning approaches. Toward this objective, the authors considered traditional machine learning as well as more recent deep learning models for infant cry classification using acoustic features, spectrograms, and a combination of the two. They performed a detailed empirical study on an open-access corpus and the CRIED dataset to highlight the adequacy of suitable acoustic features, signal processing, and machine learning procedures for this purpose.
A key conclusion was that acoustic features and spectrograms together yield better results. As a side outcome, this work also underscored the challenge that an insufficient infant cry dataset poses when modeling infants' behavioral attributes. The study in (31) investigates a neural transfer learning approach to building accurate models for recognizing infants that have experienced perinatal asphyxia. Specifically, the authors investigated the hypothesis that representations learned from adult speech could inform and improve the performance of models built on newborn infant speech. Their tests show that models based on such representation transfer are resilient to various kinds and levels of noise, as well as to signal loss in the time and frequency domains. The work analyzes the performance of a residual neural network (ResNet) pretrained on several speech tasks for classifying perinatal asphyxia. Among the implemented models, the one pretrained on the word recognition task performed best, suggesting that the representations learned for that task are the most analogous and useful for the target task. A support vector machine trained directly on MFCC features turned out to be a strong baseline and, if variance in the predictions is a concern, a preferred model. In (32), the authors present a noninvasive health care system that performs acoustic analysis of unclean, noisy infant cry signals to quantitatively extract and measure certain cry characteristics and to classify healthy and sick newborns based solely on their cries. In this infant cry-based diagnostic system, dynamic MFCC features as well as static MFCCs are selected and extracted for both expiratory and inspiratory cry vocalizations to produce a discriminative and informative feature vector. The authors then create a unique cry pattern for each cry vocalization type and pathological condition by introducing a novel idea based on the boosting mixture learning (BML) technique to derive either healthy or pathological subclass models separately from the Gaussian mixture model-universal background model. In addition, a score-level fusion of the proposed expiratory and inspiratory cry-based subsystems is performed to reach a more reliable decision. The experimental results show that the adapted BML strategy has lower error rates than the Bayesian approach considered as a reference method. (An illustrative MFCC-and-stacking-ensemble sketch in Python is given after this record's reference entries.) | [
"27295638",
"34178926",
"17596217",
"31643004",
"34508971",
"34804460",
"34595149",
"27524848",
"31350110"
] | [
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "34178926",
"title": "Deep Learning Assisted Neonatal Cry Classification via Support Vector Machine Models.",
"abstract": "Neonatal infants communicate with us through cries. The infant cry signals have distinct patterns depending on the purpose of the cries. Preprocessing, feature extraction, and feature selection need expert attention and take much effort in audio signals in recent days. In deep learning techniques, it automatically extracts and selects the most important features. For this, it requires an enormous amount of data for effective classification. This work mainly discriminates the neonatal cries into pain, hunger, and sleepiness. The neonatal cry auditory signals are transformed into a spectrogram image by utilizing the short-time Fourier transform (STFT) technique. The deep convolutional neural network (DCNN) technique takes the spectrogram images for input. The features are obtained from the convolutional neural network and are passed to the support vector machine (SVM) classifier. Machine learning technique classifies neonatal cries. This work combines the advantages of machine learning and deep learning techniques to get the best results even with a moderate number of data samples. The experimental result shows that CNN-based feature extraction and SVM classifier provides promising results. While comparing the SVM-based kernel techniques, namely radial basis function (RBF), linear and polynomial, it is found that SVM-RBF provides the highest accuracy of kernel-based infant cry classification system provides 88.89% accuracy."
},
{
"pmid": "17596217",
"title": "Analysis of the validation of existing behavioral pain and distress scales for use in the procedural setting.",
"abstract": "BACKGROUND\nAssessing procedural pain and distress in young children is difficult. A number of behavior-based pain and distress scales exist which can be used in preverbal and early-verbal children, and these are validated in particular settings and to variable degrees.\n\n\nMETHODS\nWe identified validated preverbal and early-verbal behavioral pain and distress scales and critically analysed the validation and reliability testing of these scales as well as their use in procedural pain and distress research. We analysed in detail six behavioral pain and distress scales: Children's Hospital of Eastern Ontario Pain Scale (CHEOPS), Faces Legs Activity Cry Consolability Pain Scale (FLACC), Toddler Preschooler Postoperative Pain Scale (TPPPS), Preverbal Early Verbal Pediatric Pain Scale (PEPPS), the observer Visual Analog Scale (VASobs) and the Observation Scale of Behavioral Distress (OSBD).\n\n\nRESULTS\nDespite their use in procedural pain studies none of the behavioral pain scales reviewed had been adequately validated in the procedural setting and validation of the single distress scale was limited.\n\n\nCONCLUSIONS\nThere is a need to validate behavioral pain and distress scales for procedural use in preverbal or early-verbal children."
},
{
"pmid": "31643004",
"title": "A New Approach for Brain Tumor Segmentation and Classification Based on Score Level Fusion Using Transfer Learning.",
"abstract": "Brain tumor is one of the most death defying diseases nowadays. The tumor contains a cluster of abnormal cells grouped around the inner portion of human brain. It affects the brain by squeezing/ damaging healthy tissues. It also amplifies intra cranial pressure and as a result tumor cells growth increases rapidly which may lead to death. It is, therefore desirable to diagnose/ detect brain tumor at an early stage that may increase the patient survival rate. The major objective of this research work is to present a new technique for the detection of tumor. The proposed architecture accurately segments and classifies the benign and malignant tumor cases. Different spatial domain methods are applied to enhance and accurately segment the input images. Moreover Alex and Google networks are utilized for classification in which two score vectors are obtained after the softmax layer. Further, both score vectors are fused and supplied to multiple classifiers along with softmax layer. Evaluation of proposed model is done on top medical image computing and computer-assisted intervention (MICCAI) challenge datasets i.e., multimodal brain tumor segmentation (BRATS) 2013, 2014, 2015, 2016 and ischemic stroke lesion segmentation (ISLES) 2018 respectively."
},
{
"pmid": "34508971",
"title": "Ensemble based machine learning approach for prediction of glioma and multi-grade classification.",
"abstract": "Glioma is the most pernicious cancer of the nervous system, with histological grade influencing the survival of patients. Despite many studies on the multimodal treatment approach, survival time remains brief. In this study, a novel two-stage ensemble of an ensemble-type machine learning-based predictive framework for glioma detection and its histograde classification is proposed. In the proposed framework, five characteristics belonging to 135 subjects were considered: human telomerase reverse transcriptase (hTERT), chitinase-like protein (YKL-40), interleukin 6 (IL-6), tissue inhibitor of metalloproteinase-1 (TIMP-1) and neutrophil/lymphocyte ratio (NLR). These characteristics were examined using distinctive ensemble-based machine learning classifiers and combination strategies to develop a computer-aided diagnostic system for the non-invasive prediction of glioma cases and their grade. In the first stage, the analysis was conducted to classify glioma cases and control subjects. Machine learning approaches were applied in the second stage to classify the recognised glioma cases into three grades, from grade II, which has a good prognosis, to grade IV, which is also known as glioblastoma. All experiments were evaluated with a five-fold cross-validation method, and the classification results were analysed using different statistical parameters. The proposed approach obtained a high value of accuracy and other statistical parameters compared with other state-of-the-art machine learning classifiers. Therefore, the proposed framework can be utilised for designing other intervention strategies for the prediction of glioma cases and their grades."
},
{
"pmid": "34804460",
"title": "An Efficient Classification of Neonates Cry Using Extreme Gradient Boosting-Assisted Grouped-Support-Vector Network.",
"abstract": "The cry is a loud, high pitched verbal communication of infants. The very high fundamental frequency and resonance frequency characterize a neonatal infant cry having certain sudden variations. Furthermore, in a tiny duration solitary utterance, the cry signal also possesses both voiced and unvoiced features. Mostly, infants communicate with their caretakers through cries, and sometimes, it becomes difficult for the caretakers to comprehend the reason behind the newborn infant cry. As a result, this research proposes a novel work for classifying the newborn infant cries under three groups such as hunger, sleep, and discomfort. For each crying frame, twelve features get extracted through acoustic feature engineering, and the variable selection using random forests was used for selecting the highly discriminative features among the twelve time and frequency domain features. Subsequently, the extreme gradient boosting-powered grouped-support-vector network is deployed for neonate cry classification. The empirical results show that the proposed method could effectively classify the neonate cries under three different groups. The finest experimental results showed a mean accuracy of around 91% for most scenarios, and this exhibits the potential of the proposed extreme gradient boosting-powered grouped-support-vector network in neonate cry classification. Also, the proposed method has a fast recognition rate of 27 seconds in the identification of these emotional cries."
},
{
"pmid": "34595149",
"title": "Performance Evaluation of Regression Models for the Prediction of the COVID-19 Reproduction Rate.",
"abstract": "This paper aims to evaluate the performance of multiple non-linear regression techniques, such as support-vector regression (SVR), k-nearest neighbor (KNN), Random Forest Regressor, Gradient Boosting, and XGBOOST for COVID-19 reproduction rate prediction and to study the impact of feature selection algorithms and hyperparameter tuning on prediction. Sixteen features (for example, Total_cases_per_million and Total_deaths_per_million) related to significant factors, such as testing, death, positivity rate, active cases, stringency index, and population density are considered for the COVID-19 reproduction rate prediction. These 16 features are ranked using Random Forest, Gradient Boosting, and XGBOOST feature selection algorithms. Seven features are selected from the 16 features according to the ranks assigned by most of the above mentioned feature-selection algorithms. Predictions by historical statistical models are based solely on the predicted feature and the assumption that future instances resemble past occurrences. However, techniques, such as Random Forest, XGBOOST, Gradient Boosting, KNN, and SVR considered the influence of other significant features for predicting the result. The performance of reproduction rate prediction is measured by mean absolute error (MAE), mean squared error (MSE), root mean squared error (RMSE), R-Squared, relative absolute error (RAE), and root relative squared error (RRSE) metrics. The performances of algorithms with and without feature selection are similar, but a remarkable difference is seen with hyperparameter tuning. The results suggest that the reproduction rate is highly dependent on many features, and the prediction should not be based solely upon past values. In the case without hyperparameter tuning, the minimum value of RAE is 0.117315935 with feature selection and 0.0968989 without feature selection, respectively. The KNN attains a low MAE value of 0.0008 and performs well without feature selection and with hyperparameter tuning. The results show that predictions performed using all features and hyperparameter tuning is more accurate than predictions performed using selected features."
},
{
"pmid": "27524848",
"title": "Cry-based infant pathology classification using GMMs.",
"abstract": "Traditional studies of infant cry signals focus more on non-pathology-based classification of infants. In this paper, we introduce a noninvasive health care system that performs acoustic analysis of unclean noisy infant cry signals to extract and measure certain cry characteristics quantitatively and classify healthy and sick newborn infants according to only their cries. In the conduct of this newborn cry-based diagnostic system, the dynamic MFCC features along with static Mel-Frequency Cepstral Coefficients (MFCCs) are selected and extracted for both expiratory and inspiratory cry vocalizations to produce a discriminative and informative feature vector. Next, we create a unique cry pattern for each cry vocalization type and pathological condition by introducing a novel idea using the Boosting Mixture Learning (BML) method to derive either healthy or pathology subclass models separately from the Gaussian Mixture Model-Universal Background Model (GMM-UBM). Our newborn cry-based diagnostic system (NCDS) has a hierarchical scheme that is a treelike combination of individual classifiers. Moreover, a score-level fusion of the proposed expiratory and inspiratory cry-based subsystems is performed to make a more reliable decision. The experimental results indicate that the adapted BML method has lower error rates than the Bayesian approach or the maximum a posteriori probability (MAP) adaptation approach when considered as a reference method."
},
{
"pmid": "31350110",
"title": "The Vocalist in the Crib: the Flexibility of Respiratory Behaviour During Crying in Healthy Neonates.",
"abstract": "OBJECTIVE\nTo evaluate the flexibility of respiratory behavior during spontaneous crying using an objective analysis of temporal measures in healthy neonates.\n\n\nPARTICIPANTS\nA total of 1,375 time intervals, comprising breath cycles related to the spontaneous crying of 72 healthy, full-term neonates (35 females) aged between two and four days, were analyzed quantitatively.\n\n\nMETHODS\nDigital recordings (44 kHz, 16 bit) of cries emitted in a spontaneous, pain-free context were obtained at the University Children's Hospital Wurzburg. The amplitude-by-time representation of PRAAT: doing phonetics by computer (38) was used for the manual segmentation of single breath-cycles involving phonation. Cursors were set in these time intervals to mark the duration of inspiratory (IPh) and expiratory phases (EPh), and double-checks were carried out using auditory analyses. A PRAAT script was used to extract temporal features automatically. The only intervals analyzed were those that contained an expiratory cry utterance embedded within preceding and subsequent inspiratory phonation (IP). Beyond the reliable identification of IPh and EPh, this approach also guaranteed inter-individual and inter-utterance homogenization with respect to inspiratory strength and an unconstructed vocal tract.\n\n\nRESULTS\nDespite the physiological constraints of the neonatal respiratory system, a high degree of flexibility in the ratio of IPh/EPh was observed. This ratio changed hyperbolically (r = 0.71) with breath-cycle duration. Descriptive statistics for all the temporal measures are reported as reference values for future studies.\n\n\nCONCLUSION\nThe existence of respiratory exploration during the spontaneous crying of healthy neonates is supported by quantitative data. From a clinical perspective, the data demonstrate the presence of a high degree of flexibility in the respiratory behavior, particularly neonates' control capability with respect to variable cry durations. These data are discussed in relation to future clinical applications."
}
] |
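As an illustrative aside to the infant-cry record above (not part of the original record), the sketch below outlines, under assumed tools (librosa and scikit-learn) and placeholder data, how MFCC feature vectors could feed a heterogeneous stacking ensemble of the kind the abstract describes; the single-level stack here stands in for the multistage design.

```python
# Hedged, simplified sketch of an MFCC + stacking-ensemble pipeline.
# Paths, labels, and hyperparameters are placeholders, not the authors' configuration.
import numpy as np
import librosa
from sklearn.ensemble import StackingClassifier, GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression

def mfcc_vector(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)                     # load one cry recording
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])  # fixed-length summary

# X would hold one mfcc_vector per recording; y holds labels such as hunger/pain/sleep/diaper.
stack = StackingClassifier(
    estimators=[("gb", GradientBoostingClassifier()), ("rf", RandomForestClassifier())],
    final_estimator=LogisticRegression(max_iter=1000),
)
# stack.fit(X_train, y_train); preds = stack.predict(X_test)
```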
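Relating to the suction-graspability record that follows, which predicts pixel-wise grasp quality and reachability and removes unreachable candidates before motion planning, the following NumPy sketch illustrates one plausible way such maps could be combined at inference time. It is an assumption-laden illustration (threshold, map shapes, and function name are made up), not the authors' code.

```python
# Hedged NumPy-only sketch: mask out unreachable pixels, then pick the
# highest-quality remaining pixel as the suction grasp point.
import numpy as np

def select_suction_pixel(quality, reachability, reach_thresh=0.5):
    """quality, reachability: (H, W) maps in [0, 1] predicted per pixel."""
    masked = np.where(reachability >= reach_thresh, quality, -np.inf)
    if not np.isfinite(masked).any():
        return None                                  # no reachable candidate in this view
    return np.unravel_index(np.argmax(masked), masked.shape)  # (row, col) of grasp pixel

rng = np.random.default_rng(0)
quality_map, reach_map = rng.random((96, 96)), rng.random((96, 96))
print(select_suction_pixel(quality_map, reach_map))
```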
Frontiers in Neurorobotics | null | PMC8987443 | 10.3389/fnbot.2022.806898 | Learning Suction Graspability Considering Grasp Quality and Robot Reachability for Bin-Picking | Deep learning has been widely used for inferring robust grasps. Although human-labeled RGB-D datasets were initially used to learn grasp configurations, preparation of this kind of large dataset is expensive. To address this problem, images were generated by a physical simulator, and a physically inspired model (e.g., a contact model between a suction vacuum cup and object) was used as a grasp quality evaluation metric to annotate the synthesized images. However, this kind of contact model is complicated and requires parameter identification by experiments to ensure real world performance. In addition, previous studies have not considered manipulator reachability such as when a grasp configuration with high grasp quality is unable to reach the target due to collisions or the physical limitations of the robot. In this study, we propose an intuitive geometric analytic-based grasp quality evaluation metric. We further incorporate a reachability evaluation metric. We annotate the pixel-wise grasp quality and reachability by the proposed evaluation metric on synthesized images in a simulator to train an auto-encoder–decoder called suction graspability U-Net++ (SG-U-Net++). Experiment results show that our intuitive grasp quality evaluation metric is competitive with a physically-inspired metric. Learning the reachability helps to reduce motion planning computation time by removing obviously unreachable candidates. The system achieves an overall picking speed of 560 PPH (pieces per hour). | 2. Related Works2.1. Pixel-Wise Graspability LearningIn early studies, deep neural networks were used to directly predict the candidate grasp configurations without considering the grasp quality (Asif et al., 2018; Zhou X. et al., 2018; Xu et al., 2021). However, since there can be multiple grasp candidates for an object that has a complicated shape or multiple objects in a cluttered scene, learning graspablity is required for the planner to find the optimal grasp among the candidates.Pixel-wise graspablity learning uses RGB-D or depth-only images to infer the grasp success probability at each pixel. Zeng et al. (2018b) used a manually labeled dataset to train fully convolutional networks (FCNs) for predicting pixel-wise grasp quality (affordance) maps of four pre-defined grasping primitives. Liu et al. (2020) performed active exploration by pushing objects to find good grasp affordable maps predicted by Zeng's FCNs. Recently, Utomo et al. (2021) modified the architecture of Zeng's FCNs to improve the inference precision and speed. Based on Zeng's concept, Hasegawa et al. (2019) incorporated a primitive template matching module, making the system adaptive to changes in grasping primitives. Zeng et al. also applied the concept of pixel-wise affordance learning to other manipulation tasks such as picking by synergistic coordination of push and grasp motions (Zeng et al., 2018a), and picking and throwing (Zeng et al., 2020). However, preparing huge amounts of RGB-D images and manually labeling the grasp quality requires a large amount of effort.Faced with the dataset generation cost of RGB-D based graspability learning, researchers started to use depth-image-only based learning. The merits of using depth images are that they are easier to synthesize and annotate in a physical simulator compared with RGB images. Morrison et al. 
(2020) proposed a generative grasping convolutional neural network (GG-CNN) to rapidly predict pixel-wise grasp quality. Based on a similar concept of grasp quality learning, the U-Grasping fully convolutional neural network (UGNet) (Song et al., 2019), Generative Residual Convolutional Neural Network (GRConvNet) (Kumra et al., 2020), and Generative Inception Neural Network (GI-NNet) (Shukla et al., 2021) were later proposed and were reported to achieve higher accuracy than GG-CNN. Le et al. (2021) extended GG-CNN to be capable of predicting the grasp quality of deformable objects by incorporating stiffness information. Morrison et al. (2019) also applied GG-CNN to a multi-view picking controller to avoid bad grasp poses caused by occlusion and collision. However, the grasp quality dataset of GG-CNN was generated by creating masks of the center third of each grasping rectangle of the Cornell Grasping dataset (Lenz et al., 2015) and Jacquard dataset (Depierre et al., 2018). This annotation method did not deeply analyze the interaction between hand and object, which is expected to lead to insufficient representation of grasp robustness.To improve the robustness of grasp quality annotation, a physically-inspired contact force model was designed to label pixel-wise grasp quality. Mahler et al. (2018, 2019) designed a quasi-static spring model for the contact force between the vacuum cup and the object. Based on the designed compliant contact model, they assessed the grasp quality in terms of grasp robustness in a physical simulator. They further proposed GQ-CNN to learn the grasp quality and used a sampling-based method to propose an optimal grasp in the inference phase, and also extended their study by proposing a fully convolutional GQ-CNN (Satish et al., 2019) to infer pixel-wise grasp quality, which achieved faster grasping. Recently, (Cao et al., 2021) used an auto-encoder–decoder to infer the grasp quality, which was labeled by a similar contact model to that used in GQ-CNN, to generate the suction pose. However, the accuracy of the contact model depends on the model complexity and parameter tuning. High complexity may lead to a long computation cost of annotation. Parameter identification by real world experiment (Bernardin et al., 2019) might also be necessary to ensure the validity of the contact model.Our approach also labeled the grasp quality in synthesized depth images. Unlike GQ-CNN, we proposed a more intuitive evaluation metric based on a geometrical analytic method rather than a complicated contact analytic model. Our results showed that the intuitive evaluation metric was competitive with GQ-CNN. A reachability heatmap was further incorporated to help filter pixels that had high grasp quality value but were unreachable.2.2. Reachability AssessmentReachability was previously assessed by sampling a large number of grasp poses and then using forward kinematics calculation, inverse kinematics calculation, or manipulability ellipsoid evaluation to investigate whether the sampled poses were reachable (Zacharias et al., 2007; Porges et al., 2014, 2015; Vahrenkamp and Asfour, 2015; Makhal and Goins, 2018). The reachability map was generated off-line, and the feasibility of candidate grasp poses was queried during grasp planning for picking static (Akinola et al., 2018; Sundaram et al., 2020) or moving (Akinola et al., 2021) objects. However, creating an off-line map with high accuracy for a large space is computationally expensive. 
In addition, the off-line map considered only collisions between the manipulator and a constrained environment (e.g., a fixed bin or wall); since the environment for picking in a cluttered scene is dynamic, collision checking between the manipulator and surrounding objects is still needed, and this can be time consuming. Hence, recent studies have started to learn reachability with collision awareness of grasp poses. Kim and Perez (2021) designed a density net to learn the reachability density of a given pose but considered only self-collision. Murali et al. (2020) used a learned grasp sampler to sample 6D grasp poses and proposed a CollisionNet to assess the collision score of the sampled poses. Lou et al. (2020) proposed a 3D CNN and reachability predictor to predict the pose stability and reachability of sampled grasp poses. They later extended the work by incorporating collision awareness for learning approachable grasp poses (Lou et al., 2021). These sampling-based methods require designing or training a good grasp sampler to infer reachability. Our approach is one-shot: it directly infers the pixel-wise reachability from the depth image without sampling (an illustrative sketch of combining such pixel-wise quality and reachability maps is given just before this record). | [
"27295638",
"32012874",
"33869293",
"33162883",
"27295638",
"33137754",
"26074671",
"32015543"
] | [
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "32012874",
"title": "Depth Image-Based Deep Learning of Grasp Planning for Textureless Planar-Faced Objects in Vision-Guided Robotic Bin-Picking.",
"abstract": "Bin-picking of small parcels and other textureless planar-faced objects is a common task at warehouses. A general color image-based vision-guided robot picking system requires feature extraction and goal image preparation of various objects. However, feature extraction for goal image matching is difficult for textureless objects. Further, prior preparation of huge numbers of goal images is impractical at a warehouse. In this paper, we propose a novel depth image-based vision-guided robot bin-picking system for textureless planar-faced objects. Our method uses a deep convolutional neural network (DCNN) model that is trained on 15,000 annotated depth images synthetically generated in a physics simulator to directly predict grasp points without object segmentation. Unlike previous studies that predicted grasp points for a robot suction hand with only one vacuum cup, our DCNN also predicts optimal grasp patterns for a hand with two vacuum cups (left cup on, right cup on, or both cups on). Further, we propose a surface feature descriptor to extract surface features (center position and normal) and refine the predicted grasp point position, removing the need for texture features for vision-guided robot control and sim-to-real modification for DCNN model training. Experimental results demonstrate the efficiency of our system, namely that a robot with 7 degrees of freedom can pick randomly posed textureless boxes in a cluttered environment with a 97.5% success rate at speeds exceeding 1000 pieces per hour."
},
{
"pmid": "33869293",
"title": "Development and Grasp Stability Estimation of Sensorized Soft Robotic Hand.",
"abstract": "This paper introduces the development of an anthropomorphic soft robotic hand integrated with multiple flexible force sensors in the fingers. By leveraging on the integrated force sensing mechanism, grip state estimation networks have been developed. The robotic hand was tasked to hold the given object on the table for 1.5 s and lift it up within 1 s. The object manipulation experiment of grasping and lifting the given objects were conducted with various pneumatic pressure (50, 80, and 120 kPa). Learning networks were developed to estimate occurrence of object instability and slippage due to acceleration of the robot or insufficient grasp strength. Hence the grip state estimation network can potentially feedback object stability status to the pneumatic control system. This would allow the pneumatic system to use suitable pneumatic pressure to efficiently handle different objects, i.e., lower pneumatic pressure (50 kPa) for lightweight objects which do not require high grasping strength. The learning process of the soft hand is made challenging by curating a diverse selection of daily objects, some of which displays dynamic change in shape upon grasping. To address the cost of collecting extensive training datasets, we adopted one-shot learning (OSL) technique with a long short-term memory (LSTM) recurrent neural network. OSL aims to allow the networks to learn based on limited training data. It also promotes the scalability of the network to accommodate more grasping objects in the future. Three types of LSTM-based networks have been developed and their performance has been evaluated in this study. Among the three LSTM networks, triplet network achieved overall stability estimation accuracy at 89.96%, followed by LSTM network with 88.00% and Siamese LSTM network with 85.16%."
},
{
"pmid": "33162883",
"title": "Event-Based Robotic Grasping Detection With Neuromorphic Vision Sensor and Event-Grasping Dataset.",
"abstract": "Robotic grasping plays an important role in the field of robotics. The current state-of-the-art robotic grasping detection systems are usually built on the conventional vision, such as the RGB-D camera. Compared to traditional frame-based computer vision, neuromorphic vision is a small and young community of research. Currently, there are limited event-based datasets due to the troublesome annotation of the asynchronous event stream. Annotating large scale vision datasets often takes lots of computation resources, especially when it comes to troublesome data for video-level annotation. In this work, we consider the problem of detecting robotic grasps in a moving camera view of a scene containing objects. To obtain more agile robotic perception, a neuromorphic vision sensor (Dynamic and Active-pixel Vision Sensor, DAVIS) attaching to the robot gripper is introduced to explore the potential usage in grasping detection. We construct a robotic grasping dataset named Event-Grasping dataset with 91 objects. A spatial-temporal mixed particle filter (SMP Filter) is proposed to track the LED-based grasp rectangles, which enables video-level annotation of a single grasp rectangle per object. As LEDs blink at high frequency, the Event-Grasping dataset is annotated at a high frequency of 1 kHz. Based on the Event-Grasping dataset, we develop a deep neural network for grasping detection that considers the angle learning problem as classification instead of regression. The method performs high detection accuracy on our Event-Grasping dataset with 93% precision at an object-wise level split. This work provides a large-scale and well-annotated dataset and promotes the neuromorphic vision applications in agile robot."
},
{
"pmid": "27295638",
"title": "A Novel Method to Detect Functional microRNA Regulatory Modules by Bicliques Merging.",
"abstract": "UNLABELLED\nMicroRNAs (miRNAs) are post-transcriptional regulators that repress the expression of their targets. They are known to work cooperatively with genes and play important roles in numerous cellular processes. Identification of miRNA regulatory modules (MRMs) would aid deciphering the combinatorial effects derived from the many-to-many regulatory relationships in complex cellular systems. Here, we develop an effective method called BiCliques Merging (BCM) to predict MRMs based on bicliques merging. By integrating the miRNA/mRNA expression profiles from The Cancer Genome Atlas (TCGA) with the computational target predictions, we construct a weighted miRNA regulatory network for module discovery. The maximal bicliques detected in the network are statistically evaluated and filtered accordingly. We then employed a greedy-based strategy to iteratively merge the remaining bicliques according to their overlaps together with edge weights and the gene-gene interactions. Comparing with existing methods on two cancer datasets from TCGA, we showed that the modules identified by our method are more densely connected and functionally enriched. Moreover, our predicted modules are more enriched for miRNA families and the miRNA-mRNA pairs within the modules are more negatively correlated. Finally, several potential prognostic modules are revealed by Kaplan-Meier survival analysis and breast cancer subtype analysis.\n\n\nAVAILABILITY\nBCM is implemented in Java and available for download in the supplementary materials, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/ TCBB.2015.2462370."
},
{
"pmid": "33137754",
"title": "Learning ambidextrous robot grasping policies.",
"abstract": "Universal picking (UP), or reliable robot grasping of a diverse range of novel objects from heaps, is a grand challenge for e-commerce order fulfillment, manufacturing, inspection, and home service robots. Optimizing the rate, reliability, and range of UP is difficult due to inherent uncertainty in sensing, control, and contact physics. This paper explores \"ambidextrous\" robot grasping, where two or more heterogeneous grippers are used. We present Dexterity Network (Dex-Net) 4.0, a substantial extension to previous versions of Dex-Net that learns policies for a given set of grippers by training on synthetic datasets using domain randomization with analytic models of physics and geometry. We train policies for a parallel-jaw and a vacuum-based suction cup gripper on 5 million synthetic depth images, grasps, and rewards generated from heaps of three-dimensional objects. On a physical robot with two grippers, the Dex-Net 4.0 policy consistently clears bins of up to 25 novel objects with reliability greater than 95% at a rate of more than 300 mean picks per hour."
},
{
"pmid": "26074671",
"title": "Grasp quality measures: review and performance.",
"abstract": "The correct grasp of objects is a key aspect for the right fulfillment of a given task. Obtaining a good grasp requires algorithms to automatically determine proper contact points on the object as well as proper hand configurations, especially when dexterous manipulation is desired, and the quantification of a good grasp requires the definition of suitable grasp quality measures. This article reviews the quality measures proposed in the literature to evaluate grasp quality. The quality measures are classified into two groups according to the main aspect they evaluate: location of contact points on the object and hand configuration. The approaches that combine different measures from the two previous groups to obtain a global quality measure are also reviewed, as well as some measures related to human hand studies and grasp performance. Several examples are presented to illustrate and compare the performance of the reviewed measures."
},
{
"pmid": "32015543",
"title": "SciPy 1.0: fundamental algorithms for scientific computing in Python.",
"abstract": "SciPy is an open-source scientific computing library for the Python programming language. Since its initial release in 2001, SciPy has become a de facto standard for leveraging scientific algorithms in Python, with over 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories and millions of downloads per year. In this work, we provide an overview of the capabilities and development practices of SciPy 1.0 and highlight some recent technical developments."
}
] |
Frontiers in Psychology | null | PMC8987594 | 10.3389/fpsyg.2022.859159 | Metaverse-Powered Experiential Situational English-Teaching Design: An Emotion-Based Analysis Method | The metaverse builds a virtual world in cyberspace that is both mapped onto and independent of the real world, drawing on the growing maturity of various digital technologies, such as virtual reality (VR), augmented reality (AR), big data, and 5G; this is important for the future development of a wide variety of professions, including education. The metaverse represents the latest stage of the development of visual immersion technology. Its essence is an online digital space parallel to the real world, which is becoming a practical field for the innovation and development of human society. The most prominent advantage of the English-teaching metaverse is that it can provide an immersive and interactive teaching field for teachers and students, simultaneously meeting the teaching and learning needs of teachers and students in both the physical world and the virtual world. This study constructs experiential situational English-teaching scenarios and proposes convolutional neural network (CNN)–recurrent neural network (RNN) fusion models to recognize students' emotions from electroencephalogram (EEG) signals in experiential English teaching across the time-domain, frequency-domain, and spatial-domain feature spaces. Analyzing EEG data collected from students with an OpenBCI EEG Electrode Cap Kit, the experiential English-teaching scenarios are designed as three types: sequential guidance, comprehensive exploration, and crowd-creation construction. Experimental data analysis of the three kinds of learning activities shows that metaverse-powered experiential situational English teaching can improve students' sense of interactivity, immersion, and cognition, and that the CNN–RNN fusion model clearly outperforms the baselines in accuracy and analysis time. This study can provide a useful reference for the emotion recognition of students under COVID-19. | Related Work
Although the metaverse has not yet received wide attention from educational researchers, its underlying technologies have been discussed by scholars and many results have been achieved. In Yang et al. (2020), by establishing an AI education robot based on voice interaction, the authors built a hybrid physical education teaching mode to realize the personalized education of students. In Wang (2021), the author proposed a simulation model which could dynamically analyze the role of students in educational management services. In Xuan et al. (2021), the authors presented a deep neural network-based personalized learning material recommendation algorithm. In Xiao and Yi (2021), the authors used AI to carry out personalized education reform, analyze the information of students before entering colleges and universities, and propose a construction method for a personalized AI-based training model. Based on the theory of deep learning, in Wang Z. et al. (2021), the authors evaluated the practical application value of teaching methods under the guidance of educational psychology and AI design. In Sun et al. (2021), the authors combined the AI module with knowledge recommendation and developed an online English-teaching system based on the general teaching assistant system.
The digital twin also provides a resource pattern for the integration of virtual and real education.
To improve the teaching efficiency of rhythmic gymnastics, the author in Shi (2021) constructed a corresponding auxiliary teaching system based on VR image recognition technology and digital twins, combined with the actual needs of rhythmic gymnastics teaching. To study methods for improving teachers' teaching ability, the authors in Chen et al. (2021) established a corresponding teacher-ability evaluation model based on machine learning and digital twins. In Razzaq et al. (2022), the authors proposed a digital twin framework called "DeepClassRooms" to monitor school attendance and curriculum content.
On the most critical aspect, creating telepresence, some researchers have studied the effect of VR teaching and experimentally verified the role of VR technology in promoting students' learning participation, teaching efficiency, and learning outcomes, which also provides relevant references for creating a virtual environment with telepresence for students. In Tang (2021), the author studied the application of VR technology in college physical education and discussed the role of VR technology in improving the quality of physical education. In Young et al. (2020), the authors evaluated students' experience of applying immersive technology in a higher education environment, especially physical geography students' exploration of geomorphology theory using VR. Based on a VR education platform, in Zhang et al. (2020), the authors clarified the importance and some disadvantages of education under the traditional education mode. Through the collected literature and survey data, the authors analyzed the feasibility of combining VR and education and provided a theoretical basis for it. In Hagge (2021), VR technology was introduced into four face-to-face geography courses over two semesters; throughout each semester, some of the students regularly used VR devices to visit places related to the lectures.
Many EEG emotion recognition methods have been proposed. In Liu and Fu (2021), the authors proposed to learn multi-channel features from EEG signals for human emotion recognition. In Aguinaga et al. (2021), the authors proposed a two-stage deep learning model to recognize emotional states by associating facial expressions with EEG. In Song et al. (2020), the authors proposed a dynamic graph CNN-based multi-channel EEG emotion recognition method. In Zeng et al. (2021), the authors proposed an emotional wheel attention-based emotion distribution learning model (EWAEDL). In Gao et al. (2022), the authors proposed a novel refined detrended fluctuation analysis method, that is, multi-order detrended fluctuation analysis (MODFA). In Han et al. (2021), the authors proposed a novel cross-modal emotion embedding framework, called EmoBed, which aimed to improve the performance of existing emotion recognition systems by using knowledge from other auxiliary modalities.
"34681982",
"33670860",
"33476273",
"34554373",
"35017794",
"34294002",
"34671295",
"34964667",
"34777091"
] | [
{
"pmid": "34681982",
"title": "An Insider Data Leakage Detection Using One-Hot Encoding, Synthetic Minority Oversampling and Machine Learning Techniques.",
"abstract": "Insider threats are malicious acts that can be carried out by an authorized employee within an organization. Insider threats represent a major cybersecurity challenge for private and public organizations, as an insider attack can cause extensive damage to organization assets much more than external attacks. Most existing approaches in the field of insider threat focused on detecting general insider attack scenarios. However, insider attacks can be carried out in different ways, and the most dangerous one is a data leakage attack that can be executed by a malicious insider before his/her leaving an organization. This paper proposes a machine learning-based model for detecting such serious insider threat incidents. The proposed model addresses the possible bias of detection results that can occur due to an inappropriate encoding process by employing the feature scaling and one-hot encoding techniques. Furthermore, the imbalance issue of the utilized dataset is also addressed utilizing the synthetic minority oversampling technique (SMOTE). Well known machine learning algorithms are employed to detect the most accurate classifier that can detect data leakage events executed by malicious insiders during the sensitive period before they leave an organization. We provide a proof of concept for our model by applying it on CMU-CERT Insider Threat Dataset and comparing its performance with the ground truth. The experimental results show that our model detects insider data leakage events with an AUC-ROC value of 0.99, outperforming the existing approaches that are validated on the same dataset. The proposed model provides effective methods to address possible bias and class imbalance issues for the aim of devising an effective insider data leakage detection system."
},
{
"pmid": "33670860",
"title": "Detection of Gadolinium with an Impedimetric Platform Based on Gold Electrodes Functionalized by 2-Methylpyridine-Substituted Cyclam.",
"abstract": "Gadolinium is extensively used in pharmaceuticals and is very toxic, so its sensitive detection is mandatory. This work presents the elaboration of a gadolinium chemical sensor based on 2-methylpyridine-substituted cyclam thin films, deposited on gold electrodes, using electrochemical impedance spectroscopy (EIS). The 2-methylpyridine-substituted cyclam (bis-N-MPyC) was synthesized in three steps, including the protection of cyclam by the formation of its CH2-bridged aminal derivative; the product was characterized by liquid 1H and 13C NMR spectroscopy. Spin-coated thin films of bis-N-MPyC on gold wafers were characterized by means of infrared spectroscopy in ATR (Attenuated Total Reflectance) mode, contact angle measurements and atomic force microscopy. The impedimetric chemical sensor was studied in the presence of increasing concentrations of lanthanides (Gd3+, Eu3+, Tb3+, Dy3+). Nyquist plots were fitted with an equivalent electrical circuit including two RC circuits in series corresponding to the bis-N-MPyC film and its interface with the electrolyte. The main parameter that varies with gadolinium concentration is the resistance of the film/electrolyte interface (R), correlated to the rate of exchange between the proton and the lanthanide ion. Based on this parameter, the detection limit obtained is 35 pM. The bis-N-MPyC modified gold electrode was tested for the detection of gadolinium in spiked diluted negative urine control samples."
},
{
"pmid": "33476273",
"title": "An Adaptive Localized Decision Variable Analysis Approach to Large-Scale Multiobjective and Many-Objective Optimization.",
"abstract": "This article proposes an adaptive localized decision variable analysis approach under the decomposition-based framework to solve the large-scale multiobjective and many-objective optimization problems (MaOPs). Its main idea is to incorporate the guidance of reference vectors into the control variable analysis and optimize the decision variables using an adaptive strategy. Especially, in the control variable analysis, for each search direction, the convergence relevance degree of each decision variable is measured by a projection-based detection method. In the decision variable optimization, the grouped decision variables are optimized with an adaptive scalarization strategy, which is able to adaptively balance the convergence and diversity of the solutions in the objective space. The proposed algorithm is evaluated with a suite of test problems with 2-10 objectives and 200-1000 variables. Experimental results validate the effectiveness and efficiency of the proposed algorithm on the large-scale multiobjective and MaOPs."
},
{
"pmid": "34554373",
"title": "Unified Retrospective EEG Motion Educated Artefact Suppression for EEG-fMRI to Suppress Magnetic Field Gradient Artefacts During Motion.",
"abstract": "The data quality of simultaneously acquired electroencephalography and functional magnetic resonance imaging (EEG-fMRI) can be strongly affected by motion. Recent work has shown that the quality of fMRI data can be improved by using a Moiré-Phase-Tracker (MPT)-camera system for prospective motion correction. The use of the head position acquired by the MPT-camera-system has also been shown to correct motion-induced voltages, ballistocardiogram (BCG) and gradient artefact residuals separately. In this work we show the concept of an integrated framework based on the general linear model to provide a unified motion informed model of in-MRI artefacts. This model (retrospective EEG motion educated gradient artefact suppression, REEG-MEGAS) is capable of correcting voltage-induced, BCG and gradient artefact residuals of EEG data acquired simultaneously with prospective motion corrected fMRI. In our results, we have verified that applying REEG-MEGAS correction to EEG data acquired during subject motion improves the data quality in terms of motion induced voltages and also GA residuals in comparison to standard Artefact Averaging Subtraction and Retrospective EEG Motion Artefact Suppression. Besides that, we provide preliminary evidence that although adding more regressors to a model may slightly affect the power of physiological signals such as the alpha-rhythm, its application may increase the overall quality of a dataset, particularly when strongly affected by motion. This was verified by analysing the EEG traces, power spectra density and the topographic distribution from two healthy subjects. We also have verified that the correction by REEG-MEGAS improves higher frequency artefact correction by decreasing the power of Gradient Artefact harmonics. Our method showed promising results for decreasing the power of artefacts for frequencies up to 250 Hz. Additionally, REEG-MEGAS is a hybrid framework that can be implemented for real time prospective motion correction of EEG and fMRI data. Among other EEG-fMRI applications, the approach described here may benefit applications such as EEG-fMRI neurofeedback and brain computer interface, which strongly rely on the prospective acquisition and application of motion artefact removal."
},
{
"pmid": "35017794",
"title": "DeepClassRooms: a deep learning based digital twin framework for on-campus class rooms.",
"abstract": "A lot of different methods are being opted for improving the educational standards through monitoring of the classrooms. The developed world uses Smart classrooms to enhance faculty efficiency based on accumulated learning outcomes and interests. Smart classroom boards, audio-visual aids, and multimedia are directly related to the Smart classroom environment. Along with these facilities, more effort is required to monitor and analyze students' outcomes, teachers' performance, attendance records, and contents delivery in on-campus classrooms. One can achieve more improvement in quality teaching and learning outcomes by developing digital twins in on-campus classrooms. In this article, we have proposed DeepClass-Rooms, a digital twin framework for attendance and course contents monitoring for the public sector schools of Punjab, Pakistan. DeepClassRooms is cost-effective and requires RFID readers and high-edge computing devices at the Fog layer for attendance monitoring and content matching, using convolution neural network for on-campus and online classes."
},
{
"pmid": "34294002",
"title": "Immersion, presence, and episodic memory in virtual reality environments.",
"abstract": "Although virtual reality (VR) represents a promising tool for psychological research, much remains unknown about how properties of VR environments affect episodic memory. Two closely related characteristics of VR are immersion (i.e., the objective degree to which VR naturalistically portrays a real-world environment) and presence (i.e., the subjective sense of being \"mentally transported\" to the virtual world). Although some research has demonstrated benefits of increased immersion on VR-based learning, it is uncertain how broadly and consistently this benefit extends to individual components of immersion. Moreover, it is unclear whether presence may mediate the effect of immersion on memory. Three experiments assessed how presence and memory were affected by three manipulations of immersion: field of view, unimodal (visual only) vs. bimodal (audiovisual) environments, and the realism of lighting effects (e.g., the occurrence or absence of shadows). Results indicated that effects of different manipulations of immersion are heterogeneous, affecting memory in some instances and presence in others, but not necessarily both. Importantly, no evidence for a mediating effect of presence emerged in any of these experiments, nor in a combined cross-experimental analysis. This outcome demonstrates a degree of independence between immersion and presence with regard to their influence on episodic memory performance."
},
{
"pmid": "34671295",
"title": "The Teaching Design Methods Under Educational Psychology Based on Deep Learning and Artificial Intelligence.",
"abstract": "This study aims to evaluate the practical application value of the teaching method under the guidance of educational psychology and artificial intelligence (AI) design, taking the deep learning theory as the basis of teaching design. The research objects of this study involve all the teachers, students, and students' parents of Ningbo Middle School. The questionnaires are developed to survey the changes in the performance of students before and after the implementation of the teaching design and the satisfaction of all teachers, students, and parents to different teaching methods by comparing the two results and the satisfaction ratings. All objects in this study volunteer to participate in the questionnaire survey. The results suggest the following: (1) the effective return rates of the questionnaires to teachers, students, and parents are 97, 99, and 95%, respectively, before implementation; whereas those after implementation are 98, 99, and 99%, respectively. Comparison of the two return results suggests that there was no significant difference statistically (P > 0.05). (2) Proportion of scoring results before and after implementation is given as follows: the proportions of levels A, B, C, and D are 35, 40, 15, and 10% before implementation, respectively; while those after implementation are 47, 36, 12, and 5%, respectively. After the implementation, the proportion of level A is obviously higher than that before the implementation, and the proportions of other levels decreased in contrast to those before the implementation, showing statistically obvious differences (P < 0.05). (3) The change in the performance of each subject after 1 year implementation is significantly higher than that before the implementation, and the change in the average performance of each subject shows an upward trend. In summary, (1) the comparison on the effective return rate of the satisfaction survey questionnaire proves the feasibility of its scoring results. (2) The comparison of the survey scoring results shows that people are more satisfied with the new educational design teaching method. (3) The comparison of the change in the performance of each subject before and after the implementation indirectly reflects the drawbacks of partial subject education, indicating that the school should pay the same equal attention to every subject. (4) Due to various objective and subjective factors, the results of this study may be different from the actual situation slightly, and its accuracy has to be further explored in the future."
},
{
"pmid": "34777091",
"title": "The Teaching Pattern of Law Majors Using Artificial Intelligence and Deep Neural Network Under Educational Psychology.",
"abstract": "With the increasing attention to the cultivation of legal talents, a new teaching model has been explored through artificial intelligence (AI) technology under educational psychology, which focuses on improving learning initiative, teaching methods, and teaching quality of students. First, the application of AI and deep neural network (DNN) algorithms are reviewed in education, and the advantages and disadvantages of traditional learning material recommendation algorithms are summarized. Then, a personalized learning material recommendation algorithm is put forward based on DNN, together with an adaptive learning system based on DNN. Finally, the traditional user-based collaborative filtering (UserCF) model and lifelong topic modeling (LTM) algorithm are introduced as the control group to verify the performance of the proposed recommendation system. The results show that the best learning rate of model training is 0.0001, the best dropout value is 0.5, and the best batch size is 32. The proposed personalized learning resource recommendation method based on deep learning (DL) still has good stability under various training data scales. The personalized test questions of recommended students are moderately difficult. It is easier to recommend materials according to the acquisition of knowledge points and the practicability of the recommended test questions of students. Personalized learning material recommendation algorithm based on AI can timely feedback needs of students, thereby improving the effect of classroom teaching. Using the combination of AI and DL algorithms in teaching design, students can complete targeted personalized learning assignments, which is of great significance to cultivate high-level legal professionals."
}
] |
Frontiers in Neuroscience | null | PMC8987922 | 10.3389/fnins.2022.836100 | Ridge Penalization in High-Dimensional Testing With Applications to Imaging Genetics | High-dimensionality is ubiquitous in various scientific fields such as imaging genetics, where a deluge of functional and structural data on brain-relevant genetic polymorphisms are investigated. It is crucial to identify which genetic variations are consequential in identifying neurological features of brain connectivity compared to merely random noise. Statistical inference in high-dimensional settings poses multiple challenges involving analytical and computational complexity. A widely implemented strategy in addressing inference goals is penalized inference. In particular, the role of the ridge penalty in high-dimensional prediction and estimation has been actively studied in the past several years. This study focuses on ridge-penalized tests in high-dimensional hypothesis testing problems by proposing and examining a class of methods for choosing the optimal ridge penalty. We present our findings on strategies to improve the statistical power of ridge-penalized tests and what determines the optimal ridge penalty for hypothesis testing. The application of our work to an imaging genetics study and biological research will be presented. | 2. Related Work
2.1. Mantel Test
Suppose we have $(X_i, Y_i) \in \mathbb{R}^p \times \mathbb{R}^q$ for all subjects i = 1, 2, …, n, where p is the number of explanatory variables and q is the number of response variables. In imaging genetics studies, the value of p usually corresponds to the total number of genetic variations, such as single nucleotide polymorphisms (SNPs) in genomics or differentially methylated probes in epigenetics. Meanwhile, the response variables correspond to the brain imaging information, such as pairwise alpha-band coherence measures obtained from several EEG channels. Suppose $X_i$ and $X_j$ correspond to the vectors of explanatory variables for subjects i and j, respectively. As described in Pluta et al. (2021), let $K_X(\cdot, \cdot)$ and $K_Y(\cdot, \cdot)$ be positive semi-definite kernel functions on X × X and Y × Y, respectively, where the data matrices X and Y are column-centered. Specifically, we are interested in investigating the kernel function $K_X(X_i, X_j) = X_i^{\top} W_{\lambda_X} X_j$, where $W_{\lambda_X} = (X^{\top}X + \lambda_X I_p)^{-1}$ is the ridge-penalized weight matrix. The corresponding Gram matrix for this kernel is denoted by
(4) $H_{\lambda_X} = X(X^{\top}X + \lambda_X I_p)^{-1} X^{\top}.$
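As a concrete illustration of (4) and of the permutation procedure described next, the following minimal NumPy sketch (not taken from the paper; the dimensions, λ value, and linear response kernel are assumptions made here for illustration) computes the ridge-penalized Gram matrix and a permutation p-value for the Mantel statistic $\mathrm{tr}(H_{\lambda_X} K_{\lambda_Y})$:

```python
# Illustrative sketch (not from the paper's code base): the ridge-penalized Gram matrix
# of Equation (4) and a permutation null for the Mantel statistic tr(H K).
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 64, 500, 3                                    # hypothetical dimensions
X = rng.standard_normal((n, p)); X -= X.mean(axis=0)    # column-centered, as in the text
Y = rng.standard_normal((n, q)); Y -= Y.mean(axis=0)

def ridge_gram(X, lam):
    """H_lambda = X (X'X + lam I_p)^{-1} X', Equation (4)."""
    p = X.shape[1]
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)

def mantel_perm_pvalue(H, K, n_perm=999, rng=None):
    """Permutation p-value: permute the observation labels of K while keeping H fixed."""
    rng = np.random.default_rng() if rng is None else rng
    t_obs = np.trace(H @ K)
    exceed = 0
    for _ in range(n_perm):
        idx = rng.permutation(H.shape[0])
        exceed += np.trace(H @ K[np.ix_(idx, idx)]) >= t_obs
    return (exceed + 1) / (n_perm + 1)

H = ridge_gram(X, lam=100.0)     # lam > 0 keeps X'X + lam*I invertible
K = Y @ Y.T                      # a simple linear kernel on the responses
print("Mantel permutation p-value:", mantel_perm_pvalue(H, K, rng=rng))
```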
We define $K_Y$ and the associated Gram matrix $K_{\lambda_Y}$ similarly using Y. The Mantel test statistic is equivalent to $\mathrm{tr}(H_{\lambda_X} K_{\lambda_Y})$. Under the null hypothesis, there is no association between the similarities measured by $K_X$ and $K_Y$. In practice, the reference distribution can be obtained via a permutation procedure (Nichols and Holmes, 2002; Shaw and Proschan, 2013; Zhou et al., 2014). For instance, we can simultaneously permute the rows and columns of Y, while keeping X fixed. Equivalently, for a fixed matrix $H_{\lambda_X}$, we can permute the observation labels for $K_{\lambda_Y}$ and calculate the empirical null distribution.
2.2. Score Test in Linear Models
The general framework of the Mantel test presented in Section 2.1 encompasses several association tests (for examples, see Robert and Escoufier, 1976; Székely et al., 2007; Xu et al., 2017), and various kernel functions can be investigated to reflect model complexity and detect underlying linear or non-linear associations. Moreover, Pluta et al. (2021) developed a unified framework of linear models that links the Mantel test and Rao's score test (Rao, 1948) in a class of tests indexed by the ridge penalty. Following the discussion of Pluta et al. (2021), we consider the following linear models:
Fixed Effects Model: $Y = X\beta + \varepsilon$, where $\varepsilon \sim N(0, I_n, \Sigma)$ or, alternatively, using the vectorized response variables, $\mathrm{vec}(Y) \sim N(\mathrm{vec}(X\beta), \Sigma \otimes I_n)$, where vec(·) is the vectorization operator and ⊗ refers to the Kronecker product of two matrices.
Random Effects Model: $Y = Xb + \varepsilon$, where $\varepsilon \sim N(0, I_n, \Sigma_q)$ and $b \sim N(0, I_p, \Sigma_b)$ or, equivalently, $\mathrm{vec}(Y) \sim N(0, \Sigma_b \otimes XX^{\top} + \Sigma \otimes I_n)$.
To describe the score statistic compactly, we consider the Singular Value Decomposition (SVD) of the matrix $X = UDV^{\top}$, where U and V are orthogonal and D is a diagonal matrix with the (non-negative) singular values. To perform the global test $H_0: \beta = 0$ under the fixed effects model, the score test statistic is given by
(5) $S_{FE} \asymp Y^{\top}X(X^{\top}X)^{-}(X^{\top}Y) = \mathrm{tr}(ZZ^{\top}) = \sum_{j=1}^{r} Z_j^2 \overset{H_0}{\sim} \sum_{j=1}^{r} \chi^2_{1,j}$
where $Z = U^{\top}Y$ and r = rank(X). The notation $A^{-}$ denotes the Moore-Penrose pseudoinverse of the matrix A. It is well known that the Moore-Penrose pseudoinverse leads to the minimum-norm solution of the least-squares problem. On the other hand, to test $H_0: \Sigma_b = 0$ under certain conditions, the score test statistic for the random effects (variance components) model is
(6) $S_{RE} \asymp Y^{\top}X(X^{\top}Y) = \mathrm{tr}(Z^{\top}DD^{\top}Z) = \sum_{j=1}^{r} d_j^2 Z_j^2 \overset{H_0}{\sim} \sum_{j=1}^{r} d_j^2 \chi^2_{1,j}.$
Finally, the ridge regression score test statistic for testing H0:β = 0 is
(7) $S_{RR} \asymp Y^{\top}X(X^{\top}X + \lambda_X I_p)^{-1}(X^{\top}Y) = \mathrm{tr}\big(Z^{\top}D(D^{\top}D + \lambda_X I_p)^{-1}D^{\top}Z\big).$
Hence,
(8) $\sum_{j=1}^{r} \frac{d_j^2}{d_j^2 + \lambda_X} Z_j^2 \overset{H_0}{\sim} \sum_{j=1}^{r} \frac{d_j^2}{d_j^2 + \lambda_X} \chi^2_{1,j}.$
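To make the structure of (5)–(8) concrete, the sketch below (illustrative; it assumes a univariate, centered response, i.e., q = 1, and synthetic data) computes all three score statistics from a single SVD of X and shows how the ridge weights interpolate between the fixed effects and random effects statistics:

```python
# Illustrative sketch (synthetic data; univariate centered response assumed, q = 1).
import numpy as np

rng = np.random.default_rng(1)
n, p = 64, 500
X = rng.standard_normal((n, p)); X -= X.mean(axis=0)
y = rng.standard_normal(n);      y -= y.mean()

U, d, Vt = np.linalg.svd(X, full_matrices=False)   # X = U D V'
r = int(np.sum(d > 1e-10))                         # numerical rank of X
U, d = U[:, :r], d[:r]
z = U.T @ y                                        # Z = U'y, so z_j^2 are the chi-square terms

S_fixed  = np.sum(z ** 2)                          # Equation (5): y'X(X'X)^- X'y
S_random = np.sum(d ** 2 * z ** 2)                 # Equation (6): y'XX'y

def S_ridge(lam):
    """Equations (7)-(8): sum_j d_j^2 / (d_j^2 + lam) * z_j^2."""
    return np.sum(d ** 2 / (d ** 2 + lam) * z ** 2)

# The weights d_j^2 / (d_j^2 + lam) tend to 1 as lam -> 0 (fixed effects statistic),
# while lam * S_ridge(lam) tends to S_random as lam -> infinity (random effects statistic,
# up to a scale factor).
print(S_fixed, S_ridge(1e-8), S_random, 1e8 * S_ridge(1e8))
```

Since permutation-calibrated tests are invariant to rescaling of the statistic, the scale factor in the large-λ limit does not affect the resulting test, which is one way to read the statement that the ridge test converges to the random effects test as $\lambda_X \to \infty$.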
As summarized in Pluta et al. (2021), the score test statistics described in (5)–(7) can be formulated equivalently as $\mathrm{tr}(H_{\lambda_X} K_{\lambda_Y})$, which is the expression for the Mantel test statistic described in Section 2.1. In particular, the fixed effects score test statistic is equivalent to $\mathrm{tr}(H_0 YY^{\top})$, where $H_0 = X(X^{\top}X)^{-}X^{\top}$. Meanwhile, when $\Sigma = \sigma^2 I_q$ and $\Sigma_b = \sigma_b^2 I_q$, the score statistic corresponding to the random effects model is proportional to $\mathrm{tr}(H_{\infty} K_{\infty})$, where $H_{\infty} = XX^{\top}$ and $K_{\infty} = YY^{\top}$ (Pluta et al., 2021). Lastly, the ridge regression score test statistic can be written as $\mathrm{tr}(H_{\lambda_X} YY^{\top})$ using $H_{\lambda_X}$ provided in (4). Furthermore, Pluta et al. (2021) highlight that the ridge regression score statistic is a compromise between the fixed effects and variance components tests. For small values of the ridge penalty $\lambda_X$, the test statistic in (7) approaches the fixed effects score test statistic, and is identical to it at $\lambda_X = 0$. Also, Pluta et al. (2021) remarked that a large choice of $\lambda_X$ yields a test close to the random effects score statistic, converging to identical tests as $\lambda_X \to \infty$.
2.3. Examining the Choice of Ridge Penalty Parameter
Motivated by the framework introduced by Pluta et al. (2021), which categorizes the association test and score tests into a single class of tests characterized by the ridge penalty, we examine the choice of this parameter in the high-dimensional hypothesis testing set-up. In practice, the optimal choice of the ridge penalty parameter is based on the observed data, and proper data-dependent tuning is among the central tasks in statistical learning (Patil et al., 2021).
2.3.1. Ridge Predictive Performance
The role of the ridge penalty in high-dimensional prediction and estimation has been an active area of research in the past several years. For both asymptotic and non-asymptotic settings, the predictive performance of ridge regression has been studied extensively (see Hsu et al., 2012; Cule and De Iorio, 2013; Karoui, 2013; Dobriban and Wager, 2018; Hastie et al., 2019; Wu and Xu, 2020; Richards et al., 2021 for examples). Furthermore, Kobak et al. (2020) demonstrated that an explicit positive ridge penalty can fail to provide any improvement over the minimum-norm least squares estimator, using simulations and real-life high-dimensional data sets. In particular, they showed that the optimal value of the ridge penalty in this situation could be negative when n ≪ p. Similar to these works, in this article, we focus on the role of the ridge penalty in hypothesis testing for a univariate response, i.e., q = 1. The extension to multivariate responses will be considered in future research. In Sections 2.1 and 2.2, $\lambda_X$ corresponds to the tuning parameter in the Gram matrix of the ridge kernel associated with X, which is not necessarily the same as $\lambda_Y$, the tuning parameter in the ridge kernel corresponding to Y. However, under the univariate response y setting, we only have to specify the ridge penalty parameter $\lambda_X$. For brevity, we will refer to $\lambda_X$ as λ in the next sections.
2.3.2. Ridge Cross-Validation
The performance of the fitted model is affected by the calibration of the regularization parameter. One of the most widely used methods for regularization tuning is cross-validation (for examples, see Allen, 1971; Stone, 1974; Delaney and Chatterjee, 1986; Arlot and Celisse, 2010). In ridge regression, two commonly used cross-validation procedures are generalized cross-validation (GCV) (Golub et al., 1979) and leave-one-out cross-validation (LOOCV), a variant of k-fold cross-validation (Hastie et al., 2009).
GCV, a rotation-invariant version of the predicted residual error sum of squares (PRESS), is a popular choice in practice because it does not require model refitting. Similarly, approximation methods to LOOCV (e.g., Kumar et al., 2013; Meijer and Goeman, 2013) have been developed to circumvent the computational complexity brought about by multiple model refitting. The LOOCV estimate for a response vector y containing n observations is defined as
(9) $\mathrm{loocv}(\lambda) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - X_i^{\top}\hat{\beta}_{-i,\lambda}\right)^2$
where $\hat{\beta}_{-i,\lambda}$ is the ridge estimate when the ith observation is not included in the training set. As cited in Patil et al. (2021), an alternative formula for the LOOCV (Hastie et al., 2009) is given by
(10) $\mathrm{loocv}(\lambda) = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - X_i^{\top}\hat{\beta}_{\lambda}}{1 - [H_{\lambda}]_{ii}}\right)^2$
where $[H_{\lambda}]_{ii}$ corresponds to the ith diagonal entry of the matrix $H_{\lambda} = X(X^{\top}X + \lambda I_p)^{-}X^{\top}$. Closely related to (10), the GCV estimate formulation provided by Patil et al. (2021) is
(11) $\mathrm{gcv}(\lambda) = \frac{1}{n}\sum_{i=1}^{n}\left(\frac{y_i - X_i^{\top}\hat{\beta}_{\lambda}}{1 - \mathrm{tr}(H_{\lambda})/n}\right)^2$
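Equations (9)–(11) lend themselves to a compact implementation. The sketch below (illustrative only, with hypothetical synthetic data rather than the authors' code) contrasts the brute-force LOOCV of (9) with the refit-free shortcut (10) and the GCV criterion (11) for ridge regression; the first two agree up to numerical precision:

```python
# Illustrative sketch: refit-free LOOCV and GCV for ridge regression (hypothetical data).
import numpy as np

rng = np.random.default_rng(2)
n, p = 64, 50
X = rng.standard_normal((n, p))
y = X[:, 0] * 0.5 + rng.standard_normal(n)

def ridge_fit(X, y, lam):
    """Ridge coefficient estimate beta_hat_lambda."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def hat_matrix(X, lam):
    """H_lambda = X (X'X + lam I_p)^{-1} X'."""
    return X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)

def loocv_brute(X, y, lam):
    """Equation (9): refit n times, leaving out one observation each time."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        beta = ridge_fit(X[mask], y[mask], lam)
        errs.append((y[i] - X[i] @ beta) ** 2)
    return np.mean(errs)

def loocv_shortcut(X, y, lam):
    """Equation (10): a single fit plus the diagonal of H_lambda; no refitting."""
    resid = y - X @ ridge_fit(X, y, lam)
    return np.mean((resid / (1.0 - np.diag(hat_matrix(X, lam)))) ** 2)

def gcv(X, y, lam):
    """Equation (11): replace each leverage [H]_ii by the average tr(H_lambda)/n."""
    resid = y - X @ ridge_fit(X, y, lam)
    return np.mean((resid / (1.0 - np.trace(hat_matrix(X, lam)) / len(y))) ** 2)

lam = 10.0
print(loocv_brute(X, y, lam), loocv_shortcut(X, y, lam), gcv(X, y, lam))   # (9) matches (10)
```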
In (11), the average of the trace elements is used instead of the ith diagonal entry. When λ = 0 and rank(X) = n, the diagonal elements of $H_0$ are equal to 1 and $\mathrm{tr}(H_0)$ reduces to n. In this case, ridge regression interpolates the data, $X\hat{\beta}_{\lambda} = y$ (Patil et al., 2021). Since both the numerator and the denominator of the expressions in (10) and (11) are then 0, Hastie et al. (2019) defined the LOOCV and GCV estimates based on the respective limits as λ → 0.
Moreover, the asymptotic optimality of LOOCV and GCV tuning for ridge regression in the high-dimensional setting is presented by Hastie et al. (2019). Patil et al. (2021) generalized the scope discussed in Hastie et al. (2019) by showing that the GCV converges almost surely to the expected out-of-sample prediction error, uniformly over a set of candidate ridge regularization parameters. The discussion provided by Patil et al. (2021) is aligned with Kobak et al. (2020), wherein the optimal ridge penalty parameter can be positive, negative, or zero.
2.4. Rationale and Illustration of Contributions of Our Work
The Adaptive Mantel test (AdaMant) coined by Pluta et al. (2021) is an extension of the classical Mantel test that incorporates the ridge penalty parameter into association testing. The adaptive procedure involves the calculation of similarity matrices $H_m = K_m^X(X)$ and $K_m = K_m^Y(Y)$ for every pair of input metrics or kernels $(K^X, K^Y)$, m = 1, 2, …, M. Under the null hypothesis of no association between the similarities measured by $K^X$ and $K^Y$, Pluta et al. (2021) proposed a permutation procedure in which they generate B permutations of the observation labels for $H_m$ for a fixed matrix $K_m$. The p-value $P_m^{(b)}$ is computed as a function of the test statistic $\mathrm{tr}(H_m^{(b)} K_m)$ for each m = 1, 2, …, M and permutations b = 1, 2, …, B. Finally, Pluta et al. (2021) defined the AdaMant test statistic as $P^{(0)} = \min_{m \in \{1, 2, \ldots, M\}} P_m^{(0)}$, where b = 0 refers to the original data set. Using the permutation procedure to obtain the empirical null distribution of $P^{(0)}$, the corresponding AdaMant p-value is the proportion of $P^{(b)}$ less than or equal to $P^{(0)}$, that is,
(12) $P_{\mathrm{AdaMant}} = \frac{1}{B+1}\sum_{b=0}^{B} I\left(P^{(b)} \le P^{(0)}\right)$
where $P^{(b)} = \min_{m \in \{1, 2, \ldots, M\}} P_m^{(b)}$.
However, the main limitation of the ridge-penalized AdaMant procedure of Pluta et al. (2021) is the selection of the optimal ridge penalty parameter. When kernels of the form $X(X^{\top}X + \lambda I_p)^{-1}X^{\top}$ are considered in AdaMant, λ is chosen to be proportional to the signal-to-noise ratio, re-expressed as a function of the genetic heritability $h^2$ and the number of explanatory variables p (Pluta et al., 2021). This implies that the value of the chosen ridge penalty is restricted to be non-negative. In their examples, the ridge penalty is chosen from a set with only a few values, such as λ ∈ {100, 1000, 2500, 5000, 7500, 25000, ∞}. With a limited number of ridge penalty parameters to choose from, the λ which yields the highest empirical power may not be captured by the initial study of Pluta et al. (2021).
As highlighted in Section 1, the primary objective of this article is to examine the optimal choice of ridge penalty in the high-dimensional hypothesis testing scenario. To illustrate the utility of addressing this goal and our subsequent contributions, we study the liver.toxicity data set of Bushel et al. (2007). This data set contains microarray expression levels of p = 3,116 genes and 10 clinical chemistry measurements in liver tissue of n = 64 rats. First, we replicate the results presented in Kobak et al. (2020) using 10-fold cross-validation over a range of ridge penalty parameters λ, using one dependent variable at a time. The cross-validated MSE for each dependent variable is displayed in Figure 1. In Figure 1A, where n > p, the result is in agreement with the seminal article by Hoerl and Kennard (1970), as shown by Kobak et al. (2020), wherein the optimal penalty is always larger than zero under the low-dimensional setting. However, in Figure 1B, five out of ten dependent variables yielded a minimum cross-validated MSE at the smallest value of λ considered when n ≪ p (Kobak et al., 2020).
Figure 1. Cross-validated MSE of ridge regression using (A) n = 64 and p = 50 randomly selected explanatory variables; (B) n = 64 and p = 3,116 (all explanatory variables). The blue dot corresponds to the minimum cross-validated MSE for each dependent variable.
Motivated by the aforementioned results, we investigated the empirical power and the average of the −log10 p-values of the Adaptive Mantel test for several values of λ. We employ the liver toxicity data as our motivating example because it has been widely used recently to better understand overfitting. It was found that the clinical variables may not facilitate the detection of paracetamol toxicity in the liver, but gene expression could be an alternative solution (Heinloth et al., 2004; Bushel et al., 2007). In this illustration, we compute the empirical power for a fixed λ, using one dependent variable at a time. For each replication, we add a vector of random noise to the response vector, that is, $y_s = y + \varepsilon_s$ for s = 1, 2, …, S. Under the null hypothesis when β = 0, the linear model y = Xβ + ε reduces to y = ε. Hence, we can view the recursive expression as $y_s = (X\beta + \varepsilon) + \varepsilon_s \ne \varepsilon$, in favor of the case that the alternative hypothesis is true. For s = 1, 2, …, S, we compute the AdaMant p-value at each λ using $y_s$ and the entire matrix of gene expression X as inputs to the ridge kernel described in (4). After repeating this procedure for a total of S replications, the empirical power is computed as the proportion of replications in which the AdaMant p-value is less than the nominal level of significance α.
(13) $\mathrm{Power}_{\lambda} = \frac{1}{S}\sum_{s=1}^{S} I\left(P_{\mathrm{AdaMant},\lambda,s} \le \alpha\right)$
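The sketch below (self-contained and illustrative; the synthetic data, λ grid, signal strength, and replication counts are assumptions rather than the paper's settings) puts (12) and (13) together for a univariate response: an AdaMant-style minimum-p statistic over several ridge kernels, and the empirical power across noise-perturbed replications $y_s = y + \varepsilon_s$:

```python
# Illustrative, self-contained sketch (synthetic data; lambda grid, signal strength, and
# replication counts are assumptions, not the paper's settings).
import numpy as np

rng = np.random.default_rng(3)
n, p, S, B, alpha = 64, 300, 50, 199, 0.05
X = rng.standard_normal((n, p)); X -= X.mean(axis=0)
beta = np.zeros(p); beta[:5] = 0.4               # sparse signal; beta = 0 recovers the null
y = X @ beta + rng.standard_normal(n)

def ridge_gram(X, lam):
    """H_lambda of Equation (4); pinv also covers lam = 0, where p > n makes X'X singular."""
    p = X.shape[1]
    return X @ np.linalg.pinv(X.T @ X + lam * np.eye(p)) @ X.T

def adamant_pvalue(Hs, y, B, rng):
    """Equation (12): adaptive minimum p-value over the kernels in Hs, shared permutations."""
    n = len(y)
    perms = [np.arange(n)] + [rng.permutation(n) for _ in range(B)]     # b = 0 is the observed labeling
    T = np.array([[y[idx] @ H @ y[idx] for idx in perms] for H in Hs])  # M x (B + 1) Mantel statistics
    P = np.array([[np.mean(T[m] >= T[m, b]) for b in range(B + 1)]      # P_m^(b), one common convention
                  for m in range(len(Hs))])
    P_min = P.min(axis=0)                                               # P^(b) = min over kernels m
    return np.mean(P_min <= P_min[0])                                   # Equation (12)

lambdas = [-50.0, 0.0, 100.0, 1e4]               # a small real-valued grid, including lambda < 0
Hs = [ridge_gram(X, lam) for lam in lambdas]

def empirical_power(Hs):
    """Equation (13): proportion of replications y_s = y + eps_s with AdaMant p-value <= alpha."""
    rejections = 0
    for _ in range(S):
        ys = y + rng.standard_normal(n)
        ys = ys - ys.mean()
        rejections += adamant_pvalue(Hs, ys, B, rng) <= alpha
    return rejections / S

print("empirical power:", empirical_power(Hs))
```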
The results are presented in Figure 2.
Figure 2. Heat maps of (A) the average of the −log10 p-values and (B) the empirical power of the Adaptive Mantel test using the liver toxicity data with n = 64 observations and p = 3,116 genetic features. The green vertical line corresponds to λ = 0.
To circumvent the limitations of the range of ridge penalty parameters considered in Pluta et al. (2021), we allowed the interval of λ to include negative, zero, and positive values. According to Figure 2A, even though there is a more distinct gradient in the values of the average of the −log10 p-values compared to the empirical power in Figure 2B, eight out of ten dependent variables displayed more or less similar patterns in terms of empirical power. Also, based on Figure 2B, some λ < 0 lead to an empirical power approaching 1 when n ≪ p. This result is in alignment with the main finding of Kobak et al. (2020) that the optimal ridge penalty for real-world high-dimensional data can be negative due to implicit ridge regularization. This phenomenon prompted us to further investigate real-valued ridge penalty parameters using imaging genetics data, where the signals are weaker and sparsity is much more evident.
"32332161",
"17408499",
"23893343",
"21929786",
"23874214",
"29314109",
"24385847",
"11290733",
"15084756",
"23462021",
"26229047",
"18078480",
"6018555",
"19932755",
"23348970",
"18201908",
"31274952",
"11747097",
"27225129",
"17415783",
"34216035",
"22639702",
"22807023",
"16364064",
"23707675",
"22100419",
"28714590",
"35278218",
"25289113"
] | [
{
"pmid": "32332161",
"title": "Benign overfitting in linear regression.",
"abstract": "The phenomenon of benign overfitting is one of the key mysteries uncovered by deep learning methodology: deep neural networks seem to predict well, even with a perfect fit to noisy training data. Motivated by this phenomenon, we consider when a perfect fit to training data in linear regression is compatible with accurate prediction. We give a characterization of linear regression problems for which the minimum norm interpolating prediction rule has near-optimal prediction accuracy. The characterization is in terms of two notions of the effective rank of the data covariance. It shows that overparameterization is essential for benign overfitting in this setting: the number of directions in parameter space that are unimportant for prediction must significantly exceed the sample size. By studying examples of data covariance properties that this characterization shows are required for benign overfitting, we find an important role for finite-dimensional data: the accuracy of the minimum norm interpolating prediction rule approaches the best possible accuracy for a much narrower range of properties of the data distribution when the data lie in an infinite-dimensional space vs. when the data lie in a finite-dimensional space with dimension that grows faster than the sample size."
},
{
"pmid": "17408499",
"title": "Simultaneous clustering of gene expression data with clinical chemistry and pathological evaluations reveals phenotypic prototypes.",
"abstract": "BACKGROUND\nCommonly employed clustering methods for analysis of gene expression data do not directly incorporate phenotypic data about the samples. Furthermore, clustering of samples with known phenotypes is typically performed in an informal fashion. The inability of clustering algorithms to incorporate biological data in the grouping process can limit proper interpretation of the data and its underlying biology.\n\n\nRESULTS\nWe present a more formal approach, the modk-prototypes algorithm, for clustering biological samples based on simultaneously considering microarray gene expression data and classes of known phenotypic variables such as clinical chemistry evaluations and histopathologic observations. The strategy involves constructing an objective function with the sum of the squared Euclidean distances for numeric microarray and clinical chemistry data and simple matching for histopathology categorical values in order to measure dissimilarity of the samples. Separate weighting terms are used for microarray, clinical chemistry and histopathology measurements to control the influence of each data domain on the clustering of the samples. The dynamic validity index for numeric data was modified with a category utility measure for determining the number of clusters in the data sets. A cluster's prototype, formed from the mean of the values for numeric features and the mode of the categorical values of all the samples in the group, is representative of the phenotype of the cluster members. The approach is shown to work well with a simulated mixed data set and two real data examples containing numeric and categorical data types. One from a heart disease study and another from acetaminophen (an analgesic) exposure in rat liver that causes centrilobular necrosis.\n\n\nCONCLUSION\nThe modk-prototypes algorithm partitioned the simulated data into clusters with samples in their respective class group and the heart disease samples into two groups (sick and buff denoting samples having pain type representative of angina and non-angina respectively) with an accuracy of 79%. This is on par with, or better than, the assignment accuracy of the heart disease samples by several well-known and successful clustering algorithms. Following modk-prototypes clustering of the acetaminophen-exposed samples, informative genes from the cluster prototypes were identified that are descriptive of, and phenotypically anchored to, levels of necrosis of the centrilobular region of the rat liver. The biological processes cell growth and/or maintenance, amine metabolism, and stress response were shown to discern between no and moderate levels of acetaminophen-induced centrilobular necrosis. The use of well-known and traditional measurements directly in the clustering provides some guarantee that the resulting clusters will be meaningfully interpretable."
},
{
"pmid": "23893343",
"title": "Ridge regression in prediction problems: automatic choice of the ridge parameter.",
"abstract": "To date, numerous genetic variants have been identified as associated with diverse phenotypic traits. However, identified associations generally explain only a small proportion of trait heritability and the predictive power of models incorporating only known-associated variants has been small. Multiple regression is a popular framework in which to consider the joint effect of many genetic variants simultaneously. Ordinary multiple regression is seldom appropriate in the context of genetic data, due to the high dimensionality of the data and the correlation structure among the predictors. There has been a resurgence of interest in the use of penalised regression techniques to circumvent these difficulties. In this paper, we focus on ridge regression, a penalised regression approach that has been shown to offer good performance in multivariate prediction problems. One challenge in the application of ridge regression is the choice of the ridge parameter that controls the amount of shrinkage of the regression coefficients. We present a method to determine the ridge parameter based on the data, with the aim of good performance in high-dimensional prediction problems. We establish a theoretical justification for our approach, and demonstrate its performance on simulated genetic data and on a real data example. Fitting a ridge regression model to hundreds of thousands to millions of genetic variants simultaneously presents computational challenges. We have developed an R package, ridge, which addresses these issues. Ridge implements the automatic choice of ridge parameter presented in this paper, and is freely available from CRAN."
},
{
"pmid": "21929786",
"title": "Significance testing in ridge regression for genetic data.",
"abstract": "BACKGROUND\nTechnological developments have increased the feasibility of large scale genetic association studies. Densely typed genetic markers are obtained using SNP arrays, next-generation sequencing technologies and imputation. However, SNPs typed using these methods can be highly correlated due to linkage disequilibrium among them, and standard multiple regression techniques fail with these data sets due to their high dimensionality and correlation structure. There has been increasing interest in using penalised regression in the analysis of high dimensional data. Ridge regression is one such penalised regression technique which does not perform variable selection, instead estimating a regression coefficient for each predictor variable. It is therefore desirable to obtain an estimate of the significance of each ridge regression coefficient.\n\n\nRESULTS\nWe develop and evaluate a test of significance for ridge regression coefficients. Using simulation studies, we demonstrate that the performance of the test is comparable to that of a permutation test, with the advantage of a much-reduced computational cost. We introduce the p-value trace, a plot of the negative logarithm of the p-values of ridge regression coefficients with increasing shrinkage parameter, which enables the visualisation of the change in p-value of the regression coefficients with increasing penalisation. We apply the proposed method to a lung cancer case-control data set from EPIC, the European Prospective Investigation into Cancer and Nutrition.\n\n\nCONCLUSIONS\nThe proposed test is a useful alternative to a permutation test for the estimation of the significance of ridge regression coefficients, at a much-reduced computational cost. The p-value trace is an informative graphical tool for evaluating the results of a test of significance of ridge regression coefficients as the shrinkage parameter increases, and the proposed test makes its production computationally feasible."
},
{
"pmid": "23874214",
"title": "Prediction of complex human traits using the genomic best linear unbiased predictor.",
"abstract": "Despite important advances from Genome Wide Association Studies (GWAS), for most complex human traits and diseases, a sizable proportion of genetic variance remains unexplained and prediction accuracy (PA) is usually low. Evidence suggests that PA can be improved using Whole-Genome Regression (WGR) models where phenotypes are regressed on hundreds of thousands of variants simultaneously. The Genomic Best Linear Unbiased Prediction (G-BLUP, a ridge-regression type method) is a commonly used WGR method and has shown good predictive performance when applied to plant and animal breeding populations. However, breeding and human populations differ greatly in a number of factors that can affect the predictive performance of G-BLUP. Using theory, simulations, and real data analysis, we study the performance of G-BLUP when applied to data from related and unrelated human subjects. Under perfect linkage disequilibrium (LD) between markers and QTL, the prediction R-squared (R(2)) of G-BLUP reaches trait-heritability, asymptotically. However, under imperfect LD between markers and QTL, prediction R(2) based on G-BLUP has a much lower upper bound. We show that the minimum decrease in prediction accuracy caused by imperfect LD between markers and QTL is given by (1-b)(2), where b is the regression of marker-derived genomic relationships on those realized at causal loci. For pairs of related individuals, due to within-family disequilibrium, the patterns of realized genomic similarity are similar across the genome; therefore b is close to one inducing small decrease in R(2). However, with distantly related individuals b reaches very low values imposing a very low upper bound on prediction R(2). Our simulations suggest that for the analysis of data from unrelated individuals, the asymptotic upper bound on R(2) may be of the order of 20% of the trait heritability. We show how PA can be enhanced with use of variable selection or differential shrinkage of estimates of marker effects."
},
{
"pmid": "29314109",
"title": "Fridge: Focused fine-tuning of ridge regression for personalized predictions.",
"abstract": "Statistical prediction methods typically require some form of fine-tuning of tuning parameter(s), with K-fold cross-validation as the canonical procedure. For ridge regression, there exist numerous procedures, but common for all, including cross-validation, is that one single parameter is chosen for all future predictions. We propose instead to calculate a unique tuning parameter for each individual for which we wish to predict an outcome. This generates an individualized prediction by focusing on the vector of covariates of a specific individual. The focused ridge-fridge-procedure is introduced with a 2-part contribution: First we define an oracle tuning parameter minimizing the mean squared prediction error of a specific covariate vector, and then we propose to estimate this tuning parameter by using plug-in estimates of the regression coefficients and error variance parameter. The procedure is extended to logistic ridge regression by using parametric bootstrap. For high-dimensional data, we propose to use ridge regression with cross-validation as the plug-in estimate, and simulations show that fridge gives smaller average prediction error than ridge with cross-validation for both simulated and real data. We illustrate the new concept for both linear and logistic regression models in 2 applications of personalized medicine: predicting individual risk and treatment response based on gene expression data. The method is implemented in the R package fridge."
},
{
"pmid": "24385847",
"title": "Mantel test in population genetics.",
"abstract": "The comparison of genetic divergence or genetic distances, estimated by pairwise FST and related statistics, with geographical distances by Mantel test is one of the most popular approaches to evaluate spatial processes driving population structure. There have been, however, recent criticisms and discussions on the statistical performance of the Mantel test. Simultaneously, alternative frameworks for data analyses are being proposed. Here, we review the Mantel test and its variations, including Mantel correlograms and partial correlations and regressions. For illustrative purposes, we studied spatial genetic divergence among 25 populations of Dipteryx alata (\"Baru\"), a tree species endemic to the Cerrado, the Brazilian savannas, based on 8 microsatellite loci. We also applied alternative methods to analyze spatial patterns in this dataset, especially a multivariate generalization of Spatial Eigenfunction Analysis based on redundancy analysis. The different approaches resulted in similar estimates of the magnitude of spatial structure in the genetic data. Furthermore, the results were expected based on previous knowledge of the ecological and evolutionary processes underlying genetic variation in this species. Our review shows that a careful application and interpretation of Mantel tests, especially Mantel correlograms, can overcome some potential statistical problems and provide a simple and useful tool for multivariate analysis of spatial patterns of genetic divergence."
},
{
"pmid": "11290733",
"title": "Prediction of total genetic value using genome-wide dense marker maps.",
"abstract": "Recent advances in molecular genetic techniques will make dense marker maps available and genotyping many individuals for these markers feasible. Here we attempted to estimate the effects of approximately 50,000 marker haplotypes simultaneously from a limited number of phenotypic records. A genome of 1000 cM was simulated with a marker spacing of 1 cM. The markers surrounding every 1-cM region were combined into marker haplotypes. Due to finite population size N(e) = 100, the marker haplotypes were in linkage disequilibrium with the QTL located between the markers. Using least squares, all haplotype effects could not be estimated simultaneously. When only the biggest effects were included, they were overestimated and the accuracy of predicting genetic values of the offspring of the recorded animals was only 0.32. Best linear unbiased prediction of haplotype effects assumed equal variances associated to each 1-cM chromosomal segment, which yielded an accuracy of 0.73, although this assumption was far from true. Bayesian methods that assumed a prior distribution of the variance associated with each chromosome segment increased this accuracy to 0.85, even when the prior was not correct. It was concluded that selection on genetic values predicted from markers could substantially increase the rate of genetic gain in animals and plants, especially if combined with reproductive techniques to shorten the generation interval."
},
{
"pmid": "15084756",
"title": "Gene expression profiling of rat livers reveals indicators of potential adverse effects.",
"abstract": "This study tested the hypothesis that gene expression profiling can reveal indicators of subtle injury to the liver induced by a low dose of a substance that does not cause overt toxicity as defined by conventional criteria of toxicology (e.g., abnormal clinical chemistry and histopathology). For the purpose of this study we defined this low dose as subtoxic, i.e., a dose that elicits effects which are below the detection of conventional toxicological parameters. Acetaminophen (APAP) was selected as a model hepatotoxicant because (1) considerable information exists concerning the mechanism of APAP hepatotoxicity that can occur following high doses, (2) intoxication with APAP is the leading cause of emergency room visits involving acute liver failure within the United States, and (3) conventional clinical markers have poor predictive value. Rats treated with a single dose of 0, 50, 150, or 1500 mg/kg APAP were examined at 6, 24, or 48 h after exposure for conventional toxicological parameters and for gene expression alterations. Patterns of gene expression were found which indicated cellular energy loss as a consequence of APAP toxicity. Elements of these patterns were apparent even after exposure to subtoxic doses. With increasing dose, the magnitude of changes increased and additional members of the same biological pathways were differentially expressed. The energy loss suggested by gene expression changes was confirmed at the 1500 mg/kg dose exposure by measuring ATP levels. Only by ultrastructural examination could any indication of toxicity be identified after exposure to a subtoxic dose of APAP and that was occasional mitochondrial damage. In conclusion, this study provides evidence that supports the hypothesis that gene expression profiling may be a sensitive means of identifying indicators of potential adverse effects in the absence of the occurrence of overt toxicity."
},
{
"pmid": "23462021",
"title": "Test for interactions between a genetic marker set and environment in generalized linear models.",
"abstract": "We consider in this paper testing for interactions between a genetic marker set and an environmental variable. A common practice in studying gene-environment (GE) interactions is to analyze one single-nucleotide polymorphism (SNP) at a time. It is of significant interest to analyze SNPs in a biologically defined set simultaneously, e.g. gene or pathway. In this paper, we first show that if the main effects of multiple SNPs in a set are associated with a disease/trait, the classical single SNP-GE interaction analysis can be biased. We derive the asymptotic bias and study the conditions under which the classical single SNP-GE interaction analysis is unbiased. We further show that, the simple minimum p-value-based SNP-set GE analysis, can be biased and have an inflated Type 1 error rate. To overcome these difficulties, we propose a computationally efficient and powerful gene-environment set association test (GESAT) in generalized linear models. Our method tests for SNP-set by environment interactions using a variance component test, and estimates the main SNP effects under the null hypothesis using ridge regression. We evaluate the performance of GESAT using simulation studies, and apply GESAT to data from the Harvard lung cancer genetic study to investigate GE interactions between the SNPs in the 15q24-25.1 region and smoking on lung cancer risk."
},
{
"pmid": "26229047",
"title": "Test for rare variants by environment interactions in sequencing association studies.",
"abstract": "We consider in this article testing rare variants by environment interactions in sequencing association studies. Current methods for studying the association of rare variants with traits cannot be readily applied for testing for rare variants by environment interactions, as these methods do not effectively control for the main effects of rare variants, leading to unstable results and/or inflated Type 1 error rates. We will first analytically study the bias of the use of conventional burden-based tests for rare variants by environment interactions, and show the tests can often be invalid and result in inflated Type 1 error rates. To overcome these difficulties, we develop the interaction sequence kernel association test (iSKAT) for assessing rare variants by environment interactions. The proposed test iSKAT is optimal in a class of variance component tests and is powerful and robust to the proportion of variants in a gene that interact with environment and the signs of the effects. This test properly controls for the main effects of the rare variants using weighted ridge regression while adjusting for covariates. We demonstrate the performance of iSKAT using simulation studies and illustrate its application by analysis of a candidate gene sequencing study of plasma adiponectin levels."
},
{
"pmid": "18078480",
"title": "Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models.",
"abstract": "We consider a semiparametric regression model that relates a normal outcome to covariates and a genetic pathway, where the covariate effects are modeled parametrically and the pathway effect of multiple gene expressions is modeled parametrically or nonparametrically using least-squares kernel machines (LSKMs). This unified framework allows a flexible function for the joint effect of multiple genes within a pathway by specifying a kernel function and allows for the possibility that each gene expression effect might be nonlinear and the genes within the same pathway are likely to interact with each other in a complicated way. This semiparametric model also makes it possible to test for the overall genetic pathway effect. We show that the LSKM semiparametric regression can be formulated using a linear mixed model. Estimation and inference hence can proceed within the linear mixed model framework using standard mixed model software. Both the regression coefficients of the covariate effects and the LSKM estimator of the genetic pathway effect can be obtained using the best linear unbiased predictor in the corresponding linear mixed model formulation. The smoothing parameter and the kernel parameter can be estimated as variance components using restricted maximum likelihood. A score test is developed to test for the genetic pathway effect. Model/variable selection within the LSKM framework is discussed. The methods are illustrated using a prostate cancer data set and evaluated using simulations."
},
{
"pmid": "19932755",
"title": "Imaging genetics of structural brain connectivity and neural integrity markers.",
"abstract": "We review studies that have used diffusion imaging (DI) and magnetic resonance spectroscopy (MRS) to investigate genetic associations. A brief description of the measures obtainable with these methods and of some methodological and interpretability limitations is given. The usefulness of DI and MRS in defining intermediate phenotypes and in demonstrating the effects of common genetic variants known to increase risk for psychiatric manifestations on anatomical and metabolic phenotypes is reviewed. The main focus is on schizophrenia where the greatest amount of data has been collected. Moreover, we present an example coming from a different approach, where the genetic alteration is known (the deletion that causes Williams syndrome) and the DI phenotype can shed new light on the function of genes affected by the mutation. We conclude that, although these are still early days of this type of research and many findings remain controversial, both techniques can significantly contribute to the understanding of genetic effects in the brain and the pathophysiology of psychiatric disorders."
},
{
"pmid": "23348970",
"title": "Efficient approximate k-fold and leave-one-out cross-validation for ridge regression.",
"abstract": "In model building and model evaluation, cross-validation is a frequently used resampling method. Unfortunately, this method can be quite time consuming. In this article, we discuss an approximation method that is much faster and can be used in generalized linear models and Cox' proportional hazards model with a ridge penalty term. Our approximation method is based on a Taylor expansion around the estimate of the full model. In this way, all cross-validated estimates are approximated without refitting the model. The tuning parameter can now be chosen based on these approximations and can be optimized in less time. The method is most accurate when approximating leave-one-out cross-validation results for large data sets which is originally the most computationally demanding situation. In order to demonstrate the method's performance, it will be applied to several microarray data sets. An R package penalized, which implements the method, is available on CRAN."
},
{
"pmid": "18201908",
"title": "False positives in imaging genetics.",
"abstract": "Imaging genetics provides an enormous amount of functional-structural data on gene effects in living brain, but the sheer quantity of potential phenotypes raises concerns about false discovery. Here, we provide the first empirical results on false positive rates in imaging genetics. We analyzed 720 frequent coding SNPs without significant association with schizophrenia and a subset of 492 of these without association with cognitive function. Effects on brain structure (using voxel-based morphometry, VBM) and brain function, using two archival imaging tasks, the n-back working memory task and an emotional face matching task, were studied in whole brain and regions of interest and corrected for multiple comparisons using standard neuroimaging procedures. Since these variants are unlikely to impact relevant brain function, positives obtained provide an upper empirical estimate of the false positive association rate. In a separate analysis, we randomly permuted genotype labels across subjects, removing any true genotype-phenotype association in the data, to derive a lower empirical estimate. At a set correction level of 0.05, in each region of interest and data set used, the rate of positive findings was well below 5% (0.2-4.1%). There was no relationship between the region of interest and the false positive rate. Permutation results were in the same range as empirically derived rates. The observed low rates of positives provide empirical evidence that the type I error rate is well controlled by current commonly used correction procedures in imaging genetics, at least in the context of the imaging paradigms we have used. In fact, our observations indicate that these statistical thresholds are conservative."
},
{
"pmid": "31274952",
"title": "A Review of Statistical Methods in Imaging Genetics.",
"abstract": "With the rapid growth of modern technology, many biomedical studies are being conducted to collect massive datasets with volumes of multi-modality imaging, genetic, neurocognitive, and clinical information from increasingly large cohorts. Simultaneously extracting and integrating rich and diverse heterogeneous information in neuroimaging and/or genomics from these big datasets could transform our understanding of how genetic variants impact brain structure and function, cognitive function, and brain-related disease risk across the lifespan. Such understanding is critical for diagnosis, prevention, and treatment of numerous complex brain-related disorders (e.g., schizophrenia and Alzheimer's disease). However, the development of analytical methods for the joint analysis of both high-dimensional imaging phenotypes and high-dimensional genetic data, a big data squared (BD2) problem, presents major computational and theoretical challenges for existing analytical methods. Besides the high-dimensional nature of BD2, various neuroimaging measures often exhibit strong spatial smoothness and dependence and genetic markers may have a natural dependence structure arising from linkage disequilibrium. We review some recent developments of various statistical techniques for imaging genetics, including massive univariate and voxel-wise approaches, reduced rank regression, mixture models, and group sparse multi-task regression. By doing so, we hope that this review may encourage others in the statistical community to enter into this new and exciting field of research."
},
{
"pmid": "11747097",
"title": "Nonparametric permutation tests for functional neuroimaging: a primer with examples.",
"abstract": "Requiring only minimal assumptions for validity, nonparametric permutation testing provides a flexible and intuitive methodology for the statistical analysis of data from functional neuroimaging experiments, at some computational expense. Introduced into the functional neuroimaging literature by Holmes et al. ([1996]: J Cereb Blood Flow Metab 16:7-22), the permutation approach readily accounts for the multiple comparisons problem implicit in the standard voxel-by-voxel hypothesis testing framework. When the appropriate assumptions hold, the nonparametric permutation approach gives results similar to those obtained from a comparable Statistical Parametric Mapping approach using a general linear model with multiple comparisons corrections derived from random field theory. For analyses with low degrees of freedom, such as single subject PET/SPECT experiments or multi-subject PET/SPECT or fMRI designs assessed for population effects, the nonparametric approach employing a locally pooled (smoothed) variance estimate can outperform the comparable Statistical Parametric Mapping approach. Thus, these nonparametric techniques can be used to verify the validity of less computationally expensive parametric approaches. Although the theory and relative advantages of permutation approaches have been discussed by various authors, there has been no accessible explication of the method, and no freely distributed software implementing it. Consequently, there have been few practical applications of the technique. This article, and the accompanying MATLAB software, attempts to address these issues. The standard nonparametric randomization and permutation testing ideas are developed at an accessible level, using practical examples from functional neuroimaging, and the extensions for multiple comparisons described. Three worked examples from PET and fMRI are presented, with discussion, and comparisons with standard parametric approaches made where appropriate. Practical considerations are given throughout, and relevant statistical concepts are expounded in appendices."
},
{
"pmid": "27225129",
"title": "Genome-wide association study identifies 74 loci associated with educational attainment.",
"abstract": "Educational attainment is strongly influenced by social and other environmental factors, but genetic factors are estimated to account for at least 20% of the variation across individuals. Here we report the results of a genome-wide association study (GWAS) for educational attainment that extends our earlier discovery sample of 101,069 individuals to 293,723 individuals, and a replication study in an independent sample of 111,349 individuals from the UK Biobank. We identify 74 genome-wide significant loci associated with the number of years of schooling completed. Single-nucleotide polymorphisms associated with educational attainment are disproportionately found in genomic regions regulating gene expression in the fetal brain. Candidate genes are preferentially expressed in neural tissue, especially during the prenatal period, and enriched for biological pathways involved in neural development. Our findings demonstrate that, even for a behavioural phenotype that is mostly environmentally determined, a well-powered GWAS identifies replicable associated genetic variants that suggest biologically relevant pathways. Because educational attainment is measured in large numbers of individuals, it will continue to be useful as a proxy phenotype in efforts to characterize the genetic influences of related phenotypes, including cognition and neuropsychiatric diseases."
},
{
"pmid": "17415783",
"title": "Genetic influences on human brain structure: a review of brain imaging studies in twins.",
"abstract": "Twin studies suggest that variation in human brain volume is genetically influenced. The genes involved in human brain volume variation are still largely unknown, but several candidate genes have been suggested. An overview of structural Magnetic Resonance (brain) Imaging studies in twins is presented, which focuses on the influence of genetic factors on variation in healthy human brain volume. Twin studies have shown that genetic effects varied regionally within the brain, with high heritabilities of frontal lobe volumes (90-95%), moderate estimates in the hippocampus (40-69%), and environmental factors influencing several medial brain areas. High heritability estimates of brain structures were revealed for regional amounts of gray matter (density) in medial frontal cortex, Heschl's gyrus, and postcentral gyrus. In addition, moderate to high heritabilities for densities of Broca's area, anterior cingulate, hippocampus, amygdala, gray matter of the parahippocampal gyrus, and white matter of the superior occipitofrontal fasciculus were reported. The high heritability for (global) brain volumes, including the intracranium, total brain, cerebral gray, and white matter, seems to be present throughout life. Estimates of genetic and environmental influences on age-related changes in brain structure in children and adults await further longitudinal twin-studies. For prefrontal cortex volume, white matter, and hippocampus volumes, a number of candidate genes have been identified, whereas for other brain areas, only a few or even a single candidate gene has been found so far. New techniques such as genome-wide scans may become helpful in the search for genes that are involved in the regulation of human brain volume throughout life."
},
{
"pmid": "34216035",
"title": "Ridge-penalized adaptive Mantel test and its application in imaging genetics.",
"abstract": "We propose a ridge-penalized adaptive Mantel test (AdaMant) for evaluating the association of two high-dimensional sets of features. By introducing a ridge penalty, AdaMant tests the association across many metrics simultaneously. We demonstrate how ridge penalization bridges Euclidean and Mahalanobis distances and their corresponding linear models from the perspective of association measurement and testing. This result is not only theoretically interesting but also has important implications in penalized hypothesis testing, especially in high-dimensional settings such as imaging genetics. Applying the proposed method to an imaging genetic study of visual working memory in healthy adults, we identified interesting associations of brain connectivity (measured by electroencephalogram coherence) with selected genetic features."
},
{
"pmid": "22639702",
"title": "Structured penalties for functional linear models-partially empirical eigenvectors for regression.",
"abstract": "One of the challenges with functional data is incorporating geometric structure, or local correlation, into the analysis. This structure is inherent in the output from an increasing number of biomedical technologies, and a functional linear model is often used to estimate the relationship between the predictor functions and scalar responses. Common approaches to the problem of estimating a coefficient function typically involve two stages: regularization and estimation. Regularization is usually done via dimension reduction, projecting onto a predefined span of basis functions or a reduced set of eigenvectors (principal components). In contrast, we present a unified approach that directly incorporates geometric structure into the estimation process by exploiting the joint eigenproperties of the predictors and a linear penalty operator. In this sense, the components in the regression are 'partially empirical' and the framework is provided by the generalized singular value decomposition (GSVD). The form of the penalized estimation is not new, but the GSVD clarifies the process and informs the choice of penalty by making explicit the joint influence of the penalty and predictors on the bias, variance and performance of the estimated coefficient function. Laboratory spectroscopy data and simulations are used to illustrate the concepts."
},
{
"pmid": "22807023",
"title": "Null but not void: considerations for hypothesis testing.",
"abstract": "Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA."
},
{
"pmid": "16364064",
"title": "Heritability of background EEG across the power spectrum.",
"abstract": "We estimated the genetic and nongenetic (environmental) contributions to individual differences in the background EEG power spectrum in two age cohorts with mean ages of 26.2 and 49.4 years. Nineteen-lead EEG was recorded with eyes closed from 142 monozygotic and 167 dizygotic twin pairs and their siblings, totaling 760 subjects. We obtained power spectra in 24 bins of 1 Hz ranging from 1.0 to 25.0 Hz. Generally, heritability was highest around the alpha peak frequency and lower in the theta and delta bands. In the beta band heritability gradually decreased with increasing frequency, especially in the temporal regions. Genetic correlations between power in the classical broad bands indicated that half to three-quarters of the genetic variance can be attributed to a common source. We conclude that across the scalp and most of the frequency spectrum, individual differences in adult EEG are largely determined by genetic factors."
},
{
"pmid": "23707675",
"title": "Genetics of the connectome.",
"abstract": "Connectome genetics attempts to discover how genetic factors affect brain connectivity. Here we review a variety of genetic analysis methods--such as genome-wide association studies (GWAS), linkage and candidate gene studies--that have been fruitfully adapted to imaging data to implicate specific variants in the genome for brain-related traits. Studies that emphasized the genetic influences on brain connectivity. Some of these analyses of brain integrity and connectivity using diffusion MRI, and others have mapped genetic effects on functional networks using resting state functional MRI. Connectome-wide genome-wide scans have also been conducted, and we review the multivariate methods required to handle the extremely high dimension of the genomic and network data. We also review some consortium efforts, such as ENIGMA, that offer the power to detect robust common genetic associations using phenotypic harmonization procedures and meta-analysis. Current work on connectome genetics is advancing on many fronts and promises to shed light on how disease risk genes affect the brain. It is already discovering new genetic loci and even entire genetic networks that affect brain organization and connectivity."
},
{
"pmid": "22100419",
"title": "Brain connectivity in psychiatric imaging genetics.",
"abstract": "In the past decade, imaging genetics has evolved into a highly successful neuroimaging discipline with a variety of sophisticated research tools. To date, several neural systems mechanisms have been identified that mediate genetic risk for mental disorders linked to common candidate and genome-wide-supported variants. In particular, the examination of intermediate connectivity phenotypes has recently gained increasing popularity. This paper gives an overview of the scientific methods and evidence that link indices of neural network organization to the genetic susceptibility for mental illness with a focus on the effects of candidate genes and genome-wide supported risk variants on brain structure and function."
},
{
"pmid": "28714590",
"title": "Adaptive testing for association between two random vectors in moderate to high dimensions.",
"abstract": "Testing for association between two random vectors is a common and important task in many fields, however, existing tests, such as Escoufier's RV test, are suitable only for low-dimensional data, not for high-dimensional data. In moderate to high dimensions, it is necessary to consider sparse signals, which are often expected with only a few, but not many, variables associated with each other. We generalize the RV test to moderate-to-high dimensions. The key idea is to data adaptively weight each variable pair based on its empirical association. As the consequence, the proposed test is adaptive, alleviating the effects of noise accumulation in high-dimensional data, and thus maintaining the power for both dense and sparse alternative hypotheses. We show the connections between the proposed test with several existing tests, such as a generalized estimating equations-based adaptive test, multivariate kernel machine regression (KMR), and kernel distance methods. Furthermore, we modify the proposed adaptive test so that it can be powerful for nonlinear or nonmonotonic associations. We use both real data and simulated data to demonstrate the advantages and usefulness of the proposed new test. The new test is freely available in R package aSPC on CRAN at https://cran.r-project.org/web/packages/aSPC/index.html and https://github.com/jasonzyx/aSPC."
},
{
"pmid": "35278218",
"title": "Cross-trait prediction accuracy of summary statistics in genome-wide association studies.",
"abstract": "In the era of big data, univariate models have widely been used as a workhorse tool for quickly producing marginal estimators; and this is true even when in a high-dimensional dense setting, in which many features are \"true,\" but weak signals. Genome-wide association studies (GWAS) epitomize this type of setting. Although the GWAS marginal estimator is popular, it has long been criticized for ignoring the correlation structure of genetic variants (i.e., the linkage disequilibrium [LD] pattern). In this paper, we study the effects of LD pattern on the GWAS marginal estimator and investigate whether or not additionally accounting for the LD can improve the prediction accuracy of complex traits. We consider a general high-dimensional dense setting for GWAS and study a class of ridge-type estimators, including the popular marginal estimator and the best linear unbiased prediction (BLUP) estimator as two special cases. We show that the performance of GWAS marginal estimator depends on the LD pattern through the first three moments of its eigenvalue distribution. Furthermore, we uncover that the relative performance of GWAS marginal and BLUP estimators highly depends on the ratio of GWAS sample size over the number of genetic variants. Particularly, our finding reveals that the marginal estimator can easily become near-optimal within this class when the sample size is relatively small, even though it ignores the LD pattern. On the other hand, BLUP estimator has substantially better performance than the marginal estimator as the sample size increases toward the number of genetic variants, which is typically in millions. Therefore, adjusting for the LD (such as in the BLUP) is most needed when GWAS sample size is large. We illustrate the importance of our results by using the simulated data and real GWAS."
},
{
"pmid": "25289113",
"title": "Efficient Blockwise Permutation Tests Preserving Exchangeability.",
"abstract": "In this paper, we present a new blockwise permutation test approach based on the moments of the test statistic. The method is of importance to neuroimaging studies. In order to preserve the exchangeability condition required in permutation tests, we divide the entire set of data into certain exchangeability blocks. In addition, computationally efficient moments-based permutation tests are performed by approximating the permutation distribution of the test statistic with the Pearson distribution series. This involves the calculation of the first four moments of the permutation distribution within each block and then over the entire set of data. The accuracy and efficiency of the proposed method are demonstrated through simulated experiment on the magnetic resonance imaging (MRI) brain data, specifically the multi-site voxel-based morphometry analysis from structural MRI (sMRI)."
}
] |
Frontiers in Artificial Intelligence | null | PMC8988042 | 10.3389/frai.2022.801564 | Beyond the Failure of Direct-Matching in Keyword Evaluation: A Sketch of a Graph Based Solution | The starting point of this paper is the observation that methods based on the direct match of keywords are inadequate because they do not consider the cognitive ability of concept formation and abstraction. We argue that keyword evaluation needs to be based on a semantic model of language capturing the semantic relatedness of words to satisfy the claim of the human-like ability of concept formation and abstraction and achieve better evaluation results. Evaluation of keywords is difficult since semantic informedness is required for this purpose. This model must be capable of identifying semantic relationships such as synonymy, hypernymy, hyponymy, and location-based abstraction. For example, when gathering texts from online sources, one usually finds a few keywords with each text. Still, these keyword sets are neither complete for the text nor are they in themselves closed, i.e., in most cases, the keywords are a random subset of all possible keywords and not that informative w.r.t. the complete keyword set. Therefore all algorithms based on this cannot achieve good evaluation results and provide good/better keywords or even a complete keyword set for a text. As a solution, we propose a word graph that captures all these semantic relationships for a given language. The problem with the hyponym/hyperonym relationship is that, unlike synonyms, it is not bidirectional. Thus the space of keyword sets requires a metric that is non-symmetric, in other words, a quasi-metric. We sketch such a metric that works on our graph. Since it is nearly impossible to obtain such a complete word graph for a language, we propose for the keyword task a simpler graph based on the base text upon which the keyword sets should be evaluated. This reduction is usually sufficient for evaluating keyword sets. | 2. Related WorkAs discussed in the introduction, the evaluation method widely used for keyword extraction is Precision, which is the ratio of relevant instances among the retrieved instances (see Equation 1), Recall, the ratio of relevant instances that were retrieved (see Equation 2), and F1, the weighted average of the two (see Equation 3). All three measures are based on direct matching, i.e., the direct comparison of two sets. There are some unique evaluation measures inspired by them or combined with them.
(1)
\mathrm{Precision} = \frac{\text{true positives}}{\text{true positives} + \text{false positives}}
(2)
\mathrm{Recall} = \frac{\text{true positives}}{\text{true positives} + \text{false negatives}}
(3)
F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
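As a minimal illustration of the direct-matching evaluation criticized here, the following Python sketch computes Precision, Recall, and F1 for a predicted keyword set against a gold-standard set via exact string matching; the function name and the example keyword sets are our own illustrative choices, not code from any of the cited papers.

```python
def direct_match_scores(predicted, gold):
    """Precision, Recall and F1 of a predicted keyword set via exact string matching."""
    predicted, gold = set(predicted), set(gold)
    true_positives = len(predicted & gold)  # keywords found in both sets
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall > 0 else 0.0
    return precision, recall, f1

print(direct_match_scores({"car", "engine", "speed"}, {"automobile", "engine", "speed"}))
```

The example shows the core weakness discussed in the text: the synonym pair "car"/"automobile" is counted as a miss even though the two keywords are semantically equivalent.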
Saga et al. (2014) propose a method named Topic Coverage by which the performance of keyword extraction is evaluated without any answer set or reference. The Topic Coverage is defined in Equation (4), where |E| denotes the number of elements of set E, and T is the set of topics in the document sets, which are extracted employing clustering methods such as k-means, etc. Further Ei denotes the set of the top j keywords in topic i, and Mi is the set of keywords in topic i extracted by a certain method to be evaluated. Since this measurement is similar to Recall, the performance of Topic Coverage is examined by the comparison with Recall and is confirmed with their high correlation. In the end, this study concludes that Topic Coverage may be used instead of Recall. Unlike Topic Coverage, our method requires a gold standard keyword set for each text. However, this gives the benefit of being able to judge the quality of a keyword set with a stronger focus on the actual text it was assigned to, instead of having to rely on a topic based average.
(4)
TC = \frac{1}{|T|} \sum_{i \in T} \frac{|E_i \cap M_i|}{|E_i|}
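A small sketch of how Topic Coverage could be computed once topics have been clustered; the dictionaries mapping topic ids to keyword sets (E_i from clustering, M_i from the method under evaluation) are illustrative assumptions, not the setup used by Saga et al. (2014).

```python
def topic_coverage(top_keywords_per_topic, extracted_keywords_per_topic):
    """Mean fraction of the top-j topic keywords E_i recovered by the evaluated method (M_i)."""
    ratios = [
        len(e_i & extracted_keywords_per_topic.get(topic, set())) / len(e_i)
        for topic, e_i in top_keywords_per_topic.items()
    ]
    return sum(ratios) / len(ratios)

E = {0: {"glacier", "ice", "arctic"}, 1: {"vaccine", "trial", "dose"}}
M = {0: {"ice", "arctic", "melt"}, 1: {"dose", "patient"}}
print(topic_coverage(E, M))  # (2/3 + 1/3) / 2 = 0.5
```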
Zesch and Gurevych (2009) use the R-precision (R-p) measure for evaluation of keyphrases. They define R-p as the Precision when the number of retrieved keyphrase matchings equals the number of gold standard keyphrases assigned to the document. That is, only extracted keyphrases that are regarded to match the gold standard keyphrases are counted. As for the matching strategy, instead of exact matching, they propose a new approximate matching that accounts for morphological variants (MORPH) and the two cases of overlapping phrases: either the extracted key phrase includes the gold standard keyphrase (INCLUDES) or the extracted key phrase is a part of the gold standard keyphrase (PARTOF). For overlapping phrases, they do not allow character level variations, but only token level variations and morphological variations (MORPH) are limited only to detecting plurals. The evaluation based on these matching strategies is compared to human evaluation, and MORPH matchings put out the best result with 96% agreement to human evaluations. For INCLUDES and PARTOF, on the other hand, agreement to human evaluations is lower. The main difference to our approach is the fact that this method does not take more abstract semantic relationships into account.Liu et al. (2009) compare the system output to human-annotated keywords using F-measure, and in addition to this they also adopt Pyramid metric proposed by Nekova and Passonneau (2004). In the Pyramid metric, a score is assigned to each keyword candidate based on how many human annotators selected it. Keywords with a high score are placed at a high level of the pyramid, and the score of hypothesized keywords is computed by adding the scores of keywords that exist in the pyramid. However, since unmatched keywords cannot be measured by these two metrics, they resort to a human evaluation. In this human evaluation, evaluators are asked to exclude non-keywords from the sets of human and machine-generated candidates.Apart from Precision, Recall and F1, Pointwise Mutual Information (PMI) is adopted by Jarmasz and Barrière (2012)'s study for the evaluation of keyphrases. Unlike traditional evaluations based on string matching, the PMI estimates semantic similarity. Thanks to relative scores generated by the PMI, it can be used to compare various keyphrase extraction algorithms.Graph theory, which has been contributing to various fields of natural language processing, is also indispensable when it comes to evaluation measures. Since the method of the present paper is based on semantic distances in word graphs, it makes sense to consider techniques for automatic construction of semantic classes and identification of semantic distance.For automatic construction of semantic classes, the following method is presented by Widdows and Dorow (2002): The method starts by constructing a large graph consisting of all nouns in a large corpus. Each node represents a noun, and two nodes get connected if they co-occur, separated by the conjunctions and and or. Rare words are filtered out by a cut-off value, that is, the top n neighbors of each word, which could be determined by the user. To identify the elements of a semantic class, to begin with, “seed words” as a small set of exemplars are chosen manually. Next, in an iterative process, the “most similar” node is added to the manually selected set of seed words. 
A candidate node is not added just because of the connection with one single node of the seed set, but rather it is added only when it has a link to some other neighboring node in the seed set. In doing so, the inclusion of an out-of-category word, which happens to co-occur with one of the category words, is avoided. This process is repeated until no new elements can be added to the seed set.In addition to the automatic construction of semantic classes, the semantic distance between words can be measured given existing semantic networks such as WordNet (Miller, 1995; Oram, 2001), in which nouns are organized as nodes into hierarchical structures. Wu and Palmer (1994)'s similarity metric measures what they call conceptual similarity between two nodes c1 and c2 in a hierarchy (see Equation 5), where depth(ci) is the length of the path to ci from the global root, that is, the top node of the taxonomy. Further lso(ci, cj) denotes the lowest super-ordinate, namely the closest common parent node between ci and cj.
(5)
\mathrm{sim}_{\mathrm{WuPalmer}}(c_1, c_2) = \frac{2\,\mathrm{depth}(\mathrm{lso}(c_1, c_2))}{\mathrm{depth}(c_1) + \mathrm{depth}(c_2)}
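The Wu-Palmer score can be sketched on top of any rooted taxonomy. In the example below, depth() and lso() are simple stand-ins operating on a child-to-parent map and counting nodes on the path from the root (root depth 1); they are illustrative assumptions, not WordNet API calls.

```python
def depth(node, parent):
    """Number of nodes on the path from the global root to node (root has depth 1)."""
    d = 1
    while node in parent:
        node = parent[node]
        d += 1
    return d

def lso(c1, c2, parent):
    """Lowest super-ordinate: the closest common ancestor of c1 and c2 in a tree taxonomy."""
    ancestors = {c1}
    while c1 in parent:
        c1 = parent[c1]
        ancestors.add(c1)
    while c2 not in ancestors:
        c2 = parent[c2]
    return c2

def wu_palmer(c1, c2, parent):
    return 2 * depth(lso(c1, c2, parent), parent) / (depth(c1, parent) + depth(c2, parent))

# Tiny toy taxonomy: entity -> fruit -> {apple, orange}
parent = {"fruit": "entity", "apple": "fruit", "orange": "fruit"}
print(wu_palmer("apple", "orange", parent))  # 2 * 2 / (3 + 3) ≈ 0.667
```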
Resnik (1995), using the lso(ci, cj) in combination with information theory, proposes a similarity measure. Let p(c) be the probability of encountering an instance of a concept c in the taxonomy such as WordNet. For instance, if c is “fruit,” its hyponyms such as “apple,” “orange,” etc., are the instances. According to Shannon's information theory, the information content (IC) is −logp(c), and the semantic similarity between c1 and c2 is defined in Equation (6).
(6)
\mathrm{sim}_{\mathrm{Resnik}}(c_1, c_2) = -\log p(\mathrm{lso}(c_1, c_2))
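Resnik's information-content similarity can likewise be sketched on a toy taxonomy; the probability table below is made up for illustration and would normally be estimated from corpus counts propagated up the hierarchy.

```python
import math

def lowest_superordinate(c1, c2, parent):
    """Closest common ancestor of c1 and c2 in a tree-shaped taxonomy."""
    ancestors = {c1}
    while c1 in parent:
        c1 = parent[c1]
        ancestors.add(c1)
    while c2 not in ancestors:
        c2 = parent[c2]
    return c2

def resnik(c1, c2, prob, parent):
    """sim_Resnik(c1, c2) = -log p(lso(c1, c2))."""
    return -math.log(prob[lowest_superordinate(c1, c2, parent)])

# Toy taxonomy and probabilities (illustrative values, not corpus estimates).
parent = {"fruit": "entity", "apple": "fruit", "orange": "fruit", "tool": "entity", "hammer": "tool"}
prob = {"entity": 1.0, "fruit": 1 / 3, "apple": 1 / 6, "orange": 1 / 6, "tool": 1 / 3, "hammer": 1 / 3}
print(resnik("apple", "orange", prob, parent))  # -log(1/3) ≈ 1.10
print(resnik("apple", "hammer", prob, parent))  # only common ancestor is the root: -log(1) = 0
```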
The key idea of this measure is the extent to which two concepts share information in common. If the position of the lowest super-ordinate between c1 and c2 is lower, that is, if the closest common parent node of c1 and c2 is a less abstract concept, the possibility of encountering an instance of the lowest super-ordinate is lower. That implies a higher IC, which indicates that the two concepts are similar. Moreover, if the lowest super-ordinate of the two nodes is the top node in the taxonomy, their similarity will be −log p(1) = 0 (see also Budanitsky and Hirst, 2006).

While it is possible to build our method on top of any of these similarity measures, the constructions we propose are asymmetric. That is because the comparison of a keyword set with a gold standard set is an asymmetric process: if the adequacy of one keyword set implies the adequacy of another, it does not necessarily follow that the same is true the other way around. Hence, we prefer the usage of quasi-metrics rather than metrics to measure semantic similarity.

Nowadays, a state-of-the-art method for keyword extraction is the graph-based model TextRank (Mihalcea and Tarau, 2004). In TextRank, text units such as words and sentences are represented as vertices in a graph, and the graph is constructed based on their co-occurrences. In the graph, edges connecting the vertices are defined according to the relation between the text units, e.g., lexical or semantic relations, contextual overlap, etc. As a graph-based ranking algorithm, Mihalcea and Tarau (2004) modify Google's PageRank, developed by Brin and Page (1998), and offer a new formula for graph-based ranking (see Equation 7), where In(Vi) denotes the set of vertices pointing to the vertex Vi, while Out(Vi) denotes the set of vertices that the vertex Vi points to. Further, d is a damping factor that integrates into the model the probability of jumping from a given vertex to another random vertex in the graph. The damping factor d is usually set to 0.85 (Brin and Page, 1998). Next, wij is defined as the weight of the edge between two vertices Vi and Vj. In this regard, it is worth noting that the graph-based ranking in the original PageRank definition is not weighted. In the end, the TextRank algorithm computes scores of the text units by iterating until convergence and, based on the final scores, the relevant text units are extracted. Kölbl et al. (2021) have shown that TextRank performs very poorly for German texts.
(7)
WS(V_i) = (1 - d) + d \sum_{V_j \in \mathrm{In}(V_i)} \frac{w_{ji}}{\sum_{V_k \in \mathrm{Out}(V_j)} w_{jk}} \, WS(V_j)
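The weighted ranking formula can be iterated to convergence in a few lines of plain Python; the toy co-occurrence graph, tolerance, and iteration cap below are illustrative choices rather than the original TextRank configuration (only the damping factor d = 0.85 follows the text).

```python
def weighted_textrank(out_edges, d=0.85, tol=1e-6, max_iter=100):
    """Iterate WS(V_i) = (1 - d) + d * sum_{V_j in In(V_i)} w_ji / sum_{V_k in Out(V_j)} w_jk * WS(V_j).

    out_edges[j] is a dict {i: w_ji} of weighted edges leaving vertex j.
    """
    vertices = set(out_edges) | {i for nbrs in out_edges.values() for i in nbrs}
    ws = {v: 1.0 for v in vertices}
    for _ in range(max_iter):
        new_ws = {}
        for i in vertices:
            incoming = sum(
                nbrs[i] / sum(nbrs.values()) * ws[j]
                for j, nbrs in out_edges.items()
                if i in nbrs
            )
            new_ws[i] = (1 - d) + d * incoming
        converged = max(abs(new_ws[v] - ws[v]) for v in vertices) < tol
        ws = new_ws
        if converged:
            break
    return ws

# Tiny co-occurrence graph over four word vertices.
graph = {
    "keyword": {"graph": 2.0, "evaluation": 1.0},
    "graph": {"keyword": 2.0, "semantic": 1.0},
    "semantic": {"graph": 1.0, "evaluation": 1.0},
    "evaluation": {"keyword": 1.0, "semantic": 1.0},
}
print(weighted_textrank(graph))
```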
Since some lexical ontologies are relevant to our study, brief remarks about them must be made. WordNet is the most popular ontology, and nouns, verbs, adjectives, and adverbs are connected with each other based on their semantic relations. The main relation among words in WordNet is synonymy. In addition, the super-subordinate relation such as hypernymy and hyponymy is also integrated. GermaNet (Hamp and Feldweg, 1997; Henrich and Hinrichs, 2010) is designed for the German language and shares such common structural features with WordNet. BabelNet (Navigli and Ponzetto, 2012) is a multilingual semantic network constructed from WordNet and Wikipedia. The most distinctive feature of this ontology is that concepts are semantically related to each other across various languages. FrameNet is also one of the lexical ontologies, but it is not constructed based on words per se, but on semantic frames (Baker and Fellbaum, 2009). | [
"23726319",
"25089073",
"11749717",
"11830677",
"12109371",
"18367686"
] | [
{
"pmid": "23726319",
"title": "Networks in cognitive science.",
"abstract": "Networks of interconnected nodes have long played a key role in Cognitive Science, from artificial neural networks to spreading activation models of semantic memory. Recently, however, a new Network Science has been developed, providing insights into the emergence of global, system-scale properties in contexts as diverse as the Internet, metabolic reactions, and collaborations among scientists. Today, the inclusion of network theory into Cognitive Sciences, and the expansion of complex-systems science, promises to significantly change the way in which the organization and dynamics of cognitive and behavioral processes are understood. In this paper, we review recent contributions of network theory at different levels and domains within the Cognitive Sciences."
},
{
"pmid": "25089073",
"title": "How children explore the phonological network in child-directed speech: A survival analysis of children's first word productions.",
"abstract": "We explored how phonological network structure influences the age of words' first appearance in children's (14-50 months) speech, using a large, longitudinal corpus of spontaneous child-caregiver interactions. We represent the caregiver lexicon as a network in which each word is connected to all of its phonological neighbors, and consider both words' local neighborhood density (degree), and also their embeddedness among interconnected neighborhoods (clustering coefficient and coreness). The larger-scale structure reflected in the latter two measures is implicated in current theories of lexical development and processing, but its role in lexical development has not yet been explored. Multilevel discrete-time survival analysis revealed that children are more likely to produce new words whose network properties support lexical access for production: high degree, but low clustering coefficient and coreness. These effects appear to be strongest at earlier ages and largely absent from 30 months on. These results suggest that both a word's local connectivity in the lexicon and its position in the lexicon as a whole influences when it is learned, and they underscore how general lexical processing mechanisms contribute to productive vocabulary development."
},
{
"pmid": "11749717",
"title": "Language as an evolving word web.",
"abstract": "Human language may be described as a complex network of linked words. In such a treatment, each distinct word in language is a vertex of this web, and interacting words in sentences are connected by edges. The empirical distribution of the number of connections of words in this network is of a peculiar form that includes two pronounced power-law regions. Here we propose a theory of the evolution of language, which treats language as a self-organizing network of interacting words. In the framework of this concept, we completely describe the observed word web structure without any fitting. We show that the two regimes in the distribution naturally emerge from the evolutionary dynamics of the word web. It follows from our theory that the size of the core part of language, the 'kernel lexicon', does not vary as language evolves."
},
{
"pmid": "11830677",
"title": "Global organization of the Wordnet lexicon.",
"abstract": "The lexicon consists of a set of word meanings and their semantic relationships. A systematic representation of the English lexicon based in psycholinguistic considerations has been put together in the database Wordnet in a long-term collaborative effort. We present here a quantitative study of the graph structure of Wordnet to understand the global organization of the lexicon. Semantic links follow power-law, scale-invariant behaviors typical of self-organizing networks. Polysemy (the ambiguity of an individual word) is one of the links in the semantic network, relating the different meanings of a common word. Polysemous links have a profound impact in the organization of the semantic graph, conforming it as a small world network, with clusters of high traffic (hubs) representing abstract concepts such as line, head, or circle. Our results show that: (i) Wordnet has global properties common to many self-organized systems, and (ii) polysemy organizes the semantic graph in a compact and categorical representation, in a way that may explain the ubiquity of polysemy across languages."
},
{
"pmid": "12109371",
"title": "Restructuring of similarity neighbourhoods in the developing mental lexicon.",
"abstract": "Previous evidence suggests that the structure of similarity neighbourhoods in the developing mental lexicon may differ from that of the fully developed lexicon. The similarity relationships used to organize words into neighbourhoods was investigated in 20 pre-school children (age 3;7 to 5;11) using a two alternative forced-choice classification task. Children classified the similarity of test words relative to a standard word to determine neighbourhood membership. The similarity relationship between the test and standard words varied orthogonally in terms of type of similarity and position of overlap. Standard words were drawn from neighbourhoods differing in density. Results showed that dense neighbourhoods were organized by phoneme similarity in the onset + nucleus or rhyme positions of overlap. In contrast, sparse neighbourhoods appeared to be organized by phoneme similarity in the onset + nucleus, but manner similarity in the rhyme. These results are integrated with previous findings from infants and adults to propose a developmental course of change in the mental lexicon."
},
{
"pmid": "18367686",
"title": "What can graph theory tell us about word learning and lexical retrieval?",
"abstract": "PURPOSE\nGraph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms.\n\n\nMETHOD\nPajek, a program for large network analysis and visualization (V. Batagelj & A. Mvrar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors.\n\n\nRESULTS\nThe average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes suggesting that similar processes might influence the development of the lexicon.\n\n\nCONCLUSIONS\nThe graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing."
}
] |
Frontiers in Neurorobotics | null | PMC8988301 | 10.3389/fnbot.2022.859610 | Generative Adversarial Training for Supervised and Semi-supervised Learning | Neural networks have played critical roles in many research fields. The recently proposed adversarial training (AT) can improve the generalization ability of neural networks by adding intentional perturbations in the training process, but sometimes still fail to generate worst-case perturbations, thus resulting in limited improvement. Instead of designing a specific smoothness function and seeking an approximate solution used in existing AT methods, we propose a new training methodology, named Generative AT (GAT) in this article, for supervised and semi-supervised learning. The key idea of GAT is to formulate the learning task as a minimax game, in which the perturbation generator aims to yield the worst-case perturbations that maximize the deviation of output distribution, while the target classifier is to minimize the impact of this perturbation and prediction error. To solve this minimax optimization problem, a new adversarial loss function is constructed based on the cross-entropy measure. As a result, the smoothness and confidence of the model are both greatly improved. Moreover, we develop a trajectory-preserving-based alternating update strategy to enable the stable training of GAT. Numerous experiments conducted on benchmark datasets clearly demonstrate that the proposed GAT significantly outperforms the state-of-the-art AT methods in terms of supervised and semi-supervised learning tasks, especially when the number of labeled examples is rather small in semi-supervised learning. | 2. Problem Setting and Related WorksWithout loss of generality, we consider the classification tasks in a semi-supervised setting. Let x∈X=RI be the input vector with I-dimension and y∈Y=ZK be the one-hot vector of labels with K categories. Dl={x(i)l,y(i)l|i=1,...,Nl} and Dul={x(j)ul|j=1,...,Nul} denote the labeled and unlabeled dataset, where Nl and Nul are the number of labeled and unlabeled examples. AT regularizes the neural network such that both the natural and perturbed examples output the intended predictions. That is, we aim to learn a mapping 𝔽:X → [0, 1]K parameterized with θ ∈ Θ via solving the following optimization problem
(1)
\min_{\theta} \Big\{ \mathcal{L}_S(\mathcal{D}_l, \theta) + \lambda \cdot \mathcal{L}_R(\mathcal{D}_l, \mathcal{D}_{ul}, \theta) \Big\}.
The symbol LS in Equation 1 represents the supervised loss over the labeled dataset, which can be expanded as
(2)
\mathcal{L}_S = \mathbb{E}_{(x_l, y_l) \sim \mathcal{D}_l}\, \Gamma\big(y_l, F_{\theta}(x_l)\big),
where Fθ(xl) denotes the output distribution vector of the neural network on the input xl given the model parameter θ, and yl is the one-hot vector of the true label for xl. The operator Γ(·, ·) denotes the distance measure used to evaluate the similarity of two distributions. A common choice of Γ for the supervised cost LS is the cross-entropy measure. LR is the adversarial loss, which serves as a regularization term for promoting the smoothness of the model. The adversarial loss plays an important role in enhancing the generalization performance when the number of labeled examples is small relative to the total number of training examples (i.e., Nl ≪ Nl + Nul). λ is a non-negative value that controls the relative balance between the supervised loss and the adversarial loss.

Many approaches have been proposed to construct LR based on the smoothness assumption; they can be generally represented in a common framework as
(3)
\mathcal{L}_R = \mathbb{E}_{x \sim \mathcal{D}}\, \Gamma\big(F_{\theta}(x; \xi), \tilde{F}_{\theta'}(x; \xi')\big),
where x is sampled from the dataset D, which consists of both labeled and unlabeled examples. Γ(Fθ(x;ξ), F~θ′(x;ξ′)) is termed the smoothness function, which is composed of a teacher model Fθ(x; ξ) and a student model F~θ′(x; ξ′). The teacher model is parameterized with parameter θ and perturbation ξ, while the student model is parameterized with parameter θ′ and perturbation ξ′. The goal of LR is to improve the model's smoothness by forcing the student model to follow the teacher model. That is to say, the output distributions yielded by F~ are supposed to be consistent with the outputs derived by F. To this end, the teacher model, student model, and similarity measure must be carefully crafted to formulate an appropriate smoothness function against perturbations of the input and variance of the parameters. Based on the implementations of this smoothness function, some typical AT approaches can be explicitly defined.

Random Adversarial Training: In RAT, random noises are introduced in the student model instead of the teacher model, and the parameters of the student model are shared with the teacher model. Moreover, the L2 distance is used to measure the similarity of the output distributions derived by F~ and F on the whole set of training examples. That is, θ′ = θ, ξ′~N(0,1), ξ = 0, and D = Dul ∪ Dl for Equation 3.

Adversarial Training With Π-Model: In contrast to RAT, the Π-model introduces random noises to both the teacher model and the student model, i.e., ξ′, ξ ~ N(0,1). The reasoning is based on the assumption that the prediction yielded by a natural example may itself be an outlier; hence, it is reasonable to make two noisy predictions learn from each other. In this case, optimizing the smoothness function for the Π-model is equivalent to minimizing the prediction variance of the classifier (Luo et al., 2018).

Standard Adversarial Training: Instead of adding random noises to the teacher/student model, the perturbation adopted in SAT is an imperceptible noise that is carefully designed to fool the neural network. The adversarial loss
LRsat of SAT can be written as
(4)
\mathcal{L}_R^{\mathrm{sat}} = \mathbb{E}_{(x_l, y_l) \sim \mathcal{D}_l}\, \mathrm{KL}\big(y_l \,\|\, \tilde{F}_{\theta}(x_l; \xi_{\mathrm{adv}})\big) \quad \text{s.t.}\;\; \xi_{\mathrm{adv}} = \operatorname*{arg\,max}_{\xi;\, \|\xi\| \le \varepsilon} \mathrm{KL}\big(y_l \,\|\, \tilde{F}_{\theta}(x_l; \xi)\big),
where the operator KL(·||·) denotes the Kullback-Leibler (K-L) divergence used as the similarity measure. ξadv denotes the adversarial perturbation that is added to xl to make the output distribution of the student model deviate most from yl. ε is a prior constant that controls the perturbation strength. Note that the teacher model, in this case, degenerates into the one-hot vector of the true label. Generally, we cannot obtain the exact adversarial direction ξadv in closed form. Hence, a linear approximation of this objective function is applied to approximate the adversarial perturbation. For the ℓ∞ norm, the adversarial perturbation ξadv can be efficiently approximated using the well-known fast gradient sign method (FGSM) (Madry et al., 2017). That is,
(5)
\xi_{\mathrm{adv}} \approx \varepsilon \cdot \mathrm{sign}\big(\nabla_{x_l} \mathrm{KL}(y_l \,\|\, \tilde{F}_{\theta}(x_l; \xi))\big).
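A minimal PyTorch sketch of the FGSM approximation in Equation 5; the cross-entropy loss (which coincides with the KL term when yl is one-hot), the use of class indices for the labels, and the ε value in the usage comment are assumptions for illustration, not code from the papers discussed.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x_l, y_l, epsilon):
    """One-step FGSM: xi_adv = epsilon * sign(grad_x KL(y_l || F(x_l))).

    For a one-hot y_l the KL term reduces to the cross-entropy loss used below.
    """
    x_l = x_l.clone().detach().requires_grad_(True)
    logits = model(x_l)
    loss = F.cross_entropy(logits, y_l)      # y_l given here as class indices
    grad = torch.autograd.grad(loss, x_l)[0]
    return epsilon * grad.sign()

# Usage sketch: perturb the batch and also train on the perturbed examples.
# x_adv = x_l + fgsm_perturbation(model, x_l, y_l, epsilon=8 / 255)
```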
Some alternative variants, such as the iterative gradient sign method (IGSM) (Tramèr et al., 2017) and the momentum IGSM (M-IGSM) (Dong et al., 2018), are available to solve the objective function. By adding adversarial perturbations to the student model, SAT obtains better generalization performance than RAT and the Π-model. Unfortunately, SAT can only be applied in supervised learning tasks since it has to use the labeled examples to compute the adversarial loss.

Virtual Adversarial Training: Different from SAT, the key idea of VAT is to define the adversarial loss based on the output distribution inferred on the unlabeled examples. In this regard, the adversarial loss
LRvat of VAT can be written as
(6)
\mathcal{L}_R^{\mathrm{vat}} = \mathbb{E}_{x \sim \mathcal{D}_l \cup \mathcal{D}_{ul}}\, \mathrm{KL}\big(F_{\theta}(x) \,\|\, \tilde{F}_{\theta}(x; \xi_{\mathrm{adv}})\big) \quad \text{s.t.}\;\; \xi_{\mathrm{adv}} = \operatorname*{arg\,max}_{\xi;\, \|\xi\| \le \varepsilon} \mathrm{KL}\big(F_{\theta}(x) \,\|\, \tilde{F}_{\theta}(x; \xi)\big).
To obtain the adversarial perturbation ξadv, Miyato et al. (2018) proposed to approximate the objective function with a second-order Taylor expansion at ξ = 0. That is,
(7)
\xi_{\mathrm{adv}} \approx \operatorname*{arg\,max}_{\xi;\, \|\xi\| \le \varepsilon} \tfrac{1}{2}\, \xi^{T} H(x, \theta)\, \xi,
where H is a Hessian matrix defined by H(x, θ) = ∇∇ξ KL(Fθ(x)||F~θ(x;ξ)). This quadratic optimization is an eigenvalue problem that can be solved using the power iteration algorithm. Since VAT acquires the adversarial perturbation in the absence of label information, this method is applicable to both supervised and semi-supervised learning.
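The power-iteration solution sketched above can be written in a few lines of PyTorch; the number of power iterations, the finite-difference step xi, and the normalization helper below follow the spirit of Miyato et al. (2018) but are our own illustrative assumptions rather than code from that paper or from GAT.

```python
import torch
import torch.nn.functional as F

def _l2_normalize(d):
    # Per-example L2 normalization, keeping the original tensor shape.
    return d / (d.flatten(1).norm(dim=1).view(-1, *([1] * (d.dim() - 1))) + 1e-8)

def vat_perturbation(model, x, epsilon, xi=1e-6, n_power_iterations=1):
    """Approximate the dominant eigenvector of H(x, theta) by power iteration (Eq. 7)."""
    with torch.no_grad():
        p = F.softmax(model(x), dim=1)          # fixed "teacher" prediction F_theta(x)
    d = _l2_normalize(torch.randn_like(x))      # random start direction
    for _ in range(n_power_iterations):
        d.requires_grad_(True)
        p_hat = F.log_softmax(model(x + xi * d), dim=1)
        adv_distance = F.kl_div(p_hat, p, reduction="batchmean")
        grad = torch.autograd.grad(adv_distance, d)[0]
        d = _l2_normalize(grad.detach())
    return epsilon * d                          # xi_adv, used to build L_R^vat

# Usage sketch on an unlabeled batch:
# r_adv = vat_perturbation(model, x_unlabeled, epsilon=1.0)
# lds = F.kl_div(F.log_softmax(model(x_unlabeled + r_adv), dim=1),
#                F.softmax(model(x_unlabeled), dim=1).detach(), reduction="batchmean")
```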
"33397941",
"34824600",
"30040630",
"25719670",
"30640631"
] | [
{
"pmid": "33397941",
"title": "Anomalous collapses of Nares Strait ice arches leads to enhanced export of Arctic sea ice.",
"abstract": "The ice arches that usually develop at the northern and southern ends of Nares Strait play an important role in modulating the export of Arctic Ocean multi-year sea ice. The Arctic Ocean is evolving towards an ice pack that is younger, thinner, and more mobile and the fate of its multi-year ice is becoming of increasing interest. Here, we use sea ice motion retrievals from Sentinel-1 imagery to report on the recent behavior of these ice arches and the associated ice fluxes. We show that the duration of arch formation has decreased over the past 20 years, while the ice area and volume fluxes along Nares Strait have both increased. These results suggest that a transition is underway towards a state where the formation of these arches will become atypical with a concomitant increase in the export of multi-year ice accelerating the transition towards a younger and thinner Arctic ice pack."
},
{
"pmid": "34824600",
"title": "IoT-Based Smart Health Monitoring System for COVID-19 Patients.",
"abstract": "During the ongoing COVID-19 pandemic, Internet of Things- (IoT-) based health monitoring systems are potentially immensely beneficial for COVID-19 patients. This study presents an IoT-based system that is a real-time health monitoring system utilizing the measured values of body temperature, pulse rate, and oxygen saturation of the patients, which are the most important measurements required for critical care. This system has a liquid crystal display (LCD) that shows the measured temperature, pulse rate, and oxygen saturation level and can be easily synchronized with a mobile application for instant access. The proposed IoT-based method uses an Arduino Uno-based system, and it was tested and verified for five human test subjects. The results obtained from the system were promising: the data acquired from the system are stored very quickly. The results obtained from the system were found to be accurate when compared to other commercially available devices. IoT-based tools may potentially be valuable during the COVID-19 pandemic for saving people's lives."
},
{
"pmid": "30040630",
"title": "Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning.",
"abstract": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10."
},
{
"pmid": "25719670",
"title": "Human-level control through deep reinforcement learning.",
"abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks."
},
{
"pmid": "30640631",
"title": "Adversarial Examples: Attacks and Defenses for Deep Learning.",
"abstract": "With rapid progress and significant successes in a wide spectrum of applications, deep learning is being applied in many safety-critical environments. However, deep neural networks (DNNs) have been recently found vulnerable to well-designed input samples called adversarial examples. Adversarial perturbations are imperceptible to human but can easily fool DNNs in the testing/deploying stage. The vulnerability to adversarial examples becomes one of the major risks for applying DNNs in safety-critical environments. Therefore, attacks and defenses on adversarial examples draw great attention. In this paper, we review recent findings on adversarial examples for DNNs, summarize the methods for generating adversarial examples, and propose a taxonomy of these methods. Under the taxonomy, applications for adversarial examples are investigated. We further elaborate on countermeasures for adversarial examples. In addition, three major challenges in adversarial examples and the potential solutions are discussed."
}
] |
Frontiers in Psychology | null | PMC8989849 | 10.3389/fpsyg.2022.781448 | A Study of Subliminal Emotion Classification Based on Entropy Features | Electroencephalogram (EEG) has been widely utilized in emotion recognition. Psychologists have found that emotions can be divided into conscious emotion and unconscious emotion. In this article, we explore to classify subliminal emotions (happiness and anger) with EEG signals elicited by subliminal face stimulation, that is to select appropriate features to classify subliminal emotions. First, multi-scale sample entropy (MSpEn), wavelet packet energy (Ei), and wavelet packet entropy (WpEn) of EEG signals are extracted. Then, these features are fed into the decision tree and improved random forest, respectively. The classification accuracy with Ei and WpEn is higher than MSpEn, which shows that Ei and WpEn can be used as effective features to classify subliminal emotions. We compared the classification results of different features combined with the decision tree algorithm and the improved random forest algorithm. The experimental results indicate that the improved random forest algorithm attains the best classification accuracy for subliminal emotions. Finally, subliminal emotions and physiological proof of subliminal affective priming effect are discussed. | 2. Related WorkThe methods based on physiological signals are more effective and reliable because humans can not control them intentionally, such as electroencephalogram, electromyogram (EMG), electrocardiogram (ECG), skin resistance (SC) (Kim et al., 2004; Kim and Andr, 2008), pulse rate, and respiration signals. Among these methods, EEG-based emotion recognition has become quite common in recent years. There are many research projects focusing on EEG-based emotion recognition (Hosseini and Naghibi-Sistani, 2011; Colic et al., 2015; Bhatti et al., 2016). Jatupaiboon et al. (2013) indicated that the power spectrum from each frequency band is used as features and the accuracy rate of the SVM classifier is about 85.41%. Bajaj and Pachori (2014) proposed new features based on multiwavelet transform for the classification of human emotions from EEG signals. Duan et al. (2013) proposed a new effective EEG feature named differential entropy to represent the characteristics associated with emotional states.Extracting effective features is the key to the subliminal emotion recognition of EEG signals. Four different features (time domain, frequency domain, time-frequency based, and non-linear) are commonly identified in the feature extraction phase. Compared to traditional time domain and frequency domain analysis, time-frequency based, and non-linear are more widely used (Vijith et al., 2017). Wavelet packet transform is a typical linear time-frequency analysis method. Wavelet packet decomposition is a wavelet transform that provides a time-frequency decomposition of multi-level signals. Murugappan et al. (2008) used video stimuli to trigger emotional responses and extract wavelet coefficients to obtain the energy of the frequency band as input features. Verma and Tiwary (2014) used discrete wavelet transform for feature extraction and classified emotions with support vector machine (SVM), multilayer perceptron (MLP), K nearest neighbor, and metamulticlass (MMC). In recent years, many scholars have tried to analyze EEG signals by non-linear dynamics methods. 
Commonly used methods include the correlation dimension, the Lyapunov exponent, the Hurst exponent, and various entropy-based analysis methods (Sen et al., 2014). Hosseini and Naghibi-Sistani (2011) proposed an EEG-based emotion recognition system that integrates approximate entropy (ApEn) and wavelet entropy (WE) into a new approach to emotional state analysis. Xin et al. (2015) proposed an improved multi-scale entropy algorithm for feature extraction from emotional EEG. Michalopoulos and Bourbakis (2017) applied multi-scale entropy (MSE) to EEG recordings of subjects who were watching musical videos selected to elicit specific emotions and found that MSE is able to discover significant differences in the temporal organization of the EEG during events that elicit emotions with low/high valence and arousal. The upsurge in emotion research has attracted scholars to explore and characterize subliminal emotions, and the analysis and processing of EEG signals have become an indispensable research focus in emotion recognition. | [
"31841425",
"17333922",
"12424363",
"19172646",
"33465029",
"18988943",
"15191089",
"25258346",
"24609509",
"21955918",
"24269801",
"25250001",
"30778507",
"32240426",
"33636711"
] | [
{
"pmid": "31841425",
"title": "Deep Learning Classification of Neuro-Emotional Phase Domain Complexity Levels Induced by Affective Video Film Clips.",
"abstract": "In the present article, a novel emotional complexity marker is proposed for classification of discrete emotions induced by affective video film clips. Principal Component Analysis (PCA) is applied to full-band specific phase space trajectory matrix (PSTM) extracted from short emotional EEG segment of 6 s, then the first principal component is used to measure the level of local neuronal complexity. As well, Phase Locking Value (PLV) between right and left hemispheres is estimated for in order to observe the superiority of local neuronal complexity estimation to regional neuro-cortical connectivity measurements in clustering nine discrete emotions (fear, anger, happiness, sadness, amusement, surprise, excitement, calmness, disgust) by using Long-Short-Term-Memory Networks as deep learning applications. In tests, two groups (healthy females and males aged between 22 and 33 years old) are classified with the accuracy levels of [Formula: see text] and [Formula: see text] through the proposed emotional complexity markers and and connectivity levels in terms of PLV in amusement. The groups are found to be statistically different ( p << 0.5) in amusement with respect to both metrics, even if gender difference does not lead to different neuro-cortical functions in any of the other discrete emotional states. The high deep learning classification accuracy of [Formula: see text] is commonly obtained for discrimination of positive emotions from negative emotions through the proposed new complexity markers. Besides, considerable useful classification performance is obtained in discriminating mixed emotions from each other through full-band connectivity features. The results reveal that emotion formation is mostly influenced by individual experiences rather than gender. In detail, local neuronal complexity is mostly sensitive to the affective valance rating, while regional neuro-cortical connectivity levels are mostly sensitive to the affective arousal ratings."
},
{
"pmid": "17333922",
"title": "[The sample entropy and its application in EEG based epilepsy detection].",
"abstract": "It is of great importance for the detection of epilepsy in clinical applications. Based on the limitations of the common used approximate entropy (ApEn) in the epilepsy detection, this paper analyzes epileptic EEG signals with the sample entropy (SampEn) approach, a new method for signal analysis with much higher precision than that of the ApEn. Data analysis results show that the values from both ApEn and SampEn decrease significantly when the epilepsy is burst. Furthermore, the SampEn is more sensitive to EEG changes caused by the epilepsy, about 15%-20% higher than the results of the ApEn."
},
{
"pmid": "12424363",
"title": "Emotion, cognition, and behavior.",
"abstract": "Emotion is central to the quality and range of everyday human experience. The neurobiological substrates of human emotion are now attracting increasing interest within the neurosciences motivated, to a considerable extent, by advances in functional neuroimaging techniques. An emerging theme is the question of how emotion interacts with and influences other domains of cognition, in particular attention, memory, and reasoning. The psychological consequences and mechanisms underlying the emotional modulation of cognition provide the focus of this article."
},
{
"pmid": "19172646",
"title": "Coordinate-based activation likelihood estimation meta-analysis of neuroimaging data: a random-effects approach based on empirical estimates of spatial uncertainty.",
"abstract": "A widely used technique for coordinate-based meta-analyses of neuroimaging data is activation likelihood estimation (ALE). ALE assesses the overlap between foci based on modeling them as probability distributions centered at the respective coordinates. In this Human Brain Project/Neuroinformatics research, the authors present a revised ALE algorithm addressing drawbacks associated with former implementations. The first change pertains to the size of the probability distributions, which had to be specified by the used. To provide a more principled solution, the authors analyzed fMRI data of 21 subjects, each normalized into MNI space using nine different approaches. This analysis provided quantitative estimates of between-subject and between-template variability for 16 functionally defined regions, which were then used to explicitly model the spatial uncertainty associated with each reported coordinate. Secondly, instead of testing for an above-chance clustering between foci, the revised algorithm assesses above-chance clustering between experiments. The spatial relationship between foci in a given experiment is now assumed to be fixed and ALE results are assessed against a null-distribution of random spatial association between experiments. Critically, this modification entails a change from fixed- to random-effects inference in ALE analysis allowing generalization of the results to the entire population of studies analyzed. By comparative analysis of real and simulated data, the authors showed that the revised ALE-algorithm overcomes conceptual problems of former meta-analyses and increases the specificity of the ensuing results without loosing the sensitivity of the original approach. It may thus provide a methodologically improved tool for coordinate-based meta-analyses on functional imaging data."
},
{
"pmid": "33465029",
"title": "EEG-Based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications.",
"abstract": "Brain-Computer interfaces (BCIs) enhance the capability of human brain activities to interact with the environment. Recent advancements in technology and machine learning algorithms have increased interest in electroencephalographic (EEG)-based BCI applications. EEG-based intelligent BCI systems can facilitate continuous monitoring of fluctuations in human cognitive states under monotonous tasks, which is both beneficial for people in need of healthcare support and general researchers in different domain areas. In this review, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, compensating for the gaps in the systematic summary of the past five years. Specifically, we first review the current status of BCI and signal sensing technologies for collecting reliable EEG signals. Then, we demonstrate state-of-the-art computational intelligence techniques, including fuzzy models and transfer learning in machine learning and deep learning algorithms, to detect, monitor, and maintain human cognitive states and task performance in prevalent applications. Finally, we present a couple of innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCI research."
},
{
"pmid": "18988943",
"title": "Emotion recognition based on physiological changes in music listening.",
"abstract": "Little attention has been paid so far to physiological signals for emotion recognition compared to audiovisual emotion channels such as facial expression or speech. This paper investigates the potential of physiological signals as reliable channels for emotion recognition. All essential stages of an automatic recognition system are discussed, from the recording of a physiological dataset to a feature-based multiclass classification. In order to collect a physiological dataset from multiple subjects over many weeks, we used a musical induction method which spontaneously leads subjects to real emotional states, without any deliberate lab setting. Four-channel biosensors were used to measure electromyogram, electrocardiogram, skin conductivity and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to find the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by classification results. Classification of four musical emotions (positive/high arousal, negative/high arousal, negative/low arousal, positive/low arousal) is performed by using an extended linear discriminant analysis (pLDA). Furthermore, by exploiting a dichotomic property of the 2D emotion model, we develop a novel scheme of emotion-specific multilevel dichotomous classification (EMDC) and compare its performance with direct multiclass classification using the pLDA. Improved recognition accuracy of 95\\% and 70\\% for subject-dependent and subject-independent classification, respectively, is achieved by using the EMDC scheme."
},
{
"pmid": "15191089",
"title": "Emotion recognition system using short-term monitoring of physiological signals.",
"abstract": "A physiological signal-based emotion recognition system is reported. The system was developed to operate as a user-independent system, based on physiological signal databases obtained from multiple subjects. The input signals were electrocardiogram, skin temperature variation and electrodermal activity, all of which were acquired without much discomfort from the body surface, and can reflect the influence of emotion on the autonomic nervous system. The system consisted of preprocessing, feature extraction and pattern classification stages. Preprocessing and feature extraction methods were devised so that emotion-specific characteristics could be extracted from short-segment signals. Although the features were carefully extracted, their distribution formed a classification problem, with large overlap among clusters and large variance within clusters. A support vector machine was adopted as a pattern classifier to resolve this difficulty. Correct-classification ratios for 50 subjects were 78.4% and 61.8%, for the recognition of three and four categories, respectively."
},
{
"pmid": "25258346",
"title": "The subliminal affective priming effects of faces displaying various levels of arousal: an ERP study.",
"abstract": "This study on the subliminal affective priming effects of faces displaying various levels of arousal employed event-related potentials (ERPs). The participants were asked to rate the arousal of ambiguous medium-arousing faces that were preceded by high- or low-arousing priming faces presented subliminally. The results revealed that the participants exhibited arousal-consistent variation in their arousal level ratings of the probe faces exclusively in the negative prime condition. Compared with high-arousing faces, the low-arousing faces tended to elicit greater late positive component (LPC, 450-660ms) and greater N400 (330-450ms) potentials. These findings support the following conclusions: (1) the effect of subliminal affective priming of faces can be detected in the affective arousal dimension; (2) valence may influence the subliminal affective priming effect of the arousal dimension of emotional stimuli; and (3) the subliminal affective priming effect of face arousal occurs when the prime stimulus affects late-stage processing of the probe."
},
{
"pmid": "24609509",
"title": "A comparative study on classification of sleep stage based on EEG signals using feature selection and classification algorithms.",
"abstract": "Sleep scoring is one of the most important diagnostic methods in psychiatry and neurology. Sleep staging is a time consuming and difficult task undertaken by sleep experts. This study aims to identify a method which would classify sleep stages automatically and with a high degree of accuracy and, in this manner, will assist sleep experts. This study consists of three stages: feature extraction, feature selection from EEG signals, and classification of these signals. In the feature extraction stage, it is used 20 attribute algorithms in four categories. 41 feature parameters were obtained from these algorithms. Feature selection is important in the elimination of irrelevant and redundant features and in this manner prediction accuracy is improved and computational overhead in classification is reduced. Effective feature selection algorithms such as minimum redundancy maximum relevance (mRMR); fast correlation based feature selection (FCBF); ReliefF; t-test; and Fisher score algorithms are preferred at the feature selection stage in selecting a set of features which best represent EEG signals. The features obtained are used as input parameters for the classification algorithms. At the classification stage, five different classification algorithms (random forest (RF); feed-forward neural network (FFNN); decision tree (DT); support vector machine (SVM); and radial basis function neural network (RBF)) classify the problem. The results, obtained from different classification algorithms, are provided so that a comparison can be made between computation times and accuracy rates. Finally, it is obtained 97.03 % classification accuracy using the proposed method. The results show that the proposed method indicate the ability to design a new intelligent assistance sleep scoring system."
},
{
"pmid": "21955918",
"title": "Rapid processing of emotional expressions without conscious awareness.",
"abstract": "Rapid accurate categorization of the emotional state of our peers is of critical importance and as such many have proposed that facial expressions of emotion can be processed without conscious awareness. Typically, studies focus selectively on fearful expressions due to their evolutionary significance, leaving the subliminal processing of other facial expressions largely unexplored. Here, I investigated the time course of processing of 3 facial expressions (fearful, disgusted, and happy) plus an emotionally neutral face, during objectively unaware and aware perception. Participants completed the challenging \"which expression?\" task in response to briefly presented backward-masked expressive faces. Although participant's behavioral responses did not differentiate between the emotional content of the stimuli in the unaware condition, activity over frontal and occipitotemporal (OT) brain regions indicated an emotional modulation of the neuronal response. Over frontal regions this was driven by negative facial expressions and was present on all emotional trials independent of later categorization. Whereas the N170 component, recorded on lateral OT electrodes, was enhanced for all facial expressions but only on trials that would later be categorized as emotional. The results indicate that emotional faces, not only fearful, are processed without conscious awareness at an early stage and highlight the critical importance of considering categorization response when studying subliminal perception."
},
{
"pmid": "24269801",
"title": "Multimodal fusion framework: a multiresolution approach for emotion classification and recognition from physiological signals.",
"abstract": "The purpose of this paper is twofold: (i) to investigate the emotion representation models and find out the possibility of a model with minimum number of continuous dimensions and (ii) to recognize and predict emotion from the measured physiological signals using multiresolution approach. The multimodal physiological signals are: Electroencephalogram (EEG) (32 channels) and peripheral (8 channels: Galvanic skin response (GSR), blood volume pressure, respiration pattern, skin temperature, electromyogram (EMG) and electrooculogram (EOG)) as given in the DEAP database. We have discussed the theories of emotion modeling based on i) basic emotions, ii) cognitive appraisal and physiological response approach and iii) the dimensional approach and proposed a three continuous dimensional representation model for emotions. The clustering experiment on the given valence, arousal and dominance values of various emotions has been done to validate the proposed model. A novel approach for multimodal fusion of information from a large number of channels to classify and predict emotions has also been proposed. Discrete Wavelet Transform, a classical transform for multiresolution analysis of signal has been used in this study. The experiments are performed to classify different emotions from four classifiers. The average accuracies are 81.45%, 74.37%, 57.74% and 75.94% for SVM, MLP, KNN and MMC classifiers respectively. The best accuracy is for 'Depressing' with 85.46% using SVM. The 32 EEG channels are considered as independent modes and features from each channel are considered with equal importance. May be some of the channel data are correlated but they may contain supplementary information. In comparison with the results given by others, the high accuracy of 85% with 13 emotions and 32 subjects from our proposed method clearly proves the potential of our multimodal fusion approach."
},
{
"pmid": "25250001",
"title": "Enhanced subliminal emotional responses to dynamic facial expressions.",
"abstract": "Emotional processing without conscious awareness plays an important role in human social interaction. Several behavioral studies reported that subliminal presentation of photographs of emotional facial expressions induces unconscious emotional processing. However, it was difficult to elicit strong and robust effects using this method. We hypothesized that dynamic presentations of facial expressions would enhance subliminal emotional effects and tested this hypothesis with two experiments. Fearful or happy facial expressions were presented dynamically or statically in either the left or the right visual field for 20 (Experiment 1) and 30 (Experiment 2) ms. Nonsense target ideographs were then presented, and participants reported their preference for them. The results consistently showed that dynamic presentations of emotional facial expressions induced more evident emotional biases toward subsequent targets than did static ones. These results indicate that dynamic presentations of emotional facial expressions induce more evident unconscious emotional processing."
},
{
"pmid": "30778507",
"title": "[Attentional bias processing mechanism of emotional faces: anger and happiness superiority effects].",
"abstract": "Emotional information is critical for our social life, in which attentional bias is now a focus in the study on attention. However, the attentional bias processing mechanism of emotional faces still arouses huge controversy. Using similar experimental paradigms and stimuli, the published studies have yielded contradictory results. Some studies suggest that angry faces could automatically stimulate attention, that is, there is an anger superiority effect. On the contrary, lines of growing evidence support the existence of a happiness superiority effect, suggesting that the superiority effect is shown in happy faces rather than angry faces. In the present paper, the behavioral and neuroscience studies of anger and happiness superiority effects are combined. It is found that there are three major reasons for the debate over the two types of effects, which include the choice of stimulus materials, the difference of paradigm setting, and the different stages of emotional processing. By comparatively integrating the previous published results, we highlight that the future studies should further control the experimental materials and procedures, and investigate the processing mechanism of anger and happiness superiority effects by combining cognitive neurobiology means to resolve the disputes."
},
{
"pmid": "32240426",
"title": "Fused behavior recognition model based on attention mechanism.",
"abstract": "With the rapid development of deep learning technology, behavior recognition based on video streams has made great progress in recent years. However, there are also some problems that must be solved: (1) In order to improve behavior recognition performance, the models have tended to become deeper, wider, and more complex. However, some new problems have been introduced also, such as that their real-time performance decreases; (2) Some actions in existing datasets are so similar that they are difficult to distinguish. To solve these problems, the ResNet34-3DRes18 model, which is a lightweight and efficient two-dimensional (2D) and three-dimensional (3D) fused model, is constructed in this study. The model used 2D convolutional neural network (2DCNN) to obtain the feature maps of input images and 3D convolutional neural network (3DCNN) to process the temporal relationships between frames, which made the model not only make use of 3DCNN's advantages on video temporal modeling but reduced model complexity. Compared with state-of-the-art models, this method has shown excellent performance at a faster speed. Furthermore, to distinguish between similar motions in the datasets, an attention gate mechanism is added, and a Res34-SE-IM-Net attention recognition model is constructed. The Res34-SE-IM-Net achieved 71.85%, 92.196%, and 36.5% top-1 accuracy (The predicting label obtained from model is the largest one in the output probability vector. If the label is the same as the target label of the motion, the classification is correct.) respectively on the test sets of the HMDB51, UCF101, and Something-Something v1 datasets."
},
{
"pmid": "33636711",
"title": "A novel consciousness emotion recognition method using ERP components and MMSE.",
"abstract": "Objective.Electroencephalogram (EEG) based emotion recognition mainly extracts traditional features from time domain and frequency domain, and the classification accuracy is often low for the complex nature of EEG signals. However, to the best of our knowledge, the fusion of event-related potential (ERP) components and traditional features is not employed in emotion recognition, and the ERP components are only identified and analyzed by the psychology professionals, which is time-consuming and laborious.Approach.In order to recognize the consciousness and unconsciousness emotions, we propose a novel consciousness emotion recognition method using ERP components and modified multi-scale sample entropy (MMSE). Firstly, ERP components such as N200, P300 and N300 are automatically identified and extracted based on shapelet technique. Secondly, variational mode decomposition and wavelet packet decomposition are utilized to process EEG signals for obtaining different levels of emotional variational mode function (VMF), namelyVMFβ+γ, and then nonlinear feature MMSE of eachVMFβ+γare extracted. At last, ERP components and nonlinear feature MMSE are fused to generate a new feature vector, which is fed into random forest to classify the consciousness and unconsciousness emotions.Main results.Experimental results demonstrate that the average classification accuracy of our proposed method reach 94.42%, 94.88%, and 94.95% for happiness, horror and anger, respectively.Significance.Our study indicates that the fusion of ERP components and nonlinear feature MMSE is more effective for the consciousness and unconsciousness emotions recognition, which provides a new research direction and method for the study of nonlinear time series."
}
] |
Frontiers in Cardiovascular Medicine | null | PMC8990170 | 10.3389/fcvm.2022.860032 | Deep Learning for Detecting and Locating Myocardial Infarction by Electrocardiogram: A Literature Review | Myocardial infarction is a common cardiovascular disorder caused by prolonged ischemia, and early diagnosis of myocardial infarction (MI) is critical for lifesaving. ECG is a simple and non-invasive approach in MI detection, localization, diagnosis, and prognosis. Population-based screening with ECG can detect MI early and help prevent it but this method is too labor-intensive and time-consuming to carry out in practice unless artificial intelligence (AI) would be able to reduce the workload. Recent advances in using deep learning (DL) for ECG screening might rekindle this hope. This review aims to take stock of 59 major DL studies applied to the ECG for MI detection and localization published in recent 5 years, covering convolutional neural network (CNN), long short-term memory (LSTM), convolutional recurrent neural network (CRNN), gated recurrent unit (GRU), residual neural network (ResNet), and autoencoder (AE). In this period, CNN obtained the best popularity in both MI detection and localization, and the highest performance has been obtained from CNN and ResNet model. The reported maximum accuracies of the six different methods are all beyond 97%. Considering the usage of different datasets and ECG leads, the network that trained on 12 leads ECG data of PTB database has obtained higher accuracy than that on smaller number leads data of other datasets. In addition, some limitations and challenges of the DL techniques are also discussed in this review. | Other Related WorkThere exist other six related works that focus on automatic ECG analysis for the prediction of structural cardiac pathologies, including two systematic reviews (52, 53), one meta-analysis (54), and three comprehensive reviews (44, 51, 55). Al Hinai et al. (52) assess the evidence for DL-based analysis of resting ECGs to predict cardiac diseases such as left ventricular (LV) systolic dysfunction, myocardial hypertrophy, and ischemic heart disease. Joloudari et al. (53) focus on ML and DL techniques for myocardial infarction disease (MID) diagnosis but just cover 16 papers regarding DL methods. Grün et al. (54) include a total of five reports to provide an overview of the ability of AI to predict heart failure based on ECG signals. Attia et al. (55) discuss AI ECG algorithms for cardiac screening including LV dysfunction, silent atrial fibrillation, hypertrophic cardiomyopathy, and other structural and valvular diseases. Jothiramalingam et al. review papers that consider ECG signal pre-processing, feature extraction and selection, and classification techniques to diagnose heart disorders such as LV Hypertrophy, Bundle Branch Block, and MI. Ansari et al. (51) comprehensively evaluate several hundred publications that analyzed the ECG signal and electronic health records (EHR) to diagnose myocardial ischemia and infarction automatically and point out that DL methods have not specifically been used to detect MI and ischemia prior to 2017. | [
"12011955",
"29512040",
"11694105",
"21497307",
"20102880",
"26674987",
"22958960",
"33526938",
"30153967",
"11850377",
"13777880",
"33404584",
"32074979",
"26087076",
"19797259",
"27251920",
"19130984",
"12729431",
"33997773",
"29754806",
"19859947",
"31187399",
"32305938",
"29903630",
"10946390",
"29054254",
"33939896",
"23366483",
"20703720",
"33167558",
"34306056",
"30990200",
"21885134",
"24012034",
"26017442",
"23334714",
"32213969",
"29035225",
"34604757",
"34713056",
"34534279",
"34763807",
"12691437",
"32451379",
"1396824",
"28437484",
"34262951",
"33235279",
"33006947",
"34034715",
"17669389",
"33770545",
"28481991",
"30082087",
"29888083",
"32074979",
"31891901",
"32439873",
"32847070",
"33019030",
"33019341",
"33991857",
"33803265",
"33743488",
"29990164",
"9377276",
"28727867",
"33664502",
"31043065",
"30990200",
"22144961",
"32074979",
"32143796",
"30625197",
"32166560",
"30523982",
"31669959",
"31141794",
"32064914",
"32152582",
"33024103",
"32286358",
"31144637"
] | [
{
"pmid": "12011955",
"title": "Acute myocardial infarction: the first manifestation of ischemic heart disease and relation to risk factors.",
"abstract": "OBJECTIVE\nTo assess the association between cardiovascular risk factors and acute myocardial infarction as the first manifestation of ischemic heart disease, correlating them with coronary angiographic findings.\n\n\nMETHODS\nWe carried out a cross-sectional study of 104 patients with previous acute myocardial infarction, who were divided into 2 groups according to the presence or absence of angina prior to acute myocardial infarction. We assessed the presence of angina preceding acute myocardial infarction and risk factors, such as age >55 years, male sex, smoking, systemic arterial hypertension, lipid profile, diabetes mellitus, obesity, sedentary lifestyle, and familial history of ischemic heart disease. On coronary angiography, the severity of coronary heart disease and presence of left ventricular hypertrophy were assessed.\n\n\nRESULTS\nOf the 104 patients studied, 72.1% were males, 90.4% were white, 73.1% were older than 55 years, and 53.8% were hypertensive. Acute myocardial infarction was the first manifestation of ischemic heart disease in 49% of the patients. The associated risk factors were systemic arterial hypertension (RR=0.19; 95% CI=0.06-0.59; P=0.04) and left ventricular hypertrophy (RR=0.27; 95% CI=0,.8-0.88; P=0.03). The remaining risk factors were not statistically significant.\n\n\nCONCLUSION\nPrevalence of acute myocardial infarction as the first manifestation of ischemic heart disease is high, approximately 50%. Hypertensive individuals more frequently have symptoms preceding acute myocardial infarction, probably due to ventricular hypertrophy associated with high blood pressure levels."
},
{
"pmid": "29512040",
"title": "A Population-Based Study of Early Postoperative Outcomes in Patients with Heart Failure Undergoing Bariatric Surgery.",
"abstract": "BACKGROUND\nWeight loss following bariatric surgery can improve cardiac function among patients with heart failure (HF). However, perioperative morbidity of bariatric surgery has not been evaluated in patients with HF.\n\n\nSTUDY DESIGN\nThe National Surgical Quality Improvement Project (NSQIP) database for 2006-2014 was queried to identify patients undergoing adjustable gastric band, gastric bypass, sleeve gastrectomy, and biliopancreatic diversion-duodenal switch. Patients with HF were propensity matched to a control group without HF (1:5). Univariate analyses evaluated differences in complications, and multivariate analysis was completed to predict all-cause morbidity.\n\n\nRESULTS\nThere were 237 patients identified with HF (mean age 52.8 years, 59.9% female, mean body mass index 50.6 kg/m2) matched to 1185 controls without HF who underwent bariatric surgery. Preoperatively, patients with HF were more likely to be taking antihypertensive medication and have undergone prior percutaneous cardiac intervention and cardiac surgery. There was no difference in operative time, surgical site infections, acute renal failure, re-intubation, or myocardial infarction. HF was associated with increased likelihood of length of stay more than 7 days, likelihood to remain ventilated > 48 h, venous thromboembolism, and reoperation. For patients with HF, the adjusted odds ratio for all-cause morbidity was 2.09 (1.32-3.22).\n\n\nCONCLUSION\nThe NSQIP definition of HF, which includes recent hospitalization for HF exacerbation or new HF diagnosis 30 days prior to surgery, predicts a more than two-fold increase in odds of morbidity following bariatric surgery. This must be balanced with the longer-term potential benefits of weight loss and associated improvement in cardiac function in this population."
},
{
"pmid": "11694105",
"title": "Unrecognized myocardial infarction.",
"abstract": "This review addresses myocardial infarctions that escape clinical recognition. It focuses on the prevalence, predisposing factors, and prognosis of these unrecognized infarctions, and incorporates data from relevant epidemiologic studies, basic science investigations, and review articles. These data indicate that at least one fourth of all myocardial infarctions are clinically unrecognized. The demographic characteristics and coronary risk factor profiles of persons with previously unrecognized myocardial infarctions appear to be similar to those of persons whose infarctions are clinically detected. Impaired symptom perception may contribute to lack of recognition, but both patients' and physicians' perceptions about the risk for myocardial infarction may also play an important role. Finally, mortality rates after unrecognized and recognized myocardial infarction are similar. Given the public health implications of unrecognized myocardial infarction, future studies should address screening strategies, risk stratification after detection of previously unrecognized myocardial infarction, and the role of standard postinfarction therapies in affected patients."
},
{
"pmid": "21497307",
"title": "Prevalence, incidence, predictive factors and prognosis of silent myocardial infarction: a review of the literature.",
"abstract": "The prevalence, incidence, risk factors and prognosis of silent myocardial infarction are less well known than those of silent myocardial ischaemia. The aims of this article are to evaluate the prevalence and incidence of silent myocardial infarction in subjects with or without a history of cardiovascular disease and in diabetic patients, and to identify potential risk factors and estimate prognosis through a review of the literature. A Medline search identified studies that provided data on the prevalence, incidence, potential risk factors and/or prognosis of silent myocardial infarction, among cohorts from the general population and large clinical studies of at-risk patients (with hypertension or a history of cardiovascular disease or diabetes). The search identified 15 studies in subjects from the general population, five in hypertensive patients, six in patients with a history of cardiovascular disease, and 10 in diabetic patients. The prevalence and incidence of silent myocardial infarction appear highly variable depending on the population studied, the patients' ages, and the method used to detect silent myocardial infarction. In the general population, the prevalence of silent myocardial infarction increased markedly with increasing age (up to>5% in elderly subjects). Hypertension causes only a moderate increase in prevalence, whereas underlying cardiovascular diseases and diabetes are associated with marked increases in prevalence. The incidence of silent myocardial infarction changes in the same way. The main predictive factors of silent myocardial infarction are hypertension, history of cardiovascular diseases and diabetes duration. Silent myocardial infarction is associated with as poor a prognosis as clinical myocardial infarction. The frequency of silent myocardial infarction and the poor prognosis in at-risk patients amply justify its systematic early detection and active management."
},
{
"pmid": "20102880",
"title": "Percutaneous coronary intervention or coronary artery bypass surgery for cardiogenic shock and multivessel coronary artery disease?",
"abstract": "BACKGROUND\nDespite advances in treatment of cardiogenic shock (CS), the incidence of this serious complication of acute ST-elevation myocardial infarction (STEMI) has stayed relatively constant, and rates of mortality, although somewhat improved in recent decades, remain dauntingly high. Although both percutaneous coronary intervention (PCI) and coronary artery bypass grafting (CABG) are used in patients with CS with multivessel coronary disease, the optimal revascularization strategy in this setting remains unknown.\n\n\nMETHODS\nWe conducted a literature search and review of English language publications on CS in multiple online medical databases. Studies were included if they were (1) randomized controlled trials or observational cohort studies, (2) single-center or multicenter reports, (3) prospective or retrospective studies, and (4) contained information on PCI and CABG. Non-English language studies were excluded.\n\n\nRESULTS\nOur search retrieved no published findings from randomized clinical trials, and only 4 observational reports evaluating PCI versus CABG. Our review of the limited available data suggests similar mortality rates with CABG and PCI in patients with STEMI and multivessel coronary disease complicated by CS.\n\n\nCONCLUSIONS\nLimited data from observational studies in patients with CS and multivessel disease suggest that CABG should be considered a complementary reperfusion strategy to PCI and may be preferred, especially when complete revascularization with PCI is not possible. Our data highlight the need for large randomized trials to further evaluate the relative benefit of PCI versus CABG in patients with multivessel coronary disease and CS using contemporary surgical and percutaneous techniques."
},
{
"pmid": "26674987",
"title": "Reducing myocardial infarct size: challenges and future opportunities.",
"abstract": "Despite prompt reperfusion by primary percutaneous coronary intervention (PPCI), the mortality and morbidity of patients presenting with an acute ST-segment elevation myocardial infarction (STEMI) remain significant with 9% death and 10% heart failure at 1 year. In these patients, one important neglected therapeutic target is 'myocardial reperfusion injury', a term given to the cardiomyocyte death and microvascular dysfunction which occurs on reperfusing ischaemic myocardium. A number of cardioprotective therapies (both mechanical and pharmacological), which are known to target myocardial reperfusion injury, have been shown to reduce myocardial infarct (MI) size in small proof-of-concept clinical studies-however, being able to demonstrate improved clinical outcomes has been elusive. In this article, we review the challenges facing clinical cardioprotection research, and highlight future therapies for reducing MI size and preventing heart failure in patients presenting with STEMI at risk of myocardial reperfusion injury."
},
{
"pmid": "33526938",
"title": "Artificial intelligence-enhanced electrocardiography in cardiovascular disease management.",
"abstract": "The application of artificial intelligence (AI) to the electrocardiogram (ECG), a ubiquitous and standardized test, is an example of the ongoing transformative effect of AI on cardiovascular medicine. Although the ECG has long offered valuable insights into cardiac and non-cardiac health and disease, its interpretation requires considerable human expertise. Advanced AI methods, such as deep-learning convolutional neural networks, have enabled rapid, human-like interpretation of the ECG, while signals and patterns largely unrecognizable to human interpreters can be detected by multilayer AI networks with precision, making the ECG a powerful, non-invasive biomarker. Large sets of digital ECGs linked to rich clinical data have been used to develop AI models for the detection of left ventricular dysfunction, silent (previously undocumented and asymptomatic) atrial fibrillation and hypertrophic cardiomyopathy, as well as the determination of a person's age, sex and race, among other phenotypes. The clinical and population-level implications of AI-based ECG phenotyping continue to emerge, particularly with the rapid rise in the availability of mobile and wearable ECG technologies. In this Review, we summarize the current and future state of the AI-enhanced ECG in the detection of cardiovascular disease in at-risk populations, discuss its implications for clinical decision-making in patients with cardiovascular disease and critically appraise potential limitations and unknowns."
},
{
"pmid": "33404584",
"title": "Posterior infarction: a STEMI easily missed.",
"abstract": "Anterior ST-segment depression encompasses important differential diagnoses, including ST-segment elevation myocardial infarction, non-ST-segment elevation myocardial infarction and pulmonary embolism. Diagnostic accuracy is crucial, as this has important therapeutic implications. This ECG case report reviews the electrocardiographic changes seen in patients with chest pain and anterior ST-segment depression."
},
{
"pmid": "32074979",
"title": "Hybrid Network with Attention Mechanism for Detection and Location of Myocardial Infarction Based on 12-Lead Electrocardiogram Signals.",
"abstract": "The electrocardiogram (ECG) is a non-invasive, inexpensive, and effective tool for myocardial infarction (MI) diagnosis. Conventional detection algorithms require solid domain expertise and rely heavily on handcrafted features. Although previous works have studied deep learning methods for extracting features, these methods still neglect the relationships between different leads and the temporal characteristics of ECG signals. To handle the issues, a novel multi-lead attention (MLA) mechanism integrated with convolutional neural network (CNN) and bidirectional gated recurrent unit (BiGRU) framework (MLA-CNN-BiGRU) is therefore proposed to detect and locate MI via 12-lead ECG records. Specifically, the MLA mechanism automatically measures and assigns the weights to different leads according to their contribution. The two-dimensional CNN module exploits the interrelated characteristics between leads and extracts discriminative spatial features. Moreover, the BiGRU module extracts essential temporal features inside each lead. The spatial and temporal features from these two modules are fused together as global features for classification. In experiments, MI location and detection were performed under both intra-patient scheme and inter-patient scheme to test the robustness of the proposed framework. Experimental results indicate that our intelligent framework achieved satisfactory performance and demonstrated vital clinical significance."
},
{
"pmid": "26087076",
"title": "Multiscale Energy and Eigenspace Approach to Detection and Localization of Myocardial Infarction.",
"abstract": "In this paper, a novel technique on a multiscale energy and eigenspace (MEES) approach is proposed for the detection and localization of myocardial infarction (MI) from multilead electrocardiogram (ECG). Wavelet decomposition of multilead ECG signals grossly segments the clinical components at different subbands. In MI, pathological characteristics such as hypercute T-wave, inversion of T-wave, changes in ST elevation, or pathological Q-wave are seen in ECG signals. This pathological information alters the covariance structures of multiscale multivariate matrices at different scales and the corresponding eigenvalues. The clinically relevant components can be captured by eigenvalues. In this study, multiscale wavelet energies and eigenvalues of multiscale covariance matrices are used as diagnostic features. Support vector machines (SVMs) with both linear and radial basis function (RBF) kernel and K-nearest neighbor are used as classifiers. Datasets, which include healthy control, and various types of MI, such as anterior, anteriolateral, anterioseptal, inferior, inferiolateral, and inferioposterio-lateral, from the PTB diagnostic ECG database are used for evaluation. The results show that the proposed technique can successfully detect the MI pathologies. The MEES approach also helps localize different types of MIs. For MI detection, the accuracy, the sensitivity, and the specificity values are 96%, 93%, and 99% respectively. The localization accuracy is 99.58%, using a multiclass SVM classifier with RBF kernel."
},
{
"pmid": "19797259",
"title": "Incidence and predictors of silent myocardial infarction in type 2 diabetes and the effect of fenofibrate: an analysis from the Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) study.",
"abstract": "AIMS\nTo determine the incidence and predictors of, and effects of fenofibrate on silent myocardial infarction (MI) in a large contemporary cohort of patients with type 2 diabetes in the Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) study.\n\n\nMETHODS AND RESULTS\nRoutine electrocardiograms taken throughout the study were assessed by Minnesota-code criteria for the presence of new Q-waves without clinical presentation and analysed with blinding to treatment allocation and clinical outcome. Of all MIs, 36.8% were silent. Being male, older age, longer diabetes duration, prior cardiovascular disease (CVD), neuropathy, higher HbA(1c), albuminuria, high serum creatinine, and insulin use all significantly predicted risk of clinical or silent MI. Fenofibrate reduced MI (clinical or silent) by 19% [hazard ratio (HR) 0.81, 95% confidence interval (CI) 0.69-0.94; P = 0.006], non-fatal clinical MI by 24% (P = 0.01), and silent MI by 16% (P = 0.16). Among those having silent MI, fenofibrate reduced subsequent clinical CVD events by 78% (HR 0.22, 95% CI 0.08-0.65; P = 0.003).\n\n\nCONCLUSION\nSilent and clinical MI have similar risk factors and increase the risk of future CVD events. Fenofibrate reduces the risk of a first MI and substantially reduces the risk of further clinical CVD events after silent MI, supporting its use in type 2 diabetes."
},
{
"pmid": "27251920",
"title": "ST elevation myocardial infarction.",
"abstract": "ST segment elevation myocardial infarction remains a significant contributor to morbidity and mortality worldwide, despite a declining incidence and better survival rates. It usually results from thrombotic occlusion of a coronary artery at the site of a ruptured or eroded plaque. Diagnosis is based on characteristic symptoms and electrocardiogram changes, and confirmed subsequently by raised cardiac enzymes. Prognosis is dependent on the size of the infarct, presence of collaterals and speed with which the occluded artery is reopened. Mechanical reperfusion by primary percutaneous coronary intervention is superior to fibrinolytic therapy if delivered by an experienced team in a timely fashion. Post-reperfusion care includes monitoring for complications, evaluation of left ventricular function, secondary preventive therapy and cardiac rehabilitation."
},
{
"pmid": "19130984",
"title": "Utilization and impact of pre-hospital electrocardiograms for patients with acute ST-segment elevation myocardial infarction: data from the NCDR (National Cardiovascular Data Registry) ACTION (Acute Coronary Treatment and Intervention Outcomes Network) Registry.",
"abstract": "OBJECTIVES\nThis study sought to determine the association of pre-hospital electrocardiograms (ECGs) and the timing of reperfusion therapy for patients with ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nPre-hospital ECGs have been recommended in the management of patients with chest pain transported by emergency medical services (EMS).\n\n\nMETHODS\nWe evaluated patients with STEMI from the NCDR (National Cardiovascular Data Registry) ACTION (Acute Coronary Treatment and Intervention Outcomes Network) registry who were transported by EMS from January 1, 2007, through December 31, 2007. Patients were stratified by the use of pre-hospital ECGs, and timing of reperfusion therapy was compared between the 2 groups.\n\n\nRESULTS\nA total of 7,098 of 12,097 patients (58.7%) utilized EMS, and 1,941 of these 7,098 EMS transport patients (27.4%) received a pre-hospital ECG. Among the EMS transport population, primary percutaneous coronary intervention was performed in 92.1% of patients with a pre-hospital ECG versus 86.3% with an in-hospital ECG, whereas fibrinolytic therapy was used in 4.6% versus 4.2% of patients. Median door-to-needle times for patients receiving fibrinolytic therapy (19 min vs. 29 min, p = 0.003) and median door-to-balloon times for patients undergoing primary percutaneous coronary intervention (61 min vs. 75 min, p < 0.0001) were significantly shorter for patients with a pre-hospital ECG. A suggestive trend for a lower risk of in-hospital mortality was observed with pre-hospital ECG use (adjusted odds ratio: 0.80, 95% confidence interval: 0.63 to 1.01).\n\n\nCONCLUSIONS\nOnly one-quarter of these patients transported by EMS receive a pre-hospital ECG. The use of a pre-hospital ECG was associated with a greater use of reperfusion therapy, faster reperfusion times, and a suggested trend for a lower risk of mortality."
},
{
"pmid": "12729431",
"title": "Competency in interpretation of 12-lead electrocardiograms: a summary and appraisal of published evidence.",
"abstract": "BACKGROUND\nThere have been many proposals for objective standards designed to optimize training, testing, and maintaining competency in interpretation of electrocardiograms (ECGs). However, most of these recommendations are consensus based and are not derived from clinical trials that include patient outcomes.\n\n\nPURPOSE\nTo critically review the available data on training, accuracy, and outcomes of computer and physician interpretation of 12-lead resting ECGs.\n\n\nDATA SOURCES\nEnglish-language articles were retrieved by searching MEDLINE (1966 to 2002), EMBASE (1974 to 2002), and the Cochrane Controlled Trials Register (1975-2002). The references in articles selected for analysis were also reviewed for relevance.\n\n\nSTUDY SELECTION\nAll articles on training, accuracy, and outcomes of ECG interpretations were analyzed.\n\n\nDATA EXTRACTION\nStudy design and results were summarized in evidence tables. Information on physician interpretation compared to a \"gold standard,\" typically a consensus panel of expert electrocardiographers, was extracted. The clinical context of and outcomes related to the ECG interpretation were obtained whenever possible.\n\n\nDATA SYNTHESIS\nPhysicians of all specialties and levels of training, as well as computer software for interpreting ECGs, frequently made errors in interpreting ECGs when compared to expert electrocardiographers. There was also substantial disagreement on interpretations among cardiologists. Adverse patient outcomes occurred infrequently when ECGs were incorrectly interpreted.\n\n\nCONCLUSIONS\nThere is no evidence-based minimum number of ECG interpretations that is ideal for attaining or maintaining competency in ECG interpretation skills. Further research is needed to clarify the optimal way to build and maintain ECG interpretation skills based on patient outcomes."
},
{
"pmid": "33997773",
"title": "Generative Adversarial Networks-Enabled Human-Artificial Intelligence Collaborative Applications for Creative and Design Industries: A Systematic Review of Current Approaches and Trends.",
"abstract": "The future of work and workplace is very much in flux. A vast amount has been written about artificial intelligence (AI) and its impact on work, with much of it focused on automation and its impact in terms of potential job losses. This review will address one area where AI is being added to creative and design practitioners' toolbox to enhance their creativity, productivity, and design horizons. A designer's primary purpose is to create, or generate, the most optimal artifact or prototype, given a set of constraints. We have seen AI encroaching into this space with the advent of generative networks and generative adversarial networks (GANs) in particular. This area has become one of the most active research fields in machine learning over the past number of years, and a number of these techniques, particularly those around plausible image generation, have garnered considerable media attention. We will look beyond automatic techniques and solutions and see how GANs are being incorporated into user pipelines for design practitioners. A systematic review of publications indexed on ScienceDirect, SpringerLink, Web of Science, Scopus, IEEExplore, and ACM DigitalLibrary was conducted from 2015 to 2020. Results are reported according to PRISMA statement. From 317 search results, 34 studies (including two snowball sampled) are reviewed, highlighting key trends in this area. The studies' limitations are presented, particularly a lack of user studies and the prevalence of toy-examples or implementations that are unlikely to scale. Areas for future study are also identified."
},
{
"pmid": "29754806",
"title": "Machine learning in cardiac CT: Basic concepts and contemporary data.",
"abstract": "Propelled by the synergy of the groundbreaking advancements in the ability to analyze high-dimensional datasets and the increasing availability of imaging and clinical data, machine learning (ML) is poised to transform the practice of cardiovascular medicine. Owing to the growing body of literature validating both the diagnostic performance as well as the prognostic implications of anatomic and physiologic findings, coronary computed tomography angiography (CCTA) is now a well-established non-invasive modality for the assessment of cardiovascular disease. ML has been increasingly utilized to optimize performance as well as extract data from CCTA as well as non-contrast enhanced cardiac CT scans. The purpose of this review is to describe the contemporary state of ML based algorithms applied to cardiac CT, as well as to provide clinicians with an understanding of its benefits and associated limitations."
},
{
"pmid": "19859947",
"title": "Classification of brain tumor type and grade using MRI texture and shape in a machine learning scheme.",
"abstract": "The objective of this study is to investigate the use of pattern classification methods for distinguishing different types of brain tumors, such as primary gliomas from metastases, and also for grading of gliomas. The availability of an automated computer analysis tool that is more objective than human readers can potentially lead to more reliable and reproducible brain tumor diagnostic procedures. A computer-assisted classification method combining conventional MRI and perfusion MRI is developed and used for differential diagnosis. The proposed scheme consists of several steps including region-of-interest definition, feature extraction, feature selection, and classification. The extracted features include tumor shape and intensity characteristics, as well as rotation invariant texture features. Feature subset selection is performed using support vector machines with recursive feature elimination. The method was applied on a population of 102 brain tumors histologically diagnosed as metastasis (24), meningiomas (4), gliomas World Health Organization grade II (22), gliomas World Health Organization grade III (18), and glioblastomas (34). The binary support vector machine classification accuracy, sensitivity, and specificity, assessed by leave-one-out cross-validation, were, respectively, 85%, 87%, and 79% for discrimination of metastases from gliomas and 88%, 85%, and 96% for discrimination of high-grade (grades III and IV) from low-grade (grade II) neoplasms. Multiclass classification was also performed via a one-vs-all voting scheme."
},
{
"pmid": "31187399",
"title": "Enabling machine learning in X-ray-based procedures via realistic simulation of image formation.",
"abstract": "PURPOSE\nMachine learning-based approaches now outperform competing methods in most disciplines relevant to diagnostic radiology. Image-guided procedures, however, have not yet benefited substantially from the advent of deep learning, in particular because images for procedural guidance are not archived and thus unavailable for learning, and even if they were available, annotations would be a severe challenge due to the vast amounts of data. In silico simulation of X-ray images from 3D CT is an interesting alternative to using true clinical radiographs since labeling is comparably easy and potentially readily available.\n\n\nMETHODS\nWe extend our framework for fast and realistic simulation of fluoroscopy from high-resolution CT, called DeepDRR, with tool modeling capabilities. The framework is publicly available, open source, and tightly integrated with the software platforms native to deep learning, i.e., Python, PyTorch, and PyCuda. DeepDRR relies on machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, but uses analytic forward projection and noise injection to ensure acceptable computation times. On two X-ray image analysis tasks, namely (1) anatomical landmark detection and (2) segmentation and localization of robot end-effectors, we demonstrate that convolutional neural networks (ConvNets) trained on DeepDRRs generalize well to real data without re-training or domain adaptation. To this end, we use the exact same training protocol to train ConvNets on naïve and DeepDRRs and compare their performance on data of cadaveric specimens acquired using a clinical C-arm X-ray system.\n\n\nRESULTS\nOur findings are consistent across both considered tasks. All ConvNets performed similarly well when evaluated on the respective synthetic testing set. However, when applied to real radiographs of cadaveric anatomy, ConvNets trained on DeepDRRs significantly outperformed ConvNets trained on naïve DRRs ([Formula: see text]).\n\n\nCONCLUSION\nOur findings for both tasks are positive and promising. Combined with complementary approaches, such as image style transfer, the proposed framework for fast and realistic simulation of fluoroscopy from CT contributes to promoting the implementation of machine learning in X-ray-guided procedures. This paradigm shift has the potential to revolutionize intra-operative image analysis to simplify surgical workflows."
},
{
"pmid": "32305938",
"title": "Harnessing Machine Intelligence in Automatic Echocardiogram Analysis: Current Status, Limitations, and Future Directions.",
"abstract": "Echocardiography (echo) is a critical tool in diagnosing various cardiovascular diseases. Despite its diagnostic and prognostic value, interpretation and analysis of echo images are still widely performed manually by echocardiographers. A plethora of algorithms has been proposed to analyze medical ultrasound data using signal processing and machine learning techniques. These algorithms provided opportunities for developing automated echo analysis and interpretation systems. The automated approach can significantly assist in decreasing the variability and burden associated with manual image measurements. In this paper, we review the state-of-the-art automatic methods for analyzing echocardiography data. Particularly, we comprehensively and systematically review existing methods of four major tasks: echo quality assessment, view classification, boundary segmentation, and disease diagnosis. Our review covers three echo imaging modes, which are B-mode, M-mode, and Doppler. We also discuss the challenges and limitations of current methods and outline the most pressing directions for future research. In summary, this review presents the current status of automatic echo analysis and discusses the challenges that need to be addressed to obtain robust systems suitable for efficient use in clinical settings or point-of-care testing."
},
{
"pmid": "29903630",
"title": "Automated diagnosis of arrhythmia using combination of CNN and LSTM techniques with variable length heart beats.",
"abstract": "Arrhythmia is a cardiac conduction disorder characterized by irregular heartbeats. Abnormalities in the conduction system can manifest in the electrocardiographic (ECG) signal. However, it can be challenging and time-consuming to visually assess the ECG signals due to the very low amplitudes. Implementing an automated system in the clinical setting can potentially help expedite diagnosis of arrhythmia, and improve the accuracies. In this paper, we propose an automated system using a combination of convolutional neural network (CNN) and long short-term memory (LSTM) for diagnosis of normal sinus rhythm, left bundle branch block (LBBB), right bundle branch block (RBBB), atrial premature beats (APB) and premature ventricular contraction (PVC) on ECG signals. The novelty of this work is that we used ECG segments of variable length from the MIT-BIT arrhythmia physio bank database. The proposed system demonstrated high classification performance in the handling of variable-length data, achieving an accuracy of 98.10%, sensitivity of 97.50% and specificity of 98.70% using ten-fold cross validation strategy. Our proposed model can aid clinicians to detect common arrhythmias accurately on routine screening ECG."
},
{
"pmid": "10946390",
"title": "Independent component analysis: algorithms and applications.",
"abstract": "A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject."
},
{
"pmid": "29054254",
"title": "ECG based Myocardial Infarction detection using Hybrid Firefly Algorithm.",
"abstract": "BACKGROUND AND OBJECTIVE\nMyocardial Infarction (MI) is one of the most frequent diseases, and can also cause demise, disability and monetary loss in patients who suffer from cardiovascular disorder. Diagnostic methods of this ailment by physicians are typically invasive, even though they do not fulfill the required detection accuracy.\n\n\nMETHODS\nRecent feature extraction methods, for example, Auto Regressive (AR) modelling; Magnitude Squared Coherence (MSC); Wavelet Coherence (WTC) using Physionet database, yielded a collection of huge feature set. A large number of these features may be inconsequential containing some excess and non-discriminative components that present excess burden in computation and loss of execution performance. So Hybrid Firefly and Particle Swarm Optimization (FFPSO) is directly used to optimise the raw ECG signal instead of extracting features using the above feature extraction techniques.\n\n\nRESULTS\nProvided results in this paper show that, for the detection of MI class, the FFPSO algorithm with ANN gives 99.3% accuracy, sensitivity of 99.97%, and specificity of 98.7% on MIT-BIH database by including NSR database also.\n\n\nCONCLUSIONS\nThe proposed approach has shown that methods that are based on the feature optimization of the ECG signals are the perfect to diagnosis the condition of the heart patients."
},
{
"pmid": "33939896",
"title": "Short duration Vectorcardiogram based inferior myocardial infarction detection: class and subject-oriented approach.",
"abstract": "Myocardial infarction (MI) happens when blood stops circulating to an explicit segment of the heart causing harm to the heart muscles. Vectorcardiography (VCG) is a technique of recording direction and magnitude of the signals that are produced by the heart in a 3-lead representation. In this work, we present a technique for detection of MI in the inferior portion of heart using short duration VCG signals. The raw signal was pre-processed using the median and Savitzky-Golay (SG) filter. The Stationary Wavelet Transform (SWT) was used for time-invariant decomposition of the signal followed by feature extraction. The selected features using minimum-redundancy-maximum-relevance (mRMR) based feature selection method were applied to the supervised classification methods. The efficacy of the proposed method was assessed under both class-oriented and a more real-life subject-oriented approach. An accuracy of 99.14 and 89.37% were achieved respectively. Results of the proposed technique are better than existing state-of-art methods and used VCG segment is shorter. Thus, a shorter segment and a high accuracy can be helpful in the automation of timely and reliable detection of MI. The satisfactory performance achieved in the subject-oriented approach shows reliability and applicability of the proposed technique."
},
{
"pmid": "23366483",
"title": "Detection of acute myocardial infarction from serial ECG using multilayer support vector machine.",
"abstract": "Acute Myocardial Infarction (AMI) remains a leading cause of mortality in the United States. Finding accurate and cost effective solutions for AMI diagnosis in Emergency Departments (ED) is vital. Consecutive, or serial, ECGs, taken minutes apart, have the potential to improve detection of AMI in patients presented to ED with symptoms of chest pain. By transforming the ECG into 3 dimensions (3D), computing 3D ECG markers, and processing marker variations, as extracted from serial ECG, more information can be gleaned about cardiac electrical activity. We aimed at improving AMI diagnostic accuracy relative to that of expert cardiologists. We utilized support vector machines in a multilayer network, optimized via a genetic algorithm search. We report a mean sensitivity of 86.82%±4.23% and specificity of 91.05%±2.10% on randomized subsets from a master set of 201 patients. Serial ECG processing using the proposed algorithm shows promise in improving AMI diagnosis in Emergency Department settings."
},
{
"pmid": "20703720",
"title": "Detection and localization of myocardial infarction using K-nearest neighbor classifier.",
"abstract": "This paper presents automatic detection and localization of myocardial infarction (MI) using K-nearest neighbor (KNN) classifier. Time domain features of each beat in the ECG signal such as T wave amplitude, Q wave and ST level deviation, which are indicative of MI, are extracted from 12 leads ECG. Detection of MI aims to classify normal subjects without myocardial infarction and subjects suffering from Myocardial Infarction. For further investigation, Localization of MI is done to specify the region of infarction of the heart. Total 20,160 ECG beats from PTB database available on Physio-bank is used to investigate the performance of extracted features with KNN classifier. In the case of MI detection, sensitivity and specificity of KNN is found to be 99.9% using half of the randomly selected beats as training set and rest of the beats for testing. Moreover, Arif-Fayyaz pruning algorithm is used to prune the data which will reduce the storage requirement and computational cost of search. After pruning, sensitivity and specificity are dropped to 97% and 99.6% respectively but training is reduced by 93%. Myocardial Infarction beats are divided into ten classes based on the location of the infarction along with one class of normal subjects. Sensitivity and Specificity of above 90% is achieved for all eleven classes with overall classification accuracy of 98.8%. Some of the ECG beats are misclassified but interestingly these are misclassified to those classes whose location of infarction is near to the true classes of the ECG beats. Pruning is done on the training set for eleven classes and training set is reduced by 70% and overall classification accuracy of 98.3% is achieved. The proposed method due to its simplicity and high accuracy over the PTB database can be very helpful in correct diagnosis of MI in a practical scenario."
},
{
"pmid": "33167558",
"title": "Computational Diagnostic Techniques for Electrocardiogram Signal Analysis.",
"abstract": "Cardiovascular diseases (CVDs), including asymptomatic myocardial ischemia, angina, myocardial infarction, and ischemic heart failure, are the leading cause of death globally. Early detection and treatment of CVDs significantly contribute to the prevention or delay of cardiovascular death. Electrocardiogram (ECG) records the electrical impulses generated by heart muscles, which reflect regular or irregular beating activity. Computer-aided techniques provide fast and accurate tools to identify CVDs using a patient's ECG signal, which have achieved great success in recent years. Latest computational diagnostic techniques based on ECG signals for estimating CVDs conditions are summarized here. The procedure of ECG signals analysis is discussed in several subsections, including data preprocessing, feature engineering, classification, and application. In particular, the End-to-End models integrate feature extraction and classification into learning algorithms, which not only greatly simplifies the process of data analysis, but also shows excellent accuracy and robustness. Portable devices enable users to monitor their cardiovascular status at any time, bringing new scenarios as well as challenges to the application of ECG algorithms. Computational diagnostic techniques for ECG signal analysis show great potential for helping health care professionals, and their application in daily life benefits both patients and sub-healthy people."
},
{
"pmid": "34306056",
"title": "Prediction of Heart Disease Using a Combination of Machine Learning and Deep Learning.",
"abstract": "The correct prediction of heart disease can prevent life threats, and incorrect prediction can prove to be fatal at the same time. In this paper different machine learning algorithms and deep learning are applied to compare the results and analysis of the UCI Machine Learning Heart Disease dataset. The dataset consists of 14 main attributes used for performing the analysis. Various promising results are achieved and are validated using accuracy and confusion matrix. The dataset consists of some irrelevant features which are handled using Isolation Forest, and data are also normalized for getting better results. And how this study can be combined with some multimedia technology like mobile devices is also discussed. Using deep learning approach, 94.2% accuracy was obtained."
},
{
"pmid": "30990200",
"title": "MFB-CBRNN: A Hybrid Network for MI Detection Using 12-Lead ECGs.",
"abstract": "This paper proposes a novel hybrid network named multiple-feature-branch convolutional bidirectional recurrent neural network (MFB-CBRNN) for myocardial infarction (MI) detection using 12-lead ECGs. The model efficiently combines convolutional neural network-based and recurrent neural network-based structures. Each feature branch consists of several one-dimensional convolutional and pooling layers, corresponding to a certain lead. All the feature branches are independent from each other, which are utilized to learn the diverse features from different leads. Moreover, a bidirectional long short term memory network is employed to summarize all the feature branches. Its good ability of feature aggregation has been proved by the experiments. Furthermore, the paper develops a novel optimization method, lead random mask (LRM), to alleviate overfitting and implement an implicit ensemble like dropout. The model with LRM can achieve a more accurate MI detection. Class-based and subject-based fivefold cross validations are both carried out using Physikalisch-Technische Bundesanstalt diagnostic database. Totally, there are 148 MI and 52 healthy control subjects involved in the experiments. The MFB-CBRNN achieves an overall accuracy of 99.90% in class-based experiments, and an overall accuracy of 93.08% in subject-based experiments. Compared with other related studies, our algorithm achieves a comparable or even better result on MI detection. Therefore, the MFB-CBRNN has a good generalization capacity and is suitable for MI detection using 12-lead ECGs. It has a potential to assist the real-world MI diagnostics and reduce the burden of cardiologists."
},
{
"pmid": "21885134",
"title": "ECG findings in comparison to cardiovascular MR imaging in viral myocarditis.",
"abstract": "OBJECTIVES\nWe sought (1) to assess prevalence and type of ECG abnormalities in patients with biopsy proven myocarditis and signs of myocardial damage indicated by LGE, and (2) to evaluate whether ECG abnormalities are related to the pattern of myocardial damage.\n\n\nBACKGROUND\nPrevalence and type of ECG abnormalities in patients presenting biopsy proven myocarditis, as well as any relation between ECG abnormalities and the in vivo pattern of myocardial damage are unknown.\n\n\nMETHODS\nEighty-four consecutive patients fulfilled the following criteria: (1) newly diagnosed biopsy proven viral myocarditis, and (2) non-ischemic LGE, and (3) standard 12-lead-ECG upon admission.\n\n\nRESULTS\nSixty-five patients with biopsy proven myocarditis had abnormal ECGs upon admission (77%). In this group, ST-abnormalities were detected most frequently (69%), followed by bundle-branch-block in 26%, and Q-waves in 8%. Atrial fibrillation was present in 6%, and AV-Block in two patients. In patients with septal LGE ST-abnormalities were more frequently located in anterolateral leads compared to patients with lateral LGE, in whom ST-abnormalities were most frequently observed in inferolateral leads. Bundle-branch-block occurred more often in patients with septal LGE (11/17). Four of five patients with Q-waves had severe and almost transmural LGE in the lateral wall.\n\n\nCONCLUSION\nECG abnormalities can be found in most patients with biopsy proven viral myocarditis at initial presentation. However, similar to suspected acute myocardial infarction, a normal ECG does not rule out myocarditis. ECG findings are related to the amount and area of damage as indicated by LGE, which confirms the important clinical role of ECG."
},
{
"pmid": "24012034",
"title": "Differences and similarities of repolarization patterns during hospitalization for Takotsubo cardiomyopathy and acute coronary syndrome.",
"abstract": "Takotsubo cardiomyopathy (TC) is characterized by an acute transient left ventricular systolic dysfunction mimicking acute coronary syndrome (ACS) without significant coronary stenosis. The aim of this study was to examine the electrocardiographic repolarization patterns of TC and ACS and to compare them from hospital admission to hospital discharge. Forty-five patients with TC were matched with 45 patients with ACS according to age, gender, and presence or absence of ST elevation at hospital admission. A complete 12-lead electrocardiography was performed within 12 hours after symptoms onset and then repeated after 3, 5, and 7 days. All patients underwent coronary angiography, and patients with ACS also underwent percutaneous revascularization. Within 12 hours from the symptoms' onset, patients with TC had a significantly fewer number of leads with ST elevation and a significantly more number of leads with T-wave inversion. These differences, however, were not present after 72 hours and a similar trend was seen over time during the 7-day follow-up. Patients with TC had a significant longer corrected QT interval at admission and during the whole follow-up. In conclusion, in the electrocardiograms collected 12 hours within symptoms onset, patients with TC and those with ACS showed significant differences in cardiac repolarization. However, the number of leads with either ST-segment deviation or T-wave alterations in patients with TC soon matched the ACS group undergoing percutaneous revascularization. In contrast, corrected QT interval was persistently longer in patients with TC and, despite a similar reduction in length over time in both groups, it was still significantly longer after 7 days."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "23334714",
"title": "A comprehensive survey of wearable and wireless ECG monitoring systems for older adults.",
"abstract": "Wearable health monitoring is an emerging technology for continuous monitoring of vital signs including the electrocardiogram (ECG). This signal is widely adopted to diagnose and assess major health risks and chronic cardiac diseases. This paper focuses on reviewing wearable ECG monitoring systems in the form of wireless, mobile and remote technologies related to older adults. Furthermore, the efficiency, user acceptability, strategies and recommendations on improving current ECG monitoring systems with an overview of the design and modelling are presented. In this paper, over 120 ECG monitoring systems were reviewed and classified into smart wearable, wireless, mobile ECG monitoring systems with related signal processing algorithms. The results of the review suggest that most research in wearable ECG monitoring systems focus on the older adults and this technology has been adopted in aged care facilitates. Moreover, it is shown that how mobile telemedicine systems have evolved and how advances in wearable wireless textile-based systems could ensure better quality of healthcare delivery. The main drawbacks of deployed ECG monitoring systems including imposed limitations on patients, short battery life, lack of user acceptability and medical professional's feedback, and lack of security and privacy of essential data have been also discussed."
},
{
"pmid": "32213969",
"title": "ECG Monitoring Systems: Review, Architecture, Processes, and Key Challenges.",
"abstract": "Health monitoring and its related technologies is an attractive research area. The electrocardiogram (ECG) has always been a popular measurement scheme to assess and diagnose cardiovascular diseases (CVDs). The number of ECG monitoring systems in the literature is expanding exponentially. Hence, it is very hard for researchers and healthcare experts to choose, compare, and evaluate systems that serve their needs and fulfill the monitoring requirements. This accentuates the need for a verified reference guiding the design, classification, and analysis of ECG monitoring systems, serving both researchers and professionals in the field. In this paper, we propose a comprehensive, expert-verified taxonomy of ECG monitoring systems and conduct an extensive, systematic review of the literature. This provides evidence-based support for critically understanding ECG monitoring systems' components, contexts, features, and challenges. Hence, a generic architectural model for ECG monitoring systems is proposed, an extensive analysis of ECG monitoring systems' value chain is conducted, and a thorough review of the relevant literature, classified against the experts' taxonomy, is presented, highlighting challenges and current trends. Finally, we identify key challenges and emphasize the importance of smart monitoring systems that leverage new technologies, including deep learning, artificial intelligence (AI), Big Data and Internet of Things (IoT), to provide efficient, cost-aware, and fully connected monitoring systems."
},
{
"pmid": "29035225",
"title": "A Review of Automated Methods for Detection of Myocardial Ischemia and Infarction Using Electrocardiogram and Electronic Health Records.",
"abstract": "There is a growing body of research focusing on automatic detection of ischemia and myocardial infarction (MI) using computer algorithms. In clinical settings, ischemia and MI are diagnosed using electrocardiogram (ECG) recordings as well as medical context including patient symptoms, medical history, and risk factors-information that is often stored in the electronic health records. The ECG signal is inspected to identify changes in the morphology such as ST-segment deviation and T-wave changes. Some of the proposed methods compute similar features automatically while others use nonconventional features such as wavelet coefficients. This review provides an overview of the methods that have been proposed in this area, focusing on their historical evolution, the publicly available datasets that they have used to evaluate their performance, and the details of their algorithms for ECG and EHR analysis. The validation strategies that have been used to evaluate the performance of the proposed methods are also presented. Finally, the paper provides recommendations for future research to address the shortcomings of the currently existing methods and practical considerations to make the proposed technical solutions applicable in clinical practice."
},
{
"pmid": "34604757",
"title": "Deep learning analysis of resting electrocardiograms for the detection of myocardial dysfunction, hypertrophy, and ischaemia: a systematic review.",
"abstract": "The aim of this review was to assess the evidence for deep learning (DL) analysis of resting electrocardiograms (ECGs) to predict structural cardiac pathologies such as left ventricular (LV) systolic dysfunction, myocardial hypertrophy, and ischaemic heart disease. A systematic literature search was conducted to identify published original articles on end-to-end DL analysis of resting ECG signals for the detection of structural cardiac pathologies. Studies were excluded if the ECG was acquired by ambulatory, stress, intracardiac, or implantable devices, and if the pathology of interest was arrhythmic in nature. After duplicate reviewers screened search results, 12 articles met the inclusion criteria and were included. Three articles used DL to detect LV systolic dysfunction, achieving an area under the curve (AUC) of 0.89-0.93 and an accuracy of 98%. One study used DL to detect LV hypertrophy, achieving an AUC of 0.87 and an accuracy of 87%. Six articles used DL to detect acute myocardial infarction, achieving an AUC of 0.88-1.00 and an accuracy of 83-99.9%. Two articles used DL to detect stable ischaemic heart disease, achieving an accuracy of 95-99.9%. Deep learning models, particularly those that used convolutional neural networks, outperformed rules-based models and other machine learning models. Deep learning is a promising technique to analyse resting ECG signals for the detection of structural cardiac pathologies, which has clinical applicability for more effective screening of asymptomatic populations and expedited diagnostic work-up of symptomatic patients at risk for cardiovascular disease."
},
{
"pmid": "34713056",
"title": "Identifying Heart Failure in ECG Data With Artificial Intelligence-A Meta-Analysis.",
"abstract": "Introduction: Electrocardiography (ECG) is a quick and easily accessible method for diagnosis and screening of cardiovascular diseases including heart failure (HF). Artificial intelligence (AI) can be used for semi-automated ECG analysis. The aim of this evaluation was to provide an overview of AI use in HF detection from ECG signals and to perform a meta-analysis of available studies. Methods and Results: An independent comprehensive search of the PubMed and Google Scholar database was conducted for articles dealing with the ability of AI to predict HF based on ECG signals. Only original articles published in peer-reviewed journals were considered. A total of five reports including 57,027 patients and 579,134 ECG datasets were identified including two sets of patient-level data and three with ECG-based datasets. The AI-processed ECG data yielded areas under the receiver operator characteristics curves between 0.92 and 0.99 to identify HF with higher values in ECG-based datasets. Applying a random-effects model, an sROC of 0.987 was calculated. Using the contingency tables led to diagnostic odds ratios ranging from 3.44 [95% confidence interval (CI) = 3.12-3.76] to 13.61 (95% CI = 13.14-14.08) also with lower values in patient-level datasets. The meta-analysis diagnostic odds ratio was 7.59 (95% CI = 5.85-9.34). Conclusions: The present meta-analysis confirms the ability of AI to predict HF from standard 12-lead ECG signals underlining the potential of such an approach. The observed overestimation of the diagnostic ability in artificial ECG databases compared to patient-level data stipulate the need for robust prospective studies."
},
{
"pmid": "34534279",
"title": "Application of artificial intelligence to the electrocardiogram.",
"abstract": "Artificial intelligence (AI) has given the electrocardiogram (ECG) and clinicians reading them super-human diagnostic abilities. Trained without hard-coded rules by finding often subclinical patterns in huge datasets, AI transforms the ECG, a ubiquitous, non-invasive cardiac test that is integrated into practice workflows, into a screening tool and predictor of cardiac and non-cardiac diseases, often in asymptomatic individuals. This review describes the mathematical background behind supervised AI algorithms, and discusses selected AI ECG cardiac screening algorithms including those for the detection of left ventricular dysfunction, episodic atrial fibrillation from a tracing recorded during normal sinus rhythm, and other structural and valvular diseases. The ability to learn from big data sets, without the need to understand the biological mechanism, has created opportunities for detecting non-cardiac diseases as COVID-19 and introduced challenges with regards to data privacy. Like all medical tests, the AI ECG must be carefully vetted and validated in real-world clinical environments. Finally, with mobile form factors that allow acquisition of medical-grade ECGs from smartphones and wearables, the use of AI may enable massive scalability to democratize healthcare."
},
{
"pmid": "34763807",
"title": "DeepMI: Deep multi-lead ECG fusion for identifying myocardial infarction and its occurrence-time.",
"abstract": "Myocardial Infarction (MI) has the highest mortality of all cardiovascular diseases (CVDs). Detection of MI and information regarding its occurrence-time in particular, would enable timely interventions that may improve patient outcomes, thereby reducing the global rise in CVD deaths. Electrocardiogram (ECG) recordings are currently used to screen MI patients. However, manual inspection of ECGs is time-consuming and prone to subjective bias. Machine learning methods have been adopted for automated ECG diagnosis, but most approaches require extraction of ECG beats or consider leads independently of one another. We propose an end-to-end deep learning approach, DeepMI, to classify MI from Normal cases as well as identifying the time-occurrence of MI (defined as Acute, Recent and Old), using a collection of fusion strategies on 12 ECG leads at data-, feature-, and decision-level. In order to minimise computational overhead, we employ transfer learning using existing computer vision networks. Moreover, we use recurrent neural networks to encode the longitudinal information inherent in ECGs. We validated DeepMI on a dataset collected from 17,381 patients, in which over 323,000 samples were extracted per ECG lead. We were able to classify Normal cases as well as Acute, Recent and Old onset cases of MI, with AUROCs of 96.7%, 82.9%, 68.6% and 73.8%, respectively. We have demonstrated a multi-lead fusion approach to detect the presence and occurrence-time of MI. Our end-to-end framework provides flexibility for different levels of multi-lead ECG fusion and performs feature extraction via transfer learning."
},
{
"pmid": "12691437",
"title": "Long-term ST database: a reference for the development and evaluation of automated ischaemia detectors and for the study of the dynamics of myocardial ischaemia.",
"abstract": "The long-term ST database is the result of a multinational research effort. The goal was to develop a challenging and realistic research resource for development and evaluation of automated systems to detect transient ST segment changes in electrocardiograms and for supporting basic research into the mechanisms and dynamics of transient myocardial ischaemia. Twenty-four hour ambulatory ECG records were selected from routine clinical practice settings in the USA and Europe, between 1994 and 2000, on the basis of occurrence of ischaemic and non-ischaemic ST segment changes. Human expert annotators used newly developed annotation protocols and a specially developed interactive graphic editor tool (SEMIA) that supported paperless editing of annotations and facilitated international co-operation via the Internet. The database contains 86 two- and three-channel 24 h annotated ambulatory records from 80 patients and is stored on DVD-ROMs. The database annotation files contain ST segment annotations of transient ischaemic (1155) and heart-rate related ST episodes and annotations of non-ischaemic ST segment events related to postural changes and conduction abnormalities. The database is intended to complement the European Society of Cardiology ST-T database and the MIT-BIH and AHA arrhythmia databases. It provides a comprehensive representation of 'real-world' data, with numerous examples of transient ischaemic and non-ischaemic ST segment changes, arrhythmias, conduction abnormalities, axis shifts, noise and artifacts."
},
{
"pmid": "32451379",
"title": "PTB-XL, a large publicly available electrocardiography dataset.",
"abstract": "Electrocardiography (ECG) is a key non-invasive diagnostic tool for cardiovascular diseases which is increasingly supported by algorithms based on machine learning. Major obstacles for the development of automatic ECG interpretation algorithms are both the lack of public datasets and well-defined benchmarking procedures to allow comparison s of different algorithms. To address these issues, we put forward PTB-XL, the to-date largest freely accessible clinical 12-lead ECG-waveform dataset comprising 21837 records from 18885 patients of 10 seconds length. The ECG-waveform data was annotated by up to two cardiologists as a multi-label dataset, where diagnostic labels were further aggregated into super and subclasses. The dataset covers a broad range of diagnostic classes including, in particular, a large fraction of healthy records. The combination with additional metadata on demographics, additional diagnostic statements, diagnosis likelihoods, manually annotated signal properties as well as suggested folds for splitting training and test sets turns the dataset into a rich resource for the development and the evaluation of automatic ECG interpretation algorithms."
},
{
"pmid": "1396824",
"title": "The European ST-T database: standard for evaluating systems for the analysis of ST-T changes in ambulatory electrocardiography.",
"abstract": "The project for the development of the European ST-T annotated Database originated from a 'Concerted Action' on Ambulatory Monitoring, set up by the European Community in 1985. The goal was to prototype an ECG database for assessing the quality of ambulatory ECG monitoring (AECG) systems. After the 'concerted action', the development of the full database was coordinated by the Institute of Clinical Physiology of the National Research Council (CNR) in Pisa and the Thoraxcenter of Erasmus University in Rotterdam. Thirteen research groups from eight countries provided AECG tapes and annotated beat by beat the selected 2-channel records, each 2 h in duration. ST segment (ST) and T-wave (T) changes were identified and their onset, offset and peak beats annotated in addition to QRSs, beat types, rhythm and signal quality changes. In 1989, the European Society of Cardiology sponsored the remainder of the project. Recently the 90 records were completed and stored on CD-ROM. The records include 372 ST and 423 T changes. In cooperation with the Biomedical Engineering Centre of MIT (developers of the MIT-BIH arrhythmia database), the annotation scheme was revised to be consistent with both MIT-BIH and American Heart Association formats."
},
{
"pmid": "28437484",
"title": "ECG-ViEW II, a freely accessible electrocardiogram database.",
"abstract": "The Electrocardiogram Vigilance with Electronic data Warehouse II (ECG-ViEW II) is a large, single-center database comprising numeric parameter data of the surface electrocardiograms of all patients who underwent testing from 1 June 1994 to 31 July 2013. The electrocardiographic data include the test date, clinical department, RR interval, PR interval, QRS duration, QT interval, QTc interval, P axis, QRS axis, and T axis. These data are connected with patient age, sex, ethnicity, comorbidities, age-adjusted Charlson comorbidity index, prescribed drugs, and electrolyte levels. This longitudinal observational database contains 979,273 electrocardiograms from 461,178 patients over a 19-year study period. This database can provide an opportunity to study electrocardiographic changes caused by medications, disease, or other demographic variables. ECG-ViEW II is freely available at http://www.ecgview.org."
},
{
"pmid": "34262951",
"title": "Acute Myocardial Infarction Detection Using Deep Learning-Enabled Electrocardiograms.",
"abstract": "Background: Acute myocardial infarction (AMI) is associated with a poor prognosis. Therefore, accurate diagnosis and early intervention of the culprit lesion are of extreme importance. Therefore, we developed a neural network algorithm in this study to automatically diagnose AMI from 12-lead electrocardiograms (ECGs). Methods: We used the open-source PTB-XL database as the training and validation sets, with a 7:3 sample size ratio. Twenty-One thousand, eight hundred thirty-seven clinical 12-lead ECGs from the PTB-XL dataset were available for training and validation (15,285 were used in the training set and 6,552 in the validation set). Additionally, we randomly selected 205 ECGs from a dataset built by Chapman University, CA, USA and Shaoxing People's Hospital, China, as the testing set. We used a residual network for training and validation. The model performance was experimentally verified in terms of area under the curve (AUC), precision, sensitivity, specificity, and F1 score. Results: The AUC of the training, validation, and testing sets were 0.964 [95% confidence interval (CI): 0.961-0.966], 0.944 (95% CI: 0.939-0.949), and 0.977 (95% CI: 0.961-0.991), respectively. The precision, sensitivity, specificity, and F1 score of the deep learning model for AMI diagnosis from ECGs were 0.827, 0.824, 0.950, and 0.825, respectively, in the training set, 0.789, 0.818, 0.913, and 0.803, respectively, in the validation set, and 0.830, 0.951, 0.951, and 0.886, respectively, in the testing set. The AUC for automatic AMI location diagnosis of LMI, IMI, ASMI, AMI, ALMI were 0.969 (95% CI: 0.959-0.979), 0.973 (95% CI: 0.962-0.978), 0.987 (95% CI: 0.963-0.989), 0.961 (95% CI: 0.956-0.989), and 0.996 (95% CI: 0.957-0.997), respectively. Conclusions: The residual network-based algorithm can effectively automatically diagnose AMI and MI location from 12-lead ECGs."
},
{
"pmid": "33235279",
"title": "Artificial intelligence algorithm for detecting myocardial infarction using six-lead electrocardiography.",
"abstract": "Rapid diagnosis of myocardial infarction (MI) using electrocardiography (ECG) is the cornerstone of effective treatment and prevention of mortality; however, conventional interpretation methods has low reliability for detecting MI and is difficulty to apply to limb 6-lead ECG based life type or wearable devices. We developed and validated a deep learning-based artificial intelligence algorithm (DLA) for detecting MI using 6-lead ECG. A total of 412,461 ECGs were used to develop a variational autoencoder (VAE) that reconstructed precordial 6-lead ECG using limb 6-lead ECG. Data from 9536, 1301, and 1768 ECGs of adult patients who underwent coronary angiography within 24 h from each ECG were used for development, internal and external validation, respectively. During internal and external validation, the area under the receiver operating characteristic curves of the DLA with VAE using a 6-lead ECG were 0.880 and 0.854, respectively, and the performances were preserved by the territory of the coronary lesion. Our DLA successfully detected MI using a 12-lead ECG or a 6-lead ECG. The results indicate that MI could be detected not only with a conventional 12 lead ECG but also with a life type 6-lead ECG device that employs our DLA."
},
{
"pmid": "33006947",
"title": "Detecting myocardial scar using electrocardiogram data and deep neural networks.",
"abstract": "Ischaemic heart disease is among the most frequent causes of death. Early detection of myocardial pathologies can increase the benefit of therapy and reduce the number of lethal cases. Presence of myocardial scar is an indicator for developing ischaemic heart disease and can be detected with high diagnostic precision by magnetic resonance imaging. However, magnetic resonance imaging scanners are expensive and of limited availability. It is known that presence of myocardial scar has an impact on the well-established, reasonably low cost, and almost ubiquitously available electrocardiogram. However, this impact is non-specific and often hard to detect by a physician. We present an artificial intelligence based approach - namely a deep learning model - for the prediction of myocardial scar based on an electrocardiogram and additional clinical parameters. The model was trained and evaluated by applying 6-fold cross-validation to a dataset of 12-lead electrocardiogram time series together with clinical parameters. The proposed model for predicting the presence of scar tissue achieved an area under the curve score, sensitivity, specificity, and accuracy of 0.89, 70.0, 84.3, and 78.0%, respectively. This promisingly high diagnostic precision of our electrocardiogram-based deep learning models for myocardial scar detection may support a novel, comprehensible screening method."
},
{
"pmid": "34034715",
"title": "Classification of COVID-19 electrocardiograms by using hexaxial feature mapping and deep learning.",
"abstract": "BACKGROUND\nCoronavirus disease 2019 (COVID-19) has become a pandemic since its first appearance in late 2019. Deaths caused by COVID-19 are still increasing day by day and early diagnosis has become crucial. Since current diagnostic methods have many disadvantages, new investigations are needed to improve the performance of diagnosis.\n\n\nMETHODS\nA novel method is proposed to automatically diagnose COVID-19 by using Electrocardiogram (ECG) data with deep learning for the first time. Moreover, a new and effective method called hexaxial feature mapping is proposed to represent 12-lead ECG to 2D colorful images. Gray-Level Co-Occurrence Matrix (GLCM) method is used to extract features and generate hexaxial mapping images. These generated images are then fed into a new Convolutional Neural Network (CNN) architecture to diagnose COVID-19.\n\n\nRESULTS\nTwo different classification scenarios are conducted on a publicly available paper-based ECG image dataset to reveal the diagnostic capability and performance of the proposed approach. In the first scenario, ECG data labeled as COVID-19 and No-Findings (normal) are classified to evaluate COVID-19 classification ability. According to results, the proposed approach provides encouraging COVID-19 detection performance with an accuracy of 96.20% and F1-Score of 96.30%. In the second scenario, ECG data labeled as Negative (normal, abnormal, and myocardial infarction) and Positive (COVID-19) are classified to evaluate COVID-19 diagnostic ability. The experimental results demonstrated that the proposed approach provides satisfactory COVID-19 prediction performance with an accuracy of 93.00% and F1-Score of 93.20%. Furthermore, different experimental studies are conducted to evaluate the robustness of the proposed approach.\n\n\nCONCLUSION\nAutomatic detection of cardiovascular changes caused by COVID-19 can be possible with a deep learning framework through ECG data. This not only proves the presence of cardiovascular changes caused by COVID-19 but also reveals that ECG can potentially be used in the diagnosis of COVID-19. We believe the proposed study may provide a crucial decision-making system for healthcare professionals.\n\n\nSOURCE CODE\nAll source codes are made publicly available at: https://github.com/mkfzdmr/COVID-19-ECG-Classification."
},
{
"pmid": "17669389",
"title": "ECG signal denoising and baseline wander correction based on the empirical mode decomposition.",
"abstract": "The electrocardiogram (ECG) is widely used for diagnosis of heart diseases. Good quality ECG are utilized by physicians for interpretation and identification of physiological and pathological phenomena. However, in real situations, ECG recordings are often corrupted by artifacts. Two dominant artifacts present in ECG recordings are: (1) high-frequency noise caused by electromyogram induced noise, power line interferences, or mechanical forces acting on the electrodes; (2) baseline wander (BW) that may be due to respiration or the motion of the patients or the instruments. These artifacts severely limit the utility of recorded ECGs and thus need to be removed for better clinical evaluation. Several methods have been developed for ECG enhancement. In this paper, we propose a new ECG enhancement method based on the recently developed empirical mode decomposition (EMD). The proposed EMD-based method is able to remove both high-frequency noise and BW with minimum signal distortion. The method is validated through experiments on the MIT-BIH databases. Both quantitative and qualitative results are given. The simulations show that the proposed EMD-based method provides very good results for denoising and BW removal."
},
{
"pmid": "33770545",
"title": "Convolutional neural network based automatic screening tool for cardiovascular diseases using different intervals of ECG signals.",
"abstract": "BACKGROUND AND OBJECTIVE\nAutomatic screening tools can be applied to detect cardiovascular diseases (CVDs), which are the leading cause of death worldwide. As an effective and non-invasive method, electrocardiogram (ECG) based approaches are widely used to identify CVDs. Hence, this paper proposes a deep convolutional neural network (CNN) to classify five CVDs using standard 12-lead ECG signals.\n\n\nMETHODS\nThe Physiobank (PTB) ECG database is used in this study. Firstly, ECG signals are segmented into different intervals (one-second, two-seconds and three-seconds), without any wave detection, and three datasets are obtained. Secondly, as an alternative to any complex preprocessing, durations of raw ECG signals have been considered as input with simple min-max normalization. Lastly, a ten-fold cross-validation method is employed for one-second ECG signals and also tested on other two datasets (two-seconds and three-seconds).\n\n\nRESULTS\nComparing to the competing approaches, the proposed CNN acquires the highest performance, having an accuracy, sensitivity, and specificity of 99.59%, 99.04%, and 99.87%, respectively, with one-second ECG signals. The overall accuracy, sensitivity, and specificity obtained are 99.80%, 99.48%, and 99.93%, respectively, using two-seconds of signals with pre-trained proposed models. The accuracy, sensitivity, and specificity of segmented ECG tested by three-seconds signals are 99.84%, 99.52%, and 99.95%, respectively.\n\n\nCONCLUSION\nThe results of this study indicate that the proposed system accomplishes high performance and keeps the characterizations in brief with flexibility at the same time, which means that it has the potential for implementation in a practical, real-time medical environment."
},
{
"pmid": "28481991",
"title": "Deep learning for healthcare: review, opportunities and challenges.",
"abstract": "Gaining knowledge and actionable insights from complex, high-dimensional and heterogeneous biomedical data remains a key challenge in transforming health care. Various types of data have been emerging in modern biomedical research, including electronic health records, imaging, -omics, sensor data and text, which are complex, heterogeneous, poorly annotated and generally unstructured. Traditional data mining and statistical learning approaches typically need to first perform feature engineering to obtain effective and more robust features from those data, and then build prediction or clustering models on top of them. There are lots of challenges on both steps in a scenario of complicated data and lacking of sufficient domain knowledge. The latest advances in deep learning technologies provide new effective paradigms to obtain end-to-end learning models from complex data. In this article, we review the recent literature on applying deep learning technologies to advance the health care domain. Based on the analyzed work, we suggest that deep learning approaches could be the vehicle for translating big biomedical data into improved human health. However, we also note limitations and needs for improved methods development and applications, especially in terms of ease-of-understanding for domain experts and citizen scientists. We discuss such challenges and suggest developing holistic and meaningful interpretable architectures to bridge deep learning models and human interpretability."
},
{
"pmid": "29888083",
"title": "A Deep Learning Approach to Examine Ischemic ST Changes in Ambulatory ECG Recordings.",
"abstract": "Patients with suspected acute coronary syndrome (ACS) are at risk of transient myocardial ischemia (TMI), which could lead to serious morbidity or even mortality. Early detection of myocardial ischemia can reduce damage to heart tissues and improve patient condition. Significant ST change in the electrocardiogram (ECG) is an important marker for detecting myocardial ischemia during the rule-out phase of potential ACS. However, current ECG monitoring software is vastly underused due to excessive false alarms. The present study aims to tackle this problem by combining a novel image-based approach with deep learning techniques to improve the detection accuracy of significant ST depression change. The obtained convolutional neural network (CNN) model yields an average area under the curve (AUC) at 89.6% from an independent testing set. At selected optimal cutoff thresholds, the proposed model yields a mean sensitivity at 84.4% while maintaining specificity at 84.9%."
},
{
"pmid": "32074979",
"title": "Hybrid Network with Attention Mechanism for Detection and Location of Myocardial Infarction Based on 12-Lead Electrocardiogram Signals.",
"abstract": "The electrocardiogram (ECG) is a non-invasive, inexpensive, and effective tool for myocardial infarction (MI) diagnosis. Conventional detection algorithms require solid domain expertise and rely heavily on handcrafted features. Although previous works have studied deep learning methods for extracting features, these methods still neglect the relationships between different leads and the temporal characteristics of ECG signals. To handle the issues, a novel multi-lead attention (MLA) mechanism integrated with convolutional neural network (CNN) and bidirectional gated recurrent unit (BiGRU) framework (MLA-CNN-BiGRU) is therefore proposed to detect and locate MI via 12-lead ECG records. Specifically, the MLA mechanism automatically measures and assigns the weights to different leads according to their contribution. The two-dimensional CNN module exploits the interrelated characteristics between leads and extracts discriminative spatial features. Moreover, the BiGRU module extracts essential temporal features inside each lead. The spatial and temporal features from these two modules are fused together as global features for classification. In experiments, MI location and detection were performed under both intra-patient scheme and inter-patient scheme to test the robustness of the proposed framework. Experimental results indicate that our intelligent framework achieved satisfactory performance and demonstrated vital clinical significance."
},
{
"pmid": "31891901",
"title": "Multi-branch fusion network for Myocardial infarction screening from 12-lead ECG images.",
"abstract": "BACKGROUND AND OBJECTIVE\nMyocardial infarction (MI) is a myocardial anoxic incapacitation caused by severe cardiovascular obstruction that can cause irreversible injury or even death. In medical field, the electrocardiogram (ECG) is a common and effective way to diagnose myocardial infarction, which often requires a wealth of medical knowledge. It is necessary to develop an approach that can detect the MI automatically.\n\n\nMETHODS\nIn this paper, we propose a multi-branch fusion framework for automatic MI screening from 12-lead ECG images, which consists of multi-branch network, feature fusion and classification network. First, we use text detection and position alignment to automatically separate twelve leads from ECG images. Then, those 12 leads are input into the multi-branch network constructed by a shallow neural network to get 12 feature maps. After concatenating those feature maps by depth fusion, classification is explored to judge the given ECG is MI or not.\n\n\nRESULTS\nBased on extensive experiments on an ECG image dataset, performances of different combinations of structures are analyzed. The proposed network is compared with other networks and also compared with physicians in the practical use. All the experiments verify that the proposed method is effective for MI screening based on ECG images, which achieves accuracy, sensitivity, specificity and F1-score of 94.73%, 96.41%, 95.94% and 93.79% respectively.\n\n\nCONCLUSIONS\nRather than using the typical one-dimensional electrical ECG signal, this paper gives an effective model to screen MI by analyzing 12-lead ECG images. Extracting and analyzing these 12 leads from their corresponding ECG images is a good attempt in the application of MI screening."
},
{
"pmid": "32439873",
"title": "Performance of a convolutional neural network derived from an ECG database in recognizing myocardial infarction.",
"abstract": "Artificial intelligence (AI) is developing rapidly in the medical technology field, particularly in image analysis. ECG-diagnosis is an image analysis in the sense that cardiologists assess the waveforms presented in a 2-dimensional image. We hypothesized that an AI using a convolutional neural network (CNN) may also recognize ECG images and patterns accurately. We used the PTB ECG database consisting of 289 ECGs including 148 myocardial infarction (MI) cases to develop a CNN to recognize MI in ECG. Our CNN model, equipped with 6-layer architecture, was trained with training-set ECGs. After that, our CNN and 10 physicians are tested with test-set ECGs and compared their MI recognition capability in metrics F1 (harmonic mean of precision and recall) and accuracy. The F1 and accuracy by our CNN were significantly higher (83 ± 4%, 81 ± 4%) as compared to physicians (70 ± 7%, 67 ± 7%, P < 0.0001, respectively). Furthermore, elimination of Goldberger-leads or ECG image compression up to quarter resolution did not significantly decrease the recognition capability. Deep learning with a simple CNN for image analysis may achieve a comparable capability to physicians in recognizing MI on ECG. Further investigation is warranted for the use of AI in ECG image assessment."
},
{
"pmid": "32847070",
"title": "A Robust Multilevel DWT Densely Network for Cardiovascular Disease Classification.",
"abstract": "Cardiovascular disease is the leading cause of death worldwide. Immediate and accurate diagnoses of cardiovascular disease are essential for saving lives. Although most of the previously reported works have tried to classify heartbeats accurately based on the intra-patient paradigm, they suffer from category imbalance issues since abnormal heartbeats appear much less regularly than normal heartbeats. Furthermore, most existing methods rely on data preprocessing steps, such as noise removal and R-peak location. In this study, we present a robust classification system using a multilevel discrete wavelet transform densely network (MDD-Net) for the accurate detection of normal, coronary artery disease (CAD), myocardial infarction (MI) and congestive heart failure (CHF). First, the raw ECG signals from different databases are divided into same-size segments using an original adaptive sample frequency segmentation algorithm (ASFS). Then, the fusion features are extracted from the MDD-Net to achieve great classification performance. We evaluated the proposed method considering the intra-patient and inter-patient paradigms. The average accuracy, positive predictive value, sensitivity and specificity were 99.74%, 99.09%, 98.67% and 99.83%, respectively, under the intra-patient paradigm, and 96.92%, 92.17%, 89.18% and 97.77%, respectively, under the inter-patient paradigm. Moreover, the experimental results demonstrate that our model is robust to noise and class imbalance issues."
},
{
"pmid": "33019030",
"title": "Energy-efficient Real-time Myocardial Infarction Detection on Wearable Devices.",
"abstract": "Myocardial Infarction (MI) is a fatal heart disease that is a leading cause of death. The silent and recurrent nature of MI requires real-time monitoring on a daily basis through wearable devices. Real-time MI detection on wearable devices requires a fast and energy-efficient solution to enable long term monitoring. In this paper, we propose an MI detection methodology using Binary Convolutional Neural Network (BCNN) that is fast, energy-efficient and outperforms the state-of-the- art work on wearable devices. We validate the performance of our methodology on the well known PTB diagnostic ECG database from PhysioNet. Evaluation on real hardware shows that our BCNN is faster and achieves up to 12x energy efficiency compared to the state-of-the-art work."
},
{
"pmid": "33019341",
"title": "A Spectral-longitudinal Model for Detection of Heart Attack from 12-lead Electrocardiogram Waveforms.",
"abstract": "Cardiovascular diseases (CVDs) remain responsible for millions of deaths annually. Myocardial infarction (MI) is the most prevalent condition among CVDs. Although datadriven approaches have been applied to predict CVDs from ECG signals, comparatively little work has been done on the use of multiple-lead ECG traces and their efficient integration to diagnose CVDs. In this paper, we propose an end-to-end trainable and joint spectral-longitudinal model to predict heart attack using data-level fusion of multiple ECG leads. The spectral stage transforms the time-series waveforms to stacked spectrograms and encodes the frequency-time characteristics, whilst the longitudinal model helps to utilise the temporal dependency that exists in these waveforms using recurrent networks. We validate the proposed approach using a public MI dataset. Our results show that the proposed spectrallongitudinal model achieves the highest performance compared to the baseline methods."
},
{
"pmid": "33991857",
"title": "Automated detection of coronary artery disease, myocardial infarction and congestive heart failure using GaborCNN model with ECG signals.",
"abstract": "Cardiovascular diseases (CVDs) are main causes of death globally with coronary artery disease (CAD) being the most important. Timely diagnosis and treatment of CAD is crucial to reduce the incidence of CAD complications like myocardial infarction (MI) and ischemia-induced congestive heart failure (CHF). Electrocardiogram (ECG) signals are most commonly employed as the diagnostic screening tool to detect CAD. In this study, an automated system (AS) was developed for the automated categorization of electrocardiogram signals into normal, CAD, myocardial infarction (MI) and congestive heart failure (CHF) classes using convolutional neural network (CNN) and unique GaborCNN models. Weight balancing was used to balance the imbalanced dataset. High classification accuracies of more than 98.5% were obtained by the CNN and GaborCNN models respectively, for the 4-class classification of normal, coronary artery disease, myocardial infarction and congestive heart failure classes. GaborCNN is a more preferred model due to its good performance and reduced computational complexity as compared to the CNN model. To the best of our knowledge, this is the first study to propose GaborCNN model for automated categorizing of normal, coronary artery disease, myocardial infarction and congestive heart failure classes using ECG signals. Our proposed system is equipped to be validated with bigger database and has the potential to aid the clinicians to screen for CVDs using ECG signals."
},
{
"pmid": "33803265",
"title": "Detection of Myocardial Infarction Using ECG and Multi-Scale Feature Concatenate.",
"abstract": "Diverse computer-aided diagnosis systems based on convolutional neural networks were applied to automate the detection of myocardial infarction (MI) found in electrocardiogram (ECG) for early diagnosis and prevention. However, issues, particularly overfitting and underfitting, were not being taken into account. In other words, it is unclear whether the network structure is too simple or complex. Toward this end, the proposed models were developed by starting with the simplest structure: a multi-lead features-concatenate narrow network (N-Net) in which only two convolutional layers were included in each lead branch. Additionally, multi-scale features-concatenate networks (MSN-Net) were also implemented where larger features were being extracted through pooling the signals. The best structure was obtained via tuning both the number of filters in the convolutional layers and the number of inputting signal scales. As a result, the N-Net reached a 95.76% accuracy in the MI detection task, whereas the MSN-Net reached an accuracy of 61.82% in the MI locating task. Both networks give a higher average accuracy and a significant difference of p < 0.001 evaluated by the U test compared with the state-of-the-art. The models are also smaller in size thus are suitable to fit in wearable devices for offline monitoring. In conclusion, testing throughout the simple and complex network structure is indispensable. However, the way of dealing with the class imbalance problem and the quality of the extracted features are yet to be discussed."
},
{
"pmid": "33743488",
"title": "Localization of myocardial infarction with multi-lead ECG based on DenseNet.",
"abstract": "BACKGROUND AND OBJECTIVE\nMyocardial infarction (MI) is a critical acute ischemic heart disease, which can be early diagnosed by electrocardiogram (ECG). However, the most research of MI localization pay more attention on the specific changes in every ECG lead independent. In our study, the research envisages the development of a novel multi-lead MI localization approach based on the densely connected convolutional network (DenseNet).\n\n\nMETHODS\nConsidering the correlation of the multi-lead ECG, the method using parallel 12-lead ECG, systematically exploited the correlation of the inter-lead signals. In addition, the dense connection of DenseNet enhanced the reuse of the feature information between the inter-lead and intra-lead signals. The proposed method automatically captured the effective pathological features, which improved the identification of MI.\n\n\nRESULTS\nThe experimental results based on PTB diagnostic ECG database showed that the accuracy, sensitivity and specificity of the proposed method was 99.87%, 99.84% and 99.98% for 11 types of MI localization.\n\n\nCONCLUSIONS\nThe proposed method has achieved superior results compared to other localization methods, which can be introduced into the clinical practice to assist the diagnosis of MI."
},
{
"pmid": "29990164",
"title": "Real-Time Multilead Convolutional Neural Network for Myocardial Infarction Detection.",
"abstract": "In this paper, a novel algorithm based on a convolutional neural network (CNN) is proposed for myocardial infarction detection via multilead electrocardiogram (ECG). A beat segmentation algorithm utilizing multilead ECG is designed to obtain multilead beats, and fuzzy information granulation is adopted for preprocessing. Then, the beats are input into our multilead-CNN (ML-CNN), a novel model that includes sub two-dimensional (2-D) convolutional layers and lead asymmetric pooling (LAP) layers. As different leads represent various angles of the same heart, LAP can capture multiscale features of different leads, exploiting the individual characteristics of each lead. In addition, sub 2-D convolution can utilize the holistic characters of all the leads. It uses 1-D kernels shared among the different leads to generate local optimal features. These strategies make the ML-CNN suitable for multilead ECG processing. To evaluate our algorithm, actual ECG datasets from the PTB diagnostic database are used. The sensitivity of our algorithm is 95.40%, the specificity is 97.37%, and the accuracy is 96.00% in the experiments. Targeting lightweight mobile healthcare applications, real-time analyses are performed on both MATLAB and ARM Cortex-A9 platforms. The average processing times for each heartbeat are approximately 17.10 and 26.75 ms, respectively, which indicate that this method has good potential for mobile healthcare applications."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "33664502",
"title": "Smart wearable devices in cardiovascular care: where we are and how to move forward.",
"abstract": "Technological innovations reach deeply into our daily lives and an emerging trend supports the use of commercial smart wearable devices to manage health. In the era of remote, decentralized and increasingly personalized patient care, catalysed by the COVID-19 pandemic, the cardiovascular community must familiarize itself with the wearable technologies on the market and their wide range of clinical applications. In this Review, we highlight the basic engineering principles of common wearable sensors and where they can be error-prone. We also examine the role of these devices in the remote screening and diagnosis of common cardiovascular diseases, such as arrhythmias, and in the management of patients with established cardiovascular conditions, for example, heart failure. To date, challenges such as device accuracy, clinical validity, a lack of standardized regulatory policies and concerns for patient privacy are still hindering the widespread adoption of smart wearable technologies in clinical practice. We present several recommendations to navigate these challenges and propose a simple and practical 'ABCD' guide for clinicians, personalized to their specific practice needs, to accelerate the integration of these devices into the clinical workflow for optimal patient care."
},
{
"pmid": "31043065",
"title": "Corrie Health Digital Platform for Self-Management in Secondary Prevention After Acute Myocardial Infarction.",
"abstract": "BACKGROUND\nUnplanned readmissions after hospitalization for acute myocardial infarction are among the leading causes of preventable morbidity, mortality, and healthcare costs. Digital health interventions could be an effective tool in promoting self-management, adherence to guideline-directed therapy, and cardiovascular risk reduction. A digital health intervention developed at Johns Hopkins-the Corrie Health Digital Platform (Corrie)-includes the first cardiology Apple CareKit smartphone application, which is paired with an Apple Watch and iHealth Bluetooth-enabled blood pressure cuff. Corrie targets: (1) self-management of cardiac medications, (2) self-tracking of vital signs, (3) education about cardiovascular disease through articles and animated videos, and (4) care coordination that includes outpatient follow-up appointments.\n\n\nMETHODS AND RESULTS\nThe 3 phases of the MiCORE study (Myocardial infarction, Combined-device, Recovery Enhancement) include (1) the development of Corrie, (2) a pilot study to assess the usability and feasibility of Corrie, and (3) a prospective research study to primarily compare time to first readmission within 30 days postdischarge among patients with Corrie to patients in the historical standard of care comparison group. In Phase 2, the feasibility of deploying Corrie in an acute care setting was established among a sample of 60 patients with acute myocardial infarction. Phase 3 is ongoing and patients from 4 hospitals are being enrolled as early as possible during their hospital stay if they are 18 years or older, admitted with acute myocardial infarction (ST-segment-elevation myocardial infarction or type I non-ST-segment-elevation myocardial infarction), and own a smartphone. Patients are either being enrolled with their own personal devices or they are provided an iPhone and/or Apple Watch for the duration of the study. Phase 3 started in October 2017 and we aim to recruit 140 participants.\n\n\nCONCLUSIONS\nThis article will provide an in-depth understanding of the feasibility associated with implementing a digital health intervention in an acute care setting and the potential of Corrie as a self-management tool for acute myocardial infarction recovery."
},
{
"pmid": "30990200",
"title": "MFB-CBRNN: A Hybrid Network for MI Detection Using 12-Lead ECGs.",
"abstract": "This paper proposes a novel hybrid network named multiple-feature-branch convolutional bidirectional recurrent neural network (MFB-CBRNN) for myocardial infarction (MI) detection using 12-lead ECGs. The model efficiently combines convolutional neural network-based and recurrent neural network-based structures. Each feature branch consists of several one-dimensional convolutional and pooling layers, corresponding to a certain lead. All the feature branches are independent from each other, which are utilized to learn the diverse features from different leads. Moreover, a bidirectional long short term memory network is employed to summarize all the feature branches. Its good ability of feature aggregation has been proved by the experiments. Furthermore, the paper develops a novel optimization method, lead random mask (LRM), to alleviate overfitting and implement an implicit ensemble like dropout. The model with LRM can achieve a more accurate MI detection. Class-based and subject-based fivefold cross validations are both carried out using Physikalisch-Technische Bundesanstalt diagnostic database. Totally, there are 148 MI and 52 healthy control subjects involved in the experiments. The MFB-CBRNN achieves an overall accuracy of 99.90% in class-based experiments, and an overall accuracy of 93.08% in subject-based experiments. Compared with other related studies, our algorithm achieves a comparable or even better result on MI detection. Therefore, the MFB-CBRNN has a good generalization capacity and is suitable for MI detection using 12-lead ECGs. It has a potential to assist the real-world MI diagnostics and reduce the burden of cardiologists."
},
{
"pmid": "22144961",
"title": "Heart rate variability - a historical perspective.",
"abstract": "Heart rate variability (HRV), the beat-to-beat variation in either heart rate or the duration of the R-R interval - the heart period, has become a popular clinical and investigational tool. The temporal fluctuations in heart rate exhibit a marked synchrony with respiration (increasing during inspiration and decreasing during expiration - the so called respiratory sinus arrhythmia, RSA) and are widely believed to reflect changes in cardiac autonomic regulation. Although the exact contributions of the parasympathetic and the sympathetic divisions of the autonomic nervous system to this variability are controversial and remain the subject of active investigation and debate, a number of time and frequency domain techniques have been developed to provide insight into cardiac autonomic regulation in both health and disease. It is the purpose of this essay to provide an historical overview of the evolution in the concept of HRV. Briefly, pulse rate was first measured by ancient Greek physicians and scientists. However, it was not until the invention of the \"Physician's Pulse Watch\" (a watch with a second hand that could be stopped) in 1707 that changes in pulse rate could be accurately assessed. The Rev. Stephen Hales (1733) was the first to note that pulse varied with respiration and in 1847 Carl Ludwig was the first to record RSA. With the measurement of the ECG (1895) and advent of digital signal processing techniques in the 1960s, investigation of HRV and its relationship to health and disease has exploded. This essay will conclude with a brief description of time domain, frequency domain, and non-linear dynamic analysis techniques (and their limitations) that are commonly used to measure HRV."
},
{
"pmid": "32143796",
"title": "Comprehensive electrocardiographic diagnosis based on deep learning.",
"abstract": "Cardiovascular disease (CVD) is the leading cause of death worldwide, and coronary artery disease (CAD) is a major contributor. Early-stage CAD can progress if undiagnosed and left untreated, leading to myocardial infarction (MI) that may induce irreversible heart muscle damage, resulting in heart chamber remodeling and eventual congestive heart failure (CHF). Electrocardiography (ECG) signals can be useful to detect established MI, and may also be helpful for early diagnosis of CAD. For the latter especially, the ECG perturbations can be subtle and potentially misclassified during manual interpretation and/or when analyzed by traditional algorithms found in ECG instrumentation. For automated diagnostic systems (ADS), deep learning techniques are favored over conventional machine learning techniques, due to the automatic feature extraction and selection processes involved. This paper highlights various deep learning algorithms exploited for the classification of ECG signals into CAD, MI, and CHF conditions. The Convolutional Neural Network (CNN), followed by combined CNN and Long Short-Term Memory (LSTM) models, appear to be the most useful architectures for classification. A 16-layer LSTM model was developed in our study and validated using 10-fold cross-validation. A classification accuracy of 98.5% was achieved. Our proposed model has the potential to be a useful diagnostic tool in hospitals for the classification of abnormal ECG signals."
},
{
"pmid": "30625197",
"title": "Artificial intelligence to predict needs for urgent revascularization from 12-leads electrocardiography in emergency patients.",
"abstract": "BACKGROUND\nPatient with acute coronary syndrome benefits from early revascularization. However, methods for the selection of patients who require urgent revascularization from a variety of patients visiting the emergency room with chest symptoms is not fully established. Electrocardiogram is an easy and rapid procedure, but may contain crucial information not recognized even by well-trained physicians.\n\n\nOBJECTIVE\nTo make a prediction model for the needs for urgent revascularization from 12-lead electrocardiogram recorded in the emergency room.\n\n\nMETHOD\nWe developed an artificial intelligence model enabling the detection of hidden information from a 12-lead electrocardiogram recorded in the emergency room. Electrocardiograms obtained from consecutive patients visiting the emergency room at Keio University Hospital from January 2012 to April 2018 with chest discomfort was collected. These data were splitted into validation and derivation dataset with no duplication in each dataset. The artificial intelligence model was constructed to select patients who require urgent revascularization within 48 hours. The model was trained with the derivation dataset and tested using the validation dataset.\n\n\nRESULTS\nOf the consecutive 39,619 patients visiting the emergency room with chest discomfort, 362 underwent urgent revascularization. Of them, 249 were included in the derivation dataset and the remaining 113 were included in validation dataset. For the control, 300 were randomly selected as derivation dataset and another 130 patients were randomly selected for validation dataset from the 39,317 who did not undergo urgent revascularization. On validation, our artificial intelligence model had predictive value of the c-statistics 0.88 (95% CI 0.84-0.93) for detecting patients who required urgent revascularization.\n\n\nCONCLUSIONS\nOur artificial intelligence model provides information to select patients who need urgent revascularization from only 12-leads electrocardiogram in those visiting the emergency room with chest discomfort."
},
{
"pmid": "32166560",
"title": "Correction to: Urine Sediment Recognition Method Based on Multi-View Deep Residual Learning in Microscopic Image.",
"abstract": "In the original version of this article, the authors' units in the affiliation section are, unfortunately, incorrect. Jining No.1 people's hospital and Affiliated Hospital of Jining Medical University are two independent units and should not have been combined into one affiliation."
},
{
"pmid": "30523982",
"title": "Detecting and interpreting myocardial infarction using fully convolutional neural networks.",
"abstract": "OBJECTIVE\nWe aim to provide an algorithm for the detection of myocardial infarction that operates directly on ECG data without any preprocessing and to investigate its decision criteria.\n\n\nAPPROACH\nWe train an ensemble of fully convolutional neural networks on the PTB ECG dataset and apply state-of-the-art attribution methods.\n\n\nMAIN RESULTS\nOur classifier reaches 93.3% sensitivity and 89.7% specificity evaluated using 10-fold cross-validation with sampling based on patients. The presented method outperforms state-of-the-art approaches and reaches the performance level of human cardiologists for detection of myocardial infarction. We are able to discriminate channel-specific regions that contribute most significantly to the neural network's decision. Interestingly, the network's decision is influenced by signs also recognized by human cardiologists as indicative of myocardial infarction.\n\n\nSIGNIFICANCE\nOur results demonstrate the high prospects of algorithmic ECG analysis for future clinical applications considering both its quantitative performance as well as the possibility of assessing decision criteria on a per-example basis, which enhances the comprehensibility of the approach."
},
{
"pmid": "31669959",
"title": "ML-ResNet: A novel network to detect and locate myocardial infarction using 12 leads ECG.",
"abstract": "BACKGROUND AND OBJECTIVE\nMyocardial infarction (MI) is one of the most threatening cardiovascular diseases for human beings, which can be diagnosed by electrocardiogram (ECG). Automated detection methods based on ECG focus on extracting handcrafted features. However, limited by the performance of traditional methods and individual differences between patients, it's difficult for predesigned features to detect MI with high performance.\n\n\nMETHODS\nThe paper presents a novel method to detect and locate MI combining a multi-lead residual neural network (ML-ResNet) structure with three residual blocks and feature fusion via 12 leads ECG records. Specifically, single lead feature branch network is trained to automatically learn the representative features of different levels between different layers, which exploits local characteristics of ECG to characterize the spatial information representation. Then all the lead features are fused together as global features. To evaluate the generalization of proposed method and clinical utility, two schemes including the intra-patient scheme and inter-patient scheme are all employed.\n\n\nRESULTS\nExperimental results based on PTB (Physikalisch-Technische Bundesanstalt) database shows that our model achieves superior results with the accuracy of 95.49%, the sensitivity of 94.85%, the specificity of 97.37%, and the F1 score of 96.92% for MI detection under the inter-patient scheme compared to the state-of-the-art. By contrast, the accuracy is 99.92% and the F1 score is 99.94% based on 5-fold cross validation under the intra-patient scheme. As for five types of MI location, the proposed method also yields an average accuracy of 99.72% and F1 of 99.67% in the intra-patient scheme.\n\n\nCONCLUSIONS\nThe proposed method for MI detection and location has achieved superior results compared to other detection methods. However, further promotion of the performance based on MI location for the inter-patient scheme still depends significantly on the mass data and the novel model which reflects spatial location information of different leads subtly."
},
{
"pmid": "31141794",
"title": "Classification of benign and malignant lung nodules from CT images based on hybrid features.",
"abstract": "The classification of benign and malignant lung nodules has great significance for the early detection of lung cancer, since early diagnosis of nodules can greatly increase patient survival. In this paper, we propose a novel classification method for lung nodules based on hybrid features from computed tomography (CT) images. The method fused 3D deep dual path network (DPN) features, local binary pattern (LBP)-based texture features and histogram of oriented gradients (HOG)-based shape features to characterize lung nodules. DPN is a convolutional neural network which integrates the advantages of aggregated residual transformations (ResNeXt) for feature reuse and a densely convolutional network (DenseNet) for exploring new features. LBP is a prominent feature descriptor for texture classification, when combining with the HOG descriptor, it can improve the classification performance considerably. To differentiate malignant nodules from benign ones, a gradient boosting machine (GBM) algorithm is employed. We evaluated the proposed method on the publicly available LUng Nodule Analysis 2016 (LUNA16) dataset with 1004 nodules, achieving an area under the receiver operating characteristic curve (AUC) of 0.9687 and accuracy of 93.78%. The promising results demonstrate that our method has strong robustness on the classification of nodule patterns by virtue of the joint use of texture features, shape features and 3D deep DPN features. The method has the potential to help radiologists to interpret diagnostic data and make decisions in clinical practice."
},
{
"pmid": "32064914",
"title": "Assessing and Mitigating Bias in Medical Artificial Intelligence: The Effects of Race and Ethnicity on a Deep Learning Model for ECG Analysis.",
"abstract": "BACKGROUND\nDeep learning algorithms derived in homogeneous populations may be poorly generalizable and have the potential to reflect, perpetuate, and even exacerbate racial/ethnic disparities in health and health care. In this study, we aimed to (1) assess whether the performance of a deep learning algorithm designed to detect low left ventricular ejection fraction using the 12-lead ECG varies by race/ethnicity and to (2) determine whether its performance is determined by the derivation population or by racial variation in the ECG.\n\n\nMETHODS\nWe performed a retrospective cohort analysis that included 97 829 patients with paired ECGs and echocardiograms. We tested the model performance by race/ethnicity for convolutional neural network designed to identify patients with a left ventricular ejection fraction ≤35% from the 12-lead ECG.\n\n\nRESULTS\nThe convolutional neural network that was previously derived in a homogeneous population (derivation cohort, n=44 959; 96.2% non-Hispanic white) demonstrated consistent performance to detect low left ventricular ejection fraction across a range of racial/ethnic subgroups in a separate testing cohort (n=52 870): non-Hispanic white (n=44 524; area under the curve [AUC], 0.931), Asian (n=557; AUC, 0.961), black/African American (n=651; AUC, 0.937), Hispanic/Latino (n=331; AUC, 0.937), and American Indian/Native Alaskan (n=223; AUC, 0.938). In secondary analyses, a separate neural network was able to discern racial subgroup category (black/African American [AUC, 0.84], and white, non-Hispanic [AUC, 0.76] in a 5-class classifier), and a network trained only in non-Hispanic whites from the original derivation cohort performed similarly well across a range of racial/ethnic subgroups in the testing cohort with an AUC of at least 0.930 in all racial/ethnic subgroups.\n\n\nCONCLUSIONS\nOur study demonstrates that while ECG characteristics vary by race, this did not impact the ability of a convolutional neural network to predict low left ventricular ejection fraction from the ECG. We recommend reporting of performance among diverse ethnic, racial, age, and sex groups for all new artificial intelligence tools to ensure responsible use of artificial intelligence in medicine."
},
{
"pmid": "32152582",
"title": "Deep learning models for electrocardiograms are susceptible to adversarial attack.",
"abstract": "Electrocardiogram (ECG) acquisition is increasingly widespread in medical and commercial devices, necessitating the development of automated interpretation strategies. Recently, deep neural networks have been used to automatically analyze ECG tracings and outperform physicians in detecting certain rhythm irregularities1. However, deep learning classifiers are susceptible to adversarial examples, which are created from raw data to fool the classifier such that it assigns the example to the wrong class, but which are undetectable to the human eye2,3. Adversarial examples have also been created for medical-related tasks4,5. However, traditional attack methods to create adversarial examples do not extend directly to ECG signals, as such methods introduce square-wave artefacts that are not physiologically plausible. Here we develop a method to construct smoothed adversarial examples for ECG tracings that are invisible to human expert evaluation and show that a deep learning model for arrhythmia detection from single-lead ECG6 is vulnerable to this type of attack. Moreover, we provide a general technique for collating and perturbing known adversarial examples to create multiple new ones. The susceptibility of deep learning ECG algorithms to adversarial misclassification implies that care should be taken when evaluating these models on ECGs that may have been altered, particularly when incentives for causing misclassification exist."
},
{
"pmid": "32286358",
"title": "Resolving challenges in deep learning-based analyses of histopathological images using explanation methods.",
"abstract": "Deep learning has recently gained popularity in digital pathology due to its high prediction quality. However, the medical domain requires explanation and insight for a better understanding beyond standard quantitative performance evaluation. Recently, many explanation methods have emerged. This work shows how heatmaps generated by these explanation methods allow to resolve common challenges encountered in deep learning-based digital histopathology analyses. We elaborate on biases which are typically inherent in histopathological image data. In the binary classification task of tumour tissue discrimination in publicly available haematoxylin-eosin-stained images of various tumour entities, we investigate three types of biases: (1) biases which affect the entire dataset, (2) biases which are by chance correlated with class labels and (3) sampling biases. While standard analyses focus on patch-level evaluation, we advocate pixel-wise heatmaps, which offer a more precise and versatile diagnostic instrument. This insight is shown to not only be helpful to detect but also to remove the effects of common hidden biases, which improves generalisation within and across datasets. For example, we could see a trend of improved area under the receiver operating characteristic (ROC) curve by 5% when reducing a labelling bias. Explanation techniques are thus demonstrated to be a helpful and highly relevant tool for the development and the deployment phases within the life cycle of real-world applications in digital pathology."
},
{
"pmid": "31144637",
"title": "Exploiting Images for Video Recognition: Heterogeneous Feature Augmentation via Symmetric Adversarial Learning.",
"abstract": "Training deep models of video recognition usually requires sufficient labeled videos in order to achieve good performance without over-fitting. However, it is quite labor-intensive and time-consuming to collect and annotate a large amount of videos. Moreover, training deep neural networks on large-scale video datasets always demands huge computational resources which further hold back many researchers and practitioners. To resolve that, collecting and training on annotated images are much easier. However, thoughtlessly applying images to help recognize videos may result in noticeable performance degeneration due to the well-known domain shift and feature heterogeneity. This proposes a novel symmetric adversarial learning approach for heterogeneous image-to-video adaptation, which augments deep image and video features by learning domain-invariant representations of source images and target videos. Primarily focusing on an unsupervised scenario where the labeled source images are accompanied by unlabeled target videos in the training phrase, we present a data-driven approach to respectively learn the augmented features of images and videos with superior transformability and distinguishability. Starting with learning a common feature space (called image-frame feature space) between images and video frames, we then build new symmetric generative adversarial networks (Sym-GANs) where one GAN maps image-frame features to video features and the other maps video features to image-frame features. Using the Sym-GANs, the source image feature is augmented with the generated video-specific representation to capture the motion dynamics while the target video feature is augmented with the image-specific representation to take the static appearance information. Finally, the augmented features from the source domain are fed into a network with fully connected layers for classification. Thanks to an end-to-end training procedure of the Sym-GANs and the classification network, our approach achieves better results than other state-of-the-arts, which is clearly validated by experiments on two video datasets, i.e., the UCF101 and HMDB51 datasets."
}
] |
Frontiers in Artificial Intelligence | null | PMC8990779 | 10.3389/frai.2022.830026 | Behind the Leaves: Estimation of Occluded Grapevine Berries With Conditional Generative Adversarial Networks | The need for accurate yield estimates for viticulture is becoming more important due to increasing competition in the wine market worldwide. One of the most promising methods to estimate the harvest is berry counting, as it can be approached non-destructively, and its process can be automated. In this article, we present a method that addresses the challenge of occluded berries with leaves to obtain a more accurate estimate of the number of berries that will enable a better estimate of the harvest. We use generative adversarial networks, a deep learning-based approach that generates a highly probable scenario behind the leaves exploiting learned patterns from images with non-occluded berries. Our experiments show that the estimate of the number of berries after applying our method is closer to the manually counted reference. In contrast to applying a factor to the berry count, our approach better adapts to local conditions by directly involving the appearance of the visible berries. Furthermore, we show that our approach can identify which areas in the image should be changed by adding new berries without explicitly requiring information about hidden areas. | 2. Related Work
2.1. Yield Estimation and Counting
Since accurate yield estimation is one of the major needs in viticulture, especially on a large scale, there is a strong demand for objective, fast, and non-destructive methods for yield forecasts in the field. For many plants, including grapevines, the derivation of phenotypic traits is essential for estimating future yields. Besides 3D-reconstruction (Schöler and Steinhage, 2015; Mack et al., 2017, 2018), 2D-image processing is also a widely used method (Hacking et al., 2019) for the derivation of such traits. For vine, one plant trait that strongly correlates with yield is the number of bearing fruits, that is, the number of berries. This correlation is underlined by the study of Clingeleffer et al. (2001), which shows that the variation of grapevine yield over the years is mainly caused by the berry number per vine (90%).
The task of object counting can be divided into two main approaches: (1) regression (Lempitsky and Zisserman, 2010; Arteta et al., 2016; Paul Cohen et al., 2017; Xie et al., 2018), which directly quantifies the number of objects for a given input, and (2) detection and instance segmentation approaches, which identify objects as an intermediate step for counting (Nuske et al., 2014; Nyarko et al., 2018). Detection approaches in viticulture are presented, for example, by Nuske et al. (2011), Roscher et al. (2014), and Nyarko et al. (2018), who define berries as geometric objects such as circles or convex surfaces and determine them by image analysis procedures such as the Hough transform. Recent state-of-the-art approaches, especially segmentation (He et al., 2017), are mostly based on neural networks. One of the earliest works combining grapevine data and neural network analysis was Aquino et al. (2017). They detect grapes using connected components and determine key features based on them, which are fed as annotations into a three-layer neural network to estimate yield. In another work, Aquino et al. (2018) deal with counting individual berries, which are first classified into berry candidates using pixel classification and morphological operators.
Afterward, a neural network classifies the results again and filters out the false positives.
The two studies by Zabawa et al. (2019, 2020) serve as the basis for this article. Zabawa et al. (2019) use a neural network which performs a semantic segmentation with the classes berry, berry-edge, and background, which enables the identification of single berry instances. The masks generated in that work serve as input for the proposed approach. The article by Zabawa et al. (2020), which builds on Zabawa et al. (2019), extends identification to counting berries by discarding the class edge and counting the berry components with a connected component algorithm. The counting procedure applied in that work is used for the analyses of the experiments.
2.2. Given Prior Information About Regions to Be Transferred
A significant problem in fruit yield estimation is the overlapping of the interesting fruit regions by other objects, which in the case of this work are the leaves. Several works already address the issue of data with occluded objects or gaps within the data, where actual values are missing, typically indicated by special values such as not-a-number. The methodologies can be divided into two areas: (1) prior information is available about where the covered positions are, and (2) there is no prior information. In actual data gaps, where the gap positions can be easily identified a priori, data imputation approaches can be used to complete the data. This imputation is especially important in machine learning, since machine learning models generally require complete numerical data. The imputation can be performed using constant values such as a fixed constant, the mean, the median, or k-nearest neighbor imputation (Batista and Monard, 2002), or using random numbers, for example drawn from the empirical distribution of the feature under consideration (Rubin, 1996, 2004; Enders, 2001; von Hippel and Bartlett, 2012). Multivariate imputations are also possible, which additionally measure the uncertainty of the missing values (Van Buuren and Oudshoorn, 1999; Robins and Wang, 2000; Kim and Rao, 2009). Data imputation is also possible using deep learning. Lee et al. (2019), for example, introduce CollaGAN, in which they convert the image imputation problem into a multi-domain image-to-image translation task.
When there are no data gaps, but the image areas that are occluded or need to be changed are known, inpainting is a commonly used method. The main objective is to generate visually and semantically plausible appearances for the occluded regions so that they fit into the image. Conventional inpainting methods (Bertalmio et al., 2003; Barnes et al., 2009) work by filling occluded pixels with patches of the image based on low-level features such as SIFT descriptors (Lowe, 2004). The results of these methods do not look realistic if the areas to be filled are near foreground objects or the structure is too complex. An alternative is offered by deep learning methods that learn a direct end-to-end mapping from masked images to filled output images. Particularly realistic results can be generated using Generative Adversarial Networks (GANs) (Iizuka et al., 2017; Dekel et al., 2018; Liu et al., 2018). For example, Yu et al. (2018) deal with generative image inpainting using contextual attention. They stack generative networks to further ensure the color and texture consistency of the generated regions with their surroundings. Their approach is based on rectangular masks, which do not generalize well to free-form masks. This task is solved by Yu et al.
(2019) one year later by using guidance with gated convolution to complete images with free-form masks. Further work introduces mask-specific inpainting that fills in pixel values at image locations defined by masks. Xiong et al. (2019) learn a mask of the partially masked object from the unmasked region. Based on the mask, they learn the edge of the object, which they subsequently use to generate the non-occluded image in combination with the occluded input image.
2.3. No Prior Information About Regions to Be Transferred
Methods that do not involve any prior knowledge about gaps and occluded areas can be divided into two-step and one-step approaches. Two-step approaches first determine the occluded areas, which are then used, for example, as a mask to inpaint the occluded regions. Examples are provided by Yan et al. (2019), who visualize the occluded parts by determining a binary mask of the visible object using a segmentation model and then creating a reconstructed mask using a generator. The resulting mask is fed into coupled discriminators together with a 3D-model pool in order to decide whether the generated mask is real or generated, compared with the masks in the model pool. Ostyakov et al. (2018) train an adversarial architecture called SEIGAN to first segment a mask of the interesting object, then paste the segmented region into a new image, and lastly fill the masked part of the original image by inpainting. Similar to the proposed approach, SeGAN, introduced by Ehsani et al. (2018), uses a combination of a convolutional neural network and a cGAN (Mirza and Osindero, 2014; Isola et al., 2017) to first predict a mask of the occluded region and, based on this, generate a non-occluded output. | [
"18237962",
"23235443",
"21123221",
"25466040",
"31443479",
"28708080",
"31521104"
] | [
{
"pmid": "18237962",
"title": "Simultaneous structure and texture image inpainting.",
"abstract": "An algorithm for the simultaneous filling-in of texture and structure in regions of missing image information is presented in this paper. The basic idea is to first decompose the image into the sum of two functions with different basic characteristics, and then reconstruct each one of these functions separately with structure and texture filling-in algorithms. The first function used in the decomposition is of bounded variation, representing the underlying image structure, while the second function captures the texture and possible noise. The region of missing information in the bounded variation image is reconstructed using image inpainting algorithms, while the same region in the texture image is filled-in with texture synthesis techniques. The original image is then reconstructed adding back these two sub-images. The novel contribution of this paper is then in the combination of these three previously developed components, image decomposition with inpainting and texture synthesis, which permits the simultaneous use of filling-in algorithms that are suited for different image characteristics. Examples on real images show the advantages of this proposed approach."
},
{
"pmid": "23235443",
"title": "Grapevine yield and leaf area estimation using supervised classification methodology on RGB images taken under field conditions.",
"abstract": "The aim of this research was to implement a methodology through the generation of a supervised classifier based on the Mahalanobis distance to characterize the grapevine canopy and assess leaf area and yield using RGB images. The method automatically processes sets of images, and calculates the areas (number of pixels) corresponding to seven different classes (Grapes, Wood, Background, and four classes of Leaf, of increasing leaf age). Each one is initialized by the user, who selects a set of representative pixels for every class in order to induce the clustering around them. The proposed methodology was evaluated with 70 grapevine (V. vinifera L. cv. Tempranillo) images, acquired in a commercial vineyard located in La Rioja (Spain), after several defoliation and de-fruiting events on 10 vines, with a conventional RGB camera and no artificial illumination. The segmentation results showed a performance of 92% for leaves and 98% for clusters, and allowed to assess the grapevine's leaf area and yield with R2 values of 0.81 (p < 0.001) and 0.73 (p = 0.002), respectively. This methodology, which operates with a simple image acquisition setup and guarantees the right number and kind of pixel classes, has shown to be suitable and robust enough to provide valuable information for vineyard management."
},
{
"pmid": "21123221",
"title": "maxAlike: maximum likelihood-based sequence reconstruction with application to improved primer design for unknown sequences.",
"abstract": "MOTIVATION\nThe task of reconstructing a genomic sequence from a particular species is gaining more and more importance in the light of the rapid development of high-throughput sequencing technologies and their limitations. Applications include not only compensation for missing data in unsequenced genomic regions and the design of oligonucleotide primers for target genes in species with lacking sequence information but also the preparation of customized queries for homology searches.\n\n\nRESULTS\nWe introduce the maxAlike algorithm, which reconstructs a genomic sequence for a specific taxon based on sequence homologs in other species. The input is a multiple sequence alignment and a phylogenetic tree that also contains the target species. For this target species, the algorithm computes nucleotide probabilities at each sequence position. Consensus sequences are then reconstructed based on a certain confidence level. For 37 out of 44 target species in a test dataset, we obtain a significant increase of the reconstruction accuracy compared to both the consensus sequence from the alignment and the sequence of the nearest phylogenetic neighbor. When considering only nucleotides above a confidence limit, maxAlike is significantly better (up to 10%) in all 44 species. The improved sequence reconstruction also leads to an increase of the quality of PCR primer design for yet unsequenced genes: the differences between the expected T(m) and real T(m) of the primer-template duplex can be reduced by ~26% compared with other reconstruction approaches. We also show that the prediction accuracy is robust to common distortions of the input trees. The prediction accuracy drops by only 1% on average across all species for 77% of trees derived from random genomic loci in a test dataset.\n\n\nAVAILABILITY\nmaxAlike is available for download and web server at: http://rth.dk/resources/maxAlike."
},
{
"pmid": "25466040",
"title": "Influence of cluster zone leaf removal on Pinot noir grape chemical and volatile composition.",
"abstract": "The influence of cluster-zone leaf removal on Pinot noir vine growth and fruit chemical and volatile composition was investigated in 3 years. Different severities of leaf removal (0%, 50%, 100%) were imposed during the pea-size stage of development from the cluster zone. Results show that cluster-zone leaf removal had little influence on vine growth, crop load, or grape maturity in terms of total soluble solids (TSS), pH or titratable acidity (TA) at harvest. However, 100% leaf removal resulted in higher concentrations of quercetin glycoside in grapes compared to 0% leaf removal. The 100% leaf removal also increased concentrations of petunidin- and malvidin-3-monoglucoside anthocyanins in two out of 3 years (2010 and 2012) by an average of 62% and 53%, respectively. In addition, 100% leaf removal resulted in higher concentrations of β-damascenone, and some bound-form terpenoids. The increases in β-damascenone were positively correlated to the increased sunlight exposure."
},
{
"pmid": "31443479",
"title": "Investigating 2-D and 3-D Proximal Remote Sensing Techniques for Vineyard Yield Estimation.",
"abstract": "Vineyard yield estimation provides the winegrower with insightful information regarding the expected yield, facilitating managerial decisions to achieve maximum quantity and quality and assisting the winery with logistics. The use of proximal remote sensing technology and techniques for yield estimation has produced limited success within viticulture. In this study, 2-D RGB and 3-D RGB-D (Kinect sensor) imagery were investigated for yield estimation in a vertical shoot positioned (VSP) vineyard. Three experiments were implemented, including two measurement levels and two canopy treatments. The RGB imagery (bunch- and plant-level) underwent image segmentation before the fruit area was estimated using a calibrated pixel area. RGB-D imagery captured at bunch-level (mesh) and plant-level (point cloud) was reconstructed for fruit volume estimation. The RGB and RGB-D measurements utilised cross-validation to determine fruit mass, which was subsequently used for yield estimation. Experiment one's (laboratory conditions) bunch-level results achieved a high yield estimation agreement with RGB-D imagery (r2 = 0.950), which outperformed RGB imagery (r2 = 0.889). Both RGB and RGB-D performed similarly in experiment two (bunch-level), while RGB outperformed RGB-D in experiment three (plant-level). The RGB-D sensor (Kinect) is suited to ideal laboratory conditions, while the robust RGB methodology is suitable for both laboratory and in-situ yield estimation."
},
{
"pmid": "28708080",
"title": "Phenoliner: A New Field Phenotyping Platform for Grapevine Research.",
"abstract": "In grapevine research the acquisition of phenotypic data is largely restricted to the field due to its perennial nature and size. The methodologies used to assess morphological traits and phenology are mainly limited to visual scoring. Some measurements for biotic and abiotic stress, as well as for quality assessments, are done by invasive measures. The new evolving sensor technologies provide the opportunity to perform non-destructive evaluations of phenotypic traits using different field phenotyping platforms. One of the biggest technical challenges for field phenotyping of grapevines are the varying light conditions and the background. In the present study the Phenoliner is presented, which represents a novel type of a robust field phenotyping platform. The vehicle is based on a grape harvester following the concept of a moveable tunnel. The tunnel it is equipped with different sensor systems (RGB and NIR camera system, hyperspectral camera, RTK-GPS, orientation sensor) and an artificial broadband light source. It is independent from external light conditions and in combination with artificial background, the Phenoliner enables standardised acquisition of high-quality, geo-referenced sensor data."
},
{
"pmid": "31521104",
"title": "Towards pixel-to-pixel deep nucleus detection in microscopy images.",
"abstract": "BACKGROUND\nNucleus is a fundamental task in microscopy image analysis and supports many other quantitative studies such as object counting, segmentation, tracking, etc. Deep neural networks are emerging as a powerful tool for biomedical image computing; in particular, convolutional neural networks have been widely applied to nucleus/cell detection in microscopy images. However, almost all models are tailored for specific datasets and their applicability to other microscopy image data remains unknown. Some existing studies casually learn and evaluate deep neural networks on multiple microscopy datasets, but there are still several critical, open questions to be addressed.\n\n\nRESULTS\nWe analyze the applicability of deep models specifically for nucleus detection across a wide variety of microscopy image data. More specifically, we present a fully convolutional network-based regression model and extensively evaluate it on large-scale digital pathology and microscopy image datasets, which consist of 23 organs (or cancer diseases) and come from multiple institutions. We demonstrate that for a specific target dataset, training with images from the same types of organs might be usually necessary for nucleus detection. Although the images can be visually similar due to the same staining technique and imaging protocol, deep models learned with images from different organs might not deliver desirable results and would require model fine-tuning to be on a par with those trained with target data. We also observe that training with a mixture of target and other/non-target data does not always mean a higher accuracy of nucleus detection, and it might require proper data manipulation during model training to achieve good performance.\n\n\nCONCLUSIONS\nWe conduct a systematic case study on deep models for nucleus detection in a wide variety of microscopy images, aiming to address several important but previously understudied questions. We present and extensively evaluate an end-to-end, pixel-to-pixel fully convolutional regression network and report a few significant findings, some of which might have not been reported in previous studies. The model performance analysis and observations would be helpful to nucleus detection in microscopy images."
}
] |
Frontiers in Big Data | null | PMC8993228 | 10.3389/fdata.2022.822573 | Data-Driven Framework for Understanding and Predicting Air Quality in Urban Areas | Monitoring, predicting, and controlling the air quality in urban areas is one of the effective solutions for tackling the climate change problem. Leveraging the availability of big data in different domains like pollutant concentration, urban traffic, aerial imagery of terrains and vegetation, and weather conditions can aid in understanding the interactions between these factors and building a reliable air quality prediction model. This research proposes a novel cost-effective and efficient air quality modeling framework including all these factors employing state-of-the-art artificial intelligence techniques. The framework also includes a novel deep learning-based vegetation detection system using aerial images. The pilot study conducted in the UK city of Cambridge using the proposed framework investigates various predictive models ranging from statistical to machine learning and deep recurrent neural network models. This framework opens up possibilities of broadening air quality modeling and prediction to other domains like vegetation or green space planning or green traffic routing for sustainable urban cities. The research is mainly focused on extracting strong pieces of evidence which could be useful in proposing better policies around climate change. | 2. Related Work
The increasing concentration of greenhouse gas emissions is considered the prime cause of climate change and air quality degradation over the last three decades, and many studies have focused on the way in which this can be monitored and mitigated. Air quality, and specifically the impact of vegetation on air quality, has been in the spotlight of many researchers for the last decade. Studies show vegetation and trees can both influence the atmospheric composition of trace gases and enable dispersion and deposition of air pollutants, thus affecting the concentrations of pollutants that populations in urban areas are exposed to. However, the research outcomes are variable and none of these studies show any definite outcome on this matter. There have been recent studies modeling urban air quality (Liang and Gong, 2020; Wolf et al., 2020), most of which do not consider other related factors. A study by Duarte et al. (2015) investigates the impact of vegetation on urban micro-climate and the warming effect resulting from an increase in built density in a subtropical climate. They measured air temperature, relative humidity, solar radiation, soil temperature, wind direction, and wind speed in the Bela Vista district of São Paulo, Brazil, to pre-calibrate ENVI-met V4 preview prior to parametric simulations. They also set up a Campbell Scientific meteorological station in the center of the central and densest block to monitor the micro-climate effect. The diurnal variation of air temperature and relative humidity was measured and monitored on an hourly basis. They measured the effect of vegetation on micro-climates by considering tree shading and the physiological process of evapotranspiration.
This study showed that the presence of vegetation can significantly reduce the surface temperature and mean radiant temperature of the urban area. In another study, Holnicki and Nahorski (2015) showed how the emission uncertainty of air pollutants generated by industry, traffic, and the municipal sector relates to concentrations measured at receptor points in the Warsaw metropolitan area of Poland. This study identified the transportation system as the main source of adverse environmental impact. Several types of urban atmospheric pollutants including PM10, PM2.5, NOx, SO2, and Pb were included in this study and analyzed using the Monte Carlo technique to identify the key uncertainty factors. Zhu et al. (2018) attempted to tackle air quality forecasting by predicting the hourly concentration of air pollutants such as Ozone, PM2.5, and SO2 on the basis of meteorological data of previous days by formulating the prediction over 24 h as a multi-task learning (MTL) problem. This study also proposed a consecutive hour-related regularization to achieve better performance figures. A study by Kleine Deters et al. (2017) offers a machine learning model based on Boosted Trees and Linear Support Vector Machines to analyse meteorological and pollution data collected from the city of Quito, Ecuador, to predict the concentrations of PM2.5 from wind speed, wind direction, and precipitation levels. This study shows that the aforementioned machine learning models are capable of accurately predicting concentrations of PM2.5 from meteorological data. Another study by Zalakeviciute et al. (2018) investigates the impact of meteorological and topological conditions on urban air pollution using data collected from the city of Quito, Ecuador. This study specifically investigates the impact of the relative humidity (RH) on the daily average PM2.5 concentrations. Results of this study show a positive correlation between daily average urban PM2.5 concentrations and the RH in traffic-busy central areas, and a negative correlation in the industrial city outskirts. Zhang et al. (2019) aimed to tackle issues such as the instability of data sources and the variation of pollutant concentrations along the time series in order to build a better air quality predictive model. This study measured PM2.5 concentrations at over 35 air quality monitoring stations in Beijing and used the LightGBM model and forecasting data to address the issue of high dimensionality. Ameer et al. (2019) proposed a comparative analysis of four regression machine learning techniques, including decision trees, random forest, gradient boosting, and multi-layer perceptron, for predicting air quality, specifically PM2.5 atmospheric pollution, in smart cities. This study shows that the Random Forest regression model was the best technique for pollution prediction in urban environments. A similar study by Aditya et al. (2018) attempted to predict air quality and PM2.5 atmospheric pollution using logistic regression. A comprehensive exploratory study by Rybarczyk and Zalakeviciute (2018) attempted to investigate the efficiency and performance of various machine learning techniques for outdoor air quality and atmospheric pollution modeling. Rao et al. (2019) proposed an efficient approach for modeling and prediction of air quality using long short term memory (LSTM) Recurrent Neural Networks. This study attempted to capture the dependencies in various pollutants such as PM2.5, PM10, SO2, NO2, and Ozone to perform air quality prediction.
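As an illustrative aside (not code from any of the studies cited in this section), a minimal LSTM-based forecaster of the kind described above might look like the following sketch; the feature set, window length, and hyper-parameters are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of an LSTM forecaster for hourly pollutant data:
# a window of past hours of pollutant + weather features -> next-hour PM2.5.
import torch
import torch.nn as nn

class PollutantLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)   # regress next-hour PM2.5 concentration

    def forward(self, x):                  # x: (batch, past_hours, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # predict from the last time step's hidden state

# Hypothetical batch: 24 past hours of 6 features (PM2.5, NO2, O3, temperature, wind speed, humidity)
model = PollutantLSTM(n_features=6)
dummy = torch.randn(8, 24, 6)
print(model(dummy).shape)                  # torch.Size([8, 1])
```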
RNN-LSTM allows modeling of the temporal sequence data of each pollutant for forecasting hourly concentrations. Similarly, Belavadi et al. (2020) proposed an air quality forecasting architecture that gathers real-time air pollutant concentrations, including SOx, PM2.5, CO, and LPG, using Wireless Sensor Networks (WSN) and a real-time air quality data API, and then uses an LSTM-RNN to forecast future air pollutant concentrations. Masmoudi et al. (2020) attempted to predict multiple air pollutant concentrations, including NOx, Ozone, and SO2, via a novel feature ranking method that is based on a combination of the Ensemble of Regressor Chains and the Random Forest permutation importance measure. Feature selection allowed the model to obtain the best subset of features. Harishkumar et al. (2020) proposed an air pollution forecasting model for PM2.5 atmospheric pollution using a machine learning regression model. There are other studies that look at satellite images to estimate the pollutants directly from the images (Fang et al., 2016; Chen et al., 2018; Sun et al., 2019; Kalajdjieski et al., 2020; Shin et al., 2020). All these studies work only for particulate matter and not for gaseous pollutants. Our proposed research looks at both gaseous and particulate matter and uses satellite imagery for vegetation detection, not for pollutant detection. The pollutants in our proposed framework will be monitored through reliable sensors. Inclusion of weather parameters in air quality modeling has shown promising results (Kalajdjieski et al., 2020; González-Enrique et al., 2021). Deep learning models, mainly LSTM-based RNNs, are popularly used for both univariate and multivariate (with exogenous features) time series pollutant data. Different LSTM configurations, mainly an LSTM with a cross-validation procedure for time series (LSTM-CVT), were compared with basic artificial neural networks (ANNs) by González-Enrique et al. (2021) for NO2 in the Bay of Algeciras (Spain). It was found that exogenous variables like weather parameters yield a considerable improvement in performance. LSTMs have also been used in traffic forecasting (Awan et al., 2020) and pollution classification (Arsov et al., 2021). Our research compares different machine learning models, ranging from linear regression to multiple kernel-based SVR techniques, with both traditional mathematical models like ARIMA and the popular LSTM-based deep learning models. Also, this work proposes to use more factors, like vegetation and seasonal information, on top of the previously suggested weather-based exogenous features. A study by Tallis et al. (2011) proposed a predictive model to understand the role of urban trees in removing PM10 from urban air in Greater London. The research identified that the planned 10% increase in tree area within Greater London (from the current 20–30%) by 2050 increases the annual PM10 removal from the current range of 852–2,121 tonnes (0.7–1.4%) to 1,109–2,379 tonnes (1.1–2.6%). It was also identified that the increased deposition would be greatest if a larger proportion of coniferous to broad-leaved trees were used around the polluted areas. This study proposed two different approaches in order to determine the relationships between the amount and type of tree cover and PM10 uptake. The first approach measured the PM10 downward flux relative to the urban tree canopy using deposition velocity and pollutant concentration, while the second approach used species-specific deposition velocities to estimate the PM10 uptake.
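As an illustrative aside, the first approach rests on the standard dry-deposition relation, in which the downward flux equals the deposition velocity multiplied by the ambient concentration (F = Vd × C); the sketch below uses assumed values chosen purely for illustration, not figures from Tallis et al. (2011).

```python
# Illustrative dry-deposition estimate: annual PM10 removal = Vd * C * canopy area * time.
# All numeric values below are assumptions for illustration, not results from the cited study.
Vd = 0.0064            # deposition velocity of PM10 to a tree canopy (m/s), assumed
C = 25e-9              # ambient PM10 concentration (kg/m^3), i.e. 25 ug/m^3, assumed
canopy_area = 1e8      # effective canopy area (m^2), assumed
seconds_per_year = 365 * 24 * 3600

annual_removal_kg = Vd * C * canopy_area * seconds_per_year
print(f"Estimated PM10 removal: {annual_removal_kg / 1000:.0f} tonnes per year")
```

With these assumed values the estimate comes out at roughly 500 tonnes per year, the same order of magnitude as the Greater London figures quoted above.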
The main drawback of this study is the lack of in-situ validation. Issues like the sensitivity of the selected species to atmospheric pollution and climate change, aesthetic appeal, biodiversity, soil factors, maintenance costs, and the land availability for planting programs have also not been considered. In another study, Yang et al. (2015) investigated the suitability of common urban tree species for controlling PM2.5 pollution. This study developed a ranking approach to evaluate the PM2.5 removal efficiency, impacts on air quality, and the adaptability to urban environments of commonly occurring urban tree species. It was suggested to use species with high PM2.5 removal efficiency in urban greening projects. However, in the real world, PM removal efficiency is not the most important criterion for urban planting. The ability of the species to adapt to urban abiotic and biotic stresses such as compacted soil, water-logging, droughts, pests and diseases, and air pollutants is among the most important factors in urban planting programs. The results of this study showed that the most frequently occurring urban tree species were not the best performers in removing PM2.5. Among the ten most frequently occurring tree species in the dataset, only three species, namely London plane, Silver maple, and Honey locust, were ranked above average in capturing particulate matter. This study suggests that conifer species have high PM2.5 removal efficiency while being robust to urban abiotic and biotic stresses. A study by Yang et al. (2005) looked into the impact of the urban forest on air pollution in the city of Beijing. They relied on satellite image analyses and field surveys to establish the characteristics of the current urban forest in the central part of Beijing. Satellite images covering the Beijing region were obtained from the EROS Data Center and had been captured by Landsat's Enhanced Thematic Mapper. This study attempted to create a model to quantify the major air pollutants, including SO2, NO2, CO2, PM10, and O3, that are removed from the atmosphere by the urban forest in the central part of Beijing. This study also investigated the Biogenic Volatile Organic Compound (BVOC) emissions sourced from the urban forest. The results of this study showed that 2.4 million trees in central Beijing removed over 772 tons of PM10 from the atmosphere and stored over 0.2 million tons of CO2 as biomass in a year. Wilkes et al. (2018) used multi-scale LiDAR imaging, including terrestrial and airborne laser scanning, to estimate urban above-ground biomass for the London Borough of Camden, UK. Airborne laser scanning was used in the first instance to create clusters of feature sets that represented a wide range of tree structures typical in an urban setting. Then, terrestrial LiDAR measurements were used to derive allometry that uses structure metrics to identify individual trees and subsequently estimate the above-ground biomass. This study used two relatively expensive imaging techniques, terrestrial and airborne laser scanning, to estimate the above-ground biomass, which is less preferable in the real world. A similar study by Li et al. (2020) attempted to estimate urban vegetation biomass in the east Chinese city of Xuzhou using a combination of field observations and Sentinel satellite images. Field measurements were used to identify the quadrat biomass using allometric biomass equations. Vegetation biomass models were constructed using remote sensing Sentinel satellite images.
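As background to such satellite-based biomass modeling, vegetation indices such as the NDVI, commonly derived from the red and near-infrared reflectances (Sentinel-2 bands B4 and B8), are typical predictors in these models; the sketch below is a generic illustration, with the reflectance values assumed rather than taken from the cited study.

```python
# Generic NDVI computation: NDVI = (NIR - Red) / (NIR + Red), masking zero denominators.
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)        # avoid division by zero
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Hypothetical reflectance patches standing in for Sentinel-2 B8 (NIR) and B4 (red)
nir_patch = np.array([[0.45, 0.50], [0.40, 0.48]])
red_patch = np.array([[0.08, 0.10], [0.12, 0.09]])
print(ndvi(nir_patch, red_patch))                  # values around 0.5-0.7 suggest vegetated pixels
```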
This study attempted to identify the capability of Sentinel-2A data to estimate urban vegetation biomass and to examine whether vegetation type-specific modeling can improve estimation accuracy. Similar to the earlier study, this approach is also less preferable in the real world as it requires labor-intensive and expensive field observations and manual surveying. Similarly, studies including Reitberger et al. (2009), Lahivaara et al. (2013), Zhang et al. (2014), and Qin et al. (2014) used airborne LiDAR or a combination of airborne LiDAR and point-cloud technologies for individual tree crown detection. There are other types of studies, such as that of Kraft et al. (2019), which aimed to model vegetation dynamics in conjunction with climate change impacts. Kraft et al. (2019) used an LSTM network and multivariate predictors to model Earth system variables and create a global model of the vegetation dynamic state. The authors used 33 years of climate variables in addition to static soil and land cover characteristics to model daily satellite-based observations. The proposed LSTM-based model was able to learn the dynamics of vegetation through temporal and global spatial variables. However, the focus of the study is not on air quality. With an aim to promote urban tree management, Branson et al. (2018) created up-to-date catalogs of the urban tree population using the publicly available TreeMapLA Los Angeles tree inventory along with aerial and street view images from Google Maps. This study also aimed to create a change-tracker model that recognizes changes in individual trees at city scale, which is essential to keep an urban tree inventory up-to-date. The study first scraped available aerial images and street view panoramas of the city of Pasadena from Google Maps. Then, a tree detector and a tree species classifier were separately trained using labels from the TreeMapLA dataset. The trained tree detector was then applied to all unseen available images, and the detections were projected from the image space to true geographic positions. Larsen et al. (2011) conducted a comparison study of six individual tree crown detection algorithms and evaluated their performance using an image dataset containing six diverse forest types at different geographical locations in three European countries. This study showed that the majority of the algorithms were struggling with individual tree crown detection in non-homogeneous forest images. More related literature on this topic is summarized in Section 4, which presents our approach to self-supervised tree crown detection from Google Earth images. Some of the limitations of earlier attempts at vegetation or tree crown detection in urban areas, and at mapping this information to an air quality modeling framework, have been discussed above. Furthermore, none of the aforementioned projects consider weather and climatic conditions or other factors within a generic air quality modeling framework. As mentioned earlier, our research proposes a comprehensive and affordable framework for urban air quality modeling. | [
"33578633",
"32635487",
"16996198",
"30014938",
"28600533",
"30671347",
"27596304",
"33806409",
"27751401",
"27570539",
"32357880",
"33693354",
"33122678",
"32041079",
"33416835",
"26886976",
"29330699",
"31007387"
] | [
{
"pmid": "33578633",
"title": "Multi-Horizon Air Pollution Forecasting with Deep Neural Networks.",
"abstract": "Air pollution is a global problem, especially in urban areas where the population density is very high due to the diverse pollutant sources such as vehicles, industrial plants, buildings, and waste. North Macedonia, as a developing country, has a serious problem with air pollution. The problem is highly present in its capital city, Skopje, where air pollution places it consistently within the top 10 cities in the world during the winter months. In this work, we propose using Recurrent Neural Network (RNN) models with long short-term memory units to predict the level of PM10 particles at 6, 12, and 24 h in the future. We employ historical air quality measurement data from sensors placed at multiple locations in Skopje and meteorological conditions such as temperature and humidity. We compare different deep learning models' performance to an Auto-regressive Integrated Moving Average (ARIMA) model. The obtained results show that the proposed models consistently outperform the baseline model and can be successfully employed for air pollution prediction. Ultimately, we demonstrate that these models can help decision-makers and local authorities better manage the air pollution consequences by taking proactive measures."
},
{
"pmid": "32635487",
"title": "Improving Road Traffic Forecasting Using Air Pollution and Atmospheric Data: Experiments Based on LSTM Recurrent Neural Networks.",
"abstract": "Traffic flow forecasting is one of the most important use cases related to smart cities. In addition to assisting traffic management authorities, traffic forecasting can help drivers to choose the best path to their destinations. Accurate traffic forecasting is a basic requirement for traffic management. We propose a traffic forecasting approach that utilizes air pollution and atmospheric parameters. Air pollution levels are often associated with traffic intensity, and much work is already available in which air pollution has been predicted using road traffic. However, to the best of our knowledge, an attempt to improve forecasting road traffic using air pollution and atmospheric parameters is not yet available in the literature. In our preliminary experiments, we found out the relation between traffic intensity, air pollution, and atmospheric parameters. Therefore, we believe that addition of air pollutants and atmospheric parameters can improve the traffic forecasting. Our method uses air pollution gases, including C O , N O , N O 2 , N O x , and O 3 . We chose these gases because they are associated with road traffic. Some atmospheric parameters, including pressure, temperature, wind direction, and wind speed have also been considered, as these parameters can play an important role in the dispersion of the above-mentioned gases. Data related to traffic flow, air pollution, and the atmosphere were collected from the open data portal of Madrid, Spain. The long short-term memory (LSTM) recurrent neural network (RNN) was used in this paper to perform traffic forecasting."
},
{
"pmid": "16996198",
"title": "Estimating the reduction of urban PM10 concentrations by trees within an environmental information system for planners.",
"abstract": "Trees have been widely quoted as effective scavengers of both gaseous and particulate pollutants from the atmosphere. Recent work on the deposition of urban aerosols onto woodland allows the effect of tree planting strategies on airborne aerosol concentrations to be quantified and considered within the planning process. By identifying the potential planting locations in the local authority area, and applying them within a dispersion and deposition model, the potential magnitude of reduction in the ambient concentration of PM(10), achievable through urban tree planting, has been quantified for two UK cities. As part of the Environmental Information Systems for Planners (EISP), flow diagrams, based on planning decisions, have incorporated output from the model to make decisions on land use planning ranging from development plans and strategic planning, to development control. In this way, for any new developments that contribute to the local PM(10) level, the mitigation by planting trees can be assessed, and in some cases, reductions can be sufficient to meet air quality objectives for PM(10)."
},
{
"pmid": "30014938",
"title": "Spatiotemporal patterns of PM10 concentrations over China during 2005-2016: A satellite-based estimation using the random forests approach.",
"abstract": "BACKGROUND\nFew studies have estimated historical exposures to PM10 at a national scale in China using satellite-based aerosol optical depth (AOD). Also, long-term trends have not been investigated.\n\n\nOBJECTIVES\nIn this study, daily concentrations of PM10 over China during the past 12 years were estimated with the most recent ground monitoring data, AOD, land use information, weather data and a machine learning approach.\n\n\nMETHODS\nDaily measurements of PM10 during 2014-2016 were collected from 1479 sites in China. Two types of Moderate Resolution Imaging Spectroradiometer (MODIS) AOD data, land use information, and weather data were downloaded and merged. A random forests model (non-parametric machine learning algorithms) and two traditional regression models were developed and their predictive abilities were compared. The best model was applied to estimate daily concentrations of PM10 across China during 2005-2016 at 0.1⁰ (≈10 km).\n\n\nRESULTS\nCross-validation showed our random forests model explained 78% of daily variability of PM10 [root mean squared prediction error (RMSE) = 31.5 μg/m3]. When aggregated into monthly and annual averages, the models captured 82% (RMSE = 19.3 μg/m3) and 81% (RMSE = 14.4 μg/m3) of the variability. The random forests model showed much higher predictive ability and lower bias than the other two regression models. Based on the predictions of random forests model, around one-third of China experienced with PM10 pollution exceeding Grade Ⅱ National Ambient Air Quality Standard (>70 μg/m3) in China during the past 12 years. The highest levels of estimated PM10 were present in the Taklamakan Desert of Xinjiang and Beijing-Tianjin metropolitan region, while the lowest were observed in Tibet, Yunnan and Hainan. Overall, the PM10 level in China peaked in 2006 and 2007, and declined since 2008.\n\n\nCONCLUSIONS\nThis is the first study to estimate historical PM10 pollution using satellite-based AOD data in China with random forests model. The results can be applied to investigate the long-term health effects of PM10 in China."
},
{
"pmid": "28600533",
"title": "Variation in Tree Species Ability to Capture and Retain Airborne Fine Particulate Matter (PM2.5).",
"abstract": "Human health risks caused by PM2.5 raise awareness to the role of trees as bio-filters of urban air pollution, but not all species are equally capable of filtering the air. The objectives of this current study were: (1) to determine the foliar traits for effective PM2.5-capture and (2) explore species-to-species differences in foliar PM2.5-recapture capacity following a rain event. The study concluded that overall, the acicular needle shape made conifers more efficient with PM2.5 accumulation and post-rainfall recapture than broadleaved species. The foliar shape and venation of broadleaved species did not appear to influence the PM2.5 accumulation. However, the number of the grooves and trichomes of broadleaved species were positively related to foliar PM2.5 accumulation, suggesting that they could be used as indicators for the effectiveness of tree PM2.5 capture. Furthermore, the amount of PM2.5 removal by rainfall was determined by the total foliar PM2.5. Not all PM2.5 remained on the foliage. In some species, PM2.5 was resuspended during the growing season, and thus reduced the net particular accumulation for that species. These findings contribute to a better understanding of tree species potential for reducing PM2.5 in urban environments."
},
{
"pmid": "30671347",
"title": "Morphological and chemical composition of particulate matter in buses exhaust.",
"abstract": "This research article investigates the particulate matter originated from the exhaust emissions of 20 bus models, within the territory of Vladivostok, Russian Federation. The majority of evaluated buses (17 out of 20) had emissions of large particles with sizes greater than 400 μm, which account for more than 80% of all measured particles. The analysis of the elemental composition showed that the exhaust emissions contained Al, Cd, Cu, Fe, Mg, Ni, Pb, and Zn, with the concentration of Zn prevailing in all samples by two to three orders of magnitude higher than the concentrations of the other elements."
},
{
"pmid": "27596304",
"title": "Particle deposition in a peri-urban Mediterranean forest.",
"abstract": "Urban and peri-urban forests provide a multitude of Ecosystem Services to the citizens. While the capacity of removing carbon dioxide and gaseous compounds from the atmosphere has been tested, their capacity to sequestrate particles (PM) has been poorly investigated. Mediterranean forest ecosystems are often located nearby or inside large urban areas. This is the case of the city of Rome, Italy, which hosts several urban parks and is surrounded by forested areas. In particular, the Presidential Estate of Castelporziano is a 6000 ha forested area located between the Tyrrhenian coast and the city (25 km downtown of Rome). Under the hypothesis that forests can ameliorate air quality thanks to particle deposition, we measured fluxes of PM1, 2.5 and 10 with fast optical sensors and eddy covariance technique. We found that PM1 is mainly deposited during the central hours of the day, while negligible fluxes were observed for PM 2.5 and 10. A Hybrid Single-Particle Lagrangian Integrated Trajectory model (HYSPLIT v4) simulated PM emission from traffic areas in the city of Rome and showed that a significant portion of PM is removed by vegetation in the days when the plume trajectory meets the urban forest."
},
{
"pmid": "33806409",
"title": "Artificial Neural Networks, Sequence-to-Sequence LSTMs, and Exogenous Variables as Analytical Tools for NO2 (Air Pollution) Forecasting: A Case Study in the Bay of Algeciras (Spain).",
"abstract": "This study aims to produce accurate predictions of the NO2 concentrations at a specific station of a monitoring network located in the Bay of Algeciras (Spain). Artificial neural networks (ANNs) and sequence-to-sequence long short-term memory networks (LSTMs) were used to create the forecasting models. Additionally, a new prediction method was proposed combining LSTMs using a rolling window scheme with a cross-validation procedure for time series (LSTM-CVT). Two different strategies were followed regarding the input variables: using NO2 from the station or employing NO2 and other pollutants data from any station of the network plus meteorological variables. The ANN and LSTM-CVT exogenous models used lagged datasets of different window sizes. Several feature ranking methods were used to select the top lagged variables and include them in the final exogenous datasets. Prediction horizons of t + 1, t + 4 and t + 8 were employed. The exogenous variables inclusion enhanced the model's performance, especially for t + 4 (ρ ≈ 0.68 to ρ ≈ 0.74) and t + 8 (ρ ≈ 0.59 to ρ ≈ 0.66). The proposed LSTM-CVT method delivered promising results as the best performing models per prediction horizon employed this new methodology. Additionally, per each parameter combination, it obtained lower error values than ANNs in 85% of the cases."
},
{
"pmid": "27751401",
"title": "Impact of air pollution on the burden of chronic respiratory diseases in China: time for urgent action.",
"abstract": "In China, where air pollution has become a major threat to public health, public awareness of the detrimental effects of air pollution on respiratory health is increasing-particularly in relation to haze days. Air pollutant emission levels in China remain substantially higher than are those in developed countries. Moreover, industry, traffic, and household biomass combustion have become major sources of air pollutant emissions, with substantial spatial and temporal variations. In this Review, we focus on the major constituents of air pollutants and their impacts on chronic respiratory diseases. We highlight targets for interventions and recommendations for pollution reduction through industrial upgrading, vehicle and fuel renovation, improvements in public transportation, lowering of personal exposure, mitigation of the direct effects of air pollution through healthy city development, intervention at population-based level (systematic health education, intensive and individualised intervention, pre-emptive measures, and rehabilitation), and improvement in air quality. The implementation of a national environmental protection policy has become urgent."
},
{
"pmid": "27570539",
"title": "The impact of weather changes on air quality and health in the United States in 1994-2012.",
"abstract": "Air quality is heavily influenced by weather conditions. In this study, we assessed the impact of long-term weather changes on air quality and health in the US during 1994-2012. We quantified past weather-related increases, or 'weather penalty', in ozone (O3) and fine particulate matter (PM2.5), and thereafter estimated the associated excess deaths. Using statistical regression methods, we derived the weather penalty as the additional increases in air pollution relative to trends assuming constant weather conditions (i.e., weather-adjusted trends). During our study period, temperature increased and wind speed decreased in most US regions. Nationally, weather-related 8 h max O3 increases were 0.18 ppb per year (95% CI: 0.06, 0.31) in the warm season (May-October) and 0.07 ppb per year (95% CI: 0.02, 0.13) in the cold season (November-April). The weather penalties on O3 were relatively larger than PM2.5 weather penalties, which were 0.056 µg m-3 per year (95% CI: 0.016, 0.096) in warm months and 0.027 µg m-3 per year (95% CI: 0.010, 0.043) in cold months. Weather penalties on O3 and PM2.5 were associated with 290 (95% CI: 80, 510) and 770 (95% CI: 190, 1350) excess annual deaths, respectively. Over a 19-year period, this amounts to 20 300 excess deaths (5600 from O3, 14 700 from PM2.5) attributable to the weather penalty on air quality."
},
{
"pmid": "32357880",
"title": "Machine learning approaches to predict peak demand days of cardiovascular admissions considering environmental exposure.",
"abstract": "BACKGROUND\nAccumulating evidence has linked environmental exposure, such as ambient air pollution and meteorological factors, to the development and severity of cardiovascular diseases (CVDs), resulting in increased healthcare demand. Effective prediction of demand for healthcare services, particularly those associated with peak events of CVDs, can be useful in optimizing the allocation of medical resources. However, few studies have attempted to adopt machine learning approaches with excellent predictive abilities to forecast the healthcare demand for CVDs. This study aims to develop and compare several machine learning models in predicting the peak demand days of CVDs admissions using the hospital admissions data, air quality data and meteorological data in Chengdu, China from 2015 to 2017.\n\n\nMETHODS\nSix machine learning algorithms, including logistic regression (LR), support vector machine (SVM), artificial neural network (ANN), random forest (RF), extreme gradient boosting (XGBoost), and light gradient boosting machine (LightGBM) were applied to build the predictive models with a unique feature set. The area under a receiver operating characteristic curve (AUC), logarithmic loss function, accuracy, sensitivity, specificity, precision, and F1 score were used to evaluate the predictive performances of the six models.\n\n\nRESULTS\nThe LightGBM model exhibited the highest AUC (0.940, 95% CI: 0.900-0.980), which was significantly higher than that of LR (0.842, 95% CI: 0.783-0.901), SVM (0.834, 95% CI: 0.774-0.894) and ANN (0.890, 95% CI: 0.836-0.944), but did not differ significantly from that of RF (0.926, 95% CI: 0.879-0.974) and XGBoost (0.930, 95% CI: 0.878-0.982). In addition, the LightGBM has the optimal logarithmic loss function (0.218), accuracy (91.3%), specificity (94.1%), precision (0.695), and F1 score (0.725). Feature importance identification indicated that the contribution rate of meteorological conditions and air pollutants for the prediction was 32 and 43%, respectively.\n\n\nCONCLUSION\nThis study suggests that ensemble learning models, especially the LightGBM model, can be used to effectively predict the peak events of CVDs admissions, and therefore could be a very useful decision-making tool for medical resource management."
},
{
"pmid": "33693354",
"title": "Identifying Dynamic Memory Effects on Vegetation State Using Recurrent Neural Networks.",
"abstract": "Vegetation state is largely driven by climate and the complexity of involved processes leads to non-linear interactions over multiple time-scales. Recently, the role of temporally lagged dependencies, so-called memory effects, has been emphasized and studied using data-driven methods, relying on a vast amount of Earth observation and climate data. However, the employed models are often not able to represent the highly non-linear processes and do not represent time explicitly. Thus, data-driven study of vegetation dynamics demands new approaches that are able to model complex sequences. The success of Recurrent Neural Networks (RNNs) in other disciplines dealing with sequential data, such as Natural Language Processing, suggests adoption of this method for Earth system sciences. Here, we used a Long Short-Term Memory (LSTM) architecture to fit a global model for Normalized Difference Vegetation Index (NDVI), a proxy for vegetation state, by using climate time-series and static variables representing soil properties and land cover as predictor variables. Furthermore, a set of permutation experiments was performed with the objective to identify memory effects and to better understand the scales on which they act under different environmental conditions. This was done by comparing models that have limited access to temporal context, which was achieved through sequence permutation during model training. We performed a cross-validation with spatio-temporal blocking to deal with the auto-correlation present in the data and to increase the generalizability of the findings. With a full temporal model, global NDVI was predicted with R 2 of 0.943 and RMSE of 0.056. The temporal model explained 14% more variance than the non-memory model on global level. The strongest differences were found in arid and semiarid regions, where the improvement was up to 25%. Our results show that memory effects matter on global scale, with the strongest effects occurring in sub-tropical and transitional water-driven biomes."
},
{
"pmid": "33122678",
"title": "Urban and air pollution: a multi-city study of long-term effects of urban landscape patterns on air quality trends.",
"abstract": "Most air pollution research has focused on assessing the urban landscape effects of pollutants in megacities, little is known about their associations in small- to mid-sized cities. Considering that the biggest urban growth is projected to occur in these smaller-scale cities, this empirical study identifies the key urban form determinants of decadal-long fine particulate matter (PM2.5) trends in all 626 Chinese cities at the county level and above. As the first study of its kind, this study comprehensively examines the urban form effects on air quality in cities of different population sizes, at different development levels, and in different spatial-autocorrelation positions. Results demonstrate that the urban form evolution has long-term effects on PM2.5 level, but the dominant factors shift over the urbanization stages: area metrics play a role in PM2.5 trends of small-sized cities at the early urban development stage, whereas aggregation metrics determine such trends mostly in mid-sized cities. For large cities exhibiting a higher degree of urbanization, the spatial connectedness of urban patches is positively associated with long-term PM2.5 level increases. We suggest that, depending on the city's developmental stage, different aspects of the urban form should be emphasized to achieve long-term clean air goals."
},
{
"pmid": "32041079",
"title": "A machine-learning framework for predicting multiple air pollutants' concentrations via multi-target regression and feature selection.",
"abstract": "Air pollution is considered one of the biggest threats for the ecological system and human existence. Therefore, air quality monitoring has become a necessity in urban and industrial areas. Recently, the emergence of Machine Learning techniques justifies the application of statistical approaches for environmental modeling, especially in air quality forecasting. In this context, we propose a novel feature ranking method, termed as Ensemble of Regressor Chains-guided Feature Ranking (ERCFR) to forecast multiple air pollutants simultaneously over two cities. This approach is based on a combination of one of the most powerful ensemble methods for Multi-Target Regression problems (Ensemble of Regressor Chains) and the Random Forest permutation importance measure. Thus, feature selection allowed the model to obtain the best results with a restricted subset of features. The experimental results reveal the superiority of the proposed approach compared to other state-of-the-art methods, although some cautions have to be considered to improve the runtime performance and to decrease its sensitivity over extreme and outlier values."
},
{
"pmid": "33416835",
"title": "Artificial Intelligence and Behavioral Science Through the Looking Glass: Challenges for Real-World Application.",
"abstract": "BACKGROUND\nArtificial Intelligence (AI) is transforming the process of scientific research. AI, coupled with availability of large datasets and increasing computational power, is accelerating progress in areas such as genetics, climate change and astronomy [NeurIPS 2019 Workshop Tackling Climate Change with Machine Learning, Vancouver, Canada; Hausen R, Robertson BE. Morpheus: A deep learning framework for the pixel-level analysis of astronomical image data. Astrophys J Suppl Ser. 2020;248:20; Dias R, Torkamani A. AI in clinical and genomic diagnostics. Genome Med. 2019;11:70.]. The application of AI in behavioral science is still in its infancy and realizing the promise of AI requires adapting current practices.\n\n\nPURPOSES\nBy using AI to synthesize and interpret behavior change intervention evaluation report findings at a scale beyond human capability, the HBCP seeks to improve the efficiency and effectiveness of research activities. We explore challenges facing AI adoption in behavioral science through the lens of lessons learned during the Human Behaviour-Change Project (HBCP).\n\n\nMETHODS\nThe project used an iterative cycle of development and testing of AI algorithms. Using a corpus of published research reports of randomized controlled trials of behavioral interventions, behavioral science experts annotated occurrences of interventions and outcomes. AI algorithms were trained to recognize natural language patterns associated with interventions and outcomes from the expert human annotations. Once trained, the AI algorithms were used to predict outcomes for interventions that were checked by behavioral scientists.\n\n\nRESULTS\nIntervention reports contain many items of information needing to be extracted and these are expressed in hugely variable and idiosyncratic language used in research reports to convey information makes developing algorithms to extract all the information with near perfect accuracy impractical. However, statistical matching algorithms combined with advanced machine learning approaches created reasonably accurate outcome predictions from incomplete data.\n\n\nCONCLUSIONS\nAI holds promise for achieving the goal of predicting outcomes of behavior change interventions, based on information that is automatically extracted from intervention evaluation reports. This information can be used to train knowledge systems using machine learning and reasoning algorithms."
},
{
"pmid": "26886976",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.",
"abstract": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks."
},
{
"pmid": "29330699",
"title": "Accounting of GHG emissions and removals from forest management: a long road from Kyoto to Paris.",
"abstract": "BACKGROUND\nForests have always played an important role in agreeing on accounting rules during the past two decades of international climate policy development. Starting from activity-based gross-net accounting of selected forestry activities to mandatory accounting against a baseline-rules have changed quite rapidly and with significant consequences for accounted credits and debits. Such changes have direct consequences on incentives for climate-investments in forestry. There have also been strong arguments not to include forests into the accounting system by considering large uncertainties, procedural challenges and a fear of unearned credits corrupting the overall accounting system, among others. This paper reflects the development of respective accounting approaches and reviews the progress made on core challenges and resulting incentives.\n\n\nMAIN TEXT\nThe historic development of forest management accounting rules is analysed in the light of the Paris Agreement. Pros and cons of different approaches are discussed with specific focus on the challenge to maintain integrity of the accounting approach and on resulting incentives for additional human induced investments to increase growth for future substitution and increased C storage by forest management. The review is solely based on scientific publications and official IPCC and UNFCC documents. Some rather political statements of non-scientific stakeholders are considered to reflect criticism. Such sources are indicated accordingly. Remaining and emerging requirements for an accounting system for post 2030 are highlighted.\n\n\nCONCLUSIONS\nThe Paris Agreement is interpreted as a \"game changer\" for the role of forests in climate change mitigation. Many countries rely on forests in their NDCs to achieve their self-set targets. In fact, the agreement \"to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century\" puts pressure on the entire land sector to contribute to overall GHG emission reductions. This also concerns forests as a resource for the bio-based economy and wood products, and for increasing carbon reservoirs. By discussing the existing elements of forest accounting rules and conditions for establishing an accounting system post 2030, it is concluded that core requirements like factoring out direct human-induced from indirect human-induced and natural impacts on managed lands, a facilitation of incentives for management changes and providing safeguards for the integrity of the accounting system are not sufficiently secured by currently discussed accounting rules. A responsibility to fulfil these basic requirements is transferred to Nationally Determined Contributions. Increased incentives for additional human induced investments are not stipulated by the accounting approach but rather by the political decision to make use of the substitution effect and potential net removals from LULUCF to contribute to self-set targets."
},
{
"pmid": "31007387",
"title": "Recurrently exploring class-wise attention in a hybrid convolutional and bidirectional LSTM network for multi-label aerial image classification.",
"abstract": "Aerial image classification is of great significance in the remote sensing community, and many researches have been conducted over the past few years. Among these studies, most of them focus on categorizing an image into one semantic label, while in the real world, an aerial image is often associated with multiple labels, e.g., multiple object-level labels in our case. Besides, a comprehensive picture of present objects in a given high-resolution aerial image can provide a more in-depth understanding of the studied region. For these reasons, aerial image multi-label classification has been attracting increasing attention. However, one common limitation shared by existing methods in the community is that the co-occurrence relationship of various classes, so-called class dependency, is underexplored and leads to an inconsiderate decision. In this paper, we propose a novel end-to-end network, namely class-wise attention-based convolutional and bidirectional LSTM network (CA-Conv-BiLSTM), for this task. The proposed network consists of three indispensable components: (1) a feature extraction module, (2) a class attention learning layer, and (3) a bidirectional LSTM-based sub-network. Particularly, the feature extraction module is designed for extracting fine-grained semantic feature maps, while the class attention learning layer aims at capturing discriminative class-specific features. As the most important part, the bidirectional LSTM-based sub-network models the underlying class dependency in both directions and produce structured multiple object labels. Experimental results on UCM multi-label dataset and DFC15 multi-label dataset validate the effectiveness of our model quantitatively and qualitatively."
}
] |
Frontiers in Physiology | null | PMC8993503 | 10.3389/fphys.2022.760000 | WaSP-ECG: A Wave Segmentation Pretraining Toolkit for Electrocardiogram Analysis | Introduction: Representation learning allows artificial intelligence (AI) models to learn useful features from large, unlabelled datasets. This can reduce the need for labelled data across a range of downstream tasks. It was hypothesised that wave segmentation would be a useful form of electrocardiogram (ECG) representation learning. In addition to reducing labelled data requirements, segmentation masks may provide a mechanism for explainable AI. This study details the development and evaluation of a Wave Segmentation Pretraining (WaSP) application. Materials and Methods: Pretraining: A non-AI-based ECG signal and image simulator was developed to generate ECGs and wave segmentation masks. U-Net models were trained to segment waves from synthetic ECGs. Dataset: The raw sample files from the PTB-XL dataset were downloaded. Each ECG was also plotted into an image. Fine-tuning and evaluation: A hold-out approach was used with a 60:20:20 training/validation/test set split. The encoder portions of the U-Net models were fine-tuned to classify PTB-XL ECGs for two tasks: sinus rhythm (SR) vs atrial fibrillation (AF), and myocardial infarction (MI) vs normal ECGs. The fine-tuning was repeated without pretraining. Results were compared. Explainable AI: an example pipeline combining AI-derived segmentation masks and a rule-based AF detector was developed and evaluated. Results: WaSP consistently improved model performance on downstream tasks for both ECG signals and images. The difference between non-pretrained models and models pretrained for wave segmentation was particularly marked for ECG image analysis. A selection of segmentation masks are shown. An AF detection algorithm comprising both AI and rule-based components performed less well than end-to-end AI models but its outputs are proposed to be highly explainable. An example output is shown. Conclusion: WaSP using synthetic data and labels allows AI models to learn useful features for downstream ECG analysis with real-world data. Segmentation masks provide an intermediate output that may facilitate confidence calibration in the context of end-to-end AI. It is possible to combine AI-derived segmentation masks and rule-based diagnostic classifiers for explainable ECG analysis. | Related Work
Overcoming Data Paucity
Representation learning (RL) lessens the need for labelled training data. In RL, an AI model is trained for a task that forces it to learn useful “latent representations” of the data without manually assigned labels. This can be hard to intuit for non-data scientists, and a full explanation is beyond the scope of this article. Interested readers are directed to an excellent review by Bengio et al. (2013). RL is used for some of the most sophisticated AI models in existence today (Vaswani et al., 2017). Models are pre-trained using RL and then fine-tuned for specific tasks using labelled training data, which is to say that they undergo a further training period for a specific task with constraints placed upon the rate at which they learn (Komodakis and Gidaris, 2018). The constrained learning rate means that the fine-tuning period serves to refine the latent representations acquired during pretraining, rather than simply overwriting previous representations with new ones.
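As a generic illustration of such constrained fine-tuning (a sketch written for this review, not the training code used in this study), the pretrained encoder can simply be given a much smaller learning rate than the newly attached classification head; the architecture, learning rates, and input shapes below are assumptions for illustration only.

```python
# Generic fine-tuning sketch: a pretrained encoder is refined with a small learning rate
# while a freshly initialised classification head is trained with a larger one.
import torch
import torch.nn as nn

encoder = nn.Sequential(                      # stand-in for a pretrained U-Net encoder
    nn.Conv1d(12, 32, kernel_size=7, padding=3), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten())
head = nn.Linear(32, 2)                       # e.g. sinus rhythm vs atrial fibrillation

optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-5},   # constrained rate: refine, do not overwrite
    {"params": head.parameters(), "lr": 1e-3},      # larger rate for the new head
])

x = torch.randn(4, 12, 5000)                  # hypothetical batch of 12-lead, 10 s ECGs at 500 Hz
loss = nn.CrossEntropyLoss()(head(encoder(x)), torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
```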
This latter phenomenon is known as “catastrophic forgetting” (Kirkpatrick et al., 2017). (A minimal illustrative sketch of such constrained fine-tuning is given after this record's reference list.) Representation learning has been investigated in the domain of ECG interpretation by only a small number of studies. A recent example is from Sarkar and Etemad (2020), who tasked a model with identifying which augmentations had been applied to ECG signals, such as the addition of Gaussian noise or signal flipping. This reduced the need for labelled data when fine-tuning for downstream tasks. However, this remains a sparsely explored topic to date. | [
"33256715",
"31378392",
"23787338",
"30122457",
"31668636",
"30617320",
"31934645",
"33271204",
"31932992",
"28292907",
"34083689",
"31516126",
"32370835",
"20851409",
"31913322",
"34001319",
"26620728",
"31911652",
"33526938",
"31896794"
] | [
{
"pmid": "33256715",
"title": "Explainability for artificial intelligence in healthcare: a multidisciplinary perspective.",
"abstract": "BACKGROUND\nExplainability is one of the most heavily debated topics when it comes to the application of artificial intelligence (AI) in healthcare. Even though AI-driven systems have been shown to outperform humans in certain analytical tasks, the lack of explainability continues to spark criticism. Yet, explainability is not a purely technological issue, instead it invokes a host of medical, legal, ethical, and societal questions that require thorough exploration. This paper provides a comprehensive assessment of the role of explainability in medical AI and makes an ethical evaluation of what explainability means for the adoption of AI-driven tools into clinical practice.\n\n\nMETHODS\nTaking AI-based clinical decision support systems as a case in point, we adopted a multidisciplinary approach to analyze the relevance of explainability for medical AI from the technological, legal, medical, and patient perspectives. Drawing on the findings of this conceptual analysis, we then conducted an ethical assessment using the \"Principles of Biomedical Ethics\" by Beauchamp and Childress (autonomy, beneficence, nonmaleficence, and justice) as an analytical framework to determine the need for explainability in medical AI.\n\n\nRESULTS\nEach of the domains highlights a different set of core considerations and values that are relevant for understanding the role of explainability in clinical practice. From the technological point of view, explainability has to be considered both in terms how it can be achieved and what is beneficial from a development perspective. When looking at the legal perspective we identified informed consent, certification and approval as medical devices, and liability as core touchpoints for explainability. Both the medical and patient perspectives emphasize the importance of considering the interplay between human actors and medical AI. We conclude that omitting explainability in clinical decision support systems poses a threat to core ethical values in medicine and may have detrimental consequences for individual and public health.\n\n\nCONCLUSIONS\nTo ensure that medical AI lives up to its promises, there is a need to sensitize developers, healthcare professionals, and legislators to the challenges and limitations of opaque algorithms in medical AI and to foster multidisciplinary collaboration moving forward."
},
{
"pmid": "31378392",
"title": "An artificial intelligence-enabled ECG algorithm for the identification of patients with atrial fibrillation during sinus rhythm: a retrospective analysis of outcome prediction.",
"abstract": "BACKGROUND\nAtrial fibrillation is frequently asymptomatic and thus underdetected but is associated with stroke, heart failure, and death. Existing screening methods require prolonged monitoring and are limited by cost and low yield. We aimed to develop a rapid, inexpensive, point-of-care means of identifying patients with atrial fibrillation using machine learning.\n\n\nMETHODS\nWe developed an artificial intelligence (AI)-enabled electrocardiograph (ECG) using a convolutional neural network to detect the electrocardiographic signature of atrial fibrillation present during normal sinus rhythm using standard 10-second, 12-lead ECGs. We included all patients aged 18 years or older with at least one digital, normal sinus rhythm, standard 10-second, 12-lead ECG acquired in the supine position at the Mayo Clinic ECG laboratory between Dec 31, 1993, and July 21, 2017, with rhythm labels validated by trained personnel under cardiologist supervision. We classified patients with at least one ECG with a rhythm of atrial fibrillation or atrial flutter as positive for atrial fibrillation. We allocated ECGs to the training, internal validation, and testing datasets in a 7:1:2 ratio. We calculated the area under the curve (AUC) of the receiver operatoring characteristic curve for the internal validation dataset to select a probability threshold, which we applied to the testing dataset. We evaluated model performance on the testing dataset by calculating the AUC and the accuracy, sensitivity, specificity, and F1 score with two-sided 95% CIs.\n\n\nFINDINGS\nWe included 180 922 patients with 649 931 normal sinus rhythm ECGs for analysis: 454 789 ECGs recorded from 126 526 patients in the training dataset, 64 340 ECGs from 18 116 patients in the internal validation dataset, and 130 802 ECGs from 36 280 patients in the testing dataset. 3051 (8·4%) patients in the testing dataset had verified atrial fibrillation before the normal sinus rhythm ECG tested by the model. A single AI-enabled ECG identified atrial fibrillation with an AUC of 0·87 (95% CI 0·86-0·88), sensitivity of 79·0% (77·5-80·4), specificity of 79·5% (79·0-79·9), F1 score of 39·2% (38·1-40·3), and overall accuracy of 79·4% (79·0-79·9). Including all ECGs acquired during the first month of each patient's window of interest (ie, the study start date or 31 days before the first recorded atrial fibrillation ECG) increased the AUC to 0·90 (0·90-0·91), sensitivity to 82·3% (80·9-83·6), specificity to 83·4% (83·0-83·8), F1 score to 45·4% (44·2-46·5), and overall accuracy to 83·3% (83·0-83·7).\n\n\nINTERPRETATION\nAn AI-enabled ECG acquired during normal sinus rhythm permits identification at point of care of individuals with atrial fibrillation.\n\n\nFUNDING\nNone."
},
{
"pmid": "23787338",
"title": "Representation learning: a review and new perspectives.",
"abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."
},
{
"pmid": "30122457",
"title": "Automation bias in medicine: The influence of automated diagnoses on interpreter accuracy and uncertainty when reading electrocardiograms.",
"abstract": "INTRODUCTION\nInterpretation of the 12‑lead Electrocardiogram (ECG) is normally assisted with an automated diagnosis (AD), which can facilitate an 'automation bias' where interpreters can be anchored. In this paper, we studied, 1) the effect of an incorrect AD on interpretation accuracy and interpreter confidence (a proxy for uncertainty), and 2) whether confidence and other interpreter features can predict interpretation accuracy using machine learning.\n\n\nMETHODS\nThis study analysed 9000 ECG interpretations from cardiology and non-cardiology fellows (CFs and non-CFs). One third of the ECGs involved no ADs, one third with ADs (half as incorrect) and one third had multiple ADs. Interpretations were scored and interpreter confidence was recorded for each interpretation and subsequently standardised using sigma scaling. Spearman coefficients were used for correlation analysis and C5.0 decision trees were used for predicting interpretation accuracy using basic interpreter features such as confidence, age, experience and designation.\n\n\nRESULTS\nInterpretation accuracies achieved by CFs and non-CFs dropped by 43.20% and 58.95% respectively when an incorrect AD was presented (p < 0.001). Overall correlation between scaled confidence and interpretation accuracy was higher amongst CFs. However, correlation between confidence and interpretation accuracy decreased for both groups when an incorrect AD was presented. We found that an incorrect AD disturbs the reliability of interpreter confidence in predicting accuracy. An incorrect AD has a greater effect on the confidence of non-CFs (although this is not statistically significant it is close to the threshold, p = 0.065). The best C5.0 decision tree achieved an accuracy rate of 64.67% (p < 0.001), however this is only 6.56% greater than the no-information-rate.\n\n\nCONCLUSION\nIncorrect ADs reduce the interpreter's diagnostic accuracy indicating an automation bias. Non-CFs tend to agree more with the ADs in comparison to CFs, hence less expert physicians are more effected by automation bias. Incorrect ADs reduce the interpreter's confidence and also reduces the predictive power of confidence for predicting accuracy (even more so for non-CFs). Whilst a statistically significant model was developed, it is difficult to predict interpretation accuracy using machine learning on basic features such as interpreter confidence, age, reader experience and designation."
},
{
"pmid": "30617320",
"title": "Cardiologist-level arrhythmia detection and classification in ambulatory electrocardiograms using a deep neural network.",
"abstract": "Computerized electrocardiogram (ECG) interpretation plays a critical role in the clinical ECG workflow1. Widely available digital ECG data and the algorithmic paradigm of deep learning2 present an opportunity to substantially improve the accuracy and scalability of automated ECG analysis. However, a comprehensive evaluation of an end-to-end deep learning approach for ECG analysis across a wide variety of diagnostic classes has not been previously reported. Here, we develop a deep neural network (DNN) to classify 12 rhythm classes using 91,232 single-lead ECGs from 53,549 patients who used a single-lead ambulatory ECG monitoring device. When validated against an independent test dataset annotated by a consensus committee of board-certified practicing cardiologists, the DNN achieved an average area under the receiver operating characteristic curve (ROC) of 0.97. The average F1 score, which is the harmonic mean of the positive predictive value and sensitivity, for the DNN (0.837) exceeded that of average cardiologists (0.780). With specificity fixed at the average specificity achieved by cardiologists, the sensitivity of the DNN exceeded the average cardiologist sensitivity for all rhythm classes. These findings demonstrate that an end-to-end deep learning approach can classify a broad range of distinct arrhythmias from single-lead ECGs with high diagnostic performance similar to that of cardiologists. If confirmed in clinical settings, this approach could reduce the rate of misdiagnosed computerized ECG interpretations and improve the efficiency of expert human ECG interpretation by accurately triaging or prioritizing the most urgent conditions."
},
{
"pmid": "31934645",
"title": "Blockchain vehicles for efficient Medical Record management.",
"abstract": "The lack of interoperability in Britain's medical records systems precludes the realisation of benefits generated by increased spending elsewhere in healthcare. Growing concerns regarding the security of online medical data following breaches, and regarding regulations governing data ownership, mandate strict parameters in the development of efficient methods to administrate medical records. Furthermore, consideration must be placed on the rise of connected devices, which vastly increase the amount of data that can be collected in order to improve a patient's long-term health outcomes. Increasing numbers of healthcare systems are developing Blockchain-based systems to manage medical data. A Blockchain is a decentralised, continuously growing online ledger of records, validated by members of the network. Traditionally used to manage cryptocurrency records, distributed ledger technology can be applied to various aspects of healthcare. In this manuscript, we focus on how Electronic Medical Records in particular can be managed by Blockchain, and how the introduction of this novel technology can create a more efficient and interoperable infrastructure to manage records that leads to improved healthcare outcomes, while maintaining patient data ownership and without compromising privacy or security of sensitive data."
},
{
"pmid": "33271204",
"title": "Explainable artificial intelligence to detect atrial fibrillation using electrocardiogram.",
"abstract": "INTRODUCTION\nEarly detection and intervention of atrial fibrillation (AF) is a cornerstone for effective treatment and prevention of mortality. Diverse deep learning models (DLMs) have been developed, but they could not be applied in clinical practice owing to their lack of interpretability. We developed an explainable DLM to detect AF using ECG and validated its performance using diverse formats of ECG.\n\n\nMETHODS\nWe conducted a retrospective study. The Sejong ECG dataset comprising 128,399 ECGs was used to develop and internally validated the explainable DLM. DLM was developed with two feature modules, which could describe the reason for DLM decisions. DLM was external validated using data from 21,837, 10,605, and 8528 ECGs from PTB-XL, Chapman, and PhysioNet non-restricted datasets, respectively. The predictor variables were digitally stored ECGs, and the endpoints were AFs.\n\n\nRESULTS\nDuring internal and external validation of the DLM, the area under the receiver operating characteristic curves (AUCs) of the DLM using a 12‑lead ECG in detecting AF were 0.997-0.999. The AUCs of the DLM with VAE using a 6‑lead and single‑lead ECG were 0.990-0.999. The AUCs of explainability about features such as rhythm irregularity and absence of P-wave were 0.961-0.993 and 0.983-0.993, respectively.\n\n\nCONCLUSIONS\nOur DLM successfully detected AF using diverse ECGs and described the reason for this decision. The results indicated that an explainable artificial intelligence methodology could be adopted to the DLM using ECG and enhance the transparency of the DLM for its application in clinical practice."
},
{
"pmid": "31932992",
"title": "Harnessing Cardiac Regeneration as a Potential Therapeutic Strategy for AL Cardiac Amyloidosis.",
"abstract": "PURPOSE OF REVIEW\nCardiac regeneration has received much attention as a possible means to treat various forms of cardiac injury. This review will explore the field of cardiac regeneration by highlighting the existing animal models, describing the involved molecular pathways, and discussing attempts to harness cardiac regeneration to treat cardiomyopathies.\n\n\nRECENT FINDINGS\nLight chain cardiac amyloidosis is a degenerative disease characterized by progressive heart failure due to amyloid fibril deposition and light chain-mediated cardiotoxicity. Recent findings in a zebrafish model of light chain amyloidosis suggest that cardiac regenerative confers a protective effect against this disease. Cardiac regeneration remains an intriguing potential tool for treating cardiovascular disease. Degenerative diseases, such as light chain cardiac amyloidosis, may be particularly suited for therapeutic interventions that target cardiac regeneration. Further studies are needed to translate preclinical findings for cardiac regeneration into effective therapies."
},
{
"pmid": "28292907",
"title": "Overcoming catastrophic forgetting in neural networks.",
"abstract": "The ability to learn tasks in a sequential fashion is crucial to the development of artificial intelligence. Until now neural networks have not been capable of this and it has been widely thought that catastrophic forgetting is an inevitable feature of connectionist models. We show that it is possible to overcome this limitation and train networks that can maintain expertise on tasks that they have not experienced for a long time. Our approach remembers old tasks by selectively slowing down learning on the weights important for those tasks. We demonstrate our approach is scalable and effective by solving a set of classification tasks based on a hand-written digit dataset and by learning several Atari 2600 games sequentially."
},
{
"pmid": "31516126",
"title": "Fine-Tuning Bidirectional Encoder Representations From Transformers (BERT)-Based Models on Large-Scale Electronic Health Record Notes: An Empirical Study.",
"abstract": "BACKGROUND\nThe bidirectional encoder representations from transformers (BERT) model has achieved great success in many natural language processing (NLP) tasks, such as named entity recognition and question answering. However, little prior work has explored this model to be used for an important task in the biomedical and clinical domains, namely entity normalization.\n\n\nOBJECTIVE\nWe aim to investigate the effectiveness of BERT-based models for biomedical or clinical entity normalization. In addition, our second objective is to investigate whether the domains of training data influence the performances of BERT-based models as well as the degree of influence.\n\n\nMETHODS\nOur data was comprised of 1.5 million unlabeled electronic health record (EHR) notes. We first fine-tuned BioBERT on this large collection of unlabeled EHR notes. This generated our BERT-based model trained using 1.5 million electronic health record notes (EhrBERT). We then further fine-tuned EhrBERT, BioBERT, and BERT on three annotated corpora for biomedical and clinical entity normalization: the Medication, Indication, and Adverse Drug Events (MADE) 1.0 corpus, the National Center for Biotechnology Information (NCBI) disease corpus, and the Chemical-Disease Relations (CDR) corpus. We compared our models with two state-of-the-art normalization systems, namely MetaMap and disease name normalization (DNorm).\n\n\nRESULTS\nEhrBERT achieved 40.95% F1 in the MADE 1.0 corpus for mapping named entities to the Medical Dictionary for Regulatory Activities and the Systematized Nomenclature of Medicine-Clinical Terms (SNOMED-CT), which have about 380,000 terms. In this corpus, EhrBERT outperformed MetaMap by 2.36% in F1. For the NCBI disease corpus and CDR corpus, EhrBERT also outperformed DNorm by improving the F1 scores from 88.37% and 89.92% to 90.35% and 93.82%, respectively. Compared with BioBERT and BERT, EhrBERT outperformed them on the MADE 1.0 corpus and the CDR corpus.\n\n\nCONCLUSIONS\nOur work shows that BERT-based models have achieved state-of-the-art performance for biomedical and clinical entity normalization. BERT-based models can be readily fine-tuned to normalize any kind of named entities."
},
{
"pmid": "32370835",
"title": "Artificial Intelligence in Cardiology: Present and Future.",
"abstract": "Artificial intelligence (AI) is a nontechnical, popular term that refers to machine learning of various types but most often to deep neural networks. Cardiology is at the forefront of AI in medicine. For this review, we searched PubMed and MEDLINE databases with no date restriction using search terms related to AI and cardiology. Articles were selected for inclusion on the basis of relevance. We highlight the major achievements in recent years in nearly all areas of cardiology and underscore the mounting evidence suggesting how AI will take center stage in the field. Artificial intelligence requires a close collaboration among computer scientists, clinical investigators, clinicians, and other users in order to identify the most relevant problems to be solved. Best practices in the generation and implementation of AI include the selection of ideal data sources, taking into account common challenges during the interpretation, validation, and generalizability of findings, and addressing safety and ethical concerns before final implementation. The future of AI in cardiology and in medicine in general is bright as the collaboration between investigators and clinicians continues to excel."
},
{
"pmid": "20851409",
"title": "A review of electrocardiogram filtering.",
"abstract": "Analog filtering and digital signal processing algorithms in the preprocessing modules of an electrocardiographic device play a pivotal role in providing high-quality electrocardiogram (ECG) signals for analysis, interpretation, and presentation (display, printout, and storage). In this article, issues relating to inaccuracy of ECG preprocessing filters are investigated in the context of facilitating efficient ECG interpretation and diagnosis. The discussion covers 4 specific ECG preprocessing applications: anti-aliasing and upper-frequency cutoff, baseline wander suppression and lower-frequency cutoff, line frequency rejection, and muscle artifact reduction. Issues discussed include linear phase, aliasing, distortion, ringing, and attenuation of desired ECG signals. Due to the overlapping power spectrum of signal and noise in acquired ECG data, frequency selective filters must seek a delicate balance between noise removal and deformation of the desired signal. Most importantly, the filtering output should not adversely impact subsequent diagnosis and interpretation. Based on these discussions, several suggestions are made to improve and update existing ECG data preprocessing standards and guidelines."
},
{
"pmid": "31913322",
"title": "Re-epithelialization and immune cell behaviour in an ex vivo human skin model.",
"abstract": "A large body of literature is available on wound healing in humans. Nonetheless, a standardized ex vivo wound model without disruption of the dermal compartment has not been put forward with compelling justification. Here, we present a novel wound model based on application of negative pressure and its effects for epidermal regeneration and immune cell behaviour. Importantly, the basement membrane remained intact after blister roof removal and keratinocytes were absent in the wounded area. Upon six days of culture, the wound was covered with one to three-cell thick K14+Ki67+ keratinocyte layers, indicating that proliferation and migration were involved in wound closure. After eight to twelve days, a multi-layered epidermis was formed expressing epidermal differentiation markers (K10, filaggrin, DSG-1, CDSN). Investigations about immune cell-specific manners revealed more T cells in the blister roof epidermis compared to normal epidermis. We identified several cell populations in blister roof epidermis and suction blister fluid that are absent in normal epidermis which correlated with their decrease in the dermis, indicating a dermal efflux upon negative pressure. Together, our model recapitulates the main features of epithelial wound regeneration, and can be applied for testing wound healing therapies and investigating underlying mechanisms."
},
{
"pmid": "34001319",
"title": "CEFEs: A CNN Explainable Framework for ECG Signals.",
"abstract": "In the healthcare domain, trust, confidence, and functional understanding are critical for decision support systems, therefore, presenting challenges in the prevalent use of black-box deep learning (DL) models. With recent advances in deep learning methods for classification tasks, there is an increased use of deep learning in healthcare decision support systems, such as detection and classification of abnormal Electrocardiogram (ECG) signals. Domain experts seek to understand the functional mechanism of black-box models with an emphasis on understanding how these models arrive at specific classification of patient medical data. In this paper, we focus on ECG data as the healthcare data signal to be analyzed. Since ECG is a one-dimensional time-series data, we target 1D-CNN (Convolutional Neural Networks) as the candidate DL model. Majority of existing interpretation and explanations research has been on 2D-CNN models in non-medical domain leaving a gap in terms of explanation of CNN models used on medical time-series data. Hence, we propose a modular framework, CNN Explanations Framework for ECG Signals (CEFEs), for interpretable explanations. Each module of CEFEs provides users with the functional understanding of the underlying CNN models in terms of data descriptive statistics, feature visualization, feature detection, and feature mapping. The modules evaluate a model's capacity while inherently accounting for correlation between learned features and raw signals which translates to correlation between model's capacity to classify and it's learned features. Explainable models such as CEFEs could be evaluated in different ways: training one deep learning architecture on different volumes/amounts of the same dataset, training different architectures on the same data set or a combination of different CNN architectures and datasets. In this paper, we choose to evaluate CEFEs extensively by training on different volumes of datasets with the same CNN architecture. The CEFEs' interpretations, in terms of quantifiable metrics, feature visualization, provide explanation as to the quality of the deep learning model where traditional performance metrics (such as precision, recall, accuracy, etc.) do not suffice."
},
{
"pmid": "26620728",
"title": "Eyewitness to history: Landmarks in the development of computerized electrocardiography.",
"abstract": "The use of digital computers for ECG processing was pioneered in the early 1960s by two immigrants to the US, Hubert Pipberger, who initiated a collaborative VA project to collect an ECG-independent Frank lead data base, and Cesar Caceres at NIH who selected for his ECAN program standard 12-lead ECGs processed as single leads. Ray Bonner in the early 1970s placed his IBM 5880 program in a cart to print ECGs with interpretation, and computer-ECG programs were developed by Telemed, Marquette, HP-Philips and Mortara. The \"Common Standards for quantitative Electrocardiography (CSE)\" directed by Jos Willems evaluated nine ECG programs and eight cardiologists in clinically-defined categories. The total accuracy by a representative \"average\" cardiologist (75.5%) was 5.8% higher than that of the average program (69.7, p<0.001). Future comparisons of computer-based and expert reader performance are likely to show evolving results with continuing improvement of computer-ECG algorithms and changing expertise of ECG interpreters."
},
{
"pmid": "31911652",
"title": "U1 snRNP regulates cancer cell migration and invasion in vitro.",
"abstract": "Stimulated cells and cancer cells have widespread shortening of mRNA 3'-untranslated regions (3'UTRs) and switches to shorter mRNA isoforms due to usage of more proximal polyadenylation signals (PASs) in introns and last exons. U1 snRNP (U1), vertebrates' most abundant non-coding (spliceosomal) small nuclear RNA, silences proximal PASs and its inhibition with antisense morpholino oligonucleotides (U1 AMO) triggers widespread premature transcription termination and mRNA shortening. Here we show that low U1 AMO doses increase cancer cells' migration and invasion in vitro by up to 500%, whereas U1 over-expression has the opposite effect. In addition to 3'UTR length, numerous transcriptome changes that could contribute to this phenotype are observed, including alternative splicing, and mRNA expression levels of proto-oncogenes and tumor suppressors. These findings reveal an unexpected role for U1 homeostasis (available U1 relative to transcription) in oncogenic and activated cell states, and suggest U1 as a potential target for their modulation."
},
{
"pmid": "33526938",
"title": "Artificial intelligence-enhanced electrocardiography in cardiovascular disease management.",
"abstract": "The application of artificial intelligence (AI) to the electrocardiogram (ECG), a ubiquitous and standardized test, is an example of the ongoing transformative effect of AI on cardiovascular medicine. Although the ECG has long offered valuable insights into cardiac and non-cardiac health and disease, its interpretation requires considerable human expertise. Advanced AI methods, such as deep-learning convolutional neural networks, have enabled rapid, human-like interpretation of the ECG, while signals and patterns largely unrecognizable to human interpreters can be detected by multilayer AI networks with precision, making the ECG a powerful, non-invasive biomarker. Large sets of digital ECGs linked to rich clinical data have been used to develop AI models for the detection of left ventricular dysfunction, silent (previously undocumented and asymptomatic) atrial fibrillation and hypertrophic cardiomyopathy, as well as the determination of a person's age, sex and race, among other phenotypes. The clinical and population-level implications of AI-based ECG phenotyping continue to emerge, particularly with the rapid rise in the availability of mobile and wearable ECG technologies. In this Review, we summarize the current and future state of the AI-enhanced ECG in the detection of cardiovascular disease in at-risk populations, discuss its implications for clinical decision-making in patients with cardiovascular disease and critically appraise potential limitations and unknowns."
},
{
"pmid": "31896794",
"title": "The GenTree Dendroecological Collection, tree-ring and wood density data from seven tree species across Europe.",
"abstract": "The dataset presented here was collected by the GenTree project (EU-Horizon 2020), which aims to improve the use of forest genetic resources across Europe by better understanding how trees adapt to their local environment. This dataset of individual tree-core characteristics including ring-width series and whole-core wood density was collected for seven ecologically and economically important European tree species: silver birch (Betula pendula), European beech (Fagus sylvatica), Norway spruce (Picea abies), European black poplar (Populus nigra), maritime pine (Pinus pinaster), Scots pine (Pinus sylvestris), and sessile oak (Quercus petraea). Tree-ring width measurements were obtained from 3600 trees in 142 populations and whole-core wood density was measured for 3098 trees in 125 populations. This dataset covers most of the geographical and climatic range occupied by the selected species. The potential use of it will be highly valuable for assessing ecological and evolutionary responses to environmental conditions as well as for model development and parameterization, to predict adaptability under climate change scenarios."
}
] |
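The related-work field of the record above describes pretraining followed by fine-tuning under a constrained learning rate, so that the pretrained representations are refined rather than overwritten (avoiding catastrophic forgetting). The following is a minimal, hypothetical PyTorch-style sketch of that idea only; the encoder, classification head, learning rates, and input sizes are illustrative placeholders and are not the configuration used in the WaSP-ECG work.

```python
import torch
import torch.nn as nn

# Stand-in modules: a (notionally pretrained) 1D ECG encoder and a new task head.
encoder = nn.Sequential(
    nn.Conv1d(12, 32, kernel_size=7, padding=3),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
)
head = nn.Linear(32, 2)  # e.g., sinus rhythm vs atrial fibrillation

# Constrained fine-tuning: the pretrained encoder receives a much smaller
# learning rate than the freshly initialised head, so its representations
# are refined rather than overwritten.
optimizer = torch.optim.Adam([
    {"params": encoder.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 12, 5000)      # dummy batch: 8 ECGs, 12 leads, 5000 samples
y = torch.randint(0, 2, (8,))     # dummy binary labels
optimizer.zero_grad()
loss = criterion(head(encoder(x)), y)
loss.backward()
optimizer.step()
```

Freezing the encoder entirely is the limiting case of this scheme; a small non-zero encoder learning rate is the middle ground described in the text.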
Frontiers in Artificial Intelligence | null | PMC8993509 | 10.3389/frai.2022.798892 | Utility of Crowdsourced User Experiments for Measuring the Central Tendency of User Performance: A Case of Error-Rate Model Evaluation in a Pointing Task | The usage of crowdsourcing to recruit numerous participants has been recognized as beneficial in the human-computer interaction (HCI) field, such as for designing user interfaces and validating user performance models. In this work, we investigate its effectiveness for evaluating an error-rate prediction model in target pointing tasks. In contrast to models for operational times, a clicking error (i.e., missing a target) occurs by chance at a certain probability, e.g., 5%. Therefore, in traditional laboratory-based experiments, a lot of repetitions are needed to measure the central tendency of error rates. We hypothesize that recruiting many workers would enable us to keep the number of repetitions per worker much smaller. We collected data from 384 workers and found that existing models on operational time and error rate showed good fits (both R2 > 0.95). A simulation where we changed the number of participants NP and the number of repetitions Nrepeat showed that the time prediction model was robust against small NP and Nrepeat, although the error-rate model fitness was considerably degraded. These findings empirically demonstrate a new utility of crowdsourced user experiments for collecting numerous participants, which should be of great use to HCI researchers for their evaluation studies. | 2. Related Work2.1. Time Prediction for Pointing TasksFor comparing the sensitivity of time and error-rate prediction models against NP and Nrepeat, we examine a robust time-prediction model, called Fitts' law (Fitts, 1954). According to this model, the time for the first click, or movement time MT, to point to a target is linearly related to the index of difficulty ID measured in bits:
(1)
MT = a + b·ID = a + b·log₂(A/W + 1),
where a and b are empirical regression constants, A is the target distance (or amplitude), and W is its width (see Figure 1A). There are numerous formulae for calculating the ID, such as using a square root instead of the logarithm or using the effective target width (Plamondon and Alimi, 1997), but previous studies have shown that Equation 1 yields excellent model fitness (Soukoreff and MacKenzie, 2004). Using this Fitts' law, researchers can measure MTs for several {A, W} conditions, regress the data to compute a and b, and then predict the MT for a new {A, W} condition by applying the parameters of {a, b, A, W} to Equation 1.Figure 1(A) We use the Fitts' law paradigm in which users point to a vertically long target. A clicked position is illustrated with an “x” mark. (B) It has been assumed that the click positions recorded in many trials distribute normally, and its variability would increase with the target width. (C) An error rate is computed based on the probability where a click falls outside the target.2.2. Error-Rate Prediction for Pointing TasksResearchers have also tried to derive models to predict the error rate ER (Meyer et al., 1988; Wobbrock et al., 2008; Park and Lee, 2018). In practice, the ER should increase as participants move faster, and vice versa (Zhai et al., 2004; Batmaz and Stuerzlinger, 2021). In typical target pointing experiments, participants are instructed to “point to the target as quickly and accurately as possible,” which is intended to balance the speed and carefulness to decrease both MT and ER (MacKenzie, 1992; Soukoreff and MacKenzie, 2004).In pointing tasks, as the target size decreases, users have to aim for the target more carefully to avoid misses. Accordingly, the spread of click positions should be smaller. If researchers conduct a pointing experiment following a typical Fitts' law methodology, in which two vertically long targets are used and participants perform left-right cursor movements, the click positions would follow a normal distribution (Figure 1B) (Crossman, 1956; MacKenzie, 1992). Formally speaking, a click point is a random variable X following normal distribution: X ~ N(μ, σ2), where μ and σ are the mean and standard deviation of the click positions on the x-axis, respectively. The click point variability σ is assumed to proportionally relate to the target width, or to need an intercept, i.e., linear relationship (Bi and Zhai, 2016; Yu et al., 2019; Yamanaka and Usuba, 2020):
(2)
σ=c+d·W,
where c and d are regression constants. The probability density function for a normal distribution, f(x), is
(3)
f(x) = (1 / (σ√(2π))) · e^(−(x − μ)² / (2σ²)).
If we define the target center as located at x = 0 with the target spanning from x₁ to x₂ (Figure 1C), the predicted probability that the click point X falls on the target, P(x₁ ≤ X ≤ x₂), is
(4)
P(x₁ ≤ X ≤ x₂) = ∫_{x₁}^{x₂} f(x) dx = (1/2)·[erf((x₂ − μ)/(σ√2)) − erf((x₁ − μ)/(σ√2))],
where erf(·) is the Gauss error function:
(5)
erf(z) = (2/√π) ∫_{0}^{z} e^(−t²) dt.
Previous studies have shown that the mean click point is located close to the target center (μ ≈ 0), and σ is not significantly affected by the target distance A (MacKenzie, 1992; Bi and Zhai, 2016; Yamanaka and Usuba, 2020). Given the target width W, Equation 4 can be simplified and the ER is predicted as
(6)
ER = 1 − P(−W/2 ≤ X ≤ W/2) = 1 − (1/2)·[erf((W/2)/(σ√2)) − erf((−W/2)/(σ√2))] = 1 − erf(W/(2√2·σ)).
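To make the regress-then-predict workflow of Equations 1, 2, and 6 concrete, here is a minimal illustrative sketch; it is not code from the article, and the per-condition movement times and click-point spreads below are made-up placeholder values used only to show the structure of the computation.

```python
import numpy as np
from math import erf, sqrt, log2

# Hypothetical calibration data: one row per {A, W} condition.
A = np.array([256, 256, 512, 512, 1024, 1024])    # target distances (px)
W = np.array([16, 32, 16, 32, 16, 32])             # target widths (px)
MT = np.array([780, 640, 900, 760, 1020, 870])     # mean movement times (ms)
sigma = np.array([5.1, 8.9, 5.3, 9.2, 5.0, 9.4])   # SD of click positions (px)

# Equation 1 (Fitts' law): regress MT on ID = log2(A/W + 1) to obtain a and b.
ID = np.log2(A / W + 1)
b, a = np.polyfit(ID, MT, 1)      # slope b, intercept a

# Equation 2: regress the click-point spread sigma on W to obtain c and d.
d, c = np.polyfit(W, sigma, 1)

def predict_mt(A_new, W_new):
    """Predicted movement time for an unseen {A, W} condition (Equation 1)."""
    return a + b * log2(A_new / W_new + 1)

def predict_error_rate(W_new):
    """Predicted error rate for an unseen target width (Equations 2 and 6)."""
    s = c + d * W_new             # predicted click-point SD
    return 1.0 - erf(W_new / (2.0 * sqrt(2.0) * s))

print(predict_mt(768, 24), predict_error_rate(24))
```

The fitted c and d in this sketch play exactly the role described in the following sentences: they are estimated from several {A, W} conditions and then applied to a new condition.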
Similarly to the way Fitts' law is used, researchers measure σ for several {A, W} conditions, regress the data to compute c and d in Equation 2, and then predict the σ for a new {A, W} condition. In this way (i.e., using the predicted σ based on a new W), we can predict the ER with Equation 6 for a new task condition. While there are similar but more complicated versions of this model tuned for pointing tasks in virtual reality systems (Yu et al., 2019) and touchscreens (Bi and Zhai, 2016), to our knowledge, there has been no report on the evaluation of this model for the most fundamental computer environment, i.e., PCs with mice.2.3. Crowdsourced Studies on User Performance and Model Evaluation for GUIsFor target pointing tasks in PC environments, Komarov et al. (2013) found that crowdsourced and lab-based experiments led to the same conclusions on user performance, such as that a novel facilitation technique called Bubble Cursor (Grossman and Balakrishnan, 2005) reduced the MT compared with the baseline point-and-click method. Yamanaka et al. (2019) tested the effects of target margins on touch-pointing performance using smartphones and reported that the same effects were consistently found in crowdsourced and lab-based experiments, e.g., wider margins significantly decreased the MT but increased the ER. Findlater et al. (2017) showed that crowdworkers had significantly shorter MTs and higher ERs than lab-based participants in both mouse- and touch-pointing tasks. Thus, they concluded that crowdworkers were more biased towards speed than accuracy when instructed to “operate as rapidly and accurately as possible.”Regarding Fitts' law fitness, Findlater et al. reported that crowdworkers had average values of Pearson's r = 0.926 with mice and r = 0.898 with touchscreens (Findlater et al., 2017). Schwab et al. (2019) conducted crowdsourced scrolling tasks and found that Fitts' law held with R2 = 0.983 and 0.972 for the desktop and mobile cases, respectively (note that scrolling operations follow Fitts' law well Zhao et al., 2014). Overall, these reports suggest that Fitts' law is valid for crowdsourced data regardless of the input device. It is unclear, however, how the NP affects model fitness, because these studies used the entire workers' data for model fitting.The only article that tested the effect of NP on the fitness of user-performance models is a recent work by Yamanaka (2021a). He tested modified versions of Fitts' law to predict MTs in a rectangular-target pointing task. The conclusion was that, although he changed NP from 5 to 100, the best-fit model did not change. However, because he used all Nrepeat clicks, increasing NP always increased the total data points to be analyzed, and thus the contributions of NP and Nrepeat could not be analyzed separately. We further analyze this point in our simulation.In summary, there is a consensus that a time prediction model for pointing tasks (Fitts' law) shows a good fit for crowdsourced data. However, ER data have typically been reported as secondary results when measuring user performance in these studies. At least, no studies on evaluating ER prediction models have been reported so far. If we can demonstrate the potential of crowdsourced ER model evaluation, at least for one example task (target pointing in a PC environment), it will motivate future researchers to investigate novel ER models with less recruitment effort, more diversity of participants, and less time-consuming data collection. 
This will directly benefit the contribution of crowdsourcing to the HCI field. | [
"13174710",
"3406245",
"10096999"
] | [
{
"pmid": "10096999",
"title": "Speed/accuracy trade-offs in target-directed movements.",
"abstract": "This target article presents a critical survey of the scientific literature dealing with the speed/accuracy trade-offs in rapid-aimed movements. It highlights the numerous mathematical and theoretical interpretations that have been proposed in recent decades. Although the variety of points of view reflects the richness of the field and the high degree of interest that such basic phenomena attract in the understanding of human movements, it calls into question the ability of 'many models to explain the basic observations consistently reported in the field. This target article summarizes the kinematic theory of rapid human movements, proposed recently by R. Plamondon (1993b; 1993c; 1995a; 1995b), and analyzes its predictions in the context of speed/accuracy trade-offs. Data from human movement literature are reanalyzed and reinterpreted in the context of the new theory. It is shown that the various aspects of speed/accuracy trade-offs can be taken into account by considering the asymptotic behavior of a large number of coupled linear systems, from which a delta-lognormal law can be derived to describe the velocity profile of an end-effector driven by a neuromuscular synergy. This law not only describes velocity profiles almost perfectly, it also predicts the kinematic properties of simple rapid movements and provides a consistent framework for the analysis of different types of speed/accuracy trade-offs using a quadratic (or power) law that emerges from the model."
}
] |
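Relating back to the abstract of the preceding record, which describes a simulation varying the number of participants NP and repetitions Nrepeat, the sketch below illustrates the general statistical point only (it is not the authors' simulation): when each worker contributes few trials, the spread of the observed condition-level error rate around an assumed true rate shrinks as NP grows.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_ER = 0.05   # assumed true error probability for a single task condition

def observed_error_rate(n_participants, n_repeats):
    # Each participant contributes n_repeats Bernoulli(TRUE_ER) trials;
    # the condition-level error rate is the mean over all trials.
    trials = rng.binomial(1, TRUE_ER, size=(n_participants, n_repeats))
    return trials.mean()

for n_p in (5, 20, 100, 400):
    estimates = [observed_error_rate(n_p, n_repeats=10) for _ in range(2000)]
    print(f"NP={n_p:4d}  mean={np.mean(estimates):.3f}  SD={np.std(estimates):.4f}")
```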
Frontiers in Artificial Intelligence | null | PMC8993511 | 10.3389/frai.2022.736791 | Critical Analysis of Deconfounded Pretraining to Improve Visio-Linguistic Models | An important problem with many current visio-linguistic models is that they often depend on spurious correlations. A typical example of a spurious correlation between two variables is one that is due to a third variable causing both (a “confounder”). Recent work has addressed this by adjusting for spurious correlations using a technique of deconfounding with automatically found confounders. We will refer to this technique as AutoDeconfounding. This article dives more deeply into AutoDeconfounding, and surfaces a number of issues of the original technique. First, we evaluate whether its implementation is actually equivalent to deconfounding. We provide an explicit explanation of the relation between AutoDeconfounding and the underlying causal model on which it implicitly operates, and show that additional assumptions are needed before the implementation of AutoDeconfounding can be equated to correct deconfounding. Inspired by this result, we perform ablation studies to verify to what extent the improvement on downstream visio-linguistic tasks reported by the works that implement AutoDeconfounding is due to AutoDeconfounding, and to what extent it is specifically due to the deconfounding aspect of AutoDeconfounding. We evaluate AutoDeconfounding in a way that isolates its effect, and no longer see the same improvement. We also show that tweaking AutoDeconfounding to be less related to deconfounding does not negatively affect performance on downstream visio-linguistic tasks. Furthermore, we create a human-labeled ground truth causality dataset for objects in a scene to empirically verify whether and how well confounders are found. We show that some models do indeed find more confounders than a random baseline, but also that finding more confounders is not correlated with performing better on downstream visio-linguistic tasks. Finally, we summarize the current limitations of AutoDeconfounding to solve the issue of spurious correlations and provide directions for the design of novel AutoDeconfounding methods that are aimed at overcoming these limitations. | 2. Related WorkVisio-linguistic models. There has been a lot of work on creating the best possible general-purpose visio-linguistic models. Most of the recent models are based on the Transformer architecture (Vaswani et al., 2017), examples include ViLBERT (Lu et al., 2019), LXMERT (Tan and Bansal, 2019), Uniter (Chen et al., 2020), and VL-BERT (Su et al., 2019). Often, the Transformer architecture is complemented with a convolutional Region Proposal Network (RPN) to convert images into sets of region features: Ren et al. (2015) and Anderson et al. (2018), present examples of RPNs that have been used for this purpose. This articles that use AutoDeconfounding, which is the topic of this article, both use ViLBERT (Lu et al., 2019) as a basis for multi-modal tasks.Issue of spurious correlations. The issue of models learning spurious correlations is widely recognized. Schölkopf et al. (2021) gives a good overview of the theoretical benefits of learning causal representations as a way to address spurious correlations. A number of works have tried to put ideas from causality in practice to address this issue. Most of these assume a certain fixed underlying SCM, and use this structure to adjust for confounders. Examples include Qi et al. (2020), Zhang et al. (2020a), Niu et al. 
(2021), or Yue et al. (2020). An important difference of AutoDeconfounding with regard to these works, is that in AutoDeconfounding the structure of the SCM is automatically discovered, as well as that the variables of the SCM correspond to individual object classes.Discovering causal structure. There is theoretical work explaining the “ladder of causality” (Pearl and Mackenzie, 2018), where the different “rungs” of the ladder correspond to the availability of observational, interventional and counterfactual information, respectively. The Causal Hierarchy Theorem (CHT) (Bareinboim et al., 2020) shows that it is often very hard to discover the complete causal structure of an SCM (the second “rung” from the ladder) from purely observational data (the first “rung” of the ladder). However, it is not a problem for the CHT to discover the causal structure of an SCM up to its Markov Equivalence Class4. This has been done with constraint-based methods such as Colombo et al. (2012), and score-based methods such as Chickering (2002).Despite the CHT, there have also been attempts to go beyond the Markov Equivalence class. One tactic to do this is through supervised training on ground truth causal annotations of synthetic data, and porting those results to real data (Lopez-Paz et al., 2017). Another way makes use of distribution-shifts to discover causal structure: this does not violate the CHT by being a proxy for having access to interventional (“second rung”) data. More specifically Bengio et al. (2019) and more recently Ke et al. (2020) train different models with different factorizations, see which model is the best at adapting to out-of-distribution data, and retroactively conclude which factorization is the “causal” one.In contrast to these methods, AutoDeconfounding does not make use of distribution shifts nor of ground truth labeled causal data, but only of “first rung” observational data.Investigating
AutoDeconfounding. The works that implement AutoDeconfounding (Wang et al., 2020 and Zhang et al., 2020b) both explain the benefit of AutoDeconfounding as coming from its deconfounding effect. This article will do novel additional experiments that surface a number of issues with AutoDeconfounding. We focus on the implementation by Zhang et al. (2020b) as it is the SOTA for AutoDeconfounding. First, we make the underlying SCM more explicit, showing the assumptions under which it corresponds to deconfounding. Second, we compare with the non-causal baseline in a way that better isolates the effect of AutoDeconfounding. Finally, we evaluate whether confounders (and thus “second rung” information about the underlying SCM) are indeed found by collecting and evaluating on a ground-truth confounder dataset. | [] | [] |
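The record above repeatedly appeals to adjusting for confounders. As a point of reference, the following self-contained sketch shows the textbook backdoor adjustment P(Y | do(X)) = Σ_z P(Y | X, z)·P(z) on a toy distribution with made-up numbers; it is not the AutoDeconfounding implementation, only the quantity that such deconfounding methods aim to approximate.

```python
import numpy as np

# Toy joint distribution over a binary confounder Z, treatment X and outcome Y:
# p[z, x, y] = P(Z=z, X=x, Y=y). The numbers are arbitrary and sum to 1.
p = np.array([
    [[0.20, 0.05], [0.02, 0.03]],   # Z = 0
    [[0.05, 0.10], [0.15, 0.40]],   # Z = 1
])
assert np.isclose(p.sum(), 1.0)

p_z = p.sum(axis=(1, 2))                              # P(Z=z)
p_y1_given_x1 = p[:, 1, 1].sum() / p[:, 1, :].sum()   # observational P(Y=1 | X=1)
p_y1_do_x1 = sum(                                     # backdoor-adjusted estimate
    (p[z, 1, 1] / p[z, 1, :].sum()) * p_z[z] for z in (0, 1)
)
print(f"P(Y=1 | X=1)     = {p_y1_given_x1:.3f}")
print(f"P(Y=1 | do(X=1)) = {p_y1_do_x1:.3f}")
```

The gap between the two printed values is the spurious-correlation effect that the adjustment removes.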
Scientific Reports | 35396565 | PMC8993803 | 10.1038/s41598-022-09905-3 | A novel wavelet decomposition and transformation convolutional neural network with data augmentation for breast cancer detection using digital mammogram | Research in deep learning (DL) has continued to provide significant solutions to the challenges of detecting breast cancer in digital images. Image preprocessing methods and architecture enhancement techniques have been proposed to improve the performance of DL models such as convolutional neural networks (CNNs). For instance, the wavelet decomposition function has been used for image feature extraction in CNNs due to its strong compactness. Additionally, CNN architectures have been optimized to improve the process of feature detection to support the classification process. However, these approaches still lack completeness, as no mechanism exists to discriminate features to be enhanced and features to be eliminated for feature enhancement. More so, no studies have approached the use of wavelet transform to restructure CNN architectures to improve the detection of discriminant features in digital mammography for increased classification accuracy. Therefore, this study addresses these problems through wavelet-CNN-wavelet architecture. The approach presented in this paper combines seam carving and wavelet decomposition algorithms for image preprocessing to find discriminative features. These features are passed as input to a CNN-wavelet structure that uses the new wavelet transformation function proposed in this paper. The CNN-wavelet architecture applied layers of wavelet transform and reduced feature maps to obtain features suggestive of abnormalities that support the classification process. Meanwhile, we synthesized image samples with architectural distortion using a generative adversarial network (GAN) model to argue for their training datasets' insufficiency. Experimentation of the proposed method was carried out using DDSM + CBIS and MIAS datasets. The results obtained showed that the new method improved the classification accuracy and lowered the loss function values. The study's findings demonstrate the usefulness of the wavelet transform function in restructuring CNN architectures for performance enhancement in detecting abnormalities leading to breast cancer in digital mammography. | Related worksThis section presents a review of some related works that used data augmented techniques for training deep learning models in detecting abnormalities from digital mammography and other related areas. Abnormalities in mammograms are often categorized into four categories: malignant mass, calcification, architectural distortion, and asymmetric of the breast. All studies reviewed were selected using this consideration of abnormalities, wavelet functions, and data augmentation using GANs.Using existing and trained architectures helps fast-track the process of adapting networks for applicability to other problems. This was demonstrated by30, who used AlexNet and some segmentation techniques to classify and segment ROIs. The authors modified AlexNet for binary classification by introducing a support vector machine (SVM) classifier at the last fully connected layer. The approach also used a segmentation technique, namely, threshold- and region-based, to automate the process of ROI extraction. 
The method for the classification was based on applying SVM on mammography images from the digital database for screening mammography (DDSM) and the Curated Breast Imaging Subset of DDSM (CBIS-DDSM). The research successfully classified benign and malignant mass tumors in breast mammography images by obtaining an accuracy of 87.2% with an AUC equal to 0.94 (94%). Similarly, Levy and Jain31 investigated the performance of the following architectures: AlexNet, GoogLeNet, and a shallow CNN architecture. The three models were used for classifying images, whether malignant or benign, based on the detection of malignant masses. To circumvent the challenge of overfitting, they used transfer learning techniques, batch normalization, careful preprocessing, and data augmentation. For both AlexNet and GoogLeNet, the researchers used the same base architecture as the original works but replaced the last fully connected (FC) layer to output classes. The shallow CNN proposed takes a 224 × 224 × 3 image as input, and it consists of 3 convolutional blocks composed of 3 × 3, 3 fully connected layers, and a soft-max layer. Furthermore, they employed ReLU activation functions, Xavier weight initialization, and the Adam update rule with a base learning rate of 10−3 and batched size 64. The best model presents a result of 0.934 for recall at 0.924 for precision.In related work, Jung et al.32 proposed the use of RetinaNet to detect mass in mammograms. They made the RetinaNet model use weights pretrained on GURO, training and testing on INbreast. They observed that using weights pretrained on datasets achieves similar performance as directly using datasets in the training phase. Experimental setups using the public dataset INbreast and the in-house dataset GURO showed that their model obtained a good performance of an average number of false positives of 0.34 and 0.03 when the confidence score was 0.95 in INbreast and GURO, respectively. Similarly, Agarwal et al.33 employed transfer learning to propose a patch-based CNN method for automated mass detection in full-field digital mammograms (FFDM). In addition, they investigated the performances of VGG16, ResNet50, and InceptionV3 architectures on the same dataset while applying the transfer learning technique to uncover the benefit of domain adaptation between the CBIS-DDSM (digitized) and INbreast (digital) datasets using the InceptionV3 CNN. Their experimentation showed that InceptionV3 performs best for classifying the mass and non-mass breast regions for CBIS-DDSM. The results show that transfer learning from CBIS-DDSM obtains a substantially higher performance with the best true positive rate (TPR) of 0.98 at 1.67 false positives per image (FPI) compared with transfer learning from ImageNet with a TPR of 0.91 at 2.1 FPI. In34, the authors demonstrated the existence of superiority when a deep learning-based classifier was used to distinguish malignant and benign breast masses without segmenting the lesions and extracting the predefined image features. In35, an adversarial deep structural network was adopted for use on mammographic images in detecting mass segmentation. The research employed a fully convolutional network (FCN) to model the potential function, followed by conditional random fields (CRF) to perform structural learning. This end-to-end model was used for mammographic mass segmentation. While combining FCN with position a priori for the classification task, GAN training was used to control overfitting due to the small size of mammogram datasets. 
Four models with different convolutional kernels were further fused to improve the segmentation task. The results showed that the end-to-end model combined with adversarial training achieves state-of-the-art performance on two public datasets: INbreast and DDSM-BCRP.The work in36 combined Craniocaudal (CC) and Mediolateral-oblique (MLO) mammography views to differentiate between malignant and benign tumors. They implemented a deep-learning classification method that is based on two view-level decisions, implemented by two neural networks, followed by a single-neuron layer that combines the view-level decisions into a global decision that mimics the biopsy results. The model exploited the detection of features of clustered breast microcalcifications to classify tumors into benign and malignant categories. In related work, Sert et al.37 adapted a CNN model to the task of breast tumor classification as benign or malignant based on the detection of microcalcification features. Basically, the approach investigated the benefit of employing various preprocessing methods, such as contrast scaling, dilation, cropping, and decision fusion, using an ensemble of networks and the CNN model. Experimental results showed that preprocessing greatly improved classification performance. The learning models proposed achieved a recall of 94.0% and precision of 95.0% above human-level performance. Additionally, Xi et al.38 was able to use classifiers that are trained on labeled image patches and then adapted it to work on full mammogram images for localizing the abnormalities. The models investigated are VGGNet and ResNet, demonstrating the most appreciable accuracy at 92.53% in classifications. Meanwhile, Murali and Dinesh39 employed a deep convolutional neural network (CNN) and random forest classifier to classify ROIs with malignant masses and microcalcifications. The AUC of the CNN was 0.87, which was higher than the radiologists' mean AUC (0.84), although the difference was not significant. On the other hand, the studies discussed in40,41 circumvent the use of deep learning by adopting wavelet decomposition.A recent study5 proposed combining CNN architecture with image augmentation to detect architectural distortion. Many transformation operations were applied to the image samples with right and left breasts presented in MLO and CC views for augmentation purposes. The resulting model was applied to ROIs from MIAS, whole images from INbreast, whole images from MIAS, and ROIs from DDSM + CBIS databases. Performance evaluation of the proposed model showed that they achieved an accuracy of 93.75%. The use of Region-based (R-CNN) was introduced in42 to detect architectural distortion using a supervised pretrained region-based network (R-CNN). Experimentation was based on the DDSM dataset, and the results showed that they obtained over 80% sensitivity and specificity and yielded 0.46 false positives per image at an 83% true-positive rate. Similarly, the work in43 demonstrated a novel neural network that combined two learning branches with region-level classification and region ranking in weakly and semisupervised settings. Their results for weakly supervised learning showed an improvement of 4% in AUC, 10–17% in partial AUC, and 8–15% in specificity at 0.85 sensitivity. On the other hand35, GlimpseNet autonomously extracts multiple regions of interest, classifies them, and then pools them to obtain a diagnosis for the full image. They obtained the result that gained 4.1% improvement. 
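Since wavelet decomposition for feature extraction is the common thread of several studies surveyed above (and of the present article's own preprocessing), here is a minimal sketch using the PyWavelets library on a random stand-in patch; the wavelet family, decomposition level, and patch size are arbitrary illustrative choices, not those of any cited study.

```python
import numpy as np
import pywt

# Stand-in for a grayscale mammogram ROI patch with values in [0, 1].
patch = np.random.rand(128, 128).astype(np.float32)

# Single-level 2D discrete wavelet transform: one approximation sub-band (cA)
# and three detail sub-bands (horizontal cH, vertical cV, diagonal cD).
cA, (cH, cV, cD) = pywt.dwt2(patch, "haar")
print(cA.shape, cH.shape, cV.shape, cD.shape)   # each sub-band is (64, 64)

# Stacking the sub-bands yields a 4-channel array that a CNN could take as
# input in place of, or alongside, the raw patch.
features = np.stack([cA, cH, cV, cD], axis=0)   # shape (4, 64, 64)
print(features.shape)
```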
Additionally, Qiu et al.44 proposed a framework using a deep convolutional neural network. The model is an 8-layer deep learning network that involves 3 pairs of convolution–max-pooling layers for automatic feature extraction and a multilayer perceptron (MLP) classifier for feature categorization to process ROIs. The MLP classifier comprises one hidden layer and one logistic regression layer. Their experiments achieved AUCs of 0.696 ± 0.044, 0.802 ± 0.037, 0.836 ± 0.036, and 0.822 ± 0.035 for the fold 1 to 4 testing datasets, respectively, with an overall AUC of 0.790 ± 0.019 for the entire dataset. In another related work, Bakkour and Afdel45 proposed a novel discriminative objective for a supervised deep feature learning approach focused on classifying tumors in mammograms as malignant or benign, using a softmax layer as the classifier. The proposed network was enhanced with a scaling process based on Gaussian pyramids to obtain regions of interest of normalized size. The DDSM and BCDR datasets were used, in addition to data augmentation, and the experiments yielded an accuracy of 97.28%. In46, the authors presented a novel classification technique for a large mammogram dataset using deep learning: convolutional neural network–discrete wavelet (CNN-DW) and convolutional neural network–curvelet transform (CNN-CT). An augmented dataset was generated using mammogram patches, with the data filtered by contrast-limited adaptive histogram equalization (CLAHE), while a softmax layer and a support vector machine layer were used as classifiers. The results showed that CNN-DW and CNN-CT achieved accuracy rates of 81.83% and 83.74%, respectively. The authors in47 used a wavelet convolutional neural network to detect spiculated findings, such as architectural distortions and spiculated masses, in low-contrast noisy mammograms. The dataset used for experimentation was CBIS-DDSM, and the model reached an accuracy of over 85% for architectural distortions and 88% for spiculated masses. The databases used are the IRMA version of the Digital Database for Screening Mammography (DDSM) and the Mammographic Image Analysis Society (MIAS) database, with a reported accuracy of 92.94% for fixed-size ROIs on the DDSM database and 95.34% on the MIAS database. Other studies that have used similar approaches, although in different application domains, include the use of a wavelet convolutional neural network (wCNN) and a wavelet convolutional wavelet neural network (wCwNN) for image classification on the MNIST dataset48, and the use of wavelet functions for feature extraction to support CNN-based feature detection in the classification of lung cancer from computerized tomography (CT) scans12. Beyond medical image classification, several other domains have also applied wavelet-based CNN techniques. For example, Peifeng et al.49 proposed integrating a wavelet transform of time-series data into a backpropagation neural network (BPNN) and a nonlinear autoregressive network with exogenous inputs (NARX) to obtain hybrid WNN and WNARX models, which were applied as benchmark models. Experimentation with the hybrid models showed that the wavelet transform could enhance long-term concentration predictions.
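As an illustration of the wavelet-plus-CNN pipelines discussed above (e.g., CNN-DW), the sketch below applies a single-level 2D discrete wavelet transform to a patch and stacks the resulting sub-bands as input channels of a small CNN. The preprocessing, wavelet choice, and network sizes are assumptions made for illustration only, not the configurations of the cited works.

```python
# Minimal sketch of the wavelet-plus-CNN idea: a 2D DWT splits each patch into
# approximation/detail sub-bands, which become input channels for a small CNN.
import numpy as np
import pywt
import torch
import torch.nn as nn

def dwt_channels(patch: np.ndarray) -> np.ndarray:
    """Return the four DWT sub-bands (LL, LH, HL, HH) of a 2D patch as channels."""
    cA, (cH, cV, cD) = pywt.dwt2(patch, "haar")
    return np.stack([cA, cH, cV, cD], axis=0).astype(np.float32)

patch = np.random.rand(128, 128)          # stand-in for a preprocessed (e.g., CLAHE) patch
x = torch.from_numpy(dwt_channels(patch)).unsqueeze(0)   # shape (1, 4, 64, 64)

classifier = nn.Sequential(
    nn.Conv2d(4, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),            # benign vs. malignant logits
)
print(classifier(x).shape)                 # torch.Size([1, 2])
```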
In another novel approach, Nourani et al.50 applied the wavelet function to a variant of the SVM to obtain a Wavelet-based Least Square Support Vector Machine (WLSSVM) model. The study then used the WLSSVM to predict Suspended Sediment Load (SSL) in a river, while an artificial neural network (ANN) was adapted for feature extraction to support the WLSSVM model. In another study, Gürsoy et al.51 attempted to predict actual discharge from meteorological data using a wavelet neural network method. Wang et al.52 analyzed, classified, and forecasted time series data in a frequency-aware manner using a multilevel Wavelet Decomposition Network (mWDN) supported by Residual Classification Flow (RCF) and multi-frequency Long Short-Term Memory (mLSTM) deep learning models. In a similar domain, Wuwei et al.53 investigated the use of both wavelet neural network and data fusion models, while an RBF algorithm and the SPSS Clementine technique were also combined to support the wavelet transform sequences in the prediction process. Shah et al.54 forecast output growth using wavelet transforms and Levenberg–Marquardt (LM) ANN models. We now present a summary of all related works and compare their methods with the one proposed in this study. Existing methods and techniques in the literature used to address the problem motivating this study still present some gaps that justify the need for improvement. As reported by30, the use of ROIs does not address the need for feature enhancement in the ROI samples. Moreover, the approach in31 still relies on the well-known, general-purpose ReLU activation function. Also, using two deep learning models in36 for feature detection is computationally costly compared with the single-model mechanism, incorporating both feature detection and enhancement, proposed in this paper. Similarly, relying only on mainstream preprocessing techniques gives no guarantee that relevant features can be isolated and enhanced; as such, the approach in37 lags behind what is proposed in this study. The R-CNN method as used in42 for region-level abnormality detection, despite its popularity, still suffers from the omission of sensitive features owing to its automated region selection algorithm. A similar approach in43 leaves out an optimized method for selecting regions in the second branch of their dual-branch model. We found our proposed method competitive with what is reported in46,48, and the performance obtained from the variation of both methods puts this study ahead of46,48. | [
"33414495",
"31549948",
"34023831",
"31913331"
] | [
{
"pmid": "33414495",
"title": "Plasma Hsp90 levels in patients with systemic sclerosis and relation to lung and skin involvement: a cross-sectional and longitudinal study.",
"abstract": "Our previous study demonstrated increased expression of Heat shock protein (Hsp) 90 in the skin of patients with systemic sclerosis (SSc). We aimed to evaluate plasma Hsp90 in SSc and characterize its association with SSc-related features. Ninety-two SSc patients and 92 age-/sex-matched healthy controls were recruited for the cross-sectional analysis. The longitudinal analysis comprised 30 patients with SSc associated interstitial lung disease (ILD) routinely treated with cyclophosphamide. Hsp90 was increased in SSc compared to healthy controls. Hsp90 correlated positively with C-reactive protein and negatively with pulmonary function tests: forced vital capacity and diffusing capacity for carbon monoxide (DLCO). In patients with diffuse cutaneous (dc) SSc, Hsp90 positively correlated with the modified Rodnan skin score. In SSc-ILD patients treated with cyclophosphamide, no differences in Hsp90 were found between baseline and after 1, 6, or 12 months of therapy. However, baseline Hsp90 predicts the 12-month change in DLCO. This study shows that Hsp90 plasma levels are increased in SSc patients compared to age-/sex-matched healthy controls. Elevated Hsp90 in SSc is associated with increased inflammatory activity, worse lung functions, and in dcSSc, with the extent of skin involvement. Baseline plasma Hsp90 predicts the 12-month change in DLCO in SSc-ILD patients treated with cyclophosphamide."
},
{
"pmid": "31549948",
"title": "Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives.",
"abstract": "Although computer-aided diagnosis (CAD) is widely used in mammography, conventional CAD programs that use prompts to indicate potential cancers on the mammograms have not led to an improvement in diagnostic accuracy. Because of the advances in machine learning, especially with use of deep (multilayered) convolutional neural networks, artificial intelligence has undergone a transformation that has improved the quality of the predictions of the models. Recently, such deep learning algorithms have been applied to mammography and digital breast tomosynthesis (DBT). In this review, the authors explain how deep learning works in the context of mammography and DBT and define the important technical challenges. Subsequently, they discuss the current status and future perspectives of artificial intelligence-based clinical applications for mammography, DBT, and radiomics. Available algorithms are advanced and approach the performance of radiologists-especially for cancer detection and risk prediction at mammography. However, clinical validation is largely lacking, and it is not clear how the power of deep learning should be used to optimize practice. Further development of deep learning models is necessary for DBT, and this requires collection of larger databases. It is expected that deep learning will eventually have an important role in DBT, including the generation of synthetic images."
},
{
"pmid": "34023831",
"title": "A Review of Applications of Machine Learning in Mammography and Future Challenges.",
"abstract": "BACKGROUND\nThe aim of this study is to systematically review the literature to summarize the evidence surrounding the clinical utility of artificial intelligence (AI) in the field of mammography. Databases from PubMed, IEEE Xplore, and Scopus were searched for relevant literature. Studies evaluating AI models in the context of prediction and diagnosis of breast malignancies that also reported conventional performance metrics were deemed suitable for inclusion. From 90 unique citations, 21 studies were considered suitable for our examination. Data was not pooled due to heterogeneity in study evaluation methods.\n\n\nSUMMARY\nThree studies showed the applicability of AI in reducing workload. Six studies demonstrated that AI can aid in diagnosis, with up to 69% reduction in false positives and an increase in sensitivity ranging from 84 to 91%. Five studies show how AI models can independently mark and classify suspicious findings on conventional scans, with abilities comparable with radiologists. Seven studies examined AI predictive potential for breast cancer and risk score calculation. Key Messages: Despite limitations in the current evidence base and technical obstacles, this review suggests AI has marked potential for extensive use in mammography. Additional works, including large-scale prospective studies, are warranted to elucidate the clinical utility of AI."
},
{
"pmid": "31913331",
"title": "Changes in dietary carbon footprint over ten years relative to individual characteristics and food intake in the Västerbotten Intervention Programme.",
"abstract": "The objective was to examine 10-year changes in dietary carbon footprint relative to individual characteristics and food intake in the unique longitudinal Västerbotten Intervention Programme, Sweden. Here, 14 591 women and 13 347 men had been followed over time. Food intake was assessed via multiple two study visits 1996-2016, using a 64-item food frequency questionnaire. Greenhouse gas emissions (GHGE) related to food intake, expressed as kg carbon dioxide equivalents/1000 kcal and day, were estimated. Participants were classified into GHGE quintiles within sex and 10-year age group strata at both visits. Women and men changing from lowest to highest GHGE quintile exhibited highest body mass index within their quintiles at first visit, and the largest increase in intake of meat, minced meat, chicken, fish and butter and the largest decrease in intake of potatoes, rice and pasta. Women and men changing from highest to lowest GHGE quintile exhibited basically lowest rates of university degree and marriage and highest rates of smoking within their quintiles at first visit. Among these, both sexes reported the largest decrease in intake of meat, minced meat and milk, and the largest increase in intake of snacks and, for women, sweets. More research is needed on how to motivate dietary modifications to reduce climate impact and support public health."
}
] |
Frontiers in Psychology | null | PMC8995508 | 10.3389/fpsyg.2022.812677 | Investigating Cognitive Load in Energy Network Control Rooms: Recommendations for Future Designs | This study analyzed and explored the cognitive load of Australian energy market operators managing one of the longest inter-connected electrical networks in the world. Each operator uses a workstation with seven screens in an active control room environment, with a large coordination screen to show information and enable collaboration between different control centers. Cognitive load was assessed during both training scenarios and regular control room operations via the integration of subjective and physiological measures. Eye-tracking glasses were also used to analyze the operators' gaze behavior. Our results indicate that different events (normal or unexpected), different participants for the same session, and different periods of one session all have varying degrees of cognitive load. The system design was observed to be inefficient in some situations and to have an adverse effect on cognitive load. In critical situations, for instance, operator collaboration was high and the coordination screen was used heavily when collaborating between two control centers, yet integration with the system could be improved. Eye tracking data analysis showed that the layout of applications across the seven screens was not optimal for many tasks. Improved layout strategies, potential combination of applications, redesigning of certain applications, and linked views are all recommended for further exploration, in addition to improved integration of procedures and linking alarms to visual cues. | 2. Related Work
In this section we define and describe cognitive load. We note the difficulty of measuring these effects as we report prior work that has been done in this area, in particular work aimed at understanding the cognitive load of operators in control room environments.
2.1. Cognitive Load Measures
Cognitive load refers to the amount of information processed by working memory in a given time and space (Dan and Reiner, 2017); that is, the demands placed on an individual from undertaking and learning from a task. Here, mental load refers to the demands from the task itself, whereas mental effort is the step-by-step controlled or automatic processing an individual is engaged with. Indeed, both controlled and automatic processes can impact task performance (Paas et al., 1994; Orru and Longo, 2019). Similarly, cognitive load theory differentiates three types of cognitive load: intrinsic, germane, and extraneous (Sweller et al., 1998; Dan and Reiner, 2017). Intrinsic load refers to the innate difficulty/complexity of understanding information or performing a task. It depicts the number of elements that are processed concurrently in working memory for the construction of a schema (Orru and Longo, 2019). Young et al. (2015) further inform that intrinsic cognitive load cannot be changed by instructional interventions because it is intrinsic to the material being dealt with. Extraneous load refers to the extra demands placed on an individual by the way information is presented or instructed and is increased if ineffective methods are used. Hence, it can be altered by instructional interventions, unlike intrinsic cognitive load (Young et al., 2015). Germane load depends on the effort put in by the individual to process information and construct a schema.
After the reconceptualization of cognitive load theory, germane load is no longer considered an independent source of load but rather a function of the working memory resources devoted to the intrinsic load of a task (Orru and Longo, 2019). The intrinsic, extraneous, and germane loads are influenced by "unfamiliarity," the way information and data are organized and displayed, and the effort required to process the information, respectively (Longo, 2018; Pawar et al., 2018). Cognitive load, after the reconceptualization of cognitive load theory, is believed to be an additive consequence of the intrinsic and extraneous load, whereby if one is kept constant the other can be measured and vice versa, whereas the measurability of germane load remains unclear (Orru and Longo, 2019). In an optimal scenario there is a high level of familiarity with the event, data, and information that is presented; the critical and relevant cues are salient, and the cognitive effort needed to interpret this information is minimal. In scenarios with higher levels of cognitive load, the operational performance of operators in power plants, cement factories, and traffic control centers declined. Higher levels of cognitive load over an extended period can cause chronic stress and mental fatigue (Fallahi et al., 2016a). With the above notions covered, it is important to briefly consider some wider issues around the terms and descriptions used above to prevent confusion. More specifically, we note that, like cognitive load, the term mental workload has similarly been used to investigate and describe the cognitive demands of a task (Miyake, 2001). Thus, as previously mentioned, the difference between mental workload and cognitive load appears to pertain to the use of mental effort and related processes, which extend more onto intrinsic or germane load, given that cognitive load theory publications appear to more readily incorporate those terms (Orru and Longo, 2019) compared with others that focus solely on task demands (Hancock et al., 2021). On the other hand, other literature has suggested that cognitive load and mental workload are the same construct (Naismith et al., 2019). Mental workload has also been referred to as cognitive workload, but these too have been suggested to be the same construct (Orru and Longo, 2019). Given space constraints, this article cannot properly elaborate on these points, but such discrepancies are worth highlighting because they may explain the mixing and matching of the terms in the literature, and this impacts on this article. As mentioned, this study has adopted the definitions outlined in the preceding paragraphs. Thus, we use cognitive load in this article to better reflect the holistic processes required of control room operators, especially in the contexts used in this study, which involve task demands, mental effort processing, and schema construction and retrieval. Measures of cognitive load can be divided into three classes: performance-based measures, subjective measures, and physiological measures (Eggemeier et al., 1991); in this work we use only the latter two. Subjective measurements reflect the perceived cognitive load and the affective state of operators (Miller, 2001), whereas physiological measures focus more on how cognitive load is expressed in the body (e.g., heart rate).
However, both have limitations, and it is recommended to use them concurrently to allow for cross-referencing between the subjective ratings and the physiological measures (Tsang and Vidulich, 2006). Several questionnaires can be used to subjectively assess cognitive load, such as the Workload Profile (WP) or the NASA Task Load Index (TLX). The WP questionnaire is a multidimensional, subjective tool for assessing workload proposed by Tsang and Velazquez (1996). It asks participants to rate, on a scale from 0 (no demand) to 100 (maximum attention), the amount of attentional resources required on 8 workload dimensions: perceptual/central processing, response selection and execution, spatial processing, verbal processing, visual processing, auditory processing, manual output, and speech output. The NASA Task Load Index gives an overall workload score based on a weighted average of ratings across six subscales (mental demands, physical demands, temporal demands, own performance, effort, and frustration) (NASA, 1986). Finally, other questionnaires are only unidimensional (e.g., the rating scale of mental effort; Ghanbary Sartang et al., 2016). Overall, evidence suggests that there is strong validity between all mentioned questionnaires (Longo, 2018); therefore, the specific choice of measure can be argued to relate to the processes and time constraints involved in the specific task or study a researcher is conducting. Rubio et al. (2004) evaluated the psychometric properties of three different subjective workload measures, the Workload Profile (WP), NASA Task Load Index (TLX), and SWAT (Subjective Workload Assessment Technique), and concluded that all three showed high convergent validity. For our study, we chose the WP because it includes more dimensions that are relevant when assessing cognitive load in a control room environment, given, for instance, the number of auditory and visual stimuli. In the control room environment, different operators may have to move around and verbally communicate with each other, and hence the dimensions of verbal processing, auditory processing, and speech output are relevant. Also, during different normal and unexpected situations, operators may need to perceive the problem, spatially process the data spread across multiple screens, and then select and execute responses. Multiple resource theory forms the basis of the WP. The WP is built on principles associated with mental workload, but is intended to be used for cognitive load in this study. To be brief, multiple resource theory separates mental processing into stages, modalities, codes, and responses. Stage differentiates the mental resources required for initial perceptual and cognitive activities from those required for choosing and eventually executing responses. Modality differentiates processing between auditory and visual sources, and similarly both codes and responses can be categorized into spatial and verbal processes. It should be noted that two tasks that involve the same dimension (i.e., stage, modality, code) are likely to compete for limited resources, whereas tasks that are unrelated are less likely to be affected (Wickens, 2008). Additionally, if two concurrent tasks exist, with one requiring either perceptual or cognitive processing and the other more responses, then a change in difficulty in one is less likely to affect the other because the stages can utilize different mental resources (Wickens, 2008).
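As a concrete illustration of how the two questionnaires described above are typically aggregated, the short example below computes a weighted NASA-TLX score and a Workload Profile summary. The ratings and weights are invented example values, and the aggregation shown is the standard textbook form rather than necessarily the exact scoring used in this study.

```python
# Illustrative aggregation of NASA-TLX and Workload Profile ratings (example values only).
from statistics import mean

# NASA-TLX: six subscales rated 0-100, weighted by the number of times each subscale
# was chosen in the 15 pairwise comparisons (the weights sum to 15).
tlx_ratings = {"mental": 70, "physical": 20, "temporal": 60,
               "performance": 40, "effort": 65, "frustration": 50}
tlx_weights = {"mental": 5, "physical": 1, "temporal": 3,
               "performance": 2, "effort": 3, "frustration": 1}
tlx_score = sum(tlx_ratings[k] * tlx_weights[k] for k in tlx_ratings) / 15
print(f"NASA-TLX overall workload: {tlx_score:.1f}")

# Workload Profile: eight dimensions rated 0-100; kept as a profile, with an
# overall mean sometimes reported as a single summary value.
wp_ratings = {"perceptual/central": 80, "response selection/execution": 55,
              "spatial": 75, "verbal": 40, "visual": 85, "auditory": 45,
              "manual output": 50, "speech output": 35}
print(f"WP overall mean: {mean(wp_ratings.values()):.1f}")
```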
As a more clearly defined example, Parkes and Coleman (1990) demonstrated that route guidance was best delivered auditorily rather than visually when subjects were driving a simulated vehicle at the same time, driving already involving significant visual-spatial load. Hence, cross-modal demands have an advantage over intra-modal demands, and this seems similar for codes too (Wickens, 1980). Tao et al. (2019) identified 78 physiological measures which have been considered and tested as effective agents of cognitive load. These measures were distributed across a variety of physiological processes, the most common categories being cardiovascular, EEG, and eye movement measures. Across the 91 studies reviewed in this survey, 72% suggest that physiological measures show promise in tackling the problem of cognitive load modeling, but their validity and wide applicability still have to be demonstrated across experiments. Heart rate variability (HRV) is the variation in the length of heartbeat intervals (Malik and Camm, 1990). HRV is shown to have an inverse relationship with cognitive load (Myrtek et al., 1994; Fallahi et al., 2016b). This measure has been successfully used in studies involving participants with chronic mental stress (Kim et al., 2008) and with ship operators (Wulvik et al., 2020), where lower HRV is associated with the greater physiological arousal that higher cognitive load has been shown to induce (Wulvik et al., 2020). HRV was preferred over EEG in our study because acquiring EEG signals would require attaching electrodes to the operators' scalps, which is more intrusive, and EEG is more susceptible to noise from participant movement. Zagermann et al. (2016) concluded that eye tracking can be a valuable agent for measuring and analyzing cognitive load in the context of Human-Computer Interaction (HCI) and visual computing. Eye tracking has been linked to cognitive load when using microsaccades and measuring the mean change of their rate and magnitude (Krejtz et al., 2018). High cognitive load is associated with many quick eye movements that take in numerous pieces of information together, which visual scanning facilitates (Krejtz et al., 2018). Bhavsar et al. (2017) explored the association between eye gaze behavior and the cognitive steps involved in the orientation, diagnosis, and execution phases of control room operations in the process industry. In this environment the majority of accidents (70%) are caused by human error. In the study, participants who successfully managed disturbances had significantly lower values for gaze transitions and less fixation dispersion in the execution phase. By contrast, both successful and unsuccessful participants had similar levels of fixation dispersion and gaze distribution during normal operations, when no abnormalities were observed. In conclusion, after the abnormality had been flagged, successful participants fixated on the relevant variables and manipulated them to manage the abnormality, recording very low dispersion in fixation. In our study we measured the gaze and eye movements of the participants during each session to understand which screen of their workstation they were looking at, how much and how frequently they moved their gaze between screens, and what applications they were using, in order to investigate the cognitive load of the operators.
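The following minimal example illustrates the kinds of quantities discussed in this subsection: time-domain HRV metrics computed from inter-beat intervals and a simple fixation-dispersion value from gaze coordinates. The input arrays are synthetic, and these are generic textbook formulas rather than the exact processing pipeline used in this study.

```python
# Synthetic example of time-domain HRV metrics and a simple fixation-dispersion value.
import numpy as np

ibi_ms = np.array([820, 815, 840, 805, 830, 825, 810, 845, 800, 835], dtype=float)

sdnn = ibi_ms.std(ddof=1)                          # SDNN: standard deviation of all IBIs
rmssd = np.sqrt(np.mean(np.diff(ibi_ms) ** 2))     # RMSSD: successive-difference metric
mean_hr = 60000.0 / ibi_ms.mean()                  # mean heart rate in beats per minute
print(f"SDNN={sdnn:.1f} ms, RMSSD={rmssd:.1f} ms, HR={mean_hr:.1f} bpm")

# Fixation dispersion: spread of fixation points (x, y in pixels) around their centroid.
fixations = np.array([[640, 360], [655, 350], [630, 372], [648, 365], [900, 410]], dtype=float)
dispersion = np.linalg.norm(fixations - fixations.mean(axis=0), axis=1).mean()
print(f"Mean fixation dispersion: {dispersion:.1f} px")
```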
2.2. Cognitive Load in Control Rooms
In emergency-centered scenarios, inefficient information presentation and suboptimal situational awareness can potentially lead to an overload of cognitive demand. To deal with such situations, operators deploy strategies that circumvent information complexity. The strategies identified and observed are omission, reducing precision, filtering, extrapolation, similarity matching, random trial and error, escape, and queuing (Hollnagel and Woods, 2005). These strategies can lead to failure of the system and, while lowering cognitive load, may also deteriorate the necessary situational awareness. Both excessive information and less information than required prolonged the time needed to diagnose the fault and clear the alarms (Dadashi et al., 2016). Interestingly, the alarm episodes with "high information" took longer than those with "low information". When observing nuclear power plant operators, it was noted that more than 50% of alarms were redundant and were actually decreasing the situational awareness of the operators (Mumaw et al., 2000). Studies also indicate that a significant number of operators could not react appropriately to critical alarms and demonstrated "inattentional deafness" due to their limited attentional resources when performing critical tasks (Giraudet et al., 2014). One way to limit cognitive load is to use cues. Cues refer to the specialized associations between specific situations and environmental features or objects (Brouwers et al., 2016). The association of cue utilization with cognitive resource consumption was observed for DNSP (Distribution Network Service Provider) control room operators (Sturman et al., 2019). Operators with higher cue utilization showed a lower cognitive load. This was supported by the observation that operators with higher cue utilization showed smaller rises in cerebral oxygenation in the prefrontal cortex from baseline, indicating consumption of cognitive resources at a slower rate (Sturman et al., 2019). Cognitive resources are limited, and a system should be designed to conserve them as much as possible, for they might be needed should a critical situation arise. Dehais et al. (2014) concluded that earlier exposure to a critical event enhanced subsequent alarm detection for a similar event. In this research article, we propose guidelines to limit cognitive load in the specific context of network control rooms.
2.3. Guidelines for Control Room System Design
Liu et al. (2016) observed that the conventional energy management system does not have the appropriate functions to provide adequate situational awareness, as there is a lack of understanding of the dispatcher's thought process. Endsley (1995) emphasized that the interface design should be situational awareness oriented, rather than simply technically oriented, so operators can quickly and efficiently perceive, comprehend, and predict the situation and make more informed decisions. According to Giri et al.
(2012), the future of grid management is moving away from the current "observe and control" reactive paradigm toward a more integrated proactive paradigm, one that does not just indicate problems but proposes "corrective actions." For instance, a new energy management system was deployed at RTE (Réseau de Transport d'Électricité, the French transmission system operator) that includes a predictive model anticipating conditions for the next 48 h (Astic et al., 2018). Whilst automation technology and advances in power electronics have greatly enhanced the reliability of electrical equipment, humans are still essential to the operation and decision-making process. Evidence from problem events indicates that many are partially or solely due to human error. Analysis of the North American blackout of 2003, for example, identified that one of the reasons for the eventual cascading failure and blackout was a lack of monitoring of the state of the grid (Muir and Lopatto, 2004). Automation may lead to lower cognitive load but may also deteriorate the situational awareness of the operator; hence, ideally, an adaptive automation system design would keep the human operator "in the loop" (Tsang and Vidulich, 2006). The ideal system design would support reduced cognitive load but increased situational awareness (Endsley, 1995). | [
"27064669",
"25029890",
"27386425",
"26360199",
"30216385",
"2204508",
"11228350",
"8223408",
"10917145",
"8050404",
"7808878",
"29670763",
"32719593",
"29034226",
"31507501",
"31366058",
"8849491",
"3397865",
"27933012",
"18689052",
"25442818"
] | [
{
"pmid": "27064669",
"title": "Cue Utilization and Cognitive Load in Novel Task Performance.",
"abstract": "This study was designed to examine whether differences in cue utilization were associated with differences in performance during a novel, simulated rail control task, and whether these differences reflected a reduction in cognitive load. Two experiments were conducted, the first of which involved the completion of a 20-min rail control simulation that required participants to re-route trains that periodically required a diversion. Participants with a greater level of cue utilization recorded a consistently greater response latency, consistent with a strategy that maintained accuracy, but reduced the demands on cognitive resources. In the second experiment, participants completed the rail task, during which a concurrent, secondary task was introduced. The results revealed an interaction, whereby participants with lesser levels of cue utilization recorded an increase in response latency that exceeded the response latency recorded for participants with greater levels of cue utilization. The relative consistency of response latencies for participants with greater levels of cue utilization, across all blocks, despite the imposition of a secondary task, suggested that those participants with greater levels of cue utilization had adopted a strategy that was effectively minimizing the impact of additional sources of cognitive load on their performance."
},
{
"pmid": "25029890",
"title": "Failure to detect critical auditory alerts in the cockpit: evidence for inattentional deafness.",
"abstract": "OBJECTIVE\nThe aim of this study was to test whether inattentional deafness to critical alarms would be observed in a simulated cockpit.\n\n\nBACKGROUND\nThe inability of pilots to detect unexpected changes in their auditory environment (e.g., alarms) is a major safety problem in aeronautics. In aviation, the lack of response to alarms is usually not attributed to attentional limitations, but rather to pilots choosing to ignore such warnings due to decision biases, hearing issues, or conscious risk taking.\n\n\nMETHOD\nTwenty-eight general aviation pilots performed two landings in a flight simulator. In one scenario an auditory alert was triggered alone, whereas in the other the auditory alert occurred while the pilots dealt with a critical windshear.\n\n\nRESULTS\nIn the windshear scenario, II pilots (39.3%) did not report or react appropriately to the alarm whereas all the pilots perceived the auditory warning in the no-windshear scenario. Also, of those pilots who were first exposed to the no-windshear scenario and detected the alarm, only three suffered from inattentional deafness in the subsequent windshear scenario.\n\n\nCONCLUSION\nThese findings establish inattentional deafness as a cognitive phenomenon that is critical for air safety. Pre-exposure to a critical event triggering an auditory alarm can enhance alarm detection when a similar event is encountered subsequently.\n\n\nAPPLICATION\nCase-based learning is a solution to mitigate auditory alarm misperception."
},
{
"pmid": "27386425",
"title": "Assessment of operators' mental workload using physiological and subjective measures in cement, city traffic and power plant control centers.",
"abstract": "BACKGROUND\nThe present study aimed to evaluate the operators' mental workload (MW) of cement, city traffic control and power plant control centers using subjective and objective measures during system vital parameters monitoring.\n\n\nMETHODS\nThis cross-sectional study was conducted from June 2014 to February 2015 at the cement, city traffic control and power plant control centers. Electrocardiography and electroencephalography data were recorded from forty males during performing their daily working in resting, low mental workload (LMW), high mental workload (HMW) and recovery conditions (each block 5 minutes). The NASA-Task Load Index (TLX) was used to evaluate the subjective workload of the operators.\n\n\nRESULTS\nThe results showed that increasing MW had a significant effect on the operators subjective responses in two conditions ([1,53] = 216.303, P < 0.001, η2 = 0.803). Also,the Task-MW interaction effect on operators subjective responses was significant (F [3, 53] = 12.628,P < 0.001, η2 = 0.417). Analysis of repeated measures analysis of variance (ANOVA) indicated that increasing mental demands had a significant effect on heart rate, low frequency/high frequency ratio, theta and alpha band activity.\n\n\nCONCLUSION\nThe results suggested that when operators' mental demands especially in traffic control and power plant tasks increased, their mental fatigue and stress level increased and their mental health deteriorated. Therefore, it may be necessary to implement an ergonomic program or administrative control to manage mental probably health in these control centers. Furthermore, by evaluating MW, the control center director can organize the human resources for each MW condition to sustain the appropriate performance as well as improve system functions."
},
{
"pmid": "26360199",
"title": "Effects of mental workload on physiological and subjective responses during traffic density monitoring: A field study.",
"abstract": "This study evaluated operators' mental workload while monitoring traffic density in a city traffic control center. To determine the mental workload, physiological signals (ECG, EMG) were recorded and the NASA-Task Load Index (TLX) was administered for 16 operators. The results showed that the operators experienced a larger mental workload during high traffic density than during low traffic density. The traffic control center stressors caused changes in heart rate variability features and EMG amplitude, although the average workload score was significantly higher in HTD conditions than in LTD conditions. The findings indicated that increasing traffic congestion had a significant effect on HR, RMSSD, SDNN, LF/HF ratio, and EMG amplitude. The results suggested that when operators' workload increases, their mental fatigue and stress level increase and their mental health deteriorate. Therefore, it maybe necessary to implement an ergonomic program to manage mental health. Furthermore, by evaluating mental workload, the traffic control center director can organize the center's traffic congestion operators to sustain the appropriate mental workload and improve traffic control management."
},
{
"pmid": "30216385",
"title": "Eye tracking cognitive load using pupil diameter and microsaccades with fixed gaze.",
"abstract": "Pupil diameter and microsaccades are captured by an eye tracker and compared for their suitability as indicators of cognitive load (as beset by task difficulty). Specifically, two metrics are tested in response to task difficulty: (1) the change in pupil diameter with respect to inter- or intra-trial baseline, and (2) the rate and magnitude of microsaccades. Participants performed easy and difficult mental arithmetic tasks while fixating a central target. Inter-trial change in pupil diameter and microsaccade magnitude appear to adequately discriminate task difficulty, and hence cognitive load, if the implied causality can be assumed. This paper's contribution corroborates previous work concerning microsaccade magnitude and extends this work by directly comparing microsaccade metrics to pupillometric measures. To our knowledge this is the first study to compare the reliability and sensitivity of task-evoked pupillary and microsaccadic measures of cognitive load."
},
{
"pmid": "2204508",
"title": "Heart rate variability.",
"abstract": "Reduced heart rate variability carries an adverse prognosis in patients who have survived an acute myocardial infarction. This article reviews the physiology, technical problems of assessment, and clinical relevance of heart rate variability. The sympathovagal influence and the clinical assessment of heart rate variability are discussed. Methods measuring heart rate variability are classified into four groups, and the advantages and disadvantages of each group are described. Concentration is on risk stratification of postmyocardial infarction patients. The evidence suggests that heart rate variability is the single most important predictor of those patients who are at high risk of sudden death or serious ventricular arrhythmias."
},
{
"pmid": "11228350",
"title": "Multivariate workload evaluation combining physiological and subjective measures.",
"abstract": "This paper suggests a way to integrate different parameters into one index and results obtained by a newly developed index. The multivariate workload evaluation index, which integrates physiological parameters and one subjective parameter through Principle Components Analysis, was proposed to characterize task specific responses and individual differences in response patterns to mental tasks. Three different types of mental tasks were performed by 12 male participants. Heart rate variability, finger plethysmogram amplitude, and perspiration were used as physiological parameters. Three subscales, mental demand, temporal demand and effort out of six subscales in the NASA-Task Load Index were used as subjective scores. These parameters were standardized within each participant and then combined. It was possible to assess workload using this method from two different aspects, i.e. physiological and subjective, simultaneously."
},
{
"pmid": "8223408",
"title": "Audibility and identification of auditory alarms in the operating room and intensive care unit.",
"abstract": "The audibility and the identification of 23 auditory alarms in the intensive care unit (ICU) and 26 auditory alarms in the operating rooms (ORs) of a 214-bed Canadian teaching hospital were investigated. Digital tape recordings of the alarms were made and analysed using masked-threshold software developed at the Université de Montréal. The digital recordings were also presented to the hospital personnel responsible for monitoring these alarms on an individual basis in order to determine how many of the alarms they would be able to identify when they heard them. Several of the alarms in both areas of the hospital could mask other alarms in the same area, and many of the alarms in the operating rooms could be masked by the sound of a surgical saw or a surgical drill. The staff in the OR (anaesthetists, anaesthesia residents, and OR technologists) were able to identify a mean of between 10 and 15 of the 26 alarms found in their operating theatres. The ICU nurses were able to identify a mean of between 9 and 14 of the 23 alarms found in their ICU. Alarm importance was positively correlated with the frequency of alarm identification in the case of the OR, rho = 0.411, but was not significantly correlated in the case of the ICU, rho = 0.155. This study demonstrates the poor design of auditory warning signals in hospitals and the need for standardization of alarms on medical equipment."
},
{
"pmid": "10917145",
"title": "There is more to monitoring a nuclear power plant than meets the eye.",
"abstract": "A fundamental challenge in studying cognitive systems in context is how to move from the specific work setting studied to a more general understanding of distributed cognitive work and how to support it. We present a series of cognitive field studies that illustrate one response to this challenge. Our focus was on how nuclear power plant (NPP) operators monitor plant state during normal operating conditions. We studied operators at two NPPs with different control room interfaces. We identified strong consistencies with respect to factors that made monitoring difficult and the strategies that operators have developed to facilitate monitoring. We found that what makes monitoring difficult is not the need to identify subtle abnormal indications against a quiescent background, but rather the need to identify and pursue relevant findings against a noisy background. Operators devised proactive strategies to make important information more salient or reduce meaningless change, create new information, and off-load some cognitive processing onto the interface. These findings emphasize the active problem-solving nature of monitoring, and highlight the use of strategies for knowledge-driven monitoring and the proactive adaptation of the interface to support monitoring. Potential applications of this research include control room design for process control and alarm systems and user interfaces for complex systems."
},
{
"pmid": "8050404",
"title": "Physical, mental, emotional, and subjective workload components in train drivers.",
"abstract": "This study, using 12 train drivers on a high speed track and 11 drivers on a mountain track, tried to differentiate between the physical, emotional, mental, and subjective workload components imposed on the drivers during work. With the simultaneous recording and on-line analysis of heart rate and physical activity, the emotional component in terms of the so-called additional heart rate was separated from the physical component. Mental workload was calculated by the heart rate variability and by shifts in the T-wave amplitude of the ECG. Speed of the train, mode of driving, and stress of the situation were rated by two observers who accompanied the drivers in the cabin. During speeds up to 100 km/h as compared to standstills no heart rate changes occurred, but with speeds from 100 km/h up to 200 km/h heart rate decreased indicating a monotony effect. However, heart rate variability, and T-wave amplitude indicated higher mental load during driving in most speed categories. Starting the train and coming to a halt showed greater emotional workload as compared to moving. Observer ratings of stress and subjective ratings of stress by the drivers revealed several discrepancies. Discrepancies were also seen between workload as indicated by the physiological parameters, and corresponding stress ratings by the observers or by the drivers."
},
{
"pmid": "7808878",
"title": "Measurement of cognitive load in instructional research.",
"abstract": "The results of two of our recent empirical studies were considered to assess the usefulness of subjective ratings and cardiovascular measures of mental effort in instructional research. Based on its reliability and sensitivity, the subjective rating-scale technique met the requirements to be useful in instructional research whereas the cardiovascular technique did not. It was concluded that the usefulness of both measurement techniques in instructional research needs to be investigated further."
},
{
"pmid": "29670763",
"title": "Evaluation of cognitive load and emotional states during multidisciplinary critical care simulation sessions.",
"abstract": "BACKGROUND\nThe simulation in critical care setting involves a heterogeneous group of participants with varied background and experience. Measuring the impacts of simulation on emotional state and cognitive load in this setting is not often performed. The feasibility of such measurement in the critical care setting needs further exploration.\n\n\nMETHODS\nMedical and nursing staff with varying levels of experience from a tertiary intensive care unit participated in a standardised clinical simulation scenario. The emotional state of each participant was assessed before and after completion of the scenario using a validated eight-item scale containing bipolar oppositional descriptors of emotion. The cognitive load of each participant was assessed after the completion of the scenario using a validated subjective rating tool.\n\n\nRESULTS\nA total of 103 medical and nursing staff participated in the study. The participants felt more relaxed (-0.28±1.15 vs 0.14±1, P<0.005; d=0.39), excited (0.25±0.89 vs 0.55±0.92, P<0.005, d=0.35) and alert (0.85±0.87 vs 1.28±0.73, P<0.00001, d=0.54) following simulation. There was no difference in the mean scores for the remaining five items. The mean cognitive load for all participants was 6.67±1.41. There was no significant difference in the cognitive loads among medical staff versus nursing staff (6.61±2.3 vs 6.62±1.7; P>0.05).\n\n\nCONCLUSION\nA well-designed complex high fidelity critical care simulation scenario can be evaluated to identify the relative cognitive load of the participants' experience and their emotional state. The movement of learners emotionally from a more negative state to a positive state suggests that simulation can be an effective tool for improved knowledge transfer and offers more opportunity for dynamic thinking."
},
{
"pmid": "32719593",
"title": "Predicting Cognitive Load and Operational Performance in a Simulated Marksmanship Task.",
"abstract": "Modern operational environments can place significant demands on a service member's cognitive resources, increasing the risk of errors or mishaps due to overburden. The ability to monitor cognitive burden and associated performance within operational environments is critical to improving mission readiness. As a key step toward a field-ready system, we developed a simulated marksmanship scenario with an embedded working memory task in an immersive virtual reality environment. As participants performed the marksmanship task, they were instructed to remember numbered targets and recall the sequence of those targets at the end of the trial. Low and high cognitive load conditions were defined as the recall of three- and six-digit strings, respectively. Physiological and behavioral signals recorded included speech, heart rate, breathing rate, and body movement. These features were input into a random forest classifier that significantly discriminated between the low- and high-cognitive load conditions (AUC = 0.94). Behavioral features of gait were the most informative, followed by features of speech. We also showed the capability to predict performance on the digit recall (AUC = 0.71) and marksmanship (AUC = 0.58) tasks. The experimental framework can be leveraged in future studies to quantify the interaction of other types of stressors and their impact on operational cognitive and physical performance."
},
{
"pmid": "29034226",
"title": "An Overview of Heart Rate Variability Metrics and Norms.",
"abstract": "Healthy biological systems exhibit complex patterns of variability that can be described by mathematical chaos. Heart rate variability (HRV) consists of changes in the time intervals between consecutive heartbeats called interbeat intervals (IBIs). A healthy heart is not a metronome. The oscillations of a healthy heart are complex and constantly changing, which allow the cardiovascular system to rapidly adjust to sudden physical and psychological challenges to homeostasis. This article briefly reviews current perspectives on the mechanisms that generate 24 h, short-term (~5 min), and ultra-short-term (<5 min) HRV, the importance of HRV, and its implications for health and performance. The authors provide an overview of widely-used HRV time-domain, frequency-domain, and non-linear metrics. Time-domain indices quantify the amount of HRV observed during monitoring periods that may range from ~2 min to 24 h. Frequency-domain values calculate the absolute or relative amount of signal energy within component bands. Non-linear measurements quantify the unpredictability and complexity of a series of IBIs. The authors survey published normative values for clinical, healthy, and optimal performance populations. They stress the importance of measurement context, including recording period length, subject age, and sex, on baseline HRV values. They caution that 24 h, short-term, and ultra-short-term normative values are not interchangeable. They encourage professionals to supplement published norms with findings from their own specialized populations. Finally, the authors provide an overview of HRV assessment strategies for clinical and optimal performance interventions."
},
{
"pmid": "31507501",
"title": "Control Room Operators' Cue Utilization Predicts Cognitive Resource Consumption During Regular Operational Tasks.",
"abstract": "This study was designed to examine whether qualified practitioners' cue utilization is predictive of their sustained attention performance during regular operational tasks. Simulated laboratory studies have demonstrated that cue utilization differentiates cognitive load during process control tasks. However, it was previously unclear whether similar results would be demonstrated with qualified practitioners during familiar operational tasks. Australian distribution network service provider (DNSP) operators were classified with either higher or lower cue utilization based on an assessment of cue utilization within the context of electrical power distribution. During two, 20-min periods of operators' regular workdays, physiological measures of workload were assessed through changes in cerebral oxygenation in the prefrontal cortex compared to baseline, and through eye behavior metrics (fixation rates, saccade amplitude, and fixation dispersion). The results indicated that there were no statistically significant differences in eye behavior metrics, based on levels of cue utilization. However, as hypothesized, during both sessions, operators with higher cue utilization demonstrated smaller increases in cerebral oxygenation in the prefrontal cortex from baseline, compared to operators with lower cue utilization. The results are consistent with the proposition that operators with higher cue utilization experience lower cognitive load during periods of regular activity during their workday, compared to operators with lower cue utilization. Assessments of cue utilization could help identify operators who are better able to sustain attention during regular operational tasks, as well as those who may benefit from cue-based training interventions."
},
{
"pmid": "31366058",
"title": "A Systematic Review of Physiological Measures of Mental Workload.",
"abstract": "Mental workload (MWL) can affect human performance and is considered critical in the design and evaluation of complex human-machine systems. While numerous physiological measures are used to assess MWL, there appears no consensus on their validity as effective agents of MWL. This study was conducted to provide a comprehensive understanding of the use of physiological measures of MWL and to synthesize empirical evidence on the validity of the measures to discriminate changes in MWL. A systematical literature search was conducted with four electronic databases for empirical studies measuring MWL with physiological measures. Ninety-one studies were included for analysis. We identified 78 physiological measures, which were distributed in cardiovascular, eye movement, electroencephalogram (EEG), respiration, electromyogram (EMG) and skin categories. Cardiovascular, eye movement and EEG measures were the most widely used across varied research domains, with 76%, 66%, and 71% of times reported a significant association with MWL, respectively. While most physiological measures were found to be able to discriminate changes in MWL, they were not universally valid in all task scenarios. The use of physiological measures and their validity for MWL assessment also varied across different research domains. Our study offers insights into the understanding and selection of appropriate physiological measures for MWL assessment in varied human-machine systems."
},
{
"pmid": "8849491",
"title": "Diagnosticity and multidimensional subjective workload ratings.",
"abstract": "A new multidimensional subjective workload assessment instrument -- Workload Profile -- was introduced and evaluated against two unidimensional instruments -- Bedford and Psychophysical scaling. Subjects performed two laboratory tasks separately (single task) and simultaneously (dual task). The multidimensional procedure compared well with the unidimensional procedures in terms of sensitivity to task demands, concurrent validity with performance, and test-retest reliability. The results suggested that the subjective workload profiles would only have limited predictive value on performance. However, results of the canonical analysis demonstrated that the multidimensional ratings provided diagnostic information on the nature of task demands. Further, the diagnostic information was consistent with the a priori task characterization. This strongly supports the notion that mental workload is multidimensional and that subjects are capable of reporting the demands on separate workload dimensions. Theoretical implications on mental workload models and practical implications on the assessment approaches are discussed."
},
{
"pmid": "3397865",
"title": "Development and validation of brief measures of positive and negative affect: the PANAS scales.",
"abstract": "In recent studies of the structure of affect, positive and negative affect have consistently emerged as two dominant and relatively independent dimensions. A number of mood scales have been created to measure these factors; however, many existing measures are inadequate, showing low reliability or poor convergent or discriminant validity. To fill the need for reliable and valid Positive Affect and Negative Affect scales that are also brief and easy to administer, we developed two 10-item mood scales that comprise the Positive and Negative Affect Schedule (PANAS). The scales are shown to be highly internally consistent, largely uncorrelated, and stable at appropriate levels over a 2-month time period. Normative data and factorial and external evidence of convergent and discriminant validity for the scales are also presented."
},
{
"pmid": "27933012",
"title": "Degrees of Freedom in Planning, Running, Analyzing, and Reporting Psychological Studies: A Checklist to Avoid p-Hacking.",
"abstract": "The designing, collecting, analyzing, and reporting of psychological studies entail many choices that are often arbitrary. The opportunistic use of these so-called researcher degrees of freedom aimed at obtaining statistically significant results is problematic because it enhances the chances of false positive results and may inflate effect size estimates. In this review article, we present an extensive list of 34 degrees of freedom that researchers have in formulating hypotheses, and in designing, running, analyzing, and reporting of psychological research. The list can be used in research methods education, and as a checklist to assess the quality of preregistrations and to determine the potential for bias due to (arbitrary) choices in unregistered studies."
},
{
"pmid": "18689052",
"title": "Multiple resources and mental workload.",
"abstract": "OBJECTIVE\nThe objective is to lay out the rationale for multiple resource theory and the particular 4-D multiple resource model, as well as to show how the model is useful both as a design tool and as a means of predicting multitask workload overload.\n\n\nBACKGROUND\nI describe the discoveries and developments regarding multiple resource theory that have emerged over the past 50 years that contribute to performance and workload prediction.\n\n\nMETHOD\nThe article presents a history of the multiple resource concept, a computational version of the multiple resource model applied to multitask driving simulation data, and the relation of multiple resources to workload.\n\n\nRESULTS\nResearch revealed the importance of the four dimensions in accounting for task interference and the association of resources with brain structure. Multiple resource models yielded high correlations between model predictions and data. Lower correlations also identified the existence of additional resources.\n\n\nCONCLUSION\nThe model was shown to be partially relevant to the concept of mental workload, with greatest relevance to performance breakdowns related to dual-task overload. Future challenges are identified.\n\n\nAPPLICATION\nThe most important application of the multiple resource model is to recommend design changes when conditions of multitask resource overload exist."
},
{
"pmid": "25442818",
"title": "State of science: mental workload in ergonomics.",
"abstract": "Mental workload (MWL) is one of the most widely used concepts in ergonomics and human factors and represents a topic of increasing importance. Since modern technology in many working environments imposes ever more cognitive demands upon operators while physical demands diminish, understanding how MWL impinges on performance is increasingly critical. Yet, MWL is also one of the most nebulous concepts, with numerous definitions and dimensions associated with it. Moreover, MWL research has had a tendency to focus on complex, often safety-critical systems (e.g. transport, process control). Here we provide a general overview of the current state of affairs regarding the understanding, measurement and application of MWL in the design of complex systems over the last three decades. We conclude by discussing contemporary challenges for applied research, such as the interaction between cognitive workload and physical workload, and the quantification of workload 'redlines' which specify when operators are approaching or exceeding their performance tolerances."
}
] |
Frontiers in Neurorobotics | null | PMC8995800 | 10.3389/fnbot.2022.851471 | Schizophrenia-Mimicking Layers Outperform Conventional Neural Network Layers | We have reported nanometer-scale three-dimensional studies of brain networks of schizophrenia cases and found that their neurites are thin and tortuous when compared to healthy controls. This suggests that connections between distal neurons are suppressed in microcircuits of schizophrenia cases. In this study, we applied these biological findings to the design of a schizophrenia-mimicking artificial neural network to simulate the observed connection alteration in the disorder. Neural networks that have a “schizophrenia connection layer” in place of a fully connected layer were subjected to image classification tasks using the MNIST and CIFAR-10 datasets. The results revealed that the schizophrenia connection layer is tolerant to overfitting and outperforms a fully connected layer. The outperformance was observed only for networks using band matrices as weight windows, indicating that the shape of the weight matrix is relevant to the network performance. A schizophrenia convolution layer was also tested using the VGG configuration, showing that 60% of the kernel weights of the last three convolution layers can be eliminated without loss of accuracy. The schizophrenia layers can be used instead of conventional layers without any change in the network configuration and training procedures; hence, neural networks can easily take advantage of these layers. The results of this study suggest that the connection alteration found in schizophrenia is not a burden to the brain, but has functional roles in brain performance. | Related WorksNeural networks were first developed by incorporating biological findings, but until now, the structural aspects of neurons of patients with psychiatric disorders have not been incorporated in studies on artificial intelligence. This is probably because the neuropathology of psychiatric disorders had not been three-dimensionally delineated (Itokawa et al., 2020) before our recent reports regarding the nanometer-scale structure of neurons of schizophrenia cases (Mizutani et al., 2019, 2021). A method called “optimal brain damage” (Le Cun et al., 1990) has been proposed to remove unimportant weights to reduce the number of parameters, although its relation to biological findings, such as those regarding brain injuries, has not been explicitly described.Parameter reduction and network pruning have been suggested as strategies to simplify the network. It has been reported that simultaneous regularization during training can reduce network connections while maintaining competitive performance (Scardapane et al., 2017). A method to regularize the network structure that includes the filter shapes and layer depth has been reported to allow the network to learn more compact structures without loss of accuracy (Wen et al., 2016). A study on network pruning has suggested that careful evaluations of the structured pruning method are needed (Liu et al., 2018). Elimination of zero weights after training has been proposed as a way to simplify the network (Yaguchi et al., 2018). Improvements in accuracy have been reported for regularized networks (Scardapane et al., 2017; Yaguchi et al., 2018), although these parameter reduction methods require dedicated algorithms or procedures to remove parameters during training.Regularization on the basis of filter structure has been reported. 
The classification accuracy can be improved by using customized filter shapes in the convolution layer (Li et al., 2017). The shape of the filters can also be regularized from the symmetry in the filter matrix (Anselmi et al., 2016). It has been reported that a low-rank approximation can be used to regularize the weights of the convolution layers (Yu et al., 2017; Idelbayev and Carreira-Perpiñán, 2020). Kernel-wise removal of weights has also been proposed as a regularization method for convolutional networks (Berthelier et al., 2020). These reports focus on the shape of the image dimensions, while the schizophrenia-mimicking modification of convolution layers proposed in this study is performed by masking the weight matrix with a band matrix defined along the channel dimensions but not along the image dimensions (a minimal sketch of this banded masking is given after this entry). This strategy allowed us to eliminate 60% of the weights of the last three convolution layers of the VGG16 network without loss of accuracy (Figure 4B). We suggest that the real human brain has already implemented this simple and efficient strategy in the process of its biological evolution. | [
"26687219",
"7370364",
"20174514",
"32166560",
"9396946",
"14403679",
"31725933",
"18148484",
"31760370",
"11596847",
"30755587",
"33446640",
"26053403",
"13602029",
"25823399",
"18270515"
] | [
{
"pmid": "26687219",
"title": "Architectonic Mapping of the Human Brain beyond Brodmann.",
"abstract": "Brodmann has pioneered structural brain mapping. He considered functional and pathological criteria for defining cortical areas in addition to cytoarchitecture. Starting from this idea of structural-functional relationships at the level of cortical areas, we will argue that the cortical architecture is more heterogeneous than Brodmann's map suggests. A triple-scale concept is proposed that includes repetitive modular-like structures and micro- and meso-maps. Criteria for defining a cortical area will be discussed, considering novel preparations, imaging and optical methods, 2D and 3D quantitative architectonics, as well as high-performance computing including analyses of big data. These new approaches contribute to an understanding of the brain on multiple levels and challenge the traditional, mosaic-like segregation of the cerebral cortex."
},
{
"pmid": "7370364",
"title": "Neocognitron: a self organizing neural network model for a mechanism of pattern recognition unaffected by shift in position.",
"abstract": "A neural network model for a mechanism of visual pattern recognition is proposed in this paper. The network is self-organized by \"learning without a teacher\", and acquires an ability to recognize stimulus patterns based on the geometrical similarity (Gestalt) of their shapes without affected by their positions. This network is given a nickname \"neocognitron\". After completion of self-organization, the network has a structure similar to the hierarchy model of the visual nervous system proposed by Hubel and Wiesel. The network consists of an input layer (photoreceptor array) followed by a cascade connection of a number of modular structures, each of which is composed of two layers of cells connected in a cascade. The first layer of each module consists of \"S-cells\", which show characteristics similar to simple cells or lower order hypercomplex cells, and the second layer consists of \"C-cells\" similar to complex cells or higher order hypercomplex cells. The afferent synapses to each S-cell have plasticity and are modifiable. The network has an ability of unsupervised learning: We do not need any \"teacher\" during the process of self-organization, and it is only needed to present a set of stimulus patterns repeatedly to the input layer of the network. The network has been simulated on a digital computer. After repetitive presentation of a set of stimulus patterns, each stimulus pattern has become to elicit an output only from one of the C-cells of the last layer, and conversely, this C-cell has become selectively responsive only to that stimulus pattern. That is, none of the C-cells of the last layer responds to more than one stimulus pattern. The response of the C-cells of the last layer is not affected by the pattern's position at all. Neither is it affected by a small change in shape nor in size of the stimulus pattern."
},
{
"pmid": "20174514",
"title": "What is schizophrenia: A neurodevelopmental or neurodegenerative disorder or a combination of both? A critical analysis.",
"abstract": "The etiology of schizophrenia has been the focus of intensive research for a long time. Perspectives have changed drastically with the development of new investigative techniques. Clinical observations made by Kraepelin, Clouston, Bender, and Watt are now being complemented by neuroimaging and genetic studies to prove the neurodevelopmental hypothesis. At the same time, neuropathological and longitudinal studies of schizophrenia often support a neurodegenerative hypothesis. To provide a theoretical basis to the available evidence, another hypothesis called the progressive neurodevelopmental model has also emerged. This review presents some key evidence supporting each of these theories followed by a critical analysis of each."
},
{
"pmid": "32166560",
"title": "Correction to: Urine Sediment Recognition Method Based on Multi-View Deep Residual Learning in Microscopic Image.",
"abstract": "In the original version of this article, the authors' units in the affiliation section are, unfortunately, incorrect. Jining No.1 people's hospital and Affiliated Hospital of Jining Medical University are two independent units and should not have been combined into one affiliation."
},
{
"pmid": "9396946",
"title": "Synaptic elimination, neurodevelopment, and the mechanism of hallucinated \"voices\" in schizophrenia.",
"abstract": "OBJECTIVE\nAfter peaking during childhood, synaptic density in the human frontal cortex declines by 30%-40% during adolescence because of progressive elimination of synaptic connections. The characteristic age at onset of schizophrenia--late adolescence and early adulthood--suggests that the disorder could arise from irregularities involving this neurodevelopmental process.\n\n\nMETHOD\nA computer simulation of a speech perception neural network was developed. Connections within the working memory component of the network were eliminated on the basis of a \"Darwinian rule\" in order to model loss of synapses. As a comparison, neuronal cell death, also postulated as being linked to both neurodevelopment and schizophrenia, was simulated. The authors determined whether these alterations at low levels could enhance perceptual capacity and at high levels produce spontaneous speech percepts that simulate hallucinated speech or \"voices.\"\n\n\nRESULTS\nEliminating up to 65% of working memory connections improved perceptual ability; beyond that point, network performance declined and speech hallucinations emerged. Simulating excitotoxic neuronal loss at low levels also improved network performance, but in excess it did not produce hallucinations.\n\n\nCONCLUSIONS\nThe model demonstrates perceptual advantages of selective synaptic elimination as well as selective neuronal loss, suggesting a functional explanation for these aspects of neurodevelopment. The model predicts that psychosis arises from a pathological extension of one of these neurodevelopmental trends, namely, synaptic elimination."
},
{
"pmid": "31725933",
"title": "Cutting-edge morphological studies of post-mortem brains of patients with schizophrenia and potential applications of X-ray nanotomography (nano-CT).",
"abstract": "Kraepelin expected that the neuropathological hallmark of schizophrenia would be identified when he proposed the concept of dementia praecox 120 years ago. Although a variety of neuropathological findings have been reported since then, a consensus regarding the pathology of schizophrenia has not been established. The discrepancies have mainly been ascribed to limitations in the disease definition of schizophrenia that accompanies etiological heterogeneity and to the incompleteness of the visualization methodology and technology for biochemical analyses. However, macroscopic structural changes in the schizophrenia brain, such as volumetric changes of brain regions, must entail structural changes to cells composing the brain. This paper overviews neuropathology of schizophrenia and also summarizes recent application of synchrotron radiation nanotomography (nano-CT) to schizophrenia brain tissues. Geometric parameters of neurites determined from the 3-D nano-CT images of brain tissues indicated that the curvature of neurites in schizophrenia cases is significantly higher than that of controls. The schizophrenia case with the highest curvature carried a frameshift mutation in the glyoxalase 1 gene and exhibited treatment resistance. Controversies in the neuropathology of schizophrenia are mainly due to the difficulty in reproducing histological findings reported for schizophrenia. Nano-CT visualization using synchrotron radiation and subsequent geometric analysis should shed light on this long-standing question about the neuropathology of schizophrenia."
},
{
"pmid": "31760370",
"title": "A review on neural network models of schizophrenia and autism spectrum disorder.",
"abstract": "This survey presents the most relevant neural network models of autism spectrum disorder and schizophrenia, from the first connectionist models to recent deep neural network architectures. We analyzed and compared the most representative symptoms with its neural model counterpart, detailing the alteration introduced in the network that generates each of the symptoms, and identifying their strengths and weaknesses. We additionally cross-compared Bayesian and free-energy approaches, as they are widely applied to model psychiatric disorders and share basic mechanisms with neural networks. Models of schizophrenia mainly focused on hallucinations and delusional thoughts using neural dysconnections or inhibitory imbalance as the predominating alteration. Models of autism rather focused on perceptual difficulties, mainly excessive attention to environment details, implemented as excessive inhibitory connections or increased sensory precision. We found an excessively tight view of the psychopathologies around one specific and simplified effect, usually constrained to the technical idiosyncrasy of the used network architecture. Recent theories and evidence on sensorimotor integration and body perception combined with modern neural network architectures could offer a broader and novel spectrum to approach these psychopathologies. This review emphasizes the power of artificial neural networks for modeling some symptoms of neurological disorders but also calls for further developing of these techniques in the field of computational psychiatry."
},
{
"pmid": "11596847",
"title": "Neural development, cell-cell signaling, and the \"two-hit\" hypothesis of schizophrenia.",
"abstract": "To account for the complex genetics, the developmental biology, and the late adolescent/early adulthood onset of schizophrenia, the \"two-hit\" hypothesis has gained increasing attention. In this model, genetic or environmental factors disrupt early central nervous system (CNS) development. These early disruptions produce long-term vulnerability to a \"second hit\" that then leads to the onset of schizophrenia symptoms. The cell-cell signaling pathways involved in nonaxial induction, morphogenesis, and differentiation in the brain, as well as in the limbs and face, could be targets for a \"first hit\" during early development. These same pathways, redeployed for neuronal maintenance rather than morphogenesis, may be targets for a \"second hit\" in the adolescent or adult brain. Furthermore, dysregulation of cell-cell signaling by a \"first hit\" may prime the CNS for a pathologic response to a \"second hit\" via the same signaling pathway. Thus, parallel disruption of cell-cell signaling in both the developing and the mature CNS provides a plausible way of integrating genetic, developmental, and environmental factors that contribute to vulnerability and pathogenesis in schizophrenia."
},
{
"pmid": "30755587",
"title": "Three-dimensional alteration of neurites in schizophrenia.",
"abstract": "Psychiatric symptoms of schizophrenia suggest alteration of cerebral neurons. However, the physical basis of the schizophrenia symptoms has not been delineated at the cellular level. Here, we report nanometer-scale three-dimensional analysis of brain tissues of schizophrenia and control cases. Structures of cerebral tissues of the anterior cingulate cortex were visualized with synchrotron radiation nanotomography. Tissue constituents visualized in the three-dimensional images were traced to build Cartesian coordinate models of tissue constituents, such as neurons and blood vessels. The obtained Cartesian coordinates were used for calculating curvature and torsion of neurites in order to analyze their geometry. Results of the geometric analyses indicated that the curvature of neurites is significantly different between schizophrenia and control cases. The mean curvature of distal neurites of the schizophrenia cases was ~1.5 times higher than that of the controls. The schizophrenia case with the highest neurite curvature carried a frame shift mutation in the GLO1 gene, suggesting that oxidative stress due to the GLO1 mutation caused the structural alteration of the neurites. The differences in the neurite curvature result in differences in the spatial trajectory and hence alter neuronal circuits. It has been shown that the anterior cingulate cortex analyzed in this study has emotional and cognitive functions. We suggest that the structural alteration of neurons in the schizophrenia cases should reflect psychiatric symptoms of schizophrenia."
},
{
"pmid": "33446640",
"title": "Structural diverseness of neurons between brain areas and between cases.",
"abstract": "The cerebral cortex is composed of multiple cortical areas that exert a wide variety of brain functions. Although human brain neurons are genetically and areally mosaic, the three-dimensional structural differences between neurons in different brain areas or between the neurons of different individuals have not been delineated. Here we report a nanometer-scale geometric analysis of brain tissues of the superior temporal gyrus of schizophrenia and control cases. The results of the analysis and a comparison with results for the anterior cingulate cortex indicated that (1) neuron structures are significantly dissimilar between brain areas and that (2) the dissimilarity varies from case to case. The structural diverseness was mainly observed in terms of the neurite curvature that inversely correlates with the diameters of the neurites and spines. The analysis also revealed the geometric differences between the neurons of the schizophrenia and control cases. The schizophrenia cases showed a thin and tortuous neuronal network compared with the controls, suggesting that the neuron structure is associated with the disorder. The area dependency of the neuron structure and its diverseness between individuals should represent the individuality of brain functions."
},
{
"pmid": "26053403",
"title": "Polygenic risk scores for schizophrenia and bipolar disorder predict creativity.",
"abstract": "We tested whether polygenic risk scores for schizophrenia and bipolar disorder would predict creativity. Higher scores were associated with artistic society membership or creative profession in both Icelandic (P = 5.2 × 10(-6) and 3.8 × 10(-6) for schizophrenia and bipolar disorder scores, respectively) and replication cohorts (P = 0.0021 and 0.00086). This could not be accounted for by increased relatedness between creative individuals and those with psychoses, indicating that creativity and psychosis share genetic roots."
},
{
"pmid": "25823399",
"title": "Creativity and positive symptoms in schizophrenia revisited: Structural connectivity analysis with diffusion tensor imaging.",
"abstract": "Both creativity and schizotypy are suggested to be manifestations of the hyperactivation of unusual or remote concepts/words. However, the results of studies on creativity in schizophrenia are diverse, possibly due to the multifaceted aspects of creativity and difficulties of differentiating adaptive creativity from pathological schizotypy/positive symptoms. To date, there have been no detailed studies comprehensively investigating creativity, positive symptoms including delusions, and their neural bases in schizophrenia. In this study, we investigated 43 schizophrenia and 36 healthy participants using diffusion tensor imaging. We used idea, design, and verbal (semantic and phonological) fluency tests as creativity scores and Peters Delusions Inventory as delusion scores. Subsequently, we investigated group differences in every psychological score, correlations between fluency and delusions, and relationships between these scores and white matter integrity using tract-based spatial statistics (TBSS). In schizophrenia, idea and verbal fluency were significantly lower in general, and delusion score was higher than in healthy controls, whereas there were no group differences in design fluency. We also found positive correlation between phonological fluency and delusions in schizophrenia. By correlation analyses using TBSS, we found that the anterior part of corpus callosum was the substantially overlapped area, negatively correlated with both phonological fluency and delusion severity. Our results suggest that the anterior interhemispheric dysconnectivity might be associated with executive dysfunction, and disinhibited automatic spreading activation in the semantic network was manifested as uncontrollable phonological fluency or delusions. This dysconnectivity could be one possible neural basis that differentiates pathological positive symptoms from adaptive creativity."
},
{
"pmid": "18270515",
"title": "Pyramidal neurons: dendritic structure and synaptic integration.",
"abstract": "Pyramidal neurons are characterized by their distinct apical and basal dendritic trees and the pyramidal shape of their soma. They are found in several regions of the CNS and, although the reasons for their abundance remain unclear, functional studies--especially of CA1 hippocampal and layer V neocortical pyramidal neurons--have offered insights into the functions of their unique cellular architecture. Pyramidal neurons are not all identical, but some shared functional principles can be identified. In particular, the existence of dendritic domains with distinct synaptic inputs, excitability, modulation and plasticity appears to be a common feature that allows synapses throughout the dendritic tree to contribute to action-potential generation. These properties support a variety of coincidence-detection mechanisms, which are likely to be crucial for synaptic integration and plasticity."
}
] |
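The "schizophrenia connection layer" described in the Frontiers in Neurorobotics entry above replaces a fully connected layer with one whose weight matrix is masked by a band matrix, and the convolution variant applies an analogous band mask along the channel dimensions. The paper's exact window widths and implementation are not reproduced here; the following NumPy sketch only illustrates the general banded-masking idea, and all names (band_mask, BandedDense, the bandwidth value) are illustrative assumptions.

```python
import numpy as np

def band_mask(n_in: int, n_out: int, bandwidth: float) -> np.ndarray:
    """Binary band matrix: keep weight (i, j) only when the normalized
    row and column indices lie within `bandwidth` of the diagonal."""
    rows = np.arange(n_in)[:, None] / max(n_in - 1, 1)
    cols = np.arange(n_out)[None, :] / max(n_out - 1, 1)
    return (np.abs(rows - cols) <= bandwidth).astype(np.float32)

class BandedDense:
    """Fully connected layer whose weights are multiplied by a fixed band
    mask, so connections far from the diagonal are forced to zero."""
    def __init__(self, n_in: int, n_out: int, bandwidth: float = 0.2, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.mask = band_mask(n_in, n_out, bandwidth)
        self.w = rng.normal(0.0, 0.05, size=(n_in, n_out)) * self.mask
        self.b = np.zeros(n_out)

    def forward(self, x: np.ndarray) -> np.ndarray:
        # Re-applying the mask keeps masked weights at zero even if an
        # optimizer later updates self.w densely.
        return x @ (self.w * self.mask) + self.b

layer = BandedDense(784, 256, bandwidth=0.2)
y = layer.forward(np.random.default_rng(1).random((4, 784)))
print(y.shape, "active weight fraction:", float(layer.mask.mean()))
```

The fraction of retained weights is set by the bandwidth parameter, which echoes the entry's report that a large share of connections can be removed without loss of accuracy.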
Education and Information Technologies | 35431600 | PMC8995908 | 10.1007/s10639-022-11038-z | Investigating the effectiveness of a HyFlex cyber security training in a developing country: A case study | HyFlex termed as hybrid-flexibility is a teaching approach where teachers and students have the alternative to participate in planned courses either remotely or face-to-face. This study examines the effectiveness of the HyFlex pedagogical method to teach highly interactive digital and face-to-face cyber security training in Nigeria amidst the pandemic. Data was collected using a survey questionnaire from 113 participants to evaluate student’s perception towards the effectiveness of the Hyflex method using physical and Zoom teleconferencing which allow students to participate remotely in the cyber security training. The developed questionnaire comprising both open-ended and Likert-style questions was administered to purposely sampled participants. Findings from this study presents implementation details on how the HyFlex teaching model was implemented from a developing country context. Besides, findings present challenges and opportunities experienced with adopting the HyFlex pedagogical model, and also offers recommendations to other instructors for employing this teaching model. Findings also reveal that although there were challenges experienced by the students who attended via online such as connectivity issues, competency in using some features in Zoom-conferencing, etc. The students did appreciate the flexibility HyFlex teaching afforded, indicating that HyFlex is a promising teaching approach for fostering engagement of students especially in large-group cyber security courses. | Related worksThe HyFlex learning format has been studied over the past few years. It was originally developed by Dr. Brain Beatty for his graduate course at San Francisco State University (Beatty, 2007). A survey of the literature as shown in Table 1 depicts a summary of prior studies on HyFlex adoption in higher education.Table 1Survey of prior studies on HyFlex adoption in higher educationAuthors, year, and contributionHyFlex approachMethodology adoptedContext and CountryKohnke and Moorhouse (2021) Adopted HyFlex in higher education in response to COVID-19 based on students’ perspectiveUtilized face to face, Zoom video-conferencing application, other digital tools and mixed approachQualitative approach interview-Students’ perspectives-Hong KongVilhauer (2021) provided a background viewpoint on HyFlex and the deployment of a modified HyFlex modelSynchronous via Zoom breakout rooms and asynchronous with students using Padlet and Blackboard discussion boards and other online toolsNot reported- Content, engagement, and assessment strategies for students-United States of America (USA)Keiper et al. (2021) examined student perceptions on the usefulness of Flipgrid for HyFlex learningOnline video discussionboard learning platformQuestionnaire-Students’ perceptions-USAStraub (2021) designed a HyFlex course for defensive securityHyFlex-based model was used which comprises of quizzes quizzes, experiential, lab exercises, and discussion boardsNot reported-Cybersecurity education-USAFoust and Ruzybayev (2021) explored students' academic experience with Hyflex teaching modelHyFlex instructional modelSurvey-Engineering courses-USARaman et al. 
(2021) provided practical guidelines for HyFlex undergraduate teaching amidst the pandemicCOVID-19 HyFlex model and Group WorkNot reported- Facilitate effective peer collaboration in the classroom-USABrown and Tenbergen (2021) investigated teaching software quality assurance during COVID-19 based on the HyFlex approachLectures (physical class, with video recordings), face-to-face activities, group assignments, group projects, and exams via online campus management systemQuantitative evidence-Efficacy of the HyFlex educational paradigm-USALohmann et al. (2021) provided classroom management initiatives for Hyflex instructionBest practicesin Hyflex instruction for virtual and physical learningNot reported- Setting learners up for success in the hybrid learning environment-USAMiller et al. (2021) presented pandemic teaching opportunities and challenges for teaching communicationHyFlex teaching approach (online and physical)Not reported-Improve teaching in HyFlex, BlendFlex, and remote courses-USAKeshishi (2021) researched on playful reflective thinking within a HyFlex classroomHyFlex teaching approach (online and physical)Conceptual-Students’ engagement-United Kingdom (UK)Zehler et al. (2021) implemented a Hyflex simulation for creative method to unprecedented circumstancesSimulation and via online ZoomQuasi-experimental study, Case study-Support gains in critical thinking and judgment of learners-USATable 1 shows a review of survey of 11 studies that investigated HyFlex adoption in higher education. Besides, several other authors have also considered different aspects of the HyFlex learning method. For example, Kyei-Blankson and Godwyll (2010) explored the extent to which a student's needs and expectations are met in a HyFlex learning environment. They also compared instructor perspectives regarding participation and performance in HyFlex courses with previously face-to-face classes. Similarly, Abdelmalak and Parra (2016) considered student’s perspectives regarding HyFlex course design. They utilized qualitative study methods and focused on the participants' perspectives and experiences to obtain interesting results about HyFlex course design. The results obtained from the study indicated that HyFlex course design was able to accommodate student needs and their life circumstances, increase student access to course content and instruction, and gave students a sense of control over their learning.The effectiveness of HyFlex courses has been examined by Lakhal et al. (2014). They employed two categories of variables one independent variable (course delivery mode) and four dependent variables (satisfaction, performance on multiple-choice test, written exam, and continuous assessment) and a total of 376 students participated in the study by responding to an online questionnaire. The findings from the study showed that no significant difference was found between students who choose different delivery modes on satisfaction, multi-choice test, and written scores; but significant differences were observed on continuous assessment scores. Miller et al. (2013) also evaluated the effectiveness of a HyFlex instructional model, specifically designed for large, on-campus courses. In this study, a total of 161 undergraduate students participated in the pilot section of a course while a control group that was made of 168 undergraduate students enrolled in two additional sections of the course. 
The results from the study revealed that the HyFlex instructional model had no negative impact on student performance in class, either in overall learning or individual grades and that the HyFlex model performed no differently from the traditional classroom model with respect to student learning.Moreover, Binnewies and Wang (2019) explored equity and engagement methods to assist student learning in a HyFlex learning format. The approach they adopted in this study is to evaluate their teaching components according to student participation, and the quantitative and qualitative feedback received from the students. The results from the study observed that most students appreciated the HyFlex mode of delivery, however, it was constrained in some way by the technology available. Heilporn and Lakhal (2021) converted a graduated-level course into a HyFlex modality and then considered what are the effective engagement strategies. In this work, they combined both exploratory qualitative and mixed method approaches for data collection and analysis. The results obtained from this work demonstrated that HyFlex is a promising learning modality for encouraging student engagement at the graduate level, including large groups.In contrast to all the works discussed in the preceding paragraphs, we investigate the effectiveness of deploying a Hyflex learning format for cyber security training in a developing country. With the growing demand for cyber security professionals around the world, especially in developing countries, there is a need for designing and delivering a flexible cyber security training that will accommodate participant’s needs and will provide greater access to cyber security training to a large number of participants. Thus, this study fills the gap in the existing literature by proposing a HyFlex cyber security training format as a promising training delivery mode that could expand cyber and digital security capability and capacity, especially in developing countries. | [
"34025107"
] | [
{
"pmid": "34025107",
"title": "Classroom Management Strategies for Hyflex Instruction: Setting Students Up for Success in the Hybrid Environment.",
"abstract": "The COVID-19 pandemic changed the way that schools provide instruction to learners and these changes may last for an extended period of time. One current trend is the use of hyflex instruction, which involves teachers providing instruction to students simultaneously in the classroom and online. This form of instruction provides unique challenges for teachers, including establishing expectations and managing classroom behaviors. Teachers must utilize the same best practices in classroom management in the hyflex environment that they typically use in the face-to-face setting, including (a) teaching expectations, (b) modeling the desired behavior, and (c) providing timely and explicit feedback to support students, especially young children and those with disabilities, to follow the guidelines for physical distancing and to keep students, teachers, administrators, and their families safe at this time. This article provides a brief overview for general and special education teachers to apply these strategies in the hyflex instructional environment to support young children and maintain protocols required due to the COVID-19 pandemic."
}
] |
Frontiers in Bioengineering and Biotechnology | null | PMC8996057 | 10.3389/fbioe.2022.839586 | MGMSN: Multi-Granularity Matching Model Based on Siamese Neural Network | Aiming to overcome the shortcomings of the existing text matching algorithms, in this research, we have studied the related technologies of sentence matching and dialogue retrieval and proposed a multi-granularity matching model based on Siamese neural networks. This method considers both deep semantic similarity and shallow semantic similarity of input sentences to completely mine similar information between sentences. Moreover, to alleviate the problem of out of vocabulary in sentences, we have combined both word and character granularity in deep semantic similarity to further learn information. Finally, comparative experiments were carried out on the Chinese data set LCQMC. The experimental results confirm the effectiveness and generalization ability of this method, and several ablation experiments also show the importance of each part of the model. | 2 Related WorkIn this section, we briefly introduce some related theories and concepts. Specifically, bidirectional LSTM (BiLSTM) will be used to extract the character granularity and word granularity features. Siamese networks will be the core components of the proposed model.2.1 BiLSTMThe most important part of the text analysis process is the analysis of sentence sequences. Recurrent neural networks (RNNs) have a wide range of applications in solving sequence information problems, and their network structure is significantly different from traditional neural networks (Yu et al. (2020); Wang et al. (2022)). There will be a long-term dependency problem in the RNN learning process. This is because the connection relationship between the inputs and outputs is not ignored, resulting in forgetting the previous text information, which will cause the gradient disappearance or gradient explosion phenomenon.The long short-term memory network (LSTM) can solve this problem. It provides a gate mechanism to manage information to limit the amount of information and uses memory cells to store long-term historical information. Adding gates is actually a multilevel feature selection method (Na et al. (2021)). The LSTM model mainly includes input gates i
t, forgetting gates f_t, output gates O_t, and memory units C_t. The specific structure is shown in Figure 1 [Figure 1: LSTM cell]. First, the LSTM passes through the forgetting gate to decide which information from the previous cell state needs to be forgotten. This is done by the sigmoid function, which takes the weighted sum of the output at the previous time (time t − 1) and the input at this time (time t) and produces a number between 0 and 1, where 0 means completely discarded and 1 means fully retained. Its calculation is shown in Eq. 1:

$f_t = \sigma(w_f \cdot [h_{t-1}, x_t] + b_f)$  (1)

After the gate control units receive the required input information, we get

$i_t = \sigma(w_i \cdot [h_{t-1}, x_t] + b_i)$  (2)

$C_t = f_t \times C_{t-1} + i_t \times \tanh(w_f \cdot [h_{t-1}, x_t] + b_0)$  (3)

The information controlled by the output gate is used for the task output at this moment, and its calculation process is given as follows:

$O_t = \sigma(w_o \cdot [h_{t-1}, x_t] + b_o)$  (4)

$h_t = O_t \cdot \tanh(C_t)$  (5)

Among them, w_i, w_f, and w_o are the weight matrices of the input gate, forgetting gate, and output gate, respectively; b_i, b_f, and b_o are the bias matrices of the input gate, forgetting gate, and output gate, respectively; σ is the sigmoid activation function; h_{t-1} and h_t represent the state of the previous hidden layer and the current hidden layer, respectively; and x_t represents the input of the current cell. However, the LSTM still has shortcomings: it can only use the information before the current word and cannot effectively exploit the information after it, so weaker semantic cues are missed. In fact, the semantics of a word is related not only to the preceding information but also to the information that follows it. Therefore, the text sequence is also integrated into the model in the reverse direction, so that the model becomes a bidirectional long short-term memory network (BiLSTM) composed of a forward and a backward pass. The BiLSTM network takes the word vectors as input and obtains the hidden-layer state vectors through the forward and backward units of the hidden layer, respectively. Considering $\overrightarrow{H} = (h_1, h_2, \ldots, h_t)$ and $\overleftarrow{H} = (h_1, h_2, \ldots, h_t)$ as the forward and backward outputs of the hidden layer, the output of the BiLSTM hidden layer is obtained as follows:

$H = [\overrightarrow{H}, \overleftarrow{H}]$  (6)
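Equations (1)–(6) above translate directly into code. The sketch below is a generic, framework-free LSTM/BiLSTM step, not the MGMSN authors' implementation; note that it uses separate candidate weights (w_c, b_c) inside the tanh of the cell update, whereas Eq. (3) as printed in this entry reuses w_f. Dimensions and initialization are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One LSTM time step: gates act on the concatenation [h_{t-1}, x_t]."""
    hx = np.concatenate([h_prev, x_t])
    f_t = sigmoid(p["w_f"] @ hx + p["b_f"])                       # forget gate, cf. Eq. (1)
    i_t = sigmoid(p["w_i"] @ hx + p["b_i"])                       # input gate,  cf. Eq. (2)
    c_t = f_t * c_prev + i_t * np.tanh(p["w_c"] @ hx + p["b_c"])  # cell update, cf. Eq. (3)
    o_t = sigmoid(p["w_o"] @ hx + p["b_o"])                       # output gate, cf. Eq. (4)
    return o_t * np.tanh(c_t), c_t                                # h_t and C_t, cf. Eq. (5)

def init_params(input_dim, hidden, rng):
    p = {f"w_{g}": rng.normal(0, 0.1, size=(hidden, hidden + input_dim))
         for g in ("f", "i", "c", "o")}
    p.update({f"b_{g}": np.zeros(hidden) for g in ("f", "i", "c", "o")})
    return p

def bilstm(seq, p_fwd, p_bwd, hidden):
    """Run the cell forward and backward over `seq` and concatenate the two
    hidden states at each position, cf. Eq. (6)."""
    def run(p, xs):
        h, c, out = np.zeros(hidden), np.zeros(hidden), []
        for x in xs:
            h, c = lstm_step(x, h, c, p)
            out.append(h)
        return out
    fwd = run(p_fwd, seq)
    bwd = run(p_bwd, seq[::-1])[::-1]
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
toy_seq = [rng.random(8) for _ in range(5)]        # five 8-dimensional word vectors
outputs = bilstm(toy_seq, init_params(8, 16, rng), init_params(8, 16, rng), hidden=16)
print(len(outputs), outputs[0].shape)              # 5 positions, each of size 2 * 16
```

Running the forward and backward passes with separate parameter sets and concatenating their outputs is what gives each position access to both left and right context.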
2.2 Siamese Networks. A Siamese network (Bromley et al. (1993)) is an architecture for non-linear metric learning with similarity information. It naturally learns representations that embed the invariance and selectivity desired by the explicit information about similarity between pairs of objects. In contrast, an auto-encoder (Wang et al. (2016)) learns invariance through added noise and dimensionality reduction in the bottleneck layer, and selectivity through the condition that the input should be reproduced by the decoding part of the network. A Siamese network learns an invariant and selective representation directly through the use of similarity and dissimilarity information. In natural language processing, Siamese networks are usually used to calculate the semantic similarity between sentences (Kenter et al. (2016); Mueller and Thyagarajan (2016); Neculoiu et al. (2016)). The structure of the Siamese network is shown in Figure 2 [Figure 2: Siamese network frame diagram]. Generally, to calculate semantic similarity, sentences are reformed as sentence pairs and then input into a Siamese network (Fan et al. (2020)); a minimal sketch of this shared-encoder set-up is given after this entry. | [
"27411231",
"12662814"
] | [
{
"pmid": "27411231",
"title": "LSTM: A Search Space Odyssey.",
"abstract": "Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs ( ≈ 15 years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment."
},
{
"pmid": "12662814",
"title": "Automatic early stopping using cross validation: quantifying the criteria.",
"abstract": "Cross validation can be used to detect when overfitting starts during supervised training of a neural network; training is then stopped before convergence to avoid the overfitting ('early stopping'). The exact criterion used for cross validation based early stopping, however, is chosen in an ad-hoc fashion by most researchers or training is stopped interactively. To aid a more well-founded selection of the stopping criterion, 14 different automatic stopping criteria from three classes were evaluated empirically for their efficiency and effectiveness in 12 different classification and approximation tasks using multi-layer perceptrons with RPROP training. The experiments show that, on average, slower stopping criteria allow for small improvements in generalization (in the order of 4%), but cost about a factor of 4 longer in training time."
}
] |
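As a companion to the Siamese-network description in the MGMSN entry above: the essential mechanism is that both sentences of a pair pass through encoders with shared weights, and a similarity score is computed on the two resulting vectors. The sketch below is a deliberately tiny, framework-free illustration (mean-pooled embeddings plus cosine similarity), not the MGMSN architecture; the vocabulary size, dimensions, and token ids are made up.

```python
import numpy as np

class SiameseEncoder:
    """Toy shared-weight encoder: embeds token ids, mean-pools them, and
    applies one dense layer. Both sentences in a pair use the same weights."""
    def __init__(self, vocab_size=1000, emb_dim=32, out_dim=16, seed=0):
        rng = np.random.default_rng(seed)
        self.emb = rng.normal(0, 0.1, size=(vocab_size, emb_dim))
        self.w = rng.normal(0, 0.1, size=(emb_dim, out_dim))

    def encode(self, token_ids: np.ndarray) -> np.ndarray:
        pooled = self.emb[token_ids].mean(axis=0)   # mean over the tokens
        return np.tanh(pooled @ self.w)

def cosine_similarity(u, v, eps=1e-8):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + eps))

encoder = SiameseEncoder()
sent_a = np.array([12, 7, 305, 42])    # toy token ids for sentence A
sent_b = np.array([12, 7, 299, 42])    # toy token ids for sentence B
score = cosine_similarity(encoder.encode(sent_a), encoder.encode(sent_b))
print(f"similarity: {score:.3f}")       # higher means the pair is more alike
```

In a trained system, the shared weights would be learned from labeled sentence pairs (for example with a contrastive or cross-entropy loss); sharing the weights between the two branches is what makes their outputs directly comparable.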
Frontiers in Robotics and AI | null | PMC8996188 | 10.3389/frobt.2022.758519 | Guidelines for Robot-to-Human Handshake From the Movement Nuances in Human-to-Human Handshake | The handshake is the most acceptable gesture of greeting in many cultures throughout many centuries. To date, robotic arms are not capable of fully replicating this typical human gesture. Using multiple sensors that detect contact forces and displacements, we characterized the movements that occured during handshakes. A typical human-to-human handshake took around 3.63 s (SD = 0.45 s) to perform. It can be divided into three phases: reaching (M = 0.92 s, SD = 0.45 s), contact (M = 1.96 s, SD = 0.46 s), and return (M = 0.75 s, SD = 0.12 s). The handshake was further investigated to understand its subtle movements. Using a multiphase jerk minimization model, a smooth human-to-human handshake can be modelled with fifth or fourth degree polynomials at the reaching and return phases, and a sinusoidal function with exponential decay at the contact phase. We show that the contact phase (1.96 s) can be further divided according to the following subphases: preshake (0.06 s), main shake (1.31 s), postshake (0.06 s), and a period of no movement (0.52 s) just before both hands are retracted. We compared these to the existing handshake models that were proposed for physical human-robot interaction (pHRI). From our findings in human-to-human handshakes, we proposed guidelines for a more natural handshake movement between humanoid robots and their human partners. | 2 Related Works2.1 Modelling the Handshake Phases
Kasuga and Hashimoto (2005) implemented neural oscillators to simulate the movement of the robot in physical human-robot interaction. The neural oscillator made use of the synchronization of movements during the interactions. In this approach, the applied forces and torques were used as inputs to determine the trajectory of the shoulder and elbow joints of a robot arm. By adjusting the parameters of this system, the passive behavior of the robot’s handshake was adjusted. From computer simulations and experimental validation, results showed that the proposed neural oscillator control method was better than the conventional impedance control method.Extending the work of Kasuga and Hashimoto (2005), Jindai et al. (2006) proposed the use of a second-order lag element to simulate the reaching movement of the handshake receiver. The proposed reaching movement was a weighted combination of the reaching movement of the initiator and a following movement of the initiator’s hand. The value of the weighting coefficient changed as the movement progressed such that the movement switched from the former to the latter. A handshake experiment was performed to evaluate the proposed method. The velocity plots of the reaching movement of the receiver and the one obtained by applying this method were similar, which showed the acceptability of the procedure.
Yamato et al. (2008) proposed a preshaking movement to add to the reaching movement proposed by Jindai et al. (2006) and to the handshaking movement proposed by Kasuga and Hashimoto (2005). However, this led to modifying the model of the reaching movement by adding a dead-time element to the transfer function. The proposed preshake movement was hypothesized to exist as a leading initiative by the receiver to start the handshake. This movement was simply a quarter of a handshake cycle that was directed upward or downward, or was absent (i.e., the initiator led the shaking movement). These three conditions were tested in a second handshake experiment between the participants and a robot system. The authors reported that the upward preshake movement was the most acceptable of the three. A third experiment was conducted to determine the best out of the three models: the selected upward model, a height-based movement direction model, and a force direction model. The height-based model selected the direction of the movement based on the height at which contact was made. The force direction model moved in the direction that matched the direction of the force applied by the handshake initiator. The height-based model was reported to be the best out of the three models.
Jindai et al. (2012) proposed a model for the reaching movement of the handshake initiator. The proposed model was based on a combination of fast and slow movement instances of Hogan (1984)’s model. This approach was utilized to obtain a skewed bell-shaped velocity profile, which was the reaching movement profile. A handshake experiment between the participants and a small robot system was conducted to determine the best value of the parameters of the lag and dead-time elements of the control system. It was determined that the receiver's movement that lagged behind the initiator’s movement and incorporated a dead-time period was preferred.In Wang et al. (2008), they proposed a model for the handshaking contact wherein the modelling procedure started from acquiring numerous handshake data. Then, the desired trajectories were generated by a planner and were synthesized by a motion synthesis module. Finally, new control methods were employed to achieve a satisfactory robot handshake. The controller design made use of force-based impedance control.In Tagne et al. (2016), they studied handshake in the context of various greetings (i.e., hello, congratulations, sympathies). They found that the context has an effect on the strength and duration of the handshake. Similarly in Melnyk and Henaff (2019), they found that there is a difference in the duration of handshake in the context of handshake for greetings and that for consolations.2.2 Leader and Follower Synchronization Methods
Avraham et al. (2012) proposed three handshake leader and follower methods: the Tit-for-Tat model, λ model, and iML-Shake model. The Tit-for-Tat model was based on the assumption that the shaking movement involved a leader and a follower. The leader imposed his/her handshake style while the follower conformed to and imitated it. Thus, the Tit-for-Tat model involved recording and repeating the shaking movement of the leader. This required either prior knowledge of the shaking movement (when taking the leader’s role) or recording a portion of the movement and repeating or following it (when taking the follower’s role).The λ model was based on the assumption that the hand alternates between two threshold positions during the handshaking movement and the smoothness of the resulting path is a result of the physiology and biomechanics of the muscles. This model required instantaneous data of the participant’s hand position, an understanding of the biomechanics of the arm movement, and an estimation of movement parameters.The iML-Shake model is a machine-learning model that was based on the concept that human movements can be viewed as a function that takes positions as inputs and produces forces as outputs. The model required the collection of movement data and performing linear regression to estimate the parameters of the model. The position of the interrogator’s hand was also required when performing the handshake test. The Tit-for-Tat and the iML-Shake models were considered to be more humanlike than the λ model. The authors suggested developing a model that captured the advantages of all of the proposed models.
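The Tit-for-Tat strategy in the paragraph above boils down to recording the partner's shaking motion and repeating or following it. Purely as an illustration of that record-and-follow idea (not the implementation of Avraham et al.), a follower could buffer the leader's recent hand positions and replay them after a short delay; the buffer length, time step, and sinusoidal leader below are arbitrary assumptions.

```python
import math
from collections import deque

class RecordAndFollow:
    """Follower that mirrors the leader's hand position after a fixed delay,
    an illustrative reading of the record-and-repeat (Tit-for-Tat) strategy."""
    def __init__(self, delay_steps: int = 10, rest_position: float = 0.0):
        self.buffer = deque(maxlen=delay_steps)
        self.rest_position = rest_position

    def step(self, leader_position: float) -> float:
        """Record the leader's position and return the delayed copy as the
        follower's commanded position (rest position until the buffer fills)."""
        self.buffer.append(leader_position)
        if len(self.buffer) < self.buffer.maxlen:
            return self.rest_position
        return self.buffer[0]

# Toy run: the leader shakes sinusoidally (1.5 Hz, 5 cm amplitude, 10 ms steps);
# the follower's command lags the leader by roughly delay_steps * 10 ms.
follower = RecordAndFollow(delay_steps=10)
dt = 0.01
for k in range(300):
    leader = 0.05 * math.sin(2 * math.pi * 1.5 * k * dt)
    command = follower.step(leader)
print(f"leader={leader:+.3f} m, follower command={command:+.3f} m")
```

Such a follower can only imitate the motion it has already seen, which is exactly the limitation the comparison with the λ and iML-Shake models is meant to probe.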
Jouaiti et al. (2018) introduced the Hebbian plasticity in central pattern generator controllers to facilitate self-synchronization for human-robot handshaking. With this mechanism, they showed that the synchronization had a transitory phase and the permanent phase. In the transitory phase, the system adapts and learns the handshake conditions while in the permanent phase, the system retained the learning.
Mura et al. (2020) proposed a handshake robot system that implemented an extended Kalman filter (EKF) to learn from a human handshaking partner and to mimic the handshake. The EKF was designed to observe the intention of the human and turn this into an appropriate control reference for the robot arm. For the handshake movement, a sinusoidal function of unknown time-varying amplitude and frequency were implemented. The robot arm and hand system was able to synchronize with the human motion and to anticipate an active, leading behavior.2.3 SummaryThe models proposed in Kasuga and Hashimoto (2005); Jindai et al. (2006); Yamato et al. (2008); Wang et al. (2008) and the Tit-for-Tat model (Avraham et al., 2012; Mura et al., 2020) were based on collecting movement data from the handshake partner. This approach leads to replicating or imitating the performed movement and removes any differences in handshake characteristics that normally exist between two individuals who are performing a handshake. Moreover, in certain situations, a warm greeting (e.g., when welcoming a guest) or a firm one (e.g., in an interview) is expected from one side.Without the ability to collect the movements from the handshake partner, these types of handshakes will be impossible to perform. The iML-shake model proposed in Avraham et al. (2012) introduced a machine learning strategy whereby the participants’ behavior data related to their positions and grasping forces were used as an input-output function of their algorithm. According to the authors, it assumed little about the handshake movement, which made it more useful, because any unknown biomechanical features will be included. However, this feature is its main weakness. Ideally, a model like the λ model (Avraham et al., 2012) should be sought such that the important features of a movement are included and the weak or random influences are removed.The reason this model did not perform that well could be due to the observation of Nelson (1983) wherein he stated that dynamic models are not enough to describe human movement alone and a form of optimization should be sought to compliment them. This approach was utilized in developing the model proposed in Jindai et al. (2012) by building on Hogan’s jerk minimization model (Hogan, 1984). The present paper builds on these results to develop an improved model for a smooth human-to-human handshake, which can then be replicated in human-robot handshakes. | [
"8961332",
"26964106",
"21109395",
"25291795",
"33137738",
"23016826",
"3620543",
"26913625",
"4020415",
"6502203",
"11517279",
"29937725",
"26052291",
"31594473",
"6838914",
"32181710",
"18808231",
"13249858"
] | [
{
"pmid": "8961332",
"title": "Greeting behaviour and psychogenic need: interviews on experiences of therapists, clergymen, and car salesmen.",
"abstract": "Interviews of 47 Swedish subjects, therapists, clergymen, and car salesmen were used to investigate relationships between 22 greeting behaviours and 12 psychogenic needs mainly drawn from the Cecarec-Marke Personality Schedule. Analysis showed that most of the subjects agreed about the personality characteristics of greeting behaviours, especially regarding the dimensions of introversion and extraversion. The results confirm those from earlier studies that some greeting behaviours are potentially valid in assessing personality."
},
{
"pmid": "26964106",
"title": "Toward Perceiving Robots as Humans: Three Handshake Models Face the Turing-Like Handshake Test.",
"abstract": "In the Turing test a computer model is deemed to \"think intelligently\" if it can generate answers that are indistinguishable from those of a human. We developed an analogous Turing-like handshake test to determine if a machine can produce similarly indistinguishable movements. The test is administered through a telerobotic system in which an interrogator holds a robotic stylus and interacts with another party - artificial or human with varying levels of noise. The interrogator is asked which party seems to be more human. Here, we compare the human-likeness levels of three different models for handshake: (1) Tit-for-Tat model, (2) λ model, and (3) Machine Learning model. The Tit-for-Tat and the Machine Learning models generated handshakes that were perceived as the most human-like among the three models that were tested. Combining the best aspects of each of the three models into a single robotic handshake algorithm might allow us to advance our understanding of the way the nervous system controls sensorimotor interactions and further improve the human-likeness of robotic handshakes."
},
{
"pmid": "21109395",
"title": "Cultural considerations of hand use.",
"abstract": "Although each of us has the same capacity for hand use based on musculoskeletal structure and physiology, the choice and meaning of hand usage and activity are unique to the individual and influenced by sociocultural values, beliefs, and expectations. Effective therapists provide culturally competent care. For the hand therapist, this involves understanding how patients use their hands and the meaning clients ascribe to that use. This article will provide a review of cross-cultural variations in hand use in activities of daily living, communication, and decoration."
},
{
"pmid": "25291795",
"title": "Illusory sense of human touch from a warm and soft artificial hand.",
"abstract": "To touch and be touched are vital to human development, well-being, and relationships. However, to those who have lost their arms and hands due to accident or war, touching becomes a serious concern that often leads to psychosocial issues and social stigma. In this paper, we demonstrate that the touch from a warm and soft rubber hand can be perceived by another person as if the touch were coming from a human hand. We describe a three-step process toward this goal. First, we made participants select artificial skin samples according to their preferred warmth and softness characteristics. At room temperature, the preferred warmth was found to be 28.4 °C at the skin surface of a soft silicone rubber material that has a Shore durometer value of 30 at the OO scale. Second, we developed a process to create a rubber hand replica of a human hand. To compare the skin softness of a human hand and artificial hands, a robotic indenter was employed to produce a softness map by recording the displacement data when constant indentation force of 1 N was applied to 780 data points on the palmar side of the hand. Results showed that an artificial hand with skeletal structure is as soft as a human hand. Lastly, the participants' arms were touched with human and artificial hands, but they were prevented from seeing the hand that touched them. Receiver operating characteristic curve analysis suggests that a warm and soft artificial hand can create an illusion that the touch is from a human hand. These findings open the possibilities for prosthetic and robotic hands that are life-like and are more socially acceptable."
},
{
"pmid": "33137738",
"title": "On the choice of grasp type and location when handing over an object.",
"abstract": "The human hand is capable of performing countless grasps and gestures that are the basis for social activities. However, which grasps contribute the most to the manipulation skills needed during collaborative tasks, and thus which grasps should be included in a robot companion, is still an open issue. Here, we investigated grasp choice and hand placement on objects during a handover when subsequent tasks are performed by the receiver and when in-hand and bimanual manipulation are not allowed. Our findings suggest that, in this scenario, human passers favor precision grasps during such handovers. Passers also tend to grasp the purposive part of objects and leave \"handles\" unobstructed to the receivers. Intuitively, this choice allows receivers to comfortably perform subsequent tasks with the objects. In practice, many factors contribute to a choice of grasp, e.g., object and task constraints. However, not all of these factors have had enough emphasis in the implementation of grasping by robots, particularly the constraints introduced by a task, which are critical to the success of a handover. Successful robotic grasping is important if robots are to help humans with tasks. We believe that the results of this work can benefit the wider robotics community, with applications ranging from industrial cooperative manipulation to household collaborative manipulation."
},
{
"pmid": "23016826",
"title": "The power of a handshake: neural correlates of evaluative judgments in observed social interactions.",
"abstract": "Effective social interactions require the ability to evaluate other people's actions and intentions, sometimes only on the basis of such subtle factors as body language, and these evaluative judgments may lead to powerful impressions. However, little is known about the impact of affective body language on evaluative responses in social settings and the associated neural correlates. This study investigated the neural correlates of observing social interactions in a business setting, in which whole-body dynamic stimuli displayed approach and avoidance behaviors that were preceded or not by a handshake and were followed by participants' ratings of these behaviors. First, approach was associated with more positive evaluations than avoidance behaviors, and a handshake preceding social interaction enhanced the positive impact of approach and diminished the negative impact of avoidance behavior on the evaluation of social interaction. Second, increased sensitivity to approach than to avoidance behavior in the amygdala and STS was linked to a positive evaluation of approach behavior and a positive impact of handshake. Third, linked to the positive effect of handshake on social evaluation, nucleus accumbens showed greater activity for Handshake than for No-handshake conditions. These findings shed light on the neural correlates of observing and evaluating nonverbal social interactions and on the role of handshake as a way of formal greeting."
},
{
"pmid": "3620543",
"title": "A model of handwriting.",
"abstract": "The research reported here is concerned with hand trajectory planning for the class of movements involved in handwriting. Previous studies show that the kinematics of human two-joint arm movements in the horizontal plane can be described by a model which is based on dynamic minimization of the square of the third derivative of hand position (jerk), integrated over the entire movement. We extend this approach to both the analysis and the synthesis of the trajectories occurring in the generation of handwritten characters. Several basic strokes are identified and possible stroke concatenation rules are suggested. Given a concise symbolic representation of a stroke shape, a simple algorithm computes the complete kinematic specification of the corresponding trajectory. A handwriting generation model based on a kinematics from shape principle and on dynamic optimization is formulated and tested. Good qualitative and quantitative agreement was found between subject recordings and trajectories generated by the model. The simple symbolic representation of hand motion suggested here may permit the central nervous system to learn, store and modify motor action plans for writing in an efficient manner."
},
{
"pmid": "26913625",
"title": "Action recognition in the visual periphery.",
"abstract": "Recognizing whether the gestures of somebody mean a greeting or a threat is crucial for social interactions. In real life, action recognition occurs over the entire visual field. In contrast, much of the previous research on action recognition has primarily focused on central vision. Here our goal is to examine what can be perceived about an action outside of foveal vision. Specifically, we probed the valence as well as first level and second level recognition of social actions (handshake, hugging, waving, punching, slapping, and kicking) at 0° (fovea/fixation), 15°, 30°, 45°, and 60° of eccentricity with dynamic (Experiment 1) and dynamic and static (Experiment 2) actions. To assess peripheral vision under conditions of good ecological validity, these actions were carried out by a life-size human stick figure on a large screen. In both experiments, recognition performance was surprisingly high (more than 66% correct) up to 30° of eccentricity for all recognition tasks and followed a nonlinear decline with increasing eccentricities."
},
{
"pmid": "4020415",
"title": "The coordination of arm movements: an experimentally confirmed mathematical model.",
"abstract": "This paper presents studies of the coordination of voluntary human arm movements. A mathematical model is formulated which is shown to predict both the qualitative features and the quantitative details observed experimentally in planar, multijoint arm movements. Coordination is modeled mathematically by defining an objective function, a measure of performance for any possible movement. The unique trajectory which yields the best performance is determined using dynamic optimization theory. In the work presented here, the objective function is the square of the magnitude of jerk (rate of change of acceleration) of the hand integrated over the entire movement. This is equivalent to assuming that a major goal of motor coordination is the production of the smoothest possible movement of the hand. Experimental observations of human subjects performing voluntary unconstrained movements in a horizontal plane are presented. They confirm the following predictions of the mathematical model: unconstrained point-to-point motions are approximately straight with bell-shaped tangential velocity profiles; curved motions (through an intermediate point or around an obstacle) have portions of low curvature joined by portions of high curvature; at points of high curvature, the tangential velocity is reduced; the durations of the low-curvature portions are approximately equal. The theoretical analysis is based solely on the kinematics of movement independent of the dynamics of the musculoskeletal system and is successful only when formulated in terms of the motion of the hand in extracorporal space. The implications with respect to movement organization are discussed."
},
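For reference, the minimum-jerk objective described in the preceding abstract has a standard compact form; the LaTeX snippet below is a textbook formulation (not text quoted from the paper) of the cost functional together with the well-known point-to-point solution whose velocity profile is bell-shaped:

```latex
% Minimum-jerk cost for a planar hand trajectory (x(t), y(t)) of duration t_f
% (standard formulation; symbols follow common usage, not the paper's notation):
\[
  C \;=\; \tfrac{1}{2}\int_{0}^{t_f}
          \left[\left(\frac{d^{3}x}{dt^{3}}\right)^{2}
              + \left(\frac{d^{3}y}{dt^{3}}\right)^{2}\right]\,dt .
\]
% For an unconstrained point-to-point movement the minimizer is a fifth-order
% polynomial with the bell-shaped velocity profile noted in the abstract:
\[
  x(t) \;=\; x_0 + (x_f - x_0)\bigl(10\tau^{3} - 15\tau^{4} + 6\tau^{5}\bigr),
  \qquad \tau = t/t_f .
\]
```

The same functional underlies the handwriting-generation and "organizing principle" entries that appear nearby in this list.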
{
"pmid": "6502203",
"title": "An organizing principle for a class of voluntary movements.",
"abstract": "This paper presents a mathematical model which predicts both the major qualitative features and, within experimental error, the quantitative details of a class of perturbed and unperturbed large-amplitude, voluntary movements performed at intermediate speed by primates. A feature of the mathematical model is that a concise description of the behavioral organization of the movement has been formulated which is separate and distinct from the description of the dynamics of movement execution. Based on observations of voluntary movements in primates, the organization has been described as though the goal were to make the smoothest movement possible under the circumstances, i.e., to minimize the accelerative transients. This has been formalized by using dynamic optimization theory to determine the movement which minimizes the rate of change of acceleration (jerk) of the limb. Based on observations of muscle mechanics, the concept of a \"virtual position\" determined by the active states of the muscles is rigorously defined as one of the mechanical consequences of the neural commands to the muscles. This provides insight into the mechanics of perturbed and unperturbed movements and is a useful aid in the separation of the descriptions of movement organization and movement execution."
},
{
"pmid": "11517279",
"title": "Eye-hand coordination in object manipulation.",
"abstract": "We analyzed the coordination between gaze behavior, fingertip movements, and movements of the manipulated object when subjects reached for and grasped a bar and moved it to press a target-switch. Subjects almost exclusively fixated certain landmarks critical for the control of the task. Landmarks at which contact events took place were obligatory gaze targets. These included the grasp site on the bar, the target, and the support surface where the bar was returned after target contact. Any obstacle in the direct movement path and the tip of the bar were optional landmarks. Subjects never fixated the hand or the moving bar. Gaze and hand/bar movements were linked concerning landmarks, with gaze leading. The instant that gaze exited a given landmark coincided with a kinematic event at that landmark in a manner suggesting that subjects monitored critical kinematic events for phasic verification of task progress and subgoal completion. For both the obstacle and target, subjects directed saccades and fixations to sites that were offset from the physical extension of the objects. Fixations related to an obstacle appeared to specify a location around which the extending tip of the bar should travel. We conclude that gaze supports hand movement planning by marking key positions to which the fingertips or grasped object are subsequently directed. The salience of gaze targets arises from the functional sensorimotor requirements of the task. We further suggest that gaze control contributes to the development and maintenance of sensorimotor correlation matrices that support predictive motor control in manipulation."
},
{
"pmid": "29937725",
"title": "Hebbian Plasticity in CPG Controllers Facilitates Self-Synchronization for Human-Robot Handshaking.",
"abstract": "It is well-known that human social interactions generate synchrony phenomena which are often unconscious. If the interaction between individuals is based on rhythmic movements, synchronized and coordinated movements will emerge from the social synchrony. This paper proposes a plausible model of plastic neural controllers that allows the emergence of synchronized movements in physical and rhythmical interactions. The controller is designed with central pattern generators (CPG) based on rhythmic Rowat-Selverston neurons endowed with neuronal and synaptic Hebbian plasticity. To demonstrate the interest of the proposed model, the case of handshaking is considered because it is a very common, both physically and socially, but also, a very complex act in the point of view of robotics, neuroscience and psychology. Plastic CPGs controllers are implemented in the joints of a simulated robotic arm that has to learn the frequency and amplitude of an external force applied to its effector, thus reproducing the act of handshaking with a human. Results show that the neural and synaptic Hebbian plasticity are working together leading to a natural and autonomous synchronization between the arm and the external force even if the frequency is changing during the movement. Moreover, a power consumption analysis shows that, by offering emergence of synchronized and coordinated movements, the plasticity mechanisms lead to a significant decrease in the energy spend by the robot actuators thus generating a more adaptive and natural human/robot handshake."
},
{
"pmid": "26052291",
"title": "The hand grasps the center, while the eyes saccade to the top of novel objects.",
"abstract": "In the present study, we investigated whether indenting the sides of novel objects (e.g., product packaging) would influence where people grasp, and hence focus their gaze, under the assumption that gaze precedes grasping. In Experiment 1, the participants grasped a selection of custom-made objects designed to resemble typical packaging forms with an indentation in the upper, middle, or lower part. In Experiment 2, eye movements were recorded while the participants viewed differently-sized (small, medium, and large) objects with the same three indentation positions tested in Experiment 1, together with a control object lacking any indentation. The results revealed that irrespective of the location of the indentation, the participants tended to grasp the mid-region of the object, with their index finger always positioned slightly above its midpoint. Importantly, the first visual fixation tended to fall in the cap region of the novel object. The participants also fixated for longer in this region. Furthermore, participants saccaded more often, as well saccading more rapidly when directing their gaze to the upper region of the objects that they were required to inspect visually. Taken together, these results therefore suggest that different spatial locations on target objects are of interest to our eyes and hands."
},
{
"pmid": "31594473",
"title": "Effects of Handshake Duration on Other Nonverbal Behavior.",
"abstract": "Although detailed descriptions of proper handshakes partly comprise many etiquette books, how a normal handshake can be described, its proper duration, and the consequences of violating handshake expectations remain empirically unexplored. This study measured the effect of temporal violations of the expected length of a handshake (less than three seconds according to previous studies) administered unobtrusively in a naturalistic experiment. We compared volunteer participants' (N = 34; 25 females; 9 males; Mage = 23.76 years, SD = 6.85) nonverbal behavior before and after (a) a prolonged handshake (>3 seconds), (b) a normal length handshake (average length <3 seconds), and (c) a control encounter with no handshake. Frame-by-frame behavioral analyses revealed that, following a prolonged handshake (vs. a normal length or no handshake), participants showed less interactional enjoyment, as indicated by less laughing. They also showed evidence of anxiety and behavioral freezing, indicated by increased hands-on-hands movements, and they showed fewer hands-on-body movements. Normal length handshakes resulted in less subsequent smiling than did prolonged handshakes, but normal length handshakes were also followed by fewer hands-on-face movements than prolonged handshakes. No behavior changes were associated with the no-handshake control condition. We found no differences in participants' level of empathy or state/trait anxiety related to these conditions. In summary, participants reacted behaviorally to temporal manipulations of handshakes, with relevant implications for interactions in interviews, business, educational, and social settings and for assisting patients with social skills difficulties."
},
{
"pmid": "6838914",
"title": "Physical principles for economies of skilled movements.",
"abstract": "This paper presents some elementary principles regarding constraints on movements, which may be useful in modeling and interpreting motor control strategies for skilled movements. Movements which are optimum with respect to various objectives, or \"costs\", are analyzed and compared. The specific costs considered are related to movement time, distance, peak velocity, energy, peak acceleration, and rate of change of acceleration (jerk). The velocity patterns for the various minimum cost movements are compared with each other and with some skilled movement patterns. The concept of performance trade-offs between competing objectives is used to interpret the distance-time relationships observed in skilled movements. Examples of arm movements during violin bowing and jaw movements during speech are used to show how skilled movements are influenced by considerations of physical economy, or \"ease\", of movement. Minimum-cost solutions for the various costs, which include the effect of frictional forces, are given in Appendices."
},
{
"pmid": "32181710",
"title": "Russell and the Handshake: Greeting in Spiritual Care.",
"abstract": "One of the most common practices in spiritual care involves welcoming or greeting others. Despite this, there is little literature exploring this practice in terms of how it is experienced by those we greet, how it impacts people and relationships, and how it should occur. By reflecting on several stories of handshakes and greeting, this paper seeks to call to attention the experience, impact, and \"how-to\" of greeting in spiritual care."
},
{
"pmid": "18808231",
"title": "Exploring the handshake in employment interviews.",
"abstract": "The authors examined how an applicant's handshake influences hiring recommendations formed during the employment interview. A sample of 98 undergraduate students provided personality measures and participated in mock interviews during which the students received ratings of employment suitability. Five trained raters independently evaluated the quality of the handshake for each participant. Quality of handshake was related to interviewer hiring recommendations. Path analysis supported the handshake as mediating the effect of applicant extraversion on interviewer hiring recommendations, even after controlling for differences in candidate physical appearance and dress. Although women received lower ratings for the handshake, they did not on average receive lower assessments of employment suitability. Exploratory analysis suggested that the relationship between a firm handshake and interview ratings may be stronger for women than for men."
}
] |
Cancers | null | PMC8996991 | 10.3390/cancers14071651 | A Weakly Supervised Deep Learning Method for Guiding Ovarian Cancer Treatment and Identifying an Effective Biomarker | Simple Summary: Molecular target therapy, i.e., antiangiogenesis with bevacizumab, was found to be effective in some patients with epithelial ovarian cancer. Considering the cost; the potential adverse effects, including hypertension, proteinuria, bleeding, thromboembolic events, poor wound healing and gastrointestinal perforation; and the lack of confirmed, accessible biomarkers for routine clinical use to direct patient selection for bevacizumab treatment, the identification of new predictive methods remains an urgent unmet medical need. This study identifies an effective biomarker and presents an automatic weakly supervised deep learning framework for patient selection and guiding ovarian cancer treatment.
Abstract: Ovarian cancer is a common malignant gynecological disease. Molecular target therapy, i.e., antiangiogenesis with bevacizumab, was found to be effective in some patients with epithelial ovarian cancer (EOC). Although careful patient selection is essential, there are currently no biomarkers available for routine therapeutic use. To the authors' best knowledge, this is the first automated precision oncology framework to effectively identify and select EOC and peritoneal serous papillary carcinoma (PSPC) patients with a positive therapeutic effect. From March 2013 to January 2021, we assembled a database containing four kinds of immunohistochemical tissue samples (AIM2, C3, C5 and NLRP3) from patients diagnosed with EOC and PSPC and treated with bevacizumab in a hospital-based retrospective study. We developed a hybrid deep learning framework and weakly supervised deep learning models for each potential biomarker; the experimental results show that the proposed model in combination with AIM2 achieves high accuracy 0.92, recall 0.97, F-measure 0.93 and AUC 0.97 in the first experiment (66% training and 34% testing) and accuracy 0.86 ± 0.07, precision 0.9 ± 0.07, recall 0.85 ± 0.06, F-measure 0.87 ± 0.06 and AUC 0.91 ± 0.05 in the second experiment using five-fold cross validation. Both Kaplan-Meier PFS analysis and Cox proportional hazards model analysis further confirmed that the proposed AIM2-DL model is able to distinguish patients gaining positive therapeutic effects with low cancer recurrence from patients with disease progression after treatment (p < 0.005).
| 2. Related Works
2.1. Selection of Antibodies
Rather than directly targeting cancer cells, bevacizumab targets the tumor microenvironment; the effects of VEGF inhibition are likely tumor-type- and microenvironment-specific and include the modulation of cancer immunity [13]. Cancer cells affect their microenvironment by releasing extracellular signals, thereby inducing tumor angiogenesis and improving immune tolerance, which allows them to avoid recognition by the immune system. VEGF signaling supports immune suppression, and targeting VEGF/VEGFR has been recognized as an approach to enhance antitumor immunity in cancer patients [13]. Chronic inflammation perpetuated by inflammasome activation may play a central role in immunosuppression, angiogenesis, tumor proliferation, and metastasis. Conversely, inflammasome signaling also contributes to tumor suppression, which indicates the diverse roles of inflammasomes in tumorigenesis [14].
Inflammasomes are activated upon cellular infection that triggers the maturation of proinflammatory cytokines to engage innate immune defenses [15]. Once innate immune system-related NOD-like receptors (NLRs) or AIM2-like receptors (ALRs) are activated, inflammation via the recruitment of immune cells such as macrophages promotes the proteolytic cleavage and secretion of proinflammatory cytokines (IL-1β and IL-18) through the activation of caspase-1, leading to cell senescence, apoptosis and the prevention of cancer progression [16,17,18]. In a previous study, high expression of AIM2 and NLRP3 was significantly correlated with poor PFS and disease progression of EOC, demonstrating a key role of the dysregulated inflammasome in modulating the malignant transformation of endometriosis-associated ovarian cancer [19]. In ascites of EOC patients, local complement activation has been observed to induce high levels of complement anaphylatoxins [20]. The immune genes involved in the complement system have dual influences on patient survival. Immunohistochemical analysis showed that the expression of the C3a receptor (C3aR) and the C5a receptor (C5aR) is higher in ovarian clear cell carcinoma [21]. Complement-activated factors have been related, either directly or indirectly, to neovascularization in several diseases [22]. An antiangiogenic factor was found to be upregulated in monocytes by complement activation [23]. In contrast, a role for complement in the activation of angiogenesis has been demonstrated in age-related macular degeneration [24]. As resistance to anti-VEGF treatment is related to immunity, in this study we explore the utility of four antibodies, namely AIM2, NLRP3, C3 and C5, to differentiate patients with good treatment responses from patients with disease progression in EOC and PSPC.
2.2. Deep Learning in Application to Gynecologic Oncology
With an increase in computing power and advances in imaging technologies, DL is being implemented for the diagnosis and classification of medical images. Wang et al. [25] proposed a DL-based noninvasive recurrence prediction model in high-grade serous ovarian cancer (HGSOC) that extracts prognostic biomarkers from preoperative computed tomography (CT) images. Sato et al. [26] successfully applied DL to the classification of images from colposcopy. Matsuo et al. [27] compared the performance of DL models in survival analysis for women with newly diagnosed cervical cancer against conventional Cox proportional hazard regression (CPH) models. Ke et al. [28] proposed a DL diagnostic system that can distinguish high-grade squamous intraepithelial lesion (HSIL), squamous cell carcinoma, atypical squamous cells of undetermined significance (ASCUS) and low-grade squamous intraepithelial lesion. Wu et al. [29] introduced automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks. Ghoniem et al. [30] built a multimodal evolutionary DL model for ovarian cancer diagnosis. Hong et al. [31] built multiresolution deep learning models for predicting endometrial cancer subtypes and molecular features from histopathology images. These studies demonstrate that gynecologists are increasingly able to utilize DL in clinical practice.
2.3. Weakly Supervised Learning
The development of decision support systems for medical applications with deployment in clinical practice has been hindered by the need for large manually annotated datasets.
To overcome the problem of limited supervision, recent studies have investigated weakly supervised learning technologies. Campanella et al. [32] built a weakly supervised multiple-instance deep learning system that uses only the reported diagnoses as labels for training accurate classification models in pathology and avoids expensive and time-consuming pixelwise manual annotations. Li et al. [33] presented an ensemble learning scheme that derives a safe prediction by integrating multiple weakly supervised learners to deal with inaccurate supervision, such as label-noise learning, where the given labels are not always ground truth. Kim et al. [34] developed a weakly supervised DL algorithm that diagnoses breast cancer at ultrasound without image annotation. Liu et al. [35] evaluated a weakly supervised deep learning approach to breast magnetic resonance imaging (MRI) assessment and showed that breast MRI can be assessed without pixel-by-pixel segmentation while still yielding a high degree of specificity in lesion classification. Lu et al. [36] built a weakly supervised clustering-constrained-attention multiple-instance learning (CLAM) model for data-efficient WSI processing and learning that requires only slide-level labels. These studies demonstrate that weakly supervised learning assists the development and deployment of decision support systems for medical applications.
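To make the slide-level weak-supervision idea referenced above concrete, the following is a minimal PyTorch sketch of attention-based multiple-instance pooling: patch embeddings from a whole-slide image are aggregated with learned attention weights so the classifier can be trained with only a slide-level label. This is an illustrative toy, not the implementation of any cited system or of the article's own framework; the feature dimension, hidden size, and random "slide" are assumptions.

```python
# Minimal sketch of attention-based multiple-instance pooling for whole-slide
# images, the idea behind slide-level weak supervision (CLAM-style models).
import torch
import torch.nn as nn

class AttentionMILClassifier(nn.Module):
    def __init__(self, feat_dim=512, hidden_dim=128, n_classes=2):
        super().__init__()
        # Gated attention: each patch embedding gets a scalar relevance score.
        self.attn_v = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Tanh())
        self.attn_u = nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.Sigmoid())
        self.attn_w = nn.Linear(hidden_dim, 1)
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, patch_feats):                      # (n_patches, feat_dim)
        scores = self.attn_w(self.attn_v(patch_feats) * self.attn_u(patch_feats))
        weights = torch.softmax(scores, dim=0)           # (n_patches, 1)
        slide_feat = (weights * patch_feats).sum(dim=0)  # attention-weighted mean
        return self.classifier(slide_feat), weights

# Toy usage: one "slide" of 1000 patch embeddings, supervised only by a
# slide-level label (e.g., responder vs. non-responder).
model = AttentionMILClassifier()
logits, attn = model(torch.randn(1000, 512))
loss = nn.functional.cross_entropy(logits.unsqueeze(0), torch.tensor([1]))
loss.backward()
```

In CLAM-style models, the learned attention weights also serve as a patch-level heatmap indicating which regions drove the slide-level prediction.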
"29809280",
"29249039",
"24078660",
"16849612",
"23430240",
"27960088",
"22204724",
"33293727",
"26017442",
"31399699",
"32335505",
"30842595",
"20303873",
"26626159",
"31228512",
"23864729",
"29423077",
"15726105",
"16452172",
"30392780",
"29456725",
"30582927",
"33526806",
"29572387",
"34622237",
"31308507",
"34934144",
"33649564",
"33216724",
"34376717",
"34359792",
"27295650",
"30224757",
"27713848",
"27055470",
"26115797",
"26853587",
"27268121",
"23401453",
"24947924",
"25087181",
"29270405",
"21629292",
"23997938",
"22711031",
"34640548",
"16873662",
"30976107"
] | [
{
"pmid": "29809280",
"title": "Ovarian cancer statistics, 2018.",
"abstract": "In 2018, there will be approximately 22,240 new cases of ovarian cancer diagnosed and 14,070 ovarian cancer deaths in the United States. Herein, the American Cancer Society provides an overview of ovarian cancer occurrence based on incidence data from nationwide population-based cancer registries and mortality data from the National Center for Health Statistics. The status of early detection strategies is also reviewed. In the United States, the overall ovarian cancer incidence rate declined from 1985 (16.6 per 100,000) to 2014 (11.8 per 100,000) by 29% and the mortality rate declined between 1976 (10.0 per 100,000) and 2015 (6.7 per 100,000) by 33%. Ovarian cancer encompasses a heterogenous group of malignancies that vary in etiology, molecular biology, and numerous other characteristics. Ninety percent of ovarian cancers are epithelial, the most common being serous carcinoma, for which incidence is highest in non-Hispanic whites (NHWs) (5.2 per 100,000) and lowest in non-Hispanic blacks (NHBs) and Asians/Pacific Islanders (APIs) (3.4 per 100,000). Notably, however, APIs have the highest incidence of endometrioid and clear cell carcinomas, which occur at younger ages and help explain comparable epithelial cancer incidence for APIs and NHWs younger than 55 years. Most serous carcinomas are diagnosed at stage III (51%) or IV (29%), for which the 5-year cause-specific survival for patients diagnosed during 2007 through 2013 was 42% and 26%, respectively. For all stages of epithelial cancer combined, 5-year survival is highest in APIs (57%) and lowest in NHBs (35%), who have the lowest survival for almost every stage of diagnosis across cancer subtypes. Moreover, survival has plateaued in NHBs for decades despite increasing in NHWs, from 40% for cases diagnosed during 1992 through 1994 to 47% during 2007 through 2013. Progress in reducing ovarian cancer incidence and mortality can be accelerated by reducing racial disparities and furthering knowledge of etiology and tumorigenesis to facilitate strategies for prevention and early detection. CA Cancer J Clin 2018;68:284-296. © 2018 American Cancer Society."
},
{
"pmid": "29249039",
"title": "Advances in ovarian cancer therapy.",
"abstract": "Epithelial ovarian cancer is typically diagnosed at an advanced stage. Current state-of-the-art surgery and chemotherapy result in the high incidence of complete remissions; however, the recurrence rate is also high. For most patients, the disease eventually becomes a continuum of symptom-free periods and recurrence episodes. Different targeted treatment approaches and biological drugs, currently under development, bring the promise of turning ovarian cancer into a manageable chronic disease. In this review, we discuss the current standard in the therapy for ovarian cancer, major recent studies on the new variants of conventional therapies, and new therapeutic approaches, recently approved and/or in clinical trials. The latter include anti-angiogenic therapies, polyADP-ribose polymerase (PARP) inhibitors, inhibitors of growth factor signaling, or folate receptor inhibitors, as well as several immunotherapeutic approaches. We also discuss cost-effectiveness of some novel therapies and the issue of better selection of patients for personalized treatment."
},
{
"pmid": "16849612",
"title": "Leukocytes, inflammation, and angiogenesis in cancer: fatal attractions.",
"abstract": "Leukocytes are cells of defense. Their main function is to protect our body against invading microorganisms. Some leukocytes, in particular, polymorphonuclear and monocytes, accumulate at sites of infection and neutralize pathogens through innate mechanisms. The blood and lymphatic vascular system are essential partners in this defensive reaction: Activated endothelial cells promote leukocyte recruitment at inflammatory sites; new blood vessel formation, a process called angiogenesis, sustains chronic inflammation, and lymphatic vessels transport antigens and antigen-presenting cells to lymph nodes, where they stimulate naive T and B lymphocytes to elicit an antigen-specific immune response. In contrast, leukocytes and lymphocytes are far less efficient in protecting us from cancer, the \"enemy from within.\" Worse, cancer can exploit inflammation to its advantage. The role of angiogenesis, leukocytes, and inflammation in tumor progression was discussed at the second Monte Verità Conference, Tumor Host Interaction and Angiogenesis: Basic Mechanisms and Therapeutic Perspectives, held in Ascona, Switzerland, October 1-5, 2005. (Conference chairs were K. Alitalo, M. Aguet, C. Rüegg, and I. Stamenkovic.) Eight articles reporting about topics presented at the conference are featured in this issue of the Journal of Leukocyte Biology."
},
{
"pmid": "23430240",
"title": "Inflammation and oxidative stress in angiogenesis and vascular disease.",
"abstract": "Recent evidence suggests that processes of inflammation and angiogenesis are interconnected, especially in human pathologies. Newly formed blood vessels enable the continuous recruitment of inflammatory cells, which release a variety of proangiogenic cytokines, chemokines, and growth factors and further promote angiogenesis. These series of positive feedback loops ultimately create a vicious cycle that exacerbates inflammation, transforming it into the chronic process. Recently, this concept of reciprocity of angiogenesis and inflammation has been expanded to include oxidative stress as a novel mechanistic connection between inflammation-driven oxidation and neovascularization. Production of reactive oxygen species results from activation of immune cells by proinflammatory stimuli. As oxidative stress can lead to chronic inflammation by activating a variety of transcription factors including NF-κB, AP-1, and PPAR-γ, inflammation itself has a reciprocal relationship with oxidative stress. This review discusses the recent findings in the area bridging neovascularization and oxidation and highlights novel mechanisms of inflammation- and oxidative stress-driven angiogenesis."
},
{
"pmid": "27960088",
"title": "Normalization of Tumor Vessels by Tie2 Activation and Ang2 Inhibition Enhances Drug Delivery and Produces a Favorable Tumor Microenvironment.",
"abstract": "A destabilized tumor vasculature leads to limited drug delivery, hypoxia, detrimental tumor microenvironment, and even metastasis. We performed a side-by-side comparison of ABTAA (Ang2-Binding and Tie2-Activating Antibody) and ABA (Ang2-Blocking Antibody) in mice with orthotopically implanted glioma, with subcutaneously implanted Lewis lung carcinoma, and with spontaneous mammary cancer. We found that Tie2 activation induced tumor vascular normalization, leading to enhanced blood perfusion and chemotherapeutic drug delivery, markedly lessened lactate acidosis, and reduced tumor growth and metastasis. Moreover, ABTAA favorably altered the immune cell profile within tumors. Together, our findings establish that simultaneous Tie2 activation and Ang2 inhibition form a powerful therapeutic strategy to elicit a favorable tumor microenvironment and enhanced delivery of a chemotherapeutic agent into tumors."
},
{
"pmid": "22204724",
"title": "Incorporation of bevacizumab in the primary treatment of ovarian cancer.",
"abstract": "BACKGROUND\nVascular endothelial growth factor is a key promoter of angiogenesis and disease progression in epithelial ovarian cancer. Bevacizumab, a humanized anti-vascular endothelial growth factor monoclonal antibody, has shown single-agent activity in women with recurrent tumors. Thus, we aimed to evaluate the addition of bevacizumab to standard front-line therapy.\n\n\nMETHODS\nIn our double-blind, placebo-controlled, phase 3 trial, we randomly assigned eligible patients with newly diagnosed stage III (incompletely resectable) or stage IV epithelial ovarian cancer who had undergone debulking surgery to receive one of three treatments. All three included chemotherapy consisting of intravenous paclitaxel at a dose of 175 mg per square meter of body-surface area, plus carboplatin at an area under the curve of 6, for cycles 1 through 6, plus a study treatment for cycles 2 through 22, each cycle of 3 weeks' duration. The control treatment was chemotherapy with placebo added in cycles 2 through 22; bevacizumab-initiation treatment was chemotherapy with bevacizumab (15 mg per kilogram of body weight) added in cycles 2 through 6 and placebo added in cycles 7 through 22. Bevacizumab-throughout treatment was chemotherapy with bevacizumab added in cycles 2 through 22. The primary end point was progression-free survival.\n\n\nRESULTS\nOverall, 1873 women were enrolled. The median progression-free survival was 10.3 months in the control group, 11.2 in the bevacizumab-initiation group, and 14.1 in the bevacizumab-throughout group. Relative to control treatment, the hazard ratio for progression or death was 0.908 (95% confidence interval [CI], 0.795 to 1.040; P=0.16) with bevacizumab initiation and 0.717 (95% CI, 0.625 to 0.824; P<0.001) with bevacizumab throughout. At the time of analysis, 76.3% of patients were alive, with no significant differences in overall survival among the three groups. The rate of hypertension requiring medical therapy was higher in the bevacizumab-initiation group (16.5%) and the bevacizumab-throughout group (22.9%) than in the control group (7.2%). Gastrointestinal-wall disruption requiring medical intervention occurred in 1.2%, 2.8%, and 2.6% of patients in the control group, the bevacizumab-initiation group, and the bevacizumab-throughout group, respectively.\n\n\nCONCLUSIONS\nThe use of bevacizumab during and up to 10 months after carboplatin and paclitaxel chemotherapy prolongs the median progression-free survival by about 4 months in patients with advanced epithelial ovarian cancer. (Funded by the National Cancer Institute and Genentech; ClinicalTrials.gov number, NCT00262847.)."
},
{
"pmid": "33293727",
"title": "Treatment of Recurrent Epithelial Ovarian Cancer.",
"abstract": "Epithelial ovarian cancer is the most common cause of death from gynecological tumors. Most patients with advanced ovarian cancer develop recurrence after concluding first-line therapy, making further lines of therapy necessary. The choice of therapy depends on various criteria such as tumor biology, the patient's general condition (ECOG), toxicity, previous chemotherapy, and response to chemotherapy. The platinum-free or treatment-free interval determines the potential response to repeat platinum-based therapy. If patients have late recurrence, i.e. > 6 months after the end of the last platinum-based therapy (i.e., they were previously platinum-sensitive), then they are usually considered suitable for another round of a platinum-based combination therapy. Patients who are not considered suitable for platinum-based chemotherapy are treated with a platinum-free regimen such as weekly paclitaxel, pegylated liposomal doxorubicin (PLD), gemcitabine, or topotecan. Treatment for the patient subgroup which is considered suitable for platinum-based therapy but cannot receive carboplatin due to uncontrollable hypersensitivity reactions may consist of trabectedin and PLD. While the use of surgery to treat recurrence has long been a controversial issue, new findings from the DESKTOP III study of the AGO working group have drawn attention to this issue again, particularly for patients with a platinum-free interval of > 6 months and a positive AGO score. Clinical studies have also shown the efficacy of angiogenesis inhibitors such as bevacizumab and the PARP inhibitors olaparib, niraparib and rucaparib. These drugs have substantially changed current treatment practice and expanded the range of available therapies. It is important to differentiate between purely maintenance therapy after completing CTX, continuous maintenance therapy during CTX, and the therapeutic use of these substances. The PARP inhibitors niraparib, olaparib and rucaparib have already been approved for use by the FDA and the EMA. The presence of a BRCA mutation is a predictive factor for a better response to PARP inhibitors."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "31399699",
"title": "Artificial intelligence in digital pathology - new tools for diagnosis and precision oncology.",
"abstract": "In the past decade, advances in precision oncology have resulted in an increased demand for predictive assays that enable the selection and stratification of patients for treatment. The enormous divergence of signalling and transcriptional networks mediating the crosstalk between cancer, stromal and immune cells complicates the development of functionally relevant biomarkers based on a single gene or protein. However, the result of these complex processes can be uniquely captured in the morphometric features of stained tissue specimens. The possibility of digitizing whole-slide images of tissue has led to the advent of artificial intelligence (AI) and machine learning tools in digital pathology, which enable mining of subvisual morphometric phenotypes and might, ultimately, improve patient management. In this Perspective, we critically evaluate various AI-based computational approaches for digital pathology, focusing on deep neural networks and 'hand-crafted' feature-based methodologies. We aim to provide a broad framework for incorporating AI and machine learning tools into clinical oncology, with an emphasis on biomarker development. We discuss some of the challenges relating to the use of AI, including the need for well-curated validation datasets, regulatory approval and fair reimbursement strategies. Finally, we present potential future opportunities for precision oncology."
},
{
"pmid": "32335505",
"title": "Bevacizumab (Avastin®) in cancer treatment: A review of 15 years of clinical experience and future outlook.",
"abstract": "When the VEGF-A-targeting monoclonal antibody bevacizumab (Avastin®) entered clinical practice more than 15 years ago, it was one of the first targeted therapies and the first approved angiogenesis inhibitor. Marking the beginning for a new line of anti-cancer treatments, bevacizumab remains the most extensively characterized anti-angiogenetic treatment. Initially approved for treatment of metastatic colorectal cancer in combination with chemotherapy, its indications now include metastatic breast cancer, non-small-cell lung cancer, glioblastoma, renal cell carcinoma, ovarian cancer and cervical cancer. This review provides an overview of the clinical experience and lessons learned since bevacizumab's initial approval, and highlights how this knowledge has led to the investigation of novel combination therapies. In the past 15 years, our understanding of VEGF's role in the tumor microenvironment has evolved. We now know that VEGF not only plays a major role in controlling blood vessel formation, but also modulates tumor-induced immunosuppression. These immunomodulatory properties of bevacizumab have opened up new perspectives for combination therapy approaches, which are being investigated in clinical trials. Specifically, the combination of bevacizumab with cancer immunotherapy has recently been approved in non-small-cell lung cancer and clinical benefit was also demonstrated for treatment of hepatocellular carcinoma. However, despite intense investigation, reliable and validated biomarkers that would enable a more personalized use of bevacizumab remain elusive. Overall, bevacizumab is expected to remain a key agent in cancer therapy, both due to its established efficacy in approved indications and its promise as a partner in novel targeted combination treatments."
},
{
"pmid": "30842595",
"title": "Diverging inflammasome signals in tumorigenesis and potential targeting.",
"abstract": "Inflammasomes are molecular platforms that assemble upon sensing various intracellular stimuli. Inflammasome assembly leads to activation of caspase 1, thereby promoting the secretion of bioactive interleukin-1β (IL-1β) and IL-18 and inducing an inflammatory cell death called pyroptosis. Effectors of the inflammasome efficiently drive an immune response, primarily providing protection against microbial infections and mediating control over sterile insults. However, aberrant inflammasome signalling is associated with pathogenesis of inflammatory and metabolic diseases, neurodegeneration and malignancies. Chronic inflammation perpetuated by inflammasome activation plays a central role in all stages of tumorigenesis, including immunosuppression, proliferation, angiogenesis and metastasis. Conversely, inflammasome signalling also contributes to tumour suppression by maintaining intestinal barrier integrity, which portrays the diverse roles of inflammasomes in tumorigenesis. Studies have underscored the importance of environmental factors, such as diet and gut microbiota, in inflammasome signalling, which in turn influences tumorigenesis. In this Review, we deliver an overview of the interplay between inflammasomes and tumorigenesis and discuss their potential as therapeutic targets."
},
{
"pmid": "20303873",
"title": "The inflammasomes.",
"abstract": "Inflammasomes are molecular platforms activated upon cellular infection or stress that trigger the maturation of proinflammatory cytokines such as interleukin-1beta to engage innate immune defenses. Strong associations between dysregulated inflammasome activity and human heritable and acquired inflammatory diseases highlight the importance this pathway in tailoring immune responses. Here, we comprehensively review mechanisms directing normal inflammasome function and its dysregulation in disease. Agonists and activation mechanisms of the NLRP1, NLRP3, IPAF, and AIM2 inflammasomes are discussed. Regulatory mechanisms that potentiate or limit inflammasome activation are examined, as well as emerging links between the inflammasome and pyroptosis and autophagy."
},
{
"pmid": "26626159",
"title": "AIM2 inflammasome in infection, cancer, and autoimmunity: Role in DNA sensing, inflammation, and innate immunity.",
"abstract": "Recognition of DNA by the cell is an important immunological signature that marks the initiation of an innate immune response. AIM2 is a cytoplasmic sensor that recognizes dsDNA of microbial or host origin. Upon binding to DNA, AIM2 assembles a multiprotein complex called the inflammasome, which drives pyroptosis and proteolytic cleavage of the proinflammatory cytokines pro-IL-1β and pro-IL-18. Release of microbial DNA into the cytoplasm during infection by Francisella, Listeria, Mycobacterium, mouse cytomegalovirus, vaccinia virus, Aspergillus, and Plasmodium species leads to activation of the AIM2 inflammasome. In contrast, inappropriate recognition of cytoplasmic self-DNA by AIM2 contributes to the development of psoriasis, dermatitis, arthritis, and other autoimmune and inflammatory diseases. Inflammasome-independent functions of AIM2 have also been described, including the regulation of the intestinal stem cell proliferation and the gut microbiota ecology in the control of colorectal cancer. In this review we provide an overview of the latest research on AIM2 inflammasome and its role in infection, cancer, and autoimmunity."
},
{
"pmid": "31228512",
"title": "Inflammasome as a promising therapeutic target for cancer.",
"abstract": "Inflammasomes are the major mechanistic complexes that include members of the NOD-like receptor (NLRs) or AIM2-like receptors (ALRs) families, which are affiliated with the innate immune system. Once NLRs or ALRs are activated by pathogen-associated molecular patterns (PAMPs) or damage-associated molecular patterns (DAMPs), the caspase-1 or -11 is activated by binding with NLRs or ALRs via its own unique cytosolic domains. As a result, caspase-1 or -11 enhances the production of IL-1β and IL-18, which results in inflammation via the recruitment of immune cells, such as macrophages, and the promotion of programmed cell death mechanisms such as pyroptosis. In addition, the consistent cascades of inflammasomes would precede both minor and severe autoimmune diseases and cancers. The clinical relevance of inflammasomes in multiple forms of cancer highlights their therapeutic promise as molecular targets. To closely analyze the physiological roles of inflammasomes in cancers, here, we describe the fundamental knowledge regarding the current issues of inflammasomes in relevant cancers, and discuss possible therapeutic values in targeting these inflammasomes for the prevention and treatment of cancer."
},
{
"pmid": "23864729",
"title": "AIM2, an IFN-inducible cytosolic DNA sensor, in the development of benign prostate hyperplasia and prostate cancer.",
"abstract": "UNLABELLED\nClose links have been noted between chronic inflammation of the prostate and the development of human prostatic diseases such as benign prostate hyperplasia (BPH) and prostate cancer. However, the molecular mechanisms that contribute to prostatic inflammation remain largely unexplored. Recent studies have indicated that the IFN-inducible AIM2 protein is a cytosolic DNA sensor in macrophages and keratinocytes. Upon sensing DNA, AIM2 recruits the adaptor ASC and pro-CASP1 to assemble the AIM2 inflammasome. Activation of the AIM2 inflammasome cleaves pro-interleukin (IL)-1β and pro-IL-18 and promotes the secretion of IL-1β and IL-18 proinflammatory cytokines. Given that human prostatic infections are associated with chronic inflammation, the development of BPH is associated with an accumulation of senescent cells with a proinflammatory phenotype, and the development of prostate cancer is associated with the loss of IFN signaling, the role of AIM2 in mediating the formation of prostatic diseases was investigated. It was determined that IFNs (α, β, or γ) induced AIM2 expression in human prostate epithelial cells and cytosolic DNA activated the AIM2 inflammasome. Steady-state levels of the AIM2 mRNA were higher in BPH than in normal prostate tissue. However, the levels of AIM2 mRNA were significantly lower in clinical tumor specimens. Accordingly, constitutive levels of AIM2 mRNA and protein were lower in a subset of prostate cancer cells as compared with BPH cells. Further, the cytosolic DNA activated the AIM2 inflammasome in the androgen receptor-negative PC3 prostate cancer cell line, suggesting that AIM2-mediated events are independent of androgen receptor status.\n\n\nIMPLICATIONS\nThe AIM2 inflammasome has a fundamental role in the generation of human prostatic diseases."
},
{
"pmid": "29423077",
"title": "Integrating the dysregulated inflammasome-based molecular functionome in the malignant transformation of endometriosis-associated ovarian carcinoma.",
"abstract": "The coexistence of endometriosis (ES) with ovarian clear cell carcinoma (CCC) or endometrioid carcinoma (EC) suggested that malignant transformation of ES leads to endometriosis associated ovarian carcinoma (EAOC). However, there is still lack of an integrating data analysis of the accumulated experimental data to provide the evidence supporting the hypothesis of EAOC transformation. Herein we used a function-based analytic model with the publicly available microarray datasets to investigate the expression profiling between ES, CCC, and EC. We analyzed the functional regularity pattern of the three type of samples and hierarchically clustered the gene sets to identify key mechanisms regulating the malignant transformation of EAOC. We identified a list of 18 genes (NLRP3, AIM2, PYCARD, NAIP, Caspase-4, Caspase-7, Caspase-8, TLR1, TLR7, TOLLIP, NFKBIA, TNF, TNFAIP3, INFGR2, P2RX7, IL-1B, IL1RL1, IL-18) closely related to inflammasome complex, indicating an important role of inflammation/immunity in EAOC transformation. We next explore the association between these target genes and patient survival using Gene Expression Omnibus (GEO), and found significant correlation between the expression levels of the target genes and the progression-free survival. Interestingly, high expression levels of AIM2 and NLRP3, initiating proteins of inflammasomes, were significantly correlated with poor progression-free survival. Immunohistochemistry staining confirmed a correlation between high AIM2 and high Ki-67 in clinical EAOC samples, supporting its role in disease progression. Collectively, we established a bioinformatic platform of gene-set integrative molecular functionome to dissect the pathogenic pathways of EAOC, and demonstrated a key role of dysregulated inflammasome in modulating the malignant transformation of EAOC."
},
{
"pmid": "15726105",
"title": "Ascitic complement system in ovarian cancer.",
"abstract": "Ovarian cancer spreads intraperitoneally and forms fluid, whereby the diagnosis and therapy often become delayed. As the complement (C) system may provide a cytotoxic effector arm for both immunological surveillance and mAb-therapy, we have characterised the C system in the intraperitoneal ascitic fluid (AF) from ovarian cancer patients. Most of the AF samples showed alternative and classical pathway haemolytic activity. The levels of C3 and C4 were similar to or in the lower normal range when compared to values in normal sera, respectively. However, elevated levels of C3a and soluble C5b-9 suggested C activation in vivo. Malignant cells isolated from the AF samples had surface deposits of C1q and C3 activation products, but not of C5b-9 (the membrane attack complex; MAC). Activation could have become initiated by anti-tumour cell antibodies that were detected in the AFs and/or by changes on tumour cell surfaces. The lack of MAC was probably due to the expression of C membrane regulators CD46, CD55 and CD59 on the tumour cells. Soluble forms of C1 inhibitor, CD59 and CD46, and the alternative pathway inhibitors factor H and FHL-1 were present in the AF at concentrations higher than in serum samples. Despite the presence of soluble C inhibitors it was possible to use AF as a C source in antibody-initiated killing of ovarian carcinoma cells. These results demonstrate that although the ovarian ascitic C system fails as an effective immunological surveillance mechanism, it could be utilised as an effector mechanism in therapy with intraperitoneally administrated mAbs, especially if the intrinsic C regulators are neutralised."
},
{
"pmid": "16452172",
"title": "Drusen complement components C3a and C5a promote choroidal neovascularization.",
"abstract": "Age-related macular degeneration (AMD) is the leading cause of irreversible blindness in industrialized nations, affecting 30-50 million people worldwide. The earliest clinical hallmark of AMD is the presence of drusen, extracellular deposits that accumulate beneath the retinal pigmented epithelium. Although drusen nearly always precede and increase the risk of choroidal neovascularization (CNV), the late vision-threatening stage of AMD, it is unknown whether drusen contribute to the development of CNV. Both in patients with AMD and in a recently described mouse model of AMD, early subretinal pigmented epithelium deposition of complement components C3 and C5 occurs, suggesting a contributing role for these inflammatory proteins in the development of AMD. Here we provide evidence that bioactive fragments of these complement components (C3a and C5a) are present in drusen of patients with AMD, and that C3a and C5a induce VEGF expression in vitro and in vivo. Further, we demonstrate that C3a and C5a are generated early in the course of laser-induced CNV, an accelerated model of neovascular AMD driven by VEGF and recruitment of leukocytes into the choroid. We also show that genetic ablation of receptors for C3a or C5a reduces VEGF expression, leukocyte recruitment, and CNV formation after laser injury, and that antibody-mediated neutralization of C3a or C5a or pharmacological blockade of their receptors also reduces CNV. Collectively, these findings establish a mechanistic basis for the clinical observation that drusen predispose to CNV, revealing a role for immunological phenomena in angiogenesis and providing therapeutic targets for AMD."
},
{
"pmid": "30392780",
"title": "Deep learning provides a new computed tomography-based prognostic biomarker for recurrence prediction in high-grade serous ovarian cancer.",
"abstract": "BACKGROUND AND PURPOSE\nRecurrence is the main risk for high-grade serous ovarian cancer (HGSOC) and few prognostic biomarkers were reported. In this study, we proposed a novel deep learning (DL) method to extract prognostic biomarkers from preoperative computed tomography (CT) images, aiming at providing a non-invasive recurrence prediction model in HGSOC.\n\n\nMATERIALS AND METHODS\nWe enrolled 245 patients with HGSOC from two hospitals, which included a feature-learning cohort (n = 102), a primary cohort (n = 49) and two independent validation cohorts from two hospitals (n = 49 and n = 45). We trained a novel DL network in 8917 CT images from the feature-learning cohort to extract the prognostic biomarkers (DL feature) of HGSOC. Afterward, a DL-CPH model incorporating the DL feature and Cox proportional hazard (Cox-PH) regression was developed to predict the individual recurrence risk and 3-year recurrence probability of patients.\n\n\nRESULTS\nIn the two validation cohorts, the concordance-index of the DL-CPH model was 0.713 and 0.694. Kaplan-Meier's analysis clearly identified two patient groups with high and low recurrence risk (p = 0.0038 and 0.0164). The 3-year recurrence prediction was also effective (AUC = 0.772 and 0.825), which was validated by the good calibration and decision curve analysis. Moreover, the DL feature demonstrated stronger prognostic value than clinical characteristics.\n\n\nCONCLUSIONS\nThe DL method extracts effective CT-based prognostic biomarkers for HGSOC, and provides a non-invasive and preoperative model for individualized recurrence prediction in HGSOC. In addition, the DL-CPH model provides a new prognostic analysis method that can utilize CT data without follow-up for prognostic biomarker extraction."
},
{
"pmid": "29456725",
"title": "Application of deep learning to the classification of images from colposcopy.",
"abstract": "The objective of the present study was to investigate whether deep learning could be applied successfully to the classification of images from colposcopy. For this purpose, a total of 158 patients who underwent conization were enrolled, and medical records and data from the gynecological oncology database were retrospectively reviewed. Deep learning was performed with the Keras neural network and TensorFlow libraries. Using preoperative images from colposcopy as the input data and deep learning technology, the patients were classified into three groups [severe dysplasia, carcinoma in situ (CIS) and invasive cancer (IC)]. A total of 485 images were obtained for the analysis, of which 142 images were of severe dysplasia (2.9 images/patient), 257 were of CIS (3.3 images/patient), and 86 were of IC (4.1 images/patient). Of these, 233 images were captured with a green filter, and the remaining 252 were captured without a green filter. Following the application of L2 regularization, L1 regularization, dropout and data augmentation, the accuracy of the validation dataset was ~50%. Although the present study is preliminary, the results indicated that deep learning may be applied to classify colposcopy images."
},
{
"pmid": "30582927",
"title": "Survival outcome prediction in cervical cancer: Cox models vs deep-learning model.",
"abstract": "BACKGROUND\nHistorically, the Cox proportional hazard regression model has been the mainstay for survival analyses in oncologic research. The Cox proportional hazard regression model generally is used based on an assumption of linear association. However, it is likely that, in reality, there are many clinicopathologic features that exhibit a nonlinear association in biomedicine.\n\n\nOBJECTIVE\nThe purpose of this study was to compare the deep-learning neural network model and the Cox proportional hazard regression model in the prediction of survival in women with cervical cancer.\n\n\nSTUDY DESIGN\nThis was a retrospective pilot study of consecutive cases of newly diagnosed stage I-IV cervical cancer from 2000-2014. A total of 40 features that included patient demographics, vital signs, laboratory test results, tumor characteristics, and treatment types were assessed for analysis and grouped into 3 feature sets. The deep-learning neural network model was compared with the Cox proportional hazard regression model and 3 other survival analysis models for progression-free survival and overall survival. Mean absolute error and concordance index were used to assess the performance of these 5 models.\n\n\nRESULTS\nThere were 768 women included in the analysis. The median age was 49 years, and the majority were Hispanic (71.7%). The majority of tumors were squamous (75.3%) and stage I (48.7%). The median follow-up time was 40.2 months; there were 241 events for recurrence and progression and 170 deaths during the follow-up period. The deep-learning model showed promising results in the prediction of progression-free survival when compared with the Cox proportional hazard regression model (mean absolute error, 29.3 vs 316.2). The deep-learning model also outperformed all the other models, including the Cox proportional hazard regression model, for overall survival (mean absolute error, Cox proportional hazard regression vs deep-learning, 43.6 vs 30.7). The performance of the deep-learning model further improved when more features were included (concordance index for progression-free survival: 0.695 for 20 features, 0.787 for 36 features, and 0.795 for 40 features). There were 10 features for progression-free survival and 3 features for overall survival that demonstrated significance only in the deep-learning model, but not in the Cox proportional hazard regression model. There were no features for progression-free survival and 3 features for overall survival that demonstrated significance only in the Cox proportional hazard regression model, but not in the deep-learning model.\n\n\nCONCLUSION\nOur study suggests that the deep-learning neural network model may be a useful analytic tool for survival prediction in women with cervical cancer because it exhibited superior performance compared with the Cox proportional hazard regression model. This novel analytic approach may provide clinicians with meaningful survival information that potentially could be integrated into treatment decision-making and planning. Further validation studies are necessary to support this pilot study."
},
{
"pmid": "33526806",
"title": "Quantitative analysis of abnormalities in gynecologic cytopathology with deep learning.",
"abstract": "Cervical cancer is one of the most frequent cancers in women worldwide, yet the early detection and treatment of lesions via regular cervical screening have led to a drastic reduction in the mortality rate. However, the routine examination of screening as a regular health checkup of women is characterized as time-consuming and labor-intensive, while there is lack of characteristic phenotypic profile and quantitative analysis. In this research, over the analysis of a privately collected and manually annotated dataset of 130 cytological whole-slide images, the authors proposed a deep-learning diagnostic system to localize, grade, and quantify squamous cell abnormalities. The system can distinguish abnormalities at the morphology level, namely atypical squamous cells of undetermined significance, low-grade squamous intraepithelial lesion, high-grade squamous intraepithelial lesion, and squamous cell carcinoma, as well as differential phenotypes of normal cells. The case study covered 51 positive and 79 negative digital gynecologic cytology slides collected from 2016 to 2018. Our automatic diagnostic system demonstrated its sensitivity of 100% at slide-level abnormality prediction, with the confirmation with three pathologists who performed slide-level diagnosis and training sample annotations. In the cellular-level classification, we yielded an accuracy of 94.5% in the binary classification between normality and abnormality, and the AUC was above 85% for each subtype of epithelial abnormality. Although the final confirmation from pathologists is often a must, empirically, computer-aided methods are capable of the effective extraction, interpretation, and quantification of morphological features, while also making it more objective and reproducible."
},
{
"pmid": "29572387",
"title": "Automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks.",
"abstract": "Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucous carcinoma, endometrioid carcinoma, transparent cell carcinoma) is an essential part in the different diagnosis. Computer-aided diagnosis (CADx) can provide useful advice for pathologists to determine the diagnosis correctly. In our study, we employed a Deep Convolutional Neural Networks (DCNN) based on AlexNet to automatically classify the different types of ovarian cancers from cytological images. The DCNN consists of five convolutional layers, three max pooling layers, and two full reconnect layers. Then we trained the model by two group input data separately, one was original image data and the other one was augmented image data including image enhancement and image rotation. The testing results are obtained by the method of 10-fold cross-validation, showing that the accuracy of classification models has been improved from 72.76 to 78.20% by using augmented images as training data. The developed scheme was useful for classifying ovarian cancers from cytological images."
},
{
"pmid": "34622237",
"title": "Predicting endometrial cancer subtypes and molecular features from histopathology images using multi-resolution deep learning models.",
"abstract": "The determination of endometrial carcinoma histological subtypes, molecular subtypes, and mutation status is critical for the diagnostic process, and directly affects patients' prognosis and treatment. Sequencing, albeit slower and more expensive, can provide additional information on molecular subtypes and mutations that can be used to better select treatments. Here, we implement a customized multi-resolution deep convolutional neural network, Panoptes, that predicts not only the histological subtypes but also the molecular subtypes and 18 common gene mutations based on digitized H&E-stained pathological images. The model achieves high accuracy and generalizes well on independent datasets. Our results suggest that Panoptes, with further refinement, has the potential for clinical application to help pathologists determine molecular subtypes and mutations of endometrial carcinoma without sequencing."
},
{
"pmid": "31308507",
"title": "Clinical-grade computational pathology using weakly supervised deep learning on whole slide images.",
"abstract": "The development of decision support systems for pathology and their deployment in clinical practice have been hindered by the need for large manually annotated datasets. To overcome this problem, we present a multiple instance learning-based deep learning system that uses only the reported diagnoses as labels for training, thereby avoiding expensive and time-consuming pixel-wise manual annotations. We evaluated this framework at scale on a dataset of 44,732 whole slide images from 15,187 patients without any form of data curation. Tests on prostate cancer, basal cell carcinoma and breast cancer metastases to axillary lymph nodes resulted in areas under the curve above 0.98 for all cancer types. Its clinical application would allow pathologists to exclude 65-75% of slides while retaining 100% sensitivity. Our results show that this system has the ability to train accurate classification models at unprecedented scale, laying the foundation for the deployment of computational decision support systems in clinical practice."
},
{
"pmid": "34934144",
"title": "Weakly-supervised deep learning for ultrasound diagnosis of breast cancer.",
"abstract": "Conventional deep learning (DL) algorithm requires full supervision of annotating the region of interest (ROI) that is laborious and often biased. We aimed to develop a weakly-supervised DL algorithm that diagnosis breast cancer at ultrasound without image annotation. Weakly-supervised DL algorithms were implemented with three networks (VGG16, ResNet34, and GoogLeNet) and trained using 1000 unannotated US images (500 benign and 500 malignant masses). Two sets of 200 images (100 benign and 100 malignant masses) were used for internal and external validation sets. For comparison with fully-supervised algorithms, ROI annotation was performed manually and automatically. Diagnostic performances were calculated as the area under the receiver operating characteristic curve (AUC). Using the class activation map, we determined how accurately the weakly-supervised DL algorithms localized the breast masses. For internal validation sets, the weakly-supervised DL algorithms achieved excellent diagnostic performances, with AUC values of 0.92-0.96, which were not statistically different (all Ps > 0.05) from those of fully-supervised DL algorithms with either manual or automated ROI annotation (AUC, 0.92-0.96). For external validation sets, the weakly-supervised DL algorithms achieved AUC values of 0.86-0.90, which were not statistically different (Ps > 0.05) or higher (P = 0.04, VGG16 with automated ROI annotation) from those of fully-supervised DL algorithms (AUC, 0.84-0.92). In internal and external validation sets, weakly-supervised algorithms could localize 100% of malignant masses, except for ResNet34 (98%). The weakly-supervised DL algorithms developed in the present study were feasible for US diagnosis of breast cancer with well-performing localization and differential diagnosis."
},
{
"pmid": "33649564",
"title": "Data-efficient and weakly supervised computational pathology on whole-slide images.",
"abstract": "Deep-learning methods for computational pathology require either manual annotation of gigapixel whole-slide images (WSIs) or large datasets of WSIs with slide-level labels and typically suffer from poor domain adaptation and interpretability. Here we report an interpretable weakly supervised deep-learning method for data-efficient WSI processing and learning that only requires slide-level labels. The method, which we named clustering-constrained-attention multiple-instance learning (CLAM), uses attention-based learning to identify subregions of high diagnostic value to accurately classify whole slides and instance-level clustering over the identified representative regions to constrain and refine the feature space. By applying CLAM to the subtyping of renal cell carcinoma and non-small-cell lung cancer as well as the detection of lymph node metastasis, we show that it can be used to localize well-known morphological features on WSIs without the need for spatial labels, that it overperforms standard weakly supervised classification algorithms and that it is adaptable to independent test cohorts, smartphone microscopy and varying tissue content."
},
{
"pmid": "33216724",
"title": "Deep Learning Methods for Lung Cancer Segmentation in Whole-Slide Histopathology Images-The ACDC@LungHP Challenge 2019.",
"abstract": "Accurate segmentation of lung cancer in pathology slides is a critical step in improving patient care. We proposed the ACDC@LungHP (Automatic Cancer Detection and Classification in Whole-slide Lung Histopathology) challenge for evaluating different computer-aided diagnosis (CADs) methods on the automatic diagnosis of lung cancer. The ACDC@LungHP 2019 focused on segmentation (pixel-wise detection) of cancer tissue in whole slide imaging (WSI), using an annotated dataset of 150 training images and 50 test images from 200 patients. This paper reviews this challenge and summarizes the top 10 submitted methods for lung cancer segmentation. All methods were evaluated using metrics using the precision, accuracy, sensitivity, specificity, and DICE coefficient (DC). The DC ranged from 0.7354 ±0.1149 to 0.8372 ±0.0858. The DC of the best method was close to the inter-observer agreement (0.8398 ±0.0890). All methods were based on deep learning and categorized into two groups: multi-model method and single model method. In general, multi-model methods were significantly better (p < 0.01) than single model methods, with mean DC of 0.7966 and 0.7544, respectively. Deep learning based methods could potentially help pathologists find suspicious regions for further analysis of lung cancer in WSI."
},
{
"pmid": "34376717",
"title": "Artificial intelligence-assisted fast screening cervical high grade squamous intraepithelial lesion and squamous cell carcinoma diagnosis and treatment planning.",
"abstract": "Every year cervical cancer affects more than 300,000 people, and on average one woman is diagnosed with cervical cancer every minute. Early diagnosis and classification of cervical lesions greatly boosts up the chance of successful treatments of patients, and automated diagnosis and classification of cervical lesions from Papanicolaou (Pap) smear images have become highly demanded. To the authors' best knowledge, this is the first study of fully automated cervical lesions analysis on whole slide images (WSIs) of conventional Pap smear samples. The presented deep learning-based cervical lesions diagnosis system is demonstrated to be able to detect high grade squamous intraepithelial lesions (HSILs) or higher (squamous cell carcinoma; SQCC), which usually immediately indicate patients must be referred to colposcopy, but also to rapidly process WSIs in seconds for practical clinical usage. We evaluate this framework at scale on a dataset of 143 whole slide images, and the proposed method achieves a high precision 0.93, recall 0.90, F-measure 0.88, and Jaccard index 0.84, showing that the proposed system is capable of segmenting HSILs or higher (SQCC) with high precision and reaches sensitivity comparable to the referenced standard produced by pathologists. Based on Fisher's Least Significant Difference (LSD) test (P < 0.0001), the proposed method performs significantly better than the two state-of-the-art benchmark methods (U-Net and SegNet) in precision, F-Measure, Jaccard index. For the run time analysis, the proposed method takes only 210 seconds to process a WSI and is 20 times faster than U-Net and 19 times faster than SegNet, respectively. In summary, the proposed method is demonstrated to be able to both detect HSILs or higher (SQCC), which indicate patients for further treatments, including colposcopy and surgery to remove the lesion, and rapidly processing WSIs in seconds for practical clinical usages."
},
{
"pmid": "34359792",
"title": "Deep Learning Fast Screening Approach on Cytological Whole Slides for Thyroid Cancer Diagnosis.",
"abstract": "Thyroid cancer is the most common cancer in the endocrine system, and papillary thyroid carcinoma (PTC) is the most prevalent type of thyroid cancer, accounting for 70 to 80% of all thyroid cancer cases. In clinical practice, visual inspection of cytopathological slides is an essential initial method used by the pathologist to diagnose PTC. Manual visual assessment of the whole slide images is difficult, time consuming, and subjective, with a high inter-observer variability, which can sometimes lead to suboptimal patient management due to false-positive and false-negative. In this study, we present a fully automatic, efficient, and fast deep learning framework for fast screening of papanicolaou-stained thyroid fine needle aspiration (FNA) and ThinPrep (TP) cytological slides. To the authors' best of knowledge, this work is the first study to build an automated deep learning framework for identification of PTC from both FNA and TP slides. The proposed deep learning framework is evaluated on a dataset of 131 WSIs, and the results show that the proposed method achieves an accuracy of 99%, precision of 85%, recall of 94% and F1-score of 87% in segmentation of PTC in FNA slides and an accuracy of 99%, precision of 97%, recall of 98%, F1-score of 98%, and Jaccard-Index of 96% in TP slides. In addition, the proposed method significantly outperforms the two state-of-the-art deep learning methods, i.e., U-Net and SegNet, in terms of accuracy, recall, F1-score, and Jaccard-Index (p<0.001). Furthermore, for run-time analysis, the proposed fast screening method takes 0.4 min to process a WSI and is 7.8 times faster than U-Net and 9.1 times faster than SegNet, respectively."
},
{
"pmid": "27295650",
"title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
"abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
},
{
"pmid": "30224757",
"title": "Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning.",
"abstract": "Visual inspection of histopathology slides is one of the main methods used by pathologists to assess the stage, type and subtype of lung tumors. Adenocarcinoma (LUAD) and squamous cell carcinoma (LUSC) are the most prevalent subtypes of lung cancer, and their distinction requires visual inspection by an experienced pathologist. In this study, we trained a deep convolutional neural network (inception v3) on whole-slide images obtained from The Cancer Genome Atlas to accurately and automatically classify them into LUAD, LUSC or normal lung tissue. The performance of our method is comparable to that of pathologists, with an average area under the curve (AUC) of 0.97. Our model was validated on independent datasets of frozen tissues, formalin-fixed paraffin-embedded tissues and biopsies. Furthermore, we trained the network to predict the ten most commonly mutated genes in LUAD. We found that six of them-STK11, EGFR, FAT1, SETBP1, KRAS and TP53-can be predicted from pathology images, with AUCs from 0.733 to 0.856 as measured on a held-out population. These findings suggest that deep-learning models can assist pathologists in the detection of cancer subtype or gene mutations. Our approach can be applied to any cancer type, and the code is available at https://github.com/ncoudray/DeepPATH ."
},
{
"pmid": "27055470",
"title": "Immune checkpoint inhibition in ovarian cancer.",
"abstract": "Recent studies have shown that tumor cells acquire escape mechanisms to evade host immunity in the tumor microenvironment. Two key immune checkpoint pathways mediated by immunosuppressive co-signaling, the first via programmed cell death 1 (PD-1) and PD-1 ligand 1 (PD-1/PD-L1) and the second via CTLA-4 and B7 (CTLA-4/B7), have been previously described. Several clinical trials have revealed an outstanding anti-tumor efficacy of immune checkpoint inhibitors (anti-CTLA-4 antibody, anti-PD-1 antibody and/or anti-PD-L1 antibody) in patients with various types of solid malignancies, including non-small cell lung cancer, melanoma, renal cell cancer and ovarian cancer. In this review, we examine pre-clinical studies that described the local immune status and immune checkpoint signals in ovarian cancer, highlight recent clinical trials that evaluated immune checkpoint inhibitors against ovarian cancer and discuss the clinical issues regarding immune checkpoint inhibitors."
},
{
"pmid": "26115797",
"title": "Standard chemotherapy with or without bevacizumab for women with newly diagnosed ovarian cancer (ICON7): overall survival results of a phase 3 randomised trial.",
"abstract": "BACKGROUND\nThe ICON7 trial previously reported improved progression-free survival in women with ovarian cancer with the addition of bevacizumab to standard chemotherapy, with the greatest effect in patients at high risk of disease progression. We report the final overall survival results of the trial.\n\n\nMETHODS\nICON7 was an international, phase 3, open-label, randomised trial undertaken at 263 centres in 11 countries across Europe, Canada, Australia and New Zealand. Eligible adult women with newly diagnosed ovarian cancer that was either high-risk early-stage disease (International Federation of Gynecology and Obstetrics [FIGO] stage I-IIa, grade 3 or clear cell histology) or more advanced disease (FIGO stage IIb-IV), with an Eastern Cooperative Oncology Group performance status of 0-2, were enrolled and randomly assigned in a 1:1 ratio to standard chemotherapy (six 3-weekly cycles of intravenous carboplatin [AUC 5 or 6] and paclitaxel 175 mg/m(2) of body surface area) or the same chemotherapy regimen plus bevacizumab 7·5 mg per kg bodyweight intravenously every 3 weeks, given concurrently and continued with up to 12 further 3-weekly cycles of maintenance therapy. Randomisation was done by a minimisation algorithm stratified by FIGO stage, residual disease, interval between surgery and chemotherapy, and Gynecologic Cancer InterGroup group. The primary endpoint was progression-free survival; the study was also powered to detect a difference in overall survival. Analysis was by intention to treat. This trial is registered as an International Standard Randomised Controlled Trial, number ISRCTN91273375.\n\n\nFINDINGS\nBetween Dec 18, 2006, and Feb 16, 2009, 1528 women were enrolled and randomly assigned to receive chemotherapy (n=764) or chemotherapy plus bevacizumab (n=764). Median follow-up at the end of the trial on March 31, 2013, was 48·9 months (IQR 26·6-56·2), at which point 714 patients had died (352 in the chemotherapy group and 362 in the bevacizumab group). Our results showed evidence of non-proportional hazards, so we used the difference in restricted mean survival time as the primary estimate of effect. No overall survival benefit of bevacizumab was recorded (restricted mean survival time 44·6 months [95% CI 43·2-45·9] in the standard chemotherapy group vs 45·5 months [44·2-46·7] in the bevacizumab group; log-rank p=0·85). In an exploratory analysis of a predefined subgroup of 502 patients with poor prognosis disease, 332 (66%) died (174 in the standard chemotherapy group and 158 in the bevacizumab group), and a significant difference in overall survival was noted between women who received bevacizumab plus chemotherapy and those who received chemotherapy alone (restricted mean survival time 34·5 months [95% CI 32·0-37·0] with standard chemotherapy vs 39·3 months [37·0-41·7] with bevacizumab; log-rank p=0·03). However, in non-high-risk patients, the restricted mean survival time did not differ significantly between the two treatment groups (49·7 months [95% CI 48·3-51·1] in the standard chemotherapy group vs 48·4 months [47·0-49·9] in the bevacizumab group; p=0·20). An updated analysis of progression-free survival showed no difference between treatment groups. During extended follow-up, one further treatment-related grade 3 event (gastrointestinal fistula in a bevacizumab-treated patient), three grade 2 treatment-related events (cardiac failure, sarcoidosis, and foot fracture, all in bevacizumab-treated patients), and one grade 1 treatment-related event (vaginal haemorrhage, in a patient treated with standard chemotherapy) were reported.\n\n\nINTERPRETATION\nBevacizumab, added to platinum-based chemotherapy, did not increase overall survival in the study population as a whole. However, an overall survival benefit was recorded in poor-prognosis patients, which is concordant with the progression-free survival results from ICON7 and GOG-218, and provides further evidence towards the optimum use of bevacizumab in the treatment of ovarian cancer.\n\n\nFUNDING\nThe National Institute for Health Research through the UK National Cancer Research Network, the Medical Research Council, and Roche."
},
{
"pmid": "26853587",
"title": "Antiangiogenic therapy in oncology: current status and future directions.",
"abstract": "Angiogenesis, the formation of new blood vessels from pre-existing vessels, has been validated as a target in several tumour types through randomised trials, incorporating vascular endothelial growth factor (VEGF) pathway inhibitors into the therapeutic armoury. Although some tumours such as renal cell carcinoma, ovarian and cervical cancers, and pancreatic neuroendocrine tumours are sensitive to these drugs, others such as prostate cancer, pancreatic adenocarcinoma, and melanoma are resistant. Even when drugs have yielded significant results, improvements in progression-free survival, and, in some cases, overall survival, are modest. Thus, a crucial issue in development of these drugs is the search for predictive biomarkers-tests that predict which patients will, and will not, benefit before initiation of therapy. Development of biomarkers is important because of the need to balance efficacy, toxicity, and cost. Novel combinations of these drugs with other antiangiogenics or other classes of drugs are being developed, and the appreciation that these drugs have immunomodulatory and other modes of action will lead to combination regimens that capitalise on these newly understood mechanisms."
},
{
"pmid": "27268121",
"title": "Biological markers of prognosis, response to therapy and outcome in ovarian carcinoma.",
"abstract": "INTRODUCTION\nOvarian cancer (OvCa) is among the most common types of cancer and is the leading cause of death from gynecological malignancies in western countries. Cancer biomarkers have a potential for improving the management of OvCa patients at every point from screening and detection, diagnosis, prognosis, follow up, response to therapy and outcome.\n\n\nAREAS COVERED\nThe literature search has indicated a number of candidate biomarkers have recently emerged that could facilitate the molecular definition of OvCa, providing information about prognosis and predicting response to therapy. These potentially promising biomarkers include immune cells and their products, tumor-derived exosomes, nucleic acids and epigenetic biomarkers. Expert commentary: Although most of the biomarkers available today require prospective validation, the development of noninvasive liquid biopsy-based monitoring promises to improve their utility for evaluations of prognosis, response to therapy and outcome in OvCa."
},
{
"pmid": "23401453",
"title": "Markers of response for the antiangiogenic agent bevacizumab.",
"abstract": "Bevacizumab is the first antiangiogenic therapy proven to slow metastatic disease progression in patients with cancer. Although it has changed clinical practice, some patients do not respond or gradually develop resistance, resulting in rather modest gains in terms of overall survival. A major challenge is to develop robust biomarkers that can guide selection of patients for whom bevacizumab therapy is most beneficial. Here, we discuss recent progress in finding such markers, including the first results from randomized phase III clinical trials evaluating the efficacy of bevacizumab in combination with comprehensive biomarker analyses. In particular, these studies suggest that circulating levels of short vascular endothelial growth factor A (VEGF-A) isoforms, expression of neuropilin-1 and VEGF receptor 1 in tumors or plasma, and genetic variants in VEGFA or its receptors are strong biomarker candidates. The current challenge is to expand this first set of markers and to validate it and implement it into clinical practice. A first prospective biomarker study known as MERiDiAN, which will treat patients stratified for circulating levels of short VEGF-A isoforms with bevacizumab and paclitaxel, is planned and will hopefully provide us with new directions on how to treat patients more efficiently."
},
{
"pmid": "24947924",
"title": "The combination of circulating Ang1 and Tie2 levels predicts progression-free survival advantage in bevacizumab-treated patients with ovarian cancer.",
"abstract": "PURPOSE\nRandomized ovarian cancer trials, including ICON7, have reported improved progression-free survival (PFS) when bevacizumab was added to conventional cytotoxic therapy. The improvement was modest prompting the search for predictive biomarkers for bevacizumab.\n\n\nEXPERIMENTAL DESIGN\nPretreatment training (n=91) and validation (n=114) blood samples were provided by ICON7 patients. Plasma concentrations of 15 angio-associated factors were determined using validated multiplex ELISAs. Our statistical approach adopted PFS as the primary outcome measure and involved (i) searching for biomarkers with prognostic relevance or which related to between-individual variation in bevacizumab effect; (ii) unbiased determination of cutoffs for putative biomarker values; (iii) investigation of biologically meaningfully predictive combinations of putative biomarkers; and (iv) replicating the analysis on candidate biomarkers in the validation dataset.\n\n\nRESULTS\nThe combined values of circulating Ang1 (angiopoietin 1) and Tie2 (Tunica internal endothelial cell kinase 2) concentrations predicted improved PFS in bevacizumab-treated patients in the training set. Using median concentrations as cutoffs, high Ang1/low Tie2 values were associated with significantly improved PFS for bevacizumab-treated patients in both datasets (median, 23.0 months vs. 16.2; P=0.003) for the interaction of Ang1-Tie2 treatment in Cox regression analysis. The prognostic indices derived from the training set also distinguished high and low probability for progression in the validation set (P=0.008), generating similar values for HR (0.21 vs. 0.27) between treatment and control arms for patients with high Ang1 and low Tie2 values.\n\n\nCONCLUSIONS\nThe combined values of Ang1 and Tie2 are predictive biomarkers for improved PFS in bevacizumab-treated patients with ovarian cancer. These findings need to be validated in larger trials due to the limitation of sample size in this study."
},
{
"pmid": "25087181",
"title": "Prognostic importance of cell-free DNA in chemotherapy resistant ovarian cancer treated with bevacizumab.",
"abstract": "AIM\nTreatment of multiresistant epithelial ovarian cancer (EOC) is palliative and patients who have become resistant after multiple lines of chemotherapy often have an unmet need for further and less toxic treatment. Anti-angiogenic therapy has attracted considerable attention in the treatment of EOC in combination with chemotherapy. However, only a minor subgroup will benefit from the treatment and there is an obvious need for new markers to select such patients. The purpose of this study was to investigate the effect of single-agent bevacizumab in multiresistant EOC and the importance of circulating cell-free DNA (cfDNA) in predicting treatment response.\n\n\nMETHODS\nOne hundred and forty-four patients with multi-resistant EOC were treated with single-agent bevacizumab 10mg/kg every three weeks. Baseline plasma samples were analysed for levels of cfDNA by real-time polymerase chain reaction (PCR).\n\n\nRESULTS\nEighteen percent responded to treatment according to CA125 and 5.6% had partial response by Response Evaluation Criteria in Solid Tumours (RECIST). Stable disease was seen in 53.5% and 48.6% of the patients by CA125 and RECIST, respectively. Median progression free survival (PFS) and overall survival (OS) were 4.2 and 6.7 months, respectively. Cell-free DNA was highly correlated to PFS (p=0.0004) and OS (p=0.005) in both univariate and multivariate analyses (PFS, hazard ratio (HR)=1.98, p=0.002; OS, HR=1.66, p=0.02), as patients with high cfDNA had a poor outcome.\n\n\nCONCLUSIONS\nSingle-agent bevacizumab treatment in multiresistant EOC appears to be a valuable treatment option with acceptable side-effects. Cell-free DNA showed independent prognostic importance in patients treated with bevacizumab and could be applied as an adjunct for treatment selection."
},
{
"pmid": "29270405",
"title": "Therapy for Cancer: Strategy of Combining Anti-Angiogenic and Target Therapies.",
"abstract": "The concept that blood supply is required and necessary for cancer growth and spreading is intuitive and was firstly formalized by Judah Folkman in 1971, when he demonstrated that cancer cells release molecules able to promote the proliferation of endothelial cells and the formation of new vessels. This seminal result has initiated one of the most fascinating story of the medicine, which is offering a window of opportunity for cancer treatment based on the use of molecules inhibiting tumor angiogenesis and in particular vascular-endothelial growth factor (VEGF), which is the master gene in vasculature formation and is the commonest target of anti-angiogenic regimens. However, the clinical results are far from the remarkable successes obtained in pre-clinical models. The reasons of this discrepancy have been partially understood and well addressed in many reviews (Bergers and Hanahan, 2008; Bottsford-Miller et al., 2012; El-Kenawi and El-Remessy, 2013; Wang et al., 2015; Jayson et al., 2016). At present anti-angiogenic regimens are not used as single treatments but associated with standard chemotherapies. Based on emerging knowledge of the biology of VEGF, here we sustain the hypothesis of the efficacy of a dual approach based on targeting pro-angiogenic pathways and other druggable targets such as mutated oncogenes or the immune system."
},
{
"pmid": "21629292",
"title": "Principles and mechanisms of vessel normalization for cancer and other angiogenic diseases.",
"abstract": "Despite having an abundant number of vessels, tumours are usually hypoxic and nutrient-deprived because their vessels malfunction. Such abnormal milieu can fuel disease progression and resistance to treatment. Traditional anti-angiogenesis strategies attempt to reduce the tumour vascular supply, but their success is restricted by insufficient efficacy or development of resistance. Preclinical and initial clinical evidence reveal that normalization of the vascular abnormalities is emerging as a complementary therapeutic paradigm for cancer and other vascular disorders, which affect more than half a billion people worldwide. Here, we discuss the mechanisms, benefits, limitations and possible clinical translation of vessel normalization for cancer and other angiogenic disorders."
},
{
"pmid": "23997938",
"title": "Understanding and targeting resistance to anti-angiogenic therapies.",
"abstract": "Therapies targeting tumor angiogenesis are used in a variety of malignancies, however not all patients benefit from treatment and impact on tumor control may be transient and modest. Mechanisms of resistance to anti-angiogenic therapies can be broadly categorized into VEGF-axis dependent alterations, non-VEGF pathways, and stromal cell interactions. Complimentary combinations of agents that inhibit alternative mechanisms of blood vessel formation may optimize inhibition of angiogenesis and improve clinical benefit for patients. The purpose of this review is to detail the preclinical evidence for mechanisms of angiogenic resistance and provide an overview of novel therapeutic approaches exploiting these pathways."
},
{
"pmid": "22711031",
"title": "EZH2 inhibition: targeting the crossroad of tumor invasion and angiogenesis.",
"abstract": "Tumor angiogenesis and metastatic spreading are two highly interconnected phenomena, which contribute to cancer-associated deaths. Thus, the identification of novel strategies to target angiogenesis and metastatic spreading is crucial. Polycomb genes are a set of epigenetic effectors, structured in multimeric repressive complexes. EZH2 is the catalytic subunit of Polycomb repressive complex 2 (PRC2), which methylates histone H3 lysine 27, thereby silencing several tumor-suppressor genes. EZH2 is essential for cancer stem cell self-renewal. Interestingly, cancer stem cells are thought to be the seeds of metastatic spreading and are able to differentiate into tumor-associated endothelial cells. Pre-clinical studies showed that EZH2 is able to silence several anti-metastatic genes (e.g., E-cadherin and tissue inhibitors of metalloproteinases), thereby favoring cell invasion and anchorage-independent growth. In addition, EZH2 seems to play a crucial role in the regulation of tumor angiogenesis. High EZH2 expression predicts poor prognosis, high grade, and high stage in several cancer types. Recently, a small molecule inhibitor of PRC2 (DZNeP) demonstrated promising anti-tumor activity, both in vitro and in vivo. Interestingly, DZNeP was able to inhibit cancer cell invasion and tumor angiogenesis in prostate and brain cancers, respectively. At tumor-inhibiting doses, DZNeP is not harmful for non-transformed cells. In the present manuscript, we review current evidence supporting a role of EZH2 in metastatic spreading and tumor angiogenesis. Using Oncomine datasets, we show that DZNeP targets are specifically silenced in some metastatic cancers, and some of them may inhibit angiogenesis. Based on this evidence, we propose the development of EZH2 inhibitors as anti-angiogenic and anti-metastatic therapy."
},
{
"pmid": "34640548",
"title": "AIM2 Inflammasome in Tumor Cells as a Biomarker for Predicting the Treatment Response to Antiangiogenic Therapy in Epithelial Ovarian Cancer Patients.",
"abstract": "Antiangiogenic therapy, such as bevacizumab (BEV), has improved progression-free survival (PFS) and overall survival (OS) in high-risk patients with epithelial ovarian cancer (EOC) according to several clinical trials. Clinically, no reliable molecular biomarker is available to predict the treatment response to antiangiogenic therapy. Immune-related proteins can indirectly contribute to angiogenesis by regulating stromal cells in the tumor microenvironment. This study was performed to search biomarkers for prediction of the BEV treatment response in EOC patients. We conducted a hospital-based retrospective study from March 2013 to May 2020. Tissues from 78 Taiwanese patients who were newly diagnosed with EOC and peritoneal serous papillary carcinoma (PSPC) and received BEV therapy were collected. We used immunohistochemistry (IHC) staining and analyzed the expression of these putative biomarkers (complement component 3 (C3), complement component 5 (C5), and absent in melanoma 2 (AIM2)) based on the staining area and intensity of the color reaction to predict BEV efficacy in EOC patients. The immunostaining scores of AIM2 were significantly higher in the BEV-resistant group (RG) than in the BEV-sensitive group (SG) (355.5 vs. 297.1, p < 0.001). A high level of AIM2 (mean value > 310) conferred worse PFS after treatment with BEV than a low level of AIM2 (13.58 vs. 19.36 months, adjusted hazard ratio (HR) = 4.44, 95% confidence interval (CI) = 2.01-9.80, p < 0.001). There were no significant differences in C3 (p = 0.077) or C5 (p = 0.326) regarding BEV efficacy. AIM2 inflammasome expression can be a histopathological biomarker to predict the antiangiogenic therapy benefit in EOC patients. The molecular mechanism requires further investigation."
},
{
"pmid": "16873662",
"title": "Reducing the dimensionality of data with neural networks.",
"abstract": "High-dimensional data can be converted to low-dimensional codes by training a multilayer neural network with a small central layer to reconstruct high-dimensional input vectors. Gradient descent can be used for fine-tuning the weights in such \"autoencoder\" networks, but this works well only if the initial weights are close to a good solution. We describe an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool to reduce the dimensionality of data."
},
{
"pmid": "30976107",
"title": "Applications of machine learning in drug discovery and development.",
"abstract": "Drug discovery and development pipelines are long, complex and depend on numerous factors. Machine learning (ML) approaches provide a set of tools that can improve discovery and decision making for well-specified questions with abundant, high-quality data. Opportunities to apply ML occur in all stages of drug discovery. Examples include target validation, identification of prognostic biomarkers and analysis of digital pathology data in clinical trials. Applications have ranged in context and methodology, with some approaches yielding accurate predictions and insights. The challenges of applying ML lie primarily with the lack of interpretability and repeatability of ML-generated results, which may limit their application. In all areas, systematic and comprehensive high-dimensional data still need to be generated. With ongoing efforts to tackle these issues, as well as increasing awareness of the factors needed to validate ML approaches, the application of ML can promote data-driven decision making and has the potential to speed up the process and reduce failure rates in drug discovery and development."
}
] |
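Note: the entry above titled "Reducing the dimensionality of data with neural networks" (pmid 16873662) describes the autoencoder idea in prose only. The short Python sketch below is added purely as an illustration of that idea; it is not taken from that paper or from any code referenced in this document, and the layer sizes, learning rate, and random input batch are assumptions chosen only to keep the example self-contained and runnable.

# Minimal autoencoder sketch (illustrative only; all dimensions and hyperparameters are assumed).
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim: int = 784, code_dim: int = 30):
        super().__init__()
        # Encoder compresses the high-dimensional input to a small central code.
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, code_dim))
        # Decoder reconstructs the input from that code.
        self.decoder = nn.Sequential(nn.Linear(code_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = AutoEncoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x = torch.randn(64, 784)      # stand-in batch; real use would load flattened images
for step in range(5):         # a few gradient steps on the reconstruction loss
    recon = model(x)
    loss = loss_fn(recon, x)  # reconstruction error drives the training
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
codes = model.encoder(x)      # low-dimensional codes, analogous to PCA scores
print(codes.shape)            # torch.Size([64, 30])

In practice, the learned codes can be used in place of PCA components for visualization or as inputs to a downstream classifier.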
Frontiers in Psychology | null | PMC8997584 | 10.3389/fpsyg.2022.857924 | Realization of Self-Adaptive Higher Teaching Management Based Upon Expression and Speech Multimodal Emotion Recognition | In the process of communication between people, everyone will have emotions, and different emotions will have different effects on communication. With the help of external performance information accompanied by emotional expression, such as emotional speech signals or facial expressions, people can easily communicate with each other and understand each other. Emotion recognition is an important network of affective computers and research centers for signal processing, pattern detection, artificial intelligence, and human-computer interaction. Emotions convey important information in human communication and communication. Since the end of the last century, people have started the research on emotion recognition, especially how to correctly judge the emotion type has invested a lot of time and energy. In this paper, multi-modal emotion recognition is introduced to recognize facial expressions and speech, and conduct research on adaptive higher education management. Language and expression are the most direct ways for people to express their emotions. After obtaining the framework of the dual-modal emotion recognition system, the BOW model is used to identify the characteristic movement of local areas or key points. The recognition rates of emotion recognition for 1,000 audios of anger, disgust, fear, happiness, sadness and surprise are: 97.3, 83.75, 64.87, 89.87, 84.12, and 86.68%, respectively. | Related WorkRegarding emotion recognition, related scientists have done the following research. Jenke et al. reviews feature extraction methods for EEG emotion recognition based on 33 studies. He conducted comparative experiments on these features using machine learning techniques for feature selection on self-recorded datasets. He gives results on the performance of different feature selection methods, the use of selected feature types, and electrode location selection. The multivariate method selects features slightly better than the univariate method. He found that advanced feature extraction techniques have advantages over commonly used spectral power bands. The results also indicated a preference for parietal and central parietal locations (Jenke et al., 2017). Emotion is a key element of user-generated video. However, due to the complex and unstructured nature of user-generated content and the sparseness of video frames expressing emotion, it is difficult to understand the emotions conveyed in such videos. Xu et al. first proposed a technique to transfer knowledge from heterogeneous external sources, including image and text data, to facilitate three related tasks of understanding video emotion: emotion recognition, emotion attribution, and emotion-oriented (Xu et al., 2018). One of the challenges in virtual environments is that it is difficult for users to interact with these increasingly complex systems. Ultimately, giving machines the ability to sense user emotions will make interactions more intuitive and reliable. Menezes investigated features extracted from EEG signals to model affective states based on Russell's Circumplex model. The survey he presents aims to provide a basis for future work in modeling user influence to enhance interactive experiences in virtual environments. 
Wankhade and Doye aim to identify human emotional states or influences through EEG signals by employing advanced feature and classifier models. In the first stage of the recognition process, 2,501 (EMCD) and wavelet transforms are utilized to represent low-dimensional and descriptive EEG signals. Through EMCD, EEG redundancy can be ignored and important information can be extracted. The results demonstrate the superiority of the proposed work in identifying emotions more accurately (Wankhade and Doye, 2020). Zheng proposed a correlation analysis method for simultaneous EEG channel selection and emotion recognition. GSCCA is a group-sparse extension of the traditional canonical correlation analysis method that models linear correlations between emotion EEG category label vectors and the corresponding EEG feature vectors. For EEG emotion recognition, he uses common frequency features to describe EEG signals. Finally, extensive experiments are conducted on EEG-based emotion recognition, and the experimental results show that the proposed GSCCA method outperforms existing EEG-based emotion recognition methods (Zheng, 2017). Albornoz and Milone propose a novel system to model each language independently, preserving cultural attributes. In the second stage, the concept of emotional universality is used to map and predict emotions in never-before-seen languages. Features and classifiers widely tested on similar tasks are used to set the baseline. They developed a novel ensemble classifier to handle multiple languages and tested it on never-before-seen languages. Results show that the proposed model improves baseline accuracy, while its modular design allows the incorporation of new languages without training the entire system (Albornoz and Milone, 2017). Automatic recognition of spontaneous facial expressions is a major challenge in the field of affective computing. Happy et al. propose and build a new Facial Expression Database. In the experiments, emotions were evoked in participants by using emotional videos, while self-ratings of each experienced emotion were collected. Facial expression clips were carefully annotated by four trained decoders and further validated by self-reports of the nature and mood of the stimuli used. They performed extensive analysis of the database using several machine learning algorithms and provided the results for future reference (Happy et al., 2017). Huang et al. propose stimuli based on clipped subsets corresponding to four specific regions (happy, neutral, sad, and fearful) of the valence-arousal emotional space. The results show that the accuracy rates of the two multimodal fusion detections are 81.25 and 82.75%, respectively, which are higher than those of facial expression (74.38%) or EEG detection (66.88%). The combination of facial expression and EEG information for emotion recognition makes up for their shortcomings as single sources of information (Huang et al., 2017). These methods provide useful references for this research; however, because the related studies are recent and involve small sample sizes, their findings have not yet gained wide acceptance. | [
"28247068",
"26577473",
"29356568",
"29056963",
"28357975",
"34744913",
"28545237",
"27927685",
"27747819",
"29035069"
] | [
{
"pmid": "28247068",
"title": "The role of infants' mother-directed gaze, maternal sensitivity, and emotion recognition in childhood callous unemotional behaviours.",
"abstract": "While some children with callous unemotional (CU) behaviours show difficulty recognizing emotional expressions, the underlying developmental pathways are not well understood. Reduced infant attention to the caregiver's face and a lack of sensitive parenting have previously been associated with emerging CU features. The current study examined whether facial emotion recognition mediates the association between infants' mother-directed gaze, maternal sensitivity, and later CU behaviours. Participants were 206 full-term infants and their families from a prospective longitudinal study, the Durham Child Health and Development Study (DCHDS). Measures of infants' mother-directed gaze, and maternal sensitivity were collected at 6 months, facial emotion recognition performance at 6 years, and CU behaviours at 7 years. A path analysis showed a significant effect of emotion recognition predicting CU behaviours (β = -0.275, S.E. = 0.084, p = 0.001). While the main effects of infants' mother-directed gaze and maternal sensitivity were not significant, their interaction significantly predicted CU behaviours (β = 0.194, S.E. = 0.081, p = 0.016) with region of significance analysis showing a significant negative relationship between infant gaze and later CU behaviours only for those with low maternal sensitivity. There were no indirect effects of infants' mother-directed gaze, maternal sensitivity or the mother-directed gaze by maternal sensitivity interaction via emotion recognition. Emotion recognition appears to act as an independent predictor of CU behaviours, rather than mediating the relationship between infants' mother-directed gaze and maternal sensitivity with later CU behaviours. This supports the idea of multiple risk factors for CU behaviours."
},
{
"pmid": "26577473",
"title": "A generalized architecture of quantum secure direct communication for N disjointed users with authentication.",
"abstract": "In this paper, we generalize a secured direct communication process between N users with partial and full cooperation of quantum server. So, N - 1 disjointed users u1, u2, …, uN-1 can transmit a secret message of classical bits to a remote user uN by utilizing the property of dense coding and Pauli unitary transformations. The authentication process between the quantum server and the users are validated by EPR entangled pair and CNOT gate. Afterwards, the remained EPR will generate shared GHZ states which are used for directly transmitting the secret message. The partial cooperation process indicates that N - 1 users can transmit a secret message directly to a remote user uN through a quantum channel. Furthermore, N - 1 users and a remote user uN can communicate without an established quantum channel among them by a full cooperation process. The security analysis of authentication and communication processes against many types of attacks proved that the attacker cannot gain any information during intercepting either authentication or communication processes. Hence, the security of transmitted message among N users is ensured as the attacker introduces an error probability irrespective of the sequence of measurement."
},
{
"pmid": "29356568",
"title": "Usability study and pilot validation of a computer-based emotion recognition test for older adults with Alzheimer's disease and amnestic mild cognitive impairment.",
"abstract": "OBJECTIVES\nThis study aimed to carry out a pilot validation of Affect-GRADIOR, a computer-based emotion recognition test, with older adults. The study evaluated its usability, reliability and validity for the screening of people with Alzheimer´s disease (AD) and amnestic mild cognitive impairment (aMCI).\n\n\nMETHODS\nThe test was administered to 212 participants (76.37 ± 6.20 years) classified into three groups (healthy controls, n = 69; AD, n = 84; and aMCI, n = 59) on the basis of detailed neurological, neuropsychological, laboratory and neuro-imaging evidence. Data on usability were collected by means of a questionnaire and automated evaluation.\n\n\nRESULTS\nThe validated test comprised 53 stimuli and 7 practice items (one per emotion). Participants reported that Affect-GRADIOR was accessible and user-friendly. It had high internal consistency (ordinal Cronbach's α = 0.96). Test-retest reliability correlations were significant and robust (r = 0.840, p < 0.001). Exploratory factor analysis supported a seven-factor model of the emotions assessed (neutral expression, happiness, surprise, disgust, sadness, anger and fear). Receiver operating characteristic curve analyses suggested that the test discriminated healthy older adults from AD and aMCI cases. Correct answer score improved MMSE predictive power from 0.547 to 0.560 (Cox & Snell R2, p = 0.012), and Affect-GRADIOR speed of processing score improved MMSE predictive power from 0.547 to 0.563 (Cox & Snell R2, p = 0.010).\n\n\nCONCLUSIONS\nAffect-GRADIOR is a valid instrument for the assessment of the facial recognition of emotions in older adults with and without cognitive impairment."
},
{
"pmid": "29056963",
"title": "Fusion of Facial Expressions and EEG for Multimodal Emotion Recognition.",
"abstract": "This paper proposes two multimodal fusion methods between brain and peripheral signals for emotion recognition. The input signals are electroencephalogram and facial expression. The stimuli are based on a subset of movie clips that correspond to four specific areas of valance-arousal emotional space (happiness, neutral, sadness, and fear). For facial expression detection, four basic emotion states (happiness, neutral, sadness, and fear) are detected by a neural network classifier. For EEG detection, four basic emotion states and three emotion intensity levels (strong, ordinary, and weak) are detected by two support vector machines (SVM) classifiers, respectively. Emotion recognition is based on two decision-level fusion methods of both EEG and facial expression detections by using a sum rule or a production rule. Twenty healthy subjects attended two experiments. The results show that the accuracies of two multimodal fusion detections are 81.25% and 82.75%, respectively, which are both higher than that of facial expression (74.38%) or EEG detection (66.88%). The combination of facial expressions and EEG information for emotion recognition compensates for their defects as single information sources."
},
{
"pmid": "28357975",
"title": "Feasibility and Efficacy of Brief Computerized Training to Improve Emotion Recognition in Premanifest and Early-Symptomatic Huntington's Disease.",
"abstract": "OBJECTIVES\nDeficits in the recognition of negative emotions emerge before clinical diagnosis in Huntington's disease (HD). To address emotion recognition deficits, which have been shown in schizophrenia to be improved by computerized training, we conducted a study of the feasibility and efficacy of computerized training of emotion recognition in HD.\n\n\nMETHODS\nWe randomly assigned 22 individuals with premanifest or early symptomatic HD to the training or control group. The training group used a self-guided online training program, MicroExpression Training Tool (METT), twice weekly for 4 weeks. All participants completed measures of emotion recognition at baseline and post-training time-points. Participants in the training group also completed training adherence measures.\n\n\nRESULTS\nParticipants in the training group completed seven of the eight sessions on average. Results showed a significant group by time interaction, indicating that METT training was associated with improved accuracy in emotion recognition.\n\n\nCONCLUSIONS\nAlthough sample size was small, our study demonstrates that emotion recognition remediation using the METT is feasible in terms of training adherence. The evidence also suggests METT may be effective in premanifest or early-symptomatic HD, opening up a potential new avenue for intervention. Further study with a larger sample size is needed to replicate these findings, and to characterize the durability and generalizability of these improvements, and their impact on functional outcomes in HD. (JINS, 2017, 23, 314-321)."
},
{
"pmid": "34744913",
"title": "Emotion Recognition of Chinese Paintings at the Thirteenth National Exhibition of Fines Arts in China Based on Advanced Affective Computing.",
"abstract": "Today, with the rapid development of economic level, people's esthetic requirements are also rising, they have a deeper emotional understanding of art, and the voice of their traditional art and culture is becoming higher. The study expects to explore the performance of advanced affective computing in the recognition and analysis of emotional features of Chinese paintings at the 13th National Exhibition of Fines Arts. Aiming at the problem of \"semantic gap\" in the emotion recognition task of images such as traditional Chinese painting, the study selects the AlexNet algorithm based on convolutional neural network (CNN), and further improves the AlexNet algorithm. Meanwhile, the study adds chi square test to solve the problems of data redundancy and noise in various modes such as Chinese painting. Moreover, the study designs a multimodal emotion recognition model of Chinese painting based on improved AlexNet neural network and chi square test. Finally, the performance of the model is verified by simulation with Chinese painting in the 13th National Exhibition of Fines Arts as the data source. The proposed algorithm is compared with Long Short-Term Memory (LSTM), CNN, Recurrent Neural Network (RNN), AlexNet, and Deep Neural Network (DNN) algorithms from the training set and test set, respectively, The emotion recognition accuracy of the proposed algorithm reaches 92.23 and 97.11% in the training set and test set, respectively, the training time is stable at about 54.97 s, and the test time is stable at about 23.74 s. In addition, the analysis of the acceleration efficiency of each algorithm shows that the improved AlexNet algorithm is suitable for processing a large amount of brain image data, and the acceleration ratio is also higher than other algorithms. And the efficiency in the test set scenario is slightly better than that in the training set scenario. On the premise of ensuring the error, the multimodal emotion recognition model of Chinese painting can achieve high accuracy and obvious acceleration effect. More importantly, the emotion recognition and analysis effect of traditional Chinese painting is the best, which can provide an experimental basis for the digital understanding and management of emotion of quintessence."
},
{
"pmid": "28545237",
"title": "Emotion Recognition in Adolescents with Down Syndrome: A Nonverbal Approach.",
"abstract": "Several studies have reported that persons with Down syndrome (DS) have difficulties recognizing emotions; however, there is insufficient research to prove that a deficit of emotional knowledge exists in DS. The aim of this study was to evaluate the recognition of emotional facial expressions without making use of emotional vocabulary, given the language problems known to be associated with this syndrome. The ability to recognize six emotions was assessed in 24 adolescents with DS. Their performance was compared to that of 24 typically developing children with the same nonverbal-developmental age, as assessed by Raven's Progressive Matrices. Analysis of the results revealed no global difference; only marginal differences in the recognition of different emotions appeared. Study of the developmental trajectories revealed a developmental difference: the nonverbal reasoning level assessed by Raven's matrices did not predict success on the experimental tasks in the DS group, contrary to the typically developing group. These results do not corroborate the hypothesis that there is an emotional knowledge deficit in DS and emphasize the importance of using dynamic, strictly nonverbal tasks in populations with language disorders."
},
{
"pmid": "27927685",
"title": "Influences on Facial Emotion Recognition in Deaf Children.",
"abstract": "This exploratory research is aimed at studying facial emotion recognition abilities in deaf children and how they relate to linguistic skills and the characteristics of deafness. A total of 166 participants (75 deaf) aged 3-8 years were administered the following tasks: facial emotion recognition, naming vocabulary and cognitive ability. The children's teachers or speech therapists also responded to two questionnaires, one on children's linguistic-communicative skills and the other providing personal information. Results show a delay in deaf children's capacity to recognize some emotions (scared, surprised, and disgusted) but not others (happy, sad, and angry). Notably, they recognized emotions in a similar order to hearing children. Moreover, linguistic skills were found to be related to emotion recognition skills, even when controlling for age. We discuss the importance of facial emotion recognition of language, conversation, some characteristics of deafness, and parents' educational level."
},
{
"pmid": "27747819",
"title": "Familiarity effects in EEG-based emotion recognition.",
"abstract": "Although emotion detection using electroencephalogram (EEG) data has become a highly active area of research over the last decades, little attention has been paid to stimulus familiarity, a crucial subjectivity issue. Using both our experimental data and a sophisticated database (DEAP dataset), we investigated the effects of familiarity on brain activity based on EEG signals. Focusing on familiarity studies, we allowed subjects to select the same number of familiar and unfamiliar songs; both resulting datasets demonstrated the importance of reporting self-emotion based on the assumption that the emotional state when experiencing music is subjective. We found evidence that music familiarity influences both the power spectra of brainwaves and the brain functional connectivity to a certain level. We conducted an additional experiment using music familiarity in an attempt to recognize emotional states; our empirical results suggested that the use of only songs with low familiarity levels can enhance the performance of EEG-based emotion classification systems that adopt fractal dimension or power spectral density features and support vector machine, multilayer perceptron or C4.5 classifier. This suggests that unfamiliar songs are most appropriate for the construction of an emotion recognition system."
},
{
"pmid": "29035069",
"title": "Emotion recognition in Parkinson's disease: Static and dynamic factors.",
"abstract": "OBJECTIVE\nThe authors tested the hypothesis that Parkinson's disease (PD) participants would perform better in an emotion recognition task with dynamic (video) stimuli compared to a task using only static (photograph) stimuli and compared performances on both tasks to healthy control participants.\n\n\nMETHOD\nIn a within-subjects study, 21 PD participants and 20 age-matched healthy controls performed both static and dynamic emotion recognition tasks. The authors used a 2-way analysis of variance (controlling for individual participant variance) to determine the effect of group (PD, control) on emotion recognition performance in static and dynamic facial recognition tasks.\n\n\nRESULTS\nGroups did not significantly differ in their performances on the static and dynamic tasks; however, the trend was suggestive that PD participants performed worse than controls.\n\n\nCONCLUSIONS\nPD participants may have subtle emotion recognition deficits that are not ameliorated by the addition of contextual cues, similar to those found in everyday scenarios. Consistent with previous literature, the results suggest that PD participants may have underlying emotion recognition deficits, which may impact their social functioning. (PsycINFO Database Record"
}
] |
Foods | null | PMC8997768 | 10.3390/foods11070972 | Linking Categorical and Dimensional Approaches to Assess Food-Related Emotions | Reflecting the two main prevailing and opposing views on the nature of emotions, emotional responses to food and beverages are typically measured using either (a) a categorical (lexicon-based) approach where users select or rate the terms that best express their food-related feelings or (b) a dimensional approach where they rate perceived food items along the dimensions of valence and arousal. Relating these two approaches is problematic since a response in terms of valence and arousal is not easily expressed in terms of emotions (like happy or disgusted). In this study, we linked the dimensional approach to a categorical approach by establishing a mapping between a set of 25 emotion terms (EsSense25) and the valence–arousal space (via the EmojiGrid graphical response tool), using a set of 20 food images. In two ‘matching’ tasks, the participants first imagined how the food shown in a given image would make them feel and then reported either the emotion terms or the combination of valence and arousal that best described their feelings. In two labeling tasks, the participants first imagined experiencing a given emotion term and then they selected either the foods (images) that appeared capable of eliciting that feeling or reported the combination of valence and arousal that best reflected that feeling. By combining (1) the mapping between the emotion terms and the food images with (2) the mapping of the food images to the valence–arousal space, we established (3) an indirect (via the images) mapping of the emotion terms to the valence–arousal space. The results show that the mapping between terms and images was reliable and that the linkages have straightforward and meaningful interpretations. The valence and arousal values that were assigned to the emotion terms through indirect mapping to the valence–arousal space were typically less extreme than those that were assigned through direct mapping. | 1.2. Related Work. Using a CATA paradigm with cashew nuts, peanuts, chocolate, fruit, and processed tomatoes as the focal product categories, participants in a study by Jaeger, Spinelli, Ares and Monteleone [34] reported their sensory product perceptions (in terms of sensory attributes like appearance, flavor, taste, texture, and odor) and their associations with emotion terms (from the EsSense Profile). Relationships between the resulting food-elicited emotional associations and the sensory terms were established by mapping both to the circumplex model of human core affect through correspondence analysis. While many of these relationships were easy to interpret, others were less obvious. Jaeger et al. [34] suggested further validating their mapping of emotion terms to the valence–arousal space through a direct mapping procedure. Scherer et al. [32] linked a dimensional and a categorical approach to emotion assessment through the Geneva Emotion Wheel (GEW) graphical response tool. In the GEW, 20 emotion terms are equidistantly spaced around the circumference of a circular two-dimensional space representing the dimensions of valence and control/power. Different emotion terms (which only appear when the user moves a cursor over their position) are placed inside the circle, such that their intensity increases with their distance from the center. Thus, the GEW combines three response dimensions (i.e., valence, intensity, and control/power) in a two-dimensional representation.
However, the control/power dimension is rather abstract and appears difficult for users to rate. Although the GEW was developed as a general instrument for the measurement of emotional responses to affective stimuli, its emotion terms do not apply to food-elicited emotions. Also, arousal is not explicitly measured. Although arousal and intensity are related, they are distinct concepts that are not linearly related [35]. Lorette [6] linked the discrete categorical approach to the continuous dimensional approach through a two-step instrument called the Two-Dimensional Affect and Feeling Space (2DAFS). The 2DAFS is a clickable and labeled affect grid that is followed by the presentation of a valence–arousal space labeled with 36 basic emotion terms. The emotion terms were centered at valence and arousal coordinates that had been determined in previous (unrelated) studies in which these terms had been rated for their valence and arousal [36,37,38]. After reporting their appraisal (in terms of valence and arousal) of the emotional stimuli by clicking on the affect grid, the users can further categorize their response by selecting one or more words from the spatially ordered set of emotion terms. Since the emotion terms are positioned according to their valence and arousal ratings, the terms that most probably apply (and are therefore most likely to be selected by the user) are arranged closest to the location where the user clicked on the grid, enabling an efficient and fast response. Although the 2DAFS was developed as a general instrument to measure emotional responses to affective stimuli, its emotion terms are not suitable for characterizing food-elicited emotions. A further limitation of the instrument is that participants can only choose one emotion term per response, thus preventing the reporting of mixed emotions.
"25521352",
"31742235",
"27798257",
"30599977",
"30546339",
"28784478",
"32036918",
"21707162",
"27978493",
"17576282",
"29803492",
"23231533",
"19928612",
"23404613",
"30740078",
"7962581",
"32877409",
"32730278",
"27330520",
"18839484",
"23055170",
"15703257",
"33384331",
"28736213"
] | [
{
"pmid": "25521352",
"title": "Evoked emotions predict food choice.",
"abstract": "In the current study we show that non-verbal food-evoked emotion scores significantly improve food choice prediction over merely liking scores. Previous research has shown that liking measures correlate with choice. However, liking is no strong predictor for food choice in real life environments. Therefore, the focus within recent studies shifted towards using emotion-profiling methods that successfully can discriminate between products that are equally liked. However, it is unclear how well scores from emotion-profiling methods predict actual food choice and/or consumption. To test this, we proposed to decompose emotion scores into valence and arousal scores using Principal Component Analysis (PCA) and apply Multinomial Logit Models (MLM) to estimate food choice using liking, valence, and arousal as possible predictors. For this analysis, we used an existing data set comprised of liking and food-evoked emotions scores from 123 participants, who rated 7 unlabeled breakfast drinks. Liking scores were measured using a 100-mm visual analogue scale, while food-evoked emotions were measured using 2 existing emotion-profiling methods: a verbal and a non-verbal method (EsSense Profile and PrEmo, respectively). After 7 days, participants were asked to choose 1 breakfast drink from the experiment to consume during breakfast in a simulated restaurant environment. Cross validation showed that we were able to correctly predict individualized food choice (1 out of 7 products) for over 50% of the participants. This number increased to nearly 80% when looking at the top 2 candidates. Model comparisons showed that evoked emotions better predict food choice than perceived liking alone. However, the strongest predictive strength was achieved by the combination of evoked emotions and liking. Furthermore we showed that non-verbal food-evoked emotion scores more accurately predict food choice than verbal food-evoked emotions scores."
},
{
"pmid": "31742235",
"title": "Hyperconnectivity of the ventromedial prefrontal cortex in obsessive-compulsive disorder.",
"abstract": "Neuroimaging research has highlighted maladaptive thalamo-cortico-striatal interactions in obsessive-compulsive disorder as well as a more general deficit in prefrontal functioning linked with compromised executive functioning. More specifically, dysfunction in the ventromedial prefrontal cortex, a central hub in coordinating flexible behaviour, is thought to be central to obsessive-compulsive disorder symptomatology. We sought to determine the intrinsic alterations of the ventromedial prefrontal cortex in obsessive-compulsive disorder employing resting-state functional connectivity magnetic resonance imaging analyses with a ventromedial prefrontal cortex seed region of interest. A total of 38 obsessive-compulsive disorder patients and 33 matched controls were included in our analyses. We found widespread ventromedial prefrontal cortex hyperconnectivity during rest in patients with obsessive-compulsive disorder, displaying increased connectivity with its own surrounding region in addition to hyperconnectivity with several areas along the thalamo-cortico-striatal loop: thalamus, caudate and frontal gyrus. Obsessive-compulsive disorder patients also exhibited increased functional connectivity from the ventromedial prefrontal cortex to temporal and occipital lobes, cerebellum and the motor cortex, reflecting ventromedial prefrontal cortex hyperconnectivity in large-scale brain networks. Furthermore, hyperconnectivity of the ventromedial prefrontal cortex and caudate correlated with obsessive-compulsive disorder symptomatology. Additionally, we used three key thalamo-cortico-striatal regions that were hyperconnected with our ventromedial prefrontal cortex seed as supplementary seed regions, revealing hypoconnectivity along the orbito- and lateral prefrontal cortex-striatal pathway. Taken together, these results confirm a central role of a hyperconnected ventromedial prefrontal cortex in obsessive-compulsive disorder, with a special role for maladaptive crosstalk with the caudate, and indications for hypoconnectivity along the lateral and orbito pathways."
},
{
"pmid": "27798257",
"title": "The theory of constructed emotion: an active inference account of interoception and categorization.",
"abstract": "The science of emotion has been using folk psychology categories derived from philosophy to search for the brain basis of emotion. The last two decades of neuroscience research have brought us to the brink of a paradigm shift in understanding the workings of the brain, however, setting the stage to revolutionize our understanding of what emotions are and how they work. In this article, we begin with the structure and function of the brain, and from there deduce what the biological basis of emotions might be. The answer is a brain-based, computational account called the theory of constructed emotion."
},
{
"pmid": "30599977",
"title": "EmojiGrid: A 2D pictorial scale for cross-cultural emotion assessment of negatively and positively valenced food.",
"abstract": "Because of the globalization of world food markets there is a growing need for valid and language independent self-assessment tools to measure food-related emotions. We recently introduced the EmojiGrid as a language-independent, graphical affective self-report tool. The EmojiGrid is a Cartesian grid that is labeled with facial icons (emoji) expressing different degrees of valence and arousal. Users can report their subjective ratings of valence and arousal by marking the location on the area of the grid that corresponds to the emoji that best represent their affective state when perceiving a given food or beverage. In a previous study we found that the EmojiGrid is robust, self-explaining and intuitive: valence and arousal ratings were independent of framing and verbal instructions. This suggests that the EmojiGrid may be a valuable tool for cross-cultural studies. To test this hypothesis, we performed an online experiment in which respondents from Germany (GE), Japan (JP), the Netherlands (NL) and the United Kingdom (UK) rated valence and arousal for 60 different food images (covering a large part of the affective space) using the EmojiGrid. The results show that the nomothetic relation between valence and arousal has the well-known U-shape for all groups. The European groups (GE, NL and UK) closely agree in their overall rating behavior. Compared to the European groups, the Japanese group systematically gave lower mean arousal ratings to low valenced images and lower mean valence ratings to high valenced images. These results agree with known cultural response characteristics. We conclude that the EmojiGrid is potentially a valid and language-independent affective self-report tool for cross-cultural research on food-related emotions. It reliably reproduces the familiar nomothetic U-shaped relation between valence and arousal across cultures, with shape variations reflecting established cultural characteristics."
},
{
"pmid": "30546339",
"title": "EmojiGrid: A 2D Pictorial Scale for the Assessment of Food Elicited Emotions.",
"abstract": "Research on food experience is typically challenged by the way questions are worded. We therefore developed the EmojiGrid: a graphical (language-independent) intuitive self-report tool to measure food-related valence and arousal. In a first experiment participants rated the valence and the arousing quality of 60 food images, using either the EmojiGrid or two independent visual analog scales (VAS). The valence ratings obtained with both tools strongly agree. However, the arousal ratings only agree for pleasant food items, but not for unpleasant ones. Furthermore, the results obtained with the EmojiGrid show the typical universal U-shaped relation between the mean valence and arousal that is commonly observed for a wide range of (visual, auditory, tactile, olfactory) affective stimuli, while the VAS tool yields a positive linear association between valence and arousal. We hypothesized that this disagreement reflects a lack of proper understanding of the arousal concept in the VAS condition. In a second experiment we attempted to clarify the arousal concept by asking participants to rate the valence and intensity of the taste associated with the perceived food items. After this adjustment the VAS and EmojiGrid yielded similar valence and arousal ratings (both showing the universal U-shaped relation between the valence and arousal). A comparison with the results from the first experiment showed that VAS arousal ratings strongly depended on the actual wording used, while EmojiGrid ratings were not affected by the framing of the associated question. This suggests that the EmojiGrid is largely self-explaining and intuitive. To test this hypothesis, we performed a third experiment in which participants rated food images using the EmojiGrid without an associated question, and we compared the results to those of the first two experiments. The EmojiGrid ratings obtained in all three experiments closely agree. We conclude that the EmojiGrid appears to be a valid and intuitive affective self-report tool that does not rely on written instructions and that can efficiently be used to measure food-related emotions."
},
{
"pmid": "28784478",
"title": "A comparison of five methodological variants of emoji questionnaires for measuring product elicited emotional associations: An application with seafood among Chinese consumers.",
"abstract": "Product insights beyond hedonic responses are increasingly sought and include emotional associations. Various word-based questionnaires for direct measurement exist and an emoji variant was recently proposed. Herein, emotion words are replaced with emoji conveying a range of emotions. Further assessment of emoji questionnaires is needed to establish their relevance in food-related consumer research. Methodological research contributes hereto and in the present research the effects of question wording and response format are considered. Specifically, a web study was conducted with Chinese consumers (n=750) using four seafood names as stimuli (mussels, lobster, squid and abalone). Emotional associations were elicited using 33 facial emoji. Explicit reference to \"how would you feel?\" in the question wording changed product emoji profiles minimally. Consumers selected only a few emoji per stimulus when using CATA (check-all-that-apply) questions, and layout of the CATA question had only a small impact on responses. A comparison of CATA questions with forced yes/no questions and RATA (rate-all-that-apply) questions revealed an increase in frequency of emoji use for yes/no questions, but not a corresponding improvement in sample discrimination. For the stimuli in this research, which elicited similar emotional associations, RATA was probably the best methodological choice, with 8.5 emoji being used per stimulus, on average, and increased sample discrimination relative to CATA (12% vs. 6-8%). The research provided additional support for the potential of emoji surveys as a method for measurement of emotional associations to foods and beverages and began contributing to development of guidelines for implementation."
},
{
"pmid": "32036918",
"title": "Health beliefs towards kefir correlate with emotion and attitude: A study using an emoji scale in Brazil.",
"abstract": "Emojis can be used to explore food-evoked emotions in order to provide information that can support the product development and marketing decisions. This study aimed to evaluate consumers' acceptance, purchase intent and emotional responses to milk beverages, with and without kefir added, before and after these consumers were informed about the products' composition (0%, 15%, 30% and 50% m/v) and health claims toward kefir (blind and informed tests, respectively). Emotional responses were assessed by emoji use within a RATA questionnaire in order quantify the perceived significance of the emojis chosen. In the informed test, the consumers' perception of the sensory attributes of the milk beverages, such as their perception of an acid taste in added kefir beverages was shown to have changed. Overall, participants attributed significantly higher acceptance and purchase intent scores to added kefir beverages after they had been informed on its health benefits. In addition, expressions of positive emotion increased when participants were exposed to stimuli related to health benefits of kefir (15%, 30% and 50% m/v), while negative expressions of emotion decreased. The provided information of kefir modified valence and arousal in subjects, and it can be said that to 30% of kefir can be added to yogurt without compromising its sensory acceptability. Thus, health benefits alone cannot improve product acceptance, since participants found a 50% addition of kefir to be unpleasant when tasted during a blind test. Mixed beverages may present a probiotic beverage alternative for consumers who dislike kefir milk, but want to include it in their diets. The implications of liking and purchase intent and how they are linked to emotions are discussed in this paper as well."
},
{
"pmid": "21707162",
"title": "A 12-Point Circumplex Structure of Core Affect.",
"abstract": "Core Affect is a state accessible to consciousness as a single simple feeling (feeling good or bad, energized or enervated) that can vary from moment to moment and that is the heart of, but not the whole of, mood and emotion. In four correlational studies (Ns = 535, 190, 234, 395), a 12-Point Affect Circumplex (12-PAC) model of Core Affect was developed that is finer grained than previously available and that integrates major dimensional models of mood and emotion. Self-report scales in three response formats were cross-validated for Core Affect felt during current and remembered moments. A technique that places any external variable into the 12-PAC showed that 29 of 38 personality scales and 30 of 30 mood scales are significantly related to Core Affect, but not in a way that revealed its basic dimensions."
},
{
"pmid": "27978493",
"title": "Valence and arousal-based affective evaluations of foods.",
"abstract": "We investigated the nutrient-specific and individual-specific validity of dual-process models of valenced and arousal-based affective evaluations of foods across the disordered eating spectrum. 283 undergraduate women provided implicit and explicit valence and arousal-based evaluations of 120 food photos with known nutritional information on structurally similar indirect and direct affect misattribution procedures (AMP; Payne et al., 2005, 2008), and completed questionnaires assessing body mass index (BMI), hunger, restriction, and binge eating. Nomothetically, added fat and added sugar enhance evaluations of foods. Idiographically, hunger and binge eating enhance activation, whereas BMI and restriction enhance pleasantness. Added fat is salient for women who are heavier, hungrier, or who restrict; added sugar is influential for less hungry women. Restriction relates only to valence, whereas binge eating relates only to arousal. Findings are similar across implicit and explicit affective evaluations, albeit stronger for explicit, providing modest support for dual-process models of affective evaluation of foods."
},
{
"pmid": "17576282",
"title": "Putting feelings into words: affect labeling disrupts amygdala activity in response to affective stimuli.",
"abstract": "Putting feelings into words (affect labeling) has long been thought to help manage negative emotional experiences; however, the mechanisms by which affect labeling produces this benefit remain largely unknown. Recent neuroimaging studies suggest a possible neurocognitive pathway for this process, but methodological limitations of previous studies have prevented strong inferences from being drawn. A functional magnetic resonance imaging study of affect labeling was conducted to remedy these limitations. The results indicated that affect labeling, relative to other forms of encoding, diminished the response of the amygdala and other limbic regions to negative emotional images. Additionally, affect labeling produced increased activity in a single brain region, right ventrolateral prefrontal cortex (RVLPFC). Finally, RVLPFC and amygdala activity during affect labeling were inversely correlated, a relationship that was mediated by activity in medial prefrontal cortex (MPFC). These results suggest that affect labeling may diminish emotional reactivity along a pathway from RVLPFC to MPFC to the amygdala."
},
{
"pmid": "29803492",
"title": "Linking product-elicited emotional associations and sensory perceptions through a circumplex model based on valence and arousal: Five consumer studies.",
"abstract": "Sensory product characterisation by consumers is increasingly supplemented by measurement of emotional associations. However, studies that link products' sensory perception and emotional associations are still scarce. Five consumer studies were conducted using cashew nuts, peanuts, chocolate, fruit and processed tomatoes as the product categories. Consumers (n = 685) completed check-all-that-apply (CATA) questions to obtain sensory product perceptions and associations with emotion words. The latter were conceptualised and interpreted through a circumplex emotion model spanned by the dimensions of valence (pleasure to displeasure) and arousal (activation to deactivation). Through regression analysis, sensory terms were mapped to the circumplex model to represent statistical linkages with emotion words. Within a were interpretable. The most notable finding was the highly study-specific nature of the linkages, which was mainly attributed to the influence of product category. Methodological choices may also have been partly responsible for the differences. Three studies used a general emotion vocabulary (EsSense Profile®) and an identical number of sensory terms (n = 39). The less complete coverage of the emotional circumplex and the presence of synonymous sensory terms could have diminished the ability to interpret the results. Conversely, two studies used fewer emotion words and sensory terms and these, furthermore, were purposefully selected for the focal sets of samples. The linkages in these latter studies were more interpretable and this could suggest that customised vocabularies of modest length may be desirable when seeking to establish linkages between emotional associations and sensory characteristics of food/beverage stimuli. Purposeful inclusion of emotion words that fully span the circumplex emotion model may also be desirable. Overall, the research represents a new method for establishing linkages between the sensory properties and emotional association to food and beverage products."
},
{
"pmid": "23231533",
"title": "The relation between valence and arousal in subjective experience.",
"abstract": "Affect is basic to many if not all psychological phenomena. This article examines 2 of the most fundamental properties of affective experience--valence and arousal--asking how they are related to each other on a moment to moment basis. Over the past century, 6 distinct types of relations have been suggested or implicitly presupposed in the literature. We critically review the available evidence for each proposal and argue that the evidence does not provide a conclusive answer. Next, we use statistical modeling to verify the different proposals in 8 data sets (with Ns ranging from 80 to 1,417) where participants reported their affective experiences in response to experimental stimuli in laboratory settings or as momentary or remembered in natural settings. We formulate 3 key conclusions about the relation between valence and arousal: (a) on average, there is a weak but consistent V-shaped relation of arousal as a function of valence, but (b) there is large variation at the individual level, so that (c) valence and arousal can in principle show a variety of relations depending on person or circumstances. This casts doubt on the existence of a static, lawful relation between valence and arousal. The meaningfulness of the observed individual differences is supported by their personality and cultural correlates. The malleability and individual differences found in the structure of affect must be taken into account when studying affect and its role in other psychological phenomena."
},
{
"pmid": "19928612",
"title": "Using the Revised Dictionary of Affect in Language to quantify the emotional undertones of samples of natural language.",
"abstract": "Whissell's Dictionary of Affect in Language, originally designed to quantify the Pleasantness and Activation of specifically emotional words, was revised to increase its applicability to samples of natural language. Word selection for the revision privileged natural language, and the matching rate of the Dictionary, which includes 8,742 words, was increased to 90%. Dictionary scores were available for 9 of every 10 words in most language samples. A third rated dimension (Imagery) was added, and normative scores were obtained for natural English. Evidence supports the reliability and validity of ratings. Two sample applications to very disparate instances of natural language are described. The revised Dictionary, which contains ratings for words characteristic of natural language, is a portable tool that can be applied in almost any situation involving language."
},
{
"pmid": "23404613",
"title": "Norms of valence, arousal, and dominance for 13,915 English lemmas.",
"abstract": "Information about the affective meanings of words is used by researchers working on emotions and moods, word recognition and memory, and text-based sentiment analysis. Three components of emotions are traditionally distinguished: valence (the pleasantness of a stimulus), arousal (the intensity of emotion provoked by a stimulus), and dominance (the degree of control exerted by a stimulus). Thus far, nearly all research has been based on the ANEW norms collected by Bradley and Lang (1999) for 1,034 words. We extended that database to nearly 14,000 English lemmas, providing researchers with a much richer source of information, including gender, age, and educational differences in emotion norms. As an example of the new possibilities, we included stimuli from nearly all of the category norms (e.g., types of diseases, occupations, and taboo words) collected by Van Overschelde, Rawson, and Dunlosky (Journal of Memory and Language 50:289-335, 2004), making it possible to include affect in studies of semantic memory."
},
{
"pmid": "30740078",
"title": "CROCUFID: A Cross-Cultural Food Image Database for Research on Food Elicited Affective Responses.",
"abstract": "We present CROCUFID: a CROss-CUltural Food Image Database that currently contains 840 images, including 479 food images with detailed metadata and 165 images of non-food items. The database includes images of sweet, savory, natural, and processed food from Western and Asian cuisines. To create sufficient variability in valence and arousal we included images of food with different degrees of appetitiveness (fresh, unfamiliar, molded or rotten, spoiled, and partly consumed). We used a standardized photographing protocol, resulting in high resolution images depicting all food items on a standard background (a white plate), seen from a fixed viewing (45°) angle. CROCUFID is freely available under the CC-By Attribution 4.0 International license and hosted on the OSF repository. The advantages of the CROCUFID database over other databases are its (1) free availability, (2) full coverage of the valence - arousal space, (3) use of standardized recording methods, (4) inclusion of multiple cuisines and unfamiliar foods, (5) availability of normative and demographic data, (6) high image quality and (7) capability to support future (e.g., virtual and augmented reality) applications. Individuals from the United Kingdom (N = 266), North-America (N = 275), and Japan (N = 264) provided normative ratings of valence, arousal, perceived healthiness, and desire-to-eat using visual analog scales (VAS). In addition, for each image we computed 17 characteristics that are known to influence affective observer responses (e.g., texture, regularity, complexity, and colorfulness). Significant differences between groups and significant correlations between image characteristics and normative ratings were in accordance with previous research, indicating the validity of CROCUFID. We expect that CROCUFID will facilitate comparability across studies and advance experimental research on the determinants of food-elicited emotions. We plan to extend CROCUFID in the future with images of food from a wide range of different cuisines and with non-food images (for applications in for instance neuro-physiological studies). We invite researchers from all parts of the world to contribute to this effort by creating similar image sets that can be linked to this collection, so that CROCUFID will grow into a truly multicultural food database."
},
{
"pmid": "7962581",
"title": "Measuring emotion: the Self-Assessment Manikin and the Semantic Differential.",
"abstract": "The Self-Assessment Manikin (SAM) is a non-verbal pictorial assessment technique that directly measures the pleasure, arousal, and dominance associated with a person's affective reaction to a wide variety of stimuli. In this experiment, we compare reports of affective experience obtained using SAM, which requires only three simple judgments, to the Semantic Differential scale devised by Mehrabian and Russell (An approach to environmental psychology, 1974) which requires 18 different ratings. Subjective reports were measured to a series of pictures that varied in both affective valence and intensity. Correlations across the two rating methods were high both for reports of experienced pleasure and felt arousal. Differences obtained in the dominance dimension of the two instruments suggest that SAM may better track the personal response to an affective stimulus. SAM is an inexpensive, easy method for quickly assessing reports of affective response in many contexts."
},
{
"pmid": "32877409",
"title": "The EmojiGrid as a rating tool for the affective appraisal of touch.",
"abstract": "In this study we evaluate the convergent validity of a new graphical self-report tool (the EmojiGrid) for the affective appraisal of perceived touch events. The EmojiGrid is a square grid labeled with facial icons (emoji) showing different levels of valence and arousal. The EmojiGrid is language independent and efficient (a single click suffices to report both valence and arousal), making it a practical instrument for studies on affective appraisal. We previously showed that participants can intuitively and reliably report their affective appraisal (valence and arousal) of visual, auditory and olfactory stimuli using the EmojiGrid, even without additional (verbal) instructions. However, because touch events can be bidirectional and dynamic, these previous results cannot be generalized to the touch domain. In this study, participants reported their affective appraisal of video clips showing different interpersonal (social) and object-based touch events, using either the validated 9-point SAM (Self-Assessment Mannikin) scale or the EmojiGrid. The valence ratings obtained with the EmojiGrid and the SAM are in excellent agreement. The arousal ratings show good agreement for object-based touch and moderate agreement for social touch. For social touch and at more extreme levels of valence, the EmojiGrid appears more sensitive to arousal than the SAM. We conclude that the EmojiGrid can also serve as a valid and efficient graphical self-report instrument to measure human affective response to a wide range of tactile signals."
},
{
"pmid": "32730278",
"title": "A network model of affective odor perception.",
"abstract": "The affective appraisal of odors is known to depend on their intensity (I), familiarity (F), detection threshold (T), and on the baseline affective state of the observer. However, the exact nature of these relations is still largely unknown. We therefore performed an observer experiment in which participants (N = 52) smelled 40 different odors (varying widely in hedonic valence) and reported the intensity, familiarity and their affective appraisal (valence and arousal: V and A) for each odor. Also, we measured the baseline affective state (valence and arousal: BV and BA) and odor detection threshold of the participants. Analyzing the results for pleasant and unpleasant odors separately, we obtained two models through network analysis. Several relations that have previously been reported in the literature also emerge in both models (the relations between F and I, F and V, I and A; I and V, BV and T). However, there are also relations that do not emerge (between BA and V, BV and I, and T and I) or that appear with a different polarity (the relation between F and A for pleasant odors). Intensity (I) has the largest impact on the affective appraisal of unpleasant odors, while F significantly contributes to the appraisal of pleasant odors. T is only affected by BV and has no effect on other variables. This study is a first step towards an integral study of the affective appraisal of odors through network analysis. Future studies should also include other factors that are known to influence odor appraisal, such as age, gender, personality, and culture."
},
{
"pmid": "27330520",
"title": "A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research.",
"abstract": "OBJECTIVE\nIntraclass correlation coefficient (ICC) is a widely used reliability index in test-retest, intrarater, and interrater reliability analyses. This article introduces the basic concept of ICC in the content of reliability analysis.\n\n\nDISCUSSION FOR RESEARCHERS\nThere are 10 forms of ICCs. Because each form involves distinct assumptions in their calculation and will lead to different interpretations, researchers should explicitly specify the ICC form they used in their calculation. A thorough review of the research design is needed in selecting the appropriate form of ICC to evaluate reliability. The best practice of reporting ICC should include software information, \"model,\" \"type,\" and \"definition\" selections.\n\n\nDISCUSSION FOR READERS\nWhen coming across an article that includes ICC, readers should first check whether information about the ICC form has been reported and if an appropriate ICC form was used. Based on the 95% confident interval of the ICC estimate, values less than 0.5, between 0.5 and 0.75, between 0.75 and 0.9, and greater than 0.90 are indicative of poor, moderate, good, and excellent reliability, respectively.\n\n\nCONCLUSION\nThis article provides a practical guideline for clinical researchers to choose the correct form of ICC and suggests the best practice of reporting ICC parameters in scientific publications. This article also gives readers an appreciation for what to look for when coming across ICC while reading an article."
},
{
"pmid": "18839484",
"title": "Intraclass correlations: uses in assessing rater reliability.",
"abstract": "Reliability coefficients often take the form of intraclass correlation coefficients. In this article, guidelines are given for choosing among six different forms of the intraclass correlation for reliability studies in which n target are rated by k judges. Relevant to the choice of the coefficient are the appropriate statistical model for the reliability and the application to be made of the reliability results. Confidence intervals for each of the forms are reviewed."
},
{
"pmid": "23055170",
"title": "Seriousness checks are useful to improve data validity in online research.",
"abstract": "Nonserious answering behavior increases noise and reduces experimental power; it is therefore one of the most important threats to the validity of online research. A simple way to address the problem is to ask respondents about the seriousness of their participation and to exclude self-declared nonserious participants from analysis. To validate this approach, a survey was conducted in the week prior to the German 2009 federal election to the Bundestag. Serious participants answered a number of attitudinal and behavioral questions in a more consistent and predictively valid manner than did nonserious participants. We therefore recommend routinely employing seriousness checks in online surveys to improve data validity."
},
{
"pmid": "15703257",
"title": "Pictures of appetizing foods activate gustatory cortices for taste and reward.",
"abstract": "Increasing research indicates that concepts are represented as distributed circuits of property information across the brain's modality-specific areas. The current study examines the distributed representation of an important but under-explored category, foods. Participants viewed pictures of appetizing foods (along with pictures of locations for comparison) during event-related fMRI. Compared to location pictures, food pictures activated the right insula/operculum and the left orbitofrontal cortex, both gustatory processing areas. Food pictures also activated regions of visual cortex that represent object shape. Together these areas contribute to a distributed neural circuit that represents food knowledge. Not only does this circuit become active during the tasting of actual foods, it also becomes active while viewing food pictures. Via the process of pattern completion, food pictures activate gustatory regions of the circuit to produce conceptual inferences about taste. Consistent with theories that ground knowledge in the modalities, these inferences arise as reenactments of modality-specific processing."
},
{
"pmid": "33384331",
"title": "Viewing images of foods evokes taste quality-specific activity in gustatory insular cortex.",
"abstract": "Previous studies have shown that the conceptual representation of food involves brain regions associated with taste perception. The specificity of this response, however, is unknown. Does viewing pictures of food produce a general, nonspecific response in taste-sensitive regions of the brain? Or is the response specific for how a particular food tastes? Building on recent findings that specific tastes can be decoded from taste-sensitive regions of insular cortex, we asked whether viewing pictures of foods associated with a specific taste (e.g., sweet, salty, and sour) can also be decoded from these same regions, and if so, are the patterns of neural activity elicited by the pictures and their associated tastes similar? Using ultrahigh-resolution functional magnetic resonance imaging at high magnetic field strength (7-Tesla), we were able to decode specific tastes delivered during scanning, as well as the specific taste category associated with food pictures within the dorsal mid-insula, a primary taste responsive region of brain. Thus, merely viewing food pictures triggers an automatic retrieval of specific taste quality information associated with the depicted foods, within gustatory cortex. However, the patterns of activity elicited by pictures and their associated tastes were unrelated, thus suggesting a clear neural distinction between inferred and directly experienced sensory events. These data show how higher-order inferences derived from stimuli in one modality (i.e., vision) can be represented in brain regions typically thought to represent only low-level information about a different modality (i.e., taste)."
},
{
"pmid": "28736213",
"title": "Beyond expectations: The responses of the autonomic nervous system to visual food cues.",
"abstract": "Self-report measures rely on cognitive and rational processes and may not, therefore, be the most suitable tools to investigate implicit or unconscious factors within a sensory experience. The responses from the autonomic nervous system (ANS), which are not susceptible to bias due to their involuntary nature, may provide a better insight. Expectations are important for the consumer-product interaction and should be considered. However, research using ANS responses has not focused thoroughly on expectations. Our aim was to investigate the mechanisms underlying ANS responses by evaluating the reactions to different images when expectations about a product are created (before tasting the product) and when they are confirmed and disconfirmed (after tasting the product). In a first study, seventy-five participants tasted four drinks (three identical soy-based drinks and one rice-based drink) and were told that they would be shown their main ingredient either before or after tasting. For the three identical drinks, the images shown were: worms, chocolate, and soy. Heart rate and skin conductance were measured during the procedure. The results showed that ANS responses followed similar patterns when images were presented before or after tasting. Heart rate decreased for all images, with the largest decrease found for chocolate and worms. Skin conductance increased, with the largest increase found for worms. To test whether the effects were solely caused by image perception, a second study was done in which forty participants only saw the images. The responses obtained were smaller and did not completely match those of the first study. In conclusion, it could be said that the ANS responses of the first study were a result of the sensory processing and defense mechanisms happening during the creation and (dis)confirmation of expectations. The second study confirmed that visual perception alone could not account for these effects and that it led to smaller changes. Hence, it seems that the context of use influences the patterns and magnitude of ANS responses to food cues."
}
] |
Frontiers in Big Data | null | PMC8998425 | 10.3389/fdata.2022.835949 | AutoLoc: Autonomous Sensor Location Configuration via Cross Modal Sensing | Internet-of-Things (IoT) systems have become pervasive in smart homes. In recent years, many of these IoT sensing systems have been developed to enable in-home long-term monitoring applications, such as personalized services in smart homes, elderly/patient monitoring, etc. However, these systems often require complicated and expensive installation processes, which are among the main concerns affecting users' adoption of smart home systems. In this work, we focus on floor vibration-based occupant monitoring systems, which enable non-intrusive, continuous in-home occupant monitoring, such as patient step tracking and gait analysis. However, to enable these applications, the system requires known locations of the vibration sensors placed in the environment. Current practice relies on manual input of sensor locations, which makes the installation labor-intensive, time-consuming, and expensive. On the other hand, without known locations of the vibration sensors, the output of the system does not have intuitive physical meaning and is incomprehensible to users, which limits the systems' usability. We present AutoLoc, a scheme to estimate the locations of the vibration sensors in a two-dimensional space in the view of a nearby camera, which has spatial physical meaning. AutoLoc utilizes occupants' walking events captured by both the vibration sensors and the co-located camera to estimate the vibration sensors' locations in the camera view. First, AutoLoc detects and localizes the occupant's footsteps in the vision data. Then, it associates the time and location of the event with the floor vibration data. Next, the extracted vibration data of the given event from multiple vibration sensors are used to estimate the sensors' locations in the camera view coordinates. We conducted real-world experiments and achieved up to 0.07 meters localization accuracy. | 2. Related Work. We summarize relevant prior work in this section and compare AutoLoc to it. 2.1. Vibration-Based Human Sensing. Physical vibration signals induced by people in buildings are used to indirectly infer both physical and physiological information about occupants, including but not limited to identity (Pan et al., 2017), location (Mirshekari et al., 2018; Drira et al., 2021), activity (Hu et al., 2020; Sun et al., 2020), heartbeat (Jia et al., 2016), and gait (Fagert et al., 2019). The intuition is that people induce physical vibrations all the time, for example by stepping on the floor or through the heart pounding in the chest. Vibration sensors placed on ambient surfaces can capture these vibrations propagating through the surface and infer the source of the signal. These prior works demonstrate the feasibility and potential of physical vibration-based sensing systems for various human-centric applications, which validates the motivation of this work. Such systems often require sensors with overlapping sensing areas to enable applications such as step-level localization (Mirshekari et al., 2018), gait analysis (Fagert et al., 2020), and activity recognition (Hu et al., 2020; Sun et al., 2020). On the other hand, for applications such as localization and gait analysis, the sensor devices' locations in room coordinates are also needed. Therefore, autonomous sensor location configuration is important for these vibration-based human sensing applications.
2.2. Device Localization. Device localization has been explored widely for robotics and mobile systems, using a variety of sensing modalities. Landmark-based, data-driven approaches have been adopted over visual landmarks (Se et al., 2002), infrared light landmarks (Lee and Song, 2007), RF landmarks (Purohit et al., 2013a), etc. On the other hand, multilateration is a commonly used physics-based approach. It has been applied to acoustic- (Höflinger et al., 2014), WiFi- (Arthi et al., 2020), UWB- (Onalaja et al., 2014), and BLE-based (Shchekotov and Shilov, 2018) systems. These devices and systems are mostly equipped with transceivers for ranging purposes. The problem of detecting the relative physical arrangement of acoustic-based devices has also been explored, where the signal sources and the devices are localized simultaneously (Sun et al., 2011; Kuang et al., 2013; Kamminga et al., 2016). However, these approaches recover only the relative arrangement of the devices rather than their absolute physical locations, so their target applications differ from what we target in this work. In addition, the signal and sensing modality targeted in this paper (building vibration-based occupant monitoring) faces more challenges, such as a high decay rate, high distortion, dispersion, and ambient noise, compared to prior work on acoustic-based sensing. 2.3. Cross-Modal Autonomous System Configuration. Multiple co-located sensing modalities have been used to enable automation of system configuration. These modalities are associated over shared (spatial and temporal) context in the physical world that can be captured by different types of sensors (Han et al., 2018; Pan et al., 2018; Yu et al., 2019; He et al., 2020). Motion has been used as the shared context between IMUs on wearables and a camera view to enable auto-pairing of IoT devices (Pan et al., 2018). Event timing is another type of shared context that has been used to generate encryption keys for secure pairing (Han et al., 2018). These systems rely on a directly measurable context for both sensing modalities. However, when there is no directly measurable shared context, the indirect inference introduces more challenges, as is the case for vibration-based sensing modalities. Footstep location has been used as a shared context to infer the vibration devices' absolute locations (He et al., 2020). However, this has only been explored in a 1-D scenario, which is not sufficient for applications such as localization and activity recognition. In this work, we focus on a 2-D solution for vibration sensor localization via camera-captured ambient occupant context.
"27649176"
] | [
{
"pmid": "27649176",
"title": "Ambient Sound-Based Collaborative Localization of Indeterministic Devices.",
"abstract": "Localization is essential in wireless sensor networks. To our knowledge, no prior work has utilized low-cost devices for collaborative localization based on only ambient sound, without the support of local infrastructure. The reason may be the fact that most low-cost devices are indeterministic and suffer from uncertain input latencies. This uncertainty makes accurate localization challenging. Therefore, we present a collaborative localization algorithm (Cooperative Localization on Android with ambient Sound Sources (CLASS)) that simultaneously localizes the position of indeterministic devices and ambient sound sources without local infrastructure. The CLASS algorithm deals with the uncertainty by splitting the devices into subsets so that outliers can be removed from the time difference of arrival values and localization results. Since Android is indeterministic, we select Android devices to evaluate our approach. The algorithm is evaluated with an outdoor experiment and achieves a mean Root Mean Square Error (RMSE) of 2.18 m with a standard deviation of 0.22 m. Estimated directions towards the sound sources have a mean RMSE of 17.5 ° and a standard deviation of 2.3 °. These results show that it is feasible to simultaneously achieve a relative positioning of both devices and sound sources with sufficient accuracy, even when using non-deterministic devices and platforms, such as Android."
}
] |