WorldWideScience

Sample records for hERG classification model

  1. hERG classification model based on a combination of support vector machine method and GRIND descriptors

    Li, Qiyuan; Jorgensen, Flemming Steen; Oprea, Tudor

    2008-01-01

and diverse library of 495 compounds. The models combine pharmacophore-based GRIND descriptors with a support vector machine (SVM) classifier in order to discriminate between hERG blockers and nonblockers. Our models were applied at different thresholds from 1 to 40 μM and achieved an overall accuracy up...
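The thresholding step the abstract mentions (binarizing IC50 values at cutoffs from 1 to 40 μM before training a classifier) can be sketched in a few lines; the IC50 values below are hypothetical, not from the 495-compound library:

```python
def label_blockers(ic50_um, threshold_um):
    """Binarize compounds: IC50 at or below the cutoff marks a hERG blocker (1)."""
    return [1 if v <= threshold_um else 0 for v in ic50_um]

# Hypothetical IC50 values (micromolar) for five compounds.
ic50 = [0.5, 3.0, 12.0, 45.0, 80.0]

# At a 1 uM cutoff only the most potent compound counts as a blocker;
# at 40 uM all but the weakest two do.
print(label_blockers(ic50, 1))   # -> [1, 0, 0, 0, 0]
print(label_blockers(ic50, 40))  # -> [1, 1, 1, 0, 0]
```

Sweeping the cutoff this way changes the class balance of the training set, which is why the reported accuracy varies with the threshold.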

  2. Modeling of the hERG K+ Channel Blockage Using Online Chemical Database and Modeling Environment (OCHEM).

    Li, Xiao; Zhang, Yuan; Li, Huanhuan; Zhao, Yong

    2017-12-01

Human ether-a-go-go related gene (hERG) K+ channel plays an important role in the cardiac action potential. Blockage of the hERG channel may result in long QT syndrome (LQTS) and even cause sudden cardiac death. Many drugs have been withdrawn from the market because of serious hERG-related cardiotoxicity. Therefore, it is essential to estimate the hERG blockage of chemicals in the early stages of drug discovery. In this study, a diverse set of 3721 compounds with hERG inhibition data was assembled from the literature. We then made full use of the Online Chemical Modeling Environment (OCHEM), which supplies rich machine learning methods and descriptor sets, to build a series of classification models for hERG blockage. We also generated two consensus models based on the top-performing individual models. The consensus models performed much better than the individual models on both 5-fold cross validation and external validation. In particular, consensus model II yielded a prediction accuracy of 89.5% and an MCC of 0.670 on external validation, indicating that its predictive power should be stronger than that of most previously reported models. The 17 top-performing individual models, the consensus models, and the data sets used for model development are available at https://ochem.eu/article/103592. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
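The two external-validation metrics the abstract reports, accuracy and the Matthews correlation coefficient (MCC), are straightforward to compute from a confusion matrix; a minimal sketch with made-up predictions, not OCHEM's actual output:

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of compounds classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def mcc(y_true, y_pred):
    """Matthews correlation coefficient: robust to the class imbalance
    that is typical of hERG data sets."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Tiny invented external-validation split (1 = blocker).
truth = [1, 1, 1, 0, 0, 0, 0, 0]
preds = [1, 1, 0, 0, 0, 0, 0, 1]
print(accuracy(truth, preds))      # -> 0.75
print(round(mcc(truth, preds), 3)) # -> 0.467
```

Note how a 75% accuracy here corresponds to an MCC well below the paper's 0.670, which is why MCC is the more demanding of the two figures.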

  3. Dynamics of hERG closure allow novel insights into hERG blocking by small molecules.

    Schmidtke, Peter; Ciantar, Marine; Theret, Isabelle; Ducrot, Pierre

    2014-08-25

Today, drug discovery routinely uses experimental assays to determine very early whether a lead compound can yield certain types of off-target activity. Among such off-targets is hERG. This ion channel plays an essential role in membrane repolarization, and altering its activity can cause severe heart arrhythmia and sudden death. Despite routine tests for hERG activity, rather little information is available to help medicinal chemists and molecular modelers rationally circumvent hERG activity. In this article, novel insights into the dynamics of hERG channel closure are described. Notably, pairwise helical closure movements have been observed. Implications and relations to hERG inactivation are presented. Based on these dynamics, novel insights on hERG blocker placement are presented, compared to the literature, and discussed. Lastly, new evidence for horizontal ligand positioning is shown in light of former studies on hERG blockers.

  4. Indexing molecules for their hERG liability.

    Rayan, Anwar; Falah, Mizied; Raiyn, Jamal; Da'adoosh, Beny; Kadan, Sleman; Zaid, Hilal; Goldblum, Amiram

    2013-07-01

The human Ether-a-go-go-Related-Gene (hERG) potassium (K(+)) channel is liable to drug-induced blockade that prolongs the QT interval of the cardiac action potential, triggers arrhythmia, and can cause sudden cardiac death. Early prediction of drug liability toward the hERG K(+) channel is therefore highly important, and preferably obligatory, at the early stages of any drug discovery process. In vitro assessment of drug binding affinity to the hERG K(+) channel involves substantial expense, time, and labor; computational models for predicting the hERG liabilities of drug candidates are therefore of much importance. In the present study, we apply the Iterative Stochastic Elimination (ISE) algorithm to construct a large number of rule-based models (filters) and exploit their combination to develop the concept of a hERG Toxicity Index (ETI). The ETI estimates the molecular risk of being a blocker of the hERG potassium channel. The area under the curve (AUC) of the attained model is 0.94. The averaged ETI of hERG binders, drugs from CMC, clinical-MDDR, endogenous molecules, ACD and ZINC was found to be 9.17, 2.53, 3.3, -1.98, -2.49 and -3.86, respectively. Applying the proposed hERG Toxicity Index Model to an external test set composed of more than 1300 hERG blockers picked from ChEMBL shows excellent performance (Matthews Correlation Coefficient of 0.89). The proposed strategy could be implemented for the evaluation of chemicals in the hit/lead optimization stages of the drug discovery process and could improve the selection of drug candidates as well as the development of safe pharmaceutical products. Copyright © 2013 Elsevier Masson SAS. All rights reserved.
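The idea of summing many rule-based filters into a single index can be illustrated with a toy sketch; the three property rules and the molecules below are hypothetical stand-ins for the learned ISE filters, not the published model:

```python
def toxicity_index(mol, filters):
    """Sum of filter votes: each rule-based filter contributes +1
    (blocker-like) or -1 (nonblocker-like) for the molecule."""
    return sum(1 if passes(mol) else -1 for passes in filters)

# Hypothetical property-based rules standing in for the learned ISE filters.
filters = [
    lambda m: m["logp"] > 3.5,      # lipophilic molecules tend to block hERG
    lambda m: m["basic_n"] >= 1,    # a basic nitrogen is a classic hERG feature
    lambda m: 250 < m["mw"] < 600,  # typical blocker size range
]

drug_like_blocker = {"logp": 4.2, "basic_n": 1, "mw": 420}
small_polar = {"logp": 0.3, "basic_n": 0, "mw": 180}

print(toxicity_index(drug_like_blocker, filters))  # -> 3 (flagged)
print(toxicity_index(small_polar, filters))        # -> -3 (likely safe)
```

With hundreds of filters, as in the ISE approach, the summed index becomes a graded risk score rather than a single yes/no rule.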

  5. Computational analysis of the effects of the hERG channel opener NS1643 in a human ventricular cell model

    Peitersen, Torben; Grunnet, Morten; Benson, Alan P

    2008-01-01

    BACKGROUND: Dysfunction or pharmacologic inhibition of repolarizing cardiac ionic currents can lead to fatal arrhythmias. The hERG potassium channel underlies the repolarizing current I(Kr), and mutations therein can produce both long and short QT syndromes (LQT2 and SQT1). We previously reported...

  6. In vitro chronic effects on hERG channel caused by the marine biotoxin Yessotoxin.

    Sara Fernández Ferreiro

    2014-06-01

Currently, published evidence indicates that hERG channel dysfunction can be due to more than one mechanism for many drugs (Guth, 2007). Alterations of hERG channel trafficking are considered an important factor in hERG-related cardiotoxicity. Indeed, a screening study revealed that almost 40% of the drugs that block IKr also have trafficking effects (Wible et al., 2005). Although YTX does not block hERG channels, it has historically been described as cardiotoxic due to in vivo damage to cardiomyocytes. Our results show that YTX induces a significant increase of hERG channel levels on the extracellular side of the plasma membrane in vitro. YTX causes cell death in many cell lines (Korsnes and Espenes, 2011), and the alteration of surface hERG levels might be related to the apoptotic process. However, annexin-V, a relatively early marker of apoptosis (Vermes et al., 1995), appears later than the increase of surface hERG. Additionally, staurosporine triggered apoptosis without a simultaneous increase of surface hERG, so the two events are not necessarily related. Therefore, the YTX-induced elevation of hERG in the plasma membrane seems to be independent of apoptosis. Functional implications have been described for alterations of cell-surface hERG density (Guth, 2007). YTX did not cause significant alterations of hERG currents. Furthermore, hERG levels after YTX treatment were doubled, so an effect on currents should be clearly evident if these channels were functional. The amount of hERG channel on the cell surface is regulated by its production, translocation to the plasma membrane, and degradation. The increase of extracellular channel could be a consequence of higher production and externalization or of slower degradation. Higher synthesis in our cell model would not be physiologically relevant, and our results demonstrate that the amount of immature hERG is reduced rather than increased. Fully glycosylated hERG seems slightly increased in these conditions, but it is...

  7. Latent classification models

    Langseth, Helge; Nielsen, Thomas Dyhre

    2005-01-01

parametric family of distributions. In this paper we propose a new set of models for classification in continuous domains, termed latent classification models. The latent classification model can roughly be seen as combining the naive Bayes (NB) model with a mixture of factor analyzers, thereby relaxing the assumptions...... classification model, and we demonstrate empirically that the accuracy of the proposed model is significantly higher than the accuracy of other probabilistic classifiers....

  8. A novel hypothesis for the binding mode of HERG channel blockers

    Choe, Han; Nah, Kwang Hoon; Lee, Soo Nam; Lee, Han Sam; Lee, Hui Sun; Jo, Su Hyun; Leem, Chae Hun; Jang, Yeon Jin

    2006-01-01

We present a new docking model for HERG channel blockade. Our new model suggests three key interactions: (1) a protonated nitrogen of the channel blocker forms a hydrogen bond with the carbonyl oxygen of HERG residue T623; (2) an aromatic moiety of the channel blocker makes a π-π interaction with the aromatic ring of HERG residue Y652; and (3) a hydrophobic group of the channel blocker forms a hydrophobic interaction with the benzene ring of HERG residue F656. The previous model assumes two interactions: (1) a protonated nitrogen of the channel blocker forms a cation-π interaction with the aromatic ring of HERG residue Y652; and (2) a hydrophobic group of the channel blocker forms a hydrophobic interaction with the benzene ring of HERG residue F656. To test these models, we classified 69 known HERG channel blockers into eight binding types based on their plausible binding modes, and further categorized them into two groups based on the number of interactions our model would predict with the HERG channel (two or three). We then compared the pIC50 value distributions between these two groups. If the old hypothesis is correct, the distributions should not differ between the two groups (i.e., both groups show only two binding interactions). If our novel hypothesis is correct, the distributions should differ between Groups 1 and 2. Consistent with our hypothesis, the two groups differed with regard to pIC50, and the group having more predicted interactions with the HERG channel had a higher mean pIC50 value. Although additional work will be required to further validate our hypothesis, this improved understanding of the HERG channel blocker binding mode may help promote the development of in silico prediction methods for identifying potential HERG channel blockers.
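The group comparison described above amounts to contrasting mean pIC50 between blockers predicted to make two versus three interactions; a sketch with invented pIC50 values (not the study's 69-compound data):

```python
def mean(xs):
    """Arithmetic mean of a list of potency values."""
    return sum(xs) / len(xs)

# Hypothetical pIC50 values for illustration only (not the study's data):
# group1 -> blockers predicted to make two interactions with the channel,
# group2 -> blockers predicted to make three.
group1 = [5.1, 5.4, 5.8, 6.0]
group2 = [6.5, 6.9, 7.2, 7.6]

diff = mean(group2) - mean(group1)
# A positive difference (higher potency for the three-interaction group)
# is the pattern the new hypothesis predicts.
print(diff > 0)
```

In practice a significance test on the two distributions, not just the difference in means, would be needed to support the hypothesis.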

  9. Dynamic Latent Classification Model

    Zhong, Shengtong; Martínez, Ana M.; Nielsen, Thomas Dyhre

    as possible. Motivated by this problem setting, we propose a generative model for dynamic classification in continuous domains. At each time point the model can be seen as combining a naive Bayes model with a mixture of factor analyzers (FA). The latent variables of the FA are used to capture the dynamics...

  10. Voltage-Dependent Gating of hERG Potassium Channels

    Cheng, Yen May; Claydon, Tom W.

    2012-01-01

The mechanisms by which voltage-gated channels sense changes in membrane voltage and energetically couple this with opening of the ion conducting pore have been the source of significant interest. In voltage-gated potassium (Kv) channels, much of our knowledge in this area comes from Shaker-type channels, for which voltage-dependent gating is quite rapid. In these channels, activation and deactivation are associated with rapid reconfiguration of the voltage-sensing domain unit that is electromechanically coupled, via the S4–S5 linker helix, to the rate-limiting opening of an intracellular pore gate. However, fast voltage-dependent gating kinetics are not typical of all Kv channels, such as Kv11.1 (human ether-à-go-go related gene, hERG), which activates and deactivates very slowly. Compared to Shaker channels, our understanding of the mechanisms underlying slow hERG gating is much poorer. Here, we present a comparative review of the structure–function relationships underlying activation and deactivation gating in Shaker and hERG channels, with a focus on the roles of the voltage-sensing domain and the S4–S5 linker that couples voltage sensor movements to the pore. Measurements of gating current kinetics and fluorimetric analysis of voltage sensor movement are consistent with models suggesting that the hERG activation pathway contains a voltage-independent step, which limits voltage sensor transitions. Constraints upon hERG voltage sensor movement may result from loose packing of the S4 helices and additional intra-voltage sensor counter-charge interactions. More recent data suggest that key amino acid differences in the hERG voltage-sensing unit and S4–S5 linker, relative to fast activating Shaker-type Kv channels, may also contribute to the increased stability of the resting state of the voltage sensor. PMID:22586397
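Steady-state voltage-dependent activation of the kind discussed here is commonly summarized by a Boltzmann function; a minimal sketch with illustrative parameters (the V1/2 and slope values below are placeholders, not measured hERG constants):

```python
import math

def boltzmann_activation(v_mv, v_half, slope):
    """Steady-state open probability of a voltage-gated channel:
    P(V) = 1 / (1 + exp((V1/2 - V) / k))."""
    return 1.0 / (1.0 + math.exp((v_half - v_mv) / slope))

# Illustrative parameters only; fitted V1/2 and k differ between
# fast Shaker-type channels and slowly gating hERG.
v_half, k = -20.0, 7.0
for v in (-60, -20, 20):
    print(v, round(boltzmann_activation(v, v_half, k), 3))
```

By definition the channel is half-activated at V = V1/2, and the slope factor k captures how steeply open probability rises with depolarization.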

  11. Voltage-dependent gating of hERG potassium channels

    Yen May eCheng

    2012-05-01

The mechanisms by which voltage-gated channels sense changes in membrane voltage and energetically couple this with opening of the ion conducting pore have been the source of significant interest. In voltage-gated potassium (Kv) channels, much of our knowledge in this area comes from Shaker-type channels, for which voltage-dependent gating is quite rapid. In these channels, activation and deactivation are associated with rapid reconfiguration of the voltage-sensing domain unit that is electromechanically coupled, via the S4-S5 linker helix, to the rate-limiting opening of an intracellular pore gate. However, fast voltage-dependent gating kinetics are not typical of all Kv channels, such as Kv11.1 (human ether-a-go-go related gene, hERG), which activates and deactivates very slowly. Compared to Shaker channels, our understanding of the mechanisms underlying slow hERG gating is much poorer. Here, we present a comparative review of the structure-function relationships underlying voltage-dependent gating in Shaker and hERG channels, with a focus on the roles of the voltage-sensing domain and the S4-S5 linker that couples voltage sensor movements to the pore. Measurements of gating current kinetics and fluorimetric analysis of voltage sensor movement are consistent with models suggesting that the hERG activation pathway contains a voltage-independent step, which limits voltage sensor transitions. Constraints upon hERG voltage sensor movement may result from loose packing of the S4 helices and additional intra-voltage sensor counter-charge interactions. More recent data suggest that key amino acid differences in the hERG voltage-sensing unit and S4-S5 linker, relative to fast activating Shaker-type Kv channels, may also contribute to the increased stability of the resting state of the voltage sensor.

  12. Cluster Based Text Classification Model

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases...... the accuracy at the same time. The test example is classified using a simpler and smaller model. The training examples in a particular cluster share a common vocabulary. At the time of clustering, we do not take into account the labels of the training examples. After the clusters have been created......, the classifier is trained on each cluster, having reduced dimensionality and fewer examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...
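The cluster-then-classify idea can be sketched with a deliberately tiny example: fixed 1-D centroids stand in for the learned clusters, and a majority-label rule stands in for the per-cluster classifier trained on reduced data:

```python
def nearest(x, centroids):
    """Index of the centroid closest to feature value x."""
    return min(range(len(centroids)), key=lambda i: abs(x - centroids[i]))

def train(examples, centroids):
    """Assign each (feature, label) training example to its nearest cluster,
    then store the majority label per cluster as that cluster's classifier."""
    labels = {i: [] for i in range(len(centroids))}
    for x, y in examples:
        labels[nearest(x, centroids)].append(y)
    return {i: max(set(ys), key=ys.count) for i, ys in labels.items() if ys}

def classify(x, centroids, cluster_label):
    """Route the test example to its cluster, then apply that cluster's model."""
    return cluster_label[nearest(x, centroids)]

# Toy 1-D features (e.g. a single text statistic); labels: "spam"/"ham".
train_set = [(0.1, "ham"), (0.2, "ham"), (0.9, "spam"), (1.1, "spam")]
centroids = [0.15, 1.0]  # fixed for the sketch; k-means would fit these
model = train(train_set, centroids)
print(classify(0.05, centroids, model))  # -> ham
print(classify(1.3, centroids, model))   # -> spam
```

The payoff described in the abstract comes from each cluster seeing only its own vocabulary, so each per-cluster model is smaller than one global classifier.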

  13. Determinants of Isoform-Specific Gating Kinetics of hERG1 Channel: Combined Experimental and Simulation Study

    Laura L. Perissinotti

    2018-04-01

IKr is the rapidly activating component of the delayed rectifier potassium current, the ion current largely responsible for the repolarization of the cardiac action potential. Inherited forms of long QT syndrome (LQTS) in humans (Lees-Miller et al., 1997) are linked to functional modifications of the Kv11.1 (hERG) ion channel and potentially life-threatening arrhythmias. There is little doubt now that the hERG-related component of IKr in the heart depends on tetrameric (homo- or hetero-) channels formed by two alternatively processed isoforms of hERG, termed hERG1a and hERG1b. Isoform composition (hERG1a vs. hERG1b) has recently been reported to alter pharmacologic responses to some hERG blockers and was proposed to be an essential factor predisposing patients to drug-induced QT prolongation. Very little is known about the gating and pharmacological properties of the two isoforms in heart membranes. For example, how the gating mechanisms of hERG1a channels differ from those of hERG1b is still unknown. The mechanisms by which hERG1a/1b hetero-tetramers contribute to function in the heart, and what role hERG1b might play in disease, are all questions to be answered. Structurally, the two isoforms differ only in the N-terminal region located in the cytoplasm: hERG1b is 340 residues shorter than hERG1a, and the initial 36 residues of hERG1b are unique to this isoform. In this study, we combined electrophysiological measurements in HEK cells, kinetics, and structural modeling to tease out the individual contributions of each isoform to action potential formation and then make predictions about the effects of various mixture ratios of the two isoforms. By coupling electrophysiological data with computational kinetic modeling, two proposed mechanisms of hERG gating in the two homo-tetramers were examined. Sets of data from various experimental stimulation protocols (HEK cells) were analyzed simultaneously and fitted to Markov-chain models (M-models)...

  14. Inhibition of HERG potassium channels by celecoxib and its mechanism.

    Roman V Frolov

Celecoxib (Celebrex), a widely prescribed selective inhibitor of cyclooxygenase-2, can modulate ion channels independently of cyclooxygenase inhibition. Clinically relevant concentrations of celecoxib can affect ionic currents and alter the functioning of neurons and myocytes. In particular, inhibition of Kv2.1 channels by celecoxib leads to arrhythmic beating of the Drosophila heart and of rat heart cells in culture. However, the spectrum of ion channels involved in human cardiac excitability differs from that in animal models, including mammalian models, making it difficult to evaluate the relevance of these observations to humans. Our aim was to examine the effects of celecoxib on hERG and other human channels critically involved in regulating human cardiac rhythm, and to explore the mechanisms of any observed effect on the hERG channels. Celecoxib inhibited the hERG, SCN5A, KCNQ1 and KCNQ1/MinK channels expressed in HEK-293 cells with IC50s of 6.0 μM, 7.5 μM, 3.5 μM and 3.7 μM, respectively, and the KCND3/KChiP2 channels expressed in CHO cells with an IC50 of 10.6 μM. Analysis of celecoxib's effects on hERG channels suggested gating modification as the mechanism of drug action. The above channels play a significant role in drug-induced long QT syndrome (LQTS) and short QT syndrome (SQTS). Regulatory guidelines require that all new drugs under development be tested for effects on the hERG channel prior to first administration in humans. Our observations raise the question of celecoxib's potential to induce cardiac arrhythmias or other channel-related adverse effects, and make a case for examining such possibilities.
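The concentration-dependent block behind an IC50 value is conventionally described by the Hill equation; a sketch using the IC50s quoted in the abstract and assuming, for simplicity, a Hill coefficient of 1:

```python
def fraction_blocked(conc_um, ic50_um, hill=1.0):
    """Fractional current block from the Hill equation:
    f = C^n / (C^n + IC50^n)."""
    return conc_um**hill / (conc_um**hill + ic50_um**hill)

# Reported IC50s (uM) for celecoxib against each channel (from the abstract).
ic50 = {"hERG": 6.0, "SCN5A": 7.5, "KCNQ1": 3.5, "KCNQ1/MinK": 3.7}

# At 6 uM celecoxib, hERG is half-blocked by definition of IC50.
print(fraction_blocked(6.0, ic50["hERG"]))            # -> 0.5
print(round(fraction_blocked(6.0, ic50["KCNQ1"]), 3))  # KCNQ1 is blocked more
```

The actual Hill coefficients were not stated in the abstract, so the hill=1 default is an assumption for illustration.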

  15. Latent class models for classification

    Vermunt, J.K.; Magidson, J.

    2003-01-01

    An overview is provided of recent developments in the use of latent class (LC) and other types of finite mixture models for classification purposes. Several extensions of existing models are presented. Two basic types of LC models for classification are defined: supervised and unsupervised

  16. Overcoming hERG affinity in the discovery of maraviroc; a CCR5 antagonist for the treatment of HIV.

    Price, David A; Armour, Duncan; de Groot, Marcel; Leishman, Derek; Napier, Carolyn; Perros, Manos; Stammen, Blanda L; Wood, Anthony

    2008-01-01

Avoiding the cardiac liability associated with blockade of hERG (human ether-à-go-go) is key for successful drug discovery and development. This paper describes the work undertaken in the discovery of a potent CCR5 antagonist, maraviroc 34, for the treatment of HIV. In particular, the use of a pharmacophore model of the hERG channel and a high-throughput binding assay for the hERG channel are described, which were critical to elucidate SAR to overcome hERG liabilities. The key SAR involves the introduction of polar substituents into regions of the molecule where it is postulated to undergo hydrophobic interactions with the ion channel. Within the CCR5 project there appeared to be no strong correlation between hERG affinity and physicochemical parameters such as pKa or lipophilicity. It is believed that chemists could apply these same strategies early in drug discovery to remove hERG interactions associated with lead compounds while retaining potency at the primary target.

  17. Churn classification model for local telecommunication company ...

    ... model based on the Rough Set Theory to classify customer churn. The results of the study show that the proposed Rough Set classification model outperforms the existing models and contributes to significant accuracy improvement. Keywords: customer churn; classification model; telecommunication industry; data mining;

  18. Role of the pH in state-dependent blockade of hERG currents

    Wang, Yibo; Guo, Jiqing; Perissinotti, Laura L.; Lees-Miller, James; Teng, Guoqi; Durdagi, Serdar; Duff, Henry J.; Noskov, Sergei Yu.

    2016-10-01

Mutations that reduce inactivation of the voltage-gated Kv11.1 potassium channel (hERG) reduce binding for a number of blockers. State-specific block of the inactivated state of hERG may increase the risk of drug-induced Torsade de pointes. In this study, molecular simulations of dofetilide binding to previously developed and experimentally validated models of the hERG channel in the open and open-inactivated states were combined with voltage-clamp experiments to unravel the mechanism(s) of state-dependent blockade. Computations of the free energy profiles associated with drug passage to its binding pocket in the intra-cavitary site display startling differences between the open and open-inactivated states of the channel. It was also found that drug ionization may play a crucial role in preferential targeting to the open-inactivated state of the pore domain. pH-dependent hERG blockade by dofetilide was studied with patch-clamp recordings. The results show that low pH increases the extent and speed of drug-induced block. Both experimental and computational findings indicate that binding to the open-inactivated state is of key importance to our understanding of dofetilide's mode of action.
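The role of drug ionization at low pH can be illustrated with the Henderson-Hasselbalch relation for a basic drug; the pKa below is a hypothetical value for illustration, not dofetilide's measured constant:

```python
def fraction_ionized(ph, pka):
    """Fraction of a basic drug in its protonated (cationic) form,
    from the Henderson-Hasselbalch relation: f = 1 / (1 + 10^(pH - pKa))."""
    return 1.0 / (1.0 + 10 ** (ph - pka))

# Hypothetical basic-nitrogen pKa for illustration only.
pka = 7.0
for ph in (6.0, 7.4):
    print(ph, round(fraction_ionized(ph, pka), 3))
```

Lowering pH from 7.4 to 6.0 roughly triples the cationic fraction in this sketch, consistent with the abstract's observation that acidification enhances block if the charged species is the preferentially bound form.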

  19. Homozygous premature truncation of the HERG protein : the human HERG knockout

    Hoorntje, T.; Alders, M.; van Tintelen, P.; van der Lip, K.; Sreeram, N.; van der Wal, A.; Mannens, M.; Wilde, A.

    1999-01-01

Background: In long-QT syndrome (LQTS), heterozygosity for a mutation in one of the K(+) channel genes leads to prolongation of the cardiac action potential, because the aberrant protein exhibits "loss of function." HERG, which is involved in LQT2, is the gene encoding the rapid component of the...

  20. Classification hierarchies for product data modelling

    Pels, H.J.

    2006-01-01

Abstraction is an essential element in data modelling that appears mainly in one of the following forms: generalisation, classification or aggregation. In the design of complex products, classification hierarchies can be found: product families are viewed as classes of product types, while...

  1. Characterization of hERG1a and hERG1b potassium channels-a possible role for hERG1b in the I (Kr) current

    Larsen, Anders Peter; Olesen, Søren-Peter; Grunnet, Morten

    2008-01-01

I (Kr) is the fast component of the delayed rectifier potassium currents responsible for the repolarization of the cardiac muscle. The molecular correlate underlying the I (Kr) current has been identified as the hERG1 channel. Recently, two splice variants of the hERG1 alpha-subunit, hERG1a and hERG1b, have been shown to be co-expressed in human cardiomyocytes. In this paper, we present the electrophysiological characterization of hERG1a, hERG1b, and co-expressed hERG1a/b channels in a mammalian expression system using the whole-cell patch clamp technique. We also quantified the messenger RNA (mRNA) levels of hERG1a and hERG1b in human cardiac tissue, and based on the expressed ratios, we evaluated the resulting currents in Xenopus laevis oocytes. Compared to hERG1a channels, activation was faster for both hERG1b and hERG1a/b channels. The deactivation kinetics were greatly accelerated...

  2. Escitalopram block of hERG potassium channels.

    Chae, Yun Ju; Jeon, Ji Hyun; Lee, Hong Joon; Kim, In-Beom; Choi, Jin-Sung; Sung, Ki-Wug; Hahn, Sang June

    2014-01-01

    Escitalopram, a selective serotonin reuptake inhibitor, is the pharmacologically active S-enantiomer of the racemic mixture of RS-citalopram and is widely used in the treatment of depression. The effects of escitalopram and citalopram on the human ether-a-go-go-related gene (hERG) channels expressed in human embryonic kidney cells were investigated using voltage-clamp and Western blot analyses. Both drugs blocked hERG currents in a concentration-dependent manner with an IC50 value of 2.6 μM for escitalopram and an IC50 value of 3.2 μM for citalopram. The blocking of hERG by escitalopram was voltage-dependent, with a steep increase across the voltage range of channel activation. However, voltage independence was observed over the full range of activation. The blocking by escitalopram was frequency dependent. A rapid application of escitalopram induced a rapid and reversible blocking of the tail current of hERG. The extent of the blocking by escitalopram during the depolarizing pulse was less than that during the repolarizing pulse, suggesting that escitalopram has a high affinity for the open state of the hERG channel, with a relatively lower affinity for the inactivated state. Both escitalopram and citalopram produced a reduction of hERG channel protein trafficking to the plasma membrane but did not affect the short-term internalization of the hERG channel. These results suggest that escitalopram blocked hERG currents at a supratherapeutic concentration and that it did so by preferentially binding to both the open and the inactivated states of the channels and by inhibiting the trafficking of hERG channel protein to the plasma membrane.

  3. Emotion models for textual emotion classification

    Bruna, O.; Avetisyan, H.; Holub, J.

    2016-11-01

This paper deals with textual emotion classification, which has gained attention in recent years. Emotion classification is used in user experience, product evaluation, national security, and tutoring applications. It attempts to detect the emotional content in the input text and, based on different approaches, establish what kind of emotional content is present, if any. Textual emotion classification is the most difficult to handle, since it relies mainly on linguistic resources and it introduces many challenges to the assignment of text to an emotion represented by a proper model. A crucial part of each emotion detector is its emotion model. The focus of this paper is to introduce the emotion models used for classification. Categorical and dimensional models of emotion are explained and some more advanced approaches are mentioned.

  4. hERG1 channels are overexpressed in glioblastoma multiforme and modulate VEGF secretion in glioblastoma cell lines

    Masi, A; Becchetti, A; Restano-Cassulini, R; Polvani, S; Hofmann, G; Buccoliero, A M; Paglierani, M; Pollo, B; Taddei, G L; Gallina, P; Di Lorenzo, N; Franceschetti, S; Wanke, E; Arcangeli, A

    2005-01-01

Recent studies have led to considerable advancement in our understanding of the molecular mechanisms that underlie the relentless cell growth and invasiveness of human gliomas. Partial understanding of these mechanisms has (1) improved the classification of gliomas, by identifying prognostic subgroups, and (2) pointed to novel potential therapeutic targets. Some classes of ion channels have turned out to be involved in the pathogenesis and malignancy of gliomas. We studied the expression and properties of K+ channels in primary cultures obtained from surgical specimens: human ether-à-go-go-related gene (hERG1) voltage-dependent K+ channels, which have been found to be overexpressed in various human cancers, and human ether-à-go-go-like 2 channels, which share many of hERG1's biophysical features. The expression pattern of these two channels was compared to that of the classical inward rectifying K+ channels, IRK, which are widely expressed in astrocytic cells and classically considered a marker of astrocytic differentiation. In our study, hERG1 was found to be specifically overexpressed in high-grade astrocytomas, that is, glioblastoma multiforme (GBM). In addition, we present evidence that, in GBM cell lines, hERG1 channel activity actively contributes to malignancy by promoting vascular endothelial growth factor secretion, thus stimulating the neoangiogenesis typical of high-grade gliomas. Our data provide important confirmation for studies proposing the hERG1 channel as a molecular marker of tumour progression and a possible target for novel anticancer therapies. PMID:16175187

  5. Global Optimization Ensemble Model for Classification Methods

    Anwar, Hina; Qamar, Usman; Muzaffar Qureshi, Abdul Wahab

    2014-01-01

Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data space. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity. PMID:24883382

  6. Global Optimization Ensemble Model for Classification Methods

    Hina Anwar

    2014-01-01

Supervised learning is the process of data mining for deducing rules from training datasets. A broad array of supervised learning algorithms exists, every one of them with its own advantages and drawbacks. There are some basic issues that affect the accuracy of a classifier while solving a supervised learning problem, like the bias-variance tradeoff, the dimensionality of the input space, and noise in the input data space. All these problems affect the accuracy of a classifier and are the reason that there is no globally optimal method for classification. There is no generalized improvement method that can increase the accuracy of any classifier while addressing all the problems stated above. This paper proposes a global optimization ensemble model for classification methods (GMC) that can improve the overall accuracy for supervised learning problems. The experimental results on various public datasets showed that the proposed model improved the accuracy of the classification models from 1% to 30% depending upon the algorithm complexity.
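The core ensemble idea (combining imperfect base classifiers whose errors are not strongly correlated) can be sketched with a plain majority vote; the toy predictions below are invented, and GMC's actual optimization strategy is not reproduced here:

```python
def accuracy(y_true, y_pred):
    """Fraction of examples classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def majority(predictions):
    """Combine base classifiers by an unweighted majority vote per example."""
    return [max(set(col), key=col.count) for col in map(list, zip(*predictions))]

truth = [0, 1, 1, 0, 1, 0]
base = [
    [0, 1, 0, 0, 1, 1],  # three imperfect base classifiers
    [0, 0, 1, 0, 1, 0],  # whose errors fall on different examples
    [1, 1, 1, 0, 0, 0],
]
ensemble = majority(base)
print([accuracy(truth, b) for b in base])  # individual accuracies
print(accuracy(truth, ensemble))           # the vote can beat every base model
```

Here no single base model exceeds about 83% accuracy, yet the vote recovers the truth on every example because each error is outvoted by the other two models.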

  7. The Importance of Classification to Business Model Research

    Susan Lambert

    2015-01-01

    Purpose: To bring to the fore the scientific significance of classification and its role in business model theory building. To propose a method by which existing classifications of business models can be analyzed and new ones developed. Design/Methodology/Approach: A review of the scholarly literature relevant to classifications of business models is presented along with a brief overview of classification theory applicable to business model research. Existing business model classification...

  8. Classification using Hierarchical Naive Bayes models

    Langseth, Helge; Dyhre Nielsen, Thomas

    2006-01-01

    Classification problems have a long history in the machine learning literature. One of the simplest, yet most consistently well-performing, sets of classifiers is the Naïve Bayes models. However, an inherent problem with these classifiers is the assumption that all attributes used to describe......, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models...
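    The hierarchical variant is not shown in this record, but the baseline Naïve Bayes classifier it extends, with its characteristic conditional-independence assumption, can be sketched in a few lines. The binary feature vectors and labels below are made-up illustrations:

    ```python
    import math
    from collections import Counter, defaultdict

    def train_nb(samples):
        """Train a Naive Bayes classifier on (features, label) pairs.
        Binary features; Laplace smoothing avoids zero probabilities."""
        label_counts = Counter(lab for _, lab in samples)
        ones = defaultdict(lambda: defaultdict(int))  # ones[label][i] = #(feature_i == 1)
        for feats, lab in samples:
            for i, f in enumerate(feats):
                ones[lab][i] += f

        def predict(feats):
            best, best_lp = None, -math.inf
            for lab, n in label_counts.items():
                lp = math.log(n / len(samples))  # log-prior
                for i, f in enumerate(feats):
                    p1 = (ones[lab][i] + 1) / (n + 2)    # smoothed P(feature_i = 1 | label)
                    lp += math.log(p1 if f else 1 - p1)  # independence assumption
                if lp > best_lp:
                    best, best_lp = lab, lp
            return best
        return predict

    # Hypothetical training data: two binary descriptors per compound.
    predict = train_nb([((1, 0), "blocker"), ((1, 1), "blocker"),
                        ((0, 0), "nonblocker"), ((0, 1), "nonblocker")])
    print(predict((1, 0)))  # "blocker"
    ```

    Hierarchical Naïve Bayes relaxes exactly the per-feature independence assumed in the inner loop by grouping correlated attributes under latent variables.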

  9. A Robust Geometric Model for Argument Classification

    Giannone, Cristina; Croce, Danilo; Basili, Roberto; de Cao, Diego

    Argument classification is the task of assigning semantic roles to syntactic structures in natural language sentences. Supervised learning techniques for frame semantics have recently been shown to benefit from rich sets of syntactic features. However, argument classification is also highly dependent on the semantics of the lexical items involved. Empirical studies have shown that domain dependence of lexical information causes large performance drops in out-of-domain tests. In this paper a distributional approach is proposed to improve the robustness of the learning model against out-of-domain lexical phenomena.

  10. High potency inhibition of hERG potassium channels by the sodium–calcium exchange inhibitor KB-R7943

    Cheng, Hongwei; Zhang, Yihong; Du, Chunyun; Dempsey, Christopher E; Hancox, Jules C

    2012-01-01

    BACKGROUND AND PURPOSE KB-R7943 is an isothiourea derivative that is used widely as a pharmacological inhibitor of sodium–calcium exchange (NCX) in experiments on cardiac and other tissue types. This study investigated KB-R7943 inhibition of hERG (human ether-à-go-go-related gene) K+ channels that underpin the cardiac rapid delayed rectifier potassium current, IKr. EXPERIMENTAL APPROACH Whole-cell patch-clamp measurements were made of hERG current (IhERG) carried by wild-type or mutant hERG channels and of native rabbit ventricular IKr. Docking simulations utilized a hERG homology model built on a MthK-based template. KEY RESULTS KB-R7943 inhibited both IhERG and native IKr rapidly on membrane depolarization with IC50 values of ∼89 and ∼120 nM, respectively, for current tails at −40 mV following depolarizing voltage commands to +20 mV. Marked IhERG inhibition also occurred under ventricular action potential voltage clamp. IhERG inhibition by KB-R7943 exhibited both time- and voltage-dependence but showed no preference for inactivated over activated channels. Results of alanine mutagenesis and docking simulations indicate that KB-R7943 can bind to a pocket formed of the side chains of aromatic residues Y652 and F656, with the compound's nitrobenzyl group orientated towards the cytoplasmic side of the channel pore. The structurally related NCX inhibitor SN-6 also inhibited IhERG, but with a markedly reduced potency. CONCLUSIONS AND IMPLICATIONS KB-R7943 inhibits IhERG/IKr with a potency that exceeds that reported previously for acute cardiac NCX inhibition. Our results also support the feasibility of benzyloxyphenyl-containing NCX inhibitors with reduced potential, in comparison with KB-R7943, to inhibit hERG. PMID:21950687

  11. General regression and representation model for classification.

    Jianjun Qian

    Full Text Available Recently, regularized coding-based classification methods (e.g., SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of prior information (e.g., the correlations between representation residuals and representation coefficients) and specific information (the weight matrix of image pixels) to enhance classification performance. GRR uses generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.

  12. Text document classification based on mixture models

    Novovičová, Jana; Malík, Antonín

    2004-01-01

    Roč. 40, č. 3 (2004), s. 293-304 ISSN 0023-5954 R&D Projects: GA AV ČR IAA2075302; GA ČR GA102/03/0049; GA AV ČR KSK1019101 Institutional research plan: CEZ:AV0Z1075907 Keywords : text classification * text categorization * multinomial mixture model Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.224, year: 2004

  13. Nonlinear Inertia Classification Model and Application

    Mei Wang

    2014-01-01

    Full Text Available The classification model of the support vector machine (SVM) overcomes the problem of a large number of samples, but the kernel parameter and the punishment factor have great influence on the quality of the SVM model. Particle swarm optimization (PSO) is an evolutionary search algorithm based on swarm intelligence, which is suitable for parameter optimization. Accordingly, a nonlinear inertia convergence classification model (NICCM) is proposed after the nonlinear inertia convergence PSO (NICPSO) is developed in this paper. The velocity of NICPSO is first defined as the weighted velocity of the inertia PSO, and the inertia factor is selected to be a nonlinear function. NICPSO is used to optimize the kernel parameter and punishment factor of the SVM. Then, the NICCM classifier is trained using the optimal punishment factor and the optimal kernel parameter that come from the optimal particle. Finally, NICCM is applied to the classification of the normal state and fault states of online power cables. It is experimentally shown that the number of iterations required for the proposed NICPSO to reach the optimal position decreases from 15 to 5 compared with PSO; the training duration is decreased by 0.0052 s and the recognition precision is increased by 4.12% compared with SVM.
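    The exact NICPSO update is not given in this record; as a hedged sketch of the general idea, the following standard PSO uses a nonlinearly decaying inertia weight (a quadratic decay is assumed here for illustration) and minimizes a toy one-dimensional objective in place of the SVM hyper-parameter search:

    ```python
    import random

    def pso_nonlinear_inertia(f, n_particles=10, iters=50,
                              w_max=0.9, w_min=0.4, c1=2.0, c2=2.0):
        """Minimize f over [-5, 5] with PSO; inertia decays nonlinearly over time."""
        random.seed(42)  # fixed seed for reproducibility
        xs = [random.uniform(-5, 5) for _ in range(n_particles)]
        vs = [0.0] * n_particles
        pbest = xs[:]                # personal best positions
        gbest = min(xs, key=f)       # global best position
        for t in range(iters):
            # Nonlinear (quadratic) inertia: explore early, converge late.
            w = w_max - (w_max - w_min) * (t / iters) ** 2
            for i in range(n_particles):
                vs[i] = (w * vs[i]
                         + c1 * random.random() * (pbest[i] - xs[i])
                         + c2 * random.random() * (gbest - xs[i]))
                xs[i] += vs[i]
                if f(xs[i]) < f(pbest[i]):
                    pbest[i] = xs[i]
                    if f(xs[i]) < f(gbest):
                        gbest = xs[i]
        return gbest

    best = pso_nonlinear_inertia(lambda x: x * x)
    print(abs(best) < 1.0)  # the swarm converges near the minimum at 0
    ```

    In the paper's setting the objective would instead be cross-validated SVM error as a function of the kernel parameter and punishment factor.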

  14. Fuzzy One-Class Classification Model Using Contamination Neighborhoods

    Lev V. Utkin

    2012-01-01

    Full Text Available A fuzzy classification model is studied in the paper. It is based on the contaminated (robust) model, which produces fuzzy expected risk measures characterizing classification errors. Optimal classification parameters of the models are derived by minimizing the fuzzy expected risk. It is shown that an algorithm for computing the classification parameters reduces to a set of standard support vector machine tasks with weighted data points. Experimental results with synthetic data illustrate the proposed fuzzy model.

  15. Reducing Spatial Data Complexity for Classification Models

    Ruta, Dymitr; Gabrys, Bogdan

    2007-01-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is the proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC), we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result, we achieved a model that reduces labelled datasets much further than any competitive approaches, yet with maximum retention of the original class densities and hence the classification performance.
PLDC leaves the reduced dataset with soft accumulative class weights, allowing for efficient online updates, and, as shown in a series of experiments, when coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

  16. Reducing Spatial Data Complexity for Classification Models

    Ruta, Dymitr; Gabrys, Bogdan

    2007-11-01

    Intelligent data analytics is gradually becoming a day-to-day reality of today's businesses. However, despite rapidly increasing storage and computational power, current state-of-the-art predictive models still cannot handle massive and noisy corporate data warehouses. What is more, adaptive and real-time operational environments require multiple models to be frequently retrained, which further hinders their use. Various data reduction techniques, ranging from data sampling up to density retention models, attempt to address this challenge by capturing a summarised data structure, yet they either do not account for labelled data or degrade the classification performance of the model trained on the condensed dataset. Our response is the proposition of a new general framework for reducing the complexity of labelled data by means of controlled spatial redistribution of class densities in the input space. On the example of the Parzen Labelled Data Compressor (PLDC), we demonstrate a simulatory data condensation process directly inspired by electrostatic field interaction, where the data are moved and merged following the attracting and repelling interactions with the other labelled data. The process is controlled by the class density function built on the original data, which acts as a class-sensitive potential field ensuring preservation of the original class density distributions, yet allowing data to rearrange and merge, joining together their soft class partitions. As a result, we achieved a model that reduces labelled datasets much further than any competitive approaches, yet with maximum retention of the original class densities and hence the classification performance.
PLDC leaves the reduced dataset with soft accumulative class weights, allowing for efficient online updates, and, as shown in a series of experiments, when coupled with the Parzen Density Classifier (PDC) it significantly outperforms competitive data condensation methods in terms of classification performance at the

  17. Inhibitory effects and mechanism of dihydroberberine on hERG channels expressed in HEK293 cells.

    Dahai Yu

    Full Text Available The human ether-a-go-go-related gene (hERG) potassium channel conducts the rapid delayed rectifier potassium current (IKr) and contributes to phase III cardiac action potential repolarization. Drugs inhibit hERG channels by binding to aromatic residues in hERG helixes. Berberine (BBR) has multiple actions, and its hydrogenated derivative dihydroberberine (DHB) is a potential candidate for developing new drugs. Previous studies have demonstrated that BBR blocks hERG channels and prolongs action potential duration (APD). Our present study aimed to investigate the effects and mechanism of DHB on hERG channels. Protein expression and the hERG current were analyzed using western blotting and patch-clamp, respectively. DHB inhibited the hERG current concentration-dependently after instantaneous perfusion, accelerated channel inactivation by directly binding tyrosine (Tyr652) and phenylalanine (Phe656), and decreased mature (155-kDa) while simultaneously increasing immature (135-kDa) hERG expression. This suggests disruption of forward trafficking of hERG channels. Besides, DHB remarkably reduced heat shock protein 90 (Hsp90) expression and its interaction with hERG, indicating that DHB disrupted hERG trafficking by impairing channel folding. Meanwhile, DHB enhanced the expression of cleaved activating transcription factor-6 (ATF-6), a biomarker of the unfolded protein response (UPR). Expression of calnexin and calreticulin, chaperones activated by ATF-6 to facilitate channel folding, was also increased, indicating UPR activation. Additionally, the degradation rate of mature 155-kDa hERG increased following DHB exposure. In conclusion, we demonstrated that DHB acutely blocked hERG channels by binding the aromatic Tyr652 and Phe656. DHB may decrease hERG plasma membrane expression through two pathways involving disruption of forward trafficking of immature hERG channels and enhanced degradation of mature hERG channels.
Furthermore, forward trafficking was

  18. Anti-HERG activity and the risk of drug-induced arrhythmias and sudden death

    De Bruin, M L; Pettersson, M; Meyboom, R H B

    2005-01-01

    AIMS: Drug-induced QTc-prolongation, resulting from inhibition of HERG potassium channels, may lead to serious ventricular arrhythmias and sudden death. We studied the quantitative anti-HERG activity of pro-arrhythmic drugs as a risk factor for this outcome in day-to-day practice. METHODS...... defined as reports of cardiac arrest, sudden death, torsade de pointes, ventricular fibrillation, and ventricular tachycardia (n = 5591), and compared with non-cases regarding the anti-HERG activity, defined as the effective therapeutic plasma concentration (ETCPunbound) divided by the HERG IC50 value......, of suspected drugs. We identified a significant association of 1.93 (95% CI: 1.89-1.98) between the anti-HERG activity of drugs, measured as log10 (ETCPunbound/IC50), and reporting of serious ventricular arrhythmias and sudden death to the WHO-UMC database. CONCLUSION: Anti-HERG activity is associated...
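    The risk metric used in this study, log10(ETCPunbound/IC50), is straightforward to compute. The concentrations below are hypothetical illustrative values, not data from the paper:

    ```python
    import math

    def anti_herg_activity(etcp_unbound_nM, ic50_nM):
        """log10 ratio of free therapeutic plasma concentration to hERG IC50.
        Higher (less negative) values indicate a smaller safety margin and
        greater arrhythmia risk."""
        return math.log10(etcp_unbound_nM / ic50_nM)

    # Hypothetical drug: free plasma level 10 nM, hERG IC50 1000 nM.
    print(anti_herg_activity(10.0, 1000.0))  # -2.0, i.e. a 100-fold margin
    ```

    A value of 0 would mean the free plasma concentration equals the hERG IC50, the regime where channel block at therapeutic exposure becomes likely.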

  19. The S4-S5 linker acts as a signal integrator for HERG K+ channel activation and deactivation gating.

    Chai Ann Ng

    Full Text Available Human ether-à-go-go-related gene (hERG) K+ channels have unusual gating kinetics. Characterised by slow activation/deactivation but rapid inactivation/recovery from inactivation, these unique gating kinetics underlie the central role hERG channels play in cardiac repolarisation. The slow activation and deactivation kinetics are regulated in part by the S4-S5 linker, which couples movement of the voltage sensor domain to opening of the activation gate at the distal end of the inner helix of the pore domain. It has also been suggested that cytosolic domains may interact with the S4-S5 linker to regulate activation and deactivation kinetics. Here, we show that the solution structure of a peptide corresponding to the S4-S5 linker of hERG contains an amphipathic helix. The effects of mutations at the majority of residues in the S4-S5 linker of hERG were consistent with the previously identified role in coupling voltage sensor movement to the activation gate. However, mutations to Ser543, Tyr545, Gly546 and Ala548 had more complex phenotypes, indicating that these residues are involved in additional interactions. We propose a model in which the S4-S5 linker, in addition to coupling VSD movement to the activation gate, also contributes to interactions that stabilise the closed state and a separate set of interactions that stabilise the open state. The S4-S5 linker therefore acts as a signal integrator and plays a crucial role in the slow deactivation kinetics of the channel.

  20. High Glucose Represses hERG K+ Channel Expression through Trafficking Inhibition

    Yuan-Qi Shi

    2015-08-01

    Full Text Available Background/Aims: Abnormal QT prolongation is the most prominent cardiac electrical disturbance in patients with diabetes mellitus (DM). It is well known that the human ether-à-go-go-related gene (hERG) controls the rapid delayed rectifier K+ current (IKr) in cardiac cells. The expression of the hERG channel is severely down-regulated in diabetic hearts, and this down-regulation is a critical contributor to the slowing of repolarization and QT prolongation. However, the intracellular mechanisms underlying the diabetes-induced hERG deficiency remain unknown. Methods: The expression of the hERG channel was assessed via western blot analysis, and the hERG current was detected with a patch-clamp technique. Results: The results of our study revealed that the expression of the hERG protein and the hERG current were substantially decreased in high-glucose-treated hERG-HEK cells. Moreover, we demonstrated that the high-glucose-mediated damage to the hERG channel depended on the down-regulation of protein levels but not the alteration of channel kinetics. These discoveries indicated that high glucose likely disrupted hERG channel trafficking. From the western blot and immunoprecipitation analyses, we found that high glucose induced trafficking inhibition through an effect on the expression of Hsp90 and its interaction with hERG. Furthermore, the high-glucose-induced inhibition of hERG channel trafficking could activate the unfolded protein response (UPR) by up-regulating the expression levels of activating transcription factor-6 (ATF-6) and the ER chaperone protein calnexin. In addition, we demonstrated that 100 nM insulin up-regulated the expression of the hERG channel and rescued the hERG channel repression caused by high glucose. Conclusion: The results of our study provide the first evidence of a high-glucose-induced hERG channel deficiency resulting from the inhibition of channel trafficking.
Furthermore, insulin promotes the expression of the hERG channel

  1. Mechanism and pharmacological rescue of berberine-induced hERG channel deficiency

    Yan, Meng; Zhang, Kaiping; Shi, Yanhui; Feng, Lifang; Lv, Lin; Li, Baoxin

    2015-01-01

    Berberine (BBR), an isoquinoline alkaloid mainly isolated from plants of the Berberidaceae family, is extensively used to treat gastrointestinal infections in clinics. It has been reported that BBR can block the human ether-a-go-go-related gene (hERG) potassium channel and inhibit its membrane expression. The hERG channel plays a crucial role in cardiac repolarization and is the target of diverse proarrhythmic drugs. Dysfunction of the hERG channel can cause long QT syndrome. However, the regulatory mechanisms of BBR effects on hERG at the cell membrane level remain unknown. This study was designed to investigate in detail how BBR decreases hERG expression on the cell surface and to further explore pharmacological rescue strategies. In this study, BBR decreased caveolin-1 expression in a concentration-dependent manner in human embryonic kidney 293 (HEK293) cells stably expressing the hERG channel. Knocking down the basal expression of caveolin-1 alleviated BBR-induced hERG reduction. In addition, we found, using mutation techniques, that the aromatic tyrosine (Tyr652) and phenylalanine (Phe656) in the S6 domain mediate the long-term effect of BBR on hERG. Considering both our previous and present work, we propose that BBR reduces hERG membrane stability through multiple mechanisms. Furthermore, we found that fexofenadine and resveratrol shorten the action potential duration prolonged by BBR, thus having the potential to alleviate the cardiotoxicity of BBR. PMID:26543354

  2. Performance of Machine Learning Algorithms for Qualitative and Quantitative Prediction Drug Blockade of hERG1 channel.

    Wacker, Soren; Noskov, Sergei Yu

    2018-05-01

    Drug-induced abnormal heart rhythm known as Torsades de Pointes (TdP) is a potentially lethal ventricular tachycardia found in many patients. Even newly released anti-arrhythmic drugs, like ivabradine with the HCN channel as a primary target, block the hERG potassium current in an overlapping concentration interval. Promiscuous drug block of the hERG channel may potentially lead to perturbation of the action potential duration (APD) and TdP, especially when combined with polypharmacy and/or electrolyte disturbances. The example of the novel anti-arrhythmic ivabradine illustrates a clinically important and ongoing deficit in drug design and warrants better screening methods. There is an urgent need to develop new approaches for rapid and accurate assessment of how drugs with complex interactions and multiple subcellular targets can predispose to or protect from drug-induced TdP. One unexpected outcome of the compulsory hERG screening implemented in the USA and the European Union is large datasets of IC50 values for various molecules entering the market. The abundant data now allow the construction of predictive machine-learning (ML) models. Novel ML algorithms and techniques promise better accuracy in determining IC50 values of hERG blockade, comparable to or surpassing that of earlier QSAR or molecular modeling techniques. To test the performance of modern ML techniques, we have developed a computational platform integrating various workflows for quantitative structure-activity relationship (QSAR) models using data from the ChEMBL database. To establish the predictive power of ML-based algorithms, we computed IC50 values for a large dataset of molecules and compared them to automated patch-clamp measurements for a large dataset of hERG-blocking and non-blocking drugs, an industry gold standard in studies of cardiotoxicity. The optimal protocol with high sensitivity and predictive power is based on the novel eXtreme gradient boosting (XGBoost) algorithm.
The ML-platform with XGBoost
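    The authors' XGBoost platform is not reproduced in this record; as a hedged sketch of the gradient-boosting idea that XGBoost refines, the following fits one-dimensional decision stumps to successive residuals under squared loss. The descriptor/activity values are made up for illustration, not ChEMBL data:

    ```python
    def fit_stump(xs, ys):
        """Best single-split regressor: a threshold plus left/right means."""
        best = None
        for thr in sorted(set(xs)):
            left = [y for x, y in zip(xs, ys) if x <= thr]
            right = [y for x, y in zip(xs, ys) if x > thr]
            if not left or not right:
                continue
            lm, rm = sum(left) / len(left), sum(right) / len(right)
            sse = sum((y - (lm if x <= thr else rm)) ** 2 for x, y in zip(xs, ys))
            if best is None or sse < best[0]:
                best = (sse, thr, lm, rm)
        _, thr, lm, rm = best
        return lambda x: lm if x <= thr else rm

    def boost(xs, ys, rounds=20, lr=0.3):
        """Gradient boosting for squared loss: each stump fits the residuals."""
        base = sum(ys) / len(ys)
        preds = [base] * len(xs)
        stumps = []
        for _ in range(rounds):
            resid = [y - p for y, p in zip(ys, preds)]
            stump = fit_stump(xs, resid)
            stumps.append(stump)
            preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
        return lambda x: base + lr * sum(s(x) for s in stumps)

    # Hypothetical descriptor -> activity data (step-shaped on purpose).
    xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
    ys = [0.1, 0.2, 0.1, 1.9, 2.1, 2.0]
    model = boost(xs, ys)
    err = sum((model(x) - y) ** 2 for x, y in zip(xs, ys))
    print(err < 0.1)  # the ensemble fits the step closely
    ```

    XGBoost adds regularized tree learning, second-order gradients, and efficient handling of many features, but the fit-residuals-and-accumulate loop above is the core mechanism.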

  3. Stereoselective inhibition of the hERG1 potassium channel

    Liliana eSintra Grilo

    2010-11-01

    Full Text Available A growing number of drugs have been shown to prolong cardiac repolarization, predisposing individuals to life-threatening ventricular arrhythmias known as Torsades de Pointes. Most of these drugs are known to interfere with the human ether-à-go-go related gene 1 (hERG1) channel, whose current is one of the main determinants of action potential duration. Prolonged repolarization is reflected by lengthening of the QT interval of the electrocardiogram, as seen in the suitably named drug-induced long QT syndrome. Chirality (the presence of an asymmetric atom) is a common feature of marketed drugs, which can therefore exist in at least two enantiomers with distinct three-dimensional structures and possibly distinct biological fates. Both the pharmacokinetic and pharmacodynamic properties can differ between enantiomers, as well as between individuals who take the drug, owing to metabolic polymorphisms. Despite the large number of reports about drugs reducing the hERG1 current, potential stereoselective contributions have only scarcely been investigated. In this review, we present a non-exhaustive list of clinically important molecules which display chiral toxicity that may be related to hERG1-blocking properties. We particularly focus on methadone cardiotoxicity, which illustrates the importance of the stereoselective effect of drug chirality as well as individual variations resulting from pharmacogenetics. Furthermore, it seems likely that, during drug development, consideration of chirality in lead optimization and systematic assessment of hERG1 current block with all enantiomers could contribute to reducing the risk of drug-induced LQTS.

  4. The human ether-a-go-go-related gene (hERG) current inhibition selectively prolongs action potential of midmyocardial cells to augment transmural dispersion.

    Yasuda, C; Yasuda, S; Yamashita, H; Okada, J; Hisada, T; Sugiura, S

    2015-08-01

    The majority of drug-induced arrhythmias are related to prolongation of the action potential duration following inhibition of the rapidly activating delayed rectifier potassium current (I(Kr)) mediated by the hERG channel. However, for arrhythmias to develop and be sustained, not only prolongation of the action potential duration but also its transmural dispersion is required. Herein, we evaluated the effect of hERG inhibition on transmural dispersion of action potential duration using the action potential clamp technique, which combines an in silico myocyte model with actual I(Kr) measurement. Whole-cell I(Kr) current was measured in Chinese hamster ovary cells stably expressing the hERG channel. The measured current was coupled with models of ventricular endocardial, M-, and epicardial cells to calculate the action potentials. Action potentials were evaluated under control conditions and in the presence of 1, 10, or 100 μM disopyramide, an hERG inhibitor. Disopyramide dose-dependently increased the action potential durations of the three cell types. However, the action potential duration of M-cells increased disproportionately at higher doses and was significantly different from that of epicardial and endocardial cells (dispersion of repolarization). By contrast, the effects of disopyramide on peak I(Kr) and the instantaneous current-voltage relation were similar in all cell types. A simulation study suggested that the reduced repolarization reserve of M-cells, which have a smaller slowly activating delayed rectifier potassium current, levels off at longer action potential durations, producing these differences. The action potential clamp technique is useful for studying the mechanism of arrhythmogenesis by hERG inhibition through the transmural dispersion of repolarization.

  5. Computerized Classification Testing with the Rasch Model

    Eggen, Theo J. H. M.

    2011-01-01

    If classification in a limited number of categories is the purpose of testing, computerized adaptive tests (CATs) with algorithms based on sequential statistical testing perform better than estimation-based CATs (e.g., Eggen & Straetmans, 2000). In these computerized classification tests (CCTs), the Sequential Probability Ratio Test (SPRT) (Wald,…
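    The SPRT decision rule referenced here is easy to sketch: under the Rasch model the probability of a correct response is σ(θ − b), and the cumulative log-likelihood ratio between two ability hypotheses is compared against Wald's boundaries after each item. The item difficulties, hypotheses, and response pattern below are illustrative assumptions, not from the paper:

    ```python
    import math

    def rasch_p(theta, b):
        """Rasch model: probability of a correct response given ability and difficulty."""
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    def sprt_classify(responses, difficulties, theta0, theta1, alpha=0.05, beta=0.05):
        """Wald's SPRT: stop as soon as the cumulative log-likelihood ratio
        crosses a boundary; otherwise keep presenting items."""
        upper = math.log((1 - beta) / alpha)   # decide ability is above the cut-off
        lower = math.log(beta / (1 - alpha))   # decide ability is below the cut-off
        llr = 0.0
        for x, b in zip(responses, difficulties):
            p0, p1 = rasch_p(theta0, b), rasch_p(theta1, b)
            llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "above"
            if llr <= lower:
                return "below"
        return "undecided"

    # Eight correct answers on items of difficulty 0 push the ratio upward;
    # with these hypotheses each correct adds exactly 0.5 to the log-ratio.
    print(sprt_classify([1] * 8, [0.0] * 8, theta0=-0.5, theta1=0.5))  # "above"
    ```

    The appeal for computerized classification testing is that the test can terminate as soon as a boundary is crossed, typically well before a fixed-length test would end.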

  6. High yield purification of full-length functional hERG K+ channels produced in Saccharomyces cerevisiae

    Molbaek, Karen; Scharff-Poulsen, Peter; Hélix-Nielsen, Claus

    2015-01-01

    To our knowledge, this is the first reported high-yield production and purification of full-length, tetrameric and functional hERG. This significant breakthrough will be paramount in obtaining hERG crystal structures, and in the establishment of new high-throughput hERG drug safety screening assays....

  7. Overcoming HERG affinity in the discovery of the CCR5 antagonist maraviroc.

    Price, David A; Armour, Duncan; de Groot, Marcel; Leishman, Derek; Napier, Carolyn; Perros, Manos; Stammen, Blanda L; Wood, Anthony

    2006-09-01

    The discovery of maraviroc 17 is described with particular reference to the generation of high selectivity over affinity for the HERG potassium channel. This was achieved through the use of a high throughput binding assay for the HERG channel that is known to show an excellent correlation with functional effects.

  8. hERG trafficking inhibition in drug-induced lethal cardiac arrhythmia.

    Nogawa, Hisashi; Kawai, Tomoyuki

    2014-10-15

    Acquired long QT syndrome induced by non-cardiovascular drugs can cause a lethal cardiac arrhythmia called torsades de pointes and is a significant problem in drug development. The prolongation of the QT interval and cardiac action potential duration is mainly due to reduced physiological function of the rapidly activating voltage-dependent potassium channels encoded by the human ether-a-go-go-related gene (hERG). Structurally diverse groups of drugs are known to directly inhibit hERG channel conductance. Therefore, the capacity for acute hERG inhibition is routinely assessed at the preclinical stages of pharmaceutical testing. Recent findings indicated that chronic treatment with various drugs not only inhibits hERG channels but also decreases hERG channel expression in the plasma membrane of cardiomyocytes, which has become another concern in safety pharmacology. The mechanisms involve the disruption of hERG trafficking to the surface membrane or the acceleration of hERG protein degradation. From this perspective, we present a brief overview of the mechanisms of drug-induced trafficking inhibition and pathological regulation. Understanding drug-induced hERG trafficking inhibition may provide new strategies for predicting drug-induced QT prolongation and lethal cardiac arrhythmia in pharmaceutical drug development. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Inter Genre Similarity Modelling For Automatic Music Genre Classification

    Bagci, Ulas; Erzin, Engin

    2009-01-01

    Music genre classification is an essential tool for music information retrieval systems, and it has found critical applications in various media platforms. Two important problems in automatic music genre classification are feature extraction and classifier design. This paper investigates inter-genre similarity modelling (IGS) to improve the performance of automatic music genre classification. Inter-genre similarity information is extracted over the misclassified feature population....

  10. Mechanism and pharmacological rescue of berberine-induced hERG channel deficiency

    Yan M

    2015-10-01

    Full Text Available Meng Yan,1 Kaiping Zhang,1 Yanhui Shi,1 Lifang Feng,1 Lin Lv,1 Baoxin Li1,2 1Department of Pharmacology, Harbin Medical University, 2State-Province Key Laboratory of Biopharmaceutical Engineering, Harbin, Heilongjiang, People’s Republic of China Abstract: Berberine (BBR), an isoquinoline alkaloid mainly isolated from plants of the Berberidaceae family, is extensively used to treat gastrointestinal infections in clinics. It has been reported that BBR can block the human ether-a-go-go-related gene (hERG) potassium channel and inhibit its membrane expression. The hERG channel plays a crucial role in cardiac repolarization and is the target of diverse proarrhythmic drugs. Dysfunction of the hERG channel can cause long QT syndrome. However, the regulatory mechanisms of BBR effects on hERG at the cell membrane level remain unknown. This study was designed to investigate in detail how BBR decreases hERG expression on the cell surface and to further explore pharmacological rescue strategies. In this study, BBR decreased caveolin-1 expression in a concentration-dependent manner in human embryonic kidney 293 (HEK293) cells stably expressing the hERG channel. Knocking down the basal expression of caveolin-1 alleviated BBR-induced hERG reduction. In addition, we found, using mutation techniques, that the aromatic tyrosine (Tyr652) and phenylalanine (Phe656) in the S6 domain mediate the long-term effect of BBR on hERG. Considering both our previous and present work, we propose that BBR reduces hERG membrane stability through multiple mechanisms. Furthermore, we found that fexofenadine and resveratrol shorten the action potential duration prolonged by BBR, thus having the potential to alleviate the cardiotoxicity of BBR. Keywords: berberine, hERG, caveolin-1, cardiotoxicity, LQTS, pharmacological rescue

  11. Calibration of a Plastic Classification System with the Ccw Model

    Barcala Riveira, J. M.; Fernandez Marron, J. L.; Alberdi Primicia, J.; Navarrete Marin, J. J.; Oller Gonzalez, J. C.

    2003-01-01

    This document describes the calibration of a plastic classification system with the Ccw model (Classification by Quantum's built with Wavelet Coefficients). The method is applied to spectra of plastics usually present in domestic wastes. The results obtained are presented. (Author) 16 refs

  12. Compensatory neurofuzzy model for discrete data classification in biomedical

    Ceylan, Rahime

    2015-03-01

    Biomedical data fall into two main categories: signals and discrete data. Accordingly, studies in this area concern either biomedical signal classification or biomedical discrete data classification. Artificial intelligence models exist for the classification of ECG, EMG or EEG signals. In the same way, the literature contains many models for classifying discrete data, such as sample values obtained from blood analysis or biopsy in the medical process. No single algorithm, however, achieves a high accuracy rate on both signal and discrete data classification. In this study, a compensatory neurofuzzy network model is presented for the classification of discrete data in biomedical pattern recognition. The compensatory neurofuzzy network is a hybrid binary classifier in which the parameters of the fuzzy systems are updated by the backpropagation algorithm. The realized classifier model was applied to two benchmark datasets (the Wisconsin Breast Cancer dataset and the Pima Indian Diabetes dataset). Experimental studies show that the compensatory neurofuzzy network model achieved a 96.11% accuracy rate in classification of the breast cancer dataset, and a 69.08% accuracy rate was obtained in experiments on the diabetes dataset with only 10 iterations.

  13. Structural refinement of the hERG1 pore and voltage-sensing domains with ROSETTA-membrane and molecular dynamics simulations.

    Subbotina, Julia; Yarov-Yarovoy, Vladimir; Lees-Miller, James; Durdagi, Serdar; Guo, Jiqing; Duff, Henry J; Noskov, Sergei Yu

    2010-11-01

    The hERG1 gene (Kv11.1) encodes a voltage-gated potassium channel. Mutations in this gene lead to one form of the Long QT Syndrome (LQTS) in humans. Promiscuous binding of drugs to hERG1 is known to alter the structure/function of the channel, leading to an acquired form of LQTS. Accordingly, the creation and validation of a reliable 3D model of the channel has been a key target in molecular cardiology and pharmacology for the last decade. Although many models have been built, they were all limited to the pore domain. In this work, a full model of the hERG1 channel is developed which includes all transmembrane segments. We tested a template-driven de-novo design with ROSETTA-membrane modeling, using side-chain placements optimized by subsequent molecular dynamics (MD) simulations. Although backbone templates for the homology-modeled parts of the pore and voltage sensors were based on the available structures of the KvAP, Kv1.2 and Kv1.2-Kv2.1 chimera channels, the missing parts were modeled de novo. The impact of several alignments on the structure of the S4 helix in the voltage-sensing domain was also tested. Herein, the final models are evaluated for consistency with the reported structural elements discovered mainly on the basis of mutagenesis and electrophysiology. These structural elements include salt bridges and close contacts in the voltage-sensor domain, and the topology of the extracellular S5-pore linker compared with that established by toxin foot-printing and nuclear magnetic resonance studies. Implications of the refined hERG1 model for the binding of blockers and channel activators (potent new ligands for channel activation) are discussed. © 2010 Wiley-Liss, Inc.

  14. Classification

    Clary, Renee; Wandersee, James

    2013-01-01

    In this article, Renee Clary and James Wandersee describe the beginnings of "Classification," which lies at the very heart of science and depends upon pattern recognition. Clary and Wandersee approach patterns by first telling the story of the "Linnaean classification system," introduced by Carl Linnaeus (1707-1778), who is…

  15. Classification rates: non‐parametric versus parametric models using ...

    This research sought to establish whether non-parametric modeling achieves a higher correct classification ratio than a parametric model. The local likelihood technique was used to fit the data sets. The same data sets were modeled using the parametric logit, and the abilities of the two models to correctly predict the binary ...

  16. Learning classification models with soft-label information.

    Nguyen, Quang; Valizadegan, Hamed; Hauskrecht, Milos

    2014-01-01

    Learning of classification models in medicine often relies on data labeled by a human expert. Since labeling of clinical data may be time-consuming, finding ways of alleviating the labeling costs is critical for our ability to automatically learn such models. In this paper we propose a new machine learning approach that is able to learn improved binary classification models more efficiently by refining the binary class information in the training phase with soft labels that reflect how strongly the human expert feels about the original class labels. Two types of methods that can learn improved binary classification models from soft labels are proposed. The first relies on probabilistic/numeric labels, the other on ordinal categorical labels. We study and demonstrate the benefits of these methods for learning an alerting model for heparin-induced thrombocytopenia. The experiments are conducted on the data of 377 patient instances labeled by three different human experts. The methods are compared using the area under the receiver operating characteristic curve (AUC) score. Our AUC results show that the new approach is capable of learning classification models more efficiently compared to traditional learning methods. The improvement in AUC is most remarkable when the number of examples we learn from is small. A new classification learning framework that lets us learn from auxiliary soft-label information provided by a human expert is a promising new direction for learning classification models from expert labels, reducing the time and cost needed to label data.
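The probabilistic/numeric variant described above can be pictured by turning the expert's confidence into per-example weights for an otherwise standard learner. The sketch below is illustrative, not the authors' implementation: the Likert-to-weight mapping, the data, and the choice of weighted logistic regression are all invented for the example.

```python
import math
import random

def likert_to_weight(likert):
    # Map a 1-5 Likert confidence (1 = least certain, 5 = most certain;
    # an illustrative convention, not the paper's scale) to a weight in (0, 1].
    return likert / 5.0

def train_weighted_logreg(X, y, w, lr=0.5, epochs=200):
    """Weighted logistic regression fitted by batch gradient descent."""
    d = len(X[0])
    beta = [0.0] * (d + 1)              # [bias, coefficients...]
    for _ in range(epochs):
        grad = [0.0] * (d + 1)
        for xi, yi, wi in zip(X, y, w):
            z = beta[0] + sum(b * v for b, v in zip(beta[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            err = wi * (p - yi)         # the soft-label weight scales the update
            grad[0] += err
            for j, v in enumerate(xi):
                grad[j + 1] += err * v
        beta = [b - lr * g / len(X) for b, g in zip(beta, grad)]
    return beta

def predict(beta, xi):
    return 1 if beta[0] + sum(b * v for b, v in zip(beta[1:], xi)) > 0 else 0

random.seed(0)
# Hypothetical 1-D data: positives cluster near +1, negatives near -1.
X = [[random.gauss(1, 0.5)] for _ in range(50)] + \
    [[random.gauss(-1, 0.5)] for _ in range(50)]
y = [1] * 50 + [0] * 50
likert = [random.choice([3, 4, 5]) for _ in range(100)]   # expert confidence
w = [likert_to_weight(s) for s in likert]

beta = train_weighted_logreg(X, y, w)
acc = sum(predict(beta, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

Examples the expert is unsure about contribute less to the gradient, which is one simple way soft labels can temper the influence of noisy annotations.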

  17. Cholesterol regulates HERG K+ channel activation by increasing phospholipase C β1 expression.

    Chun, Yoon Sun; Oh, Hyun Geun; Park, Myoung Kyu; Cho, Hana; Chung, Sungkwon

    2013-01-01

    Human ether-a-go-go-related gene (HERG) K(+) channel underlies the rapidly activating delayed rectifier K(+) conductance (IKr) during normal cardiac repolarization. Also, it may regulate excitability in many neuronal cells. Recently, we showed that enrichment of cell membrane with cholesterol inhibits HERG channels by reducing the levels of phosphatidylinositol 4,5-bisphosphate [PtdIns(4,5)P2] due to the activation of phospholipase C (PLC). In this study, we further explored the effect of cholesterol enrichment on HERG channel kinetics. When membrane cholesterol level was mildly increased in human embryonic kidney (HEK) 293 cells expressing HERG channel, the inactivation and deactivation kinetics of HERG current were not affected, but the activation rate was significantly decelerated at all voltages tested. The application of PtdIns(4,5)P2 or inhibitor for PLC prevented the effect of cholesterol enrichment, while the presence of antibody against PtdIns(4,5)P2 in pipette solution mimicked the effect of cholesterol enrichment. These results indicate that the effect of cholesterol enrichment on HERG channel is due to the depletion of PtdIns(4,5)P2. We also found that cholesterol enrichment significantly increases the expression of β1 and β3 isoforms of PLC (PLCβ1, PLCβ3) in the membrane. Since the effects of cholesterol enrichment on HERG channel were prevented by inhibiting transcription or by inhibiting PLCβ1 expression, we conclude that increased PLCβ1 expression leads to the deceleration of HERG channel activation rate via downregulation of PtdIns(4,5)P2. These results confirm a crosstalk between two plasma membrane-enriched lipids, cholesterol and PtdIns(4,5)P2, in the regulation of HERG channels.

  18. Channel sialic acids limit hERG channel activity during the ventricular action potential.

    Norring, Sarah A; Ednie, Andrew R; Schwetz, Tara A; Du, Dongping; Yang, Hui; Bennett, Eric S

    2013-02-01

    Activity of human ether-a-go-go-related gene (hERG) 1 voltage-gated K(+) channels is responsible for portions of phase 2 and phase 3 repolarization of the human ventricular action potential. Here, we questioned whether and how physiologically and pathophysiologically relevant changes in surface N-glycosylation modified hERG channel function. Voltage-dependent hERG channel gating and activity were evaluated as expressed in a set of Chinese hamster ovary (CHO) cell lines under conditions of full glycosylation, no sialylation, no complex N-glycans, and following enzymatic deglycosylation of surface N-glycans. For each condition of reduced glycosylation, hERG channel steady-state activation and inactivation relationships were shifted linearly in the depolarizing direction by a significant ∼9 and ∼18 mV, respectively. The hERG window current increased significantly by 50-150%, and the peak shifted by a depolarizing ∼10 mV. There was no significant change in maximum hERG current density. Deglycosylated channels were significantly more active (20-80%) than glycosylated controls during phases 2 and 3 of action potential clamp protocols. Simulations of hERG current and ventricular action potentials corroborated the experimental data and predicted that reduced sialylation leads to a 50-70 ms decrease in action potential duration. The data describe a novel mechanism by which hERG channel gating is modulated through physiologically and pathophysiologically relevant changes in N-glycosylation; reduced channel sialylation increases hERG channel activity during the action potential, thereby increasing the rate of action potential repolarization.

  19. hERG blocking potential of acids and zwitterions characterized by three thresholds for acidity, size and reactivity

    Nikolov, Nikolai Georgiev; Dybdahl, Marianne; Jonsdottir, Svava Osk

    2014-01-01

    with a concordance of 91% by a decision tree based on the rule. Two external validations were performed with sets of 35 and 48 observations, respectively, both showing concordances of 91%. In addition, a global QSAR model of hERG blocking was constructed based on a large diverse training set of 1374 chemicals...... covering all ionization classes, externally validated showing high predictivity and compared to the decision tree. The decision tree was found to be superior for the acids and zwitterionic ampholytes classes....

  20. Classification

    Hjørland, Birger

    2017-01-01

    This article presents and discusses definitions of the term “classification” and the related concepts “concept/conceptualization,” “categorization,” “ordering,” “taxonomy” and “typology.” It further presents and discusses theories of classification including the influences of Aristotle...... and Wittgenstein. It presents different views on forming classes, including logical division, numerical taxonomy, historical classification, hermeneutical and pragmatic/critical views. Finally, issues related to artificial versus natural classification and taxonomic monism versus taxonomic pluralism are briefly...

  1. Formalization of the classification pattern: survey of classification modeling in information systems engineering.

    Partridge, Chris; de Cesare, Sergio; Mitchell, Andrew; Odell, James

    2018-01-01

    Formalization is becoming more common in all stages of the development of information systems, as a better understanding of its benefits emerges. Classification systems are ubiquitous, no more so than in domain modeling. The classification pattern that underlies these systems provides a good case study of the move toward formalization in part because it illustrates some of the barriers to formalization, including the formal complexity of the pattern and the ontological issues surrounding the "one and the many." Powersets are a way of characterizing the (complex) formal structure of the classification pattern, and their formalization has been extensively studied in mathematics since Cantor's work in the late nineteenth century. One can use this formalization to develop a useful benchmark. There are various communities within information systems engineering (ISE) that are gradually working toward a formalization of the classification pattern. However, for most of these communities, this work is incomplete, in that they have not yet arrived at a solution with the expressiveness of the powerset benchmark. This contrasts with the early smooth adoption of powerset by other information systems communities to, for example, formalize relations. One way of understanding the varying rates of adoption is recognizing that the different communities have different historical baggage. Many conceptual modeling communities emerged from work done on database design, and this creates hurdles to the adoption of the high level of expressiveness of powersets. Another relevant factor is that these communities also often feel, particularly in the case of domain modeling, a responsibility to explain the semantics of whatever formal structures they adopt. This paper aims to make sense of the formalization of the classification pattern in ISE and surveys its history through the literature, starting from the relevant theoretical works of the mathematical literature and gradually shifting focus
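The powerset benchmark can be made concrete in a few lines: the powerset of a domain of instances is the formal space of all possible classes, and a particular classification names some of its members. A toy illustration (the domain and class names are invented):

```python
from itertools import chain, combinations

def powerset(items):
    """All subsets of `items` -- the formal space of candidate classes."""
    s = list(items)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

instances = {"sparrow", "eagle", "salmon"}
classes = powerset(instances)          # 2**3 = 8 candidate classes

# A classification selects and names particular members of the powerset.
classification = {
    "bird": frozenset({"sparrow", "eagle"}),
    "fish": frozenset({"salmon"}),
}
```

The expressiveness the survey benchmarks against is exactly this: any grouping of instances, however irregular, is available as a class.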

  2. AN APPLICATION OF FUNCTIONAL MULTIVARIATE REGRESSION MODEL TO MULTICLASS CLASSIFICATION

    Krzyśko, Mirosław; Smaga, Łukasz

    2017-01-01

    In this paper, the scalar-response functional multivariate regression model is considered. By using the basis-function representation of functional predictors and regression coefficients, this model is rewritten as a multivariate regression model. This representation of the functional multivariate regression model is used for multiclass classification for multivariate functional data. Computational experiments performed on real labelled data sets demonstrate the effectiveness of the proposed ...
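The basis-function route can be illustrated as follows: each sampled curve is projected onto a small cosine basis, and the resulting coefficient vectors feed an ordinary multivariate classifier. Both the basis choice and the nearest-centroid classifier below are illustrative stand-ins, not the authors' exact model.

```python
import math

def cosine_coeffs(samples, k_max=4):
    """Coefficients of a uniformly sampled curve in the first k_max + 1
    discrete cosine basis functions (a DCT-II-style expansion)."""
    n = len(samples)
    coeffs = []
    for k in range(k_max + 1):
        c = sum(x * math.cos(math.pi * k * (i + 0.5) / n)
                for i, x in enumerate(samples))
        coeffs.append(2.0 * c / n if k else c / n)
    return coeffs

def nearest_centroid(train, labels, query):
    """Assign `query` to the class whose mean coefficient vector is closest."""
    centroids = {}
    for cl in set(labels):
        vecs = [v for v, lab in zip(train, labels) if lab == cl]
        centroids[cl] = [sum(col) / len(vecs) for col in zip(*vecs)]

    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))

    return min(centroids, key=lambda cl: dist(query, centroids[cl]))

n = 64
grid = [i / n for i in range(n)]
# Two hypothetical functional classes: slow vs. fast oscillations.
slow = [[math.sin(2 * math.pi * t) for t in grid] for _ in range(5)]
fast = [[math.sin(6 * math.pi * t) for t in grid] for _ in range(5)]
train = [cosine_coeffs(curve) for curve in slow + fast]
labels = ["slow"] * 5 + ["fast"] * 5

# A phase-shifted slow curve should still land near the "slow" centroid.
query = cosine_coeffs([math.sin(2 * math.pi * t + 0.1) for t in grid])
pred = nearest_centroid(train, labels, query)
```

Once curves are reduced to fixed-length coefficient vectors, any standard multivariate method applies, which is the essence of rewriting the functional model as a multivariate one.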

  3. Vertebrae classification models - Validating classification models that use morphometrics to identify ancient salmonid (Oncorhynchus spp.) vertebrae to species

    National Oceanic and Atmospheric Administration, Department of Commerce — Using morphometric characteristics of modern salmonid (Oncorhynchus spp.) vertebrae, we have developed classification models to identify salmonid vertebrae to the...

  4. A Soft Intelligent Risk Evaluation Model for Credit Scoring Classification

    Mehdi Khashei

    2015-09-01

    Full Text Available Risk management is one of the most important branches of business and finance. Classification models are the most popular and widely used analytical group of data mining approaches that can greatly help financial decision makers and managers to tackle credit risk problems. However, the literature clearly indicates that, despite proposing numerous classification models, credit scoring is often a difficult task. On the other hand, there is no universal credit-scoring model in the literature that can be accurately and explanatorily used in all circumstances. Therefore, the research for improving the efficiency of credit-scoring models has never stopped. In this paper, a hybrid soft intelligent classification model is proposed for credit-scoring problems. In the proposed model, the unique advantages of the soft computing techniques are used in order to modify the performance of the traditional artificial neural networks in credit scoring. Empirical results of Australian credit card data classifications indicate that the proposed hybrid model outperforms its components, and also other classification models presented for credit scoring. Therefore, the proposed model can be considered as an appropriate alternative tool for binary decision making in business and finance, especially in high uncertainty conditions.

  5. Acute leukemia classification by ensemble particle swarm model selection.

    Escalante, Hugo Jair; Montes-y-Gómez, Manuel; González, Jesús A; Gómez-Gil, Pilar; Altamirano, Leopoldo; Reyes, Carlos A; Reta, Carolina; Rosales, Alejandro

    2012-07-01

    Acute leukemia is a malignant disease that affects a large proportion of the world population. Different types and subtypes of acute leukemia require different treatments. In order to assign the correct treatment, a physician must identify the leukemia type or subtype. Advanced and precise methods are available for identifying leukemia types, but they are very expensive and not available in most hospitals in developing countries. Thus, alternative methods have been proposed. An option explored in this paper is based on the morphological properties of bone marrow images, where features are extracted from medical images and standard machine learning techniques are used to build leukemia type classifiers. This paper studies the use of ensemble particle swarm model selection (EPSMS), which is an automated tool for the selection of classification models, in the context of acute leukemia classification. EPSMS is the application of particle swarm optimization to the exploration of the search space of ensembles that can be formed by heterogeneous classification models in a machine learning toolbox. EPSMS does not require prior domain knowledge and it is able to select highly accurate classification models without user intervention. Furthermore, specific models can be used for different classification tasks. We report experimental results for acute leukemia classification with real data and show that EPSMS outperformed the best results obtained using manually designed classifiers with the same data. The highest performance using EPSMS was of 97.68% for two-type classification problems and of 94.21% for more than two types problems. To the best of our knowledge, these are the best results reported for this data set. Compared with previous studies, these improvements were consistent among different type/subtype classification tasks, different features extracted from images, and different feature extraction regions. The performance improvements were statistically significant.

  6. Active Learning of Classification Models with Likert-Scale Feedback.

    Xue, Yanbing; Hauskrecht, Milos

    2017-01-01

    Annotation of classification data by humans can be a time-consuming and tedious process. Finding ways of reducing the annotation effort is critical for building the classification models in practice and for applying them to a variety of classification tasks. In this paper, we develop a new active learning framework that combines two strategies to reduce the annotation effort. First, it relies on label uncertainty information obtained from the human in terms of the Likert-scale feedback. Second, it uses active learning to annotate examples with the greatest expected change. We propose a Bayesian approach to calculate the expectation and an incremental SVM solver to reduce the time complexity of the solvers. We show the combination of our active learning strategy and the Likert-scale feedback can learn classification models more rapidly and with a smaller number of labeled instances than methods that rely on either Likert-scale labels or active learning alone.
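Uncertainty-driven querying, one ingredient of the combined strategy above, reduces to a one-line selection rule once a model supplies class probabilities. The probabilities below are hypothetical stand-ins, and the paper's criterion of greatest expected change is more elaborate than this minimal sketch.

```python
def most_uncertain(probs):
    """Index of the unlabeled example whose predicted probability of the
    positive class is closest to 0.5, i.e. maximum binary uncertainty."""
    return min(range(len(probs)), key=lambda i: abs(probs[i] - 0.5))

# Hypothetical model outputs for five unlabeled pool examples.
pool_probs = [0.92, 0.48, 0.10, 0.77, 0.61]
query_idx = most_uncertain(pool_probs)
```

The selected example is then shown to the expert, whose Likert-scale answer both labels it and indicates how much to trust that label.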

  7. Signal classification using global dynamical models, Part I: Theory

    Kadtke, J.; Kremliovsky, M.

    1996-01-01

    Detection and classification of signals is one of the principal areas of signal processing, and the utilization of nonlinear information has long been considered as a way of improving performance beyond standard linear (e.g. spectral) techniques. Here, we develop a method for using global models of chaotic dynamical systems theory to define a signal classification processing chain, which is sensitive to nonlinear correlations in the data. We use it to demonstrate classification in high noise regimes (negative SNR), and argue that classification probabilities can be directly computed from ensemble statistics in the model coefficient space. We also develop a modification for non-stationary signals (i.e. transients) using non-autonomous ODEs. In Part II of this paper, we demonstrate the analysis on actual open ocean acoustic data from marine biologics. copyright 1996 American Institute of Physics
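The coefficient-space idea can be illustrated with a linear autoregressive stand-in for the paper's global nonlinear models (an AR(2) fit keeps the sketch short; the only assumption is that each signal class occupies a distinct region of model-coefficient space):

```python
import math

def ar2_coeffs(x):
    """Least-squares fit of x[t] = a1*x[t-1] + a2*x[t-2]; the pair
    (a1, a2) is the signal's point in model-coefficient space."""
    s11 = s22 = s12 = b1 = b2 = 0.0
    for t in range(2, len(x)):
        s11 += x[t - 1] * x[t - 1]
        s22 += x[t - 2] * x[t - 2]
        s12 += x[t - 1] * x[t - 2]
        b1 += x[t] * x[t - 1]
        b2 += x[t] * x[t - 2]
    det = s11 * s22 - s12 * s12          # 2x2 normal equations, solved directly
    return ((s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det)

# Two hypothetical signal classes with different oscillation frequencies.
sig_a = [math.sin(0.3 * t) for t in range(200)]
sig_b = [math.sin(1.2 * t) for t in range(200)]
coeff_a = ar2_coeffs(sig_a)
coeff_b = ar2_coeffs(sig_b)
# A pure sinusoid obeys x[t] = 2*cos(w)*x[t-1] - x[t-2] exactly, so the two
# classes land at well-separated points in coefficient space.
```

Classification then amounts to comparing a new signal's fitted coefficients against the ensemble statistics of each class, as the abstract describes for the nonlinear case.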

  8. Polarimetric SAR image classification based on discriminative dictionary learning model

    Sang, Cheng Wei; Sun, Hong

    2018-03-01

    Polarimetric SAR (PolSAR) image classification is one of the important applications of PolSAR remote sensing. It is a difficult high-dimensional nonlinear mapping problem, and sparse representations based on learned overcomplete dictionaries have shown great potential for solving it. The overcomplete dictionary plays an important role in PolSAR image classification; however, in complex PolSAR scenes, features shared by different classes weaken the discrimination of the learned dictionary and thus degrade classification performance. In this paper, we propose a novel overcomplete dictionary learning model to enhance the discrimination of the dictionary. The overcomplete dictionary learned by the proposed model is more discriminative and well suited to PolSAR classification.
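A stripped-down version of dictionary-based classification helps make the role of the dictionary concrete: each class keeps a small dictionary of unit-norm atoms, a sample gets a 1-sparse approximation from each class dictionary, and the smallest reconstruction residual decides the class. The atoms and feature vectors below are invented, and the model is far simpler than the discriminative one proposed in the paper.

```python
import math

def unit(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def residual(sample, dictionary):
    """Smallest residual norm after projecting `sample` onto any single
    unit-norm atom of `dictionary` (a 1-sparse approximation)."""
    best = float("inf")
    for atom in dictionary:
        coef = sum(s * a for s, a in zip(sample, atom))
        res = sum((s - coef * a) ** 2 for s, a in zip(sample, atom))
        best = min(best, res)
    return best

def classify(sample, class_dicts):
    return min(class_dicts, key=lambda c: residual(sample, class_dicts[c]))

# Hypothetical per-class dictionaries over a 3-D feature space.
dicts = {
    "urban": [unit([1.0, 0.1, 0.0]), unit([0.9, 0.2, 0.1])],
    "water": [unit([0.0, 0.1, 1.0]), unit([0.1, 0.0, 0.9])],
}
pred = classify([0.95, 0.15, 0.05], dicts)
```

Atoms shared by several classes shrink the residual gap between them, which is exactly the discrimination problem the proposed learning model targets.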

  9. Co-occurrence Models in Music Genre Classification

    Ahrendt, Peter; Goutte, Cyril; Larsen, Jan

    2005-01-01

    Music genre classification has been investigated using many different methods, but most of them build on probabilistic models of feature vectors x_r which only represent the short time segment with index r of the song. Here, three different co-occurrence models are proposed which instead consider...... genre data set with a variety of modern music. The basis was a so-called AR feature representation of the music. Besides the benefit of having proper probabilistic models of the whole song, the lowest classification test errors were found using one of the proposed models....

  10. Towards a Structural View of Drug Binding to hERG K+ Channels.

    Vandenberg, Jamie I; Perozo, Eduardo; Allen, Toby W

    2017-10-01

    The human ether-a-go-go-related gene (hERG) K+ channel is of great medical and pharmaceutical relevance. Inherited mutations in hERG result in congenital long-QT syndrome which is associated with a markedly increased risk of cardiac arrhythmia and sudden death. hERG K+ channels are also remarkably susceptible to block by a wide range of drugs, which in turn can cause drug-induced long-QT syndrome and an increased risk of sudden death. The recent determination of the near-atomic resolution structure of the hERG K+ channel, using single-particle cryo-electron microscopy (cryo-EM), provides tremendous insights into how these channels work. It also suggests a way forward in our quest to understand why these channels are so promiscuous with respect to drug binding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Conceptualising Business Models: Definitions, Frameworks and Classifications

    Erwin Fielt

    2013-01-01

    The business model concept is gaining traction in different disciplines but is still criticized for being fuzzy and vague and lacking consensus on its definition and compositional elements. In this paper we set out to advance our understanding of the business model concept by addressing three areas of foundational research: business model definitions, business model elements, and business model archetypes. We define a business model as a representation of the value logic of an organization in...

  12. Mechanism of HERG potassium channel inhibition by tetra-n-octylammonium bromide and benzethonium chloride

    Long, Yan; Lin, Zuoxian; Xia, Menghang; Zheng, Wei; Li, Zhiyuan

    2013-01-01

    Tetra-n-octylammonium bromide and benzethonium chloride are synthetic quaternary ammonium salts that are widely used in hospitals and industries for disinfection and surface treatment and as preservative agents. Recently, the HERG channel inhibition activities of these compounds have been found to carry potential risks of inducing long QT syndrome and cardiac arrhythmia, although the mechanism of action is still elusive. This study was conducted to investigate the mechanism of HERG channel inhibition by these compounds using whole-cell patch clamp experiments in a CHO cell line stably expressing HERG channels. Tetra-n-octylammonium bromide and benzethonium chloride exhibited concentration-dependent inhibition of HERG channel currents with IC50 values of 4 nM and 17 nM, respectively, which was also voltage-dependent and use-dependent. Both compounds shifted the channel activation I–V curves in a hyperpolarized direction by 10–15 mV and accelerated channel activation and inactivation processes by 2-fold. In addition, tetra-n-octylammonium bromide shifted the inactivation I–V curve in a hyperpolarized direction by 24.4 mV and slowed the rate of channel deactivation by 2-fold, whereas benzethonium chloride did not. The results indicate that tetra-n-octylammonium bromide and benzethonium chloride are open-channel blockers that inhibit HERG channels in a voltage-dependent, use-dependent and state-dependent manner. - Highlights: ► Tetra-n-octylammonium and benzethonium are potent HERG channel inhibitors. ► Channel activation and inactivation processes are accelerated by the two compounds. ► Both compounds are open-channel blockers of HERG channels. ► HERG channel inhibition by both compounds is use-, voltage- and state-dependent. ► The in vivo risk of QT prolongation needs to be studied for the two compounds

  14. Lean waste classification model to support the sustainable operational practice

    Sutrisno, A.; Vanany, I.; Gunawan, I.; Asjad, M.

    2018-04-01

    Driven by growing pressure for more sustainable operational practice, improving the classification of non-value-added activities (waste) is one of the prerequisites for realizing the sustainability of a firm. The seven waste types of the Ohno model have become a versatile tool for revealing lean waste occurrence. However, many recent investigations have found the Seven Wastes model of Ohno insufficient to cover the types of waste occurring in industrial practice at various application levels. To narrow this limitation, this paper presents an improved waste classification model based on a survey of recent studies discussing waste at various operational stages. Implications of the waste classification model for the body of knowledge and for industrial practice are provided.

  15. Music genre classification via likelihood fusion from multiple feature models

    Shiu, Yu; Kuo, C.-C. J.

    2005-01-01

    Music genre provides an efficient way to index songs in a music database and can be used as an effective means to retrieve music of a similar type, i.e. content-based music retrieval. A new two-stage scheme for music genre classification is proposed in this work. At the first stage, we examine several different features, construct their corresponding parametric models (e.g. GMM and HMM) and compute their likelihood functions to yield soft classification results. In particular, the timbre, rhythm and temporal variation features are considered. Then, at the second stage, these soft classification results are integrated into a hard decision for final music genre classification. Experimental results are given to demonstrate the performance of the proposed scheme.
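The second-stage integration can be sketched as a log-likelihood sum under a conditional-independence assumption across the feature models. Whether the paper uses a plain sum or a weighted fusion is not stated here, so treat both the rule and the numbers below as illustrative.

```python
def fuse_log_likelihoods(per_feature_ll):
    """Sum per-feature log-likelihoods (assuming the feature models are
    conditionally independent) and return the winning genre plus totals."""
    genres = next(iter(per_feature_ll.values())).keys()
    totals = {g: sum(ll[g] for ll in per_feature_ll.values()) for g in genres}
    return max(totals, key=totals.get), totals

# Hypothetical stage-one outputs from timbre/rhythm/temporal-variation models.
scores = {
    "timbre":   {"jazz": -10.2, "rock": -12.5},
    "rhythm":   {"jazz": -8.1,  "rock": -7.9},
    "temporal": {"jazz": -5.0,  "rock": -6.4},
}
genre, totals = fuse_log_likelihoods(scores)
```

Note that rhythm alone favors "rock" here; the fusion step lets the stronger timbre and temporal evidence override a single dissenting feature model.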

  16. An NMR investigation of the structure, function and role of the hERG channel selectivity filter in the long QT syndrome.

    Gravel, Andrée E; Arnold, Alexandre A; Dufourc, Erick J; Marcotte, Isabelle

    2013-06-01

    The human ether-a-go-go-related gene (hERG) voltage-gated K(+) channels are located in heart cell membranes and hold a unique selectivity filter (SF) amino acid sequence (SVGFG) compared to other K(+) channels (TVGYG). When blocked as a side effect of drugs, hERG provokes the acquired long QT syndrome (ALQTS), leading to arrhythmia or heart failure. Its pore domain - including the SF - is believed to be a cardiotoxic drug target. In this study, combining solution and solid-state NMR experiments, we examine the structure and function of hERG's L(622)-K(638) segment, which comprises the SF, as well as its role in the ALQTS using reported active drugs. We first show that the SF segment is unstructured in solution with and without K(+) ions in its surroundings, consistent with the expected flexibility required for the change between the different channel conductive states predicted by computational studies. We also show that the SF segment has the potential to perturb the membrane, but that the presence of K(+) ions cancels this interaction. The SF moiety appears to be a possible target for promethazine in the ALQTS mechanism, but not as much for bepridil, cetirizine, diphenhydramine and fluvoxamine. The membrane affinity of the SF is also affected by the presence of drugs, which also perturb model DMPC-based membranes. These results thus suggest that the membrane could play a role in the ALQTS by promoting access to transmembrane or intracellular targets on the hERG channel, or by perturbing the lipid-protein synergy. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Conceptualising Business Models: Definitions, Frameworks and Classifications

    Erwin Fielt

    2013-12-01

    The business model concept is gaining traction in different disciplines but is still criticized for being fuzzy and vague and lacking consensus on its definition and compositional elements. In this paper we set out to advance our understanding of the business model concept by addressing three areas of foundational research: business model definitions, business model elements, and business model archetypes. We define a business model as a representation of the value logic of an organization in terms of how it creates and captures customer value. This abstract and generic definition is made more specific and operational by the compositional elements, which need to address the customer, value proposition, organizational architecture (firm and network level) and economics dimensions. Business model archetypes complement the definition and elements by providing a more concrete and empirical understanding of the business model concept. The main contributions of this paper are (1) explicitly including the customer value concept in the business model definition and focussing on value creation, (2) presenting four core dimensions that business model elements need to cover, (3) arguing for flexibility by adapting and extending business model elements to cater for different purposes and contexts (e.g. technology, innovation, strategy), (4) stressing a more systematic approach to business model archetypes by using business model elements for their description, and (5) suggesting the use of business model archetype research for the empirical exploration and testing of business model elements and their relationships.

  18. Rab11-dependent Recycling of the Human Ether-a-go-go-related Gene (hERG) Channel*

    Chen, Jeffery; Guo, Jun; Yang, Tonghua; Li, Wentao; Lamothe, Shawn M.; Kang, Yudi; Szendrey, John A.; Zhang, Shetuan

    2015-01-01

    The human ether-a-go-go-related gene (hERG) encodes the pore-forming subunit of the rapidly activating delayed rectifier potassium channel (IKr). A reduction in the hERG current causes long QT syndrome, which predisposes affected individuals to ventricular arrhythmias and sudden death. We reported previously that hERG channels in the plasma membrane undergo vigorous internalization under low K+ conditions. In the present study, we addressed whether hERG internalization occurs under normal K+ conditions and whether/how internalized channels are recycled back to the plasma membrane. Using patch clamp, Western blot, and confocal imaging analyses, we demonstrated that internalized hERG channels can effectively recycle back to the plasma membrane. Low K+-enhanced hERG internalization is accompanied by an increased rate of hERG recovery in the plasma membrane upon reculture following proteinase K-mediated clearance of cell-surface proteins. The increased recovery rate is not due to enhanced protein synthesis, as hERG mRNA expression was not altered by low K+ exposure, and the increased recovery was observed in the presence of the protein biosynthesis inhibitor cycloheximide. GTPase Rab11, but not Rab4, is involved in the recycling of hERG channels. Interfering with Rab11 function not only delayed hERG recovery in cells after exposure to low K+ medium but also decreased hERG expression and function in cells under normal culture conditions. We concluded that the recycling pathway plays an important role in the homeostasis of plasma membrane-bound hERG channels. PMID:26152716

  19. SEMIPARAMETRIC VERSUS PARAMETRIC CLASSIFICATION MODELS - AN APPLICATION TO DIRECT MARKETING

    Bult

    In this paper we are concerned with estimation of a classification model using semiparametric and parametric methods. Benefits and limitations of semiparametric models in general, and of Manski's maximum score method in particular, are discussed. The maximum score method yields consistent estimates
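
    Manski's maximum score method mentioned above chooses the coefficient direction that maximizes the fraction of correctly predicted choice signs, without assuming a parametric error distribution. A brute-force sketch on synthetic data (the data-generating process and the angular grid search are illustrative assumptions, not the paper's application):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy binary-choice data: y = 1 if x1*b1 + x2*b2 + noise > 0.
n = 500
X = rng.normal(size=(n, 2))
true_b = np.array([1.0, -0.5])
y = (X @ true_b + rng.normal(scale=0.3, size=n) > 0).astype(int)

# Maximum score: since only the direction of b is identified, search
# unit vectors for the one maximizing the fraction of correct signs.
def score(b):
    return np.mean((X @ b > 0) == (y == 1))

angles = np.linspace(0, 2 * np.pi, 720, endpoint=False)
cands = np.column_stack([np.cos(angles), np.sin(angles)])
best = cands[np.argmax([score(b) for b in cands])]
print(best, score(best))
```

The score function is a step function of b, which is why a grid (or combinatorial) search is used here instead of gradient methods.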

  20. Latent Partially Ordered Classification Models and Normal Mixtures

    Tatsuoka, Curtis; Varadi, Ferenc; Jaeger, Judith

    2013-01-01

    Latent partially ordered sets (posets) can be employed in modeling cognitive functioning, such as in the analysis of neuropsychological (NP) and educational test data. Posets are cognitively diagnostic in the sense that classification states in these models are associated with detailed profiles of cognitive functioning. These profiles allow for…

  1. A classification of open string models

    Nahm, W.

    1985-12-01

    Open string models are classified using modular invariance. No good candidates for new models are found, though the existence of an E8-invariant model in R^(17,1), a similar one in R^(5,1) and of a supersymmetric model in R^(2,1) cannot be excluded by this technique. An intriguing relation between the left-moving and right-moving sectors of the heterotic string emerges. (orig.)

  2. Latent Classification Models for Binary Data

    Langseth, Helge; Nielsen, Thomas Dyhre

    2009-01-01

    One of the simplest, and yet most consistently well-performing set of classifiers is the naive Bayes models (a special class of Bayesian network models). However, these models rely on the (naive) assumption that all the attributes used to describe an instance are conditionally independent given t...
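
    For reference, the naive Bayes baseline for binary data that this work builds on models each attribute as an independent Bernoulli given the class. A minimal sketch of that baseline (the toy data are assumed, and scikit-learn's BernoulliNB is used rather than the authors' latent-variable extension):

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(2)

# Toy binary attributes: class 1 tends to have attributes on,
# class 0 tends to have them off.
n = 400
y = rng.integers(0, 2, size=n)
p_on = np.where(y[:, None] == 1, 0.8, 0.2)  # P(attribute = 1 | class)
X = (rng.random((n, 5)) < p_on).astype(int)

# Naive Bayes: each attribute is an independent Bernoulli given the
# class -- exactly the "naive" assumption the abstract refers to.
clf = BernoulliNB().fit(X, y)
print(clf.predict([[1, 1, 1, 1, 0], [0, 0, 0, 0, 1]]))
```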

  3. Role of the activation gate in determining the extracellular potassium dependency of block of HERG by trapped drugs.

    Pareja, Kristeen; Chu, Elaine; Dodyk, Katrina; Richter, Kristofer; Miller, Alan

    2013-01-01

    Drug induced long QT syndrome (diLQTS) results primarily from block of the cardiac potassium channel HERG (human-ether-a-go-go related gene). In some cases long QT syndrome can result in the lethal arrhythmia torsade de pointes, an arrhythmia characterized by a rapid heart rate and severely compromised cardiac output. Many patients requiring medication present with serum potassium abnormalities due to a variety of conditions including gastrointestinal dysfunction, renal and endocrine disorders, diuretic use, and aging. Extracellular potassium influences HERG channel inactivation and can alter block of HERG by some drugs. However, block of HERG by a number of drugs is not sensitive to extracellular potassium. In this study, we show that block of WT HERG by bepridil and terfenadine, two drugs previously shown to be trapped inside the HERG channel after the channel closes, is insensitive to extracellular potassium over the range of 0 mM to 20 mM. We also show that bepridil block of the HERG mutant D540K, a mutant channel that is unable to trap drugs, is dependent on extracellular potassium, correlates with the permeant ion, and is independent of HERG inactivation. These results suggest that the lack of extracellular potassium dependency of block of HERG by some drugs may in part be related to the ability of these drugs to be trapped inside the channel after the channel closes.

  4. Visualization of Nonlinear Classification Models in Neuroimaging - Signed Sensitivity Maps

    Rasmussen, Peter Mondrup; Schmah, Tanya; Madsen, Kristoffer Hougaard

    2012-01-01

    Classification models are becoming increasing popular tools in the analysis of neuroimaging data sets. Besides obtaining good prediction accuracy, a competing goal is to interpret how the classifier works. From a neuroscientific perspective, we are interested in the brain pattern reflecting...... the underlying neural encoding of an experiment defining multiple brain states. In this relation there is a great desire for the researcher to generate brain maps, that highlight brain locations of importance to the classifiers decisions. Based on sensitivity analysis, we develop further procedures for model...... direction the individual locations influence the classification. We illustrate the visualization procedure on a real data from a simple functional magnetic resonance imaging experiment....

  5. A Dirichlet process mixture model for brain MRI tissue classification.

    Ferreira da Silva, Adelino R

    2007-04-01

    Accurate classification of magnetic resonance images according to tissue type or region of interest has become a critical requirement in diagnosis, treatment planning, and cognitive neuroscience. Several authors have shown that finite mixture models give excellent results in the automated segmentation of MR images of the human normal brain. However, performance and robustness of finite mixture models deteriorate when the models have to deal with a variety of anatomical structures. In this paper, we propose a nonparametric Bayesian model for tissue classification of MR images of the brain. The model, known as Dirichlet process mixture model, uses Dirichlet process priors to overcome the limitations of current parametric finite mixture models. To validate the accuracy and robustness of our method we present the results of experiments carried out on simulated MR brain scans, as well as on real MR image data. The results are compared with similar results from other well-known MRI segmentation methods.
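
    The Dirichlet process mixture idea can be sketched with scikit-learn's truncated variational implementation: over-specify the number of components and let the DP prior switch the surplus ones off, so the effective number of tissue classes is inferred from the data. The synthetic one-dimensional "intensity" data, the tissue labels in the comments, and the 0.05 weight cutoff are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)

# Toy "MR intensities": three well-separated tissue classes.
data = np.concatenate([
    rng.normal(20, 2, 300),   # e.g. CSF
    rng.normal(50, 2, 300),   # e.g. grey matter
    rng.normal(80, 2, 300),   # e.g. white matter
]).reshape(-1, 1)

# Truncated Dirichlet process mixture: 10 components are offered,
# but the DP prior drives the weights of unused ones toward zero.
dpm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=200,
    random_state=0,
).fit(data)

used = int(np.sum(dpm.weights_ > 0.05))  # components with non-negligible weight
print(used)
```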

  6. Interactions between charged residues in the transmembrane segments of the voltage-sensing domain in the hERG channel.

    Zhang, M; Liu, J; Jiang, M; Wu, D-M; Sonawane, K; Guy, H R; Tseng, G-N

    2005-10-01

    Studies on voltage-gated K channels such as Shaker have shown that positive charges in the voltage-sensor (S4) can form salt bridges with negative charges in the surrounding transmembrane segments in a state-dependent manner, and different charge pairings can stabilize the channels in closed or open states. The goal of this study is to identify such charge interactions in the hERG channel. This knowledge can provide constraints on the spatial relationship among transmembrane segments in the channel's voltage-sensing domain, which are necessary for modeling its structure. We first study the effects of reversing S4's positive charges on channel activation. Reversing positive charges at the outer (K525D) and inner (K538D) ends of S4 markedly accelerates hERG activation, whereas reversing the 4 positive charges in between either has no effect or slows activation. We then use the 'mutant cycle analysis' to test whether D456 (outer end of S2) and D411 (inner end of S1) can pair with K525 and K538, respectively. Other positive charges predicted to be able, or unable, to interact with D456 or D411 are also included in the analysis. The results are consistent with predictions based on the distribution of these charged residues, and confirm that there is functional coupling between D456 and K525 and between D411 and K538.
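
    Mutant cycle analysis quantifies such coupling by comparing the double mutant against the two single mutants: if the residues acted independently, the double-mutant effect would be the product of the single-mutant effects. A sketch of the arithmetic with hypothetical equilibrium constants (the numerical values and the "D456K" mutant label are illustrative only, and the sign convention for the coupling energy varies between papers):

```python
import math

# Hypothetical equilibrium constants (e.g. derived from activation
# V1/2 shifts) for a double-mutant cycle: WT, two single mutants,
# and the double mutant. All values are illustrative.
K = {"WT": 1.0, "K525D": 8.0, "D456K": 6.0, "K525D/D456K": 10.0}

# Coupling coefficient Omega measures the deviation from independence;
# Omega = 1 means no functional coupling between the two residues.
omega = (K["WT"] * K["K525D/D456K"]) / (K["K525D"] * K["D456K"])
RT = 0.593  # kcal/mol at ~25 degrees C
ddG = -RT * math.log(omega)  # one common sign convention
print(round(omega, 3), round(ddG, 2))
```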

  7. A Classification of PLC Models and Applications

    Mader, Angelika H.; Boel, R.; Stremersch, G.

    In the past years there is an increasing interest in analysing PLC applications with formal methods. The first step to this end is to get formal models of PLC applications. Meanwhile, various models for PLCs have already been introduced in the literature. In our paper we discuss several

  8. Habitat classification modelling with incomplete data: Pushing the habitat envelope

    Phoebe L. Zarnetske; Thomas C. Edwards; Gretchen G. Moisen

    2007-01-01

    Habitat classification models (HCMs) are invaluable tools for species conservation, land-use planning, reserve design, and metapopulation assessments, particularly at broad spatial scales. However, species occurrence data are often lacking and typically limited to presence points at broad scales. This lack of absence data precludes the use of many statistical...

  9. Modeling and evaluating repeatability and reproducibility of ordinal classifications

    de Mast, J.; van Wieringen, W.N.

    2010-01-01

    This paper argues that currently available methods for the assessment of the repeatability and reproducibility of ordinal classifications are not satisfactory. The paper aims to study whether we can modify a class of models from Item Response Theory, well established for the study of the reliability

  10. New aspects of HERG K⁺ channel function depending upon cardiac spatial heterogeneity.

    Pen Zhang

    HERG K(+) channel, the genetic counterpart of the rapid delayed rectifier K(+) current in cardiac cells, is responsible for many cases of inherited and drug-induced long QT syndromes. HERG has unusual biophysical properties distinct from those of other K(+) channels. While the conventional pulse protocols in patch-clamp studies have helped us elucidate these properties, their limitations in assessing HERG function have also been progressively noticed. We employed AP-clamp techniques using physiological action potential waveforms recorded from various regions of canine heart to study HERG function in HEK293 cells and identified several novel aspects of HERG function. We showed that under AP-clamp, IHERG increased gradually with membrane repolarization, peaked at potentials around 20-30 mV more negative than revealed by pulse protocols and at an action potential duration (APD) corresponding to 60%-70% of full repolarization, and fell rapidly at the terminal phase of repolarization. We found that the rising phase of IHERG was conferred by removal of inactivation and the decaying phase resulted from a fall in driving force, both determined by the rate of membrane repolarization. We identified regional heterogeneity and a transmural gradient of IHERG when quantified by the area under the IHERG trace. In addition, we observed regional and transmural differences in the response of IHERG to dofetilide blockade. Finally, we characterized the influence on HERG function of selective inhibition of other ion currents. Based on our results, we conclude that the distinct biophysical properties of HERG reported by AP-clamp confer its unique function in cardiac repolarization and thereby in antiarrhythmia and arrhythmogenesis.

  11. A Multi-Dimensional Classification Model for Scientific Workflow Characteristics

    Ramakrishnan, Lavanya; Plale, Beth

    2010-04-05

    Workflows have been used to model repeatable tasks or operations in manufacturing, business processes, and software. In recent years, workflows are increasingly used for orchestration of science discovery tasks that use distributed resources and web services environments through resource models such as grid and cloud computing. Workflows have disparate requirements and constraints that affect how they might be managed in distributed environments. In this paper, we present a multi-dimensional classification model illustrated by workflow examples obtained through a survey of scientists from different domains, including bioinformatics and biomedicine, weather and ocean modeling, and astronomy, detailing their data and computational requirements. The survey results and classification model contribute to the high-level understanding of scientific workflows.

  12. Structural classification and a binary structure model for superconductors

    Dong Cheng

    2006-01-01

    Based on structural and bonding features, a new classification scheme for superconductors is proposed. In this scheme, a superconductor can be partitioned into two parts, a superconducting active component and a supplementary component. Partially metallic covalent bonding is found to be a common feature in all superconducting active components, and the electron states of the atoms in the active components usually make a dominant contribution to the energy band near the Fermi surface. Possible directions to explore new superconductors are discussed based on the structural classification and the binary structure model.

  13. A novel assessment of nefazodone-induced hERG inhibition by electrophysiological and stereochemical method

    Shin, Dae-Seop; Park, Myoung Joo [Drug Discovery Platform Technology Research Group, Korea Research Institute of Chemical Technology, Daejeon (Korea, Republic of); Lee, Hyang-Ae [Korea Institute of Toxicology, Korea Research Institute of Chemical Technology, Daejeon (Korea, Republic of); Lee, Joo Yun; Chung, Hee-Chung; Yoo, Dae Seok; Chae, Chong Hak [Drug Discovery Platform Technology Research Group, Korea Research Institute of Chemical Technology, Daejeon (Korea, Republic of); Park, Sang-Joon [College of Veterinary Medicine, Kyungpook National University, Daegu (Korea, Republic of); Kim, Ki-Suk [Korea Institute of Toxicology, Korea Research Institute of Chemical Technology, Daejeon (Korea, Republic of); Bae, Myung Ae, E-mail: mbae@krict.re.kr [Drug Discovery Platform Technology Research Group, Korea Research Institute of Chemical Technology, Daejeon (Korea, Republic of)

    2014-02-01

    Nefazodone was used widely as an antidepressant until it was withdrawn from the U.S. market in 2004 due to hepatotoxicity. We have investigated methods to predict various toxic effects of drug candidates to reduce the failure rate of drug discovery. An electrophysiological method was used to assess the cardiotoxicity of drug candidates. Small molecules, including withdrawn drugs, were evaluated using a patch-clamp method to establish a database of hERG inhibition. Nefazodone inhibited hERG channel activity in our system. However, nefazodone-induced hERG inhibition indicated only a theoretical risk of cardiotoxicity. Nefazodone inhibited the hERG channel in a concentration-dependent manner with an IC50 of 45.3 nM in HEK-293 cells. Nefazodone accelerated both the recovery from inactivation and its onset. Nefazodone also accelerated steady-state inactivation, although it did not modify the voltage-dependent character. Alanine mutants of hERG S6 and pore region residues were used to identify the nefazodone-binding site on hERG. The hERG S6 point mutants Y652A and F656A largely abolished the inhibition by nefazodone. The pore region mutant S624A mildly reduced the inhibition by nefazodone but T623A had little effect. A docking study showed that the aromatic rings of nefazodone interact with Y652 and F656 via π–π interactions, while an amine interacted with the S624 residue in the pore region. In conclusion, Y652 and F656 in the S6 domain play critical roles in nefazodone binding. - Highlights: • Nefazodone inhibits hERG channels with an IC50 of 45.3 nM in HEK-293 cells. • Nefazodone blocks hERG channels by binding to the open channels. • Y652 and F656 are important for binding of nefazodone. • The aromatic rings of nefazodone interact with Y652 and F656 via π–π interactions.
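
    The reported concentration dependence corresponds to a simple Hill relationship between drug concentration and fractional block. A sketch using the reported IC50 of 45.3 nM (the Hill coefficient of 1 is an assumption, not a value from the abstract):

```python
# Fractional block predicted by a Hill equation at the given
# concentration; hill=1 assumes one drug molecule per channel.
def fractional_block(conc_nM, ic50_nM=45.3, hill=1.0):
    return conc_nM**hill / (conc_nM**hill + ic50_nM**hill)

print(round(fractional_block(45.3), 2))   # 0.5: half block at the IC50
print(round(fractional_block(453.0), 2))  # 0.91: ~10x the IC50
```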

  14. Sparse Representation Based Binary Hypothesis Model for Hyperspectral Image Classification

    Yidong Tang

    2016-01-01

    The sparse representation based classifier (SRC) and its kernel version (KSRC) have been employed for hyperspectral image (HSI) classification. However, the state-of-the-art SRC often aims at extended surface objects with linear mixture in smooth scenes and assumes that the number of classes is given. Considering the small target with complex background, a sparse representation based binary hypothesis (SRBBH) model is established in this paper. In this model, a query pixel is represented in two ways, which are, respectively, by the background dictionary and by the union dictionary. The background dictionary is composed of samples selected from the local dual concentric window centered at the query pixel. Thus, for each pixel the classification issue becomes an adaptive multiclass classification problem, where only the number of desired classes is required. Furthermore, the kernel method is employed to improve the interclass separability. In kernel space, the coding vector is obtained by using the kernel-based orthogonal matching pursuit (KOMP) algorithm. Then the query pixel can be labeled by the characteristics of the coding vectors. Instead of directly using the reconstruction residuals, the different impacts the background dictionary and union dictionary have on reconstruction are used for validation and classification. This enhances the discrimination and hence improves the performance.
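
    The underlying sparse-representation classification step (without the kernel and binary-hypothesis refinements of this paper) can be sketched as: code a pixel over a union dictionary with orthogonal matching pursuit, then assign the class whose atoms alone reconstruct it with the smallest residual. The toy spectra and dictionary sizes below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(4)

# Toy "pixel spectra": two classes with different mean spectral shapes.
def samples(mean, n):
    return rng.normal(mean, 0.1, size=(n, 8))

D_bg = samples(np.linspace(0, 1, 8), 10)   # background training atoms
D_tg = samples(np.linspace(1, 0, 8), 10)   # target training atoms
D = np.vstack([D_bg, D_tg]).T              # union dictionary, one atom per column
labels = np.array([0] * 10 + [1] * 10)

def src_label(pixel):
    # Sparse code over the union dictionary, then compare how well each
    # class's atoms alone reconstruct the pixel (smaller residual wins).
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3,
                                    fit_intercept=False).fit(D, pixel)
    w = omp.coef_
    residuals = [np.linalg.norm(pixel - D @ np.where(labels == c, w, 0.0))
                 for c in (0, 1)]
    return int(np.argmin(residuals))

pixel = samples(np.linspace(1, 0, 8), 1)[0]  # a target-like query pixel
print(src_label(pixel))
```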

  15. Early identification of hERG liability in drug discovery programs by automated patch clamp

    Timm Danker

    2014-09-01

    Blockade of the cardiac ion channel encoded by hERG can lead to cardiac arrhythmia, which has become a major concern in drug discovery and development. Automated electrophysiological patch clamp allows assessment of hERG channel effects early in drug development to aid medicinal chemistry programs and has become routine in pharmaceutical companies. However, a number of potential sources of error in setting up hERG channel assays by automated patch clamp can lead to misinterpretation of data or false effects being reported. This article describes protocols for automated electrophysiology screening of compound effects on the hERG channel current. Protocol details and the translation of criteria known from manual patch clamp experiments to automated patch clamp experiments to achieve good quality data are emphasized. Typical pitfalls and artifacts that may lead to misinterpretation of data are discussed. While this article focuses on hERG channel recordings using the QPatch technology (Sophion A/S, Copenhagen, Denmark), many of the assay and protocol details given in this article can be transferred to setting up different ion channel assays by automated patch clamp and are similar on other planar patch clamp platforms.

  16. Endocytosis of HERG is clathrin-independent and involves arf6.

    Rucha Karnik

    The hERG potassium channel is critical for repolarisation of the cardiac action potential. Reduced expression of hERG at the plasma membrane, whether caused by hereditary mutations or drugs, results in long QT syndrome and increases the risk of ventricular arrhythmias. Thus, it is of fundamental importance to understand how the density of this channel at the plasma membrane is regulated. We used antibodies to an extracellular native or engineered epitope, in conjunction with immunofluorescence and ELISA, to investigate the mechanism of hERG endocytosis in recombinant cells and validated the findings in rat neonatal cardiac myocytes. The data reveal that this channel undergoes rapid internalisation, which is inhibited by neither dynasore, an inhibitor of dynamin, nor a dominant negative construct of Rab5a, into endosomes that are largely devoid of the transferrin receptor. These results support a clathrin-independent mechanism of endocytosis and exclude involvement of dynamin-dependent caveolin and RhoA mechanisms. In agreement, internalised hERG displayed marked overlap with glycosylphosphatidylinositol-anchored GFP, a clathrin-independent cargo. Endocytosis was significantly affected by cholesterol extraction with methyl-β-cyclodextrin and inhibition of Arf6 function with dominant negative Arf6-T27N-eGFP. Taken together, we conclude that hERG undergoes clathrin-independent endocytosis via a mechanism involving Arf6.

  18. A strategy learning model for autonomous agents based on classification

    Śnieżyński Bartłomiej

    2015-09-01

    In this paper we propose a strategy learning model for autonomous agents based on classification. In the literature, the most commonly used learning method in agent-based systems is reinforcement learning. In our opinion, classification can be considered a good alternative. This type of supervised learning can be used to generate a classifier that allows the agent to choose an appropriate action for execution. Experimental results show that this model can be successfully applied for strategy generation even if rewards are delayed. We compare the efficiency of the proposed model and reinforcement learning using the farmer-pest domain and configurations of various complexity. In complex environments, supervised learning can improve the performance of agents much faster than reinforcement learning. If an appropriate knowledge representation is used, the learned knowledge may be analyzed by humans, which allows tracking of the learning process.

  19. Irresponsiveness of two retinoblastoma cases to conservative therapy correlates with up- regulation of hERG1 channels and of the VEGF-A pathway

    La Torre Agostino

    2010-09-01

    Background: Treatment strategies for retinoblastoma (RB), the most common primary intraocular tumor in children, have evolved over the past few decades, and chemoreduction is currently the most popular treatment strategy. Despite success, systemic chemotherapeutic treatment has relevant toxicity, especially in the pediatric population. Antiangiogenic therapy has thus been proposed as a valuable alternative for pediatric malignancies, in particular RB. Indeed, it has been shown that vessel density correlates with both local invasive growth and presence of metastases in RB, suggesting that angiogenesis could play a pivotal role for both local and systemic invasive growth in RB. We present here two cases of sporadic, bilateral RB that did not benefit from conservative treatment, and we provide evidence that the VEGF-A pathway is significantly up-regulated in both RB cases along with an overexpression of hERG1 K+ channels. Case presentation: Two patients showed a sporadic, bilateral RB, classified at Stage II of the Reese-Ellsworth classification. Neither benefited from conservative treatment, and the two eyes were enucleated. In samples from both RB cases we studied the VEGF-A pathway: VEGF-A showed high levels in the vitreous, and the vegf-a, flt-1, kdr, and hif1-α transcripts were over-expressed. Moreover, both the transcripts and proteins of the hERG1 K+ channels turned out to be up-regulated in the two RB cases compared to the non-cancerous retinal tissue. Conclusions: We provide evidence that the VEGF-A pathway is up-regulated in two particularly aggressive cases of bilateral RB, which did not experience any benefit from conservative treatment, showing the overexpression of the vegf-a, flt-1, kdr and hif1-α transcripts and the high secretion of VEGF-A. Moreover, we also show for the first time that the herg1 gene transcripts and protein are over-expressed in RB, as occurs in several aggressive tumors. These results further stress

  20. Music Genre Classification using an Auditory Memory Model

    Jensen, Kristoffer

    2011-01-01

    Audio feature estimation is potentially improved by including higher-level models. One such model is the Auditory Short Term Memory (STM) model. A new paradigm of audio feature estimation is obtained by adding the influence of notes in the STM. These notes are identified when the perceptual...... results, and an initial experiment with sensory dissonance has been undertaken with good results. The parameters obtained from the auditory memory model, along with the dissonance measure, are shown here to be of interest in genre classification....

  1. A Classification Methodology and Retrieval Model to Support Software Reuse

    1988-01-01

    Dewey Decimal Classification (DDC 18), an enumerative scheme, occupies 40 pages [Buchanan 1979]. Langridge [1973] states that the facets listed in the... sense of historical importance or widespread use. The schemes are: Dewey Decimal Classification (DDC), Universal Decimal Classification (UDC), ...

  2. Classification of customer lifetime value models using Markov chain

    Permana, Dony; Pasaribu, Udjianna S.; Indratno, Sapto W.; Suprayogi

    2017-10-01

    A firm’s potential future reward from a customer can be determined by customer lifetime value (CLV). There are several mathematical methods to calculate it. One method uses a Markov chain stochastic model. Here, a customer is assumed to pass through a number of states, and transitions between the states follow the Markov property. If we are given the states for a customer and the relationships between the states, then we can build Markov models to describe the behavior of the customer. In these Markov models, CLV is defined as a vector containing the CLV for a customer in each initial state. In this paper we make a classification of Markov models to calculate CLV. Starting from a two-state customer model, we develop models with more states, where each development is based on weaknesses in the previous model. The final models can be expected to describe the real characteristics of customers in a firm.
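
    For the infinite-horizon discounted case, the CLV vector solves V = r + dPV, i.e. V = (I - dP)^(-1) r, with one entry per initial customer state. A minimal two-state sketch (all transition probabilities, per-period profits, and the discount rate are assumed values, not taken from the paper):

```python
import numpy as np

# Two-state customer model: state 0 = active, state 1 = lapsed.
P = np.array([[0.8, 0.2],    # one-period transition probabilities
              [0.3, 0.7]])
r = np.array([100.0, 0.0])   # expected profit earned in each state
d = 1 / 1.1                  # discount factor for a 10% discount rate

# Infinite-horizon CLV vector: V = r + d P V  =>  (I - d P) V = r.
V = np.linalg.solve(np.eye(2) - d * P, r)
print(V.round(2))  # CLV is higher for customers who start active
```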

  3. A Pruning Neural Network Model in Credit Classification Analysis

    Yajiao Tang

    2018-01-01

    Nowadays, credit classification models are widely applied because they can help financial decision-makers to handle credit classification issues. Among them, artificial neural networks (ANNs) have been widely accepted as convincing methods in the credit industry. In this paper, we propose a pruning neural network (PNN) and apply it to solve the credit classification problem using the well-known Australian and Japanese credit datasets. The model is inspired by the synaptic nonlinearity of a dendritic tree in a biological neural model, and it is trained by an error back-propagation algorithm. The model is capable of realizing a neuronal pruning function by removing superfluous synapses and useless dendrites, and it forms a tidy dendritic morphology at the end of learning. Furthermore, we utilize logic circuits (LCs) to simulate the dendritic structures successfully, which allows PNN to be implemented effectively in hardware. The statistical results of our experiments have verified that PNN obtains superior performance in comparison with other classical algorithms in terms of accuracy and computational efficiency.
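
    The pruning idea can be illustrated with plain magnitude pruning of a small trained network: train, zero out the weakest synapses, and check that accuracy is retained. This is a generic stand-in on toy data, not the dendritic PNN model or its logic-circuit implementation.

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neural_network import MLPClassifier

# Easily separable toy "credit" data with two classes.
X, y = make_blobs(n_samples=400, centers=[[-3, -3], [3, 3]],
                  cluster_std=1.0, random_state=5)
clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=5).fit(X, y)
before = clf.score(X, y)

# Prune: zero out the smallest 40% of weights (by magnitude) per layer,
# analogous to removing superfluous synapses after learning.
for W in clf.coefs_:
    cut = np.quantile(np.abs(W), 0.4)
    W[np.abs(W) < cut] = 0.0

after = clf.score(X, y)
print(round(before, 2), round(after, 2))
```

On redundant, well-separated data the pruned network typically keeps nearly all of its accuracy, which is the intuition behind pruning approaches.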

  4. Group-Based Active Learning of Classification Models.

    Luo, Zhipeng; Hauskrecht, Milos

    2017-05-01

    Learning of classification models from real-world data often requires additional human expert effort to annotate the data. However, this process can be rather costly and finding ways of reducing the human annotation effort is critical for this task. The objective of this paper is to develop and study new ways of providing human feedback for efficient learning of classification models by labeling groups of examples. Briefly, unlike traditional active learning methods that seek feedback on individual examples, we develop a new group-based active learning framework that solicits label information on groups of multiple examples. In order to describe groups in a user-friendly way, conjunctive patterns are used to compactly represent groups. Our empirical study on 12 UCI data sets demonstrates the advantages and superiority of our approach over both classic instance-based active learning work, as well as existing group-based active-learning methods.

  5. A Novel Computer Virus Propagation Model under Security Classification

    Qingyi Zhu

    2017-01-01

In reality, some computers have a specific security classification. For the sake of safety and cost, the security level of computers is upgraded as threats in the network increase. Here we assume that there exists a threshold value which determines when countermeasures should be taken to raise the security of a fraction of computers with a low security level. In some specific realistic environments, the propagation network can be regarded as fully interconnected. Inspired by these facts, this paper presents a novel computer virus dynamics model that considers the impact of security classification in a fully interconnected network. Using the theory of dynamic stability, the existence of equilibria and the stability conditions are analysed and proved, and the optimal threshold value is given analytically. Numerical experiments are then made to justify the model, and some discussions and antivirus measures are given.
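The core mechanism can be sketched with a toy Euler simulation (not the paper's equations): infection spreads faster to low-security nodes, and once the infected fraction crosses a threshold, part of the low-security population is upgraded. All rates and initial fractions below are assumed values for illustration.

```python
# Toy threshold-triggered virus dynamics: fractions of low-security susceptible,
# high-security susceptible, and infected computers; cured machines return to
# the low-security pool, so the three fractions always sum to one.

def simulate(steps=3000, dt=0.01, beta_low=0.5, beta_high=0.05,
             gamma=0.1, threshold=0.3, upgrade_rate=0.5):
    low, high, infected = 0.8, 0.19, 0.01
    peak = 0.0
    for _ in range(steps):
        infect = (beta_low * low + beta_high * high) * infected
        cure = gamma * infected
        upgrade = upgrade_rate * low if infected > threshold else 0.0
        low += dt * (cure - beta_low * low * infected - upgrade)
        high += dt * (upgrade - beta_high * high * infected)
        infected += dt * (infect - cure)
        peak = max(peak, infected)
    return low, high, infected, peak

low, high, infected, peak = simulate()
assert abs(low + high + infected - 1.0) < 1e-6   # population is conserved
assert peak > 0.3                                # outbreak triggers the upgrade
```

Once the threshold is crossed, upgrades drain the low-security pool and transmission through the remaining high-security machines is too weak to sustain the outbreak.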

  6. Various forms of indexing HDMR for modelling multivariate classification problems

    Aksu, Çağrı [Bahçeşehir University, Information Technologies Master Program, Beşiktaş, 34349 İstanbul (Turkey); Tunga, M. Alper [Bahçeşehir University, Software Engineering Department, Beşiktaş, 34349 İstanbul (Turkey)

    2014-12-10

The Indexing HDMR method was recently developed for modelling multivariate interpolation problems. The method uses the Plain HDMR philosophy of partitioning the given multivariate data set into less-variate data sets and then constructing an analytical structure through these partitioned data sets to represent the given multidimensional problem. Indexing HDMR makes HDMR applicable to classification problems with real-world data. Mostly, we do not know all possible class values in the domain of the given problem, that is, we have a non-orthogonal data structure, whereas Plain HDMR needs an orthogonal data structure in the problem to be modelled. In this sense, the main idea of this work is to offer various forms of Indexing HDMR to successfully model these real-life classification problems. To test these different forms, several well-known multivariate classification problems from the UCI Machine Learning Repository were used, and it was observed that the accuracy results lie between 80% and 95%, which is very satisfactory.
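The Plain HDMR partitioning that Indexing HDMR builds on can be sketched on a complete (orthogonal) 2-D grid: approximate f(x1, x2) ≈ f0 + f1(x1) + f2(x2), where f0 is the grand mean and the fi are mean-centred conditional averages. The grid and test function below are illustrative assumptions; Indexing HDMR adds the bookkeeping needed for non-orthogonal real-world data.

```python
# First-order (Plain) HDMR sketch on a complete 2-D grid.

xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0]
f = {(x, y): 2.0 * x + 3.0 * y for x in xs for y in ys}   # additive test function

f0 = sum(f.values()) / len(f)                              # constant term
f1 = {x: sum(f[(x, y)] for y in ys) / len(ys) - f0 for x in xs}   # x1 component
f2 = {y: sum(f[(x, y)] for x in xs) / len(xs) - f0 for y in ys}   # x2 component

# For an additive function, the first-order expansion reproduces f exactly.
assert all(abs(f0 + f1[x] + f2[y] - f[(x, y)]) < 1e-12 for x in xs for y in ys)
```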

  7. Fuzzy classification of phantom parent groups in an animal model

    Fikse Freddy

    2009-09-01

Abstract Background Genetic evaluation models often include genetic groups to account for the unequal genetic level of animals with unknown parentage. The definition of phantom parent groups usually includes a time component (e.g. years). Combining several time periods to ensure sufficiently large groups may create problems since all phantom parents in a group are considered contemporaries. Methods To avoid the downside of such distinct classification, a fuzzy logic approach is suggested. A phantom parent can be assigned to several genetic groups, with proportions between zero and one that sum to one. Rules are presented for assigning coefficients to the inverse of the relationship matrix for fuzzy-classified genetic groups. This approach was illustrated with simulated data from ten generations of mass selection. Observations and pedigree records were randomly deleted. Phantom parent groups were defined on the basis of gender and generation number. In one scenario, uncertainty about generation of birth was simulated for some animals with unknown parents. In the distinct classification, one of the two possible generations of birth was randomly chosen to assign phantom parents to genetic groups for animals with simulated uncertainty, whereas in the fuzzy classification the phantom parents were assigned to both possible genetic groups. Results The empirical prediction error variance (PEV) was somewhat lower for fuzzy-classified genetic groups. The ranking of animals with unknown parents was more correct and less variable across replicates in comparison with distinct genetic groups. In another scenario, each phantom parent was assigned to three groups, one pertaining to its gender, and two pertaining to the first and last generation, with proportions depending on the (true) generation of birth. Due to the lower number of groups, the empirical PEV of breeding values was smaller when genetic groups were fuzzy-classified. Conclusion Fuzzy-classification
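The fuzzy-classification idea can be sketched in a few lines: instead of forcing a phantom parent with uncertain birth generation into one genetic group, assign it fractional memberships that sum to one. The linear-in-generation interpolation rule below matches the spirit of the second scenario (first- and last-generation groups) but is an illustrative assumption, not the paper's exact coefficients.

```python
# Fuzzy membership of one phantom parent in first- and last-generation groups.

def fuzzy_groups(generation, first_gen, last_gen):
    """Fractional memberships, linear in the generation of birth."""
    w = (generation - first_gen) / (last_gen - first_gen)
    return {"first": 1.0 - w, "last": w}

m = fuzzy_groups(generation=3, first_gen=1, last_gen=9)
assert abs(sum(m.values()) - 1.0) < 1e-12   # proportions sum to one
assert m == {"first": 0.75, "last": 0.25}
```

These proportions are exactly what enter the relationship-matrix inverse in place of the 0/1 coefficients of a distinct classification.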

  8. ISBDD Model for Classification of Hyperspectral Remote Sensing Imagery

    Na Li

    2018-03-01

The diverse density (DD) algorithm was proposed to handle the problem of low classification accuracy when training samples contain interference such as mixed pixels. The DD algorithm can learn a feature vector from training bags, which comprise instances (pixels). However, the feature vector learned by the DD algorithm cannot always effectively represent one type of ground cover. To handle this problem, an instance space-based diverse density (ISBDD) model that employs a novel training strategy is proposed in this paper. In the ISBDD model, the DD values of each pixel are computed instead of learning a feature vector, and as a result, the pixel can be classified according to its DD values. Airborne hyperspectral data collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) sensor and the Push-broom Hyperspectral Imager (PHI) are applied to evaluate the performance of the proposed model. Results show that the overall classification accuracy of the ISBDD model on the AVIRIS and PHI images is up to 97.65% and 89.02%, respectively, while the kappa coefficient is up to 0.97 and 0.88, respectively.
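A compact sketch of the classic noisy-or diverse density formulation that the DD algorithm maximizes (and whose values the ISBDD model evaluates per pixel): a candidate concept point scores high when it is close to some instance in every positive bag and far from all negative bags. Feature scaling is omitted for brevity, and the toy bags below are made up.

```python
import math

def p_in_bag(t, bag):
    """Noisy-or probability that concept point t 'explains' some instance in bag."""
    prod = 1.0
    for inst in bag:
        d2 = sum((a - b) ** 2 for a, b in zip(inst, t))
        prod *= 1.0 - math.exp(-d2)
    return 1.0 - prod

def diverse_density(t, positive_bags, negative_bags):
    dd = 1.0
    for bag in positive_bags:
        dd *= p_in_bag(t, bag)
    for bag in negative_bags:
        dd *= 1.0 - p_in_bag(t, bag)
    return dd

pos = [[(0.0, 0.0), (5.0, 5.0)], [(0.1, -0.1), (4.0, 6.0)]]
neg = [[(5.0, 5.0), (6.0, 4.0)]]
# A point near (0, 0), present in every positive bag and far from the negative
# bag, scores higher than a point near the negative instances.
assert diverse_density((0.0, 0.0), pos, neg) > diverse_density((5.0, 5.0), pos, neg)
```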

  9. MODEL-BASED CLUSTERING FOR CLASSIFICATION OF AQUATIC SYSTEMS AND DIAGNOSIS OF ECOLOGICAL STRESS

    Clustering approaches were developed using the classification likelihood, the mixture likelihood, and also using a randomization approach with a model index. Using a clustering approach based on the mixture and classification likelihoods, we have developed an algorithm that...

  10. Learning Supervised Topic Models for Classification and Regression from Crowds.

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete; Pereira, Francisco C

    2017-12-01

The growing need to analyze large collections of documents has led to great developments in topic modeling. Since documents are frequently associated with other related variables, such as labels or ratings, much interest has been placed on supervised topic models. However, the nature of most annotation tasks, prone to ambiguity and noise, often with high volumes of documents, makes learning under a single-annotator assumption unrealistic or impractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages of the proposed model over state-of-the-art approaches.

  11. In Silico Predictions of hERG Channel Blockers in Drug Discovery

    Taboureau, Olivier; Sørensen, Flemming Steen

    2011-01-01

    The risk for cardiotoxic side effects represents a major problem in clinical studies of drug candidates and regulatory agencies have explicitly recommended that all new drug candidates should be tested for blockage of the human Ether-a-go-go Related-Gene (hERG) potassium channel. Indeed, several ...

  12. Tintin's Catholic Youth. The Catholic Background of Hergé. Part 1 of 2

    de Groot, C.N.

    2013-01-01

Following the launch of Steven Spielberg’s ‘Tintin and the Secret of the Unicorn’, L’Osservatore Romano hailed Tintin as a ‘catholic hero’. This article demonstrates that the comic originates, more specifically, in conservative and reactionary milieus in Belgian Catholicism. Hergé (ps. for Georges

  13. A radiolabeled peptide ligand of the hERG channel, [125I]-BeKm-1

    Angelo, Kamilla; Korolkova, Yuliya V; Grunnet, Morten

    2003-01-01

    The wild-type scorpion toxin BeKm-1, which selectively blocks human ether-a-go-go related (hERG) channels, was radiolabeled with iodine at tyrosine 11. Both the mono- and di-iodinated derivatives were found to be biologically active. In electrophysiological patch-clamp recordings mono-[127I]-BeKm-1...... had a concentration of half-maximal inhibition (IC50 value) of 27 nM, while wild-type BeKm-1 inhibited hERG channels with an IC50 value of 7 nM. Mono-[125I]-BeKm-1 was found to bind in a concentration-dependent manner and with picomolar affinity to hERG channel protein in purified membrane vesicles...... of [125I]-BeKm-1 to the hERG channel to an IC50 of 7 nM. In autoradiographic studies on rat hearts, binding of [125I]-BeKm-1 was dose-dependent and could partially be displaced by the addition of excess amounts of non-radioactive BeKm-1. The density of the radioactive signal was equally distributed...

  14. A molecular switch driving inactivation in the cardiac K+ channel HERG.

    David A Köpfer

K(+) channels control transmembrane action potentials by gating open or closed in response to external stimuli. Inactivation gating, involving a conformational change at the K(+) selectivity filter, has recently been recognized as a major K(+) channel regulatory mechanism. In the K(+) channel hERG, inactivation controls the length of the human cardiac action potential. Mutations impairing hERG inactivation cause life-threatening cardiac arrhythmias, which also occur as undesired side effects of drugs. In this paper, we report atomistic molecular dynamics simulations, complemented by mutational and electrophysiological studies, which suggest that the selectivity filter adopts a collapsed conformation in the inactivated state of hERG. The selectivity filter is gated by an intricate hydrogen bond network around residues S620 and N629. Mutations of this hydrogen bond network are shown to cause inactivation deficiency in electrophysiological measurements. In addition, drug-related conformational changes around the central cavity and pore helix provide a functional mechanism for newly discovered hERG activators.

  15. A Categorical Framework for Model Classification in the Geosciences

    Hauhs, Michael; Trancón y Widemann, Baltasar; Lange, Holger

    2016-04-01

Models have a mixed record of success in the geosciences. In meteorology, model development and implementation have been among the first and most successful examples of triggering computer technology in science. On the other hand, notorious problems such as the 'equifinality issue' in hydrology lead to a rather mixed reputation of models in other areas. The most successful models in geosciences are applications of dynamic systems theory to non-living systems or phenomena. Thus, we start from the hypothesis that the success of model applications relates to the influence of life on the phenomenon under study, and we focus on the (formal) representation of life in models. The aim is to investigate whether disappointment in model performance is due to system properties such as the heterogeneity and historicity of ecosystems, or rather reflects an abstraction and formalisation problem at a fundamental level. As a formal framework for this investigation, we use category theory as applied in computer science to specify behaviour at an interface. Its methods have been developed for translating and comparing formal structures among different application areas and seem highly suited for a classification of the current 'model zoo' in the geosciences. The approach is rather abstract, with a high degree of generality but a low level of expressibility. Here, category theory will be employed to check the consistency of assumptions about life in different models. It will be shown that it is sufficient to distinguish just four logical cases to check for consistency of model content. All four cases can be formalised as variants of coalgebra-algebra homomorphisms. It can be demonstrated that transitions between the four variants affect the relevant observations (time series or spatial maps), the formalisms used (equations, decision trees) and the test criteria of success (prediction, classification) of the resulting model types. We will present examples from hydrology and ecology in

  16. Learning Supervised Topic Models for Classification and Regression from Crowds

    Rodrigues, Filipe; Lourenco, Mariana; Ribeiro, Bernardete

    2017-01-01

    problems, which account for the heterogeneity and biases among different annotators that are encountered in practice when learning from crowds. We develop an efficient stochastic variational inference algorithm that is able to scale to very large datasets, and we empirically demonstrate the advantages...... annotation tasks, prone to ambiguity and noise, often with high volumes of documents, deem learning under a single-annotator assumption unrealistic or unpractical for most real-world applications. In this article, we propose two supervised topic models, one for classification and another for regression...

  17. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    Najibi, Seyed Morteza

    2017-02-08

Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which is more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective on two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  18. Protein Structure Classification and Loop Modeling Using Multiple Ramachandran Distributions

    Najibi, Seyed Morteza; Maadooliat, Mehdi; Zhou, Lan; Huang, Jianhua Z.; Gao, Xin

    2017-01-01

Recently, the study of protein structures using angular representations has attracted much attention among structural biologists. The main challenge is how to efficiently model the continuous conformational space of the protein structures based on the differences and similarities between different Ramachandran plots. Despite the presence of statistical methods for modeling angular data of proteins, there is still a substantial need for more sophisticated and faster statistical tools to model large-scale circular datasets. To address this need, we have developed a nonparametric method for collective estimation of multiple bivariate density functions for a collection of populations of protein backbone angles. The proposed method takes into account the circular nature of the angular data using trigonometric splines, which is more efficient than existing methods. This collective density estimation approach is widely applicable when there is a need to estimate multiple density functions from different populations with common features. Moreover, the coefficients of adaptive basis expansion for the fitted densities provide a low-dimensional representation that is useful for visualization, clustering, and classification of the densities. The proposed method provides a novel and unique perspective on two important and challenging problems in protein structure research: structure-based protein classification and angular-sampling-based protein loop structure prediction.

  19. Likelihood ratio model for classification of forensic evidence

    Zadora, G., E-mail: gzadora@ies.krakow.pl [Institute of Forensic Research, Westerplatte 9, 31-033 Krakow (Poland); Neocleous, T., E-mail: tereza@stats.gla.ac.uk [University of Glasgow, Department of Statistics, 15 University Gardens, Glasgow G12 8QW (United Kingdom)

    2009-05-29

One of the problems of analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and(or) within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allows to include information about between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RI_b) and after (RI_a) the annealing process, in the form of dRI = log10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered.
In addition, when dRI and variables derived from elemental compositions were used, this
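The LR idea can be shown with a toy numeric sketch: evaluate the same evidence under two hypothesis-specific densities. Real casework uses multivariate kernel densities for between-object variability; a univariate normal is used here purely for illustration, and all parameter values are made up.

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(evidence, h1, h2):
    """LR = p(E|H1) / p(E|H2), with each hypothesis given as (mu, sigma)."""
    return normal_pdf(evidence, *h1) / normal_pdf(evidence, *h2)

# Evidence: the annealing-based variable dRI = log10|RI_a - RI_b|,
# using invented refractive-index readings.
dRI = math.log10(abs(1.5192 - 1.5180))
lr = likelihood_ratio(dRI, h1=(-3.0, 0.3), h2=(-4.0, 0.3))
assert lr > 1.0   # this evidence supports H1 over H2
```

An LR above 1 supports H1 (e.g. "window glass"), below 1 supports H2; the magnitude expresses the strength of that support.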

  20. Likelihood ratio model for classification of forensic evidence

    Zadora, G.; Neocleous, T.

    2009-01-01

One of the problems of analysis of forensic evidence such as glass fragments is the determination of their use-type category, e.g. does a glass fragment originate from an unknown window or container? Very small glass fragments arise during various accidents and criminal offences, and could be carried on the clothes, shoes and hair of participants. It is therefore necessary to obtain information on their physicochemical composition in order to solve the classification problem. Scanning Electron Microscopy coupled with an Energy Dispersive X-ray Spectrometer and the Glass Refractive Index Measurement method are routinely used in many forensic institutes for the investigation of glass. A natural form of glass evidence evaluation for forensic purposes is the likelihood ratio LR = p(E|H1)/p(E|H2). The main aim of this paper was to study the performance of LR models for glass object classification which considered one or two sources of data variability, i.e. between-glass-object variability and(or) within-glass-object variability. Within the proposed model a multivariate kernel density approach was adopted for modelling the between-object distribution and a multivariate normal distribution was adopted for modelling within-object distributions. Moreover, a graphical method of estimating the dependence structure was employed to reduce the highly multivariate problem to several lower-dimensional problems. The performed analysis showed that the best likelihood model was the one which allows to include information about between- and within-object variability, with variables derived from elemental compositions measured by SEM-EDX, and refractive index values determined before (RI_b) and after (RI_a) the annealing process, in the form of dRI = log10|RI_a - RI_b|. This model gave better results than the model with only between-object variability considered.
In addition, when dRI and variables derived from elemental compositions were used, this model outperformed two other

  1. Models of parallel computation :a survey and classification

    ZHANG Yunquan; CHEN Guoliang; SUN Guangzhong; MIAO Qiankun

    2007-01-01

In this paper, the state-of-the-art parallel computational model research is reviewed. We introduce various models that were developed during the past decades. According to their target architecture features, especially memory organization, we classify these parallel computational models into three generations, and discuss the models and their characteristics based on this three-generation classification. We believe that with the ever-increasing speed gap between the CPU and memory systems, incorporating a non-uniform memory hierarchy into computational models will become unavoidable. With the emergence of multi-core CPUs, the parallelism hierarchy of current computing platforms becomes more and more complicated, so describing this complicated parallelism hierarchy in future computational models becomes more and more important. A semi-automatic toolkit that can extract model parameters and their values on real computers can reduce the model analysis complexity, thus allowing more complicated models with more parameters to be adopted. Hierarchical memory and hierarchical parallelism will be two very important features that should be considered in future model design and research.

  2. Bone Turnover Status: Classification Model and Clinical Implications

    Fisher, Alexander; Fisher, Leon; Srikusalanukul, Wichat; Smith, Paul N

    2018-01-01

Aim: To develop a practical model for classification of bone turnover status and evaluate its clinical usefulness. Methods: Our classification of bone turnover status is based on internationally recommended biomarkers of both bone formation (N-terminal propeptide of type 1 procollagen, P1NP) and bone resorption (beta C-terminal cross-linked telopeptide of type I collagen, bCTX), using the cutoffs proposed as therapeutic targets. The relationships between turnover subtypes and clinical characteristics were assessed in 1223 hospitalised orthogeriatric patients (846 women, 377 men; mean age 78.1±9.50 years): 451 (36.9%) subjects with hip fracture (HF), 396 (32.4%) with other non-vertebral (non-HF) fractures and 376 (30.7%) patients without fractures. Results: Six subtypes of bone turnover status were identified: 1 - normal turnover (P1NP>32 μg/L, bCTX≤0.250 μg/L and P1NP/bCTX>100.0 (median value)); 2 - low bone formation (P1NP≤32 μg/L), normal bone resorption (bCTX≤0.250 μg/L) and P1NP/bCTX>100.0 (subtype 2A) or P1NP/bCTX<100.0 ... (bCTX>0.250 μg/L) and P1NP/bCTX<100.0 ... turnover (both markers elevated) and P1NP/bCTX>100.0 (subtype 4A) or P1NP/bCTX<100.0 ... age>75 years and hyperparathyroidism. Hypoalbuminaemia and not using osteoporotic therapy were two independent indicators common for subtypes 3, 4A and 4B; these three subtypes were associated with in-hospital mortality. Subtype 3 was associated with fractures (OR 1.7, for HF OR 2.4), age>75 years, chronic heart failure (CHF), anaemia, and history of malignancy, and predicted post-operative myocardial injury, high inflammatory response and length of hospital stay (LOS) above 10 days. Subtype 4A was associated with chronic kidney disease (CKD), anaemia, history of malignancy and walking aids use and predicted LOS>20 days, but was not discriminative for fractures. Subtype 4B was associated with fractures (OR 2.1, for HF OR 2.5), age>75 years, CKD and indicated risks of myocardial injury, high inflammatory response and LOS>10 days. Conclusions: We

  3. Best Practices in Academic Management. Study Programs Classification Model

    Ofelia Ema Aleca

    2016-05-01

This article proposes and tests a set of performance indicators for the assessment of Bachelor and Master studies, from two perspectives: the study programs and the disciplines. The academic performance at the level of a study program is calculated based on success and efficiency rates, and at discipline level on the basis of rates of efficiency, success and absenteeism. This research proposes a model for classification of the study programs within a Bachelor and Master cycle based on education performance and efficiency. What recommends this model as a best practice in academic management is the possibility of grouping a study program or a discipline into a particular category of efficiency.

  4. Data on the construction of a recombinant HEK293 cell line overexpressing hERG potassium channel and examining the presence of hERG mRNA and protein expression

    Yi Fan Teah

    2017-10-01

The data presented in this article are related to the research article entitled “The effects of deoxyelephantopin on the cardiac delayed rectifier potassium channel current (IKr) and human ether-a-go-go-related gene (hERG) expression” (Y.F. Teah, M.A. Abduraman, A. Amanah, M.I. Adenan, S.F. Sulaiman, M.L. Tan [1]), in which the possible hERG-blocking properties of deoxyelephantopin were investigated. This article describes the construction of a human embryonic kidney 293 (HEK293) cell line overexpressing the hERG potassium channel and verification of the presence of hERG mRNA and protein expression in this recombinant cell line.

  5. Effect of beta-adrenoceptor blockers on human ether-a-go-go-related gene (HERG) potassium channels

    Dupuis, Delphine S; Klaerke, Dan A; Olesen, Søren-Peter

    2005-01-01

    Patients with congenital long QT syndrome may develop arrhythmias under conditions of increased sympathetic tone. We have addressed whether some of the beta-adrenoceptor blockers commonly used to prevent the development of these arrhythmias could per se block the cardiac HERG (Human Ether....... These data showed that HERG blockade by beta-adrenoceptor blockers occurred only at high micromolar concentrations, which are significantly above the recently established safe margin of 100 (Redfern et al., 2003).......-1H-inden-4-yl)oxy]-3-[(1-methylethyl)amino]-2-butanol hydrochloride) blocked the HERG channel with similar affinity, whereas the beta1-receptor antagonists metoprolol and atenolol showed weak effects. Further, the four compounds blocked HERG channels expressed in a mammalian HEK293 cell line...

  6. Effects of Tannic Acid, Green Tea and Red Wine on hERG Channels Expressed in HEK293 Cells.

    Xi Chu

Tannic acid is present in varying concentrations in plant foods, and in relatively high concentrations in green teas and red wines. Human ether-à-go-go-related gene (hERG) channels are expressed in multiple tissues (e.g. heart, neurons, smooth muscle and cancer cells), and play important roles in modulating cardiac action potential repolarization and tumor cell biology. The present study investigated the effects of tannic acid, green teas and red wines on hERG currents. The effects of tannic acid, teas and red wines on hERG currents stably transfected in HEK293 cells were studied with a perforated patch clamp technique. In this study, we demonstrated that tannic acid inhibited hERG currents with an IC50 of 3.4 μM and ~100% inhibition at higher concentrations, and significantly shifted the voltage-dependent activation to more positive potentials (Δ23.2 mV). Remarkably, a 100-fold dilution of multiple types of tea (green tea, oolong tea and black tea) or red wine inhibited hERG currents by ~90%, and significantly shifted the voltage-dependent activation to more positive potentials (Δ30.8 mV and Δ26.0 mV, respectively). Green tea Lung Ching and red wine inhibited hERG currents, with IC50 values of 0.04% and 0.19%, respectively. The effects of tannic acid, teas and red wine on hERG currents were irreversible. These results suggest tannic acid is a novel hERG channel blocker and consequently provide new mechanistic evidence for understanding the effects of tannic acid. They also reveal the potential pharmacological basis of tea- and red wine-induced biological activities.
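An IC50 translates into fractional current block via a concentration-response relation; the sketch below assumes a standard one-site Hill equation with Hill coefficient 1. The paper reports IC50 = 3.4 μM for tannic acid on hERG, but the equation form here is a generic assumption, not fitted to their data.

```python
# Fraction of hERG current blocked at a given blocker concentration,
# under a one-site Hill model: block = 1 / (1 + (IC50 / C)^h).

def fraction_blocked(conc_um, ic50_um, hill=1.0):
    return 1.0 / (1.0 + (ic50_um / conc_um) ** hill)

assert abs(fraction_blocked(3.4, 3.4) - 0.5) < 1e-12   # by definition: 50% at IC50
assert fraction_blocked(34.0, 3.4) > 0.9               # ~91% block at 10x IC50
```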

  7. Effects of Tannic Acid, Green Tea and Red Wine on hERG Channels Expressed in HEK293 Cells

    Xu, Bingyuan; Li, Wenya; Lin, Yue; Sun, Xiaorun; Ding, Chunhua; Zhang, Xuan

    2015-01-01

Tannic acid is present in varying concentrations in plant foods, and in relatively high concentrations in green teas and red wines. Human ether-à-go-go-related gene (hERG) channels are expressed in multiple tissues (e.g. heart, neurons, smooth muscle and cancer cells), and play important roles in modulating cardiac action potential repolarization and tumor cell biology. The present study investigated the effects of tannic acid, green teas and red wines on hERG currents. The effects of tannic acid, teas and red wines on hERG currents stably transfected in HEK293 cells were studied with a perforated patch clamp technique. In this study, we demonstrated that tannic acid inhibited hERG currents with an IC50 of 3.4 μM and ~100% inhibition at higher concentrations, and significantly shifted the voltage-dependent activation to more positive potentials (Δ23.2 mV). Remarkably, a 100-fold dilution of multiple types of tea (green tea, oolong tea and black tea) or red wine inhibited hERG currents by ~90%, and significantly shifted the voltage-dependent activation to more positive potentials (Δ30.8 mV and Δ26.0 mV, respectively). Green tea Lung Ching and red wine inhibited hERG currents, with IC50 values of 0.04% and 0.19%, respectively. The effects of tannic acid, teas and red wine on hERG currents were irreversible. These results suggest tannic acid is a novel hERG channel blocker and consequently provide new mechanistic evidence for understanding the effects of tannic acid. They also reveal the potential pharmacological basis of tea- and red wine-induced biological activities. PMID:26625122

  8. Twitter classification model: the ABC of two million fitness tweets.

    Vickey, Theodore A; Ginis, Kathleen Martin; Dabrowski, Maciej

    2013-09-01

The purpose of this project was to design and test data collection and management tools that can be used to study the use of mobile fitness applications and social networking within the context of physical activity. This project was conducted over a 6-month period and involved collecting publicly shared Twitter data from five mobile fitness apps (Nike+, RunKeeper, MyFitnessPal, Endomondo, and dailymile). During that time, over 2.8 million tweets were collected, processed, and categorized using an online tweet collection application and a customized JavaScript. Using grounded theory, a classification model was developed to categorize and understand the types of information being shared by application users. Our data show that by tracking mobile fitness app hashtags, a wealth of information can be gathered, including but not limited to daily use patterns, exercise frequency, location-based workouts, and overall workout sentiment.
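The hashtag-tracking step can be sketched as a simple lookup: map fitness-app hashtags found in a tweet to activity categories. The category table below is a made-up illustration, not the grounded-theory model from the study.

```python
# Toy hashtag-to-category lookup for fitness-app tweets (illustrative mapping).

CATEGORIES = {
    "#nikeplus": "run-tracking",
    "#runkeeper": "run-tracking",
    "#myfitnesspal": "nutrition",
    "#endomondo": "workout",
    "#dailymile": "run-tracking",
}

def classify_tweet(text):
    """Return the sorted categories whose app hashtags appear in the tweet."""
    tags = {w.lower().rstrip(".,!?") for w in text.split() if w.startswith("#")}
    return sorted({CATEGORIES[t] for t in tags if t in CATEGORIES})

assert classify_tweet("Morning 5k done! #RunKeeper #motivation") == ["run-tracking"]
```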

  9. A comparative study of machine learning models for ethnicity classification

    Trivedi, Advait; Bessie Amali, D. Geraldine

    2017-11-01

    This paper endeavours to adopt a machine learning approach to solve the problem of ethnicity recognition. Ethnicity identification is an important vision problem with use cases extending to various domains. Despite the complexity involved, ethnicity identification comes naturally to humans. This meta-information can be leveraged to make several decisions, be it in target marketing or security. With the recent development of intelligent systems, a sub-module to efficiently capture ethnicity would be useful in several use cases. Several attempts to identify an ideal learning model to represent a multi-ethnic dataset have been recorded. A comparative study of classifiers such as support vector machines and logistic regression is documented. Experimental results indicate that the logistic regression classifier provides a more accurate classification than the support vector machine.

  10. DEEP LEARNING MODEL FOR BILINGUAL SENTIMENT CLASSIFICATION OF SHORT TEXTS

    Y. B. Abdullin

    2017-01-01

    Sentiment analysis of short texts such as Twitter messages and comments in news portals is challenging due to the lack of contextual information. We propose a deep neural network model that uses bilingual word embeddings to effectively solve the sentiment classification problem for a given pair of languages. We apply our approach to two corpora of two different language pairs: English-Russian and Russian-Kazakh. We show how to train a classifier in one language and predict in another. Our approach achieves 73% accuracy for English and 74% accuracy for Russian. For Kazakh sentiment analysis, we propose a baseline method that achieves 60% accuracy, and a method to learn bilingual embeddings from a large unlabeled corpus using bilingual word pairs.

  11. Organizational information assets classification model and security architecture methodology

    Mostafa Tamtaji

    2015-12-01

    Today's organizations are exposed to a huge diversity of information and information assets that are produced in different systems such as KMS, financial and accounting systems, official and industrial automation systems and so on, and protection of this information is necessary. Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released. The several benefits of this model give organizations a strong incentive to implement cloud computing. Maintenance and management of information security are the main challenges in developing and accepting this model. In this paper, first, according to the "design science research methodology" and compatible with the "design process in information systems research", a complete categorization of organizational assets, including 355 different types of information assets in 7 groups and 3 levels, is presented so that managers are able to plan corresponding security controls according to the importance of each group. Then, an appropriate methodology is presented for directing an organization in architecting its information security in a cloud computing environment. The presented cloud computing security architecture, the proposed methodology that produced it, and the presented classification model were discussed and verified according to the Delphi method and expert comments.

  12. Classification of NLO operators for composite Higgs models

    Alanne, Tommi; Bizot, Nicolas; Cacciapaglia, Giacomo; Sannino, Francesco

    2018-04-01

    We provide a general classification of template operators, up to next-to-leading order, that appear in chiral perturbation theories based on the two flavor patterns of spontaneous symmetry breaking SU(NF)/Sp(NF) and SU(NF)/SO(NF). All possible explicit-breaking sources parametrized by spurions transforming in the fundamental and in the two-index representations of the flavor symmetry are included. While our general framework can be applied to any model of strong dynamics, we specialize to composite-Higgs models, where the main explicit breaking sources are a current mass, the gauging of flavor symmetries, and the Yukawa couplings (for the top). For the top, we consider both bilinear couplings and linear ones à la partial compositeness. Our templates provide a basis for lattice calculations in specific models. As a special example, we consider the SU(4)/Sp(4) ≅ SO(6)/SO(5) pattern, which corresponds to the minimal fundamental composite-Higgs model. We further revisit issues related to the misalignment of the vacuum. In particular, we shed light on the physical properties of the singlet η, showing that it cannot develop a vacuum expectation value without explicit CP violation in the underlying theory.

  13. Model sparsity and brain pattern interpretation of classification models in neuroimaging

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Churchill, Nathan W

    2012-01-01

    Interest is increasing in applying discriminative multivariate analysis techniques to the analysis of functional neuroimaging data. Model interpretation is of great importance in the neuroimaging context, and is conventionally based on a ‘brain map’ derived from the classification model. In this ...

  14. Effects of sample survey design on the accuracy of classification tree models in species distribution models

    Thomas C. Edwards; D. Richard Cutler; Niklaus E. Zimmermann; Linda Geiser; Gretchen G. Moisen

    2006-01-01

    We evaluated the effects of probabilistic (hereafter DESIGN) and non-probabilistic (PURPOSIVE) sample surveys on resultant classification tree models for predicting the presence of four lichen species in the Pacific Northwest, USA. Models derived from both survey forms were assessed using an independent data set (EVALUATION). Measures of accuracy as gauged by...

  15. Design of a hybrid model for cardiac arrhythmia classification based on Daubechies wavelet transform.

    Rajagopal, Rekha; Ranganathan, Vidhyapriya

    2018-06-05

    Automation in cardiac arrhythmia classification helps medical professionals make accurate decisions about the patient's health. The aim of this work was to design a hybrid classification model to classify cardiac arrhythmias. The design phase of the classification model comprises the following stages: preprocessing of the cardiac signal by eliminating detail coefficients that contain noise, feature extraction through Daubechies wavelet transform, and arrhythmia classification using a collaborative decision from the K nearest neighbor classifier (KNN) and a support vector machine (SVM). The proposed model is able to classify 5 arrhythmia classes as per the ANSI/AAMI EC57: 1998 classification standard. Level 1 of the proposed model involves classification using the KNN and the classifier is trained with examples from all classes. Level 2 involves classification using an SVM and is trained specifically to classify overlapped classes. The final classification of a test heartbeat pertaining to a particular class is done using the proposed KNN/SVM hybrid model. The experimental results demonstrated that the average sensitivity of the proposed model was 92.56%, the average specificity 99.35%, the average positive predictive value 98.13%, the average F-score 94.5%, and the average accuracy 99.78%. The results obtained using the proposed model were compared with the results of discriminant, tree, and KNN classifiers. The proposed model is able to achieve a high classification accuracy.
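    The two-level decision scheme described above can be sketched as follows. This is a hedged illustration using scikit-learn on synthetic data, not the authors' implementation: the wavelet features, the specific overlapped classes {2, 3}, and all hyperparameters are assumptions.

```python
# Sketch of a two-level KNN/SVM hybrid: level 1 classifies with KNN over
# all classes; when the KNN prediction falls into a set of overlapped
# (easily confused) classes, level 2 defers to an SVM trained only on those.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=10,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

OVERLAPPED = {2, 3}  # hypothetical pair of overlapped classes

# Level 1: KNN trained with examples from all five classes.
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

# Level 2: SVM trained specifically on the overlapped classes.
mask = np.isin(y_tr, list(OVERLAPPED))
svm = SVC(kernel="rbf").fit(X_tr[mask], y_tr[mask])

def hybrid_predict(X):
    pred = knn.predict(X)
    defer = np.isin(pred, list(OVERLAPPED))
    if defer.any():
        pred[defer] = svm.predict(X[defer])  # collaborative decision
    return pred

acc = (hybrid_predict(X_te) == y_te).mean()
```

The division of labour mirrors the paper's design: the generalist classifier handles the easy separations, and the specialist resolves only the classes it was trained to disambiguate.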

  16. Object-oriented classification of drumlins from digital elevation models

    Saha, Kakoli

    Drumlins are common elements of glaciated landscapes which are easily identified by their distinct morphometric characteristics, including shape, length/width ratio, elongation ratio, and uniform direction. To date, most researchers have mapped drumlins by tracing contours on maps, or through on-screen digitization directly on top of hillshaded digital elevation models (DEMs). This paper seeks to utilize the unique morphometric characteristics of drumlins and investigates automated extraction of the landforms as objects from DEMs by Definiens Developer software (V.7), using the 30 m United States Geological Survey National Elevation Dataset DEM as input. The Chautauqua drumlin field in Pennsylvania and upstate New York, USA was chosen as a study area. As the study area is large (covering approximately 2500 sq. km), small test areas were selected for initial testing of the method. Individual polygons representing the drumlins were extracted from the elevation data set by automated recognition, using Definiens' Multiresolution Segmentation tool, followed by rule-based classification. Subsequently, parameters such as length, width, length-width ratio, perimeter and area were measured automatically. To test the accuracy of the method, a second base map was produced by manual on-screen digitization of drumlins from topographic maps, and the same morphometric parameters were extracted from the mapped landforms using Definiens Developer. Statistical comparison showed a high agreement between the two methods, confirming that object-oriented classification can be used for mapping these landforms. The proposed method represents an attempt to solve the problem by providing a generalized rule-set for mass extraction of drumlins. To check its scalability, the automated extraction process was next applied to a larger area. Results showed that the proposed method is as successful for the bigger area as it was for the smaller test areas.

  17. Secondary structure classification of amino-acid sequences using state-space modeling

    Brunnert, Marcus; Krahnke, Tillmann; Urfer, Wolfgang

    2001-01-01

    The secondary structure classification of amino acid sequences can be carried out by a statistical analysis of sequence and structure data using state-space models. Aiming at this classification, a modified filter algorithm programmed in S is applied to data of three proteins. The application leads to correct classifications of two proteins even when using relatively simple estimation methods for the parameters of the state-space models. Furthermore, it has been shown that the assumed initial...

  18. Transport of cohesive sediments : Classification and requirements for turbulence modelling

    Bruens, A.W.

    1999-01-01

    This report describes a classification of sediment-laden flows, which gives an overview of the different transport forms of fine sediment and the interactions of the different processes acting in an estuary. At the outset of the proposed classification a distinction in physical states of

  19. Classification of proteins: available structural space for molecular modeling.

    Andreeva, Antonina

    2012-01-01

    The wealth of available protein structural data provides an unprecedented opportunity to study and better understand the underlying principles of protein folding and protein structure evolution. A key to achieving this lies in the ability to analyse these data and to organize them in a coherent classification scheme. Over the past years several protein classifications have been developed that aim to group proteins based on their structural relationships. Some of these classification schemes explore the concept of structural neighbourhood (structural continuum), whereas others utilize the notion of protein evolution and thus provide a discrete rather than continuum view of protein structure space. This chapter presents a strategy for classification of proteins with known three-dimensional structure. Steps in the classification process, along with basic definitions, are introduced. Examples illustrating some fundamental concepts of protein folding and evolution, with a special focus on the exceptions to them, are presented.

  20. The Zipf Law revisited: An evolutionary model of emerging classification

    Levitin, L.B. [Boston Univ., MA (United States); Schapiro, B. [TINA, Brandenburg (Germany); Perlovsky, L. [NRC, Wakefield, MA (United States)

    1996-12-31

    Zipf's Law is a remarkable rank-frequency relationship observed in linguistics (the frequencies of the use of words are approximately inversely proportional to their ranks in the decreasing frequency order) as well as in the behavior of many complex systems of surprisingly different nature. We suggest an evolutionary model of emerging classification of objects into classes corresponding to concepts and denoted by words. The evolution of the system is derived from two basic assumptions: first, the probability to recognize an object as belonging to a known class is proportional to the number of objects in this class already recognized; and, second, there exists a small probability to observe an object that requires creation of a new class (a "mutation" that gives birth to a new "species"). It is shown that the populations of classes in such a system obey the Zipf Law provided that the rate of emergence of new classes is small. The model also leads to the emergence of a second-tier structure of "super-classes": groups of classes with almost equal populations.
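    The two assumptions above can be simulated in a few lines. This is an illustrative sketch (the object count, mutation rate mu, and seed are arbitrary choices, not values from the paper): each new object either joins an existing class with probability proportional to that class's population, or, with small probability mu, founds a new class.

```python
import random
from collections import Counter

def simulate(n_objects=20000, mu=0.01, seed=1):
    random.seed(seed)
    urn = [0]          # one entry per recognized object, holding its class id
    n_classes = 1
    for _ in range(n_objects - 1):
        if random.random() < mu:
            urn.append(n_classes)          # "mutation": a new class is born
            n_classes += 1
        else:
            # picking a uniformly random object selects its class with
            # probability proportional to the class population
            urn.append(random.choice(urn))
    return sorted(Counter(urn).values(), reverse=True)

populations = simulate()
# For small mu, population * rank is roughly constant over the top ranks,
# i.e. a log-log plot of population vs. rank has slope near -1 (Zipf Law).
```

This is essentially Simon's classic urn process; the small mutation rate is what keeps the rank-frequency exponent close to 1.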

  1. Interaction between the cardiac rapidly (IKr) and slowly (IKs) activating delayed rectifier potassium channels revealed by low K+-induced hERG endocytic degradation.

    Guo, Jun; Wang, Tingzhong; Yang, Tonghua; Xu, Jianmin; Li, Wentao; Fridman, Michael D; Fisher, John T; Zhang, Shetuan

    2011-10-07

    Cardiac repolarization is controlled by the rapidly (I(Kr)) and slowly (I(Ks)) activating delayed rectifier potassium channels. The human ether-a-go-go-related gene (hERG) encodes I(Kr), whereas KCNQ1 and KCNE1 together encode I(Ks). Decreases in I(Kr) or I(Ks) cause long QT syndrome (LQTS), a cardiac disorder with a high risk of sudden death. A reduction in extracellular K(+) concentration ([K(+)](o)) induces LQTS and selectively causes endocytic degradation of mature hERG channels from the plasma membrane. In the present study, we investigated whether I(Ks) compensates for the reduced I(Kr) under low K(+) conditions. Our data show that when hERG and KCNQ1 were expressed separately in human embryonic kidney (HEK) cells, exposure to 0 mM K(+) for 6 h completely eliminated the mature hERG channel expression but had no effect on KCNQ1. When hERG and KCNQ1 were co-expressed, KCNQ1 significantly delayed 0 mM K(+)-induced hERG reduction. Also, hERG degradation led to a significant reduction in KCNQ1 in 0 mM K(+) conditions. An interaction between hERG and KCNQ1 was identified in hERG+KCNQ1-expressing HEK cells. Furthermore, KCNQ1 preferentially co-immunoprecipitated with mature hERG channels that are localized in the plasma membrane. Biophysical and pharmacological analyses indicate that although hERG and KCNQ1 closely interact with each other, they form distinct hERG and KCNQ1 channels. These data extend our understanding of delayed rectifier potassium channel trafficking and regulation, as well as the pathology of LQTS.

  2. Structural implications of hERG K+ channel block by a high-affinity minimally structured blocker

    Helliwell, Matthew V.; Zhang, Yihong; El Harchi, Aziza; Du, Chunyun; Hancox, Jules C.; Dempsey, Christopher E.

    2018-01-01

    Cardiac potassium channels encoded by human ether-à-go-go–related gene (hERG) are major targets for structurally diverse drugs associated with acquired long QT syndrome. This study characterized hERG channel inhibition by a minimally structured high-affinity hERG inhibitor, Cavalli-2, composed of three phenyl groups linked by polymethylene spacers around a central amino group, chosen to probe the spatial arrangement of side chain groups in the high-affinity drug-binding site of the hERG pore. hERG current (IhERG) recorded at physiological temperature from HEK293 cells was inhibited with an IC50 of 35.6 nM with time and voltage dependence characteristic of blockade contingent upon channel gating. Potency of Cavalli-2 action was markedly reduced for attenuated inactivation mutants located near (S620T; 54-fold) and remote from (N588K; 15-fold) the channel pore. The S6 Y652A and F656A mutations decreased inhibitory potency 17- and 75-fold, respectively, whereas T623A and S624A at the base of the selectivity filter also decreased potency (16- and 7-fold, respectively). The S5 helix F557L mutation decreased potency 10-fold, and both F557L and Y652A mutations eliminated voltage dependence of inhibition. Computational docking using the recent cryo-EM structure of an open channel hERG construct could only partially recapitulate experimental data, and the high dependence of Cavalli-2 block on Phe-656 is not readily explainable in that structure. A small clockwise rotation of the inner (S6) helix of the hERG pore from its configuration in the cryo-EM structure may be required to optimize Phe-656 side chain orientations compatible with high-affinity block. PMID:29545312

  3. 2D Modeling and Classification of Extended Objects in a Network of HRR Radars

    Fasoula, A.

    2011-01-01

    In this thesis, the modeling of extended objects with low-dimensional representations of their 2D geometry is addressed. The ultimate objective is the classification of the objects using libraries of such compact 2D object models that are much smaller than in the state-of-the-art classification

  4. TENSOR MODELING BASED FOR AIRBORNE LiDAR DATA CLASSIFICATION

    N. Li

    2016-06-01

    Feature selection and description is a key factor in the classification of Earth observation data. In this paper a classification method based on tensor decomposition is proposed. First, multiple features are extracted from the raw LiDAR point cloud, and raster LiDAR images are derived by accumulating features or the "raw" data attributes. Then, the feature rasters of the LiDAR data are stored as a tensor, and tensor decomposition is used to select component features. This tensor representation keeps the initial spatial structure and ensures consideration of the neighborhood. Based on a small number of component features, a k nearest neighbor classification is applied.
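    As a hedged sketch of this pipeline (the raster sizes, feature count, and labels below are synthetic, and a truncated SVD stands in for whatever decomposition the paper actually uses), feature rasters can be stacked into a third-order tensor, unfolded along the feature mode, and reduced to a few component features per pixel before k-NN classification:

```python
import numpy as np

rng = np.random.default_rng(0)
H, W, F = 16, 16, 6                     # raster size and number of features
tensor = rng.normal(size=(H, W, F))     # e.g. height, intensity, echo count...

# Mode-3 unfolding: one row per pixel, one column per feature raster.
unfolded = tensor.reshape(H * W, F)

# Truncated SVD keeps the leading component features (a simple stand-in
# for the tensor decomposition used in the paper).
U, s, Vt = np.linalg.svd(unfolded - unfolded.mean(0), full_matrices=False)
k = 3
components = U[:, :k] * s[:k]           # (H*W, k) per-pixel component features

# k nearest neighbours in component-feature space (labels are synthetic).
labels = rng.integers(0, 3, size=H * W)
def knn_predict(x, k_nn=5):
    d = np.linalg.norm(components - x, axis=1)
    nearest = labels[np.argsort(d)[:k_nn]]
    return int(np.bincount(nearest).argmax())
```

Because the unfolding is a plain reshape, each row still corresponds to a fixed pixel position, which is how the spatial structure survives the reduction.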

  5. ALADDIN: a neural model for event classification in dynamic processes

    Roverso, Davide

    1998-02-01

    ALADDIN is a prototype system which combines fuzzy clustering techniques and artificial neural network (ANN) models in a novel approach to the problem of classifying events in dynamic processes. The main motivation for the development of such a system derived originally from the problem of finding new principled methods to perform alarm structuring/suppression in a nuclear power plant (NPP) alarm system. One such method consists in basing the alarm structuring/suppression on a fast recognition of the event generating the alarms, so that a subset of alarms sufficient to efficiently handle the current fault can be selected to be presented to the operator, minimizing in this way the operator's workload in a potentially stressful situation. The scope of application of a system like ALADDIN goes however beyond alarm handling, to include diagnostic tasks in general. The eventual application of the system to domains other than NPPs was also taken into special consideration during the design phase. In this document we report on the first phase of the ALADDIN project which consisted mainly in a comparative study of a series of ANN-based approaches to event classification, and on the proposal of a first system prototype which is to undergo further tests and, eventually, be integrated in existing alarm, diagnosis, and accident management systems such as CASH, IDS, and CAMS. (author)

  6. A classification model for non-alcoholic steatohepatitis (NASH) using confocal Raman micro-spectroscopy

    Yan, Jie; Yu, Yang; Kang, Jeon Woong; Tam, Zhi Yang; Xu, Shuoyu; Fong, Eliza Li Shan; Singh, Surya Pratap; Song, Ziwei; Tucker Kellogg, Lisa; So, Peter; Yu, Hanry

    2017-07-01

    We combined Raman micro-spectroscopy and machine learning techniques to develop a classification model based on a well-established non-alcoholic steatohepatitis (NASH) mouse model, using spectrum pre-processing, biochemical component analysis (BCA) and logistic regression.

  7. A NEW WASTE CLASSIFYING MODEL: HOW WASTE CLASSIFICATION CAN BECOME MORE OBJECTIVE?

    Burcea Stefan Gabriel

    2015-07-01

    documents available in the virtual space, on the websites of certain international organizations involved in the wide and complex issue of waste management. The second part of the paper contains a proposed classification model with four main criteria, intended to make waste classification a more objective process. The new classification model has the main role of transforming the traditional patterns of waste classification into an objective waste classification system, and a second role of eliminating the strong contextuality of the existing waste classification models.

  8. Incremental Validity of Multidimensional Proficiency Scores from Diagnostic Classification Models: An Illustration for Elementary School Mathematics

    Kunina-Habenicht, Olga; Rupp, André A.; Wilhelm, Oliver

    2017-01-01

    Diagnostic classification models (DCMs) hold great potential for applications in summative and formative assessment by providing discrete multivariate proficiency scores that yield statistically driven classifications of students. Using data from a newly developed diagnostic arithmetic assessment that was administered to 2032 fourth-grade students…

  9. Applications of Diagnostic Classification Models: A Literature Review and Critical Commentary

    Sessoms, John; Henson, Robert A.

    2018-01-01

    Diagnostic classification models (DCMs) classify examinees based on the skills they have mastered given their test performance. This classification enables targeted feedback that can inform remedial instruction. Unfortunately, applications of DCMs have been criticized (e.g., no validity support). Generally, these evaluations have been brief and…

  10. Comparison analysis for classification algorithm in data mining and the study of model use

    Chen, Junde; Zhang, Defu

    2018-04-01

    As a key technique in data mining, classification algorithms have received extensive attention. Through an experiment with classification algorithms on UCI data sets, we give a comparative analysis method for the different algorithms, with statistical tests used for the comparison. Following that, an adaptive diagnosis model for the prevention of electricity stealing and leakage is given as a specific case in the paper.

  11. Model of high-tech businesses management under the trends of explicit and implicit knowledge markets: classification and business model

    Guzel Isayevna Gumerova; Elmira Shamilevna Shaimieva

    2015-01-01

    Objective: to define the notion of "high-tech business"; to elaborate a classification of high-tech businesses; to elaborate a business model for high-tech business management. Methods: general scientific methods of theoretical and empirical cognition. Results: the research presents a business model of high-tech business management based on the trends of the explicit and implicit knowledge markets, with the implicit knowledge market dominating; a classification of high-tech business...

  12. Proposing a Hybrid Model Based on Robson's Classification for Better Impact on Trends of Cesarean Deliveries.

    Hans, Punit; Rohatgi, Renu

    2017-06-01

    To construct a hybrid classification model for cesarean section (CS) deliveries based on woman-characteristics (Robson's classification with additional layers of indications for CS, keeping in view the low-resource settings available in India). This is a cross-sectional study conducted at Nalanda Medical College, Patna. All the women who delivered from January 2016 to May 2016 in the labor ward were included. Results obtained were compared with the values obtained for India from a secondary analysis of the WHO multi-country survey (2010-2011) by Joshua Vogel and colleagues, published in "The Lancet Global Health." The three classifications (indication-based, Robson's and the hybrid model) were applied for categorization of the cesarean deliveries from the same sample of data, and a semi-qualitative evaluation was done, considering the main characteristics, strengths and weaknesses of each classification system. The total number of women delivered during the study period was 1462, out of which 471 were CS deliveries. Overall, the CS rate calculated for NMCH hospital in this specified period was 32.21% (p = 0.001). The hybrid model scored 23/23, and the scores of the Robson classification and the indication-based classification were 21/23 and 10/23, respectively. The single-study-centre design and referral bias are the limitations of the study. Given the flexibility of the classifications, we constructed a hybrid model based on the woman-characteristics system with additional layers of other classifications. Indication-based classification answers why, Robson classification answers on whom, while through our hybrid model we get to know why and on whom cesarean deliveries are being performed.

  13. Data Field Modeling and Spectral-Spatial Feature Fusion for Hyperspectral Data Classification.

    Liu, Da; Li, Jianxun

    2016-12-16

    Classification is a significant subject in hyperspectral remote sensing image processing. This study proposes a spectral-spatial feature fusion algorithm for the classification of hyperspectral images (HSI). Unlike existing spectral-spatial classification methods, the influences and interactions of the surroundings on each measured pixel were taken into consideration in this paper. Data field theory was employed as the mathematical realization of the field theory concept in physics, and both the spectral and spatial domains of HSI were considered as data fields. Therefore, the inherent dependency of interacting pixels was modeled. Using data field modeling, spatial and spectral features were transformed into a unified radiation form and further fused into a new feature by using a linear model. In contrast to the current spectral-spatial classification methods, which usually simply stack spectral and spatial features together, the proposed method builds the inner connection between the spectral and spatial features, and explores the hidden information that contributed to classification. Therefore, new information is included for classification. The final classification result was obtained using a random forest (RF) classifier. The proposed method was tested with the University of Pavia and Indian Pines, two well-known standard hyperspectral datasets. The experimental results demonstrate that the proposed method has higher classification accuracies than those obtained by the traditional approaches.
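    The data-field idea can be illustrated with a toy sketch. Everything below is an assumption for illustration (the Gaussian-decay field, the parameters sigma and w, the 3×3 neighbourhood, and the random cube are not the paper's values): each pixel accumulates spectral and spatial "potentials" from its neighbours, and the two are fused linearly into one feature.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W, B = 8, 8, 10
cube = rng.random((H, W, B))            # toy hyperspectral cube

sigma, w = 1.5, 0.6                     # assumed field width and mixing weight
pot_spec = np.zeros((H, W))
pot_spat = np.zeros((H, W))
for i in range(H):
    for j in range(W):
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di, dj) != (0, 0) and 0 <= ni < H and 0 <= nj < W:
                    # spectral distance drives the spectral field,
                    # pixel distance drives the spatial field
                    spec_d = np.linalg.norm(cube[i, j] - cube[ni, nj])
                    spat_d = np.hypot(di, dj)
                    pot_spec[i, j] += np.exp(-(spec_d / sigma) ** 2)
                    pot_spat[i, j] += np.exp(-(spat_d / sigma) ** 2)

fused = w * pot_spec + (1 - w) * pot_spat   # linear spectral-spatial fusion
```

In the paper the fused feature then feeds a random forest classifier; that step is omitted here.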

  14. EEG Signal Classification With Super-Dirichlet Mixture Model

    Ma, Zhanyu; Tan, Zheng-Hua; Prasad, Swati

    2012-01-01

    Classification of the electroencephalogram (EEG) signal is a challenging task in brain-computer interface systems. The marginalized discrete wavelet transform (mDWT) coefficients extracted from the EEG signals have been frequently used in research since they reveal features related...

  15. Tintin's Catholic Youth. The Catholic Background of Hergé. Part 1 of 2

    de Groot, C.N.

    2013-01-01

    Following the launch of Steven Spielberg's 'Tintin and the Secret of the Unicorn', l'Osservatore Romano hailed Tintin as a 'catholic hero'. This article demonstrates that the comic originates, more specifically, in conservative and reactionary milieus in Belgian Catholicism. Hergé (ps. for Georges Rémi) designed Tintin for the children's weekly of a newspaper that, in this period, shared its main themes with the Catholic fascist movement Rex: anti-communism, anti-capitalism, anti-semitism and t...

  16. Classification of integrable two-dimensional models of relativistic field theory by means of computer

    Getmanov, B.S.

    1988-01-01

    The results of the classification of two-dimensional relativistic field models ((1) spinor; (2) essentially nonlinear scalar) possessing higher conservation laws, obtained using a system of symbolic computer calculations, are briefly presented.

  17. A tool for urban soundscape evaluation applying Support Vector Machines for developing a soundscape classification model.

    Torija, Antonio J; Ruiz, Diego P; Ramos-Ridao, Angel F

    2014-06-01

    To ensure appropriate soundscape management in urban environments, urban-planning authorities need a range of tools that enable such a task to be performed. An essential step in the management of urban areas from a sound standpoint should be the evaluation of the soundscape in such an area. It has been widely acknowledged that a subjective and acoustical categorization of a soundscape is the first step in evaluating it, providing a basis for designing or adapting it to match people's expectations as well. Accordingly, this work proposes a model for the automatic classification of urban soundscapes based on underlying acoustical and perceptual criteria, intended to be used as a tool for comprehensive urban soundscape evaluation. Because of the great complexity associated with the problem, two machine learning techniques, Support Vector Machines (SVM) and Support Vector Machines trained with Sequential Minimal Optimization (SMO), are implemented in developing the classification model. The results indicate that the SMO model outperforms the SVM model in the specific task of soundscape classification. With the implementation of the SMO algorithm, the classification model achieves an outstanding performance (91.3% of instances correctly classified). © 2013 Elsevier B.V. All rights reserved.

  18. Model for Detection and Classification of DDoS Traffic Based on Artificial Neural Network

    D. Peraković

    2017-06-01

    Detection of DDoS (Distributed Denial of Service) traffic is of great importance for protecting the availability of services and other information and communication resources. The research presented in this paper shows the application of artificial neural networks in the development of a detection and classification model for three types of DDoS attacks and legitimate network traffic. Simulation results of the developed model showed an accuracy of 95.6% in the classification of the pre-defined classes of traffic.

  19. The effects of deoxyelephantopin on the cardiac delayed rectifier potassium channel current (IKr) and human ether-a-go-go-related gene (hERG) expression.

    Teah, Yi Fan; Abduraman, Muhammad Asyraf; Amanah, Azimah; Adenan, Mohd Ilham; Sulaiman, Shaida Fariza; Tan, Mei Lan

    2017-09-01

    Elephantopus scaber Linn and its major bioactive component, deoxyelephantopin, are known for their medicinal properties and are often reported to have various cytotoxic and antitumor activities. This plant is widely used as a folk medicine for a plethora of indications, although its safety profile remains unknown. Human ether-a-go-go-related gene (hERG) encodes the cardiac IKr current, which is a determinant of the duration of ventricular action potentials and the QT interval. The hERG potassium channel is an important antitarget in cardiotoxicity evaluation. This study investigated the effects of deoxyelephantopin on the current, mRNA and protein expression of the hERG channel in hERG-transfected HEK293 cells. The hERG tail currents following depolarization pulses were insignificantly affected by deoxyelephantopin in the transfected cell line. Current reduction was less than 40% as compared with baseline at the highest concentration of 50 μM. The results were consistent with the molecular docking simulation and hERG surface protein expression. Interestingly, it does not affect hERG expression at either the transcriptional or the translational level at most concentrations, although a higher concentration of 10 μM caused protein accumulation. In conclusion, deoxyelephantopin is unlikely to be a clinically significant hERG channel and IKr blocker. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.

    Kezi Yu

    Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB models for classification of fetal heart rate (FHR recordings. More specifically, we propose models that are used to differentiate between FHR recordings that are from fetuses with or without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP and the Chinese restaurant process with finite capacity (CRFC. Two mixture models were inferred from real recordings, one that represents healthy and another, non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of the performance of (FHR recordings in a real-time setting.

  1. A classification model of Hyperion image base on SAM combined decision tree

    Wang, Zhenghai; Hu, Guangdao; Zhou, YongZhang; Liu, Xin

    2009-10-01

    Monitoring the Earth using imaging spectrometers has necessitated more accurate analyses and new applications in remote sensing. A very high-dimensional input space requires an exponentially large amount of data to adequately and reliably represent the classes in that space. On the other hand, as the input dimensionality increases, the hypothesis space grows exponentially, which makes classification performance highly unreliable; traditional classification algorithms are therefore ill-suited to hyperspectral images, and new algorithms have to be developed for hyperspectral data classification. The Spectral Angle Mapper (SAM) is a physically based spectral classifier that uses an n-dimensional angle to match pixels to reference spectra. The algorithm determines the spectral similarity between two spectra by calculating the angle between them, treating them as vectors in a space with dimensionality equal to the number of bands. The key difficulty is that the SAM threshold must be defined manually, and the classification precision depends on how rationally that threshold is chosen. To resolve this problem, this paper proposes a new automatic classification model for remote sensing images using SAM combined with a decision tree. Based on the analysis of field spectra, it automatically chooses an appropriate SAM threshold and thus improves the classification precision of SAM. The test area, located in Heqing, Yunnan, was imaged by the EO-1 Hyperion imaging spectrometer using 224 bands in the visible and near infrared. The area included limestone areas, rock fields, soil and forests, and was classified into four different vegetation and soil types. The results show that this method chooses an appropriate SAM threshold and effectively eliminates the disturbance and influence of unwanted objects, thereby improving the classification precision. Compared with the likelihood classification by field survey data, the classification precision of this model
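
    The spectral angle described above is straightforward to compute: the angle between a pixel spectrum and a reference spectrum, treated as vectors in band space. The spectra below are made up for illustration; a pixel is normally assigned to the reference with the smallest angle, subject to the threshold the paper automates.

    ```python
    import numpy as np

    def spectral_angle(pixel, reference):
        """Angle (radians) between two spectra with the same band count."""
        cos = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
        return np.arccos(np.clip(cos, -1.0, 1.0))

    ref = np.array([0.2, 0.4, 0.6, 0.8])        # reference spectrum (4 bands)
    same_shape = ref * 3.0                       # same shape, brighter pixel
    different = np.array([0.8, 0.6, 0.4, 0.2])   # reversed spectral shape

    print(round(float(spectral_angle(same_shape, ref)), 3))  # → 0.0
    print(round(float(spectral_angle(different, ref)), 3))   # → 0.841
    ```

    The first result shows the property that makes SAM attractive for imaging spectrometry: scaling a spectrum (illumination change) leaves the angle at zero, so only spectral shape matters.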

  2. Assessing the Accuracy and Consistency of Language Proficiency Classification under Competing Measurement Models

    Zhang, Bo

    2010-01-01

    This article investigates how measurement models and statistical procedures can be applied to estimate the accuracy of proficiency classification in language testing. The paper starts with a concise introduction of four measurement models: the classical test theory (CTT) model, the dichotomous item response theory (IRT) model, the testlet response…

  3. ATLS Hypovolemic Shock Classification by Prediction of Blood Loss in Rats Using Regression Models.

    Choi, Soo Beom; Choi, Joon Yul; Park, Jee Soo; Kim, Deok Won

    2016-07-01

    In our previous study, our input data set consisted of 78 rats, the blood loss in percent as a dependent variable, and 11 independent variables (heart rate, systolic blood pressure, diastolic blood pressure, mean arterial pressure, pulse pressure, respiration rate, temperature, perfusion index, lactate concentration, shock index, and a new index (lactate concentration/perfusion)). Machine learning methods for multicategory classification were applied to a rat model of acute hemorrhage to predict the four Advanced Trauma Life Support (ATLS) hypovolemic shock classes for triage. However, multicategory classification is much more difficult and complicated than binary classification. Here we introduce a simple approach for classifying ATLS hypovolemic shock class by predicting blood loss in percent using support vector regression and multivariate linear regression (MLR). We also compared the performance of the classification models using absolute and relative vital signs. The accuracies of the support vector regression and MLR models with relative values, obtained by predicting blood loss in percent, were 88.5% and 84.6%, respectively. These were better than the best accuracy of 80.8% of the direct multicategory classification using the support vector machine one-versus-one model in our previous study for the same validation data set. Moreover, the simple MLR models with both absolute and relative values could provide a basis for a future clinical decision support system for ATLS classification. The perfusion index and the new index were more appropriate as relative changes than as absolute values.
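
    The indirect-classification idea can be sketched in a few lines: regress blood loss (%) from vital signs, then map the prediction to the four ATLS classes via the standard cut-offs (class I &lt;15%, II 15–30%, III 30–40%, IV &gt;40%). The data and coefficients below are synthetic, not the rat data of the study.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    def atls_class(blood_loss_pct):
        # Standard ATLS blood-loss cut-offs.
        if blood_loss_pct < 15:
            return 1
        if blood_loss_pct < 30:
            return 2
        if blood_loss_pct <= 40:
            return 3
        return 4

    rng = np.random.default_rng(2)
    # Hypothetical relative vital signs (e.g. change in heart rate, shock index).
    X = rng.uniform(-1, 1, size=(100, 2))
    true_loss = 25 + 20 * X[:, 0] + 5 * X[:, 1] + rng.normal(0, 1, 100)

    reg = LinearRegression().fit(X, true_loss)
    pred = reg.predict([[0.9, 0.5]])[0]   # a severely bleeding case
    print(atls_class(pred))  # → 4
    ```

    Binning a single regression output is what makes this simpler than the direct one-versus-one multicategory classifier of the earlier study.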

  4. AN ADABOOST OPTIMIZED CCFIS BASED CLASSIFICATION MODEL FOR BREAST CANCER DETECTION

    CHANDRASEKAR RAVI

    2017-06-01

    Full Text Available Classification is a Data Mining technique used for building a prototype of the data behaviour, using which unseen data can be classified into one of the defined classes. Several researchers have proposed classification techniques, but most of them did not place much emphasis on misclassified instances and storage space. In this paper, a classification model is proposed that takes both into account. The classification model is efficiently developed using a tree structure to reduce the storage complexity, and it uses a single scan of the dataset. During the training phase, Class-based Closed Frequent ItemSets (CCFIS) were mined from the training dataset in the form of a tree structure. The classification model has been developed using the CCFIS and a similarity measure based on the Longest Common Subsequence (LCS). Further, the Particle Swarm Optimization algorithm is applied to the generated CCFIS, which assigns weights to the itemsets and their associated classes. Most classifiers correctly classify the common instances but misclassify the rare ones. In view of that, the AdaBoost algorithm has been used to boost the weights of the instances misclassified in the previous round, so as to include them in the training phase and classify the rare instances. This improves the accuracy of the classification model. During the testing phase, the classification model is used to classify the instances of the test dataset. The Breast Cancer dataset from the UCI repository is used for the experiments. Experimental analysis shows that the accuracy of the proposed classification model outperforms the PSOAdaBoost-Sequence classifier by 7% and is superior to other approaches such as the Naïve Bayes classifier, Support Vector Machine classifier, Instance-Based classifier, ID3 classifier, J48 classifier, etc.
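
    The AdaBoost re-weighting step used above is the standard algorithm; the sketch below shows it with scikit-learn on a synthetic imbalanced problem, where the minority class plays the role of the "rare instances". The paper's CCFIS/LCS base classifier is replaced here by AdaBoost's default decision stumps.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import cross_val_score

    # Imbalanced data: 90% majority class, 10% "rare" class.
    X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

    stump = DecisionTreeClassifier(max_depth=1, random_state=0)   # weak learner
    boosted = AdaBoostClassifier(n_estimators=50, random_state=0)  # boosts stumps,
    # up-weighting the instances each previous round misclassified.

    acc_stump = cross_val_score(stump, X, y, cv=5).mean()
    acc_boost = cross_val_score(boosted, X, y, cv=5).mean()
    print(round(acc_stump, 3), round(acc_boost, 3))
    ```

    The re-weighting is why boosting tends to help exactly where the abstract says single classifiers fail: on instances the previous round got wrong.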

  5. MiR-17-5p Impairs Trafficking of H-ERG K+ Channel Protein by Targeting Multiple ER Stress-Related Chaperones during Chronic Oxidative Stress

    Wang, Qi; Hu, Weina; Lei, Mingming; Wang, Yong; Yan, Bing; Liu, Jun; Zhang, Ren; Jin, Yuanzhe

    2013-01-01

    BACKGROUND: To investigate if microRNAs (miRNAs) play a role in regulating h-ERG trafficking in the setting of chronic oxidative stress as a common deleterious factor for many cardiac disorders. METHODS: We treated neonatal rat ventricular myocytes and HEK293 cells with stable expression of h-ERG with H2O2 for 12 h and 48 h. Expression of miR-17-5p seed miRNAs was quantified by real-time RT-PCR. Protein levels of chaperones and h-ERG trafficking were measured by Western blot analysis. Lucifer...

  6. Models of Marine Fish Biodiversity: Assessing Predictors from Three Habitat Classification Schemes.

    Yates, Katherine L; Mellin, Camille; Caley, M Julian; Radford, Ben T; Meeuwig, Jessica J

    2016-01-01

    Prioritising biodiversity conservation requires knowledge of where biodiversity occurs. Such knowledge, however, is often lacking. New technologies for collecting biological and physical data coupled with advances in modelling techniques could help address these gaps and facilitate improved management outcomes. Here we examined the utility of environmental data, obtained using different methods, for developing models of both uni- and multivariate biodiversity metrics. We tested which biodiversity metrics could be predicted best and evaluated the performance of predictor variables generated from three types of habitat data: acoustic multibeam sonar imagery, predicted habitat classification, and direct observer habitat classification. We used boosted regression trees (BRT) to model metrics of fish species richness, abundance and biomass, and multivariate regression trees (MRT) to model biomass and abundance of fish functional groups. We compared model performance using different sets of predictors and estimated the relative influence of individual predictors. Models of total species richness and total abundance performed best; those developed for endemic species performed worst. Abundance models performed substantially better than corresponding biomass models. In general, BRT and MRTs developed using predicted habitat classifications performed less well than those using multibeam data. The most influential individual predictor was the abiotic categorical variable from direct observer habitat classification and models that incorporated predictors from direct observer habitat classification consistently outperformed those that did not. Our results show that while remotely sensed data can offer considerable utility for predictive modelling, the addition of direct observer habitat classification data can substantially improve model performance. Thus it appears that there are aspects of marine habitats that are important for modelling metrics of fish biodiversity that are
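
    Boosted regression trees as used above can be sketched with scikit-learn: predict a richness-like response from a few habitat predictors and inspect their relative influence (feature importances). All predictors and the response here are synthetic stand-ins, not the study's data.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(3)
    n = 400
    depth = rng.uniform(0, 50, n)            # stand-in for a multibeam-derived predictor
    rugosity = rng.uniform(0, 1, n)          # another continuous habitat predictor
    habitat_class = rng.integers(0, 4, n).astype(float)  # categorical, uninformative here
    X = np.column_stack([depth, rugosity, habitat_class])
    # Synthetic "species richness" driven by depth and rugosity only.
    richness = 5 + 0.2 * depth + 10 * rugosity + rng.normal(0, 1, n)

    brt = GradientBoostingRegressor(random_state=0).fit(X, richness)
    print(np.round(brt.feature_importances_, 2))  # relative influence per predictor
    ```

    Comparing such importances across predictor sets (multibeam vs. observer-derived) is essentially the "relative influence" analysis the abstract describes.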

  7. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    C. Fernandez-Lozano

    2013-01-01

    Full Text Available Given the background of the use of neural networks in problems of apple juice classification, this paper aims at implementing a newly developed method in the field of machine learning: Support Vector Machines (SVM). A hybrid model that combines genetic algorithms and support vector machines is therefore suggested, in such a way that, when using the SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected.
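
    A toy version of the hybrid can make the idea concrete: a genetic algorithm evolves binary feature masks, with SVM cross-validation accuracy as the fitness. The dataset and all GA settings (population size, crossover, mutation rate) are illustrative, not those of the paper.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                               n_redundant=0, random_state=0)

    def fitness(mask):
        # Fitness of a feature subset = CV accuracy of an SVM trained on it.
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

    pop = rng.integers(0, 2, size=(20, 10)).astype(bool)  # random initial masks
    for _ in range(10):                                   # a few GA generations
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]           # selection: keep best half
        children = parents.copy()
        points = rng.integers(1, 10, size=len(children))
        for c, p, pt in zip(children, parents[::-1], points):  # one-point crossover
            c[pt:] = p[pt:]
        flip = rng.random(children.shape) < 0.05          # mutation
        children[flip] = ~children[flip]
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print(best.sum(), "features selected")
    ```

    Wrapping the classifier inside the fitness function is what makes this a "wrapper" selection method: the GA searches subsets, the SVM judges them.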

  8. Median Filter Noise Reduction of Image and Backpropagation Neural Network Model for Cervical Cancer Classification

    Wutsqa, D. U.; Marwah, M.

    2017-06-01

    In this paper, we apply a spatial median filter to reduce the noise in cervical images produced by a colposcopy tool. The backpropagation neural network (BPNN) model is applied to the colposcopy images to classify cervical cancer. The classification process requires feature extraction using the gray-level co-occurrence matrix (GLCM) method; the resulting image features are used as inputs to the BPNN model. The benefit of noise reduction is evaluated by comparing the performance of BPNN models with and without median filtering. The experimental result shows that the spatial median filter can improve the accuracy of the BPNN model for cervical cancer classification.
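
    The median-filter pre-processing step can be demonstrated directly with SciPy: impulse ("salt") noise is removed before any feature extraction. The image here is a synthetic flat patch, not colposcopy data.

    ```python
    import numpy as np
    from scipy.ndimage import median_filter

    rng = np.random.default_rng(4)
    image = np.full((64, 64), 128, dtype=np.uint8)   # flat grey "image"
    noisy = image.copy()
    coords = rng.integers(0, 64, size=(200, 2))
    noisy[coords[:, 0], coords[:, 1]] = 255          # sparse salt noise

    denoised = median_filter(noisy, size=3)          # 3x3 median window

    # Mean absolute deviation from the true grey level, before vs. after.
    improved = (np.abs(denoised.astype(int) - 128).mean()
                < np.abs(noisy.astype(int) - 128).mean())
    print(improved)  # → True
    ```

    In the paper's pipeline, GLCM texture features would then be computed from the denoised image (e.g. with scikit-image's co-occurrence-matrix functions) and fed to the BPNN.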

  9. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling, and more weight should be given to the classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  10. Hidden Markov Models for indirect classification of occupant behaviour

    Liisberg, Jon Anders Reichert; Møller, Jan Kloppenborg; Bloem, H.

    2016-01-01

    Even for similar residential buildings, a huge variability in energy consumption can be observed. This variability is mainly due to the different behaviours of the occupants, and it impacts the thermal (temperature setting, window opening, etc.) as well as the electrical (appliances, TV, computer, etc.) consumption. Direct observations of occupant presence and behaviour in residential buildings are very seldom available. However, given the increasing use of smart metering, indirect observation and classification of occupants' behaviour is possible. … The most likely sequence of states was determined (global decoding). From the reconstruction of the states, dependencies such as ambient air temperature were investigated. Combined with an occupant survey, this was used to classify/interpret the states as (1) absent or asleep, (2) home, medium consumption, and (3) home, high consumption.
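
    "Global decoding" refers to finding the single most likely hidden-state sequence of an HMM, classically computed with the Viterbi algorithm. A minimal NumPy version follows, with made-up transition, initial and emission probabilities for three consumption-level states; none of these numbers come from the paper.

    ```python
    import numpy as np

    states = ["absent/asleep", "home, medium", "home, high"]
    log_A = np.log(np.array([[0.8, 0.15, 0.05],   # transition probabilities
                             [0.2, 0.6, 0.2],
                             [0.1, 0.3, 0.6]]))
    log_pi = np.log(np.array([0.6, 0.3, 0.1]))    # initial distribution
    # Emission log-likelihoods of 5 observed consumption readings, per state.
    log_B = np.log(np.array([[0.7, 0.7, 0.1, 0.1, 0.6],
                             [0.2, 0.2, 0.5, 0.3, 0.3],
                             [0.1, 0.1, 0.4, 0.6, 0.1]]))

    T = log_B.shape[1]
    delta = log_pi + log_B[:, 0]                  # best log-score ending in each state
    back = np.zeros((T, 3), dtype=int)            # backpointers
    for t in range(1, T):
        trans = delta[:, None] + log_A            # score of every (prev, cur) pair
        back[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + log_B[:, t]

    path = [int(delta.argmax())]                  # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    path.reverse()
    print([states[s] for s in path])
    ```

    With these particular numbers the strong self-transition of the "absent/asleep" state dominates the decode; in the study, the decoded sequence is what gets cross-referenced with ambient temperature and the occupant survey.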

  11. Cosmic numbers: A physical classification for cosmological models

    Avelino, P.P.; Martins, C.J.A.P.

    2003-01-01

    We introduce the notion of the cosmic numbers of a cosmological model, and discuss how they can be used to naturally classify models according to their ability to solve some of the problems of the standard cosmological model.

  12. Predictive mapping of soil organic carbon in wet cultivated lands using classification-tree based models

    Kheir, Rania Bou; Greve, Mogens Humlekrog; Bøcher, Peder Klith

    2010-01-01

    … the geographic distribution of SOC across Denmark using remote sensing (RS), geographic information systems (GISs) and decision-tree modeling (un-pruned and pruned classification trees). Seventeen parameters, i.e. parent material, soil type, landscape type, elevation, slope gradient, slope aspect, mean curvature, … field measurements in the area of interest (Denmark). A large number of tree-based classification models (588) were developed using (i) all of the parameters, (ii) all Digital Elevation Model (DEM) parameters only, (iii) the primary DEM parameters only, (iv) the remote sensing (RS) indices only, (v) selected pairs of parameters, (vi) soil type, parent material and landscape type only, and (vii) the parameters having a high impact on SOC distribution in built pruned trees. The best constructed classification tree models (three in number) with the lowest misclassification error (ME

  13. Classification of finite reparametrization symmetry groups in the three-Higgs-doublet model

    Ivanov, Igor P.; Vdovin, E.

    2013-01-01

    Symmetries play a crucial role in electroweak symmetry breaking models with non-minimal Higgs content. Within each class of these models, it is desirable to know which symmetry groups can be implemented via the scalar sector. In N-Higgs-doublet models, this classification problem was solved only for N=2 doublets. Very recently, we suggested a method to classify all realizable finite symmetry groups of Higgs-family transformations in the three-Higgs-doublet model (3HDM). Here, we present this classification in all detail together with an introduction to the theory of solvable groups, which play the key role in our derivation. We also consider generalized-CP symmetries, and discuss the interplay between Higgs-family symmetries and CP-conservation. In particular, we prove that the presence of the Z4 symmetry guarantees the explicit CP-conservation of the potential. This work completes classification of finite reparametrization symmetry groups in 3HDM. (orig.)

  14. Discriminative Nonlinear Analysis Operator Learning: When Cosparse Model Meets Image Classification.

    Wen, Zaidao; Hou, Biao; Jiao, Licheng

    2017-05-03

    The linear-synthesis-model-based dictionary learning framework has achieved remarkable performance in image classification in the last decade. Behaving as a generative feature model, however, it suffers from some intrinsic deficiencies. In this paper, we propose a novel parametric nonlinear analysis cosparse model (NACM), with which a unique feature vector can be extracted much more efficiently. Additionally, we demonstrate that NACM is capable of simultaneously learning the task-adapted feature transformation and regularization, encoding our preferences, domain prior knowledge and task-oriented supervised information into the features. The proposed NACM is devoted to the classification task as a discriminative feature model and yields a novel discriminative nonlinear analysis operator learning framework (DNAOL). The theoretical analysis and experimental results clearly demonstrate that DNAOL not only achieves better, or at least competitive, classification accuracies than state-of-the-art algorithms, but also dramatically reduces the time complexity of both the training and testing phases.

  15. Objects Classification by Learning-Based Visual Saliency Model and Convolutional Neural Network.

    Li, Na; Zhao, Xinbo; Yang, Yongjia; Zou, Xiaochun

    2016-01-01

    Humans can easily classify different kinds of objects, whereas this is quite difficult for computers. As a hot and difficult problem, objects classification has been receiving extensive interest with broad prospects. Inspired by neuroscience, the concept of deep learning was proposed, and the convolutional neural network (CNN), as one deep learning method, can be used to solve classification problems. But most deep learning methods, including CNN, ignore the human visual information-processing mechanism at work when a person classifies objects. Therefore, in this paper, inspired by the complete process by which humans classify different kinds of objects, we put forward a new classification method that combines a visual attention model and a CNN. Firstly, we use the visual attention model to simulate the human visual selection mechanism. Secondly, we use the CNN to simulate how humans select features, and extract the local features of the selected areas. Finally, our classification method not only depends on those local features, but also adds human semantic features to classify objects. Our classification method therefore has clear advantages in biological plausibility. Experimental results demonstrate that our method significantly improved classification efficiency.

  16. Classification and unification of the microscopic deterministic traffic models.

    Yang, Bo; Monterola, Christopher

    2015-10-01

    We identify a universal mathematical structure in microscopic deterministic traffic models (with identical drivers), and thus we show that all such existing models in the literature, including both the two-phase and three-phase models, can be understood as special cases of a master model by expansion around a set of well-defined ground states. This allows any two traffic models to be properly compared and identified. The three-phase models are characterized by the vanishing of leading orders of expansion within a certain density range, and as an example the popular intelligent driver model is shown to be equivalent to a generalized optimal velocity (OV) model. We also explore the diverse solutions of the generalized OV model that can be important both for understanding human driving behaviors and algorithms for autonomous driverless vehicles.
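
    The optimal velocity (OV) model referenced above has a simple concrete form: each car relaxes at rate a toward an optimal speed V(h) set by its headway h, dv/dt = a[V(h) − v]. A minimal Euler simulation on a ring road follows; the tanh-shaped V and all parameter values are the standard textbook choices, used here purely for illustration.

    ```python
    import numpy as np

    N, L = 20, 100.0              # number of cars, ring-road length
    a, dt, steps = 1.0, 0.05, 2000

    def V(h):
        # Bando-style optimal velocity function of the headway h.
        return np.tanh(h - 2.0) + np.tanh(2.0)

    # Evenly spaced cars with a small positional perturbation.
    x = np.linspace(0, L, N, endpoint=False) + np.random.default_rng(5).normal(0, 0.1, N)
    v = V(L / N) * np.ones(N)     # start at the uniform-flow speed

    for _ in range(steps):
        headway = (np.roll(x, -1) - x) % L       # distance to the car ahead
        v += a * (V(headway) - v) * dt           # relax toward optimal speed
        x = (x + v * dt) % L

    print(round(float(v.mean()), 2))  # → 1.96, i.e. V(L/N)
    ```

    With a = 1 the uniform flow is linearly stable here (a &gt; 2V′(h)), so the perturbation decays; lowering a below that threshold is the classic route to spontaneous stop-and-go waves in this model family.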

  17. Using different classification models in wheat grading utilizing visual features

    Basati, Zahra; Rasekh, Mansour; Abbaspour-Gilandeh, Yousef

    2018-04-01

    Wheat is one of the most important strategic crops in Iran and in the world. The major component that distinguishes wheat from other grains is the gluten content. In Iran, the sunn pest is one of the most important factors degrading the characteristics of wheat gluten and removing it from a balanced state. The existence of bug-damaged grains in wheat reduces the quality and price of the product. In addition, damaged grains reduce the enrichment of wheat and the quality of bread products. In this study, after preprocessing and segmentation of images, 25 features, including 9 colour features, 10 morphological features, and 6 textural statistical features, were extracted so as to classify healthy and bug-damaged wheat grains of the Azar cultivar at four levels of moisture content (9, 11.5, 14 and 16.5% w.b.) and under two lighting colours (yellow light, and a combination of yellow and white light). Using feature selection methods in the WEKA software with the CfsSubsetEval evaluator, 11 features were chosen as inputs to artificial neural network, decision tree and discriminant analysis classifiers. The results showed that the decision tree with the J48 algorithm had the highest classification accuracy, at 90.20%. This was followed by the artificial neural network classifier with the topology 11-19-2 and the discriminant analysis classifier, at 87.46% and 81.81%, respectively.

  18. Latent log-linear models for handwritten digit classification.

    Deselaers, Thomas; Gass, Tobias; Heigold, Georg; Ney, Hermann

    2012-06-01

    We present latent log-linear models, an extension of log-linear models incorporating latent variables, and we propose two applications thereof: log-linear mixture models and image deformation-aware log-linear models. The resulting models are fully discriminative, can be trained efficiently, and the model complexity can be controlled. Log-linear mixture models offer additional flexibility within the log-linear modeling framework. Unlike previous approaches, the image deformation-aware model directly considers image deformations and allows for a discriminative training of the deformation parameters. Both are trained using alternating optimization. For certain variants, convergence to a stationary point is guaranteed and, in practice, even variants without this guarantee converge and find models that perform well. We tune the methods on the USPS data set and evaluate on the MNIST data set, demonstrating the generalization capabilities of our proposed models. Our models, although using significantly fewer parameters, are able to obtain competitive results with models proposed in the literature.

  19. Classification and moral evaluation of uncertainties in engineering modeling.

    Murphy, Colleen; Gardoni, Paolo; Harris, Charles E

    2011-09-01

    Engineers must deal with risks and uncertainties as a part of their professional work and, in particular, uncertainties are inherent to engineering models. Models play a central role in engineering. Models often represent an abstract and idealized version of the mathematical properties of a target. Using models, engineers can investigate and acquire understanding of how an object or phenomenon will perform under specified conditions. This paper defines the different stages of the modeling process in engineering, classifies the various sources of uncertainty that arise in each stage, and discusses the categories into which these uncertainties fall. The paper then considers the way uncertainty and modeling are approached in science and the criteria for evaluating scientific hypotheses, in order to highlight the very different criteria appropriate for the development of models and the treatment of the inherent uncertainties in engineering. Finally, the paper puts forward nine guidelines for the treatment of uncertainty in engineering modeling.

  20. Common variants in the hERG (KCNH2) voltage-gated potassium channel are associated with altered fasting and glucose-stimulated plasma incretin and glucagon responses

    Engelbrechtsen, Line; Mahendran, Yuvaraj; Jonsson, Anna

    2018-01-01

    BACKGROUND: Patients with long QT syndrome due to rare loss-of-function mutations in the human ether-á-go-go-related gene (hERG) have prolonged QT interval, risk of arrhythmias, increased secretion of insulin and incretins, and impaired glucagon response to hypoglycemia. This is caused by a dysfunctional Kv11.1 voltage-gated potassium channel. Based on these findings in patients with rare variants in hERG, we hypothesized that common variants in hERG may also lead to alterations in glucose homeostasis. Subsequently, we aimed to evaluate the effect of two common gain-of-function variants in hERG on QT interval and circulating levels of incretins, insulin and glucagon. The Danish population-based Inter99 cohort (n = 5895) was used to assess the effect of the common variants on QT interval. The Danish ADDITION-PRO cohort (n = 1329) was used to study genetic associations with levels of GLP-1…

  1. Zone-specific logistic regression models improve classification of prostate cancer on multi-parametric MRI

    Dikaios, Nikolaos; Halligan, Steve; Taylor, Stuart; Atkinson, David; Punwani, Shonit [University College London, Centre for Medical Imaging, London (United Kingdom); University College London Hospital, Departments of Radiology, London (United Kingdom); Alkalbani, Jokha; Sidhu, Harbir Singh [University College London, Centre for Medical Imaging, London (United Kingdom); Abd-Alazeez, Mohamed; Ahmed, Hashim U.; Emberton, Mark [University College London, Research Department of Urology, Division of Surgery and Interventional Science, London (United Kingdom); Kirkham, Alex [University College London Hospital, Departments of Radiology, London (United Kingdom); Freeman, Alex [University College London Hospital, Department of Histopathology, London (United Kingdom)

    2015-09-15

    To assess the interchangeability of zone-specific (peripheral-zone (PZ) and transition-zone (TZ)) multiparametric-MRI (mp-MRI) logistic-regression (LR) models for classification of prostate cancer. Two hundred and thirty-one patients (70 TZ training-cohort; 76 PZ training-cohort; 85 TZ temporal validation-cohort) underwent mp-MRI and transperineal-template-prostate-mapping biopsy. PZ and TZ uni/multi-variate mp-MRI LR-models for classification of significant cancer (any cancer-core-length (CCL) with Gleason > 3 + 3 or any grade with CCL ≥ 4 mm) were derived from the respective cohorts and validated within the same zone by leave-one-out analysis. Inter-zonal performance was tested by applying TZ models to the PZ training-cohort and vice versa. Classification performance of TZ models for TZ cancer was further assessed in the TZ validation-cohort. ROC area-under-curve (ROC-AUC) analysis was used to compare models. The univariate parameters with the best classification performance were the normalised T2 signal (T2nSI) within the TZ (ROC-AUC = 0.77) and the normalised early contrast-enhanced T1 signal (DCE-nSI) within the PZ (ROC-AUC = 0.79). Performance was not significantly improved by bi-variate/tri-variate modelling. PZ models that contained DCE-nSI performed poorly in classification of TZ cancer. The TZ model based solely on maximum-enhancement poorly classified PZ cancer. LR-models dependent on DCE-MRI parameters alone are not interchangeable between prostatic zones; however, models based exclusively on T2 and/or ADC are more robust for inter-zonal application. (orig.)
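
    The inter-zonal test above can be sketched schematically: fit one logistic-regression model per zone and compare ROC-AUC when a model scores its own zone versus the other. Everything below is synthetic; the point is only the shape of the experiment, with one "DCE-like" parameter informative in the PZ and one "T2-like" parameter informative in the TZ.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(6)

    def make_zone(informative_col):
        # Two mp-MRI-like parameters; only one drives cancer status in this zone.
        X = rng.normal(size=(300, 2))
        logits = 2.0 * X[:, informative_col]
        y = (rng.random(300) < 1 / (1 + np.exp(-logits))).astype(int)
        return X, y

    X_pz, y_pz = make_zone(0)   # "DCE-like" parameter drives PZ cancer
    X_tz, y_tz = make_zone(1)   # "T2-like" parameter drives TZ cancer

    pz_model = LogisticRegression().fit(X_pz, y_pz)
    auc_own = roc_auc_score(y_pz, pz_model.predict_proba(X_pz)[:, 1])
    auc_cross = roc_auc_score(y_tz, pz_model.predict_proba(X_tz)[:, 1])
    print(auc_own > auc_cross)  # → True: the zone model does not transfer
    ```

    The AUC drop under cross-zone application is the quantitative sense in which the abstract's models are "not interchangeable".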

  2. Methodological notes on model comparisons and strategy classification: A falsificationist proposition

    Morten Moshagen; Benjamin E. Hilbig

    2011-01-01

    Taking a falsificationist perspective, the present paper identifies two major shortcomings of existing approaches to comparative model evaluations in general and strategy classifications in particular. These are (1) failure to consider systematic error and (2) neglect of global model fit. Using adherence measures to evaluate competing models implicitly makes the unrealistic assumption that the error associated with the model predictions is entirely random. By means of simple schematic example...

  3. Comparison of Enzymes / Non-Enzymes Proteins Classification Models Based on 3D, Composition, Sequences and Topological Indices

    Munteanu, Cristian Robert

    2014-01-01

    Comparison of Enzymes / Non-Enzymes Proteins Classification Models Based on 3D, Composition, Sequences and Topological Indices, German Conference on Bioinformatics (GCB), Potsdam, Germany (September, 2007)

  4. Bag1 Co-chaperone Promotes TRC8 E3 Ligase-dependent Degradation of Misfolded Human Ether a Go-Go-related Gene (hERG) Potassium Channels.

    Hantouche, Christine; Williamson, Brittany; Valinsky, William C; Solomon, Joshua; Shrier, Alvin; Young, Jason C

    2017-02-10

    Cardiac long QT syndrome type 2 is caused by mutations in the human ether a go-go-related gene (hERG) potassium channel, many of which cause misfolding and degradation at the endoplasmic reticulum instead of normal trafficking to the cell surface. The Hsc70/Hsp70 chaperones assist the folding of the hERG cytosolic domains. Here, we demonstrate that the Hsp70 nucleotide exchange factor Bag1 promotes hERG degradation by the ubiquitin-proteasome system at the endoplasmic reticulum to regulate hERG levels and channel activity. Dissociation of hERG complexes containing Hsp70 and the E3 ubiquitin ligase CHIP requires the interaction of Bag1 with Hsp70, but this does not involve the Bag1 ubiquitin-like domain. The interaction with Bag1 then shifts hERG degradation to the membrane-anchored E3 ligase TRC8 and its E2-conjugating enzyme Ube2g2, as determined by siRNA screening. TRC8 interacts through the transmembrane region with hERG and decreases hERG functional expression. TRC8 also mediates degradation of the misfolded hERG-G601S disease mutant, but pharmacological stabilization of the mutant structure prevents degradation. Our results identify TRC8 as a previously unknown Hsp70-independent quality control E3 ligase for hERG. © 2017 by The American Society for Biochemistry and Molecular Biology, Inc.

  5. Music Genre Classification using the multivariate AR feature integration model

    Ahrendt, Peter; Meng, Anders

    2005-01-01

    … informative decisions about musical genre. For the MIREX music genre contest, several authors derive long-time features based either on statistical moments and/or on temporal structure in the short-time features. In our contribution we model a segment (1.2 s) of short-time features (texture) using a multivariate autoregressive model. Other authors have applied simpler statistical models such as the mean-variance model, which has also been included in several of this year's MIREX submissions; see e.g. Tzanetakis (2005); Burred (2005); Bergstra et al. (2005); Lidy and Rauber (2005).

  6. Model-based segmentation and classification of trajectories (Extended abstract)

    Alewijnse, S.P.A.; Buchin, K.; Buchin, M.; Sijben, S.; Westenberg, M.A.

    2014-01-01

    We present efficient algorithms for segmenting and classifying a trajectory based on a parameterized movement model like the Brownian bridge movement model. Segmentation is the problem of subdividing a trajectory into parts such that each part is homogeneous in its movement characteristics. We

  7. Aspect Based Classification Model for Social Reviews

    J. Mir

    2017-12-01

    Full Text Available Aspect based opinion mining investigates, in depth, the emotions related to particular aspects. Aspect and opinion word identification is the core task of aspect based opinion mining. In previous studies, aspect based opinion mining has been applied to service or product domains. Moreover, product reviews are short and simple, whereas social reviews are long and complex. This study introduces an efficient model for social reviews which classifies aspects and opinion words related to the social domain. The main contributions of this paper are the auto tagging and data training phase, the feature set definition, and dictionary usage. The proposed model's results are compared with the CR model and a Naïve Bayes classifier on the same dataset, achieving an accuracy of 98.17% and precision of 96.01%, while recall and F1 are 96.00% and 96.01%, respectively. The experimental results show that the proposed model performs better than the CR model and the Naïve Bayes classifier.
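The accuracy, precision, recall, and F1 figures quoted above follow the standard definitions; a minimal sketch of their computation from binary confusion counts (the counts below are illustrative, not derived from the paper's dataset):

```python
def binary_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from binary confusion counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Illustrative counts only (not the paper's data).
acc, prec, rec, f1 = binary_metrics(tp=96, fp=4, fn=4, tn=96)
print(acc, prec, rec, f1)  # each value is 0.96 (up to float rounding)
```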

  8. Empirical classification of resources in a business model concept

    Marko Seppänen

    2009-04-01

    The concept of the business model has been designed for aiding exploitation of the business potential of an innovation. This exploitation inevitably involves new activities in the organisational context and generates a need to select and arrange the resources of the firm in these new activities. A business model encompasses those resources that a firm has access to and aids a firm's effort to create a superior 'innovation capability'. Selecting and arranging resources to utilise innovations requires resource allocation decisions on multiple fronts and poses significant challenges for the management of innovations. Although current business model conceptualisations elucidate resources, explicit considerations of the composition and structure of those resources have remained ambiguous. As a result, current business model conceptualisations fail in their core purpose of assisting the decision-making that must consider resource allocation in exploiting business opportunities. This paper contributes to the existing discussion regarding the representation of resources as components in the business model concept. The categorised list of resources in business models is validated empirically, using two samples of managers in different positions in several industries. The results indicate that most of the theoretically derived resource items have their equivalents in the business language and concepts used by managers. Thus, the categorisation of the resource components enables further development of the business model concept and improves daily communication between managers and their subordinates. Future research could be targeted on linking these components of a business model with each other in order to obtain a model for assessing the performance of different business model configurations. Furthermore, different applications of the developed resource configuration may be envisioned.

  9. Modelling of additive manufacturing processes: a review and classification

    Stavropoulos, Panagiotis; Foteinopoulos, Panagis

    2018-03-01

    Additive manufacturing (AM) is a very promising technology; however, there are a number of open issues related to the different AM processes. The literature on modelling the existing AM processes is reviewed and classified. A categorization of the different AM processes in process groups, according to the process mechanism, has been conducted and the most important issues are stated. Suggestions are made as to which approach is more appropriate according to the key performance indicator desired to be modelled and a discussion is included as to the way that future modelling work can better contribute to improving today's AM process understanding.

  10. Do pre-trained deep learning models improve computer-aided classification of digital mammograms?

    Aboutalib, Sarah S.; Mohamed, Aly A.; Zuley, Margarita L.; Berg, Wendie A.; Luo, Yahong; Wu, Shandong

    2018-02-01

    Digital mammography screening is an important exam for the early detection of breast cancer and reduction in mortality. False positives leading to high recall rates, however, result in unnecessary negative consequences for patients and health care systems. In order to better aid radiologists, computer-aided tools can be utilized to improve distinction between image classifications and thus potentially reduce false recalls. The emergence of deep learning has shown promising results in the area of biomedical imaging data analysis. This study aimed to investigate deep learning and transfer learning methods that can improve digital mammography classification performance. In particular, we evaluated the effect of pre-training deep learning models with other imaging datasets in order to boost classification performance on a digital mammography dataset. Two types of datasets were used for pre-training: (1) a digitized film mammography dataset, and (2) a very large non-medical imaging dataset. By using either of these datasets to pre-train the network initially, and then fine-tuning with the digital mammography dataset, we found an increase in overall classification performance in comparison to a model without pre-training, with the very large non-medical dataset performing the best in improving the classification accuracy.

  11. Classification criteria of syndromes by latent variable models

    Petersen, Janne

    2010-01-01

    The thesis has two parts; one clinical part: studying the dimensions of human immunodeficiency virus associated lipodystrophy syndrome (HALS) by latent class models, and a more statistical part: investigating how to predict scores of latent variables so these can be used in subsequent regression analyses. … patient's characteristics. These methods may erroneously reduce multiplicity either by combining markers of different phenotypes or by mixing HALS with other processes such as aging. Latent class models identify homogenous groups of patients based on sets of variables, for example symptoms. As no gold standard exists for diagnosing HALS the normally applied diagnostic models cannot be used. Latent class models, which have never before been used to diagnose HALS, make it possible, under certain assumptions, to: statistically evaluate the number of phenotypes, test for mixing of HALS with other processes …

  12. A multi-class classification MCLP model with particle swarm ...

    A M Viswa Bharathy

    clearly show that the proposed model performs better in terms of detection rate, false .... ease the process of target recognition and detection in ... They performed packet-level simulation analysis in ns-2 ... validated using CPLEX and MATLAB.

  13. Pattern Classification Using an Olfactory Model with PCA Feature Selection in Electronic Noses: Study and Application

    Junbao Zheng

    2012-03-01

    Biologically-inspired models and algorithms are considered promising sensor array signal processing methods for electronic noses. Feature selection is one of the most important issues for developing robust pattern recognition models in machine learning. This paper describes an investigation into the classification performance of a bionic olfactory model with increasing dimensions of the input feature vector (outer factor) as well as increasing numbers of its parallel channels (inner factor). The principal component analysis technique was applied for feature selection and dimension reduction. Two data sets, of three classes of wine derived from different cultivars and of five classes of green tea derived from five different provinces of China, were used for the experiments. In the former case the results showed that the average correct classification rate increased as more principal components were put into the feature vector. In the latter case the results showed that sufficient parallel channels should be reserved in the model to avoid pattern space crowding. We concluded that 6~8 channels of the model, with a principal component feature vector capturing at least 90% cumulative variance, are adequate for a classification task of 3~5 pattern classes, considering the trade-off between time consumption and classification rate.
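Choosing principal components up to "at least 90% cumulative variance", as described above, amounts to keeping the smallest number of leading components whose explained-variance ratios sum past the threshold. A minimal sketch with illustrative ratios (the actual ratios would come from a PCA of the sensor data):

```python
def n_components_for_variance(explained_ratios, threshold=0.90):
    """Smallest number of leading principal components whose cumulative
    explained-variance ratio reaches the threshold."""
    cumulative = 0.0
    for k, ratio in enumerate(explained_ratios, start=1):
        cumulative += ratio
        if cumulative >= threshold:
            return k
    return len(explained_ratios)

# Illustrative explained-variance ratios (not from the paper).
ratios = [0.55, 0.20, 0.12, 0.06, 0.04, 0.03]
print(n_components_for_variance(ratios))  # 4
```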

  14. An Investigation of Feature Models for Music Genre Classification using the Support Vector Classifier

    Meng, Anders; Shawe-Taylor, John

    2005-01-01

    In music genre classification the decision time is typically of the order of several seconds, however most automatic music genre classification systems focus on short time features derived from 10-50 ms. This work investigates two models, the multivariate Gaussian model and the multivariate autoregressive model, for modelling short time features. Furthermore, it was investigated how these models can be integrated over a segment of short time features into a kernel such that a support vector machine can be applied. Two kernels with this property were considered, the convolution kernel and product probability kernel. In order to examine the different methods an 11 genre music setup was utilized. In this setup the Mel Frequency Cepstral Coefficients (MFCC) were used as short time features. The accuracy of the best performing model on this data set was 44% as compared to a human performance of 52 …

  15. MiR-17-5p impairs trafficking of H-ERG K+ channel protein by targeting multiple er stress-related chaperones during chronic oxidative stress.

    Qi Wang

    BACKGROUND: To investigate whether microRNAs (miRNAs) play a role in regulating h-ERG trafficking in the setting of chronic oxidative stress, a common deleterious factor for many cardiac disorders. METHODS: We treated neonatal rat ventricular myocytes and HEK293 cells with stable expression of h-ERG with H2O2 for 12 h and 48 h. Expression of miR-17-5p seed miRNAs was quantified by real-time RT-PCR. Protein levels of chaperones and h-ERG trafficking were measured by Western blot analysis. Luciferase reporter gene assay was used to study miRNA and target interactions. Whole-cell patch-clamp techniques were employed to record h-ERG K(+) current. RESULTS: H-ERG trafficking was impaired by H2O2 after 48 h treatment, accompanied by reciprocal changes of expression between miR-17-5p seed miRNAs and several chaperones (Hsp70, Hsc70, CANX, and Golga2), with the former upregulated and the latter downregulated. We established these chaperones as targets for miR-17-5p. Application of a miR-17-5p inhibitor rescued H2O2-induced impairment of h-ERG trafficking. Upregulation of endogenous miR-17-5p by H2O2 or forced miR-17-5p expression reduced h-ERG current. Sequestration of AP1 by its decoy molecule eliminated the upregulation of miR-17-5p and ameliorated the impairment of h-ERG trafficking. CONCLUSIONS: Collectively, deregulation of the miR-17-5p seed family miRNAs can cause severe impairment of h-ERG trafficking through targeting of multiple ER stress-related chaperones, and activation of AP1 likely accounts for the deleterious upregulation of these miRNAs in the setting of prolonged oxidative stress. These findings reveal the role of miRNAs in h-ERG trafficking, which may contribute to the cardiac electrical disturbances associated with oxidative stress.

  16. MiR-17-5p impairs trafficking of H-ERG K+ channel protein by targeting multiple er stress-related chaperones during chronic oxidative stress.

    Wang, Qi; Hu, Weina; Lei, Mingming; Wang, Yong; Yan, Bing; Liu, Jun; Zhang, Ren; Jin, Yuanzhe

    2013-01-01

    To investigate whether microRNAs (miRNAs) play a role in regulating h-ERG trafficking in the setting of chronic oxidative stress, a common deleterious factor for many cardiac disorders. We treated neonatal rat ventricular myocytes and HEK293 cells with stable expression of h-ERG with H2O2 for 12 h and 48 h. Expression of miR-17-5p seed miRNAs was quantified by real-time RT-PCR. Protein levels of chaperones and h-ERG trafficking were measured by Western blot analysis. Luciferase reporter gene assay was used to study miRNA and target interactions. Whole-cell patch-clamp techniques were employed to record h-ERG K(+) current. H-ERG trafficking was impaired by H2O2 after 48 h treatment, accompanied by reciprocal changes of expression between miR-17-5p seed miRNAs and several chaperones (Hsp70, Hsc70, CANX, and Golga2), with the former upregulated and the latter downregulated. We established these chaperones as targets for miR-17-5p. Application of a miR-17-5p inhibitor rescued H2O2-induced impairment of h-ERG trafficking. Upregulation of endogenous miR-17-5p by H2O2 or forced miR-17-5p expression reduced h-ERG current. Sequestration of AP1 by its decoy molecule eliminated the upregulation of miR-17-5p and ameliorated the impairment of h-ERG trafficking. Collectively, deregulation of the miR-17-5p seed family miRNAs can cause severe impairment of h-ERG trafficking through targeting of multiple ER stress-related chaperones, and activation of AP1 likely accounts for the deleterious upregulation of these miRNAs in the setting of prolonged oxidative stress. These findings reveal the role of miRNAs in h-ERG trafficking, which may contribute to the cardiac electrical disturbances associated with oxidative stress.

  17. Wearable-Sensor-Based Classification Models of Faller Status in Older Adults.

    Jennifer Howcroft

    Wearable sensors have potential for quantitative, gait-based, point-of-care fall risk assessment that can be easily and quickly implemented in clinical-care and older-adult living environments. This investigation generated models for wearable-sensor based fall-risk classification in older adults and identified the optimal sensor type, location, combination, and modelling method, for walking with and without a cognitive load task. A convenience sample of 100 older individuals (75.5 ± 6.7 years; 76 non-fallers, 24 fallers based on 6 month retrospective fall occurrence) walked 7.62 m under single-task and dual-task conditions while wearing pressure-sensing insoles and tri-axial accelerometers at the head, pelvis, and left and right shanks. Participants also completed the Activities-specific Balance Confidence scale, Community Health Activities Model Program for Seniors questionnaire, six minute walk test, and ranked their fear of falling. Fall risk classification models were assessed for all sensor combinations and three model types: multi-layer perceptron neural network, naïve Bayesian, and support vector machine. The best performing model was a multi-layer perceptron neural network with input parameters from pressure-sensing insoles and head, pelvis, and left shank accelerometers (accuracy = 84%, F1 score = 0.600, MCC score = 0.521). Head sensor-based models had the best performance of the single-sensor models for single-task gait assessment. Single-task gait assessment models outperformed models based on dual-task walking or clinical assessment data. Support vector machines and neural networks were the best modelling techniques for fall risk classification. Fall risk classification models developed for point-of-care environments should be developed using support vector machines and neural networks, with a multi-sensor single-task gait assessment.
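The MCC score reported above is the Matthews correlation coefficient, which is more informative than raw accuracy for the imbalanced faller/non-faller split. A sketch using confusion counts (tp=12, fp=4, fn=12, tn=72) chosen to be consistent with the reported accuracy (84%), F1 (0.600), and MCC (0.521) for 100 participants; the abstract does not give the actual confusion matrix, so these counts are hypothetical:

```python
import math

def matthews_corrcoef(tp, fp, fn, tn):
    """Matthews correlation coefficient from binary confusion counts;
    ranges from -1 to 1, with 0 at chance level."""
    numerator = tp * tn - fp * fn
    denominator = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return numerator / denominator if denominator else 0.0

# Hypothetical counts consistent with the reported accuracy/F1/MCC.
print(round(matthews_corrcoef(tp=12, fp=4, fn=12, tn=72), 3))  # 0.521
```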

  18. First steps towards a state classification in the random-field Ising model

    Basso, Vittorio; Magni, Alessandro; Bertotti, Giorgio

    2006-01-01

    The properties of locally stable states of the random-field Ising model are studied. A map is defined for the dynamics driven by the field starting from a locally stable state. The fixed points of the map are connected with the limit hysteresis loops that appear in the classification of the states
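The field-driven dynamics of locally stable states described above can be sketched under standard zero-temperature random-field Ising assumptions. The 1-D chain with free ends and unit ferromagnetic coupling used here are simplifying choices for this illustration, not details taken from the abstract:

```python
import random

def relax(spins, fields, H=0.0, J=1.0):
    """Zero-temperature single-spin-flip dynamics on a chain with free
    ends: any spin anti-aligned with its local effective field
    h_i + H + J*(sum of neighbours) is flipped, until the configuration
    is locally stable."""
    n = len(spins)
    spins = list(spins)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            local = fields[i] + H
            if i > 0:
                local += J * spins[i - 1]
            if i < n - 1:
                local += J * spins[i + 1]
            aligned = 1 if local >= 0 else -1
            if spins[i] != aligned:
                spins[i] = aligned
                changed = True
    return spins

random.seed(0)
fields = [random.uniform(-2.0, 2.0) for _ in range(8)]
state = relax([-1] * 8, fields)  # a locally stable state
print(state)
```

Sweeping the external field H from such a state, and recording where spins flip, traces out the hysteresis sub-loops that the classification of states is built on.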

  19. Real-time classification of humans versus animals using profiling sensors and hidden Markov tree model

    Hossen, Jakir; Jacobs, Eddie L.; Chari, Srikant

    2015-07-01

    Linear pyroelectric array sensors have enabled useful classifications of objects such as humans and animals to be performed with relatively low-cost hardware in border and perimeter security applications. Ongoing research has sought to improve the performance of these sensors through signal processing algorithms. In the research presented here, we introduce the use of hidden Markov tree (HMT) models for object recognition in images generated by linear pyroelectric sensors. HMTs are trained to statistically model the wavelet features of individual objects through an expectation-maximization learning process. Human versus animal classification for a test object is made by evaluating its wavelet features against the trained HMTs using the maximum-likelihood criterion. The classification performance of this approach is compared to two other techniques: a texture, shape, and spectral component features (TSSF) based classifier and a speeded-up robust feature (SURF) classifier. The evaluation indicates that among the three techniques, the wavelet-based HMT model works well, is robust, and has improved classification performance compared to a SURF-based algorithm in equivalent computation time. When compared to the TSSF-based classifier, the HMT model has a slightly degraded performance but almost an order of magnitude improvement in computation time enabling real-time implementation.

  20. Modeling time-to-event (survival) data using classification tree analysis.

    Linden, Ariel; Yarnold, Paul R

    2017-12-01

    Time to the occurrence of an event is often studied in health research. Survival analysis differs from other designs in that follow-up times for individuals who do not experience the event by the end of the study (called censored) are accounted for in the analysis. Cox regression is the standard method for analysing censored data, but the assumptions required of these models are easily violated. In this paper, we introduce classification tree analysis (CTA) as a flexible alternative for modelling censored data. Classification tree analysis is a "decision-tree"-like classification model that provides parsimonious, transparent (i.e., easy to visually display and interpret) decision rules that maximize predictive accuracy, derives exact P values via permutation tests, and evaluates model cross-generalizability. Using empirical data, we identify all statistically valid, reproducible, longitudinally consistent, and cross-generalizable CTA survival models and then compare their predictive accuracy to estimates derived via Cox regression and an unadjusted naïve model. Model performance is assessed using integrated Brier scores and a comparison between estimated survival curves. The Cox regression model best predicts average incidence of the outcome over time, whereas CTA survival models best predict either relatively high, or low, incidence of the outcome over time. Classification tree analysis survival models offer many advantages over Cox regression, such as explicit maximization of predictive accuracy, parsimony, statistical robustness, and transparency. Therefore, researchers interested in accurate prognoses and clear decision rules should consider developing models using the CTA-survival framework. © 2017 John Wiley & Sons, Ltd.
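The Brier scores used above for model assessment are, in their simplest (non-integrated) form, the mean squared difference between predicted event probabilities and observed binary outcomes; the integrated version averages this quantity over follow-up time with a censoring adjustment. A minimal sketch of the plain form with made-up predictions:

```python
def brier_score(predicted_probs, outcomes):
    """Mean squared difference between predicted event probabilities
    and observed binary outcomes (0/1); lower is better."""
    pairs = list(zip(predicted_probs, outcomes))
    return sum((p - y) ** 2 for p, y in pairs) / len(pairs)

# Illustrative predictions and outcomes (not the paper's data).
print(round(brier_score([0.9, 0.2, 0.7, 0.1], [1, 0, 1, 0]), 4))  # 0.0375
```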

  1. Classification of scalar and dyadic nonlocal optical response models

    Wubs, Martijn

    2015-01-01

    Nonlocal optical response is one of the emerging effects on the nanoscale for particles made of metals or doped semiconductors. Here we classify and compare both scalar and tensorial nonlocal response models. In the latter case the nonlocality can stem from either the longitudinal response...

  2. From the harmonic oscillator to the A-D-E classification of conformal models

    Itzykson, C.

    1988-01-01

    Arithmetical aspects of the solutions of systems involving two-dimensional statistical models and conformal field theory are discussed. From this perspective, the analysis of the harmonic oscillator, the free particle in a box, and rational billiards is carried out. Moreover, a description of the classification of minimal conformal models and Wess-Zumino-Witten models, based on the simplest affine algebra, is also given. Attempts to interpret and justify the appearance of the A-D-E classification of algebras in the W-Z-W model are made. Extensions of the W-Z-W model, based on SU(N) level one, and the ways to deal with rank two Lie groups, using the arithmetic of quadratic integers, are described

  3. Finite mixture models for sub-pixel coastal land cover classification

    Ritchie, Michaela C

    2017-05-01

    Presentation slides: "Finite Mixture Models for Sub-pixel Coastal Land Cover Classification", M. Ritchie, M. Lück-Vogel, P. Debba, V. Goodall; ISRSE-37, Tshwane, South Africa, 10 May 2017. The slides cover a study area at Strand and Gordon's Bay on False Bay, South Africa, mapped with WorldView-2 imagery; land cover classes include Urban, Herbaceous Vegetation, Shadow, Sparse Vegetation, Water, and Woody Vegetation; classifiers compared include Maximum Likelihood Classification (MLC), Gaussian Mixture Discriminant Analysis (GMDA), and t-distribution Mixture Discriminant …

  4. Development of a definition, classification system, and model for cultural geology

    Mitchell, Lloyd W., III

    The concept for this study is based upon a personal interest by the author, an American Indian, in promoting cultural perspectives in undergraduate college teaching and learning environments. Most academicians recognize that merged fields can enhance undergraduate curricula. However, conflict may occur when instructors attempt to merge social science fields such as history or philosophy with geoscience fields such as mining and geomorphology. For example, ideologies of Earth structures derived from scientific methodologies may conflict with historical and spiritual understandings of Earth structures held by American Indians. Specifically, this study addresses the problem of how to combine cultural studies with the geosciences into a new merged academic discipline called cultural geology. This study further attempts to develop the merged field of cultural geology using an approach consisting of three research foci: a definition, a classification system, and a model. Literature reviews were conducted for all three foci. Additionally, to better understand merged fields, a literature review was conducted specifically for academic fields that merged social and physical sciences. Methodologies concentrated on the three research foci: definition, classification system, and model. The definition was derived via a two-step process. The first step, developing keyword hierarchical ranking structures, was followed by creating and analyzing semantic word meaning lists. The classification system was developed by reviewing 102 classification systems and incorporating selected components into a system framework. The cultural geology model was created also utilizing a two-step process. A literature review of scientific models was conducted. Then, the definition and classification system were incorporated into a model felt to reflect the realm of cultural geology. A course syllabus was then developed that incorporated the resulting definition, classification system, and model. 

  5. Initial VHTR accident scenario classification: models and data.

    Vilim, R. B.; Feldman, E. E.; Pointer, W. D.; Wei, T. Y. C.; Nuclear Engineering Division

    2005-09-30

    Nuclear systems codes are being prepared for use as computational tools for conducting performance/safety analyses of the Very High Temperature Reactor. The thermal-hydraulic codes are RELAP5/ATHENA for one-dimensional systems modeling and FLUENT and/or Star-CD for three-dimensional modeling. We describe a formal qualification framework, the development of Phenomena Identification and Ranking Tables (PIRTs), the initial filtering of the experiment databases, and a preliminary screening of these codes for use in the performance/safety analyses. In the second year of this project we focused on development of PIRTS. Two events that result in maximum fuel and vessel temperatures, the Pressurized Conduction Cooldown (PCC) event and the Depressurized Conduction Cooldown (DCC) event, were selected for PIRT generation. A third event that may result in significant thermal stresses, the Load Change event, is also selected for PIRT generation. Gas reactor design experience and engineering judgment were used to identify the important phenomena in the primary system for these events. Sensitivity calculations performed with the RELAP5 code were used as an aid to rank the phenomena in order of importance with respect to the approach of plant response to safety limits. The overall code qualification methodology was illustrated by focusing on the Reactor Cavity Cooling System (RCCS). The mixed convection mode of heat transfer and pressure drop is identified as an important phenomenon for Reactor Cavity Cooling System (RCCS) operation. Scaling studies showed that the mixed convection mode is likely to occur in the RCCS air duct during normal operation and during conduction cooldown events. The RELAP5/ATHENA code was found to not adequately treat the mixed convection regime. Readying the code will require adding models for the turbulent mixed convection regime while possibly performing new experiments for the laminar mixed convection regime. 
Candidate correlations for the turbulent …

  6. SVM classification model in depression recognition based on mutation PSO parameter optimization

    Zhang Ming

    2017-01-01

    At present, the clinical diagnosis of depression is made mainly through structured interviews by psychiatrists, which lack objective diagnostic methods and therefore lead to a higher rate of misdiagnosis. In this paper, a method of depression recognition based on SVM and a mutation particle swarm optimization algorithm is proposed. To address the problem that the particle swarm optimization (PSO) algorithm easily becomes trapped in local optima, we propose a feedback mutation PSO algorithm (FBPSO) to balance local search and global exploration ability, so that the parameters of the classification model are optimal. We compared the classification accuracy for depression of different PSO mutation algorithms, and found that the classification accuracy of a support vector machine (SVM) classifier based on the feedback mutation PSO algorithm is the highest. Our study provides an important reference for establishing auxiliary diagnostic methods for depression recognition in clinical practice.
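A minimal sketch of PSO with a mutation step, in the spirit of the FBPSO idea above. The abstract does not specify the feedback mutation operator, so the occasional coordinate re-randomization below is a stand-in assumption, and the sphere test function replaces the real SVM-parameter objective:

```python
import random

def mutation_pso(f, dim, n_particles=20, iters=300, seed=1,
                 w=0.7, c1=1.5, c2=1.5, p_mut=0.1):
    """Minimize f over [-5, 5]^dim with basic PSO plus a mutation step
    that occasionally re-randomizes a coordinate to escape local optima."""
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # per-particle best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
                if rng.random() < p_mut:        # mutation step
                    pos[i][d] = rng.uniform(lo, hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, best_val = mutation_pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)  # typically very close to 0 for this smooth test function
```

In the paper's setting, each position would encode SVM hyperparameters (e.g. C and a kernel width) and f would be cross-validated classification error.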

  7. Application of a niche-based model for forest cover classification

    Amici V

    2012-05-01

    In recent years, a surge of interest in biodiversity conservation has led to the development of new approaches to facilitate ecologically-based conservation policies and management plans. In particular, image classification and predictive distribution modelling applied to forest habitats constitute a crucial issue, as forests are the most widespread vegetation type and play a key role in ecosystem functioning. The general purpose of this study is therefore to develop a framework that, in the absence of large amounts of field data for large areas, may allow selection of the most appropriate classification. In some cases a hard division of classes is required, especially as support to environmental policies; despite this, it is necessary to take into account problems which derive from a crisp view of the ecological entities being mapped, since habitats are expected to be structurally complex and to vary continuously within a landscape. In this paper, a niche model (MaxEnt), generally used to estimate species/habitat distributions, has been applied to classify forest cover in a complex Mediterranean area and to estimate the probability distribution of four forest types, producing continuous maps of forest cover. The use of the obtained models to validate crisp classifications highlighted that crisp classification, which is continuously used in landscape research and planning, is not free from drawbacks, as it shows a high degree of inner variability. The modelling approach followed by this study, taking into account the uncertainty proper to natural ecosystems and the use of environmental variables in land cover classification, may represent a useful approach to making field inventories more efficient and effective and to developing effective forest conservation policies.

  8. Classification criteria of syndromes by latent variable models

    Petersen, Janne

    2010-01-01

    , although this is often desired. I have proposed a new method for predicting class membership that, in contrast to methods based on posterior probabilities of class membership, yields consistent estimates when regressed on explanatory variables in a subsequent analysis. There are four different basic models...... analyses. Part 1: HALS engages different phenotypic changes of peripheral lipoatrophy and central lipohypertrophy.  There are several different definitions of HALS and no consensus on the number of phenotypes. Many of the definitions consist of counting fulfilled criteria on markers and do not include...

  9. Social Media Text Classification by Enhancing Well-Formed Text Trained Model

    Phat Jotikabukkana

    2016-09-01

    Social media are a powerful communication tool in our era of digital information. The large amount of user-generated data is a useful novel source of data, even though it is not easy to extract the treasures from this vast and noisy trove. Since classification is an important part of text mining, many techniques have been proposed to classify this kind of information. We developed an effective technique for social media text classification by semi-supervised learning, utilizing an online news source consisting of well-formed text. The computer first automatically extracts news categories, well-categorized by publishers, as classes for topic classification. A bag of words taken from news articles provides the initial keywords related to each category in the form of word vectors. The principal task is to retrieve a set of new productive keywords. Term Frequency-Inverse Document Frequency weighting (TF-IDF) and the Word Article Matrix (WAM) are used as the main methods. A modification of WAM is recomputed until it becomes the most effective model for social media text classification. The key success factor was enhancing our model with effective keywords from social media. A promising result of 99.50% accuracy was achieved, with more than 98.5% Precision, Recall, and F-measure after updating the model three times.
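The TF-IDF weighting used above can be sketched in a few lines: term frequency within a document scaled by the log inverse document frequency across the collection. The toy documents below are illustrative; real input would be tokenized news articles:

```python
import math
from collections import Counter

def tfidf(docs):
    """Per-document TF-IDF weights: tf(t, d) * log(N / df(t))."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)               # raw term counts in this document
        weights.append({t: tf[t] / len(doc) * math.log(n / df[t])
                        for t in tf})
    return weights

# Toy tokenized documents.
docs = [["news", "politics", "vote"],
        ["news", "sports", "goal"],
        ["vote", "politics", "election"]]
weights = tfidf(docs)
print(round(weights[0]["politics"], 3))  # 0.135 (term appears in 2 of 3 docs)
```

Terms appearing in every document get weight 0, which is what pushes category-specific keywords to the top.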

  10. Reconstruction of hyperspectral image using matting model for classification

    Xie, Weiying; Li, Yunsong; Ge, Chiru

    2016-05-01

    Although hyperspectral images (HSIs) captured by satellites provide much information in spectral regions, some bands are redundant or have large amounts of noise, which are not suitable for image analysis. To address this problem, we introduce a method for reconstructing the HSI with noise reduction and contrast enhancement using a matting model for the first time. The matting model refers to each spectral band of an HSI that can be decomposed into three components, i.e., alpha channel, spectral foreground, and spectral background. First, one spectral band of an HSI with more refined information than most other bands is selected, and is referred to as an alpha channel of the HSI to estimate the hyperspectral foreground and hyperspectral background. Finally, a combination operation is applied to reconstruct the HSI. In addition, the support vector machine (SVM) classifier and three sparsity-based classifiers, i.e., orthogonal matching pursuit (OMP), simultaneous OMP, and OMP based on first-order neighborhood system weighted classifiers, are utilized on the reconstructed HSI and the original HSI to verify the effectiveness of the proposed method. Specifically, using the reconstructed HSI, the average accuracy of the SVM classifier can be improved by as much as 19%.
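The matting model above follows the standard per-pixel compositing equation I = a*F + (1 - a)*B, and reconstruction recombines the alpha channel with the estimated spectral foreground and background. A toy per-pixel sketch with made-up values:

```python
def composite(alpha, foreground, background):
    """Recombine matting components per pixel: I = a*F + (1 - a)*B."""
    return [a * f + (1 - a) * b
            for a, f, b in zip(alpha, foreground, background)]

# Toy single-band "image" of three pixels (values are illustrative).
alpha = [1.0, 0.5, 0.0]   # alpha channel
fore = [0.8, 0.8, 0.8]    # spectral foreground
back = [0.2, 0.2, 0.2]    # spectral background
print(composite(alpha, fore, back))  # [0.8, 0.5, 0.2]
```

In the paper's setting this recombination is applied band by band, with the alpha channel taken from the single most refined spectral band.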

  11. OmniGA: Optimized Omnivariate Decision Trees for Generalizable Classification Models

Magana-Mora, Arturo; Bajic, Vladimir B.

    2017-06-14

    Classification problems from different domains vary in complexity, size, and imbalance of the number of samples from different classes. Although several classification models have been proposed, selecting the right model and parameters for a given classification task to achieve good performance is not trivial. Therefore, there is a constant interest in developing novel robust and efficient models suitable for a great variety of data. Here, we propose OmniGA, a framework for the optimization of omnivariate decision trees based on a parallel genetic algorithm, coupled with deep learning structure and ensemble learning methods. The performance of the OmniGA framework is evaluated on 12 different datasets taken mainly from biomedical problems and compared with the results obtained by several robust and commonly used machine-learning models with optimized parameters. The results show that OmniGA systematically outperformed these models for all the considered datasets, reducing the F score error in the range from 100% to 2.25%, compared to the best performing model. This demonstrates that OmniGA produces robust models with improved performance. OmniGA code and datasets are available at www.cbrc.kaust.edu.sa/omniga/.
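The genetic-algorithm component can be illustrated with a deliberately tiny sketch of our own; OmniGA itself evolves omnivariate decision-tree node models, which is far more involved. Here a minimal GA (truncation selection, blend crossover, Gaussian mutation) merely searches for a good 1-D decision threshold on made-up data.

```python
import random

def ga_optimize(fitness, pop_size=20, gens=30, seed=1):
    """Tiny real-valued GA: truncation selection, blend crossover, Gaussian
    mutation. The fittest half survives each generation (elitism)."""
    rng = random.Random(seed)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, 0.05)  # crossover + mutation
            children.append(min(max(child, 0.0), 1.0))
        pop = parents + children
    return max(pop, key=fitness)

# Toy task: find a split threshold separating two 1-D classes.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [0, 0, 0, 0, 1, 1, 1, 1]
acc = lambda t: sum(int(x > t) == c for x, c in zip(X, y)) / len(X)
best = ga_optimize(acc)
print(round(acc(best), 3))  # thresholds in (0.4, 0.6) achieve accuracy 1.0
```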

  13. Model-based object classification using unification grammars and abstract representations

    Liburdy, Kathleen A.; Schalkoff, Robert J.

    1993-04-01

    The design and implementation of a high level computer vision system which performs object classification is described. General object labelling and functional analysis require models of classes which display a wide range of geometric variations. A large representational gap exists between abstract criteria such as `graspable' and current geometric image descriptions. The vision system developed and described in this work addresses this problem and implements solutions based on a fusion of semantics, unification, and formal language theory. Object models are represented using unification grammars, which provide a framework for the integration of structure and semantics. A methodology for the derivation of symbolic image descriptions capable of interacting with the grammar-based models is described and implemented. A unification-based parser developed for this system achieves object classification by determining if the symbolic image description can be unified with the abstract criteria of an object model. Future research directions are indicated.

  14. [Establishment of Schatzker classification digital models of tibial plateau fractures and its application on virtual surgery].

    Liu, Yong-gang; Zuo, Li-xin; Pei, Guo-xian; Dai, Ke; Sang, Jing-wei

    2013-08-20

To explore the establishment of Schatzker classification digital models of tibial plateau fractures and their application in virtual surgery. The proximal tibia of one healthy male volunteer was examined with 64-slice spiral computed tomography (CT). The data were processed with the software Mimics 10.01 and a model of the proximal tibia was reconstructed. According to the Schatzker classification criteria for tibial plateau fractures, each type of fracture model was simulated. Screen captures of each fracture model were saved from different directions, and each model was exported in video mode. The fracture models were imported into the FreeForm modeling system, where a surgeon could conduct virtual fracture operation simulation with a force-feedback device. Utilizing the GHOST toolkit of the FreeForm modeling system, software for virtual cutting, fracture reduction, and fixation was developed. With the force-feedback device PHANTOM, a surgeon could manipulate virtual surgical instruments and the fracture classification model and simulate surgical actions such as assembly of surgical instruments, drilling, implantation of screws, reduction of the fracture, bone grafting, and fracture fixation. The digital fracture models were intuitive, three-dimensional, and realistic, with excellent visual effect. Fractures could be observed and charted from any direction and angle, and each fracture model could be rotated 360° in the corresponding video mode. The virtual surgical environment had a strong sense of reality, immersion, and telepresence, as well as good interaction and force-feedback function in the FreeForm modeling system. The user could make the corresponding decisions about surgical method and choice of internal fixation according to the specific type of tibial plateau fracture, and could practice the operation repeatedly in the virtual surgery system. The digital fracture model of the Schatzker classification is intuitive, three-dimensional, realistic, and dynamic. The virtual surgery systems of Schatzker classifications make

  15. Introduction of a methodology for visualization and graphical interpretation of Bayesian classification models.

    Balfer, Jenny; Bajorath, Jürgen

    2014-09-22

    Supervised machine learning models are widely used in chemoinformatics, especially for the prediction of new active compounds or targets of known actives. Bayesian classification methods are among the most popular machine learning approaches for the prediction of activity from chemical structure. Much work has focused on predicting structure-activity relationships (SARs) on the basis of experimental training data. By contrast, only a few efforts have thus far been made to rationalize the performance of Bayesian or other supervised machine learning models and better understand why they might succeed or fail. In this study, we introduce an intuitive approach for the visualization and graphical interpretation of naïve Bayesian classification models. Parameters derived during supervised learning are visualized and interactively analyzed to gain insights into model performance and identify features that determine predictions. The methodology is introduced in detail and applied to assess Bayesian modeling efforts and predictions on compound data sets of varying structural complexity. Different classification models and features determining their performance are characterized in detail. A prototypic implementation of the approach is provided.
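The kind of parameter inspection described here can be sketched for a Bernoulli naïve Bayes model: the Laplace-smoothed per-feature log-odds learned during training are exactly the quantities one would visualize to see which features drive a prediction. The fragment features and compound sets below are invented for illustration; this is not the authors' tool.

```python
import math

def nb_log_odds(actives, inactives, vocab, alpha=1.0):
    """Laplace-smoothed per-feature log-odds of a Bernoulli naive Bayes
    model: positive values push a prediction toward 'active'. Sorting or
    plotting these weights mirrors the graphical interpretation step."""
    weights = {}
    for f in vocab:
        p = (sum(f in d for d in actives) + alpha) / (len(actives) + 2 * alpha)
        q = (sum(f in d for d in inactives) + alpha) / (len(inactives) + 2 * alpha)
        weights[f] = math.log(p / q)
    return weights

# Hypothetical structural-fragment features for four toy compounds.
actives = [{"ring", "amine"}, {"ring", "halogen"}]
inactives = [{"acid"}, {"acid", "halogen"}]
w = nb_log_odds(actives, inactives, {"ring", "amine", "acid", "halogen"})
top = max(w, key=w.get)
print(top)  # "ring": present in every active and no inactive
```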

  16. Cross-validation pitfalls when selecting and assessing regression and classification models.

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
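The repeated V-fold cross-validation idea can be reduced to a schematic sketch: score each grid point on several independent V-fold partitions and average, so that the choice of split no longer dominates the estimate. The "model" below is a trivial 1-D threshold with no fitting step, chosen so the sketch stays self-contained; it is not the paper's algorithm and is no substitute for nested CV when assessing the selected model.

```python
import random

def v_fold_splits(n, v, seed):
    """One shuffled V-fold partition of indices 0..n-1."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::v] for i in range(v)]

def repeated_cv_accuracy(predict, X, y, v, repeats):
    """Mean accuracy over `repeats` independent V-fold partitions: repeating
    the split reduces the variance contributed by any single partition."""
    scores = []
    for r in range(repeats):
        for test in v_fold_splits(len(X), v, seed=r):
            hits = sum(predict(X[i]) == y[i] for i in test)
            scores.append(hits / len(test))
    return sum(scores) / len(scores)

# Grid-search a 1-D decision threshold by repeated 4-fold CV.
X = [0.1, 0.2, 0.3, 0.4, 0.6, 0.7, 0.8, 0.9]
y = [0, 0, 0, 0, 1, 1, 1, 1]
grid = [0.25, 0.5, 0.75]
best = max(grid, key=lambda t: repeated_cv_accuracy(
    lambda x: int(x > t), X, y, v=4, repeats=5))
print(best)  # 0.5
```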

  17. Comparison of models of automatic classification of textural patterns of mineral presents in Colombian coals

    Lopez Carvajal, Jaime; Branch Bedoya, John Willian

    2005-01-01

The automatic classification of objects is of interest in many problem domains. This paper outlines results obtained with different classification models used to categorize textural patterns of minerals in real digital images. The data set used was small and contained noise. The implemented models were a Bayesian classifier, a neural network (2-5-1), a support vector machine, a decision tree, and 3-nearest neighbors. The results after applying cross-validation show that the Bayesian model (84%) had better predictive capacity than the others, mainly due to its robustness to noise. The neural network (68%) and the SVM (67%) gave promising results that could likely be improved by increasing the amount of data used, while the decision tree (55%) and k-NN (54%) did not seem adequate for this problem because of their sensitivity to noise.

  18. Improved functional expression of recombinant human ether-a-go-go (hERG) K+ channels by cultivation at reduced temperature

    Hamilton Bruce

    2007-12-01

Background: hERG potassium channel blockade is the major cause of drug-induced long QT syndrome, which can lead to cardiac dysrhythmias and sudden death. There is therefore strong interest in the pharmaceutical industry in developing high-quality medium- to high-throughput assays for detecting compounds with potential cardiac liability at the earliest stages of drug development. Cultivation of cells at lower temperature has been used to improve the folding and membrane localization of trafficking-defective hERG mutant proteins. The objective of this study was to investigate the effect of lower-temperature maintenance on wild-type hERG expression and assay performance. Results: Wild-type hERG was stably expressed in CHO-K1 cells, with the majority of channel protein located in the cytoplasm and relatively little on the cell surface. Expression at both locations was increased several-fold by cultivation at lower growth temperatures. Intracellular hERG protein levels were highest at 27°C, which correlated with maximal 3H-dofetilide binding activity. In contrast, the expression of functionally active cell-surface-associated hERG, measured by patch-clamp electrophysiology, was optimal at 30°C. The majority of the cytoplasmic hERG protein was associated with the membranes of cytoplasmic vesicles, which markedly increased in quantity and size at lower temperatures or in the presence of the Ca2+-ATPase inhibitor thapsigargin. Incubation with the endocytic trafficking blocker nocodazole led to an increase in hERG activity at 37°C, but not at 30°C. Conclusion: Our results are consistent with the concept that maintenance of cells at reduced temperature can be used to boost the functional expression of difficult-to-express membrane proteins and to improve the quality of assays for medium- to high-throughput compound screening. In addition, these results shed some light on the trafficking of hERG protein under these growth conditions.

  19. Voltage-sensing domain mode shift is coupled to the activation gate by the N-terminal tail of hERG channels.

    Tan, Peter S; Perry, Matthew D; Ng, Chai Ann; Vandenberg, Jamie I; Hill, Adam P

    2012-09-01

    Human ether-a-go-go-related gene (hERG) potassium channels exhibit unique gating kinetics characterized by unusually slow activation and deactivation. The N terminus of the channel, which contains an amphipathic helix and an unstructured tail, has been shown to be involved in regulation of this slow deactivation. However, the mechanism of how this occurs and the connection between voltage-sensing domain (VSD) return and closing of the gate are unclear. To examine this relationship, we have used voltage-clamp fluorometry to simultaneously measure VSD motion and gate closure in N-terminally truncated constructs. We report that mode shifting of the hERG VSD results in a corresponding shift in the voltage-dependent equilibrium of channel closing and that at negative potentials, coupling of the mode-shifted VSD to the gate defines the rate of channel closure. Deletion of the first 25 aa from the N terminus of hERG does not alter mode shifting of the VSD but uncouples the shift from closure of the cytoplasmic gate. Based on these observations, we propose the N-terminal tail as an adaptor that couples voltage sensor return to gate closure to define slow deactivation gating in hERG channels. Furthermore, because the mode shift occurs on a time scale relevant to the cardiac action potential, we suggest a physiological role for this phenomenon in maximizing current flow through hERG channels during repolarization.

  20. Optimal Non-Invasive Fault Classification Model for Packaged Ceramic Tile Quality Monitoring Using MMW Imaging

    Agarwal, Smriti; Singh, Dharmendra

    2016-04-01

Millimeter-wave (MMW) frequency has emerged as an efficient tool for different stand-off imaging applications. In this paper, we have dealt with a novel MMW imaging application, i.e., non-invasive packaged-goods quality estimation for industrial quality monitoring. An active MMW imaging radar operating at 60 GHz was designed for concealed fault estimation. Ceramic tiles covered with commonly used packaging cardboard were used as concealed targets for undercover fault classification. State-of-the-art computer-vision feature extraction techniques, viz., discrete Fourier transform (DFT), wavelet transform (WT), principal component analysis (PCA), gray-level co-occurrence texture (GLCM), and histogram of oriented gradients (HOG), were compared with respect to their capability to generate efficient and discriminative feature vectors for undercover target fault classification. An extensive number of experiments were performed with different ceramic tile fault configurations, viz., vertical crack, horizontal crack, random crack, and diagonal crack, along with non-faulty tiles. Further, an independent algorithm validation was done, demonstrating classification accuracies of 80, 86.67, 73.33, and 93.33% for DFT, WT, PCA, GLCM, and HOG feature-based artificial neural network (ANN) classifier models, respectively. Classification results show good capability of the HOG feature extraction technique for non-destructive quality inspection, with appreciably low false alarm compared to the other techniques. Thereby, a robust and optimal image-feature-based neural network classification model has been proposed for non-invasive, automatic fault monitoring for financially and commercially competent industrial growth.

  1. Probabilistic topic modeling for the analysis and classification of genomic sequences

    2015-01-01

Background: Studies on genomic sequences for classification and taxonomic identification have a leading role in the biomedical field and in the analysis of biodiversity. These studies focus on the so-called barcode genes, representing a well-defined region of the whole genome. Recently, alignment-free techniques have been gaining importance because they are able to overcome the drawbacks of sequence alignment techniques. In this paper a new alignment-free method for DNA sequence clustering and classification is proposed. The method is based on k-mer representation and text mining techniques. Methods: The presented method is based on Probabilistic Topic Modeling, a statistical technique originally proposed for text documents. Probabilistic topic models are able to find in a document corpus the topics (recurrent themes) characterizing classes of documents. This technique, applied to DNA sequences representing the documents, exploits the frequency of fixed-length k-mers and builds a generative model for a training group of sequences. This generative model, obtained through the Latent Dirichlet Allocation (LDA) algorithm, is then used to classify a large set of genomic sequences. Results and conclusions: We performed classification of over 7000 16S DNA barcode sequences taken from the Ribosomal Database Project (RDP) repository, training probabilistic topic models. The proposed method is compared to the RDP tool and a Support Vector Machine (SVM) classification algorithm in an extensive set of trials using both complete sequences and short sequence snippets (from 400 bp to 25 bp). Our method reaches results very similar to the RDP classifier and SVM for complete sequences. The most interesting results are obtained when short sequence snippets are considered. In these conditions the proposed method outperforms RDP and SVM with ultra-short sequences and exhibits a smooth decrease of performance, at every taxonomic level, when the sequence length is decreased.
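The k-mer "bag of words" representation on which the topic model is built can be sketched directly. The snippet below is our own toy classifier with invented two-class sequences: it uses raw k-mer overlap as a crude stand-in for the LDA topic-inference step, just to make the document analogy concrete.

```python
from collections import Counter

def kmers(seq, k=3):
    """Decompose a DNA sequence into overlapping fixed-length k-mers,
    the "words" of the sequence-as-document representation."""
    return [seq[i:i + k] for i in range(len(seq) - k + 1)]

# Toy training "corpus": two classes with distinct k-mer usage.
train = {"classA": "ATATATATATGCGC", "classB": "GGGGCCCCGGGGCC"}
profiles = {c: Counter(kmers(s)) for c, s in train.items()}

def classify(seq):
    counts = Counter(kmers(seq))
    # Score by shared k-mer counts (a crude stand-in for topic inference).
    def overlap(c):
        return sum(min(n, profiles[c][m]) for m, n in counts.items())
    return max(profiles, key=overlap)

print(classify("ATATAT"))  # dominated by AT-rich k-mers -> classA
```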
PMID:25916734

  2. Comparing the performance of flat and hierarchical Habitat/Land-Cover classification models in a NATURA 2000 site

    Gavish, Yoni; O'Connell, Jerome; Marsh, Charles J.; Tarantino, Cristina; Blonda, Palma; Tomaselli, Valeria; Kunin, William E.

    2018-02-01

The increasing need for high-quality Habitat/Land-Cover (H/LC) maps has triggered considerable research into novel machine-learning based classification models. In many cases, H/LC classes follow pre-defined hierarchical classification schemes (e.g., CORINE), in which fine H/LC categories are thematically nested within more general categories. However, none of the existing machine-learning algorithms account for this pre-defined hierarchical structure. Here we introduce a novel Random Forest (RF) based application of hierarchical classification, which fits a separate local classification model at every branching point of the thematic tree and then integrates all the different local models into a single global prediction. We applied the hierarchical RF approach in a NATURA 2000 site in Italy, using two land-cover (CORINE, FAO-LCCS) and one habitat classification scheme (EUNIS) that differ from one another in the shape of the class hierarchy. For all three classification schemes, both the hierarchical model and a flat-model alternative provided accurate predictions, with kappa values mostly above 0.9 (despite using only 2.2-3.2% of the study area as training cells). The flat approach slightly outperformed the hierarchical models when the hierarchy was relatively simple, while the hierarchical model worked better under more complex thematic hierarchies. Most misclassifications came from habitat pairs that are thematically distant yet spectrally similar. In two out of three classification schemes, the additional constraints of the hierarchical model resulted in fewer such serious misclassifications relative to the flat model. The hierarchical model also provided valuable information on variable importance, which can shed light on "black-box" machine learning algorithms like RF. We suggest various ways by which hierarchical classification models can increase the accuracy and interpretability of H/LC classification maps.
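The local-classifier-per-branching-point structure can be sketched as a walk down a thematic tree. The hierarchy, features, and threshold rules below are invented stand-ins for the paper's local Random Forests; the point is only the control flow, in which fine classes are distinguished solely within their parent branch.

```python
# Each internal node of the thematic tree holds its own local classifier;
# prediction walks the tree until it reaches a leaf label.
def predict_hierarchical(tree, x):
    node = tree
    while "classify" in node:
        node = node["children"][node["classify"](x)]
    return node["label"]

# Toy H/LC hierarchy: vegetated vs. non-vegetated, then finer classes,
# using threshold rules as stand-ins for the local Random Forests.
tree = {
    "classify": lambda x: "veg" if x["ndvi"] > 0.3 else "nonveg",
    "children": {
        "veg": {
            "classify": lambda x: "forest" if x["height"] > 5 else "grass",
            "children": {"forest": {"label": "forest"},
                         "grass": {"label": "grassland"}},
        },
        "nonveg": {"label": "bare/urban"},
    },
}
print(predict_hierarchical(tree, {"ndvi": 0.6, "height": 12}))  # forest
```

One appeal of this structure, noted in the abstract, is interpretability: each local model can report its own variable importance for just the distinction it makes.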

  3. Evaluation of soft segment modeling on a context independent phoneme classification system

    Razzazi, F.; Sayadiyan, A.

    2007-01-01

The geometric distribution of state durations is one of the main performance-limiting assumptions of hidden Markov modeling of speech signals. Stochastic segment models generally, and segmental HMMs specifically, overcome this deficiency partly, at the cost of more complexity in both the training and recognition phases. In addition to this assumption, the gradual temporal changes of speech statistics have not been modeled in HMMs. In this paper, a new duration modeling approach is presented. The main idea of the model is to consider the effect of adjacent segments on the probability density function estimation and evaluation of each acoustic segment. This idea not only makes the model robust against segmentation errors, but also models the gradual change from one segment to the next with a minimum set of parameters. The proposed idea is analytically formulated and tested on a TIMIT-based context-independent phoneme classification system. During the test procedure, phoneme classification of different phoneme classes was performed by applying the various proposed recognition algorithms. The system was optimized and the results were compared with a continuous density hidden Markov model (CDHMM) of similar computational complexity. The results show 8-10% improvement in phoneme recognition rate in comparison with the standard continuous density hidden Markov model. This indicates improved compatibility of the proposed model with the nature of speech. (author)

  4. A unified model for context-based behavioural modelling and classification.

    Dabrowski, JJ

    2015-11-01

…the continuous dynamics of an entity and incorporating various contextual elements that influence behaviour. The entity is classified according to its behaviour. Classification is expressed as a conditional probability of the entity class given its tracked...

  5. Chromatographic profiles of Phyllanthus aqueous extracts samples: a proposition of classification using chemometric models.

    Martins, Lucia Regina Rocha; Pereira-Filho, Edenir Rodrigues; Cass, Quezia Bezerra

    2011-04-01

Taking into consideration the global analysis of complex samples proposed by the metabolomic approach, the chromatographic fingerprint provides an attractive chemical characterization of herbal medicines. Thus, it can be used as a tool in quality control analysis of phytomedicines. The generated multivariate data are better evaluated by chemometric analyses, and they can be modeled by classification methods. "Stone breaker" is a popular Brazilian plant of the Phyllanthus genus, used worldwide to treat renal calculus, hepatitis, and many other diseases. In this study, gradient elution under reversed-phase conditions with detection in the ultraviolet region was used to obtain chemical profiles (fingerprints) of botanically identified samples of six Phyllanthus species. The obtained chromatograms, at 275 nm, were organized in data matrices, and the time shifts of peaks were adjusted using the Correlation Optimized Warping algorithm. Principal Component Analyses were performed to evaluate similarities among cultivated and uncultivated samples and the discrimination among the species; after that, the samples were used to compose three classification models using Soft Independent Modeling of Class Analogy, K-Nearest Neighbors, and Partial Least Squares for Discriminant Analysis. The ability of the classification models was demonstrated by their successful application to the authenticity evaluation of 25 commercial samples of "stone breaker."

  6. A novel transferable individual tree crown delineation model based on Fishing Net Dragging and boundary classification

    Liu, Tao; Im, Jungho; Quackenbush, Lindi J.

    2015-12-01

This study provides a novel approach to individual tree crown delineation (ITCD) using airborne Light Detection and Ranging (LiDAR) data in dense natural forests, using two main steps: crown boundary refinement based on a proposed Fishing Net Dragging (FiND) method, and segment merging based on boundary classification. FiND starts with approximate tree crown boundaries derived using a traditional watershed method with Gaussian filtering and refines these boundaries using an algorithm that mimics how a fisherman drags a fishing net. Random forest machine learning is then used to classify boundary segments into two classes: boundaries between trees and boundaries between branches that belong to a single tree. Three groups of LiDAR-derived features were used in the classification: two from the pseudo waveform generated along with the crown boundaries and one from a canopy height model (CHM). The proposed ITCD approach was tested using LiDAR data collected over a mountainous region in the Adirondack Park, NY, USA. Overall accuracy of boundary classification was 82.4%. Features derived from the CHM were generally more important in the classification than features extracted from the pseudo waveform. A comprehensive accuracy assessment scheme for ITCD was also introduced, considering both the area of crown overlap and the crown centroids. Accuracy assessment using this new scheme shows the proposed ITCD approach achieved overall accuracies of 74% and 78% for deciduous and mixed forests, respectively.

  7. Statistical Fractal Models Based on GND-PCA and Its Application on Classification of Liver Diseases

    Huiyan Jiang

    2013-01-01

A new method is proposed to establish a statistical fractal model for liver disease classification. First, fractal theory is used to construct a high-order tensor, and Generalized N-dimensional Principal Component Analysis (GND-PCA) is used to establish the statistical fractal model and select features from the liver region, with different features given different weights. Finally, a Support Vector Machine optimized by Ant Colony Optimization (ACO-SVM) is used to build the classifier for the recognition of liver disease. To verify the effectiveness of the proposed method, the PCA eigenface method and a standard SVM were chosen as contrast methods. The experimental results show that the proposed method reconstructs liver volume better and improves the classification accuracy of liver diseases.

  8. Estimating Classification Errors under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    Boeschoten, Laura; Oberski, Daniel; De Waal, Ton

    2017-01-01

    Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible

  9. Research on evaluating water resource resilience based on projection pursuit classification model

    Liu, Dong; Zhao, Dan; Liang, Xu; Wu, Qiuchen

    2016-03-01

Water is a fundamental natural resource, and agricultural water use underpins grain output, so the utilization and management of water resources have significant practical importance. Regional agricultural water resource systems are unpredictable, self-organizing, and non-linear, which makes evaluating their resilience difficult. Current research on water resource resilience remains focused on qualitative analysis, and quantitative analysis is still at a primary stage; to address these issues, a projection pursuit classification model is proposed. With the help of the artificial fish-swarm algorithm (AFSA), the model optimizes the projection index function and seeks the optimal projection direction, and AFSA itself is improved through a self-adaptive artificial fish step and crowding factor. Taking the Hongxinglong Administration of Heilongjiang as the study area, and building on the improved AFSA, a projection pursuit classification model for evaluating agricultural water resource system resilience was established, following an analysis of projection pursuit classification modeling with an accelerating genetic algorithm. The research shows that the water resource resilience of Hongxinglong is the highest, followed by Raohe Farm and, last, 597 Farm. Further analysis shows that the key driving factors influencing agricultural water resource resilience are precipitation and agricultural water consumption. The results reveal the state of the local water resource system's resilience, providing a foundation for agricultural water resource management.

  10. Setting a generalized functional linear model (GFLM) for the classification of different types of cancer

    Miguel Flores

    2016-11-01

This work aims to classify DNA sequences from healthy and malignant cancer tissue, respectively. For this, supervised and unsupervised classification methods from a functional-data context are used, i.e., each strand of DNA is an observation. Because the observations are discretized, different ways to represent them as functions are evaluated. In addition, an exploratory study is done, estimating the functional mean and variance for each type of cancer. For unsupervised classification, hierarchical clustering with different measures of functional distance is used. For supervised classification, a functional generalized linear model is used, in which the first and second derivatives are included as discriminating variables. It has been verified that one of the advantages of working in the functional context is obtaining a model that classifies the cancers correctly 100% of the time. The methods were implemented with the fda.usc R package, which includes the functional data analysis techniques used in this work, as well as others developed in recent decades. For more details of these techniques, consult Ramsay and Silverman (2005) and Ferraty et al. (2006).

  11. A bayesian hierarchical model for classification with selection of functional predictors.

    Zhu, Hongxiao; Vannucci, Marina; Cox, Dennis D

    2010-06-01

In functional data classification, functional observations are often contaminated by various systematic effects, such as random batch effects caused by device artifacts, or fixed effects caused by sample-related factors. These effects may lead to classification bias and thus should not be neglected. Another issue of concern is the selection of functions when predictors consist of multiple functions, some of which may be redundant. The above issues arise in a real data application where we use fluorescence spectroscopy to detect cervical precancer. In this article, we propose a Bayesian hierarchical model that takes into account random batch effects and selects effective functions among multiple functional predictors. Fixed effects or predictors in nonfunctional form are also included in the model. The dimension of the functional data is reduced through orthonormal basis expansion or functional principal components. For posterior sampling, we use a hybrid Metropolis-Hastings/Gibbs sampler, which suffers from slow mixing. An evolutionary Monte Carlo algorithm is applied to improve the mixing. Simulation and real data application show that the proposed model provides accurate selection of functional predictors as well as good classification.

  12. Video event classification and image segmentation based on noncausal multidimensional hidden Markov models.

    Ma, Xiang; Schonfeld, Dan; Khokhar, Ashfaq A

    2009-06-01

    In this paper, we propose a novel solution to an arbitrary noncausal, multidimensional hidden Markov model (HMM) for image and video classification. First, we show that the noncausal model can be solved by splitting it into multiple causal HMMs and simultaneously solving each causal HMM using a fully synchronous distributed computing framework, therefore referred to as distributed HMMs. Next we present an approximate solution to the multiple causal HMMs that is based on an alternating updating scheme and assumes a realistic sequential computing framework. The parameters of the distributed causal HMMs are estimated by extending the classical 1-D training and classification algorithms to multiple dimensions. The proposed extension to arbitrary causal, multidimensional HMMs allows state transitions that are dependent on all causal neighbors. We, thus, extend three fundamental algorithms to multidimensional causal systems, i.e., 1) expectation-maximization (EM), 2) general forward-backward (GFB), and 3) Viterbi algorithms. In the simulations, we choose to limit ourselves to a noncausal 2-D model whose noncausality is along a single dimension, in order to significantly reduce the computational complexity. Simulation results demonstrate the superior performance, higher accuracy rate, and applicability of the proposed noncausal HMM framework to image and video classification.

  13. Fuzzy Continuous Review Inventory Model using ABC Multi-Criteria Classification Approach: A Single Case Study

    Meriastuti - Ginting

    2015-07-01

    Full Text Available Abstract. Inventory is considered the most expensive, yet important, asset of many companies, representing approximately 50% of total investment. Inventory cost has become one of the major contributors to inefficiency; therefore, it should be managed effectively. This study aims to propose an alternative inventory model, using an ABC multi-criteria classification approach to minimize total cost. By combining FANP (Fuzzy Analytical Network Process) and TOPSIS (Technique of Order Preference by Similarity to the Ideal Solution), the ABC multi-criteria classification approach identified 12 of 69 inventory items as an "outstanding important class" that contributed 80% of total inventory cost. This finding is then used as the basis to determine the proposed continuous review inventory model. This study found that by using fuzzy trapezoidal cost, the inventory turnover ratio can be increased and inventory cost can be decreased by 78% for each item in "class A" inventory. Keywords: ABC multi-criteria classification, FANP-TOPSIS, continuous review inventory model, lead-time demand distribution, trapezoidal fuzzy number
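
The cost-share cut that assigns 12 of 69 items to class A is the classical ABC step underneath the multi-criteria extension. A minimal single-criterion sketch is below; item names, costs, and cut-offs are invented for illustration, and the paper's FANP-TOPSIS weighting is not reproduced:

```python
def abc_classify(costs, a_cut=0.80, b_cut=0.95):
    """Rank items by annual cost and assign A/B/C by cumulative cost share."""
    total = sum(costs.values())
    ranked = sorted(costs, key=costs.get, reverse=True)
    classes, running = {}, 0.0
    for item in ranked:
        running += costs[item] / total
        classes[item] = "A" if running <= a_cut else ("B" if running <= b_cut else "C")
    return classes

# Hypothetical annual inventory costs per item
costs = {"pump": 500_000, "valve": 250_000, "seal": 150_000,
         "bolt": 70_000, "gasket": 30_000}
classes = abc_classify(costs)
print(classes)
```

With these numbers the two most expensive items cover 75% of total cost and land in class A, mirroring the Pareto-style concentration the abstract reports.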

  14. Using ELM-based weighted probabilistic model in the classification of synchronous EEG BCI.

    Tan, Ping; Tan, Guan-Zheng; Cai, Zi-Xing; Sa, Wei-Ping; Zou, Yi-Qun

    2017-01-01

    Extreme learning machine (ELM) is an effective machine learning technique with simple theory and fast implementation, which has recently gained increasing interest from various research fields. A new method that combines ELM with a probabilistic model method is proposed in this paper to classify electroencephalography (EEG) signals in a synchronous brain-computer interface (BCI) system. In the proposed method, the softmax function is used to convert the ELM output to classification probability. The Chernoff error bound, deduced from the Bayesian probabilistic model in the training process, is adopted as the weight in the discriminant process. Since the proposed method makes use of the knowledge from all preceding training datasets, its discriminating performance improves cumulatively. In test experiments based on datasets from BCI competitions, the proposed method is compared with other classification methods, including linear discriminant analysis, support vector machine, ELM and weighted probabilistic model methods. For comparison, the mutual information, classification accuracy and information transfer rate are considered as the evaluation indicators for these classifiers. The results demonstrate that our method shows competitive performance against the other methods.
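
The ELM-plus-softmax step described above can be sketched in a few lines: random hidden-layer weights, least-squares output weights, and a softmax to turn outputs into class probabilities. The data and dimensions below are invented stand-ins for EEG features, and the Chernoff-bound weighting is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class ELM:
    def __init__(self, n_hidden=40):
        self.n_hidden = n_hidden

    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid layer

    def fit(self, X, y):
        # Hidden weights are random and never trained (the ELM idea);
        # only the output weights are solved by least squares.
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = rng.normal(size=self.n_hidden)
        T = np.eye(y.max() + 1)[y]            # one-hot targets
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ T     # least-squares output weights
        return self

    def predict_proba(self, X):
        return softmax(self._hidden(X) @ self.beta)

# Two well-separated Gaussian blobs as stand-in "EEG feature" classes
X = np.vstack([rng.normal(-2, 1, (50, 4)), rng.normal(2, 1, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
model = ELM().fit(X, y)
acc = (model.predict_proba(X).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```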

  15. Generative embedding for model-based classification of fMRI data.

    Kay H Brodersen

    2011-06-01

    Full Text Available Decoding models, such as those underlying multivariate classification algorithms, have been increasingly used to infer cognitive or clinical brain states from measures of brain activity obtained by functional magnetic resonance imaging (fMRI. The practicality of current classifiers, however, is restricted by two major challenges. First, due to the high data dimensionality and low sample size, algorithms struggle to separate informative from uninformative features, resulting in poor generalization performance. Second, popular discriminative methods such as support vector machines (SVMs rarely afford mechanistic interpretability. In this paper, we address these issues by proposing a novel generative-embedding approach that incorporates neurobiologically interpretable generative models into discriminative classifiers. Our approach extends previous work on trial-by-trial classification for electrophysiological recordings to subject-by-subject classification for fMRI and offers two key advantages over conventional methods: it may provide more accurate predictions by exploiting discriminative information encoded in 'hidden' physiological quantities such as synaptic connection strengths; and it affords mechanistic interpretability of clinical classifications. Here, we introduce generative embedding for fMRI using a combination of dynamic causal models (DCMs and SVMs. We propose a general procedure of DCM-based generative embedding for subject-wise classification, provide a concrete implementation, and suggest good-practice guidelines for unbiased application of generative embedding in the context of fMRI. We illustrate the utility of our approach by a clinical example in which we classify moderately aphasic patients and healthy controls using a DCM of thalamo-temporal regions during speech processing. Generative embedding achieves a near-perfect balanced classification accuracy of 98% and significantly outperforms conventional activation-based and

  16. Local anesthetic interaction with human ether-a-go-go-related gene (HERG) channels: role of aromatic amino acids Y652 and F656

    Siebrands, Cornelia C; Schmitt, Nicole; Friederich, Patrick

    2005-01-01

    was to determine the effect of the mutations Y652A and F656A in the putative drug binding region of HERG on the inhibition by bupivacaine, ropivacaine, and mepivacaine. METHODS: The authors examined the inhibition of wild-type and mutant HERG channels, transiently expressed in Chinese hamster ovary cells...... by bupivacaine, ropivacaine, and mepivacaine. Whole cell patch clamp recordings were performed at room temperature. RESULTS: Inhibition of HERG wild-type and mutant channels by the different local anesthetics was concentration dependent, stereoselective, and reversible. The sensitivity decreased in the order...... bupivacaine > ropivacaine > mepivacaine for wild-type and mutant channels. The mutant channels were approximately 4-30 times less sensitive to the inhibitory action of the different local anesthetics than the wild-type channel. The concentration-response data were described by Hill functions (bupivacaine...
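
The Hill functions used above to describe the concentration-response data can be fitted as follows. The IC50, Hill coefficient, and data points here are synthetic placeholders, not the paper's measured values for any of the local anesthetics:

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, ic50, n):
    """Fractional block of current at concentration c (Hill equation)."""
    return c**n / (ic50**n + c**n)

# Synthetic concentration-response data (illustrative only)
conc = np.array([1, 3, 10, 30, 100, 300], dtype=float)  # micromolar
true_ic50, true_n = 20.0, 1.0
rng = np.random.default_rng(1)
block = hill(conc, true_ic50, true_n) + rng.normal(0, 0.02, conc.size)

(ic50_fit, n_fit), _ = curve_fit(hill, conc, block, p0=(10.0, 1.0))
print(f"fitted IC50 = {ic50_fit:.1f} uM, Hill coefficient = {n_fit:.2f}")
```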

  17. Oxycodone is associated with dose-dependent QTc prolongation in patients and low-affinity inhibiting of hERG activity in vitro

    Fanoe, Søren; Jensen, Gorm Boje; Sjøgren, Per

    2008-01-01

    with the use of these drugs. WHAT THIS PAPER ADDS: This study is the first to show that oxycodone dose is associated with QT prolongation and in vitro blockade of hERG channels expressed in HEK293. Neither morphine nor tramadol doses are associated with the QT interval length. AIMS: During recent years some...... and TdP could be a more general problem associated with the use of these drugs. The aims of this study were to evaluate the association between different opioids and the QTc among patients and measure hERG activity under influence by opioids in vitro. METHODS: One hundred chronic nonmalignant pain...... patients treated with methadone, oxycodone, morphine or tramadol were recruited in a cross-sectional study. The QTc was estimated from a 12-lead ECG. To examine hERG activity in the presence of oxycodone, electrophysiological testing was conducted using Xenopus laevis oocytes and HEK293 cells expressing h...

  18. Acute and Chronic Toxicity, Cytochrome P450 Enzyme Inhibition, and hERG Channel Blockade Studies with a Polyherbal, Ayurvedic Formulation for Inflammation

    Debendranath Dey

    2015-01-01

    Full Text Available Ayurvedic plants have been known for thousands of years to have anti-inflammatory and antiarthritic effects. We have recently shown that BV-9238, a proprietary formulation of Withania somnifera, Boswellia serrata, Zingiber officinale, and Curcuma longa, inhibits LPS-induced TNF-alpha and nitric oxide production from mouse macrophages and reduces inflammation in different animal models. To evaluate the safety parameters of BV-9238, we conducted a cytotoxicity study in RAW 264.7 cells (0.005–1 mg/mL) by the MTT/formazan method, an acute single-dose (2–10 g/kg bodyweight) toxicity study, and a 180-day chronic study with 1 and 2 g/kg bodyweight in Sprague Dawley rats. Some sedation, ptosis, and ataxia were observed for the first 15–20 min at very high acute doses, and these doses were therefore not used for further chronic studies. At the end of 180 days, gross and histopathology, blood cell counts, and liver and renal functions were all at normal levels. Further, a modest attempt was made to assess the effects of BV-9238 (0.5 µg/mL) on six major human cytochrome P450 enzymes and in a 3H radioligand binding assay with human hERG receptors. BV-9238 did not show any significant inhibition of these enzymes at the tested dose. All this suggests that BV-9238 has potential as a safe and well-tolerated anti-inflammatory formulation for future use.

  19. SAR Imagery Simulation of Ship Based on Electromagnetic Calculations and Sea Clutter Modelling for Classification Applications

    Ji, K F; Zhao, Z; Xing, X W; Zou, H X; Zhou, S L

    2014-01-01

    Ship detection and classification with space-borne SAR has many potential applications in maritime surveillance, fishery activity management, ship traffic monitoring, and military security. While ship detection techniques with SAR imagery are well established, ship classification is still an open issue. One of the main reasons may be ascribed to the difficulty of acquiring the required quantities of real vessel data under different observation and environmental conditions with precise ground truth. Therefore, simulation of SAR images with high scenario flexibility and reasonable computation costs is essential for the development of ship classification algorithms. However, the simulation of SAR imagery of a ship over the sea surface is challenging: though great efforts have been devoted to this difficult problem, it is far from solved. This paper proposes a novel scheme for SAR imagery simulation of a ship over the sea surface. The simulation is implemented with the high-frequency electromagnetic calculation methods PO, MEC, PTD and GO. SAR imagery of sea clutter is modelled by the representative K-distribution clutter model. The simulated SAR imagery of the ship is then produced by inserting simulated SAR imagery chips of the ship into the SAR imagery of sea clutter. The proposed scheme has been validated with canonical and complex ship targets over a typical sea scene.
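
The K-distribution clutter model cited above is commonly sampled as a compound process: a gamma-distributed texture modulating exponentially distributed speckle power. A minimal sketch, with the shape parameter and grid size invented for illustration:

```python
import numpy as np

def k_clutter(shape, nu, rng):
    """Sample K-distributed clutter intensity as a compound model:
    unit-mean gamma texture multiplying unit-mean exponential speckle."""
    texture = rng.gamma(nu, 1.0 / nu, size=shape)   # spatial texture component
    speckle = rng.exponential(1.0, size=shape)      # fast speckle component
    return texture * speckle

rng = np.random.default_rng(42)
clutter = k_clutter((256, 256), nu=1.5, rng=rng)
print(f"mean intensity ~ {clutter.mean():.2f}")  # unit mean by construction
```

Smaller values of the shape parameter nu produce spikier clutter, which is the regime where the K-distribution departs most from simple Rayleigh speckle.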

  20. Butterfly Classification by HSI and RGB Color Models Using Neural Networks

    Jorge E. Grajales-Múnera

    2013-11-01

    Full Text Available This study aims at the classification of butterfly species through the implementation of neural networks and image processing. A total of 9 species of the genus Morpho, which is characterized by its blue color, are processed. For butterfly segmentation we used image processing tools such as binarization, edge processing and mathematical morphology. For data processing, RGB values are obtained for every image and converted to the HSI color model to identify blue pixels and obtain the data for the proposed neural networks: back-propagation and perceptron. For analysis and verification of results, confusion matrices are built and analyzed for the neural networks with the lowest error levels. We obtain error levels close to 1% in the classification of some butterfly species.
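
The RGB-to-HSI conversion used to isolate blue pixels can be sketched with the standard geometric formulas; the sample color below is invented:

```python
import numpy as np

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB in [0, 1] to HSI (hue in degrees)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    if b > g:                      # hue lies in the lower half of the circle
        h = 360.0 - h
    return h, s, i

h, s, i = rgb_to_hsi(0.1, 0.2, 0.9)  # a saturated blue
print(f"H = {h:.0f} deg, S = {s:.2f}, I = {i:.2f}")
```

A blue-pixel mask like the one the study needs would then keep pixels whose hue falls in a band around 240 degrees with sufficient saturation.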

  1. The role of hERG1 ion channels in epithelial-mesenchymal transition and the capacity of riluzole to reduce cisplatin resistance in colorectal cancer cells.

    Fortunato, Angelo

    2017-08-01

    The transition of cells from the epithelial to the mesenchymal state (EMT) plays an important role in tumor progression. EMT allows cells to acquire mobility, stem-like behavior and resistance to apoptosis and drug treatment. These features turn EMT into a central process in tumor biology. Ion channels are attractive targets for the treatment of cancer since they play critical roles in controlling a wide range of physiological processes that are frequently deregulated in cancer. Here, we investigated the role of ether-a-go-go-related 1 (hERG1) ion channels in the EMT of colorectal cancer cells. We studied the epithelial-mesenchymal profile of different colorectal cancer-derived cell lines and the expression of hERG1 potassium channels in these cell lines using real-time PCR. Next, we knocked down hERG1 expression in HCT116 cells using lentivirus-mediated RNA interference and characterized the hERG1-silenced cells in vitro and in vivo. Finally, we investigated the capacity of riluzole, an ion channel-modulating drug used in humans to treat amyotrophic lateral sclerosis, to reduce the resistance of the respective colorectal cancer cells to the chemotherapeutic drug cisplatin. We found that of the colorectal cancer-derived cell lines tested, HCT116 showed the highest mesenchymal profile and a high hERG1 expression. Subsequent hERG1 expression knockdown induced a change in cell morphology, which was accompanied by a reduction in the proliferative and tumorigenic capacities of the cells. Notably, we found that hERG1 expression knockdown elicited a reversion of the EMT profile in HCT116 cells with a reacquisition of the epithelial-like profile. We also found that riluzole increased the sensitivity of HCT116 cisplatin-resistant cells to cisplatin. Our data indicate that hERG1 plays a role in the EMT of colorectal cancer cells and that its knockdown reduces the proliferative and tumorigenic capacities of these cells. In addition, we conclude that riluzole may be used in

  2. Validation and Clinical Utility of the hERG IC50:Cmax Ratio to Determine the Risk of Drug-Induced Torsades de Pointes: A Meta-Analysis.

    Lehmann, David F; Eggleston, William D; Wang, Dongliang

    2018-03-01

    Use of the QT interval corrected for heart rate (QTc) on the electrocardiogram (ECG) to predict torsades de pointes (TdP) risk from culprit drugs is neither sensitive nor specific. The ratio of the half-maximum inhibitory concentration of the hERG channel (hERG IC50) to the peak serum concentration of unbound drug (Cmax) is used during drug development to screen out chemical entities likely to cause TdP. To validate the use of the hERG IC50:Cmax ratio to predict TdP risk from a culprit drug by its correlation with TdP incidence. Medline (between 1966 and March 2017) was accessed for hERG IC50 and Cmax values from the antihistamine, fluoroquinolone, and antipsychotic classes to identify cases of drug-induced TdP. Exposure to a culprit drug was estimated from annual revenues reported by the manufacturer. Inclusion criteria for TdP cases were provision of an ECG tracing that demonstrated QTc prolongation with TdP and normal serum values of potassium, calcium, and magnesium. Cases reported in patients with a prior rhythm disturbance and those involving a drug interaction were excluded. The Meta-Analysis of Observational Studies in Epidemiology checklist was used for epidemiological data extraction by two authors. Negligible risk drugs were defined by an hERG IC50:Cmax ratio that correlated with less than a 5% chance of one TdP event for every 100 million exposures (relative risk [RR] 1.0). The hERG IC50:Cmax ratio correlated with TdP risk (0.312; 95% confidence interval 0.205-0.476); negligible risk corresponded to a ratio of 80 (RR 1.0). The RR from olanzapine is on par with loratadine; ziprasidone is comparable with ciprofloxacin. Drugs with an RR greater than 50 include astemizole, risperidone, haloperidol, and thioridazine. The hERG IC50:Cmax ratio was correlated with TdP incidence for culprit drugs. This validation provides support for the potential use of the hERG IC50:Cmax ratio for clinical decision making in instances of drug selection where TdP risk is a concern. © 2018
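
The screening ratio itself is a one-line computation. A hedged sketch with invented IC50, Cmax, and unbound-fraction values (only the threshold of 80 comes from the abstract, as its reported RR 1.0 cut-off):

```python
def herg_safety_ratio(herg_ic50_um, cmax_um, fraction_unbound=1.0):
    """Ratio of hERG IC50 to peak unbound plasma concentration (same units)."""
    return herg_ic50_um / (cmax_um * fraction_unbound)

# Hypothetical drug: IC50 = 12 uM, total Cmax = 0.05 uM, 60% unbound
ratio = herg_safety_ratio(herg_ic50_um=12.0, cmax_um=0.05, fraction_unbound=0.6)
print(f"hERG IC50:Cmax ratio = {ratio:.0f}")

# Compare against the paper's negligible-risk threshold
print("negligible TdP risk" if ratio >= 80 else "flag for TdP review")
```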

  3. Model of Numerical Spatial Classification for Sustainable Agriculture in Badung Regency and Denpasar City, Indonesia

    Trigunasih, N. M.; Lanya, I.; Subadiyasa, N. N.; Hutauruk, J.

    2018-02-01

    The increasing number and activity of the population to meet their daily needs greatly affect the utilization of land resources. Land requirements for population activities continue to grow, while the availability of land is limited; therefore, land use changes. As a result, the problems faced are land degradation and the conversion of agricultural land to non-agricultural uses. The objectives of this research are: (1) to determine the parameters of spatial numerical classification of sustainable food agriculture in Badung Regency and Denpasar City; (2) to project the food balance in Badung Regency and Denpasar City in 2020, 2030, 2040, and 2050; (3) to specify the function of spatial numerical classification in building a zonation model of sustainable agricultural land in Badung Regency and Denpasar City; and (4) to determine an appropriate model to protect sustainable agricultural land on spatial and temporal scales in Badung and Denpasar. The methods used in this research were quantitative, including: survey, soil analysis, spatial data development, geoprocessing analysis (spatial overlay and proximity analysis), interpolation of raster digital elevation model data, and visualization (cartography). Qualitative methods consisted of literature studies and interviews. A total of 11 parameters were observed in Badung Regency and 9 in Denpasar City. The numerical classification parameter analysis used the standard deviation and mean of the population data, and the projected relationship of rice fields to the food balance was modelled. The results showed that the number of numerical classification parameters differs between the rural area (Badung) and the urban area (Denpasar); in the urban area the number of parameters is smaller than in the rural area. Weighting and scores based on the numerical classification generate population distribution parameter analysis results of a standard

  4. A theory of fine structure image models with an application to detection and classification of dementia.

    O'Neill, William; Penn, Richard; Werner, Michael; Thomas, Justin

    2015-06-01

    Estimation of stochastic process models from data is a common application of time series analysis methods. Such system identification processes are often cast as hypothesis testing exercises whose intent is to estimate model parameters and test them for statistical significance. Ordinary least squares (OLS) regression and the Levenberg-Marquardt algorithm (LMA) have proven invaluable computational tools for models being described by non-homogeneous, linear, stationary, ordinary differential equations. In this paper we extend stochastic model identification to linear, stationary, partial differential equations in two independent variables (2D) and show that OLS and LMA apply equally well to these systems. The method employs an original nonparametric statistic as a test for the significance of estimated parameters. We show gray scale and color images are special cases of 2D systems satisfying a particular autoregressive partial difference equation which estimates an analogous partial differential equation. Several applications to medical image modeling and classification illustrate the method by correctly classifying demented and normal OLS models of axial magnetic resonance brain scans according to subject Mini Mental State Exam (MMSE) scores. Comparison with 13 image classifiers from the literature indicates our classifier is at least 14 times faster than any of them and has a classification accuracy better than all but one. Our modeling method applies to any linear, stationary, partial differential equation and the method is readily extended to 3D whole-organ systems. Further, in addition to being a robust image classifier, estimated image models offer insights into which parameters carry the most diagnostic image information and thereby suggest finer divisions could be made within a class. Image models can be estimated in milliseconds which translate to whole-organ models in seconds; such runtimes could make real-time medicine and surgery modeling possible.
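
The 2-D autoregressive identification step can be illustrated by synthesizing an image from a known causal partial difference equation and recovering its coefficients by OLS. The coefficients, grid size, and noise level below are invented, and this toy uses only two causal neighbors rather than the full neighborhood the paper describes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthesize a 64x64 "image" from a known causal 2-D autoregression
a_true, b_true = 0.5, 0.4
img = np.zeros((64, 64))
noise = rng.normal(0, 0.1, img.shape)
for i in range(1, 64):
    for j in range(1, 64):
        img[i, j] = a_true * img[i, j - 1] + b_true * img[i - 1, j] + noise[i, j]

# OLS: regress each interior pixel on its left and upper neighbors
y = img[1:, 1:].ravel()
X = np.column_stack([img[1:, :-1].ravel(), img[:-1, 1:].ravel()])
(a_hat, b_hat), *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated coefficients: a = {a_hat:.2f}, b = {b_hat:.2f}")
```

In the classification setting described above, the fitted coefficient vector (rather than the pixels themselves) becomes the feature supplied to the classifier.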

  5. New binding site on common molecular scaffold provides HERG channel specificity of scorpion toxin BeKm-1

    Korolkova, Yuliya V; Bocharov, Eduard V; Angelo, Kamilla

    2002-01-01

    The scorpion toxin BeKm-1 is unique among a variety of known short scorpion toxins affecting potassium channels in its selective action on ether-a-go-go-related gene (ERG)-type channels. BeKm-1 shares the common molecular scaffold with other short scorpion toxins. The toxin spatial structure...... resolved by NMR consists of a short alpha-helix and a triple-stranded antiparallel beta-sheet. By toxin mutagenesis study we identified the residues that are important for the binding of BeKm-1 to the human ERG K+ (HERG) channel. The most critical residues (Tyr-11, Lys-18, Arg-20, Lys-23) are located...

  6. Towards low carbon business park energy systems: Classification of techno-economic energy models

    Timmerman, Jonas; Vandevelde, Lieven; Van Eetvelde, Greet

    2014-01-01

    To mitigate climate destabilisation, human-induced greenhouse gas emissions urgently need to be curbed. A major share of these emissions originates from the industry and energy sectors. Hence, a low carbon shift in industrial and business park energy systems is called for. Low carbon business parks minimise energy-related carbon dioxide emissions by maximal exploitation of local renewable energy production, enhanced energy efficiency, and inter-firm heat exchange, combined in a collective energy system. The holistic approach of techno-economic energy models facilitates the design of such systems, while yielding an optimal trade-off between energetic, economic and environmental performances. However, no models custom-tailored for industrial park energy systems are detected in literature. In this paper, existing energy model classifications are scanned for adequate model characteristics and accordingly, a confined number of models are selected and described. Subsequently, a practical typology is proposed, existing of energy system evolution, optimisation, simulation, accounting and integration models, and key model features are compared. Finally, important features for a business park energy model are identified. - Highlights: • A holistic perspective on (low carbon) business park energy systems is introduced. • A new categorisation of techno-economic energy models is proposed. • Model characteristics are described per model category. • Essential model features for business park energy system modelling are identified. • A strategy towards a techno-economic energy model for business parks is proposed

  7. A Hierarchical Feature Extraction Model for Multi-Label Mechanical Patent Classification

    Jie Hu

    2018-01-01

    Full Text Available Various studies have focused on feature extraction methods for automatic patent classification in recent years. However, most of these approaches are based on knowledge from experts in related domains. Here we propose a hierarchical feature extraction model (HFEM) for multi-label mechanical patent classification, which is able to capture both local features of phrases as well as global and temporal semantics. First, an n-gram feature extractor based on convolutional neural networks (CNNs) is designed to extract salient local lexical-level features. Next, a long-dependency feature extraction model based on the bidirectional long short-term memory (BiLSTM) neural network model is proposed to capture sequential correlations from higher-level sequence representations. Then the HFEM algorithm and its hierarchical feature extraction architecture are detailed. We establish the training, validation and test datasets, containing 72,532, 18,133, and 2679 mechanical patent documents, respectively, and then evaluate the performance of the HFEM. Finally, we compare the results of the proposed HFEM with three other single neural network models, namely CNN, long short-term memory (LSTM), and BiLSTM. The experimental results indicate that our proposed HFEM outperforms the other compared models in both precision and recall.
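
The n-gram CNN stage (local lexical features with max-over-time pooling) can be sketched in plain numpy; the vocabulary size, embedding dimension, and weights below are random placeholders, and the BiLSTM stage is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented dimensions: vocabulary, embedding size, filter count, n-gram width
vocab, emb_dim, n_filters, k = 100, 8, 4, 3
E = rng.normal(size=(vocab, emb_dim))          # embedding table
W = rng.normal(size=(n_filters, k * emb_dim))  # trigram convolution filters

def ngram_features(token_ids):
    """Trigram convolution over word embeddings + ReLU + max-over-time pooling."""
    x = E[token_ids]                                       # (T, emb_dim)
    windows = np.stack([x[i:i + k].ravel()
                        for i in range(len(token_ids) - k + 1)])
    conv = np.maximum(windows @ W.T, 0.0)                  # (T-k+1, n_filters)
    return conv.max(axis=0)                                # (n_filters,)

doc = rng.integers(0, vocab, size=20)  # a toy token-id sequence
feats = ngram_features(doc)
print(feats.shape)
```

In the full HFEM these pooled local features would feed the BiLSTM layer, and a final sigmoid layer would produce one independent probability per patent label.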

  8. When machine vision meets histology: A comparative evaluation of model architecture for classification of histology sections.

    Zhong, Cheng; Han, Ju; Borowsky, Alexander; Parvin, Bahram; Wang, Yunfu; Chang, Hang

    2017-01-01

    Classification of histology sections in large cohorts, in terms of distinct regions of microanatomy (e.g., stromal) and histopathology (e.g., tumor, necrosis), enables the quantification of tumor composition, and the construction of predictive models of genomics and clinical outcome. To tackle the large technical variations and biological heterogeneities, which are intrinsic in large cohorts, emerging systems utilize either prior knowledge from pathologists or unsupervised feature learning for invariant representation of the underlying properties in the data. However, to a large degree, the architecture for tissue histology classification remains unexplored and requires urgent systematic investigation. This paper is the first attempt to provide insights into three fundamental questions in tissue histology classification: I. Is unsupervised feature learning preferable to human-engineered features? II. Does cellular saliency help? III. Does the sparse feature encoder contribute to recognition? We show that (a) in I, both the Cellular Morphometric Feature and features from unsupervised feature learning lead to superior performance when compared to SIFT and [Color, Texture]; (b) in II, cellular saliency incorporation impairs the performance for systems built upon pixel-/patch-level features; and (c) in III, the effect of the sparse feature encoder is correlated with the robustness of features, and the performance can be consistently improved by the multi-stage extension of systems built upon both the Cellular Morphometric Feature and features from unsupervised feature learning. These insights are validated with two cohorts of Glioblastoma Multiforme (GBM) and Kidney Clear Cell Carcinoma (KIRC). Copyright © 2016 Elsevier B.V. All rights reserved.

  9. In Vivo Mouse Intervertebral Disc Degeneration Model Based on a New Histological Classification.

    Takashi Ohnishi

    Full Text Available Although human intervertebral disc degeneration can lead to several spinal diseases, its pathogenesis remains unclear. This study aimed to create a new histological classification applicable to an in vivo mouse intervertebral disc degeneration model induced by needle puncture. One hundred and six mice were operated on, and the L4/5 intervertebral disc was punctured with a 35- or 33-gauge needle. Micro-computed tomography scanning was performed, and the punctured region was confirmed. Evaluation was performed using magnetic resonance imaging and histology, employing our classification scoring system. Our histological classification scores correlated well with the findings of magnetic resonance imaging and could detect degenerative progression, irrespective of the punctured region. However, the magnetic resonance imaging analysis revealed that there was no significant degenerative intervertebral disc change between the ventrally punctured and non-punctured control groups. To induce significant degeneration in the lumbar intervertebral discs, the central or dorsal region should be punctured instead of the ventral region.

  10. A physiologically-inspired model of numerical classification based on graded stimulus coding

    John Pearson

    2010-01-01

    Full Text Available In most natural decision contexts, the process of selecting among competing actions takes place in the presence of informative, but potentially ambiguous, stimuli. Decisions about magnitudes—quantities like time, length, and brightness that are linearly ordered—constitute an important subclass of such decisions. It has long been known that perceptual judgments about such quantities obey Weber’s Law, wherein the just-noticeable difference in a magnitude is proportional to the magnitude itself. Current physiologically inspired models of numerical classification assume discriminations are made via a labeled line code of neurons selectively tuned for numerosity, a pattern observed in the firing rates of neurons in the ventral intraparietal area (VIP of the macaque. By contrast, neurons in the contiguous lateral intraparietal area (LIP signal numerosity in a graded fashion, suggesting the possibility that numerical classification could be achieved in the absence of neurons tuned for number. Here, we consider the performance of a decision model based on this analog coding scheme in a paradigmatic discrimination task—numerosity bisection. We demonstrate that a basic two-neuron classifier model, derived from experimentally measured monotonic responses of LIP neurons, is sufficient to reproduce the numerosity bisection behavior of monkeys, and that the threshold of the classifier can be set by reward maximization via a simple learning rule. In addition, our model predicts deviations from Weber’s Law scaling of choice behavior at high numerosity. Together, these results suggest both a generic neuronal framework for magnitude-based decisions and a role for reward contingency in the classification of such stimuli.
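
A loose toy version of graded-code bisection: encoding numerosity logarithmically with additive rate noise and comparing it against a noisy stored reference reproduces the rising psychometric curve and Weber-like discrimination. The parameters below are invented, not fit to the monkey data, and this is a simplification of the paper's two-neuron classifier:

```python
import numpy as np

rng = np.random.default_rng(0)

def p_choose_large(n, reference=6.0, noise=0.15, trials=4000):
    """Probability of classifying numerosity n as larger than the reference,
    given a noisy logarithmic (analog) rate code for both signals."""
    stimulus = np.log(n) + noise * rng.standard_normal(trials)
    criterion = np.log(reference) + noise * rng.standard_normal(trials)
    return float(np.mean(stimulus > criterion))

for n in (2, 6, 12):
    print(f"n={n:2d}  P(classify as large) = {p_choose_large(n):.2f}")
```

Because the code is logarithmic with fixed noise, discriminability depends only on the ratio of the two numerosities, which is exactly the Weber's Law behavior the abstract describes.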

  11. Classification of human cancers based on DNA copy number amplification modeling

    Knuutila Sakari

    2008-05-01

    Full Text Available Abstract Background DNA amplifications alter gene dosage in cancer genomes by multiplying the gene copy number. Amplifications are quintessential in a considerable number of advanced cancers of various anatomical locations. The aims of this study were to classify human cancers based on their amplification patterns, explore the biological and clinical fundamentals behind their amplification-pattern based classification, and understand the characteristics in human genomic architecture that associate with amplification mechanisms. Methods We applied a machine learning approach to model DNA copy number amplifications using a data set of binary amplification records at chromosome sub-band resolution from 4400 cases that represent 82 cancer types. Amplification data was fused with background data: clinical, histological and biological classifications, and cytogenetic annotations. Statistical hypothesis testing was used to mine associations between the data sets. Results Probabilistic clustering of each chromosome identified 111 amplification models and divided the cancer cases into clusters. The distribution of classification terms in the amplification-model based clustering of cancer cases revealed cancer classes that were associated with specific DNA copy number amplification models. Amplification patterns – finite or bounded descriptions of the ranges of the amplifications in the chromosome – were extracted from the clustered data and expressed according to the original cytogenetic nomenclature. This was achieved by maximal frequent itemset mining using the cluster-specific data sets. The boundaries of amplification patterns were shown to be enriched with fragile sites, telomeres, centromeres, and light chromosome bands. Conclusions Our results demonstrate that amplifications are non-random chromosomal changes and specifically selected in tumor tissue microenvironment. Furthermore, statistical evidence showed that specific chromosomal features

  12. Educational Objectives and the Learning Domains: A New Formulation [And] Summary: Pierce-Gray Classification Model for the Cognitive, Affective and Psychomotor Domains.

    Gray, Charles E.; Pierce, Walter D.

    This paper examines and summarizes the "Pierce-Gray Classification Model for the Cognitive, Affective, and Psychomotor Domains," a model developed for the classification of educational objectives. The classification system was developed to provide a framework that teachers could use as a guide when developing specific instructional objectives for…

  13. SVM Based Descriptor Selection and Classification of Neurodegenerative Disease Drugs for Pharmacological Modeling.

    Shahid, Mohammad; Shahzad Cheema, Muhammad; Klenner, Alexander; Younesi, Erfan; Hofmann-Apitius, Martin

    2013-03-01

    Systems pharmacological modeling of drug mode of action for the next generation of multitarget drugs may open new routes for drug design and discovery. Computational methods are widely used in this context, amongst which support vector machines (SVM) have proven successful in addressing the challenge of classifying drugs with similar features. We have applied one such SVM-based approach, namely SVM-based recursive feature elimination (SVM-RFE), to predict the pharmacological properties of drugs widely used against complex neurodegenerative disorders (NDD) and to build an in-silico computational model for the binary classification of NDD drugs from other drugs. Application of an SVM-RFE model to a set of drugs successfully classified NDD drugs from non-NDD drugs and resulted in an overall accuracy of ∼80% with 10-fold cross-validation, using the 40 top-ranked molecular descriptors selected out of a total of 314 descriptors. Moreover, the SVM-RFE method outperformed linear discriminant analysis (LDA)-based feature selection and classification. The model reduced the multidimensional descriptor space of drugs dramatically and predicted NDD drugs with high accuracy, while avoiding overfitting. Based on these results, NDD-specific focused libraries of drug-like compounds can be designed and existing NDD-specific drugs can be characterized by a well-defined set of molecular descriptors. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
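The SVM-RFE pipeline summarized above can be sketched as follows. This is an illustrative reconstruction on synthetic data, not the authors' code: the descriptor matrix, feature counts, and scikit-learn calls are all assumptions.

```python
# Illustrative sketch of SVM-based recursive feature elimination (SVM-RFE);
# synthetic data stands in for the real drug/descriptor matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

# Synthetic stand-in: 300 "compounds" x 50 "descriptors", 10 of them informative.
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           random_state=0)

svc = LinearSVC(C=1.0, max_iter=10000)
# Recursively drop the lowest-weighted descriptors, 5 at a time, until 10 remain.
rfe = RFE(estimator=svc, n_features_to_select=10, step=5).fit(X, y)
selected = np.flatnonzero(rfe.support_)

# Accuracy of the linear SVM on the reduced descriptor space (10-fold CV),
# analogous to the cross-validated accuracy reported above.
acc = cross_val_score(svc, X[:, selected], y, cv=10).mean()
print(len(selected), round(float(acc), 2))
```

The key design choice mirrored here is that the ranking criterion is the SVM weight vector itself, so feature selection and the final classifier share one model family.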

  14. Modeling Wood Fibre Length in Black Spruce (Picea mariana (Mill.) B.S.P.) Based on Ecological Land Classification

    Elisha Townshend

    2015-09-01

    Effective planning to optimize the forest value chain requires accurate and detailed information about the resource; however, estimates of the distribution of fibre properties on the landscape are largely unavailable prior to harvest. Our objective was to fit a model of the tree-level average fibre length related to ecosite classification and other forest inventory variables depicted at the landscape scale. A series of black spruce increment cores were collected at breast height from trees in nine different ecosite groups within the boreal forest of northeastern Ontario, and processed using standard techniques for maceration and fibre length measurement. Regression tree analysis and random forests were used to fit hierarchical classification models and find the most important predictor variables for the response variable, area-weighted mean stem-level fibre length. Ecosite group was the best predictor in the regression tree. Longer mean fibre length was associated with more productive ecosites that supported faster growth. The explanatory power of the model on fitted data was good; however, random forests simulations indicated poor generalizability. These results suggest the potential to develop localized models linking wood fibre length in black spruce to landscape-level attributes, and improve the sustainability of forest management by identifying ideal locations to harvest wood that has desirable fibre characteristics.
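The two analyses named above – a regression tree for interpretable splits and a random forest for predictor ranking – can be sketched on invented data. The "ecosite" and "age" variables and the response formula are hypothetical stand-ins, not the study's measurements.

```python
# Minimal sketch (invented data) of regression tree + random forest analysis
# for mean fibre length, as described in the record above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n = 200
ecosite = rng.integers(0, 9, n)              # ecosite group (0-8), hypothetical
age = rng.uniform(40.0, 120.0, n)            # stand age, an inventory variable
# Hypothetical response: fibre length increases with ecosite productivity.
fibre_len = 2.5 + 0.1 * ecosite + 0.002 * age + rng.normal(0.0, 0.05, n)

X = np.column_stack([ecosite, age])
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, fibre_len)
forest = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, fibre_len)

# With this construction, ecosite should dominate the importance ranking,
# mirroring the finding that ecosite group was the best predictor.
importances = forest.feature_importances_
print(importances.argmax())  # → 0 (the ecosite column)
```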

  15. QSAR classification models for the prediction of endocrine disrupting activity of brominated flame retardants.

    Kovarich, Simona; Papa, Ester; Gramatica, Paola

    2011-06-15

    The identification of potential endocrine disrupting (ED) chemicals is an important task for the scientific community due to their diffusion in the environment; the production and use of such compounds will be strictly regulated through the authorization process of the REACH regulation. To overcome the problem of insufficient experimental data, the quantitative structure-activity relationship (QSAR) approach is applied to predict the ED activity of new chemicals. In the present study QSAR classification models are developed, according to the OECD principles, to predict the ED potency for a class of emerging ubiquitous pollutants, viz. brominated flame retardants (BFRs). Different endpoints related to ED activity (i.e. aryl hydrocarbon receptor agonism and antagonism, estrogen receptor agonism and antagonism, androgen and progesterone receptor antagonism, T4-TTR competition, E2SULT inhibition) are modeled using the k-NN classification method. The best models are selected by maximizing the sensitivity and external predictive ability. We propose simple QSARs (based on few descriptors) characterized by internal stability, good predictive power and with a verified applicability domain. These models are simple tools that are applicable to screen BFRs in relation to their ED activity, and also to design safer alternatives, in agreement with the requirements of REACH regulation at the authorization step. Copyright © 2011 Elsevier B.V. All rights reserved.
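A k-NN QSAR classifier of the kind described above can be sketched in a few lines. The "descriptors" and activity labels below are synthetic placeholders for the real BFR endpoints; every name is illustrative.

```python
# Hedged sketch of a k-NN QSAR classifier with descriptor scaling and a
# sensitivity (active-class recall) check, the criterion maximized above.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 3))                    # 120 compounds, 3 descriptors
y = (X[:, 0] + 0.8 * X[:, 1] > 0).astype(int)    # hypothetical ED activity label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
scaler = StandardScaler().fit(X_tr)              # scale before distance-based k-NN
knn = KNeighborsClassifier(n_neighbors=5).fit(scaler.transform(X_tr), y_tr)

pred = knn.predict(scaler.transform(X_te))
sensitivity = ((pred == 1) & (y_te == 1)).sum() / (y_te == 1).sum()
print(0.0 <= sensitivity <= 1.0)  # → True
```

Scaling matters here because k-NN is distance-based: unscaled descriptors with large ranges would dominate the neighbour search.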

  16. Effects of the small molecule HERG activator NS1643 on Kv11.3 channels.

    Arne Bilet

    NS1643 is one of the small molecule HERG (Kv11.1) channel activators and has also been found to increase erg2 (Kv11.2) currents. We now investigated whether NS1643 is also able to act as an activator of Kv11.3 (erg3) channels expressed in CHO cells. Activation of rat Kv11.3 current occurred in a dose-dependent manner and maximal current increasing effects were obtained with 10 µM NS1643. At this concentration, steady-state outward current increased by about 80% and the current increase was associated with a significant shift in the voltage dependence of activation to more negative potentials by about 15 mV. In addition, activation kinetics were accelerated, whereas deactivation was slowed. There was no significant effect on the kinetics of inactivation and recovery from inactivation. The strong current-activating agonistic effect of NS1643 did not result from a shift in the voltage dependence of Kv11.3 channel inactivation and was independent from external Na(+) or Ca(2+). At the higher concentration of 20 µM, NS1643 induced clearly less current increase. The left shift in the voltage dependence of activation reversed and the voltage sensitivity of activation dramatically decreased along with a slowing of Kv11.3 channel activation. These data show that, in comparison to other Kv11 family members, NS1643 exerts distinct effects on Kv11.3 channels with especially pronounced partial antagonistic effects at higher concentration.

  17. Calibration of a Plastic Classification System with the CCW Model; Calibracion de un Sistema de Clasification de Plasticos segun el Modelo CCW

    Barcala Riveira, J M; Fernandez Marron, J L; Alberdi Primicia, J; Navarrete Marin, J J; Oller Gonzalez, J C

    2003-07-01

    This document describes the calibration of a plastic classification system with the CCW model (Classification by Quaternions built Wavelet Coefficients). The method is applied to spectra of plastics usually present in domestic wastes. The obtained results are shown. (Author) 16 refs.

  18. A model to facilitate implementation of the International Classification of Functioning, Disability and Health into prosthetics and orthotics.

    Jarl, Gustav; Ramstrand, Nerrolyn

    2017-09-01

    The International Classification of Functioning, Disability and Health is a classification of human functioning and disability and is based on a biopsychosocial model of health. As such, International Classification of Functioning, Disability and Health seems suitable as a basis for constructing models defining the clinical P&O process. The aim was to use International Classification of Functioning, Disability and Health to facilitate development of such a model. Proposed model: A model, the Prosthetic and Orthotic Process (POP) model, is proposed. The Prosthetic and Orthotic Process model is based on the concepts of the International Classification of Functioning, Disability and Health and comprises four steps in a cycle: (1) Assessment, including the medical history and physical examination of the patient. (2) Goals, specified on four levels including those related to participation, activity, body functions and structures and technical requirements of the device. (3) Intervention, in which the appropriate course of action is determined based on the specified goal and evidence-based practice. (4) Evaluation of outcomes, where the outcomes are assessed and compared to the corresponding goals. After the evaluation of goal fulfilment, the first cycle in the process is complete, and a broad evaluation is now made including overriding questions about the patient's satisfaction with the outcomes and the process. This evaluation will determine if the process should be ended or if another cycle in the process should be initiated. The Prosthetic and Orthotic Process model can provide a common understanding of the P&O process. Concepts of International Classification of Functioning, Disability and Health have been incorporated into the model to facilitate communication with other rehabilitation professionals and encourage a holistic and patient-centred approach in clinical practice. 
Clinical relevance: The Prosthetic and Orthotic Process model can support the implementation

  19. A stylistic classification of Russian-language texts based on the random walk model

    Kramarenko, A. A.; Nekrasov, K. A.; Filimonov, V. V.; Zhivoderov, A. A.; Amieva, A. A.

    2017-09-01

    A formal approach to text analysis is suggested that is based on the random walk model. The frequencies and reciprocal positions of the vowel letters are matched up by a process of quasi-particle migration. A statistically significant difference in the migration parameters is found for texts of different functional styles. Thus, a possibility of classification of texts using the suggested method is demonstrated. Five groups of texts are singled out that can be distinguished from one another by the parameters of the quasi-particle migration process.
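The quasi-particle picture can be illustrated with a toy statistic: the gaps between successive vowel letters act as step lengths of a walk, and their distribution differs between texts. The study concerns Russian-language texts; the English samples and the threshold-free comparison below are purely illustrative assumptions.

```python
# Toy sketch: inter-vowel gaps as "migration" step lengths of a random walk.
VOWELS = set("aeiouy")

def vowel_gaps(text, vowels=VOWELS):
    """Distances (in characters) between consecutive vowel letters."""
    pos = [i for i, ch in enumerate(text.lower()) if ch in vowels]
    return [b - a for a, b in zip(pos, pos[1:])]

text_a = "the quick brown fox jumps over the lazy dog"
text_b = "strengths crypt rhythms glyphs"     # consonant-heavy sample

mean_a = sum(vowel_gaps(text_a)) / len(vowel_gaps(text_a))
mean_b = sum(vowel_gaps(text_b)) / len(vowel_gaps(text_b))
print(mean_a < mean_b)  # → True: vowels "migrate" in shorter steps in text_a
```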

  20. Classification Model for Forest Fire Hotspot Occurrences Prediction Using ANFIS Algorithm

    Wijayanto, A. K.; Sani, O.; Kartika, N. D.; Herdiyeni, Y.

    2017-01-01

    This study proposed the application of a data mining technique, namely the Adaptive Neuro-Fuzzy Inference System (ANFIS), on forest fire hotspot data to develop classification models for hotspot occurrence in Central Kalimantan. A hotspot is a point that is indicated as the location of fires. In this study, hotspot distribution is categorized as true alarm and false alarm. ANFIS is a soft computing method in which a given input-output data set is expressed in a fuzzy inference system (FIS). The FIS implements a nonlinear mapping from its input space to the output space. The method of this study classified hotspots as target objects by correlating spatial attribute data, using three folds in the ANFIS algorithm to obtain the best model. The best result, obtained from the 3rd fold, provided a low training error (error = 0.0093676) and also a low testing error (error = 0.0093676). The distance-to-road attribute, along which the level of human activity is higher, is the most determining factor influencing the probability of true and false alarms. This classification model can be used to develop an early warning system for forest fires.

  1. Model of high-tech businesses management under the trends of explicit and implicit knowledge markets: classification and business model

    Guzel Isayevna Gumerova

    2015-03-01

    Objective: to define the notion of "high-tech business"; to elaborate a classification of high-tech businesses; to elaborate a business model for high-tech business management. Methods: general scientific methods of theoretical and empirical cognition. Results: the research presents a business model of high-tech business management based on the trends of the explicit and implicit knowledge markets, with the implicit knowledge market dominating; a classification of high-tech businesses taking into consideration the three types of economic activity; and possibilities to manage high-tech business based on its market cost, technological innovation costs and business indicators. Scientific novelty: the interpretation of the notion of "high-tech business" has been renewed; the classification of high-tech businesses has been elaborated for the first time, allocating three groups of enterprises. Practical value: theoretical significance – development of the notional apparatus of high-tech business management; practical significance – grounding of the necessity to manage enterprises under the development of explicit and implicit knowledge markets in Russia as a complex of capital and non-capital assets with the dominating indicators of "market value" and "life span of a company".

  2. "A Nightmare Land, a Place of Death": An Exploration of the Moon as a Motif in Herge's "Destination Moon" (1953) and "Explorers on the Moon" (1954)

    Beauvais, Clementine

    2010-01-01

    This article analyses the symbolic meaning of the Moon in two "bande dessinee" books from the Tintin series, Herge's "Destination Moon" ("Objectif Lune," 1953) and its sequel "Explorers on the Moon" ("On a Marche sur la Lune," 1954). It argues that these two volumes stand out in the series for their graphic, narrative and philosophical emphasis on…

  3. The N-terminal tail of hERG contains an amphipathic α-helix that regulates channel deactivation.

    Chai Ann Ng

    The cytoplasmic N-terminal domain of the human ether-a-go-go related gene (hERG) K+ channel is critical for the slow deactivation kinetics of the channel. However, the mechanism(s) by which the N-terminal domain regulates deactivation remains to be determined. Here we show that the solution NMR structure of the N-terminal 135 residues of hERG contains a previously described Per-Arnt-Sim (PAS) domain (residues 26-135) as well as an amphipathic α-helix (residues 13-23) and an initial unstructured segment (residues 2-9). Deletion of residues 2-25, only the unstructured segment (residues 2-9), or replacement of the α-helix with a flexible linker all result in enhanced rates of deactivation. Thus, both the initial flexible segment and the α-helix are required, but neither is sufficient, to confer slow deactivation kinetics. Alanine scanning mutagenesis identified R5 and G6 in the initial flexible segment as critical for slow deactivation. Alanine mutants in the helical region had less dramatic phenotypes. We propose that the PAS domain is bound close to the central core of the channel and that the N-terminal α-helix ensures that the flexible tail is correctly orientated for interaction with the activation gating machinery to stabilize the open state of the channel.

  4. Prospective identification of adolescent suicide ideation using classification tree analysis: Models for community-based screening.

    Hill, Ryan M; Oosterhoff, Benjamin; Kaplow, Julie B

    2017-07-01

    Although a large number of risk markers for suicide ideation have been identified, little guidance has been provided to prospectively identify adolescents at risk for suicide ideation within community settings. The current study addressed this gap in the literature by utilizing classification tree analysis (CTA) to provide a decision-making model for screening adolescents at risk for suicide ideation. Participants were N = 4,799 youth (mean age = 16.15 years, SD = 1.63) who completed both Waves 1 and 2 of the National Longitudinal Study of Adolescent to Adult Health. CTA was used to generate a series of decision rules for identifying adolescents at risk for reporting suicide ideation at Wave 2. Findings revealed 3 distinct solutions with varying sensitivity and specificity for identifying adolescents who reported suicide ideation. Sensitivity of the classification trees ranged from 44.6% to 77.6%. The tree with greatest specificity and lowest sensitivity was based on a history of suicide ideation. The tree with moderate sensitivity and high specificity was based on depressive symptoms, suicide attempts or suicide among family and friends, and social support. The most sensitive but least specific tree utilized these factors and gender, ethnicity, hours of sleep, school-related factors, and future orientation. These classification trees offer community organizations options for instituting large-scale screenings for suicide ideation risk depending on the available resources and modality of services to be provided. This study provides a theoretically and empirically driven model for prospectively identifying adolescents at risk for suicide ideation and has implications for preventive interventions among at-risk youth. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
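Classification tree analysis of this kind can be sketched with a shallow decision tree whose fitted splits are exported as human-readable screening rules. The variables, thresholds, and outcome formula below are invented for illustration; they are not the study's data.

```python
# Illustrative sketch of CTA: a depth-limited decision tree over hypothetical
# risk markers, exported as decision rules for screening.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
n = 1000
prior_ideation = rng.integers(0, 2, n)           # history of suicide ideation
depressive_sx = rng.uniform(0.0, 30.0, n)        # depressive symptom score
social_support = rng.uniform(0.0, 10.0, n)       # perceived social support
# Hypothetical outcome combining the three markers.
risk = prior_ideation + (depressive_sx > 20).astype(int) - (social_support > 7)
y = (risk >= 1).astype(int)

X = np.column_stack([prior_ideation, depressive_sx, social_support])
cta = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The exported rules play the role of the CTA decision rules described above.
rules = export_text(cta, feature_names=["prior_ideation", "depressive_sx",
                                        "social_support"])
print(rules.splitlines()[0])
```

Capping the depth keeps the rule set short enough for a community organization to apply by hand, which is the point of a screening tree as opposed to a black-box classifier.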

  5. Neutral face classification using personalized appearance models for fast and robust emotion detection.

    Chiranjeevi, Pojala; Gopalakrishnan, Viswanath; Moogi, Pratibha

    2015-09-01

    Facial expression recognition is one of the open problems in computer vision. Robust neutral face recognition in real time is a major challenge for various supervised learning-based facial expression recognition methods. This is due to the fact that supervised methods cannot accommodate all appearance variability across the faces with respect to race, pose, lighting, facial biases, and so on, in the limited amount of training data. Moreover, processing each and every frame to classify emotions is not required, as the user stays neutral for the majority of the time in usual applications like video chat or photo album/web browsing. Detecting the neutral state at an early stage, thereby bypassing those frames from emotion classification, would save computational power. In this paper, we propose a light-weight neutral versus emotion classification engine, which acts as a pre-processor to the traditional supervised emotion classification approaches. It dynamically learns neutral appearance at key emotion (KE) points using a statistical texture model, constructed by a set of reference neutral frames for each user. The proposed method is made robust to various types of user head motions by accounting for affine distortions based on a statistical texture model. Robustness to dynamic shift of KE points is achieved by evaluating the similarities on a subset of neighborhood patches around each KE point using the prior information regarding the directionality of specific facial action units acting on the respective KE point. The proposed method, as a result, improves emotion recognition (ER) accuracy and simultaneously reduces the computational complexity of the ER system, as validated on multiple databases.

  6. Utility of BRDF Models for Estimating Optimal View Angles in Classification of Remotely Sensed Images

    Valdez, P. F.; Donohoe, G. W.

    1997-01-01

    Statistical classification of remotely sensed images attempts to discriminate between surface cover types on the basis of the spectral response recorded by a sensor. It is well known that surfaces reflect incident radiation as a function of wavelength, producing a spectral signature specific to the material under investigation. Multispectral and hyperspectral sensors sample the spectral response over tens and even hundreds of wavelength bands to capture the variation of spectral response with wavelength. Classification algorithms then exploit these differences in spectral response to distinguish between materials of interest. Sensors of this type, however, collect detailed spectral information from one direction (usually nadir) and consequently do not consider the directional nature of reflectance potentially detectable at different sensor view angles. Improvements in sensor technology have resulted in remote sensing platforms capable of detecting reflected energy across wavelengths (spectral signatures) and from multiple view angles (angular signatures) in the fore and aft directions. Sensors of this type include: the moderate resolution imaging spectroradiometer (MODIS), the multiangle imaging spectroradiometer (MISR), and the airborne solid-state array spectroradiometer (ASAS). A goal of this paper, then, is to explore the utility of Bidirectional Reflectance Distribution Function (BRDF) models in the selection of optimal view angles for the classification of remotely sensed images by employing a strategy of searching for the maximum difference between surface BRDFs. After a brief discussion of directional reflectance in Section 2, attention is directed to the Beard-Maxwell BRDF model and its use in predicting the bidirectional reflectance of a surface. The selection of optimal viewing angles is addressed in Section 3, followed by conclusions and future work in Section 4.

  7. Assimilation of a knowledge base and physical models to reduce errors in passive-microwave classifications of sea ice

    Maslanik, J. A.; Key, J.

    1992-01-01

    An expert system framework has been developed to classify sea ice types using satellite passive microwave data, an operational classification algorithm, spatial and temporal information, ice types estimated from a dynamic-thermodynamic model, output from a neural network that detects the onset of melt, and knowledge about season and region. The rule base imposes boundary conditions upon the ice classification, modifies parameters in the ice algorithm, determines a `confidence' measure for the classified data, and under certain conditions, replaces the algorithm output with model output. Results demonstrate the potential power of such a system for minimizing overall error in the classification and for providing non-expert data users with a means of assessing the usefulness of the classification results for their applications.

  8. Site effect classification based on microtremor data analysis using a concentration-area fractal model

    Adib, A.; Afzal, P.; Heydarzadeh, K.

    2015-01-01

    The aim of this study is to classify the site effect using concentration-area (C-A) fractal model in Meybod city, central Iran, based on microtremor data analysis. Log-log plots of the frequency, amplification and vulnerability index (k-g) indicate a multifractal nature for the parameters in the area. The results obtained from the C-A fractal modelling reveal that proper soil types are located around the central city. The results derived via the fractal modelling were utilized to improve the Nogoshi and Igarashi (1970, 1971) classification results in the Meybod city. The resulting categories are: (1) hard soil and weak rock with frequency of 6.2 to 8 Hz, (2) stiff soil with frequency of about 4.9 to 6.2 Hz, (3) moderately soft soil with the frequency of 2.4 to 4.9 Hz, and (4) soft soil with the frequency lower than 2.4 Hz.
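The concentration-area (C-A) idea behind this classification can be sketched numerically: the "area" (here, cell count) with value above a threshold v is plotted against v on log-log axes, and straight-line segments with different slopes separate populations, their breakpoints supplying class thresholds. The synthetic field below is an assumption, not the microtremor data.

```python
# Minimal sketch of a C-A fractal plot: log(area above threshold) vs log(threshold).
import numpy as np

rng = np.random.default_rng(3)
# Mixture of a background population and a smaller anomalous one.
field = np.concatenate([rng.lognormal(0.0, 0.3, 9000),
                        rng.lognormal(1.5, 0.4, 1000)])

thresholds = np.logspace(-0.5, 1.0, 30)
area = np.array([(field >= v).sum() for v in thresholds])  # cells above threshold

mask = area > 0
log_v, log_a = np.log10(thresholds[mask]), np.log10(area[mask])
# Overall slope of the log-log C-A plot (a real analysis fits piecewise
# segments and reads the class boundaries off the slope breaks).
slope = np.polyfit(log_v, log_a, 1)[0]
print(slope < 0)  # → True: area shrinks as the threshold rises
```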

  9. Site effect classification based on microtremor data analysis using concentration-area fractal model

    Adib, A.; Afzal, P.; Heydarzadeh, K.

    2014-07-01

    The aim of this study is to classify the site effect using concentration-area (C-A) fractal model in Meybod city, Central Iran, based on microtremor data analysis. Log-log plots of the frequency, amplification and vulnerability index (k-g) indicate a multifractal nature for the parameters in the area. The results obtained from the C-A fractal modeling reveal that proper soil types are located around the central city. The results derived via the fractal modeling were utilized to improve the Nogoshi's classification results in the Meybod city. The resulting categories are: (1) hard soil and weak rock with frequency of 6.2 to 8 Hz, (2) stiff soil with frequency of about 4.9 to 6.2 Hz, (3) moderately soft soil with the frequency of 2.4 to 4.9 Hz, and (4) soft soil with the frequency lower than 2.4 Hz.

  10. Automatic earthquake detection and classification with continuous hidden Markov models: a possible tool for monitoring Las Canadas caldera in Tenerife

    Beyreuther, Moritz; Wassermann, Joachim [Department of Earth and Environmental Sciences (Geophys. Observatory), Ludwig Maximilians Universitaet Muenchen, D-80333 (Germany); Carniel, Roberto [Dipartimento di Georisorse e Territorio Universitat Degli Studi di Udine, I-33100 (Italy)], E-mail: roberto.carniel@uniud.it

    2008-10-01

    A possible interaction of (volcano-) tectonic earthquakes with the continuous seismic noise recorded in the volcanic island of Tenerife was recently suggested, but existing catalogues seem to be far from being self-consistent, calling for the development of automatic detection and classification algorithms. In this work we propose the adoption of a methodology based on Hidden Markov Models (HMMs), widely used already in other fields, such as speech classification.

  11. A Classification Model and an Open E-Learning System Based on Intuitionistic Fuzzy Sets for Instructional Design Concepts

    Güyer, Tolga; Aydogdu, Seyhmus

    2016-01-01

    This study suggests a classification model and an e-learning system based on this model for all instructional theories, approaches, models, strategies, methods, and techniques being used in the process of instructional design that constitutes a direct or indirect resource for educational technology based on the theory of intuitionistic fuzzy sets…

  12. Classification of Multiple Seizure-Like States in Three Different Rodent Models of Epileptogenesis.

    Guirgis, Mirna; Serletis, Demitre; Zhang, Jane; Florez, Carlos; Dian, Joshua A; Carlen, Peter L; Bardakjian, Berj L

    2014-01-01

    Epilepsy is a dynamical disease and its effects are evident in over fifty million people worldwide. This study focused on objective classification of the multiple states involved in the brain's epileptiform activity. Four datasets from three different rodent hippocampal preparations were explored, wherein seizure-like events (SLE) were induced by the perfusion of a low-Mg(2+)/high-K(+) solution or 4-Aminopyridine. Local field potentials were recorded from CA3 pyramidal neurons and interneurons and modeled as Markov processes. Specifically, hidden Markov models (HMM) were used to determine the nature of the states present. Properties of the Hilbert transform were used to construct the feature spaces for HMM training. By sequentially applying the HMM training algorithm, multiple states were identified both in episodes of SLE and nonSLE activity. Specifically, preSLE and postSLE states were differentiated and multiple inner SLE states were identified. This was accomplished using features extracted from the lower frequencies (1-4 Hz, 4-8 Hz) alongside those of both the low- (40-100 Hz) and high-gamma (100-200 Hz) of the recorded electrical activity. The learning paradigm of this HMM-based system eliminates the inherent bias associated with other learning algorithms that depend on predetermined state segmentation and renders it an appropriate candidate for SLE classification.
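The decoding step of such an HMM can be sketched in pure NumPy: given assumed two-state (nonSLE/SLE) transition, emission, and initial probabilities, the Viterbi algorithm recovers the most likely hidden-state path for a discretized feature sequence. All probabilities below are illustrative, not fitted values from the study.

```python
# Pure-NumPy Viterbi decoding for a two-state HMM (illustrative parameters).
import numpy as np

A = np.array([[0.95, 0.05],      # P(next state | current state)
              [0.10, 0.90]])
B = np.array([[0.9, 0.1],        # P(observed symbol | state)
              [0.2, 0.8]])
pi = np.array([0.9, 0.1])        # initial state distribution
obs = [0, 0, 1, 1, 1, 0]         # discretized feature sequence

T, n = len(obs), A.shape[0]
delta = np.zeros((T, n))                     # best path probability so far
psi = np.zeros((T, n), dtype=int)            # backpointers
delta[0] = pi * B[:, obs[0]]
for t in range(1, T):
    scores = delta[t - 1][:, None] * A       # scores[i, j]: arrive in j from i
    psi[t] = scores.argmax(axis=0)
    delta[t] = scores.max(axis=0) * B[:, obs[t]]

path = [int(delta[-1].argmax())]             # backtrack from the best end state
for t in range(T - 1, 0, -1):
    path.append(int(psi[t][path[-1]]))
path.reverse()
print(path)  # → [0, 0, 1, 1, 1, 1]: the sticky second state persists past the final 0
```

The sticky self-transitions (0.95 and 0.90) are what make the decoded states behave like persistent regimes (e.g. preSLE/SLE/postSLE) rather than flipping with every noisy observation.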

  13. Approach for Text Classification Based on the Similarity Measurement between Normal Cloud Models

    Jin Dai

    2014-01-01

    The similarity between objects is the core research area of data mining. In order to reduce the interference of the uncertainty of natural language, a similarity measurement between normal cloud models is adopted for text classification research. On this basis, a novel text classifier based on cloud concept jumping up (CCJU-TC) is proposed. It can efficiently accomplish conversion between qualitative concepts and quantitative data. Through the conversion from a text set to a text information table based on the VSM model, the qualitative concept of the texts extracted from the same category is jumped up to a whole-category concept. According to the cloud similarity between the test text and each category concept, the test text is assigned to the most similar category. Comparison among different text classifiers over different feature selection sets fully proves that not only does CCJU-TC have a strong ability to adapt to different text features, but its classification performance is also better than that of the traditional classifiers.

  14. Introduction of the gross motor function classification system in Venezuela--a model for knowledge dissemination.

    Löwing, Kristina; Arredondo, Ynes C; Tedroff, Marika; Tedroff, Kristina

    2015-09-04

    A current worldwide common goal is to optimize the health and well-being of children with cerebral palsy (CP). In order to reach that goal, for this heterogeneous group, a common language and classification systems are required to predict development and offer evidence based interventions. In most countries in Africa, South America, Asia and Eastern Europe the classification systems for CP are unfamiliar and rarely used. Education and implementation are required. The specific aims of this study were to examine a model in order to introduce the Gross Motor Function Classification System (GMFCS-E&R) in Venezuela, and to examine the validity and the reliability. Children with CP, registered at a National child rehabilitation centre in Venezuela, were invited to participate. The Spanish version of GMFCS-E&R was used. The Wilson mobility scale was translated and used to examine the concurrent validity. A structured questionnaire, comprising aspects of mobility and gross motor function, was constructed. In addition, each child was filmed. A paediatrician in Venezuela received supervised self-education in GMFCS-E&R and the Wilson mobility scale. A Swedish student was educated in GMFCS-E&R and the Wilson mobility scale prior to visiting Venezuela. In Venezuela, all children were classified and scored by the paediatrician and student independently. An experienced paediatric physiotherapist (PT) in Sweden made independent GMFCS-E&R classifications and Wilson mobility scale scorings, accomplished through merging data from the structured questionnaire with observations of the films. Descriptive statistics were used and reliability was presented with weighted Kappa (Kw). Spearman's correlation coefficient was calculated to explore the concurrent validity between GMFCS-E&R and Wilson mobility scale. Eighty-eight children (56 boys), mean age 10 years (3-18), with CP participated. 
The inter-rater reliability of GMFCS-E&R between the paediatrician and the PT was Kw = 0.85 (95% CI

  15. A coupled classification - evolutionary optimization model for contamination event detection in water distribution systems.

    Oliker, Nurit; Ostfeld, Avi

    2014-03-15

    This study describes a decision support system that alerts for contamination events in water distribution systems. The developed model comprises a weighted support vector machine (SVM) for the detection of outliers, and a following sequence analysis for the classification of contamination events. The contribution of this study is an improvement of contamination event detection ability and a multi-dimensional analysis of the data, differing from the parallel one-dimensional analysis conducted so far. The multivariate analysis examines the relationships between water quality parameters and detects changes in their mutual patterns. The weights of the SVM model accomplish two goals: blurring the difference between the sizes of the two classes' data sets (as there are many more normal/regular than event-time measurements), and incorporating the time-factor attribute via a time-decay coefficient, ascribing higher importance to recent observations when classifying a time-step measurement. All model parameters were determined by data-driven optimization, so the calibration of the model was completely autonomous. The model was trained and tested on a real water distribution system (WDS) data set with randomly simulated events superimposed on the original measurements. The model is prominent in its ability to detect events that were only partly expressed in the data (i.e., affecting only some of the measured parameters). The model showed high accuracy and better detection ability as compared to previous modeling attempts of contamination event detection. Copyright © 2013 Elsevier Ltd. All rights reserved.
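The two weighting ideas in this record can be sketched with scikit-learn's SVC: class imbalance is handled with `class_weight="balanced"`, and recency is encoded as an exponential time-decay factor passed as per-sample weights. The data, the decay constant, and the event window below are invented for illustration.

```python
# Hedged sketch of a weighted SVM for event detection: balanced class weights
# plus exponential time-decay sample weights (all parameters are assumptions).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
n = 400
t = np.arange(n)                             # time index of each measurement
X = rng.normal(size=(n, 4))                  # four water-quality parameters
y = np.zeros(n, dtype=int)
event = slice(350, 400)                      # rare simulated contamination event
X[event] += 2.0                              # event shifts all parameters jointly
y[event] = 1

decay = np.exp(-0.005 * (n - 1 - t))         # recent samples weigh more
clf = SVC(kernel="rbf", class_weight="balanced")
clf.fit(X, y, sample_weight=decay)

print(round(clf.score(X, y), 2))
```

Because the shift affects all four parameters jointly, the multivariate classifier can separate the event even though each individual parameter overlaps the normal range – the multi-dimensional advantage the abstract emphasizes.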

  16. Proposed Core Competencies and Empirical Validation Procedure in Competency Modeling: Confirmation and Classification.

    Baczyńska, Anna K; Rowiński, Tomasz; Cybis, Natalia

    2016-01-01

    Competency models provide insight into key skills which are common to many positions in an organization. Moreover, there is a range of competencies that is used by many companies. Researchers have developed core competency terminology to underline their cross-organizational value. The article presents a theoretical model of core competencies consisting of two main higher-order competencies called performance and entrepreneurship. Each of them consists of three elements: the performance competency includes cooperation, organization of work and goal orientation, while entrepreneurship includes innovativeness, calculated risk-taking and pro-activeness. However, there is a lack of empirical validation of competency concepts in organizations, and this would seem crucial for obtaining reliable results from organizational research. We propose a two-step empirical validation procedure: (1) confirmatory factor analysis, and (2) classification of employees. The sample consisted of 636 respondents (M = 44.5; SD = 15.1). Participants were administered a questionnaire developed for the study purpose. The reliability, measured by Cronbach's alpha, ranged from 0.60 to 0.83 for the six scales. Next, we tested the model using a confirmatory factor analysis. The two separate, single models of performance and entrepreneurial orientations fit the data quite well, while a complex model based on the two single concepts needs further research. In the classification of employees based on the two higher-order competencies we obtained four main groups of employees. Their profiles relate to those found in the literature, including so-called niche finders and top performers. Some proposals for organizations are discussed.

  17. Automatic sleep classification using a data-driven topic model reveals latent sleep states

    Koch, Henriette; Christensen, Julie Anja Engelhard; Frandsen, Rune

    2014-01-01

    Background: The golden standard for sleep classification uses manual scoring of polysomnography despite points of criticism such as oversimplification, low inter-rater reliability and the standard being designed on young and healthy subjects. New method: To meet the criticism and reveal the latent sleep states, this study developed a general and automatic sleep classifier using a data-driven approach. Spectral EEG and EOG measures and eye correlation in 1 s windows were calculated and each sleep epoch was expressed as a mixture of probabilities of latent sleep states by using the topic model Latent Dirichlet Allocation. Model application was tested on control subjects and patients with periodic leg movements (PLM) representing a non-neurodegenerative group, and patients with idiopathic REM sleep behavior disorder (iRBD) and Parkinson's Disease (PD) representing a neurodegenerative group...

  18. Classification of parameter-dependent quantum integrable models, their parameterization, exact solution and other properties

    Owusu, Haile K; Yuzbashyan, Emil A

    2011-01-01

    We study general quantum integrable Hamiltonians linear in a coupling constant and represented by finite N x N real symmetric matrices. The restriction on the coupling dependence leads to a natural notion of nontrivial integrals of motion and classification of integrable families into types according to the number of such integrals. A type M family in our definition is formed by N-M nontrivial mutually commuting operators linear in the coupling. Working from this definition alone, we parameterize type M operators, i.e. resolve the commutation relations, and obtain an exact solution for their eigenvalues and eigenvectors. We show that our parameterization covers all type 1, 2 and 3 integrable models and discuss the extent to which it is complete for other types. We also present robust numerical observation on the number of energy-level crossings in type M integrable systems and analyze the taxonomy of types in the 1D Hubbard model. (paper)
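    As a minimal numerical illustration of the commutation requirement (not the paper's parameterization), note that for two operators linear in the coupling, H_i(x) = A_i + x B_i, the commutator [H_1(x), H_2(x)] vanishes for every x exactly when its three coefficients in powers of x vanish separately. The shared-eigenbasis family below is deliberately trivial, chosen only to make that condition concrete and checkable:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5

# A random orthogonal matrix Q serves as a common eigenbasis.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)))

def op(diag):
    # Real symmetric N x N operator diagonal in the basis Q.
    return Q @ np.diag(diag) @ Q.T

A1, B1 = op(rng.normal(size=N)), op(rng.normal(size=N))
A2, B2 = op(rng.normal(size=N)), op(rng.normal(size=N))

def comm(X, Y):
    return X @ Y - Y @ X

# [H1(x), H2(x)] = [A1,A2] + x([A1,B2] + [B1,A2]) + x^2 [B1,B2];
# it vanishes for all x iff each coefficient vanishes.
c0 = np.linalg.norm(comm(A1, A2))
c1 = np.linalg.norm(comm(A1, B2) + comm(B1, A2))
c2 = np.linalg.norm(comm(B1, B2))
```

    The paper's contribution is precisely to resolve these commutation relations for the nontrivial (type M) families, which do not reduce to a common eigenbasis independent of x.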

  19. BCDForest: a boosting cascade deep forest model towards the classification of cancer subtypes based on gene expression data.

    Guo, Yang; Liu, Shuhui; Li, Zhanhuai; Shang, Xuequn

    2018-04-11

    The classification of cancer subtypes is of great importance to cancer disease diagnosis and therapy. Many supervised learning approaches have been applied to cancer subtype classification in the past few years, especially deep learning-based approaches. Recently, the deep forest model has been proposed as an alternative to deep neural networks, learning hyper-representations by using cascade ensembles of decision trees. It has been shown that the deep forest model has competitive, or even better, performance than deep neural networks to some extent. However, the standard deep forest model may face overfitting and ensemble-diversity challenges when dealing with small-sample-size, high-dimensional biology data. In this paper, we propose a deep learning model, called BCDForest, to address cancer subtype classification on small-scale biology datasets; it can be viewed as a modification of the standard deep forest model. BCDForest differs from the standard deep forest model in two main contributions: First, a multi-class-grained scanning method is proposed to train multiple binary classifiers to encourage ensemble diversity. Meanwhile, the fitting quality of each classifier is considered in representation learning. Second, we propose a boosting strategy to emphasize more important features in cascade forests, thus propagating the benefits of discriminative features among cascade layers to improve the classification performance. Systematic comparison experiments on both microarray and RNA-Seq gene expression datasets demonstrate that our method consistently outperforms the state-of-the-art methods in cancer subtype classification. The multi-class-grained scanning and boosting strategies in our model provide an effective solution to ease the overfitting challenge and improve the robustness of the deep forest model on small-scale data. 
Our model provides a useful approach to the classification of cancer subtypes.

  20. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  1. Modeling and Classification of Kinetic Patterns of Dynamic Metabolic Biomarkers in Physical Activity.

    Marc Breit

    2015-08-01

    Full Text Available The objectives of this work were the classification of dynamic metabolic biomarker candidates and the modeling and characterization of kinetic regulatory mechanisms in human metabolism in response to external perturbations by physical activity. Longitudinal metabolic concentration data of 47 individuals from 4 different groups were examined, obtained from a cycle ergometry cohort study. In total, 110 metabolites (within the classes of acylcarnitines, amino acids, and sugars) were measured through a targeted metabolomics approach, combining tandem mass spectrometry (MS/MS) with the concept of stable isotope dilution (SID) for metabolite quantitation. Biomarker candidates were selected by combined analysis of maximum fold changes (MFCs) in concentrations and P-values resulting from statistical hypothesis testing. Characteristic kinetic signatures were identified through a mathematical modeling approach utilizing polynomial fitting. Modeled kinetic signatures were analyzed for groups with similar behavior by applying hierarchical cluster analysis. Kinetic shape templates were characterized, defining different forms of basic kinetic response patterns, such as sustained, early, late, and other forms, that can be used for metabolite classification. Acetylcarnitine (C2), showing a late response pattern and having the highest values in MFC and statistical significance, was classified as a late marker and ranked as a strong predictor (MFC = 1.97, P < 0.001). In the class of amino acids, the highest values were shown for alanine (MFC = 1.42, P < 0.001), classified as a late marker and strong predictor. Glucose yields a delayed response pattern, similar to a hockey stick function, being classified as a delayed marker and ranked as a moderate predictor (MFC = 1.32, P < 0.001). These findings coincide with existing knowledge on central metabolic pathways affected in exercise physiology, such as β-oxidation of fatty acids, glycolysis, and glycogenolysis. 
The presented modeling
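    The modeling pipeline the abstract describes (polynomial fitting of concentration profiles, then hierarchical clustering of the fitted signatures) can be sketched on synthetic data. The curve shapes, noise level and cluster count below are assumptions for illustration only:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 9)  # normalized sampling times

# Two assumed kinetic shapes: an "early" transient response and a "late" rise.
early = 1.0 + 0.9 * np.exp(-5 * t) * np.sin(6 * t)
late = 1.0 + 0.9 * t ** 2
profiles = np.array([early + rng.normal(0, 0.03, t.size) for _ in range(5)] +
                    [late + rng.normal(0, 0.03, t.size) for _ in range(5)])

# Fit a cubic polynomial to each metabolite profile and evaluate it on the
# sampling grid; the smoothed curve serves as the kinetic signature.
fitted = np.array([np.polyval(np.polyfit(t, p, deg=3), t) for p in profiles])

# Hierarchical clustering (Ward linkage) groups metabolites with similar
# modeled kinetics.
Z = linkage(fitted, method="ward")
labels = fcluster(Z, t=2, criterion="maxclust")
```

    With well-separated shapes, the five "early" and five "late" profiles fall into two distinct clusters, mirroring the grouping into kinetic shape templates.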

  2. Diagnostics of enterprise bankruptcy occurrence probability in an anti-crisis management: modern approaches and classification of models

    I.V. Zhalinska

    2015-09-01

    Full Text Available Diagnostics of the probability of enterprise bankruptcy is an important tool for ensuring the viability of an organization under conditions of an unpredictable, dynamic environment. The paper aims to define the basic features of models for diagnosing the probability of bankruptcy and to classify them. The article substantiates the objectively increasing probability of crisis in modern enterprises, an increase that leads to the need to improve the efficiency of anti-crisis activities. The system of anti-crisis management is built on the subsystem for diagnosing the probability of bankruptcy; such a subsystem is the foundation for further measures to prevent and overcome a crisis. A classification of existing models of the probability of enterprise bankruptcy has been suggested, based on the methodical and methodological principles of the models. The following main groups of models are identified: models using financial ratios, aggregates and scores; discriminant-analysis models; strategic-analysis methods; informal models; artificial intelligence systems; and combinations of these models. The classification made it possible to identify the analytical capabilities of each of the suggested groups of models.

  3. A classification of marked hijaiyah letters' pronunciation using hidden Markov model

    Wisesty, Untari N.; Mubarok, M. Syahrul; Adiwijaya

    2017-08-01

    Hijaiyah letters are the 28 letters that form the words in the Qur'an. They symbolize the consonant sounds, while the vowel sounds are symbolized by harakat (marks). A speech recognition system processes a sound signal into data so that it can be recognized by a computer. To build such a system, several stages are needed, i.e. feature extraction and classification. In this research, LPC and MFCC feature extraction, K-means vector quantization, and Hidden Markov Model classification are used. The data comprise the 28 letters and 6 harakat, for a total of 168 classes. After several tests, it can be concluded that the system recognizes the pronunciation patterns of marked hijaiyah letters very well on the training data, with its highest accuracy being 96.1% using LPC features and 94% using MFCC features. Meanwhile, on the test data, the accuracy drops to 41%.
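    The HMM classification step can be illustrated with a minimal scaled forward algorithm: a quantized observation sequence is scored under one HMM per class, and the best-scoring class wins. The two toy models and the two-symbol alphabet below are assumptions, not the study's trained models:

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM with
    start probabilities pi, transition matrix A and emission matrix B."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()
        log_p += np.log(s)
        alpha = alpha / s
    return log_p

# Two toy 2-state "letter" models sharing dynamics but differing in emissions.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B_a = np.array([[0.9, 0.1], [0.8, 0.2]])   # model "a": mostly emits symbol 0
B_b = np.array([[0.2, 0.8], [0.1, 0.9]])   # model "b": mostly emits symbol 1

seq = [0, 0, 1, 0, 0]                      # quantized feature sequence
scores = {"a": forward_loglik(seq, pi, A, B_a),
          "b": forward_loglik(seq, pi, A, B_b)}
best = max(scores, key=scores.get)
```

    In the study, the observation symbols come from K-means vector quantization of LPC or MFCC features, and one such HMM is trained per letter-harakat class.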

  4. Tissue Classification

    Van Leemput, Koen; Puonti, Oula

    2015-01-01

    Computational methods for automatically segmenting magnetic resonance images of the brain have seen tremendous advances in recent years. So-called tissue classification techniques, aimed at extracting the three main brain tissue classes (white matter, gray matter, and cerebrospinal fluid), are now well established. In their simplest form, these methods classify voxels independently based on their intensity alone, although much more sophisticated models are typically used in practice. This article aims to give an overview of often-used computational techniques for brain tissue classification...

  5. Indexing Density Models for Incremental Learning and Anytime Classification on Data Streams

    Seidl, Thomas; Assent, Ira; Kranen, Philipp

    2009-01-01

    Classification of streaming data faces three basic challenges: it has to deal with huge amounts of data, the varying time between two stream data items must be used best possible (anytime classification) and additional training data must be incrementally learned (anytime learning) for applying...... to the individual object to be classified) a hierarchy of mixture densities that represent kernel density estimators at successively coarser levels. Our probability density queries together with novel classification improvement strategies provide the necessary information for very effective classification at any...... point of interruption. Moreover, we propose a novel evaluation method for anytime classification using Poisson streams and demonstrate the anytime learning performance of the Bayes tree....

  6. A comparative study of deep learning models for medical image classification

    Dutta, Suvajit; Manideep, B. C. S.; Rai, Shalva; Vijayarajan, V.

    2017-11-01

    Deep Learning (DL) techniques are overtaking traditional neural network approaches when it comes to huge datasets and applications requiring complex functions, demanding increased accuracy at lower time complexity. Neuroscience has already exploited DL techniques, portraying itself as an inspirational source for researchers exploring the domain of machine learning. DL enthusiasts cover the areas of vision, speech recognition, motion planning and NLP, moving back and forth among fields. This concerns building models that can successfully solve a variety of tasks requiring intelligence and distributed representation. Access to faster CPUs, the introduction of GPUs performing complex vector and matrix computations, agile network connectivity and enhanced software infrastructures for distributed computing all strengthened the case for researchers to adopt DL methodologies. The paper compares the following DL procedures to traditional approaches, which are performed manually, for classifying medical images. The medical images used for the study are Diabetic Retinopathy (DR) and computed tomography (CT) emphysema data. Diagnosis from both DR and CT data is a difficult task for normal image classification methods. The initial work was carried out with basic image processing along with K-means clustering for identification of image severity levels. After determining image severity levels, an ANN was applied to the data to obtain a baseline classification result, which was then compared with the results of DNNs (Deep Neural Networks); these performed efficiently because their multiple hidden layers increase accuracy, but the vanishing gradient problem in DNNs motivated considering Convolutional Neural Networks (CNNs) as well for better results. The CNNs were found to provide better outcomes than the other learning models aimed at classification of images. 
CNNs are

  7. Deep learning-based fine-grained car make/model classification for visual surveillance

    Gundogdu, Erhan; Parıldı, Enes Sinan; Solmaz, Berkan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    Fine-grained object recognition is a computer vision problem that has recently been addressed by utilizing deep Convolutional Neural Networks (CNNs). Nevertheless, the main disadvantage of classification methods relying on deep CNN models is the need for a considerably large amount of data. In addition, relatively little annotated data exists for real-world applications such as the recognition of car models in a traffic surveillance system. To this end, we concentrate on the classification of fine-grained car makes and/or models for visual scenarios with the help of two different domains. First, a large-scale dataset including approximately 900K images is constructed from a website that includes fine-grained car models, and a state-of-the-art CNN model is trained on the constructed dataset according to its labels. The second domain is a set of images collected from a camera integrated into a traffic surveillance system. These images, numbering over 260K, were gathered by a special license plate detection method on top of a motion detection algorithm. An appropriately sized image is cropped from the region of interest provided by the detected license plate location. These sets of images and their labels for more than 30 classes are employed to fine-tune the CNN model already trained on the large-scale dataset described above. To fine-tune the network, the last two fully-connected layers are randomly initialized and the remaining layers are fine-tuned on the second dataset. In this work, the transfer of a model learned on a large dataset to a smaller one has been successfully performed by utilizing both the limited annotated data of the traffic field and a large-scale dataset with available annotations. Our experimental results on both the validation dataset and the real field show that the proposed methodology performs favorably against training the CNN model from scratch.

  8. Modelling the results of health promotion activities in Switzerland: development of the Swiss Model for Outcome Classification in Health Promotion and Prevention.

    Spencer, Brenda; Broesskamp-Stone, Ursel; Ruckstuhl, Brigitte; Ackermann, Günter; Spoerri, Adrian; Cloetta, Bernhard

    2008-03-01

    This paper describes the Model for Outcome Classification in Health Promotion and Prevention adopted by Health Promotion Switzerland (SMOC, Swiss Model for Outcome Classification) and the process of its development. The context and method of model development, and the aim and objectives of the model are outlined. Preliminary experience with application of the model in evaluation planning and situation analysis is reported. On the basis of an extensive literature search, the model is situated within the wider international context of similar efforts to meet the challenge of developing tools to assess systematically the activities of health promotion and prevention.

  9. Simulation Modeling by Classification of Problems: A Case of Cellular Manufacturing

    Afiqah, K N; Mahayuddin, Z R

    2016-01-01

    Cellular manufacturing provides a good solution approach to the manufacturing area by applying the Group Technology concept. The evolution of cellular manufacturing can enhance cell performance and increase the quality of the manufactured product, but it triggers other problems. Generally, this paper highlights the factors and problems which commonly emerge in cellular manufacturing. The aim of the research is to develop a thorough understanding of common problems in cellular manufacturing. Apart from that, in order to find a solution to the existing problems using simulation techniques, this classification framework is very useful to adopt during model building. A biological evolution tool was used in the research in order to classify the problems that emerge. The result reveals 22 problems and 25 factors using the cladistic technique. In this research, the expected result is a cladogram established based on the problems gathered in cellular manufacturing. (paper)

  10. Prototype-based Models for the Supervised Learning of Classification Schemes

    Biehl, Michael; Hammer, Barbara; Villmann, Thomas

    2017-06-01

    An introduction is given to the use of prototype-based models in supervised machine learning. The main concept of the framework is to represent previously observed data in terms of so-called prototypes, which reflect typical properties of the data. Together with a suitable, discriminative distance or dissimilarity measure, prototypes can be used for the classification of complex, possibly high-dimensional data. We illustrate the framework in terms of the popular Learning Vector Quantization (LVQ). Most frequently, standard Euclidean distance is employed as a distance measure. We discuss how LVQ can be equipped with more general dissimilarities. Moreover, we introduce relevance learning as a tool for the data-driven optimization of parameterized distances.
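    A minimal sketch of the basic LVQ1 update (a correctly classified sample attracts its nearest prototype; a misclassified one repels it), on assumed synthetic two-class data with Euclidean distance:

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=30, seed=0):
    """Basic LVQ1: move the nearest prototype toward a correctly classified
    sample and away from a misclassified one."""
    rng = np.random.default_rng(seed)
    P = prototypes.copy()
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            d = np.linalg.norm(P - X[i], axis=1)   # Euclidean distances
            j = int(np.argmin(d))                  # winning prototype
            sign = 1.0 if proto_labels[j] == y[i] else -1.0
            P[j] += sign * lr * (X[i] - P[j])
    return P

def lvq_predict(X, P, proto_labels):
    # Nearest-prototype classification.
    d = np.linalg.norm(X[:, None, :] - P[None, :, :], axis=2)
    return proto_labels[np.argmin(d, axis=1)]

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2, 0.5, (40, 2)), rng.normal(2, 0.5, (40, 2))])
y = np.array([0] * 40 + [1] * 40)

P0 = np.array([[-1.0, -1.0], [1.0, 1.0]])  # one prototype per class
proto_labels = np.array([0, 1])
P = lvq1_train(X, y, P0, proto_labels)
acc = (lvq_predict(X, P, proto_labels) == y).mean()
```

    Relevance learning, as discussed in the record, would additionally adapt a weighting of the input dimensions inside the distance measure.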

  11. Selecting statistical models and variable combinations for optimal classification using otolith microchemistry.

    Mercier, Lény; Darnaude, Audrey M; Bruguier, Olivier; Vasconcelos, Rita P; Cabral, Henrique N; Costa, Maria J; Lara, Monica; Jones, David L; Mouillot, David

    2011-06-01

    Reliable assessment of fish origin is of critical importance for exploited species, since nursery areas must be identified and protected to maintain recruitment to the adult stock. During the last two decades, otolith chemical signatures (or "fingerprints") have been increasingly used as tools to discriminate between coastal habitats. However, correct assessment of fish origin from otolith fingerprints depends on various environmental and methodological parameters, including the choice of the statistical method used to assign fish to unknown origin. Among the available methods of classification, Linear Discriminant Analysis (LDA) is the most frequently used, although it assumes data are multivariate normal with homogeneous within-group dispersions, conditions that are not always met by otolith chemical data, even after transformation. Other less constrained classification methods are available, but there is a current lack of comparative analysis in applications to otolith microchemistry. Here, we assessed stock identification accuracy for four classification methods (LDA, Quadratic Discriminant Analysis [QDA], Random Forests [RF], and Artificial Neural Networks [ANN]), through the use of three distinct data sets. In each case, all possible combinations of chemical elements were examined to identify the elements to be used for optimal accuracy in fish assignment to their actual origin. Our study shows that accuracy differs according to the model and the number of elements considered. Best combinations did not include all the elements measured, and it was not possible to define an ad hoc multielement combination for accurate site discrimination. Among all the models tested, RF and ANN performed best, especially for complex data sets (e.g., with numerous fish species and/or chemical elements involved). 
However, for these data, RF was less time-consuming and more interpretable than ANN, and far more efficient and less demanding in terms of assumptions than LDA or QDA.
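    The exhaustive search over element combinations can be sketched as below. The data are synthetic stand-ins for otolith chemistry, the element names are hypothetical, and a random forest stands in for the four classifiers compared in the study:

```python
from itertools import combinations

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for otolith data: 3 informative "elements" out of 5, 3 sites.
X, y = make_classification(n_samples=200, n_features=5, n_informative=3,
                           n_redundant=0, n_classes=3, n_clusters_per_class=1,
                           random_state=0)
elements = ["Sr", "Ba", "Mg", "Mn", "Li"]  # hypothetical element names

# Evaluate every non-empty subset of elements and keep the best one.
best_score, best_combo = -1.0, None
for k in range(1, len(elements) + 1):
    for combo in combinations(range(len(elements)), k):
        rf = RandomForestClassifier(n_estimators=25, random_state=0)
        score = cross_val_score(rf, X[:, list(combo)], y, cv=3).mean()
        if score > best_score:
            best_score, best_combo = score, combo

best_elements = [elements[i] for i in best_combo]
```

    As in the study, the best-performing subset need not include all measured elements, since uninformative elements can dilute the classifier.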

  12. Comparative Study on KNN and SVM Based Weather Classification Models for Day Ahead Short Term Solar PV Power Forecasting

    Fei Wang

    2017-12-01

    Full Text Available Accurate solar photovoltaic (PV) power forecasting is an essential tool for mitigating the negative effects caused by the uncertainty of PV output power in systems with high penetration levels of solar PV generation. Weather-classification-based modeling is an effective way to increase the accuracy of day-ahead short-term (DAST) solar PV power forecasting because PV output power is strongly dependent on the specific weather conditions in a given time period. However, the accuracy of daily weather classification relies on both the applied classifiers and the training data. This paper aims to reveal how these two factors impact the classification performance and to delineate the relation between classification accuracy and sample dataset scale. Two commonly used classification methods, K-nearest neighbors (KNN) and support vector machines (SVM), are applied to classify the daily local weather types for DAST solar PV power forecasting using the operation data from a grid-connected PV plant in Hohhot, Inner Mongolia, China. We assessed the performance of the SVM and KNN approaches, and then investigated the influences of sample scale, the number of categories, and the data distribution in different categories on the daily weather classification results. The simulation results illustrate that SVM performs well with small sample scales, while KNN is more sensitive to the length of the training dataset and can achieve higher accuracy than SVM with sufficient samples.
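    The KNN-versus-SVM comparison across training-set scales can be sketched as follows. The features are a synthetic stand-in for daily weather descriptors (the study used real PV-plant operation data), and the sample sizes are assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for daily weather-type features with three weather categories.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=200, random_state=1, stratify=y)

# Train both classifiers on increasing sample scales, test on a fixed set.
results = {}
for n in (30, 100, 400):
    Xt, yt = X_train[:n], y_train[:n]
    knn = KNeighborsClassifier(n_neighbors=5).fit(Xt, yt)
    svm = SVC(kernel="rbf", gamma="scale").fit(Xt, yt)
    results[n] = (knn.score(X_test, y_test), svm.score(X_test, y_test))
```

    Plotting accuracy against the training scale is then the direct analogue of the paper's analysis of classification accuracy versus sample dataset scale.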

  13. Using classification tree modelling to investigate drug prescription practices at health facilities in rural Tanzania

    Kajungu Dan K

    2012-09-01

    Full Text Available Abstract Background Drug prescription practices depend on several factors related to the patient, health worker and health facilities. A better understanding of the factors influencing prescription patterns is essential to develop strategies to mitigate the negative consequences associated with poor practices in both the public and private sectors. Methods A cross-sectional study was conducted in rural Tanzania among patients attending health facilities, and health workers. Patient-, health worker- and health facility-related factors with the potential to influence drug prescription patterns were used to build a model of key predictors. The standard data mining methodology of classification tree analysis was used to define the importance of the different factors on prescription patterns. Results This analysis included 1,470 patients and 71 health workers practicing in 30 health facilities. Patients were mostly treated in dispensaries. Twenty-two variables were used to construct two classification tree models: one for polypharmacy (prescription of ≥3 drugs on a single clinic visit) and one for co-prescription of artemether-lumefantrine (AL) with antibiotics. The most important predictor of polypharmacy was the diagnosis of several illnesses. Polypharmacy was also associated with little or no supervision of the health workers, administration of AL and private facilities. Co-prescription of AL with antibiotics was more frequent in children under five years of age, and the other important predictors were transmission season, mode of diagnosis and the location of the health facility. Conclusion Standard data mining methodology is an easy-to-implement analytical approach that can be useful for decision-making. Polypharmacy is mainly due to the diagnosis of multiple illnesses.

  14. Classification and regression tree (CART) model to predict pulmonary tuberculosis in hospitalized patients.

    Aguiar, Fabio S; Almeida, Luciana L; Ruffino-Netto, Antonio; Kritski, Afranio Lineu; Mello, Fernanda Cq; Werneck, Guilherme L

    2012-08-07

    Tuberculosis (TB) remains a public health issue worldwide. The lack of specific clinical symptoms to diagnose TB makes the correct decision to admit patients to respiratory isolation a difficult task for the clinician. Isolation of patients without the disease is common and increases health costs. Decision models for the diagnosis of TB in patients attending hospitals can increase the quality of care and decrease costs, without the risk of hospital transmission. We present a model for predicting pulmonary TB in hospitalized patients in a high-prevalence area in order to contribute to a more rational use of isolation rooms without increasing the risk of transmission. Cross-sectional study of patients admitted to CFFH from March 2003 to December 2004. A classification and regression tree (CART) model was generated and validated. The area under the ROC curve (AUC), sensitivity, specificity, and positive and negative predictive values were used to evaluate the performance of the model. Validation of the model was performed with a different sample of patients admitted to the same hospital from January to December 2005. We studied 290 patients admitted with clinical suspicion of TB. Diagnosis was confirmed in 26.5% of them. Pulmonary TB was present in 83.7% of the patients with TB (62.3% with positive sputum smear) and HIV/AIDS was present in 56.9% of patients. The validated CART model showed sensitivity, specificity, positive predictive value and negative predictive value of 60.00%, 76.16%, 33.33%, and 90.55%, respectively. The AUC was 79.70%. The CART model developed for these hospitalized patients with clinical suspicion of TB had fair to good predictive performance for pulmonary TB. The most important variable for prediction of TB diagnosis was the chest radiograph result. Prospective validation is still necessary, but our model offers an alternative for decision making in whether to isolate patients with clinical suspicion of TB in tertiary health facilities in
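    The CART evaluation pipeline (fit a tree, then report sensitivity, specificity, predictive values and AUC on a held-out sample) can be sketched on synthetic data. The ~26% prevalence mirrors the study's cohort, but everything else below is an assumed stand-in, not the clinical dataset:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic cohort: ~26% positive class, 10 clinical "predictors".
X, y = make_classification(n_samples=600, n_features=10, n_informative=4,
                           weights=[0.74], random_state=4)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33,
                                          stratify=y, random_state=4)

# A shallow CART-style tree keeps the decision rules interpretable.
cart = DecisionTreeClassifier(max_depth=4, random_state=4).fit(X_tr, y_tr)
y_hat = cart.predict(X_te)
tn, fp, fn, tp = confusion_matrix(y_te, y_hat).ravel()

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)               # positive predictive value
npv = tn / (tn + fn)               # negative predictive value
auc = roc_auc_score(y_te, cart.predict_proba(X_te)[:, 1])
```

    The study additionally validated on a temporally separate patient sample, which is the stronger form of the held-out evaluation shown here.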

  15. Model-based Clustering of Categorical Time Series with Multinomial Logit Classification

    Frühwirth-Schnatter, Sylvia; Pamminger, Christoph; Winter-Ebmer, Rudolf; Weber, Andrea

    2010-09-01

    A common problem in many areas of applied statistics is to identify groups of similar time series in a panel of time series. However, distance-based clustering methods cannot easily be extended to time series data, where an appropriate distance measure is rather difficult to define, particularly for discrete-valued time series. Markov chain clustering, proposed by Pamminger and Frühwirth-Schnatter [6], is an approach for clustering discrete-valued time series obtained by observing a categorical variable with several states. This model-based clustering method is based on finite mixtures of first-order time-homogeneous Markov chain models. In order to further explain group membership we present an extension to the approach of Pamminger and Frühwirth-Schnatter [6] by formulating a probabilistic model for the latent group indicators within the Bayesian classification rule using a multinomial logit model. The parameters are estimated for a fixed number of clusters within a Bayesian framework using a Markov chain Monte Carlo (MCMC) sampling scheme representing a (full) Gibbs-type sampler which involves only draws from standard distributions. Finally, an application to a panel of Austrian wage mobility data is presented which leads to an interesting segmentation of the Austrian labour market.

  16. Echo-waveform classification using model and model free techniques: Experimental study results from central western continental shelf of India

    Chakraborty, B.; Navelkar, G.S.; Desai, R.G.P.; Janakiraman, G.; Mahale, V.; Fernandes, W.A.; Rao, N.

    seafloor of India, but unable to provide a suitable means for seafloor classification. This paper also suggests a hybrid artificial neural network (ANN) architecture i.e. Learning Vector Quantisation (LVQ) for seafloor classification. An analysis...

  17. Sensitivity analysis of the GEMS soil organic carbon model to land cover land use classification uncertainties under different climate scenarios in Senegal

    Dieye, A.M.; Roy, David P.; Hanan, N.P.; Liu, S.; Hansen, M.; Toure, A.

    2012-01-01

    Spatially explicit land cover land use (LCLU) change information is needed to drive biogeochemical models that simulate soil organic carbon (SOC) dynamics. Such information is increasingly being mapped using remotely sensed satellite data, with classification schemes and uncertainties constrained by the sensing system, classification algorithms and land cover schemes. In this study, automated LCLU classifications of multi-temporal Landsat satellite data were used to assess the sensitivity of SOC modeled by the Global Ensemble Biogeochemical Modeling System (GEMS). The GEMS was run for an area of 1,560 km² in Senegal under three climate change scenarios, with LCLU maps generated using different Landsat classification approaches. This research provides a method to estimate the variability of SOC, specifically the SOC uncertainty due to satellite classification errors, which we show depends not only on the LCLU classification errors but also on where the LCLU classes occur relative to the other GEMS model inputs.

  18. Modeling activity recognition of multi resident using label combination of multi label classification in smart home

    Mohamed, Raihani; Perumal, Thinagaran; Sulaiman, Md Nasir; Mustapha, Norwati; Zainudin, M. N. Shah

    2017-10-01

    Pertaining to human-centric concerns and the need for non-obtrusive sensing, ambient sensor technology has been selected, accepted and embedded in smart environments in a resilient style. Everyday human activities are gradually becoming more complex, which complicates activity inference when multiple residents share the same smart environment. Current solutions rely on separate models for residents, activities and interactions. Some studies use data association and extra auxiliary graphical nodes to model human tracking information in an environment, and some build a separate framework to incorporate the auxiliary interaction feature model. Recognizing both the activities and which resident performs them at the same time is thus vital for smart home development and future applications. This paper addresses the above issues with a simple and efficient method based on the multi-label classification framework, which eliminates time-consuming steps and simplifies many of the pre-processing tasks required by previous approaches. Application to multi-resident multi-label learning problems in smart homes shows that Label Combination (LC) using a Decision Tree (DT) as the base classifier can tackle the problems above.
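
    The Label Combination idea can be sketched in a few lines: each distinct label vector is mapped to a single class, one multi-class Decision Tree is trained, and predictions are mapped back to label sets. The sensor snapshots and resident labels below are invented toy data, not the paper's dataset:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Invented ambient-sensor snapshots; each label vector marks which of two
# residents is currently active: (resident_A, resident_B).
X = np.array([[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 1, 1],
              [1, 0, 0], [0, 0, 1], [1, 1, 0], [0, 1, 1]])
Y = [(1, 0), (1, 1), (0, 1), (1, 1), (1, 0), (0, 1), (1, 1), (1, 1)]

# Label Combination (label powerset): treat each distinct label vector as
# one class, train a single multi-class base classifier, then map back.
labelsets = sorted(set(Y))
to_class = {c: i for i, c in enumerate(labelsets)}
y_lc = np.array([to_class[y] for y in Y])

clf = DecisionTreeClassifier(random_state=0).fit(X, y_lc)
pred = labelsets[clf.predict([[1, 1, 0]])[0]]
print(pred)  # -> (1, 1): both residents inferred active
```

The trade-off of LC is that it can only predict label combinations seen during training, which is exactly what makes it simple and fast here.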

  19. Supervised learning classification models for prediction of plant virus encoded RNA silencing suppressors.

    Zeenia Jagga

    Full Text Available Viral encoded RNA silencing suppressor proteins interfere with the host RNA silencing machinery, facilitating viral infection by evading host immunity. In plant hosts, the viral proteins have several basic science implications and biotechnology applications. However, in silico identification of these proteins is limited by their high sequence diversity. In this study we developed supervised-learning-based classification models for RNA silencing suppressor proteins in plant viruses. We developed four classifiers based on supervised learning algorithms: J48, Random Forest, LibSVM and Naïve Bayes, with enriched model learning by correlation-based feature selection. Structural and physicochemical features calculated for experimentally verified primary protein sequences were used to train the classifiers. The training features include amino acid composition; autocorrelation coefficients; composition, transition, and distribution of various physicochemical properties; and pseudo amino acid composition. Performance analysis of the predictive models based on 10-fold cross-validation and independent data testing revealed that the Random Forest based model was the best, achieving 86.11% overall accuracy and 86.22% balanced accuracy with a remarkably high area under the Receiver Operating Characteristic curve of 0.95 for prediction of viral RNA silencing suppressor proteins. The prediction models can potentially aid identification of novel viral RNA silencing suppressors, which will provide valuable insights into the mechanism of RNA silencing and could be further explored as potential targets for designing novel antiviral therapeutics. Also, the key subset of identified optimal features may help in determining compositional patterns in the viral proteins which are important determinants for RNA silencing suppressor activities. The best prediction model developed in the study is available as a
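
    The evaluation protocol described above — a Random Forest scored by 10-fold cross-validation — can be sketched as follows. The feature matrix here is a synthetic stand-in (shifted random profiles), not the study's sequence-derived descriptors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-ins for structural/physicochemical features:
# suppressors (y=1) get a shifted feature profile vs non-suppressors.
n_prot, n_feat = 200, 20
y = rng.integers(0, 2, n_prot)
X = rng.normal(0.0, 1.0, (n_prot, n_feat)) + 0.8 * y[:, None]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
print(len(scores), scores.mean() > 0.7)  # -> 10 True
```

With real sequence features one would additionally hold out an independent test set, as the study does, since cross-validation alone can be optimistic when feature selection touches the whole dataset.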

  20. Modelling the Happiness Classification of Addicted, Addiction Risk, Threshold and Non-Addicted Groups on Internet Usage

    Sapmaz, Fatma; Totan, Tarik

    2018-01-01

    The aim of this study is to model the happiness classification of university students--grouped as addicted, addiction risk, threshold and non-addicted to internet usage--with compatibility analysis on a map as happiness, average and unhappiness. The participants in this study were 400 university students from Turkey. According to the results of…

  1. Ligand and structure-based classification models for Prediction of P-glycoprotein inhibitors

    Klepsch, Freya; Poongavanam, Vasanthanathan; Ecker, Gerhard Franz

    2014-01-01

    an algorithm based on Euclidean distance. Results show that random forest and SVM performed best for classification of P-gp inhibitors and non-inhibitors, correctly predicting 73/75 % of the external test set compounds. Classification based on the docking experiments using the scoring function Chem...

  2. Classifying Classifications

    Debus, Michael S.

    2017-01-01

    This paper critically analyzes seventeen game classifications. The classifications were chosen on the basis of diversity, ranging from pre-digital classifications (e.g. Murray 1952), over game studies classifications (e.g. Elverdam & Aarseth 2007), to classifications of drinking games (e.g. LaBrie et al. 2013). The analysis aims at three goals: the classifications' internal consistency, the abstraction of classification criteria, and the identification of differences in classification across fields and/or time. Especially the abstraction of classification criteria can be used in future endeavors into the topic of game classifications.

  3. Establishment of land model at the Shika Nuclear Power Plant. Mainly, on rock board classification

    Katagawa, Hideki; Hashimoto, Toru; Hirano, Shuji

    1999-01-01

    In order to grasp the engineering properties of the foundation ground of structures, rock board classification is used as a method to divide the whole rock board into groups whose properties are considered nearly equal. Various classification methods have been devised according to their aims and characteristics; for the classification of hard rock, the Denken-type rock board classification, which takes the degree of weathering as its main element, is well known. The basic rock of the Shika Nuclear Power Plant is composed of medium-hard and hard rock types, and its weathering is limited to the shallow portion, most of the rock being held in fresh condition. For such ground, a new classification standard suited to the characteristics of the site was established. This report describes the process of establishing the new classification standard, the results of its application, and the resulting rock board properties. (G.K.)

  4. An Active Patch Model for Real World Texture and Appearance Classification.

    Mao, Junhua; Zhu, Jun; Yuille, Alan L

    2014-09-06

    This paper addresses the task of natural texture and appearance classification. Our goal is to develop a simple and intuitive method that performs at state of the art on datasets ranging from homogeneous texture (e.g., material texture), to less homogeneous texture (e.g., the fur of animals), and to inhomogeneous texture (the appearance patterns of vehicles). Our method uses a bag-of-words model where the features are based on a dictionary of active patches. Active patches are raw intensity patches which can undergo spatial transformations (e.g., rotation and scaling) and adjust themselves to best match the image regions. The dictionary of active patches is required to be compact and representative, in the sense that we can use it to approximately reconstruct the images that we want to classify. We propose a probabilistic model to quantify the quality of image reconstruction and design a greedy learning algorithm to obtain the dictionary. We classify images using the occurrence frequency of the active patches. Feature extraction is fast (about 100 ms per image) using the GPU. The experimental results show that our method improves the state of the art on a challenging material texture benchmark dataset (KTH-TIPS2). To test our method on less homogeneous or inhomogeneous images, we construct two new datasets consisting of appearance image patches of animals and vehicles cropped from the PASCAL VOC dataset. Our method outperforms competing methods on these datasets.

  5. Quantitative Outline-based Shape Analysis and Classification of Planetary Craterforms using Supervised Learning Models

    Slezak, Thomas Joseph; Radebaugh, Jani; Christiansen, Eric

    2017-10-01

    The shapes of craterforms on planetary surfaces provide rich information about their origins and evolution. While morphologic information offers strong visual clues to geologic processes and properties, the ability to quantitatively communicate this information is less easily accomplished. This study examines the morphology of craterforms using the quantitative outline-based shape methods of geometric morphometrics, commonly used in biology and paleontology. We examine and compare landforms on planetary surfaces using shape, a property of morphology that is invariant to translation, rotation, and size. We quantify the shapes of paterae on Io, martian calderas, terrestrial basaltic shield calderas, terrestrial ash-flow calderas, and lunar impact craters using elliptic Fourier analysis (EFA) and the Zahn and Roskies (Z-R) shape function, or tangent angle approach, to produce multivariate shape descriptors. These shape descriptors are subjected to multivariate statistical analysis including canonical variate analysis (CVA), a multiple-comparison variant of discriminant analysis, to investigate the link between craterform shape and classification. Paterae on Io are most similar in shape to terrestrial ash-flow calderas, and the shapes of terrestrial basaltic shield volcanoes are most similar to martian calderas. The shapes of lunar impact craters, including simple, transitional, and complex morphologies, are classified with a 100% rate of success in all models. Multiple CVA models effectively predict and classify different craterforms using shape-based identification and demonstrate significant potential for use in the analysis of planetary surfaces.
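
    Of the two descriptors named above, the Zahn and Roskies tangent-angle function is the simpler to sketch. The following is an illustrative implementation for a polygonal outline (not the authors' code): the unwrapped tangent angle is tracked along normalized arc length and the 2πt trend of a circle is removed, yielding a translation- and size-invariant shape function:

```python
import numpy as np

def zr_shape_function(outline):
    """Zahn-Roskies shape function of a closed polygonal outline: the
    unwrapped tangent angle versus normalized arc length, minus the
    2*pi*t trend of a circle. Invariant to translation and size."""
    pts = np.asarray(outline, float)
    edges = np.diff(np.vstack([pts, pts[:1]]), axis=0)  # close the outline
    t = np.cumsum(np.hypot(edges[:, 0], edges[:, 1]))
    t /= t[-1]                                          # arc-length fraction
    phi = np.unwrap(np.arctan2(edges[:, 1], edges[:, 0]))
    return phi - phi[0] - 2 * np.pi * t

# For a square, the deviation from a circle is constant at each vertex
# sample (-pi/2 with this edge-end sampling convention):
vals = zr_shape_function([(0, 0), (1, 0), (1, 1), (0, 1)])
print(np.allclose(vals, -np.pi / 2))  # -> True
```

Sampled values of this function (or its Fourier coefficients, as in EFA) then serve as the multivariate shape descriptors fed to CVA.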

  6. Sensing Urban Land-Use Patterns by Integrating Google Tensorflow and Scene-Classification Models

    Yao, Y.; Liang, H.; Li, X.; Zhang, J.; He, J.

    2017-09-01

    With the rapid progress of China's urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method to extract image features. To take advantage of the deep-learning method in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google Tensorflow framework, a powerful convolution neural network (CNN) library was created. First, the transferred model was previously trained on ImageNet, one of the largest object-image data sets, to fully develop the model's ability to generate feature vectors of standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern on the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method could efficiently obtain acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.

  7. Conduction Delay Learning Model for Unsupervised and Supervised Classification of Spatio-Temporal Spike Patterns.

    Matsubara, Takashi

    2017-01-01

    Precise spike timing is considered to play a fundamental role in communications and signal processing in biological neural networks. Understanding the mechanism of spike timing adjustment would deepen our understanding of biological systems and enable advanced engineering applications such as efficient computational architectures. However, the biological mechanisms that adjust and maintain spike timing remain unclear. Existing algorithms adopt a supervised approach, which adjusts the axonal conduction delay and synaptic efficacy until the spike timings approximate the desired timings. This study proposes a spike timing-dependent learning model that adjusts the axonal conduction delay and synaptic efficacy in both unsupervised and supervised manners. The proposed learning algorithm approximates the Expectation-Maximization algorithm, and classifies the input data encoded into spatio-temporal spike patterns. Even in the supervised classification, the algorithm requires no external spikes indicating the desired spike timings unlike existing algorithms. Furthermore, because the algorithm is consistent with biological models and hypotheses found in existing biological studies, it could capture the mechanism underlying biological delay learning.

  8. A comparative study of PCA, SIMCA and Cole model for classification of bioimpedance spectroscopy measurements.

    Nejadgholi, Isar; Bolic, Miodrag

    2015-08-01

    Due to safety and low cost of bioimpedance spectroscopy (BIS), classification of BIS can be potentially a preferred way of detecting changes in living tissues. However, for longitudinal datasets linear classifiers fail to classify conventional Cole parameters extracted from BIS measurements because of their high variability. In some applications, linear classification based on Principal Component Analysis (PCA) has shown more accurate results. Yet, these methods have not been established for BIS classification, since PCA features have neither been investigated in combination with other classifiers nor have been compared to conventional Cole features in benchmark classification tasks. In this work, PCA and Cole features are compared in three synthesized benchmark classification tasks which are expected to be detected by BIS. These three tasks are classification of before and after geometry change, relative composition change and blood perfusion in a cylindrical organ. Our results show that in all tasks the features extracted by PCA are more discriminant than Cole parameters. Moreover, a pilot study was done on a longitudinal arm BIS dataset including eight subjects and three arm positions. The goal of the study was to compare different methods in arm position classification which includes all three synthesized changes mentioned above. Our comparative study on various classification methods shows that the best classification accuracy is obtained when PCA features are classified by a K-Nearest Neighbors (KNN) classifier. The results of this work suggest that PCA+KNN is a promising method to be considered for classification of BIS datasets that deal with subject and time variability. Copyright © 2015 Elsevier Ltd. All rights reserved.
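
    The winning combination reported above, PCA features fed to a KNN classifier, can be sketched as a single pipeline. The spectra below are synthetic stand-ins for BIS measurements (three "arm positions" with class means differing along a smooth spectral profile), not the study's recordings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic stand-in: 3 classes ("arm positions"), 40 spectra each,
# 50 frequency channels with class-dependent smooth mean profiles.
profile = np.sin(np.linspace(0, np.pi, 50))
y = np.repeat([0, 1, 2], 40)
X = (y[:, None] + 1) * profile + rng.normal(0, 0.4, (120, 50))

# PCA features -> KNN classifier, evaluated by cross-validation.
model = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=3))
acc = cross_val_score(model, X, y, cv=5).mean()
print(acc > 0.9)  # -> True
```

Putting PCA inside the pipeline matters: the projection is then refit on each training fold, so the cross-validated accuracy is not inflated by information from the test folds.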

  9. A unified model for context-based behavioural modelling and classification

    Dabrowski, JJ

    2015-11-01

    Full Text Available maritime environment. The simulated data is produced from a generative statistical model that is discussed in (Dabrowski and de Villiers, 2015). The simulation provides tracked coordinates of maritime vessels over a specified region. Sailing conditions over...

  10. Use of circulation types classifications to evaluate AR4 climate models over the Euro-Atlantic region

    Pastor, M.A.; Casado, M.J. [Agencia Estatal de Meteorologia (AEMET), Madrid (Spain)

    2012-10-15

    This paper presents an evaluation of the multi-model simulations for the 4th Assessment Report (AR4) of the Intergovernmental Panel on Climate Change (IPCC) in terms of their ability to simulate the ERA40 circulation types over the Euro-Atlantic region in the winter season. Two classification schemes, k-means and SANDRA, have been considered to test the sensitivity of the evaluation results to the classification procedure. The assessment allows different rankings to be established according to the spatial and temporal features of the circulation types. Regarding temporal characteristics, in general, all AR4 models tend to underestimate the frequency of occurrence. The best model at simulating spatial characteristics is the UKMO-HadGEM1, whereas CCSM3, UKMO-HadGEM1 and CGCM3.1(T63) are the best at simulating the temporal features, for both classification schemes. This result agrees with the AR4 model ranking obtained when the ability of the same AR4 models to simulate Euro-Atlantic variability modes was analysed. This study has proved the utility of applying such a synoptic climatology approach as a diagnostic tool for model assessment. The ability of the models to properly reproduce the position of ridges and troughs and the frequency of synoptic patterns will therefore improve our confidence in the response of models to future climate changes. (orig.)
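
    The synoptic-climatology machinery involved can be sketched as k-means applied to flattened circulation fields, followed by the frequency of occurrence of each resulting type. The "daily pressure anomaly maps" below are randomly generated placeholders, not ERA40 data:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Placeholder daily winter anomaly maps on a flattened 8x10 grid:
# three underlying circulation regimes plus day-to-day noise.
regimes = rng.normal(0.0, 1.0, (3, 80))
true_type = rng.integers(0, 3, 300)
fields = regimes[true_type] + rng.normal(0.0, 0.3, (300, 80))

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(fields)
freq = np.bincount(km.labels_, minlength=3) / len(fields)  # frequency of occurrence
print(round(freq.sum(), 6), len(set(km.labels_)))  # -> 1.0 3
```

In the evaluation described above, each model's simulated fields are projected onto the reanalysis-derived types, and the spatial patterns (centroids) and occurrence frequencies are compared against those of ERA40.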

  11. Persistent pulmonary subsolid nodules: model-based iterative reconstruction for nodule classification and measurement variability on low-dose CT

    Kim, Hyungjin; Kim, Seong Ho; Lee, Sang Min; Lee, Kyung Hee; Park, Chang Min; Park, Sang Joon; Goo, Jin Mo

    2014-01-01

    To compare the pulmonary subsolid nodule (SSN) classification agreement and measurement variability between filtered back projection (FBP) and model-based iterative reconstruction (MBIR). Low-dose CTs were reconstructed using FBP and MBIR for 47 patients with 47 SSNs. Two readers independently classified SSNs into pure or part-solid ground-glass nodules, and measured the size of the whole nodule and solid portion twice on both reconstruction algorithms. Nodule classification agreement was analyzed using Cohen's kappa and compared between reconstruction algorithms using McNemar's test. Measurement variability was investigated using Bland-Altman analysis and compared with the paired t-test. Cohen's kappa for inter-reader SSN classification agreement was 0.541-0.662 on FBP and 0.778-0.866 on MBIR. Between the two readers, nodule classification was consistent in 79.8 % (75/94) with FBP and 91.5 % (86/94) with MBIR (p = 0.027). Inter-reader measurement variability ranged from -5.0 to 2.1 mm on FBP and from -3.3 to 1.8 mm on MBIR for whole nodule size, and from -6.5 to 0.9 mm on FBP and from -5.5 to 1.5 mm on MBIR for solid portion size. Inter-reader measurement differences were significantly smaller on MBIR (p = 0.027, whole nodule; p = 0.011, solid portion). MBIR significantly improved SSN classification agreement and reduced measurement variability of both whole nodules and solid portions between readers. (orig.)

  12. Classification and regression tree (CART) model to predict pulmonary tuberculosis in hospitalized patients

    Aguiar Fabio S

    2012-08-01

    Full Text Available Abstract Background Tuberculosis (TB) remains a public health issue worldwide. The lack of specific clinical symptoms to diagnose TB makes the correct decision to admit patients to respiratory isolation a difficult task for the clinician. Isolation of patients without the disease is common and increases health costs. Decision models for the diagnosis of TB in patients attending hospitals can increase the quality of care and decrease costs, without the risk of hospital transmission. We present a model for predicting pulmonary TB in hospitalized patients in a high-prevalence area in order to contribute to a more rational use of isolation rooms without increasing the risk of transmission. Methods Cross-sectional study of patients admitted to CFFH from March 2003 to December 2004. A classification and regression tree (CART) model was generated and validated. The area under the ROC curve (AUC), sensitivity, specificity, and positive and negative predictive values were used to evaluate the performance of the model. Validation of the model was performed with a different sample of patients admitted to the same hospital from January to December 2005. Results We studied 290 patients admitted with clinical suspicion of TB. Diagnosis was confirmed in 26.5% of them. Pulmonary TB was present in 83.7% of the patients with TB (62.3% with positive sputum smear) and HIV/AIDS was present in 56.9% of patients. The validated CART model showed sensitivity, specificity, positive predictive value and negative predictive value of 60.00%, 76.16%, 33.33%, and 90.55%, respectively. The AUC was 79.70%. Conclusions The CART model developed for these hospitalized patients with clinical suspicion of TB had fair to good predictive performance for pulmonary TB. The most important variable for prediction of TB diagnosis was the chest radiograph result. Prospective validation is still necessary, but our model offers an alternative for decision making in whether to isolate patients with
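
    The performance figures quoted above follow directly from a 2x2 confusion matrix; a small helper makes the definitions explicit. The counts below are purely illustrative (chosen to land near the reported values, not the study's validation data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all diseased
        "specificity": tn / (tn + fp),   # true negatives / all healthy
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Illustrative counts only:
m = diagnostic_metrics(tp=30, fp=60, fn=20, tn=190)
print({k: round(v, 3) for k, v in m.items()})
# -> {'sensitivity': 0.6, 'specificity': 0.76, 'ppv': 0.333, 'npv': 0.905}
```

The high NPV with modest PPV matches the intended use: a negative prediction safely argues against isolation, while positives still warrant confirmation.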

  13. In silico analysis of conformational changes induced by mutation of aromatic binding residues: consequences for drug binding in the hERG K+ channel.

    Kirsten Knape

    Full Text Available Pharmacological inhibition of cardiac hERG K+ channels is associated with increased risk of lethal arrhythmias. Many drugs reduce hERG current by directly binding to the channel, thereby blocking ion conduction. Mutation of two aromatic residues (F656 and Y652) substantially decreases the potency of numerous structurally diverse compounds. Nevertheless, some drugs are only weakly affected by mutation Y652A. In this study we utilize molecular dynamics simulations and docking studies to analyze the different effects of mutation Y652A on a selected number of hERG blockers. MD simulations reveal conformational changes in the binding site induced by mutation Y652A. Loss of π-π stacking between the two aromatic residues induces a conformational change of the F656 side chain from a cavity-facing to a cavity-lining orientation. Docking studies and MD simulations qualitatively reproduce the diverse experimentally observed modulatory effects of mutation Y652A and provide a new structural interpretation for the sensitivity differences.

  14. Bayesian Classification Models for Premature Ventricular Contraction Detection on ECG Traces.

    Casas, Manuel M; Avitia, Roberto L; Gonzalez-Navarro, Felix F; Cardenas-Haro, Jose A; Reyna, Marco A

    2018-01-01

    According to the American Heart Association, in its latest statement on ventricular arrhythmias and sudden death (2006), the epidemiology of ventricular arrhythmias spans a series of risk descriptors and clinical markers that range from ventricular premature complexes and nonsustained ventricular tachycardia to sudden cardiac death due to ventricular tachycardia in patients with or without clinical history. Premature ventricular complexes (PVCs) are known to be associated with malignant ventricular arrhythmias and sudden cardiac death (SCD). Detecting this kind of arrhythmia has been crucial in clinical applications. The electrocardiogram (ECG) is a clinical test used to measure the heart's electrical activity for inference and diagnosis. Analyzing large ECG traces of several thousand beats has created the need for mathematical models that can automatically make assumptions about the heart condition. In this work, 80 different features were extracted from 108,653 classified ECG beats of the gold-standard MIT-BIH database in order to classify Normal, PVC, and other kinds of ECG beats. Three well-known Bayesian classification algorithms were trained and tested using these extracted features. Experimental results show that the F1 scores for each class were above 0.95, with an almost perfect value for the PVC class. This gives us a promising path toward automated mechanisms for the detection of PVC complexes.
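
    One Bayesian classifier commonly used for this kind of task is Gaussian naive Bayes. The sketch below trains it on two invented per-beat features (stand-ins for the 80 extracted features; PVCs are simulated with a wider QRS and a longer post-beat pause) and scores it with F1:

```python
import numpy as np
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(3)

# Invented per-beat features: PVCs (y=1) show a longer pause after the
# beat and a wider QRS complex than normal beats (y=0).
n = 500
y = rng.integers(0, 2, n)
X = np.column_stack([
    rng.normal(0.80 + 0.40 * y, 0.10),   # RR interval after the beat (s)
    rng.normal(0.09 + 0.07 * y, 0.02),   # QRS duration (s)
])

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
f1 = f1_score(yte, GaussianNB().fit(Xtr, ytr).predict(Xte))
print(f1 > 0.9)  # -> True
```

On real MIT-BIH beats the class imbalance (few PVCs among many normal beats) is exactly why F1, rather than raw accuracy, is the metric reported.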

  15. Study and ranking of determinants of Taenia solium infections by classification tree models.

    Mwape, Kabemba E; Phiri, Isaac K; Praet, Nicolas; Dorny, Pierre; Muma, John B; Zulu, Gideon; Speybroeck, Niko; Gabriël, Sarah

    2015-01-01

    Taenia solium taeniasis/cysticercosis is an important public health problem occurring mainly in developing countries. This work aimed to study the determinants of human T. solium infections in the Eastern province of Zambia and rank them in order of importance. A household (HH)-level questionnaire was administered to 680 HHs from 53 villages in two rural districts, and the taeniasis and cysticercosis status was determined. A classification tree model (CART) was used to define the relative importance of, and interactions between, different predictor variables in their effect on taeniasis and cysticercosis. The Katete study area had a significantly higher taeniasis and cysticercosis prevalence than the Petauke area. The CART analysis for Katete showed that the most important determinant for cysticercosis infections was the number of HH inhabitants (6 to 10) and for taeniasis was the number of HH inhabitants > 6. The most important determinant in Petauke for cysticercosis was the age of the head of household (> 32 years) and for taeniasis it was age. Overall, the most important determinant of taeniasis and cysticercosis infections was the number of HH inhabitants (6 to 10) in Katete district and age in Petauke. The results suggest that control measures should target HHs with a high number of inhabitants and older individuals. © The American Society of Tropical Medicine and Hygiene.

  16. Rapid Identification of Asteraceae Plants with Improved RBF-ANN Classification Models Based on MOS Sensor E-Nose

    Hui-Qin Zou

    2014-01-01

    Full Text Available Plants from the Asteraceae family are widely used as herbal medicines and food ingredients, especially in Asia. Therefore, authentication and quality control of these different Asteraceae plants are important for ensuring consumers' safety and efficacy. In recent decades, the electronic nose (E-nose) has been studied as an alternative approach. In this paper, we aim to develop a novel discriminative model by improving the radial basis function artificial neural network (RBF-ANN) classification model. Feature selection algorithms, including principal component analysis (PCA) and BestFirst + CfsSubsetEval (BC), were applied in the improvement of the RBF-ANN models. Results illustrate that in the improved RBF-ANN models with lower-dimension data, classification accuracies (100%) remained the same as in the original model with higher-dimension data. This is the first time feature selection methods have been introduced to obtain information on which MOS sensors are most relevant; in this case, S1, S3, S4, S6, and S7 show better capability to distinguish these Asteraceae plants. This paper also gives insights for further research in this area, for instance, sensor array optimization and performance improvement of the classification model.

  17. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges.

    Phillips, Charles D

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges.

  18. Persistent pulmonary subsolid nodules: model-based iterative reconstruction for nodule classification and measurement variability on low-dose CT.

    Kim, Hyungjin; Park, Chang Min; Kim, Seong Ho; Lee, Sang Min; Park, Sang Joon; Lee, Kyung Hee; Goo, Jin Mo

    2014-11-01

    To compare the pulmonary subsolid nodule (SSN) classification agreement and measurement variability between filtered back projection (FBP) and model-based iterative reconstruction (MBIR). Low-dose CTs were reconstructed using FBP and MBIR for 47 patients with 47 SSNs. Two readers independently classified SSNs into pure or part-solid ground-glass nodules, and measured the size of the whole nodule and solid portion twice on both reconstruction algorithms. Nodule classification agreement was analyzed using Cohen's kappa and compared between reconstruction algorithms using McNemar's test. Measurement variability was investigated using Bland-Altman analysis and compared with the paired t-test. Cohen's kappa for inter-reader SSN classification agreement was 0.541-0.662 on FBP and 0.778-0.866 on MBIR. Between the two readers, nodule classification was consistent in 79.8 % (75/94) with FBP and 91.5 % (86/94) with MBIR (p = 0.027). Inter-reader measurement variability ranged from -5.0 to 2.1 mm on FBP and from -3.3 to 1.8 mm on MBIR for whole nodule size, and from -6.5 to 0.9 mm on FBP and from -5.5 to 1.5 mm on MBIR for solid portion size. Inter-reader measurement differences were significantly smaller on MBIR (p = 0.027, whole nodule; p = 0.011, solid portion). MBIR significantly improved SSN classification agreement and reduced measurement variability of both whole nodules and solid portions between readers. • Low-dose CT using MBIR algorithm improves reproducibility in the classification of SSNs. • MBIR would enable more confident clinical planning according to the SSN type. • Reduced measurement variability on MBIR allows earlier detection of potentially malignant nodules.

  19. Example-Dependent Cost-Sensitive Classification with Applications in Financial Risk Modeling and Marketing Analytics

    Correa Bahnsen, Alejandro

    2015-01-01

    Several real-world binary classification problems are example-dependent cost-sensitive in nature, where the costs due to misclassification vary between examples and not only within classes. However, standard binary classification methods do not take these costs into account, and assume a constant cost of misclassification errors. This approach is not realistic in many real-world applications. For example in credit card fraud detection, failing to detect a fraudulent transaction may have an ec...
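
    The core idea, that misclassification cost varies per example rather than per class, can be made concrete with a small cost function. The amounts and fees below are invented; in credit card fraud detection the false-negative cost is typically the transaction amount, while the false-positive cost is a roughly constant investigation fee:

```python
import numpy as np

def total_cost(y_true, y_pred, fn_cost, fp_cost):
    """Example-dependent misclassification cost: each example carries its
    own false-negative and false-positive cost."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return np.sum(y_true * (1 - y_pred) * fn_cost      # missed frauds
                  + (1 - y_true) * y_pred * fp_cost)   # false alarms

# Two classifiers with identical accuracy but very different costs:
y_true = np.array([1, 1, 0, 0])
amounts = np.array([900.0, 10.0, 50.0, 50.0])  # per-example FN cost
fee = np.full(4, 5.0)                          # constant FP cost

miss_big = np.array([0, 1, 0, 0])      # misses the 900-unit fraud
miss_small = np.array([1, 0, 0, 0])    # misses the 10-unit fraud
print(total_cost(y_true, miss_big, amounts, fee),
      total_cost(y_true, miss_small, amounts, fee))  # -> 900.0 10.0
```

A standard accuracy or even class-weighted metric would rate both classifiers identically; the example-dependent cost is what separates them, which is the motivation for the methods developed in this work.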

  20. Estimating Classification Errors Under Edit Restrictions in Composite Survey-Register Data Using Multiple Imputation Latent Class Modelling (MILC)

    Boeschoten Laura

    2017-12-01

    Full Text Available Both registers and surveys can contain classification errors. These errors can be estimated by making use of a composite data set. We propose a new method based on latent class modelling to estimate the number of classification errors across several sources while taking into account impossible combinations with scores on other variables. Furthermore, the latent class model, by multiply imputing a new variable, enhances the quality of statistics based on the composite data set. The performance of this method is investigated by a simulation study, which shows that whether or not the method can be applied depends on the entropy R² of the latent class model and the type of analysis a researcher is planning to do. Finally, the method is applied to public data from Statistics Netherlands.

  1. BClass: A Bayesian Approach Based on Mixture Models for Clustering and Classification of Heterogeneous Biological Data

    Arturo Medrano-Soto

    2004-12-01

    Based on mixture models, we present a Bayesian method (called BClass) to classify biological entities (e.g. genes) when variables of quite heterogeneous nature are analyzed. Various statistical distributions are used to model the continuous/categorical data commonly produced by genetic experiments and large-scale genomic projects. We calculate the posterior probability of each entry to belong to each element (group) in the mixture. In this way, an original set of heterogeneous variables is transformed into a set of purely homogeneous characteristics represented by the probabilities of each entry to belong to the groups. The number of groups in the analysis is controlled dynamically by rendering the groups as 'alive' and 'dormant' depending upon the number of entities classified within them. Using standard Metropolis-Hastings and Gibbs sampling algorithms, we constructed a sampler to approximate posterior moments and grouping probabilities. Since this method does not require the definition of similarity measures, it is especially suitable for data mining and knowledge discovery in biological databases. We applied BClass to classify genes in RegulonDB, a database specialized in information about the transcriptional regulation of gene expression in the bacterium Escherichia coli. The classification obtained is consistent with current knowledge and allowed prediction of missing values for a number of genes. BClass is object-oriented and fully programmed in Lisp-Stat. The output grouping probabilities are analyzed and interpreted using graphical (dynamically linked) plots and query-based approaches. We discuss the advantages of using Lisp-Stat as a programming language as well as the problems we faced when the data volume increased exponentially due to the ever-growing number of genomic projects.
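The core transformation in BClass — turning heterogeneous measurements into homogeneous group-membership probabilities via a mixture model — can be sketched with scikit-learn's EM-based GaussianMixture. This is an illustrative stand-in for the paper's Metropolis-Hastings/Gibbs sampler and Lisp-Stat implementation, not the original code; the synthetic data and group count are assumptions.

```python
# Sketch: posterior group-membership probabilities from a mixture model.
# GaussianMixture (EM) stands in for the paper's MCMC sampler.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic "expression-like" data: two latent groups of 50 entities each
data = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])

gm = GaussianMixture(n_components=2, random_state=0).fit(data)
# Posterior probability of each entry belonging to each group;
# these probabilities are the new homogeneous characteristics
membership = gm.predict_proba(data)
print(membership.shape)        # (100, 2)
print(membership.sum(axis=1))  # each row sums to 1
```

The membership matrix can then feed downstream analyses in place of the original mixed-type variables.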

  2. Binary classification of dyslipidemia from the waist-to-hip ratio and body mass index: a comparison of linear, logistic, and CART models

    Paccaud Fred

    2004-04-01

    Abstract Background We sought to improve upon previously published statistical modeling strategies for binary classification of dyslipidemia for general population screening purposes based on the waist-to-hip circumference ratio and body mass index anthropometric measurements. Methods Study subjects were participants in WHO-MONICA population-based surveys conducted in two Swiss regions. Outcome variables were based on the total serum cholesterol to high density lipoprotein cholesterol ratio. The other potential predictor variables were gender, age, current cigarette smoking, and hypertension. The models investigated were: (i) linear regression; (ii) logistic classification; (iii) regression trees; (iv) classification trees ((iii) and (iv) are collectively known as "CART"). Binary classification performance of the region-specific models was externally validated by classifying the subjects from the other region. Results Waist-to-hip circumference ratio and body mass index remained modest predictors of dyslipidemia. Correct classification rates for all models were 60–80%, with marked gender differences. Gender-specific models provided only small gains in classification. The external validations provided assurance about the stability of the models. Conclusions There were no striking differences between either the algebraic (i, ii) vs. non-algebraic (iii, iv), or the regression (i, iii) vs. classification (ii, iv) modeling approaches. Anticipated advantages of the CART vs. simple additive linear and logistic models were less than expected in this particular application with a relatively small set of predictor variables. CART models may be more useful when considering main effects and interactions between larger sets of predictor variables.
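The logistic-vs.-CART comparison described above can be sketched as follows. The data, coefficients, and threshold below are synthetic illustrations (not the WHO-MONICA data or the study's fitted models); the point is only the side-by-side evaluation of a logistic classifier and a classification tree on anthropometric predictors.

```python
# Sketch: compare logistic classification with a classification tree (CART)
# on synthetic anthropometric predictors of dyslipidemia.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
whr = rng.normal(0.9, 0.1, n)   # waist-to-hip ratio (illustrative)
bmi = rng.normal(26, 4, n)      # body mass index (illustrative)
age = rng.integers(25, 75, n)
X = np.column_stack([whr, bmi, age])
# Dyslipidemia label loosely driven by the predictors plus noise
y = (2.0 * whr + 0.05 * bmi + 0.01 * age + rng.normal(0, 0.5, n) > 3.5).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
log_acc = LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte)
tree_acc = DecisionTreeClassifier(max_depth=4, random_state=0).fit(Xtr, ytr).score(Xte, yte)
print(round(log_acc, 3), round(tree_acc, 3))
```

With modest predictors like these, both approaches land in a similar accuracy band, mirroring the paper's finding that CART offered little advantage over the additive models.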

  3. The Implementation of ABC Classification and (Q, R) with Economic Order Quantity (EOQ) Model on the Travel Agency

    Anggi Oktaviani

    2017-03-01

    To support customer loyalty programs, travel agencies gave souvenirs to their customers. In one travel agency in Jakarta, the demand for travel agency services could not be ensured. This had an impact on inventory items that were surplus to requirements. Inventory management was done by combining ABC classification and a (Q, R) model with Economic Order Quantity (EOQ), which is usually used for uncertain demand. For goods in the "A" classification, two (Q, R) model scenarios were made and then simulated with the Arena software. From these two scenarios, the results show that both have a tendency to decline, or stockouts occur. However, the second scenario is more optimistic because a dummy variable is added to it; thus, the tendency is stable and does not decline.

  4. An application to pulmonary emphysema classification based on model of texton learning by sparse representation

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryojiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2012-03-01

    We aim at using a new texton-based texture classification method for the classification of pulmonary emphysema in computed tomography (CT) images of the lungs. Different from conventional computer-aided diagnosis (CAD) pulmonary emphysema classification methods, in this paper, firstly, the dictionary of textons is learned by applying sparse representation (SR) to image patches in the training dataset. Then the SR coefficients of the test images over the dictionary are used to construct histograms for texture representation. Finally, classification is performed by using a nearest neighbor classifier with a histogram dissimilarity measure as distance. The proposed approach is tested on 3840 annotated regions of interest consisting of normal tissue and mild, moderate and severe pulmonary emphysema of three subtypes. The performance of the proposed system, with an accuracy of about 88%, is comparable to or higher than that of the state-of-the-art method based on basic rotation-invariant local binary pattern histograms and of the texture classification method based on texton learning by k-means, which performs almost the best among other approaches in the literature.
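The final step above — nearest-neighbor classification of texture histograms under a histogram dissimilarity — can be sketched in a few lines. The chi-squared dissimilarity and the random histograms below are illustrative assumptions standing in for SR-coefficient histograms over a learned texton dictionary; the study's exact dissimilarity measure is not specified here.

```python
# Sketch: nearest-neighbor texture classification with a chi-squared
# histogram dissimilarity, standing in for texton-histogram classification.
import numpy as np

def chi2_dist(h1, h2, eps=1e-10):
    """Chi-squared dissimilarity between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(2)

def make_hist(center, n):
    # Noisy, normalized histograms around a class-typical shape
    h = np.abs(center + rng.normal(0, 0.05, (n, center.size)))
    return h / h.sum(axis=1, keepdims=True)

c_normal = np.array([0.5, 0.3, 0.1, 0.1])      # illustrative class shapes
c_emphysema = np.array([0.1, 0.1, 0.3, 0.5])
train = np.vstack([make_hist(c_normal, 10), make_hist(c_emphysema, 10)])
labels = np.array([0] * 10 + [1] * 10)

query = make_hist(c_emphysema, 1)[0]
dists = np.array([chi2_dist(query, h) for h in train])
pred = labels[np.argmin(dists)]   # class of the nearest training histogram
print(pred)
```

The same structure applies regardless of how the histograms were produced (SR coefficients, LBP, or k-means textons).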

  5. SENSING URBAN LAND-USE PATTERNS BY INTEGRATING GOOGLE TENSORFLOW AND SCENE-CLASSIFICATION MODELS

    Y. Yao

    2017-09-01

    With the rapid progress of China’s urbanization, research on the automatic detection of land-use patterns in Chinese cities is of substantial importance. Deep learning is an effective method to extract image features. To take advantage of the deep-learning method in detecting urban land-use patterns, we applied a transfer-learning-based remote-sensing image approach to extract and classify features. Using the Google Tensorflow framework, a powerful convolution neural network (CNN) library was created. First, the transferred model was previously trained on ImageNet, one of the largest object-image data sets, to fully develop the model’s ability to generate feature vectors of standard remote-sensing land-cover data sets (UC Merced and WHU-SIRI). Then, a random-forest-based classifier was constructed and trained on these generated vectors to classify the actual urban land-use pattern on the scale of traffic analysis zones (TAZs). To avoid the multi-scale effect of remote-sensing imagery, a large random patch (LRP) method was used. The proposed method could efficiently obtain acceptable accuracy (OA = 0.794, Kappa = 0.737) for the study area. In addition, the results show that the proposed method can effectively overcome the multi-scale effect that occurs in urban land-use classification at the irregular land-parcel level. The proposed method can help planners monitor dynamic urban land use and evaluate the impact of urban-planning schemes.

  6. Schistosoma mansoni reinfection: Analysis of risk factors by classification and regression tree (CART) modeling.

    Andréa Gazzinelli

    Praziquantel (PZQ) is an effective chemotherapy for schistosomiasis mansoni and a mainstay for its control and potential elimination. However, it does not protect against reinfection, which can occur rapidly in areas with active transmission. A guide to ranking the risk factors for Schistosoma mansoni reinfection would greatly contribute to prioritizing resources and focusing prevention and control measures to prevent rapid reinfection. The objective of the current study was to explore the relationship among the socioeconomic, demographic, and epidemiological factors that can influence reinfection by S. mansoni one year after successful treatment with PZQ in school-aged children in northeastern Minas Gerais state, Brazil. Parasitological, socioeconomic, demographic, and water contact information was surveyed for 506 S. mansoni-infected individuals, aged 6 to 15 years, resident in these endemic areas. Eligible individuals were treated with PZQ until they were determined to be negative by the absence of S. mansoni eggs in the feces on two consecutive days of Kato-Katz fecal thick smear. These individuals were surveyed again 12 months from the date of successful treatment with PZQ. A classification and regression tree (CART) model was then used to explore the relationship between socioeconomic, demographic, and epidemiological variables and reinfection status. The most important risk factor identified for S. mansoni reinfection was "heavy" infection at baseline. Additional analyses, excluding heavy infection status, showed that lower socioeconomic status and a lower level of education of the household head were also important risk factors for S. mansoni reinfection. Our results provide an important contribution toward the control and possible elimination of schistosomiasis by identifying three major risk factors that can be used for targeted treatment and monitoring of reinfection. We suggest that control measures that target

  7. Objective classification of latent behavioral states in bio-logging data using multivariate-normal hidden Markov models.

    Phillips, Joe Scutt; Patterson, Toby A; Leroy, Bruno; Pilling, Graham M; Nicol, Simon J

    2015-07-01

    Analysis of complex time-series data from ecological system studies requires quantitative tools for objective description and classification. These tools must take into account largely ignored problems of bias in manual classification, autocorrelation, and noise. Here we describe a method using existing estimation techniques for multivariate-normal hidden Markov models (HMMs) to develop such a classification. We use high-resolution behavioral data from bio-loggers attached to free-roaming pelagic tuna as an example. Observed patterns are assumed to be generated by an unseen Markov process that switches between several multivariate-normal distributions. Our approach is assessed in two parts. The first uses simulation experiments, from which the ability of the HMM to estimate known parameter values is examined using artificial time series of data consistent with hypotheses about pelagic predator foraging ecology. The second is the application to time series of continuous vertical movement data from yellowfin and bigeye tuna taken from tuna tagging experiments. These data were compressed into summary metrics capturing the variation of patterns in diving behavior and formed into a multivariate time series used to estimate a HMM. Each observation was associated with covariate information incorporating the effect of day and night on behavioral switching. Known parameter values were well recovered by the HMMs in our simulation experiments, resulting in mean correct classification rates of 90-97%, although some variance-covariance parameters were estimated less accurately. HMMs with two distinct behavioral states were selected for every time series of real tuna data, predicting a shallow warm state, which was similar across all individuals, and a deep colder state, which was more variable. Marked diurnal behavioral switching was predicted, consistent with many previous empirical studies on tuna. HMMs provide easily interpretable models for the objective classification of
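The state-classification step of such an HMM can be sketched with Viterbi decoding of a two-state model with normal emissions. The depths, transition matrix, and (univariate rather than multivariate) emission parameters below are hand-set illustrations, not values estimated from the tuna data as in the study.

```python
# Sketch: Viterbi decoding of a two-state HMM with normal emissions,
# standing in for behavioral-state classification of diving data.
import numpy as np
from scipy.stats import norm

# State 0: "shallow warm", state 1: "deep cold" (illustrative depths, metres)
means, sds = np.array([20.0, 200.0]), np.array([10.0, 50.0])
logA = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))   # sticky transitions
logpi = np.log(np.array([0.5, 0.5]))                # uniform initial states

def viterbi(obs):
    logB = norm.logpdf(obs[:, None], means, sds)    # (T, 2) emission log-likelihoods
    T = len(obs)
    delta = np.zeros((T, 2)); psi = np.zeros((T, 2), dtype=int)
    delta[0] = logpi + logB[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA       # scores[i, j]: from i to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                  # backtrack best path
        path[t] = psi[t + 1, path[t + 1]]
    return path

obs = np.concatenate([np.full(5, 25.0), np.full(5, 180.0)])  # shallow then deep
print(viterbi(obs))  # expected: five 0s followed by five 1s
```

In practice the emission distributions would be multivariate and the parameters estimated by EM, as in the study, but the decoding step is the same.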

  8. A Directed Acyclic Graph-Large Margin Distribution Machine Model for Music Symbol Classification.

    Cuihong Wen

    Optical Music Recognition (OMR) has received increasing attention in recent years. In this paper, we propose a classifier based on a new method named Directed Acyclic Graph-Large margin Distribution Machine (DAG-LDM). The DAG-LDM is an improvement of the Large margin Distribution Machine (LDM), which is a binary classifier that optimizes the margin distribution by maximizing the margin mean and minimizing the margin variance simultaneously. We modify the LDM to the DAG-LDM to solve the multi-class music symbol classification problem. Tests are conducted on more than 10000 music symbol images, obtained from handwritten and printed images of music scores. The proposed method provides superior classification capability and achieves much higher classification accuracy than the state-of-the-art algorithms such as Support Vector Machines (SVMs) and Neural Networks (NNs).


  10. AdOn HDP-HMM: An Adaptive Online Model for Segmentation and Classification of Sequential Data.

    Bargi, Ava; Xu, Richard Yi Da; Piccardi, Massimo

    2017-09-21

    Recent years have witnessed an increasing need for the automated classification of sequential data, such as activities of daily living, social media interactions, financial series, and others. With the continuous flow of new data, it is critical to classify the observations on-the-fly and without being limited by a predetermined number of classes. In addition, a model should be able to update its parameters in response to a possible evolution in the distributions of the classes. This compelling problem, however, does not seem to have been adequately addressed in the literature, since most studies focus on offline classification over predefined class sets. In this paper, we present a principled solution for this problem based on an adaptive online system leveraging Markov switching models and hierarchical Dirichlet process priors. This adaptive online approach is capable of classifying the sequential data over an unlimited number of classes while meeting the memory and delay constraints typical of streaming contexts. In this paper, we introduce an adaptive "learning rate" that is responsible for balancing the extent to which the model retains its previous parameters or adapts to new observations. Experimental results on stationary and evolving synthetic data and two video data sets, TUM Assistive Kitchen and collated Weizmann, show a remarkable performance in terms of segmentation and classification, particularly for sequences from evolutionary distributions and/or those containing previously unseen classes.
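The adaptive "learning rate" idea — balancing retention of previous parameters against adaptation to new observations — can be illustrated with a simple exponential-forgetting update of a class mean. This form is an illustrative assumption, not the paper's exact update rule.

```python
# Sketch: online parameter tracking with a learning rate eta that trades off
# retaining the old estimate (1 - eta) against adapting to new data (eta).
import numpy as np

def online_mean(stream, eta=0.1):
    """Track a running class mean; higher eta adapts faster to drift."""
    mu = stream[0]
    for x in stream[1:]:
        mu = (1 - eta) * mu + eta * x   # retain vs. adapt trade-off
    return mu

rng = np.random.default_rng(7)
# A class whose distribution drifts from mean 0 to mean 5 mid-stream
stream = np.concatenate([rng.normal(0, 0.1, 200), rng.normal(5, 0.1, 200)])
slow = online_mean(stream, eta=0.01)   # conservative: lags behind the drift
fast = online_mean(stream, eta=0.2)    # adaptive: tracks the new mean
print(round(slow, 2), round(fast, 2))
```

An adaptive scheme, as in the paper, would adjust eta itself based on how surprising recent observations are under the current model.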

  11. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two features of Random Forests: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
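The variable-importance-based subset selection described above can be sketched as follows: rank features with a Random Forest, keep roughly half, and compare accuracy. Synthetic data stands in for the time-series MODIS bands; feature counts and model settings are illustrative assumptions.

```python
# Sketch: select an optimal feature subset via Random Forest variable
# importance, then compare accuracy of the subset vs. the full feature set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=20, n_informative=6,
                           random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:10]   # keep half the variables

acc_full = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5).mean()
acc_half = cross_val_score(RandomForestClassifier(random_state=0), X[:, top], y, cv=5).mean()
print(round(acc_full, 3), round(acc_half, 3))
```

As in the study, the importance-ranked subset typically performs close to the full variable set at a fraction of the data volume.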

  12. A new ambulatory classification and funding model for radiation oncology: non-admitted patients in Victorian hospitals.

    Antioch, K M; Walsh, M K; Anderson, D; Wilson, R; Chambers, C; Willmer, P

    1998-01-01

    The Victorian Department of Human Services has developed a classification and funding model for non-admitted radiation oncology patients. Agencies were previously funded on an historical cost input basis. For 1996-97, payments were made according to the new Non-admitted Radiation Oncology Classification System and include four key components. Fixed grants are based on Weighted Radiation Therapy Services targets for megavoltage courses, planning procedures (dosimetry and simulation) and consultations. The additional throughput pool covers additional Weighted Radiation Therapy Services once targets are reached, with access conditional on the utilisation of a minimum number of megavoltage fields by each hospital. Block grants cover specialised treatments, such as brachytherapy, allied health payments and other support services. Compensation grants were available to bring payments up to the level of the previous year. There is potential to provide incentives to promote best practice in Australia through linking appropriate practice to funding models. Key Australian and international developments should be monitored, including economic evaluation studies, classification and funding models, and the deliberations of the American College of Radiology, the American Society for Therapeutic Radiology and Oncology, the Trans-Tasman Radiation Oncology Group and the Council of Oncology Societies of Australia. National impact on clinical practice guidelines in Australia can be achieved through the Quality of Care and Health Outcomes Committee of the National Health and Medical Research Council.

  13. [Establishment of comprehensive prediction model of acute gastrointestinal injury classification of critically ill patients].

    Wang, Yan; Wang, Jianrong; Liu, Weiwei; Zhang, Guangliang

    2018-03-25

    trauma. There were 33 cases of emergency operation, 10 cases of elecoperectomy and 17 cases of drug treatment. There were 56 cases of diabetes (93.3%). Forty-five cases (75.0%) used vasoactive drugs, 37 cases (61.7%) used mechanical ventilation and 44 cases (73.3%) used enteral nutrition. APACHE II scores were 4.0 to 28.0 (average 16.8) points. Four clinical factors were significantly positively related with AGI grades: lactic acid level (r=0.215, P=0.000), SOFA score (r=0.383, P=0.000), the use of vasoactive drugs (r=0.611, P=0.000) and mechanical ventilation (r=0.142, P=0.014). Together with the five indexes of gastric bowel sounds, which were found to be negatively correlated with AGI grades, these nine indexes highly correlated with AGI grades formed a 333-by-9 feature set. Five principal components were selected after principal component analysis of these nine correlated indexes. A comprehensive AGI grading model for critically ill patients, with a fitting degree of 0.9673 and an accuracy rate of 82.61%, was built by a BP artificial neural network. The comprehensive model to classify AGI grades with the GIS is developed, which can help further predict the classification of AGI grades of critically ill patients.

  14. A Comparative Study of Classification and Regression Algorithms for Modelling Students' Academic Performance

    Strecht, Pedro; Cruz, Luís; Soares, Carlos; Mendes-Moreira, João; Abreu, Rui

    2015-01-01

    Predicting the success or failure of a student in a course or program is a problem that has recently been addressed using data mining techniques. In this paper we evaluate some of the most popular classification and regression algorithms on this problem. We address two problems: prediction of approval/failure and prediction of grade. The former is…

  15. Conceptual process models and quantitative analysis of classification problems in Scrum software development practices

    Helwerda, L.S.; Niessink, F.; Verbeek, F.J.

    2017-01-01

    We propose a novel classification method that integrates into existing agile software development practices by collecting data records generated by software and tools used in the development process. We extract features from the collected data and create visualizations that provide insights,

  16. Classification of forensic autopsy reports through conceptual graph-based document representation model.

    Mujtaba, Ghulam; Shuib, Liyana; Raj, Ram Gopal; Rajandram, Retnagowri; Shaikh, Khairunisa; Al-Garadi, Mohammed Ali

    2018-06-01

    Text categorization has been used extensively in recent years to classify plain-text clinical reports. This study employs text categorization techniques for the classification of open narrative forensic autopsy reports. One of the key steps in text classification is document representation. In document representation, a clinical report is transformed into a format that is suitable for classification. The traditional document representation technique for text categorization is the bag-of-words (BoW) technique. In this study, the traditional BoW technique is ineffective in classifying forensic autopsy reports because it merely extracts frequent but not necessarily discriminative features from clinical reports. Moreover, this technique fails to capture word inversion, as well as word-level synonymy and polysemy, when classifying autopsy reports. Hence, the BoW technique suffers from low accuracy and low robustness unless it is improved with contextual and application-specific information. To overcome the aforementioned limitations of the BoW technique, this research aims to develop an effective conceptual graph-based document representation (CGDR) technique to classify 1500 forensic autopsy reports from four (4) manners of death (MoD) and sixteen (16) causes of death (CoD). Term-based and Systematized Nomenclature of Medicine-Clinical Terms (SNOMED CT) based conceptual features were extracted and represented through graphs. These features were then used to train a two-level text classifier. The first level classifier was responsible for predicting MoD. In addition, the second level classifier was responsible for predicting CoD using the proposed conceptual graph-based document representation technique. To demonstrate the significance of the proposed technique, its results were compared with those of six (6) state-of-the-art document representation techniques. Lastly, this study compared the effects of one-level classification and two-level classification on the experimental results.

  17. Improved classification and visualization of healthy and pathological hard dental tissues by modeling specular reflections in NIR hyperspectral images

    Usenik, Peter; Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-03-01

    Despite major improvements in dental healthcare and technology, dental caries remains one of the most prevalent chronic diseases of modern society. The initial stages of dental caries are characterized by demineralization of enamel crystals, commonly known as white spots, which are difficult to diagnose. Near-infrared (NIR) hyperspectral imaging is a new promising technique for early detection of demineralization which can classify healthy and pathological dental tissues. However, due to non-ideal illumination of the tooth surface, the hyperspectral images can exhibit specular reflections, in particular around the edges and the ridges of the teeth. These reflections significantly affect the performance of automated classification and visualization methods. A cross-polarized imaging setup can effectively remove the specular reflections; however, due to its complexity and other imaging setup limitations, it is not always possible. In this paper, we propose an alternative approach based on modeling the specular reflections of hard dental tissues, which significantly improves classification accuracy in the presence of specular reflections. The method was evaluated on five extracted human teeth with a corresponding gold standard for 6 different healthy and pathological hard dental tissues, including enamel, dentin, calculus, dentin caries, enamel caries and demineralized regions. Principal component analysis (PCA) was used for multivariate local modeling of healthy and pathological dental tissues. Classification was performed by employing multiple discriminant analysis. Based on the obtained results, we believe the proposed method can be considered an effective alternative to complex cross-polarized imaging setups.
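The PCA-plus-discriminant-analysis pipeline described above can be sketched with scikit-learn, using LinearDiscriminantAnalysis in place of the study's multiple discriminant analysis. The synthetic 50-band "spectra" and two tissue classes are illustrative assumptions.

```python
# Sketch: PCA for local modeling of tissue spectra, followed by
# discriminant-analysis classification (LDA standing in for MDA).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
# Two tissue classes with slightly different mean NIR "spectra" (50 bands)
base = rng.normal(0, 1, 50)
X = np.vstack([base + rng.normal(0.0, 0.3, (40, 50)),
               base + 0.5 + rng.normal(0.0, 0.3, (40, 50))])
y = np.array([0] * 40 + [1] * 40)

clf = make_pipeline(PCA(n_components=5), LinearDiscriminantAnalysis())
acc = cross_val_score(clf, X, y, cv=5).mean()
print(round(acc, 3))
```

The PCA step reduces the correlated spectral bands to a few components before the discriminant classifier, mirroring the multivariate local modeling in the paper.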

  18. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
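The KELM step with a composite (weighted-sum) kernel can be sketched in numpy: the output weights have the closed form β = (K + I/C)⁻¹T for kernel matrix K, regularization C, and one-hot targets T. The kernel weights and parameters below are fixed by hand for illustration; in the paper they are the quantities QPSO optimizes.

```python
# Sketch: kernel extreme learning machine with a weighted composite kernel
# (Gaussian + polynomial). Kernel weights are hand-set, not QPSO-optimized.
import numpy as np

def rbf(X, Y, gamma=0.5):
    d = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d)

def poly(X, Y, degree=2):
    return (X @ Y.T + 1.0) ** degree

def composite(X, Y, w=(0.7, 0.3)):
    # Weighted sum of base kernels; w plays the role of the QPSO-tuned coefficients
    return w[0] * rbf(X, Y) + w[1] * poly(X, Y)

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
T = np.eye(2)[y]                      # one-hot targets

C = 100.0                             # regularization parameter
K = composite(X, X)
beta = np.linalg.solve(K + np.eye(len(X)) / C, T)   # KELM output weights

pred = (composite(X, X) @ beta).argmax(axis=1)      # training-set predictions
acc = (pred == y).mean()
print(acc)
```

A new sample x is classified by forming the composite kernel row between x and the training set and multiplying by β; QPSO would wrap this fit in an outer search over the kernel weights, kernel parameters, and C.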

  19. Landform classification using a sub-pixel spatial attraction model to increase spatial resolution of digital elevation model (DEM)

    Marzieh Mokarrama

    2018-04-01

    The purpose of the present study is to prepare a landform classification by using a digital elevation model (DEM) with a high spatial resolution. To reach this aim, a sub-pixel spatial attraction model was used as a novel method for preparing a DEM with a high spatial resolution in the north of Darab, Fars province, Iran. Sub-pixel attraction models convert a pixel into sub-pixels based on the fraction values of neighboring pixels, which can only be attracted by a central pixel. Based on this approach, a maximum of eight neighboring pixels can be selected for calculating the attraction value. In this model, other pixels are supposed to be too far from the central pixel to exert any attraction. In the present study, by using a sub-pixel attraction model, the spatial resolution of a DEM was increased. The design of the algorithm uses a DEM with a spatial resolution of 30 m (the Advanced Spaceborne Thermal Emission and Reflection Radiometer; ASTER) and one of 90 m (the Shuttle Radar Topography Mission; SRTM). In the attraction model, scale factors of S = 2, S = 3, and S = 4 with two neighboring methods, touching (T = 1) and quadrant (T = 2), are applied to the DEMs by using MATLAB software. The algorithm is evaluated against 487 sample points measured by surveyors. The spatial attraction model with a scale factor of S = 2 gives better results than scale factors greater than 2. Besides, the touching neighborhood method turned out to be more accurate than the quadrant method. In fact, dividing each pixel into more than two sub-pixels decreases the accuracy of the resulting DEM. In these cases, the resulting DEM shows an increased root-mean-square error (RMSE), indicating that attraction models should not be used for S greater than 2. Thus, considering the results, the proposed model is highly capable of
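A highly simplified sketch of sub-pixel attraction with scale factor S = 2 and "touching" neighbours follows. Here each corner sub-pixel is pulled toward the three coarse pixels that touch its corner; the particular weighting scheme (central pixel weight vs. touching-neighbour weight) is an assumption for illustration, not the exact formulation of the study.

```python
# Sketch: S = 2 sub-pixel attraction on a coarse DEM with touching neighbours.
# Weights w_center / w_touch are illustrative, not the paper's formulation.
import numpy as np

def subpixel_attraction(dem, w_center=2.0, w_touch=1.0):
    padded = np.pad(dem, 1, mode="edge")
    rows, cols = dem.shape
    out = np.zeros((rows * 2, cols * 2))
    # Offsets of the three coarse neighbours touching each corner sub-pixel
    corners = {(0, 0): [(-1, 0), (0, -1), (-1, -1)],
               (0, 1): [(-1, 0), (0, 1), (-1, 1)],
               (1, 0): [(1, 0), (0, -1), (1, -1)],
               (1, 1): [(1, 0), (0, 1), (1, 1)]}
    for i in range(rows):
        for j in range(cols):
            for (di, dj), offs in corners.items():
                num, den = w_center * dem[i, j], w_center
                for oi, oj in offs:
                    num += w_touch * padded[i + 1 + oi, j + 1 + oj]
                    den += w_touch
                out[2 * i + di, 2 * j + dj] = num / den
    return out

coarse = np.array([[100.0, 110.0], [120.0, 130.0]])
fine = subpixel_attraction(coarse)
print(fine.shape)  # (4, 4): spatial resolution doubled
```

Because every sub-pixel is a weighted average of nearby coarse values, the refined DEM stays within the elevation range of its neighbourhood, consistent with the attraction idea.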

  20. Tree Species Abundance Predictions in a Tropical Agricultural Landscape with a Supervised Classification Model and Imbalanced Data

    Sarah J. Graves

    2016-02-01

    Mapping species through classification of imaging spectroscopy data is facilitating research to understand tree species distributions at increasingly greater spatial scales. Classification requires a dataset of field observations matched to the image, which will often reflect natural species distributions, resulting in an imbalanced dataset with many samples for common species and few samples for less common species. Despite the high prevalence of imbalanced datasets in multiclass species predictions, the effect on species prediction accuracy and landscape species abundance has not yet been quantified. First, we trained and assessed the accuracy of a support vector machine (SVM) model with a highly imbalanced dataset of 20 tropical species and one mixed-species class of 24 species identified in a hyperspectral image mosaic (350–2500 nm) of Panamanian farmland and secondary forest fragments. The model, with an overall accuracy of 62% ± 2.3% and F-score of 59% ± 2.7%, was applied to the full image mosaic (23,000 ha) at a 2-m resolution to produce a species prediction map, which suggested that this tropical agricultural landscape is more diverse than what has been presented in field-based studies. Second, we quantified the effect of class imbalance on model accuracy. Model assessment showed a trend where species with more samples were consistently over-predicted while species with fewer samples were under-predicted. Standardizing sample size reduced model accuracy, but also reduced the level of species over- and under-prediction. This study advances operational species mapping of diverse tropical landscapes by detailing the effect of imbalanced data on classification accuracy and providing estimates of tree species abundance in an agricultural landscape. Species maps using data and methods presented here can be used in landscape analyses of species distributions to understand human or environmental effects, in addition to focusing conservation
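The over-prediction of well-sampled classes described above can be sketched with an SVM on a synthetic imbalanced two-class set, comparing the default fit with class weighting (one common mitigation, analogous in spirit to the study's sample-size standardization). The data and class sizes are illustrative assumptions, not the hyperspectral dataset.

```python
# Sketch: effect of class imbalance on an SVM, with and without class
# weighting. 180 "common species" samples vs. 20 "rare species" samples.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 1.0, (180, 5)),   # common class "spectra"
               rng.normal(1.0, 1.0, (20, 5))])   # rare class, overlapping
y = np.array([0] * 180 + [1] * 20)
Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

plain = SVC().fit(Xtr, ytr)                          # biased toward class 0
weighted = SVC(class_weight="balanced").fit(Xtr, ytr)  # penalizes rare-class errors more
plain_rec = recall_score(yte, plain.predict(Xte))
bal_rec = recall_score(yte, weighted.predict(Xte))
print("plain rare-class recall:", plain_rec)
print("balanced rare-class recall:", bal_rec)
```

As in the study's assessment, the unweighted model tends to under-predict the rare class; reweighting trades some overall accuracy for more balanced per-class predictions.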

  1. ACOUSTIC CLASSIFICATION OF FRESHWATER FISH SPECIES USING ARTIFICIAL NEURAL NETWORK: EVALUATION OF THE MODEL PERFORMANCE

    Zulkarnaen Fahmi

    2013-06-01

    Full Text Available Hydroacoustic techniques are a valuable tool for the stock assessments of many fish species. Nonetheless, such techniques are limited by problems of species identification. Several methods and techniques have been used to address the problem of acoustic species identification, one of them being Artificial Neural Networks (ANNs). In this paper, Back Propagation (BP) and Multi Layer Perceptron (MLP) Artificial Neural Networks were used to classify carp (Cyprinus carpio), tilapia (Oreochromis niloticus), and catfish (Pangasius hypothalmus). Classification was done using a set of descriptors extracted from the acoustic data records, i.e. Volume Backscattering (Sv), Target Strength (TS), Area Backscattering Strength, Skewness, Kurtosis, Depth, Height and Relative altitude. The results showed that the Multi Layer Perceptron approach performed better than Back Propagation. The classification rate was 85.7% with the Multi Layer Perceptron (MLP) compared to 84.8% with the Back Propagation (BP) ANN.

  2. Object based classification of high resolution data in urban areas considering digital surface models

    Oczipka, Martin Eckhard

    2010-01-01

    Over the last couple of years more and more analogue airborne cameras were replaced by digital cameras. Digitally recorded image data have significant advantages over film-based data. Digital aerial photographs have a much better radiometric resolution, so image information can be acquired in shaded areas too. This information is essential for a stable and continuous classification, because no-data or unclassified areas should be as small as possible. Considering this technological progress, on...

  3. Gender perspective on fear of falling using the classification of functioning as the model

    Pohl, Petra; Ahlgren, Christina; Nordin, Ellinor; Lundquist, Anders; Lundin-Olsson, Lillemor

    2014-01-01

    Purpose: To investigate associations between fear of falling (FOF) and recurrent falls among women and men, and gender differences in FOF with respect to the International Classification of Functioning (ICF). Methods: Community-dwelling people (n = 230, 75–93 years, 72% women) were included and followed for 1 year regarding falls. Data collection included self-reported demographics, questionnaires, and physical performance-based tests. FOF was assessed with the question "Are you afraid of falling?". ...

  4. BmTx3, a scorpion toxin with two putative functional faces separately active on A-type K+ and HERG currents.

    Huys, Isabelle; Xu, Chen-Qi; Wang, Cheng-Zhong; Vacher, Hélène; Martin-Eauclaire, Marie-France; Chi, Cheng-Wu; Tytgat, Jan

    2004-01-01

    A novel HERG channel blocker was isolated from the venom of the scorpion Buthus martensi Karsch, sequenced and characterized at the pharmacological level after chemical synthesis. According to the determined amino acid sequence, the cDNA and genomic genes were then cloned. The genomic gene consists of two exons interrupted by an intron of 65 bp at position -6 upstream from the mature toxin. The protein sequence of this toxin was completely identical with that of a known A-type K+ current bloc...

  5. Electrophysiological study of hERG channels in colon cancer cells

    Granja del Río, Alejandra

    2013-01-01

    In the present study, we show that in normal colon cells (NCM460) and in tumour cells (HT29) the main voltage-activated currents are K+ currents. However, the most interesting finding was that, in contrast to normal colon cells, the tumour cells appear to express voltage-activated K+ currents that could be mediated by hERG channels. Departamento de Bioquímica y Biología Molecular y Fisiología. Máster en Investigaci...

  6. Pulmonary emphysema classification based on an improved texton learning model by sparse representation

    Zhang, Min; Zhou, Xiangrong; Goshima, Satoshi; Chen, Huayue; Muramatsu, Chisako; Hara, Takeshi; Yokoyama, Ryujiro; Kanematsu, Masayuki; Fujita, Hiroshi

    2013-03-01

    In this paper, we present a texture classification method based on textons learned via sparse representation (SR) with new feature histogram maps for the classification of emphysema. First, an overcomplete dictionary of textons is learned via K-SVD learning on image patches of every class in the training dataset. In this stage, a high-pass filter is introduced to exclude patches in smooth areas and speed up the dictionary learning process. Second, 3D joint-SR coefficients and intensity histograms of the test images are used for characterizing regions of interest (ROIs) instead of conventional feature histograms constructed from SR coefficients of the test images over the dictionary. Classification is then performed using a classifier with distance as a histogram dissimilarity measure. Four hundred and seventy annotated ROIs extracted from 14 test subjects, including 6 paraseptal emphysema (PSE) subjects, 5 centrilobular emphysema (CLE) subjects and 3 panlobular emphysema (PLE) subjects, are used to evaluate the effectiveness and robustness of the proposed method. The proposed method is tested on 167 PSE, 240 CLE and 63 PLE ROIs consisting of mild, moderate and severe pulmonary emphysema. The accuracy of the proposed system is around 74%, 88% and 89% for PSE, CLE and PLE, respectively.
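
    The abstract above mentions "a classifier with distance as a histogram dissimilarity measure" without naming the distance. One common choice for histograms is the chi-square distance; the stdlib sketch below (function names and class histograms hypothetical, not the paper's code) illustrates nearest-class assignment with it.

    ```python
    def chi2_distance(h, g, eps=1e-10):
        """Chi-square dissimilarity between two normalized histograms."""
        return 0.5 * sum((a - b) ** 2 / (a + b + eps) for a, b in zip(h, g))

    def nearest_class(query, class_histograms):
        """Assign the query histogram to the class at minimal chi-square distance."""
        return min(class_histograms,
                   key=lambda c: chi2_distance(query, class_histograms[c]))

    # Hypothetical per-class reference histograms over three texton bins.
    class_hists = {"PSE": [0.7, 0.2, 0.1],
                   "CLE": [0.1, 0.2, 0.7],
                   "PLE": [0.3, 0.4, 0.3]}
    label = nearest_class([0.6, 0.3, 0.1], class_hists)
    ```

    A query histogram dominated by the first bin lands closest to the "PSE" reference.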

  7. Stochastic change detection in uncertain nonlinear systems using reduced-order models: classification

    Yun, Hae-Bum; Masri, Sami F

    2009-01-01

    A reliable structural health monitoring methodology (SHM) is proposed to detect relatively small changes in uncertain nonlinear systems. A total of 4000 physical tests were performed using a complex nonlinear magneto-rheological (MR) damper. With the effective (or 'genuine') changes and uncertainties in the system characteristics of the semi-active MR damper, which were precisely controlled with known means and standard deviation of the input current, the tested MR damper was identified with the restoring force method (RFM), a non-parametric system identification method involving two-dimensional orthogonal polynomials. Using the identified RFM coefficients, both supervised and unsupervised pattern recognition techniques (including support vector classification and k-means clustering) were employed to detect system changes in the MR damper. The classification results showed that the identified coefficients with orthogonal basis function can be used as reliable indicators for detecting (small) changes, interpreting the physical meaning of the detected changes without a priori knowledge of the monitored system and quantifying the uncertainty bounds of the detected changes. The classification errors were analyzed using the standard detection theory to evaluate the performance of the developed SHM methodology. An optimal classifier design procedure was also proposed and evaluated to minimize type II (or 'missed') errors
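
    One of the unsupervised techniques named above is k-means clustering of the identified RFM coefficients. As a stdlib-only illustration (not the authors' implementation; all names and data hypothetical), plain k-means over coefficient vectors might look like:

    ```python
    import random

    def kmeans(points, k, iters=50, seed=0):
        """Plain k-means on feature vectors (lists of floats)."""
        rng = random.Random(seed)
        centers = rng.sample(points, k)
        clusters = [[] for _ in range(k)]
        for _ in range(iters):
            # Assignment step: nearest centre by squared Euclidean distance.
            clusters = [[] for _ in range(k)]
            for p in points:
                j = min(range(k),
                        key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
                clusters[j].append(p)
            # Update step: move each centre to the mean of its cluster.
            for j, cl in enumerate(clusters):
                if cl:
                    centers[j] = [sum(col) / len(cl) for col in zip(*cl)]
        return centers, clusters

    # Two well-separated 1-D "coefficient" blobs.
    centers, clusters = kmeans([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]], k=2)
    ```

    With two separated blobs, the centres converge to the two blob means regardless of which points seed the initialization.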

  8. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer.

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT classification would be very useful. This paper investigates the use of model-based clustering for SBT classification, and the influence of different clustering approaches on the properties and spatial distribution of the obtained soil classes. We additionally propose a methodology for automated lithostratigraphic mapping of regionally occurring sedimentary units using SBT classification. The methodology is applied to a large CPT dataset, covering a groundwater basin of ~60 km2 with predominantly unconsolidated sandy sediments in northern Belgium. Results show that the model-based approach is superior in detecting the true lithological classes when compared to more frequently applied unsupervised classification approaches or literature classification diagrams. We demonstrate that automated mapping of lithostratigraphic units using advanced SBT classification techniques can provide a large gain in efficiency, compared to more time-consuming manual approaches and yields at least equally accurate results.
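
    Model-based clustering, as used above, treats each soil class as a parametric density, typically a Gaussian mixture fitted by expectation-maximization. The 1-D two-component sketch below (stdlib only; names hypothetical, not the authors' code) shows the core E/M iteration:

    ```python
    import math

    def em_gmm_1d(x, iters=100):
        """EM for a two-component 1-D Gaussian mixture: a minimal 'model-based
        clustering' where each class is a parametric density."""
        mu = [min(x), max(x)]
        var = [1.0, 1.0]
        pi = [0.5, 0.5]
        for _ in range(iters):
            # E-step: responsibility of each component for each point.
            resp = []
            for xi in x:
                p = [pi[k] * math.exp(-(xi - mu[k]) ** 2 / (2 * var[k]))
                     / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
                s = p[0] + p[1]
                resp.append([p[0] / s, p[1] / s])
            # M-step: re-estimate weights, means, variances.
            for k in range(2):
                nk = sum(r[k] for r in resp)
                pi[k] = nk / len(x)
                mu[k] = sum(r[k] * xi for r, xi in zip(resp, x)) / nk
                var[k] = sum(r[k] * (xi - mu[k]) ** 2
                             for r, xi in zip(resp, x)) / nk + 1e-6
        return mu, var, pi

    # Two hypothetical "soil classes" along a single CPT-derived feature.
    mu, var, pi = em_gmm_1d([0.0, 0.1, 0.2, 0.1, 5.0, 5.1, 4.9, 5.2])
    ```

    The small floor added to the variance keeps a component from collapsing onto a single point.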

  10. Stream classification of the Apalachicola-Chattahoochee-Flint River System to support modeling of aquatic habitat response to climate change

    Elliott, Caroline M.; Jacobson, Robert B.; Freeman, Mary C.

    2014-01-01

    A stream classification and associated datasets were developed for the Apalachicola-Chattahoochee-Flint River Basin to support biological modeling of species response to climate change in the southeastern United States. The U.S. Geological Survey and the Department of the Interior’s National Climate Change and Wildlife Science Center established the Southeast Regional Assessment Project (SERAP) which used downscaled general circulation models to develop landscape-scale assessments of climate change and subsequent effects on land cover, ecosystems, and priority species in the southeastern United States. The SERAP aquatic and hydrologic dynamics modeling efforts involve multiscale watershed hydrology, stream-temperature, and fish-occupancy models, which all are based on the same stream network. Models were developed for the Apalachicola-Chattahoochee-Flint River Basin and subbasins in Alabama, Florida, and Georgia, and for the Upper Roanoke River Basin in Virginia. The stream network was used as the spatial scheme through which information was shared across the various models within SERAP. Because these models operate at different scales, coordinated pair versions of the network were delineated, characterized, and parameterized for coarse- and fine-scale hydrologic and biologic modeling. The stream network used for the SERAP aquatic models was extracted from a 30-meter (m) scale digital elevation model (DEM) using standard topographic analysis of flow accumulation. At the finer scale, reaches were delineated to represent lengths of stream channel with fairly homogenous physical characteristics (mean reach length = 350 m). Every reach in the network is designated with geomorphic attributes including upstream drainage basin area, channel gradient, channel width, valley width, Strahler and Shreve stream order, stream power, and measures of stream confinement. The reach network was aggregated from tributary junction to tributary junction to define segments for the
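
    Strahler order, one of the reach attributes listed above, can be computed recursively on a tributary tree: a headwater reach is order 1, and the order increases only where two tributaries of equal top order meet. A small illustrative sketch (hypothetical data structure, not the SERAP implementation):

    ```python
    def strahler(children, node):
        """Strahler order of a stream network given as a dict
        node -> list of upstream tributary nodes (empty list = headwater)."""
        ups = [strahler(children, c) for c in children[node]]
        if not ups:
            return 1                      # headwater reach
        top = max(ups)
        # Order rises only when two (or more) tributaries tie at the top order.
        return top + 1 if ups.count(top) >= 2 else top

    # Toy network: headwaters h1 and h2 join at a; h3 joins a at the outlet.
    net = {"outlet": ["a", "h3"], "a": ["h1", "h2"],
           "h1": [], "h2": [], "h3": []}
    ```

    Joining the order-2 stream `a` with the order-1 headwater `h3` leaves the outlet at order 2.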

  11. Functional characterization of Kv11.1 (hERG) potassium channels split in the voltage-sensing domain.

    de la Peña, Pilar; Domínguez, Pedro; Barros, Francisco

    2018-03-23

    Voltage-dependent KCNH family potassium channel functionality can be reconstructed using non-covalently linked voltage-sensing domain (VSD) and pore modules (split channels). However, the necessity of a covalent continuity for channel function has not been evaluated at other points within the two functionally independent channel modules. We find here that by cutting Kv11.1 (hERG, KCNH2) channels at the different loops linking the transmembrane spans of the channel core, not only channels split at the S4-S5 linker level, but also those split at the intracellular S2-S3 and the extracellular S3-S4 loops, yield fully functional channel proteins. Our data indicate that albeit less markedly, channels split after residue 482 in the S2-S3 linker resemble the uncoupled gating phenotype of those split at the C-terminal end of the VSD S4 transmembrane segment. Channels split after residues 514 and 518 in the S3-S4 linker show gating characteristics similar to those of the continuous wild-type channel. However, breaking the covalent link at this level strongly accelerates the voltage-dependent accessibility of a membrane impermeable methanethiosulfonate reagent to an engineered cysteine at the N-terminal region of the S4 transmembrane helix. Thus, besides that of the S4-S5 linker, structural integrity of the intracellular S2-S3 linker seems to constitute an important factor for proper transduction of VSD rearrangements to opening and closing the cytoplasmic gate. Furthermore, our data suggest that the short and probably rigid characteristics of the extracellular S3-S4 linker are not an essential component of the Kv11.1 voltage sensing machinery.

  12. Development of classification models to detect Salmonella Enteritidis and Salmonella Typhimurium found in poultry carcass rinses by visible-near infrared hyperspectral imaging

    Seo, Young Wook; Yoon, Seung Chul; Park, Bosoon; Hinton, Arthur; Windham, William R.; Lawrence, Kurt C.

    2013-05-01

    Salmonella is a major cause of foodborne disease outbreaks resulting from the consumption of contaminated food products in the United States. This paper reports the development of a hyperspectral imaging technique for detecting and differentiating two of the most common Salmonella serotypes, Salmonella Enteritidis (SE) and Salmonella Typhimurium (ST), from background microflora that are often found in poultry carcass rinse. Presumptive positive screening of colonies with a traditional direct plating method is a labor intensive and time consuming task. Thus, this paper is concerned with the detection of differences in spectral characteristics among the pure SE, ST, and background microflora grown on brilliant green sulfa (BGS) and xylose lysine tergitol 4 (XLT4) agar media with a spread plating technique. Visible near-infrared hyperspectral imaging, providing the spectral and spatial information unique to each microorganism, was utilized to differentiate SE and ST from the background microflora. A total of 10 classification models, including five machine learning algorithms, each without and with principal component analysis (PCA), were validated and compared to find the best model in classification accuracy. The five machine learning (classification) algorithms used in this study were Mahalanobis distance (MD), k-nearest neighbor (kNN), linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), and support vector machine (SVM). The average classification accuracy of all 10 models on a calibration (or training) set of the pure cultures on BGS agar plates was 98% (Kappa coefficient = 0.95) in determining the presence of SE and/or ST although it was difficult to differentiate between SE and ST. The average classification accuracy of all 10 models on a training set for ST detection on XLT4 agar was over 99% (Kappa coefficient = 0.99) although SE colonies on XLT4 agar were difficult to differentiate from background microflora. 
The average classification
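
    Of the five classification algorithms compared in the record above, k-nearest neighbor is the simplest to sketch. A stdlib-only toy (spectra, labels, and function names hypothetical, not the study's code):

    ```python
    from collections import Counter

    def knn_predict(train, query, k=3):
        """Majority vote among the k nearest (spectrum, label) pairs,
        using squared Euclidean distance between spectra."""
        def dist(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))
        nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    # Two-band toy "spectra" for two serotypes.
    train = [([0.10, 0.90], "SE"), ([0.20, 0.80], "SE"), ([0.15, 0.85], "SE"),
             ([0.90, 0.10], "ST"), ([0.80, 0.20], "ST"), ([0.85, 0.15], "ST")]
    label = knn_predict(train, [0.12, 0.88])
    ```

    A query near the "SE" cluster in spectral space takes that cluster's label.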

  13. Extricating Manual and Non-Manual Features for Subunit Level Medical Sign Modelling in Automatic Sign Language Classification and Recognition.

    R, Elakkiya; K, Selvamani

    2017-09-22

    Subunit segmentation and modelling in medical sign language is one of the important studies in linguistics-oriented and vision-based Sign Language Recognition (SLR). Many previous efforts identified functional subunits from the viewpoint of linguistic syllables, but such syllable-based subunit extraction is not feasible with real-world computer vision techniques. In addition, present recognition systems are designed so that they detect only signer-dependent actions under restricted, laboratory conditions. This research paper aims at solving these two important issues: (1) subunit extraction and (2) signer-independent action in visual sign language recognition. Subunit extraction involves the sequential and parallel breakdown of sign gestures without any prior knowledge of syllables or the number of subunits. A novel Bayesian Parallel Hidden Markov Model (BPaHMM) is introduced for subunit extraction to combine the features of manual and non-manual parameters and yield better results in classification and recognition of signs. Signer-independent action aims at using a single web camera for different signer behaviour patterns and for cross-signer validation. Experimental results show that the proposed signer-independent subunit-level modelling for sign language classification and recognition improves on existing work.

  14. Classification of quantum phases and topology of logical operators in an exactly solved model of quantum codes

    Yoshida, Beni

    2011-01-01

    Searches for possible new quantum phases and classifications of quantum phases have been central problems in physics. Yet, they are indeed challenging problems due to the computational difficulties in analyzing quantum many-body systems and the lack of a general framework for classifications. While frustration-free Hamiltonians, which appear as fixed point Hamiltonians of renormalization group transformations, may serve as representatives of quantum phases, it is still difficult to analyze and classify quantum phases of arbitrary frustration-free Hamiltonians exhaustively. Here, we address these problems by sharpening our considerations to a certain subclass of frustration-free Hamiltonians, called stabilizer Hamiltonians, which have been actively studied in quantum information science. We propose a model of frustration-free Hamiltonians which covers a large class of physically realistic stabilizer Hamiltonians, constrained to only three physical conditions; the locality of interaction terms, translation symmetries and scale symmetries, meaning that the number of ground states does not grow with the system size. We show that quantum phases arising in two-dimensional models can be classified exactly through certain quantum coding theoretical operators, called logical operators, by proving that two models with topologically distinct shapes of logical operators are always separated by quantum phase transitions.

  16. The Pediatric Home Care/Expenditure Classification Model (P/ECM): A Home Care Case-Mix Model for Children Facing Special Health Care Challenges

    Phillips, Charles D.

    2015-01-01

    Case-mix classification and payment systems help assure that persons with similar needs receive similar amounts of care resources, which is a major equity concern for consumers, providers, and programs. Although health service programs for adults regularly use case-mix payment systems, programs providing health services to children and youth rarely use such models. This research utilized Medicaid home care expenditures and assessment data on 2,578 children receiving home care in one large state in the USA. Using classification and regression tree analyses, a case-mix model for long-term pediatric home care was developed. The Pediatric Home Care/Expenditure Classification Model (P/ECM) grouped children and youth in the study sample into 24 groups, explaining 41% of the variance in annual home care expenditures. The P/ECM creates the possibility of a more equitable, and potentially more effective, allocation of home care resources among children and youth facing serious health care challenges. PMID:26740744
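
    Classification and regression trees, as used above, grow by repeatedly choosing the split that most reduces the within-group variance of the target (here, expenditures). An illustrative single-feature split finder (stdlib only; data and names hypothetical, not the study's model):

    ```python
    def best_split(xs, ys):
        """Threshold on one feature minimizing the summed within-group
        squared error of the target (the CART regression criterion)."""
        def sse(vals):  # sum of squared errors around the group mean
            if not vals:
                return 0.0
            m = sum(vals) / len(vals)
            return sum((v - m) ** 2 for v in vals)
        best_t, best_score = None, float("inf")
        for t in sorted(set(xs))[:-1]:     # drop the max so both sides are non-empty
            left = [y for x, y in zip(xs, ys) if x <= t]
            right = [y for x, y in zip(xs, ys) if x > t]
            score = sse(left) + sse(right)
            if score < best_score:
                best_t, best_score = t, score
        return best_t

    # Hypothetical assessment score vs. annual home care expenditure.
    xs = [1, 2, 3, 10, 11, 12]
    ys = [100, 110, 105, 900, 950, 920]
    ```

    The finder picks the threshold separating the low- and high-expenditure groups.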

  18. Robust through-the-wall radar image classification using a target-model alignment procedure.

    Smith, Graeme E; Mobasseri, Bijan G

    2012-02-01

    A through-the-wall radar image (TWRI) bears little resemblance to the equivalent optical image, making it difficult to interpret. To maximize the intelligence that may be obtained, it is desirable to automate the classification of targets in the image to support human operators. This paper presents a technique for classifying stationary targets based on the high-range resolution profile (HRRP) extracted from 3-D TWRIs. The dependence of the image on the target location is discussed using a system point spread function (PSF) approach. It is shown that the position dependence will cause a classifier to fail, unless the image to be classified is aligned to a classifier-training location. A target image alignment technique based on deconvolution of the image with the system PSF is proposed. Comparison of the aligned target images with measured images shows the alignment process introducing normalized mean squared error (NMSE) ≤ 9%. The HRRP extracted from aligned target images are classified using a naive Bayesian classifier supported by principal component analysis. The classifier is tested using a real TWRI of canonical targets behind a concrete wall and shown to obtain correct classification rates ≥ 97%. © 2011 IEEE
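
    A naive Bayes classifier of the kind described above scores each class by the product of per-feature Gaussian likelihoods over the (PCA-reduced) profile. A minimal sketch assuming equal class priors (stdlib only; data and names hypothetical, not the paper's HRRP pipeline):

    ```python
    import math

    def fit_gnb(samples):
        """Per-class feature means and variances for Gaussian naive Bayes."""
        model = {}
        for label in {l for _, l in samples}:
            feats = [f for f, l in samples if l == label]
            stats = []
            for col in zip(*feats):
                mu = sum(col) / len(col)
                var = sum((v - mu) ** 2 for v in col) / len(col) + 1e-6
                stats.append((mu, var))
            model[label] = stats
        return model

    def predict_gnb(model, x):
        """Class with the highest Gaussian log-likelihood, features independent."""
        def loglik(stats):
            return sum(-0.5 * math.log(2 * math.pi * var)
                       - (xi - mu) ** 2 / (2 * var)
                       for xi, (mu, var) in zip(x, stats))
        return max(model, key=lambda lbl: loglik(model[lbl]))

    # Two hypothetical targets described by two profile features each.
    samples = [([1.0, 2.0], "dihedral"), ([1.1, 2.1], "dihedral"), ([0.9, 1.9], "dihedral"),
               ([5.0, 6.0], "sphere"), ([5.1, 6.2], "sphere"), ([4.9, 5.8], "sphere")]
    model = fit_gnb(samples)
    ```

    Working in log space avoids underflow when many features are multiplied.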

  19. Predicting student satisfaction with courses based on log data from a virtual learning environment – a neural network and classification tree model

    Ivana Đurđević Babić

    2015-03-01

    Full Text Available Student satisfaction with courses in academic institutions is an important issue and is recognized as a form of support in ensuring effective and quality education, as well as enhancing student course experience. This paper investigates whether there is a connection between student satisfaction with courses and log data on student courses in a virtual learning environment. Furthermore, it explores whether a successful classification model for predicting student satisfaction with a course can be developed based on course log data, and compares the results obtained from the implemented methods. The research was conducted at the Faculty of Education in Osijek and included analysis of log data and course satisfaction on a sample of third- and fourth-year students. Multilayer Perceptron (MLP) neural networks with different activation functions, Radial Basis Function (RBF) neural networks, and classification tree models were developed, trained and tested in order to classify students into one of two categories of course satisfaction. Type I and type II errors, and input variable importance were used for model comparison and classification accuracy. The results indicate that a successful classification model using the tested methods can be created. The MLP model provides the highest average classification accuracy and the lowest tendency to misclassify students with a low level of course satisfaction, although a t-test for the difference in proportions showed that the difference in performance between the compared models is not statistically significant. Student involvement in forum discussions is recognized as a valuable predictor of student satisfaction with courses in all observed models.
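
    The type I and type II error rates used above for model comparison fall directly out of the binary confusion counts. A small illustrative helper (which class counts as "positive" is an assumption here, not stated in the abstract):

    ```python
    def error_rates(tp, fn, fp, tn):
        """Type I (false positive) and type II (missed / false negative)
        rates, plus overall accuracy, from binary confusion counts."""
        type_i = fp / (fp + tn)       # negatives wrongly flagged positive
        type_ii = fn / (fn + tp)      # positives that were missed
        accuracy = (tp + tn) / (tp + fn + fp + tn)
        return type_i, type_ii, accuracy

    # Hypothetical counts: 40 hits, 10 misses, 5 false alarms, 45 correct rejections.
    t1, t2, acc = error_rates(40, 10, 5, 45)
    ```

    A model can trade the two error types against each other, which is why both are reported alongside accuracy.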

  20. Pitch Based Sound Classification

    Nielsen, Andreas Brinch; Hansen, Lars Kai; Kjems, U

    2006-01-01

    A sound classification model is presented that can classify signals into music, noise and speech. The model extracts the pitch of the signal using the harmonic product spectrum. Based on the pitch estimate and a pitch error measure, features are created and used in a probabilistic model with a soft-max output function. Both linear and quadratic inputs are used. The model is trained on 2 hours of sound and tested on publicly available data. A test classification error below 0.05 with 1 s classification windows is achieved. Furthermore, it is shown that linear input performs as well as quadratic, and that even though classification gets marginally better, not much is achieved by increasing the window size beyond 1 s.
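
    The harmonic product spectrum mentioned above estimates pitch by multiplying the magnitude spectrum with its integer-downsampled copies, so only the fundamental bin stays large in the product. A toy stdlib sketch using a naive O(n²) DFT (illustrative only, not the paper's implementation):

    ```python
    import math

    def hps_pitch(signal, sample_rate, harmonics=3):
        """Harmonic product spectrum pitch estimate in Hz."""
        n = len(signal)
        kmax = n // (2 * harmonics)            # highest fundamental bin considered
        mags = []
        for k in range(kmax * harmonics):      # naive DFT magnitudes (toy-sized)
            re = sum(s * math.cos(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * t / n) for t, s in enumerate(signal))
            mags.append(math.hypot(re, im))
        # Product of the spectrum with its 2x, 3x ... downsampled copies.
        hps = [math.prod(mags[k * h] for h in range(1, harmonics + 1))
               for k in range(kmax)]
        best = max(range(1, kmax), key=hps.__getitem__)   # skip the DC bin
        return best * sample_rate / n

    # Synthetic harmonic tone: 50 Hz fundamental plus 2nd and 3rd harmonics.
    n, sr, f0 = 512, 512, 50
    sig = [math.sin(2 * math.pi * f0 * t / sr)
           + 0.6 * math.sin(2 * math.pi * 2 * f0 * t / sr)
           + 0.3 * math.sin(2 * math.pi * 3 * f0 * t / sr) for t in range(n)]
    ```

    With a pure second harmonic the plain spectrum can peak at 100 Hz, but the product still peaks at the 50 Hz fundamental.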

  1. Logical-Rule Models of Classification Response Times: A Synthesis of Mental-Architecture, Random-Walk, and Decision-Bound Approaches

    Fific, Mario; Little, Daniel R.; Nosofsky, Robert M.

    2010-01-01

    We formalize and provide tests of a set of logical-rule models for predicting perceptual classification response times (RTs) and choice probabilities. The models are developed by synthesizing mental-architecture, random-walk, and decision-bound approaches. According to the models, people make independent decisions about the locations of stimuli…

  2. Data mining and model adaptation for the land use and land cover classification of a Worldview 2 image

    Nascimento, L. C.; Cruz, C. B. M.; Souza, E. M. F. R.

    2013-10-01

    Forest fragmentation studies have increased over the last 3 decades. Land use and land cover (LULC) maps are important tools for this analysis, as are other remote sensing techniques. Object-oriented analysis classifies the image according to patterns such as texture, color, shape, and context. However, there are many attributes to be analyzed, and data mining tools helped us to learn about them and to choose the best ones. The aim of this paper is to describe data mining techniques and results for a heterogeneous area, the municipality of Silva Jardim, Rio de Janeiro, Brazil. The municipality has forest, urban areas, pastures, water bodies, agriculture and also some shadows as objects to be represented. A Worldview 2 satellite image from 2010 was used and the LULC classification was processed using the values that the data mining software provided according to the J48 method. Afterwards, this classification was analyzed and verified with a confusion matrix, making it possible to evaluate the accuracy (58.89%). The best results were in the classes "water" and "forest", which have more homogeneous reflectance. Because of that, the model was adapted in order to create a model for the most homogeneous classes. As a result, 2 new classes were created, some values and attributes changed, and others were added. In the end, the accuracy was 89.33%. It is important to highlight that this is not a conclusive paper; there are still many steps to develop for highly heterogeneous surfaces.
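
    Confusion-matrix verification as described above yields overall accuracy and, commonly, Cohen's kappa (the chance-corrected agreement measure also reported in other records here). An illustrative stdlib helper (hypothetical, not the authors' code):

    ```python
    def accuracy_and_kappa(conf):
        """Overall accuracy and Cohen's kappa from a square confusion matrix
        (rows = reference classes, columns = predicted classes)."""
        total = sum(sum(row) for row in conf)
        observed = sum(conf[i][i] for i in range(len(conf))) / total
        # Chance agreement from the row and column marginals.
        expected = sum(sum(conf[i]) * sum(row[i] for row in conf)
                       for i in range(len(conf))) / total ** 2
        kappa = (observed - expected) / (1 - expected)
        return observed, kappa

    # Hypothetical two-class confusion matrix.
    acc, kappa = accuracy_and_kappa([[40, 10], [10, 40]])
    ```

    Kappa discounts the agreement expected by chance, so it is always at or below the raw accuracy.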

  3. Structure based classification for bile salt export pump (BSEP) inhibitors using comparative structural modeling of human BSEP

    Jain, Sankalp; Grandits, Melanie; Richter, Lars; Ecker, Gerhard F.

    2017-06-01

    The bile salt export pump (BSEP) actively transports conjugated monovalent bile acids from the hepatocytes into the bile. This facilitates the formation of micelles and promotes digestion and absorption of dietary fat. Inhibition of BSEP leads to decreased bile flow and accumulation of cytotoxic bile salts in the liver. A number of compounds have been identified to interact with BSEP, which results in drug-induced cholestasis or liver injury. Therefore, in silico approaches for flagging compounds as potential BSEP inhibitors would be of high value in the early stage of the drug discovery pipeline. Up to now, due to the lack of a high-resolution X-ray structure of BSEP, in silico based identification of BSEP inhibitors focused on ligand-based approaches. In this study, we provide a homology model for BSEP, developed using the corrected mouse P-glycoprotein structure (PDB ID: 4M1M). Subsequently, the model was used for docking-based classification of a set of 1212 compounds (405 BSEP inhibitors, 807 non-inhibitors). Using the scoring function ChemScore, a prediction accuracy of 81% on the training set and 73% on two external test sets could be obtained. In addition, the applicability domain of the models was assessed based on Euclidean distance. Further, analysis of the protein-ligand interaction fingerprints revealed certain functional group-amino acid residue interactions that could play a key role for ligand binding. Though ligand-based models, due to their high speed and accuracy, remain the method of choice for classification of BSEP inhibitors, structure-assisted docking models demonstrate reasonably good prediction accuracies while additionally providing information about putative protein-ligand interactions.
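
    A Euclidean-distance applicability domain, as assessed above, typically flags a query compound as in-domain when its distance to the nearest training compound does not exceed some quantile of the training set's own nearest-neighbour distances. The exact rule in the paper is not given, so the sketch below is one plausible variant (stdlib only, names hypothetical):

    ```python
    def in_domain(query, training_set, quantile=0.95):
        """True if the query descriptor vector lies within the Euclidean
        applicability domain defined by the training compounds."""
        def d(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        def nn(p, pool):
            return min(d(p, q) for q in pool if q is not p)
        # Nearest-neighbour distance of every training point to the rest.
        thresholds = sorted(nn(p, training_set) for p in training_set)
        cutoff = thresholds[int(quantile * (len(thresholds) - 1))]
        return min(d(query, p) for p in training_set) <= cutoff

    # Hypothetical 2-D descriptor vectors for five training compounds.
    train = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1], [0.05, 0.05]]
    ```

    Predictions for out-of-domain compounds would be flagged as unreliable rather than trusted.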

  4. Efficacy of hidden markov model over support vector machine on multiclass classification of healthy and cancerous cervical tissues

    Mukhopadhyay, Sabyasachi; Kurmi, Indrajit; Pratiher, Sawon; Mukherjee, Sukanya; Barman, Ritwik; Ghosh, Nirmalya; Panigrahi, Prasanta K.

    2018-02-01

    In this paper, a comparative study between SVM and HMM is carried out for the multiclass classification of healthy and cancerous cervical tissues. In our study, the HMM methodology proved more promising, yielding higher classification accuracy.

  5. More than a name: Heterogeneity in characteristics of models of maternity care reported from the Australian Maternity Care Classification System validation study.

    Donnolley, Natasha R; Chambers, Georgina M; Butler-Henderson, Kerryn A; Chapman, Michael G; Sullivan, Elizabeth A

    2017-08-01

    Without a standard terminology to classify models of maternity care, it is problematic to compare and evaluate clinical outcomes across different models. The Maternity Care Classification System is a novel system developed in Australia to classify models of maternity care based on their characteristics and an overarching broad model descriptor (Major Model Category). This study aimed to assess the extent of variability in the defining characteristics of models of care grouped to the same Major Model Category, using the Maternity Care Classification System. All public hospital maternity services in New South Wales, Australia, were invited to complete a web-based survey classifying two local models of care using the Maternity Care Classification System. A descriptive analysis of the variation in 15 attributes of models of care was conducted to evaluate the level of heterogeneity within and across Major Model Categories. Sixty-nine out of seventy hospitals responded, classifying 129 models of care. There was wide variation in a number of important attributes of models classified to the same Major Model Category. The category of 'Public hospital maternity care' contained the most variation across all characteristics. This study demonstrated that although models of care can be grouped into a distinct set of Major Model Categories, there are significant variations in models of the same type. This could result in seemingly 'like' models of care being incorrectly compared if grouped only by the Major Model Category. Copyright © 2017 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  6. CASAnova: a multiclass support vector machine model for the classification of human sperm motility patterns.

    Goodson, Summer G; White, Sarah; Stevans, Alicia M; Bhat, Sanjana; Kao, Chia-Yu; Jaworski, Scott; Marlowe, Tamara R; Kohlmeier, Martin; McMillan, Leonard; Zeisel, Steven H; O'Brien, Deborah A

    2017-11-01

    The ability to accurately monitor alterations in sperm motility is paramount to understanding multiple genetic and biochemical perturbations impacting normal fertilization. Computer-aided sperm analysis (CASA) of human sperm typically reports motile percentage and kinematic parameters at the population level, and uses kinematic gating methods to identify subpopulations such as progressive or hyperactivated sperm. The goal of this study was to develop an automated method that classifies all patterns of human sperm motility during in vitro capacitation following the removal of seminal plasma. We visually classified CASA tracks of 2817 sperm from 18 individuals and used a support vector machine-based decision tree to compute four hyperplanes that separate five classes based on their kinematic parameters. We then developed a web-based program, CASAnova, which applies these equations sequentially to assign a single classification to each motile sperm. Vigorous sperm are classified as progressive, intermediate, or hyperactivated, and nonvigorous sperm as slow or weakly motile. This program correctly classifies sperm motility into one of five classes with an overall accuracy of 89.9%. Application of CASAnova to capacitating sperm populations showed a shift from predominantly linear patterns of motility at initial time points to more vigorous patterns, including hyperactivated motility, as capacitation proceeds. Both intermediate and hyperactivated motility patterns were largely eliminated when sperm were incubated in noncapacitating medium, demonstrating the sensitivity of this method. The five CASAnova classifications are distinctive and reflect kinetic parameters of washed human sperm, providing an accurate, quantitative, and high-throughput method for monitoring alterations in motility. © The Authors 2017. Published by Oxford University Press on behalf of Society for the Study of Reproduction. All rights reserved. 
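
    The sequential hyperplane decision tree described above can be sketched as follows. The hyperplane coefficients and two-dimensional feature vectors below are arbitrary placeholders, not the published CASAnova equations:

```python
import numpy as np

# Illustrative CASAnova-style decision tree: linear hyperplanes (w, b) are
# applied in sequence to a sperm track's kinematic feature vector, assigning
# exactly one of five motility classes. Coefficients are invented.
HYPERPLANES = [
    (np.array([ 1.0, -0.5]), -0.2, "progressive"),
    (np.array([ 0.3,  0.8]), -0.5, "intermediate"),
    (np.array([-0.6,  0.4]),  0.1, "hyperactivated"),
    (np.array([ 0.2, -0.9]),  0.4, "slow"),
]

def classify(features):
    for w, b, label in HYPERPLANES:
        if features @ w + b > 0:   # track falls on the positive side: stop here
            return label
    return "weakly motile"         # fifth class: rejected by all four planes

track = np.array([1.5, 0.2])
print(classify(track))
```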
For permissions, please e-mail: journals.permissions@oup.com.

  7. Teaching habitat and animal classification to fourth graders using an engineering-design model

    Marulcu, Ismail

    2014-05-01

    Background: The motivation for this work is built upon the premise that there is a need for research-based materials for design-based science instruction. In this paper, a small portion of our work investigating the impact of a LEGOTM engineering unit on fourth grade students' preconceptions and understanding of animals is presented. Purpose: The driving questions for our work are: (1) What is the impact of an engineering-design-based curricular module on students' understanding of habitat and animal classification? (2) What are students' misconceptions regarding animal classification and habitat? Sample: The study was conducted in an inner-city K-8 school in the northeastern region of the United States. There were two fourth grade classrooms in the school. The first classroom included seven girls and nine boys, whereas the other classroom included eight girls and eight boys. All fourth grade students participated in the study. Design and methods: In answering the research questions, mixed-method approaches were used. Data collection methods included pre- and post-tests, pre- and post-interviews, student journals, and classroom observations. Identical pre- and post-tests were administered to measure students' understanding of animals. They included four multiple-choice and six open-ended questions. Identical pre- and post-interviews were administered to explore students' in-depth understanding of animals. Results: Our results show that students significantly increased their performance after instruction on both the multiple-choice questions (t = -3.586, p = .001) and the open-ended questions (t = -5.04, p = .000). They performed better on the post interviews as well. We also found that design-based instruction helped students comprehend core concepts of a life science subject, animals. Conclusions: Based on these results, the main argument of the study is that engineering design is a useful framework for teaching not only physical science-related subjects, but

  8. Molecular classification of liver cirrhosis in a rat model by proteomics and bioinformatics.

    Xu, Xiu-Qin; Leow, Chon K; Lu, Xin; Zhang, Xuegong; Liu, Jun S; Wong, Wing-Hung; Asperger, Arndt; Deininger, Sören; Eastwood Leung, Hon-Chiu

    2004-10-01

    Liver cirrhosis is a worldwide health problem. Reliable, noninvasive methods for early detection of liver cirrhosis are not available. Using a three-step approach, we classified sera from rats with liver cirrhosis following different treatment insults. The approach consisted of: (i) protein profiling using surface-enhanced laser desorption/ionization (SELDI) technology; (ii) selection of a statistically significant serum biomarker set using machine learning algorithms; and (iii) identification of selected serum biomarkers by peptide sequencing. We generated serum protein profiles from three groups of rats: (i) normal (n=8), (ii) thioacetamide-induced liver cirrhosis (n=22), and (iii) bile duct ligation-induced liver fibrosis (n=5) using a weak cation exchanger surface. Profiling data were further analyzed by a recursive support vector machine algorithm to select a panel of statistically significant biomarkers for class prediction. Sensitivity and specificity of classification using the selected protein marker set were higher than 92%. A consistently down-regulated 3495 Da protein in cirrhosis samples was one of the selected significant biomarkers. This 3495 Da protein was purified on-chip and trypsin digested. Further structural characterization of this biomarker candidate was done by using cross-platform matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) peptide mass fingerprinting (PMF) and matrix-assisted laser desorption/ionization time of flight/time of flight (MALDI-TOF/TOF) tandem mass spectrometry (MS/MS). Combined data from PMF and MS/MS spectra of two tryptic peptides suggested that this 3495 Da protein shared homology with a histidine-rich glycoprotein. These results demonstrated a novel approach to the discovery of new biomarkers for early detection of liver cirrhosis and classification of liver diseases.
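
    The recursive SVM feature-selection step (ii) can be sketched with recursive feature elimination over a linear SVM. The spectra below are synthetic stand-ins for SELDI protein profiles, with one artificially down-regulated peak:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# 35 sera x 100 m/z intensity bins, all noise except one informative peak.
rng = np.random.default_rng(1)
X = rng.normal(size=(35, 100))
y = np.array([0] * 8 + [1] * 27)   # 0 = normal, 1 = cirrhosis/fibrosis
X[y == 1, 40] -= 3.0               # a consistently down-regulated peak

# Recursive feature elimination: repeatedly fit a linear SVM and drop the
# feature with the smallest weight until only a small marker panel remains.
selector = RFE(LinearSVC(dual=False), n_features_to_select=5).fit(X, y)
panel = np.flatnonzero(selector.support_)
print("selected peak indices:", panel)
```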

  9. A 'Swinging Cradle' model for in vitro classification of different types of response elements of a nuclear receptor

    Malo, Madhu S.; Pushpakaran, Premraj; Hodin, Richard A.

    2005-01-01

    Nuclear receptors (NRs) are hormone-activated transcription factors that bind to specific target sequences termed hormone-response elements (HREs). An HRE usually consists of two half-sites (5'-AGGTCA-3' consensus sequence) arranged as a direct, everted or inverted repeat with a variable spacer region. Assignment of an HRE as a direct, everted or inverted repeat is based on its homology to the consensus half-site, but minor variations can make such an assignment confusing. We propose a 'Swinging Cradle' model for HRE classification, whereby the core HRE functions as the 'sitting platform' for the NR, and the extra nucleotides at either end act as the 'sling' of the Cradle. We show that in vitro binding of the thyroid hormone receptor and 9-cis retinoic acid receptor heterodimer to an everted repeat TRE follows the 'Swinging Cradle' model, whereas the other TREs do not. We also show that among these TREs, the everted repeat mediates the highest biological activity

  10. Understanding the Effect of Land Cover Classification on Model Estimates of Regional Carbon Cycling in the Boreal Forest Biome

    Kimball, John; Kang, Sinkyu

    2003-01-01

    The original objectives of this proposed 3-year project were to: 1) quantify the respective contributions of land cover and disturbance (i.e., wild fire) to uncertainty associated with regional carbon source/sink estimates produced by a variety of boreal ecosystem models; 2) identify the model processes responsible for differences in simulated carbon source/sink patterns for the boreal forest; 3) validate model outputs using tower and field-based estimates of NEP and NPP; and 4) recommend/prioritize improvements to boreal ecosystem carbon models, which will better constrain regional source/sink estimates for atmospheric CO2. These original objectives were subsequently distilled to fit within the constraints of a 1-year study. This revised study involved a regional model intercomparison over the BOREAS study region involving the Biome-BGC and TEM (A.D. McGuire, UAF) ecosystem models. The major focus of these revised activities involved quantifying the sensitivity of regional model predictions to land cover classification uncertainties. We also evaluated the individual and combined effects of historical fire activity, historical atmospheric CO2 concentrations, and climate change on carbon and water flux simulations within the BOREAS study region.

  11. Assessment of an extended version of the Jenkinson-Collison classification on CMIP5 models over Europe

    Otero, Noelia; Sillmann, Jana; Butler, Tim

    2018-03-01

    A gridded, geographically extended weather type classification has been developed based on the Jenkinson-Collison (JC) classification system and used to evaluate the representation of weather types over Europe in a suite of climate model simulations. To this aim, a set of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) is compared with the circulation from two reanalysis products. Furthermore, we examine seasonal changes between simulated frequencies of weather types under present and future climate conditions. The models are in reasonably good agreement with the reanalyses, but some discrepancies remain: cyclonic days are overestimated over North Europe and underestimated over South Europe, while anticyclonic situations are overestimated over South Europe and underestimated over North Europe. Low flow conditions were generally underestimated, especially in summer over South Europe, and Westerly conditions were generally overestimated. The projected frequencies of weather types in the late twenty-first century suggest an increase of Anticyclonic days over South Europe in all seasons except summer, while Westerly days increase over North and Central Europe, particularly in winter. We find significant changes in the frequency of Low flow conditions and the Easterly type, which become more frequent during the warmer seasons over Southeast and Southwest Europe, respectively. Our results indicate that in winter the Westerly type has significant impacts on positive anomalies of maximum and minimum temperature over most of Europe. Except in winter, warmer temperatures are linked to Easterly, Anticyclonic and Low Flow conditions, especially over the Mediterranean area. Furthermore, we show that changes in the frequency of weather types make a minor contribution to the total change of European temperatures, which is mainly driven by changes in the temperature anomalies associated with the weather types themselves.

  12. Enhancing Classification Performance of Functional Near-Infrared Spectroscopy- Brain–Computer Interface Using Adaptive Estimation of General Linear Model Coefficients

    Nauman Khalid Qureshi

    2017-07-01

    In this paper, a novel methodology for enhanced classification of functional near-infrared spectroscopy (fNIRS) signals, usable in a two-class [motor imagery (MI) versus rest; mental rotation (MR) versus rest] brain–computer interface (BCI), is presented. First, fNIRS signals corresponding to MI and MR are acquired from the motor and prefrontal cortex, respectively, and then filtered to remove physiological noise. The signals are next modeled using the general linear model, whose coefficients are adaptively estimated using the least-squares technique. Subsequently, multiple feature combinations of the estimated coefficients are used for classification. The best classification accuracies achieved for five subjects for MI versus rest are 79.5, 83.7, 82.6, 81.4, and 84.1%, whereas those for MR versus rest are 85.5, 85.2, 87.8, 83.7, and 84.8%, respectively, using a support vector machine. These results are compared with the best classification accuracies obtained using the conventional hemodynamic response. By means of the proposed methodology, the average classification accuracy obtained was significantly higher (p < 0.05). These results demonstrate the feasibility of developing a high-classification-performance fNIRS-BCI.
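
    The GLM-plus-least-squares feature step can be sketched as follows. The hemodynamic regressor, signal amplitudes, and trial counts are invented placeholders, not the study's acquisition details:

```python
import numpy as np
from sklearn.svm import SVC

# Each trial's signal is fitted with a general linear model y = X @ beta + e,
# with beta estimated by least squares; the estimated coefficients then serve
# as classification features. All signals here are synthetic.
rng = np.random.default_rng(2)
n_t = 200                                   # time samples per trial
t = np.linspace(0, 20, n_t)
hrf = t * np.exp(-t)                        # toy hemodynamic regressor
X = np.column_stack([hrf, np.ones(n_t)])    # design matrix: regressor + baseline

def glm_features(signal):
    beta, *_ = np.linalg.lstsq(X, signal, rcond=None)
    return beta                             # (task amplitude, baseline)

# 40 trials: "motor imagery" evokes the regressor, "rest" does not.
labels = np.repeat([1, 0], 20)
trials = [(3.0 * hrf if c else 0.0 * hrf) + rng.normal(0, 0.2, n_t) for c in labels]
feats = np.array([glm_features(s) for s in trials])

clf = SVC().fit(feats, labels)
print("training accuracy:", clf.score(feats, labels))
```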

  13. A simulated Linear Mixture Model to Improve Classification Accuracy of Satellite Data Utilizing Degradation of Atmospheric Effect

    WIDAD Elmahboub

    2005-02-01

    Researchers in remote sensing have attempted to increase the accuracy of land cover information extracted from remotely sensed imagery. Factors that influence supervised and unsupervised classification accuracy are the presence of atmospheric effects and mixed-pixel information. A simulated linear mixture model experiment was generated to mimic real-world data with known end-member spectral sets and class cover proportions (CCP). The CCP were initially generated by a random number generator and normalized so that the class proportions sum to 1.0, using a MATLAB program. Random noise was intentionally added to pixel values using different combinations of noise levels to simulate a real-world data set. The atmospheric scattering error was computed for each pixel value for three generated images with SPOT data. Pixels can be either correctly classified or misclassified. The results showed a large improvement in classification accuracy; for example, in image 1, 41% of pixels were misclassified due to atmospheric noise. After degradation of the atmospheric effect, the misclassified pixels were reduced to 4%. We conclude that classification accuracy can be improved by degradation of atmospheric noise.
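
    The simulation recipe above (random proportions normalized to sum to 1, mixed with known end members, then degraded with noise) can be sketched as follows; all spectra and noise levels are invented:

```python
import numpy as np

# Known end-member spectra (3 classes x 3 bands), invented for illustration.
rng = np.random.default_rng(3)
endmembers = np.array([[0.1, 0.4, 0.7],     # e.g. water
                       [0.5, 0.5, 0.2],     # e.g. soil
                       [0.2, 0.8, 0.3]])    # e.g. vegetation

ccp = rng.random(3)
ccp /= ccp.sum()                            # class cover proportions sum to 1.0

pixel = ccp @ endmembers                    # noise-free linearly mixed pixel
noisy = pixel + rng.normal(0, 0.02, size=3) # simulated atmospheric noise

# Least-squares unmixing approximately recovers the true proportions.
est, *_ = np.linalg.lstsq(endmembers.T, noisy, rcond=None)
print("true CCP:", ccp.round(3), "estimated:", est.round(3))
```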

  14. THE LOW BACKSCATTERING OBJECTS CLASSIFICATION IN POLSAR IMAGE BASED ON BAG OF WORDS MODEL USING SUPPORT VECTOR MACHINE

    L. Yang

    2018-04-01

    Due to forward scattering and blocking of the radar signal, water, bare soil, and shadow, collectively named low backscattering objects (LBOs), often present low backscattering intensity in polarimetric synthetic aperture radar (PolSAR) images. Because LBOs give rise to similar backscattering intensities and polarimetric responses, spectral-based classifiers such as the Wishart method are inefficient for LBO classification. Although some polarimetric features have been exploited to relieve this confusion, the backscattering features remain unstable when the system noise floor varies in the range direction. This paper introduces a simple but effective scene classification method based on the Bag of Words (BoW) model using a Support Vector Machine (SVM) to discriminate LBOs, without relying on any polarimetric features. In the proposed approach, square windows are first opened around the LBOs adaptively to determine the scene images, and Scale-Invariant Feature Transform (SIFT) points are then detected in the training and test scenes. The detected SIFT features are clustered using K-means, with the resulting cluster centers serving as the visual word list, and scene images are represented by their word frequencies. Finally, an SVM is trained to predict the LBO type of new scenes. The proposed method is evaluated over two AIRSAR data sets at C band and L band, including water, bare soil and shadow scenes. The experimental results illustrate the effectiveness of the scene-based method in distinguishing LBOs.
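
    The BoW pipeline (cluster local descriptors into visual words, represent each scene by word frequencies, classify with an SVM) can be sketched as follows. Randomly generated vectors stand in for the SIFT descriptors, which would normally come from a feature detector:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(4)
K = 8  # vocabulary size

def fake_descriptors(kind, n=60):
    # Two LBO "textures": descriptors drawn around different prototypes.
    center = np.full(16, 1.0) if kind else np.full(16, -1.0)
    return center + rng.normal(0, 0.5, size=(n, 16))

scenes = [fake_descriptors(k % 2) for k in range(20)]
labels = [k % 2 for k in range(20)]        # e.g. 0 = water, 1 = bare soil

# Step 1: K-means over all descriptors defines the visual word list.
vocab = KMeans(n_clusters=K, n_init=10, random_state=0).fit(np.vstack(scenes))

# Step 2: each scene becomes a normalized word-frequency histogram.
def bow_histogram(descriptors):
    words = vocab.predict(descriptors)
    return np.bincount(words, minlength=K) / len(words)

X = np.array([bow_histogram(s) for s in scenes])

# Step 3: an SVM separates scene types from their histograms.
clf = SVC().fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```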

  15. [Analysis of dietary pattern and diabetes mellitus influencing factors identified by classification tree model in adults of Fujian].

    Yu, F L; Ye, Y; Yan, Y S

    2017-05-10

    Objective: To identify the dietary patterns and explore the relationship between environmental factors (especially dietary patterns) and diabetes mellitus in the adults of Fujian. Methods: A multi-stage sampling method was used to survey residents aged ≥18 years by questionnaire, physical examination and laboratory detection in 10 disease surveillance points in Fujian. Factor analysis was used to identify the dietary patterns, a logistic regression model was applied to analyze the relationship between dietary patterns and diabetes mellitus, and a classification tree model was adopted to identify the influencing factors for diabetes mellitus. Results: There were four dietary patterns in the population: meat, plant, high-quality protein, and fried food and beverages patterns. The logistic analysis showed that the plant pattern, which has higher factor loadings for fresh fruits and vegetables and cereals and tubers, was protective against diabetes mellitus. The risk of diabetes mellitus in the population at the T2 and T3 levels of the factor score was 0.727 (95%CI: 0.561-0.943) times and 0.736 (95%CI: 0.573-0.944) times, respectively, that of those whose factor score was in the lowest quartile. Thirteen influencing factors and eleven high-risk groups for diabetes mellitus were identified by the classification tree model. The influencing factors were dyslipidemia, age, family history of diabetes, hypertension, physical activity, career, sex, sedentary time, abdominal adiposity, BMI, marital status, sleep time and the high-quality protein pattern. Conclusion: There is a close association between dietary patterns and diabetes mellitus. It is necessary to promote a healthy and reasonable diet, strengthen the monitoring and control of blood lipids, blood pressure and body weight, and encourage good lifestyles for the prevention and control of diabetes mellitus.
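
    The classification-tree step can be sketched with a CART-style tree grown on a few survey variables. The data below are fabricated and only illustrate the mechanics, not the Fujian survey results:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Fabricated survey records: age plus two binary risk factors.
rng = np.random.default_rng(5)
n = 400
age = rng.integers(18, 80, n)
dyslipidemia = rng.integers(0, 2, n)
family_history = rng.integers(0, 2, n)

# Invented risk rule: older age plus dyslipidemia raises diabetes probability.
p = 0.05 + 0.4 * dyslipidemia * (age > 55) + 0.2 * family_history
diabetes = rng.random(n) < p

# A shallow tree exposes high-risk subgroups as human-readable split rules.
X = np.column_stack([age, dyslipidemia, family_history])
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, diabetes)
print(export_text(tree, feature_names=["age", "dyslipidemia", "family_history"]))
```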

  16. Support Vector Machine and Artificial Neural Network Models for the Classification of Grapevine Varieties Using a Portable NIR Spectrophotometer.

    Gutiérrez, Salvador; Tardaguila, Javier; Fernández-Novales, Juan; Diago, María P

    2015-01-01

    The identification of different grapevine varieties, currently addressed using visual ampelometry, DNA analysis and, very recently, hyperspectral analysis under laboratory conditions, is an issue of great importance in the wine industry. This work presents support vector machine and artificial neural network modelling for grapevine varietal classification from in-field leaf spectroscopy. Modelling was attempted at two scales: site-specific and global. Spectral measurements were obtained in the near-infrared (NIR) spectral range between 1600 and 2400 nm under field conditions in a non-destructive way using a portable spectrophotometer. For the site-specific approach, spectra were collected from the adaxial side of 400 individual leaves of 20 grapevine (Vitis vinifera L.) varieties one week after veraison. For the global model, two additional sets of spectra were collected one week before harvest from two different vineyards in another vintage, each consisting of 48 measurements from individual leaves of six varieties. Several combinations of spectral scatter correction and smoothing filtering were studied. For the training of the models, support vector machines and artificial neural networks were employed using the pre-processed spectra as input and the varieties as the classes of the models. The results from the pre-processing study showed that scatter correction had no influence, and that second-derivative Savitzky-Golay filtering with a window size of 5 yielded the best outcomes. For the site-specific model, with 20 classes, the best classifier yielded an overall score of 87.25% correctly classified samples. These results were compared under the same conditions with a model trained using partial least squares discriminant analysis, which showed a worse performance in every case. For the global model, a 6-class dataset involving samples from three different vineyards, two years and leaves
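
    The winning preprocessing reported above (second-derivative Savitzky-Golay filtering, window size 5) feeding an SVM can be sketched as follows; the spectra are synthetic stand-ins for the 1600 to 2400 nm leaf scans:

```python
import numpy as np
from scipy.signal import savgol_filter
from sklearn.svm import SVC

rng = np.random.default_rng(6)
wavelengths = np.linspace(1600, 2400, 160)

def fake_spectrum(variety):
    peak = 1800 + 200 * variety                     # variety-specific absorption
    return np.exp(-((wavelengths - peak) / 30) ** 2) + rng.normal(0, 0.003, 160)

labels = np.repeat([0, 1, 2], 20)                   # three invented varieties
raw = np.array([fake_spectrum(v) for v in labels])

# window_length=5, polyorder=2, deriv=2: second-derivative smoothing filter.
processed = savgol_filter(raw, window_length=5, polyorder=2, deriv=2, axis=1)

clf = SVC().fit(processed, labels)
print("training accuracy:", clf.score(processed, labels))
```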

  18. Integration of fuzzy theory into Kano model for classification of service quality elements: A case study in a machinery industry of China

    Qingliang Meng

    2015-11-01

    Purpose: The purpose of this study is to classify customer requirements more effectively, in order to meet those requirements and improve customer satisfaction. The classification focuses on customer psychology. Design/methodology/approach: Considering the advantages of the Kano model in taking into account both customers' consuming psychology and motivation, and combining it with fuzzy theory, which is effective in coping with uncertainty and ambiguity, a Kano model based on fuzzy theory is proposed. In view of the strong subjectivity of traditional Kano questionnaires, a fuzzy Kano questionnaire is proposed to classify service quality elements more objectively. Furthermore, this study develops a mathematical calculation procedure for the quality classification of the fuzzy Kano model, which is more objective than the traditional Kano model in classifying service quality elements. With this method, customers' actual mentality can be reasonably reflected even in uncertain circumstances. Finally, an empirical study at Xuzhou Construction Machinery Group Co., Ltd, the largest construction machinery manufacturer in China, is presented to test the method's feasibility and validity. Findings: The calculation results indicate that the proposed model performs well in classifying customer requirements. Originality/value: This study provides a method to integrate fuzzy theory and the Kano model, and develops a mathematical calculation procedure for the quality classification of the fuzzy Kano model.
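
    One common way to operationalize a fuzzy Kano questionnaire can be sketched as follows: the customer spreads membership degrees over the five answer options for the functional and dysfunctional questions, the outer product is scored against the classical Kano evaluation table, and categories below a threshold alpha are filtered out. The membership values and threshold below are invented, and this is a generic fuzzy Kano sketch rather than the paper's exact procedure:

```python
import numpy as np

ANSWERS = ["like", "must-be", "neutral", "live-with", "dislike"]
# Classical Kano evaluation table (rows: functional answer, cols: dysfunctional).
# A=attractive, O=one-dimensional, M=must-be, I=indifferent, R=reverse, Q=questionable
KANO_TABLE = [
    ["Q", "A", "A", "A", "O"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "I", "I", "I", "M"],
    ["R", "R", "R", "R", "Q"],
]

def fuzzy_kano(functional, dysfunctional, alpha=0.2):
    weights = np.outer(functional, dysfunctional)   # joint membership matrix
    scores = {}
    for i in range(5):
        for j in range(5):
            cat = KANO_TABLE[i][j]
            scores[cat] = scores.get(cat, 0.0) + weights[i, j]
    # Keep only categories whose accumulated membership clears the threshold.
    return {c: round(s, 3) for c, s in scores.items() if s >= alpha}

# A customer: 70% "like" / 30% "neutral" if present, 20% "live-with" / 80% "dislike" if absent.
print(fuzzy_kano([0.7, 0, 0.3, 0, 0], [0, 0, 0, 0.2, 0.8]))
```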

  19. Quality prediction modeling for multistage manufacturing based on classification and association rule mining

    Kao Hung-An

    2017-01-01

    For manufacturing enterprises, product quality is a key factor in assessing production capability and increasing core competence. To reduce external failure costs, much research and many methodologies have been introduced to improve process yield rates, such as TQC/TQM, the Shewhart Cycle, and Deming's 14 Points. Nowadays, impressive progress has been made in process monitoring and industrial data analysis because of the Industry 4.0 trend. Industries have started to utilize quality control (QC) methodology to lower inspection overhead and internal failure costs. Currently, the focus of QC is mostly on the inspection of single workstations and the final product; however, in multistage manufacturing, many factors (equipment, operators, parameters, etc.) can have cumulative and interactive effects on final quality. When a failure occurs, it is difficult to recover the original settings for cause analysis. To address these problems, this research proposes a combination of principal components analysis (PCA) with classification and association rule mining algorithms to extract features representing the relationships among multiple workstations, predict final product quality, and analyze the root cause of product defects. The method is demonstrated on a semiconductor data set.
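
    The PCA-plus-classification part of the pipeline can be sketched as follows (the association rule mining step is omitted). All process data are simulated placeholders with an invented cross-station drift as the defect cause:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(7)
n = 300
readings = rng.normal(size=(n, 12))      # 3 workstations x 4 sensors each
drift = rng.normal(size=n)               # shared process drift across stations
readings[:, 0] += drift                  # seen by workstation 1, sensor 1...
readings[:, 5] += drift                  # ...and by workstation 2, sensor 2
defect = (drift > 1.0).astype(int)       # invented cumulative root cause

# PCA compresses the correlated multi-workstation readings; a classifier then
# predicts final product quality from the principal components.
pcs = PCA(n_components=4).fit_transform(readings)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(pcs, defect)
print("training accuracy:", clf.score(pcs, defect))
```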

  20. Prediction of passive blood-brain partitioning: straightforward and effective classification models based on in silico derived physicochemical descriptors.

    Vilar, Santiago; Chakrabarti, Mayukh; Costanzi, Stefano

    2010-06-01

    The distribution of compounds between blood and brain is a very important consideration for new candidate drug molecules. In this paper, we describe the derivation of two linear discriminant analysis (LDA) models for the prediction of passive blood-brain partitioning, expressed in terms of logBB values. The models are based on computationally derived physicochemical descriptors, namely the octanol/water partition coefficient (logP), the topological polar surface area (TPSA) and the total number of acidic and basic atoms, and were obtained using a homogeneous training set of 307 compounds, for all of which the published experimental logBB data had been determined in vivo. In particular, since molecules with logBB > 0.3 cross the blood-brain barrier (BBB) readily while molecules with logBB < -1 are poorly distributed to the brain, on the basis of these thresholds we derived two distinct models, both of which correctly classify about 80% of compounds. Notably, the predictive power of our models was confirmed by the analysis of a large external dataset of compounds with reported activity on the central nervous system (CNS) or lack thereof. The calculation of straightforward physicochemical descriptors is the only requirement for the prediction of the logBB of novel compounds through our models, which can be conveniently applied in conjunction with drug design and virtual screenings. Published by Elsevier Inc.
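
    The LDA-on-simple-descriptors idea can be sketched as follows. All descriptor values and the labeling rule are invented to mimic the known trend (high TPSA hinders, lipophilicity helps passive BBB permeation); this is not the paper's trained model:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(8)
n = 100
logp = rng.normal(2.0, 1.5, n)             # octanol/water partition coefficient
tpsa = rng.uniform(20, 150, n)             # topological polar surface area
acid_base = rng.integers(0, 4, n)          # count of acidic and basic atoms
X = np.column_stack([logp, tpsa, acid_base])

# Invented ground truth: 1 = BBB-permeant (logBB > 0.3), 0 = non-permeant.
permeant = (0.5 * logp - 0.02 * tpsa + 1.0 > 0).astype(int)

lda = LinearDiscriminantAnalysis().fit(X, permeant)
print("training accuracy:", lda.score(X, permeant))
```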

  1. Deep learning based classification for head and neck cancer detection with hyperspectral imaging in an animal model

    Ma, Ling; Lu, Guolan; Wang, Dongsheng; Wang, Xu; Chen, Zhuo Georgia; Muller, Susan; Chen, Amy; Fei, Baowei

    2017-03-01

    Hyperspectral imaging (HSI) is an emerging imaging modality that can provide a noninvasive tool for cancer detection and image-guided surgery. HSI acquires high-resolution images at hundreds of spectral bands, providing rich data for differentiating different types of tissue. We proposed a deep learning based method for the detection of head and neck cancer with hyperspectral images. Since a deep learning algorithm can learn features hierarchically, the learned features are more discriminative and concise than handcrafted features. In this study, we adopt convolutional neural networks (CNN) to learn deep pixel-level features for classifying each pixel as tumor or normal tissue. We evaluated the proposed classification method on a dataset containing hyperspectral images from 12 tumor-bearing mice. Experimental results show that our method achieved an average accuracy of 91.36%. This preliminary study demonstrates that our deep learning method can be applied to hyperspectral images for detecting head and neck tumors in animal models.

  2. An enhanced forest classification scheme for modeling vegetation-climate interactions based on national forest inventory data

    Majasalmi, Titta; Eisner, Stephanie; Astrup, Rasmus; Fridman, Jonas; Bright, Ryan M.

    2018-01-01

    Forest management affects the distribution of tree species and the age class of a forest, shaping its overall structure and functioning and in turn the surface-atmosphere exchanges of mass, energy, and momentum. In order to attribute climate effects to anthropogenic activities like forest management, good accounts of forest structure are necessary. Here, using Fennoscandia as a case study, we make use of Fennoscandic National Forest Inventory (NFI) data to systematically classify forest cover into groups of similar aboveground forest structure. An enhanced forest classification scheme and related lookup table (LUT) of key forest structural attributes (i.e., maximum growing season leaf area index (LAImax), basal-area-weighted mean tree height, tree crown length, and total stem volume) was developed, and the classification was applied for multisource NFI (MS-NFI) maps from Norway, Sweden, and Finland. To provide a complete surface representation, our product was integrated with the European Space Agency Climate Change Initiative Land Cover (ESA CCI LC) map of present day land cover (v.2.0.7). Comparison of the ESA LC and our enhanced LC products (https://doi.org/10.21350/7zZEy5w3) showed that forest extent differed notably between the two products (κ = 0.55, accuracy 0.64). To demonstrate the potential of our enhanced LC product to improve the description of the maximum growing season LAI (LAImax) of managed forests in Fennoscandia, we compared our LAImax map with reference LAImax maps created using the ESA LC product (and related cross-walking table) and PFT-dependent LAImax values used in three leading land models. Comparison of the LAImax maps showed that our product provides a spatially more realistic description of LAImax in managed Fennoscandian forests compared to reference maps. This study presents an approach to account for the transient nature of forest structural attributes due to human intervention in different land models.
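
    The lookup-table idea can be sketched as a simple mapping from forest class to structural attributes. The class names and attribute values below are illustrative placeholders, not the published Fennoscandian LUT:

```python
# Hypothetical LUT: each enhanced forest class maps to key structural attributes.
FOREST_LUT = {
    # class: (LAI_max, mean height [m], crown length [m], stem volume [m3/ha])
    "spruce_old":   (5.0, 18.0, 11.0, 250.0),
    "spruce_young": (3.5,  8.0,  6.0,  90.0),
    "pine_old":     (3.0, 16.0,  7.0, 180.0),
    "birch_young":  (2.5,  7.0,  4.0,  60.0),
}

def lai_max(forest_class):
    """Return the prescribed maximum growing-season LAI for a mapped class."""
    return FOREST_LUT[forest_class][0]

print(lai_max("spruce_old"))
```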

  3. Gender perspective on fear of falling using the classification of functioning as the model

    Ahlgren, Christina; Nordin, Ellinor; Lundquist, Anders; Lundin-Olsson, Lillemor

    2015-01-01

    Purpose: To investigate associations between fear of falling (FOF) and recurrent falls among women and men, and gender differences in FOF with respect to International Classification of Functioning (ICF). Methods: Community-dwelling people (n = 230, 75–93 years, 72% women) were included and followed 1 year regarding falls. Data collection included self-reported demographics, questionnaires, and physical performance-based tests. FOF was assessed with the question “Are you afraid of falling?”. Results were discussed with a gender relational approach. Results: At baseline 55% women (n = 92) and 22% men (n = 14) reported FOF. During the follow-up 21% women (n = 35) and 30% men (n = 19) experienced recurrent falls. There was an association between gender and FOF (p = 0.001), but not between FOF and recurrent falls (p = 0.79), or between gender and recurrent falls (p = 0.32). FOF was related to Personal factors and Activity and Participation. The relationship between FOF and Personal factors was in opposite directions for women and men. Conclusions: Results did not support the prevailing paradigm that FOF increases rate of recurrent falls in community-dwelling people, and indicated that the answer to “Are you afraid of falling?” might be highly influenced by gendered patterns. Implications for Rehabilitation The question “Are you afraid of falling?” has no predictive value when screening for the risk of falling in independent community-dwelling women or men over 75 years of age. Gendered patterns might influence the answer to the question “Are you afraid of falling?” Healthcare personnel are recommended to be aware of this when asking older women and men about fear of falling. PMID:24786969

  4. Modeling Oligo-Miocence channel sands (Dezful Embayment, SW Iran): an integrated facies classification workflow

    Heydari, Mostafa; Maddahi, Iradj; Moradpour, Mehran; Esmaeilpour, Sajjad

    2014-01-01

    This study has been conducted on Mansuri onshore oilfield located in Dezful Embayment, south-west Iran. One of the hydrocarbon-bearing formations is the Oligo-Miocene Asmari formation—the most prolific Iranian reservoir rock. Like many other oilfields in the area, the trap in this field is deemed structural (anticline), formed during the collision of the Arabian plate with the Iranian plate and the folding of Neotethys deposits with a NW–SE trend. This study integrates three different quantitative studies from the geology, geophysics and petrophysics disciplines to quantify the qualitative study of seismic facies analysis based on trace shapes and 3D multi-attribute clustering. First, stratigraphic sequences and seismic detectable facies were derived at one well location using the available high resolution core facies analysis and depositional environment assessment reports. Paleo and petrophysical logs from other wells were subsequently used for the extrapolation of stratigraphic sequences interpreted at the first well. Utilizing lithology discrimination obtained by wire-line log interpretation, facies were extrapolated to all wells in the area. Seismic 3D attribute analysis and seismic facies classification established a 3D facies volume accordingly, which was finally calibrated to geological facies at well locations. The ultimate extracted facies-guided geobody shows that good reservoir-quality channel sands have accumulated with NW/SE elongation at the ridge of the structure. As a result, this type of geometry has created a stratigraphic/structural hydrocarbon trap in this oilfield. Moreover, seismic facies analysis shows that buried channels do not parallel the predominant Arabian plate-originated channels (with SW–NE trends) in SW Zagros and are locally swerved in this area.

  5. The attractor recurrent neural network based on fuzzy functions: An effective model for the classification of lung abnormalities.

    Khodabakhshi, Mohammad Bagher; Moradi, Mohammad Hassan

    2017-05-01

    The respiratory system dynamic is of high significance when it comes to the detection of lung abnormalities, which highlights the importance of presenting a reliable model for it. In this paper, we introduce a novel dynamic modelling method for the characterization of the lung sounds (LS), based on the attractor recurrent neural network (ARNN). The ARNN structure allows the development of an effective LS model. Additionally, it has the capability to reproduce the distinctive features of the lung sounds using its formed attractors. Furthermore, a novel ARNN topology based on fuzzy functions (FFs-ARNN) is developed. Given the utility of recurrence quantification analysis (RQA) as a tool to assess the nature of complex systems, it was used to evaluate the performance of both the ARNN and the FFs-ARNN models. The experimental results demonstrate the effectiveness of the proposed approaches for multichannel LS analysis. In particular, a classification accuracy of 91% was achieved using FFs-ARNN with sequences of RQA features. Copyright © 2017 Elsevier Ltd. All rights reserved.
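    Recurrence quantification analysis starts from a thresholded self-distance matrix of the signal. A minimal sketch of the simplest RQA measure, the recurrence rate, might look as follows; the threshold `eps` and the toy series are purely illustrative, not the paper's lung-sound features:

    ```python
    def recurrence_matrix(series, eps):
        """Binary recurrence plot: 1 where two samples are closer than eps."""
        n = len(series)
        return [[1 if abs(series[i] - series[j]) < eps else 0 for j in range(n)]
                for i in range(n)]

    def recurrence_rate(R):
        """Fraction of recurrent points in the recurrence plot."""
        n = len(R)
        return sum(sum(row) for row in R) / (n * n)

    # A constant signal recurs everywhere; an alternating one only half the time.
    rr_const = recurrence_rate(recurrence_matrix([1.0, 1.0, 1.0, 1.0], eps=0.5))
    rr_alt = recurrence_rate(recurrence_matrix([0.0, 10.0, 0.0, 10.0], eps=0.5))
    ```

    Real RQA pipelines compute further measures (determinism, laminarity, entropy of diagonal-line lengths) on the same matrix, typically after delay embedding.
    
    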

  6. Classification of 'healthier' and 'less healthy' supermarket foods by two Australasian nutrient profiling models.

    Eyles, Helen; Gorton, Delvina; Ni Mhurchu, Cliona

    2010-09-10

    To determine whether a modified version of the Heart Foundation Tick (MHFT) nutrient profiling model appropriately classifies supermarket foods to endorse its use for identifying 'healthier' products eligible for promotion in a supermarket intervention trial. Top-selling products (n=550) were selected from an existing supermarket nutrient composition database. Percentage of products classified as 'healthier' by the MHFT and a modified comparator model (Food Standards Australia New Zealand; MFSANZ) were calculated. Percentage agreement, consistency (kappa statistic), and average nutrient values were assessed overall, and across seven food groups. The MHFT model categorised 16% fewer products as 'healthier' than the MFSANZ model. Agreement and consistency between models were 72% and kappa=0.46 (P=0.00), respectively. For both models, 'healthier' products were on average lower in energy, protein, saturated fat, sugar, and sodium than their 'less healthy' counterparts. The MHFT nutrient profiling model categorised regularly purchased supermarket foods similarly to the MFSANZ model, and both appear to distinguish appropriately between 'healthier' and 'less healthy' options. Therefore, both models have the potential to appropriately identify 'healthier' foods for promotion and positively influence food choices.

  7. Statistical modelling approach to derive quantitative nanowastes classification index; estimation of nanomaterials exposure

    Ntaka, L

    2013-08-01

    Full Text Available. In this work, a statistical inference approach, specifically non-parametric bootstrapping and a linear model, was applied. Data used to develop the model were sourced from the literature. 104 data points with information on aggregation, natural organic matter...
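    The abstract mentions non-parametric bootstrapping. A percentile-bootstrap confidence interval for an arbitrary statistic can be sketched as below; the data values and replicate count are hypothetical stand-ins, not the study's 104 literature data points:

    ```python
    import random

    def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
        """Percentile bootstrap confidence interval for an arbitrary statistic."""
        rng = random.Random(seed)
        # resample with replacement, recompute the statistic, sort the replicates
        reps = sorted(
            stat([rng.choice(data) for _ in range(len(data))])
            for _ in range(n_boot)
        )
        lo = reps[int((alpha / 2) * n_boot)]
        hi = reps[int((1 - alpha / 2) * n_boot) - 1]
        return lo, hi

    mean = lambda xs: sum(xs) / len(xs)

    # Hypothetical measurements (e.g. an aggregation-related quantity)
    sample = [3.1, 2.4, 4.0, 3.6, 2.9, 3.3, 4.2, 2.7, 3.8, 3.0]
    lo, hi = bootstrap_ci(sample, mean)  # 95% CI for the mean
    ```

    The same function works unchanged for medians, regression coefficients, or any other statistic of the resampled data.
    
    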

  8. CUFE at SemEval-2016 Task 4: A Gated Recurrent Model for Sentiment Classification

    Nabil, Mahmoud

    2016-06-16

    In this paper we describe a deep learning system that has been built for SemEval 2016 Task 4 (Subtasks A and B). In this work we trained a Gated Recurrent Unit (GRU) neural network model on top of two sets of word embeddings: (a) general word embeddings generated from an unsupervised neural language model; and (b) task-specific word embeddings generated from a supervised neural language model that was trained to classify tweets into positive and negative categories. We also added a method for analyzing and splitting multi-word hashtags and appending them to the tweet body before feeding it to our model. Our models achieved a 0.58 F1-measure for Subtask A (ranked 12/34) and 0.679 Recall for Subtask B (ranked 12/19).
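    The paper describes splitting multi-word hashtags before feeding tweets to the model, but not the algorithm used. One plausible approach is greedy longest-prefix segmentation against a vocabulary; the vocabulary below is a stand-in for illustration, not the authors' word list:

    ```python
    def split_hashtag(tag, vocab):
        """Greedy longest-prefix segmentation of a hashtag body into known words."""
        body, words = tag.lstrip('#').lower(), []
        while body:
            # try the longest vocabulary word that prefixes the remaining body
            for length in range(len(body), 0, -1):
                if body[:length] in vocab:
                    words.append(body[:length])
                    body = body[length:]
                    break
            else:  # no known prefix: emit one character and continue
                words.append(body[0])
                body = body[1:]
        return words

    vocab = {"i", "love", "paris", "sem", "eval"}
    parts = split_hashtag("#iloveparis", vocab)
    ```

    The recovered words would then be appended to the tweet text before tokenization and embedding lookup.
    
    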

  9. Functional characterization of a novel hERG variant in a family with recurrent sudden infant death syndrome: Retracting a genetic diagnosis.

    Sergeev, Valentine; Perry, Frances; Roston, Thomas M; Sanatani, Shubhayan; Tibbits, Glen F; Claydon, Thomas W

    2018-03-01

    Long QT syndrome (LQTS) is the most common cardiac ion channelopathy and has been found to be responsible for approximately 10% of sudden infant death syndrome (SIDS) cases. Despite the increasing use of broad panels and now whole exome sequencing (WES) in the investigation of SIDS, the probability of identifying a pathogenic mutation in a SIDS victim is low. We report a study of a family afflicted by recurrent SIDS, in which several members harbor a variant, p.Pro963Thr, in the C-terminal region of the human ether-a-go-go related gene (hERG), published to be responsible for cases of LQTS type 2. Functional characterization was undertaken due to the variable phenotype in carriers, the discrepancy with published cases, and the importance of identifying a cause for recurrent deaths in a single family. Studies of the mutated ion channel in in vitro heterologous expression systems revealed that the mutation has no detectable impact on membrane surface expression, biophysical gating properties such as activation, deactivation and inactivation, or the amplitude of the protective current conducted by hERG channels during early repolarization. These observations suggest that the p.Pro963Thr mutation is not a monogenic disease-causing LQTS mutation despite evidence of co-segregation in two siblings affected by SIDS. Our findings demonstrate some of the potential pitfalls in post-mortem molecular testing and the importance of functional testing of gene variants in determining disease-causation, especially where the impacts of cascade screening can affect multiple generations. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Clear-sky classification procedures and models using a world-wide data-base

    Younes, S.; Muneer, T.

    2007-01-01

    Clear-sky data need to be extracted from all-sky measured solar-irradiance datasets, often by using algorithms that rely on other measured meteorological parameters. Current procedures for clear-sky data extraction have been examined and compared with each other to determine their reliability and location dependency. New clear-sky determination algorithms are proposed that are based on a combination of clearness index, diffuse ratio, cloud cover and Linke turbidity limits. Various researchers have proposed clear-sky irradiance models that rely on synoptic parameters; four of these models (MRM, PRM, YRM and REST2) have been compared for six worldwide locations. Based on a previously-developed comprehensive accuracy scoring method, the models MRM, REST2 and YRM were found to be of satisfactory performance, in decreasing order. The so-called Page radiation model (PRM) was found to underestimate solar radiation, even though local turbidity data were provided for its operation.
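    The proposed algorithms combine clearness-index, diffuse-ratio, cloud-cover and turbidity limits. A sketch of the first two criteria as a record-level filter might look as follows; the threshold values are illustrative placeholders, not the limits derived in the paper:

    ```python
    def is_clear_sky(global_irr, diffuse_irr, extraterrestrial_irr,
                     kt_min=0.65, diffuse_ratio_max=0.3):
        """Flag a record as clear-sky when the clearness index is high
        and the diffuse fraction is low (illustrative thresholds only)."""
        kt = global_irr / extraterrestrial_irr    # clearness index
        diffuse_ratio = diffuse_irr / global_irr  # diffuse fraction of global
        return kt >= kt_min and diffuse_ratio <= diffuse_ratio_max

    # A bright, direct-beam-dominated hour passes; an overcast one does not.
    clear = is_clear_sky(800.0, 100.0, 1100.0)    # kt ~0.73, diffuse ratio ~0.13
    overcast = is_clear_sky(300.0, 200.0, 1100.0) # kt ~0.27
    ```

    A full implementation would add the cloud-cover and Linke-turbidity tests as further conjunctive conditions on each record.
    
    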

  11. Kent mixture model for classification of remote sensing data on spherical manifolds

    Lunga, D

    2011-10-01

    Full Text Available Modern remote sensing imaging sensor technology provides detailed spectral and spatial information that enables precise analysis of land cover usage. From a research point of view, traditional widely used statistical models are often limited...

  12. Selective classification and quantification model of C&D waste from material resources consumed in residential building construction.

    Mercader-Moyano, Pilar; Ramírez-de-Arellano-Agudo, Antonio

    2013-05-01

    The unfortunate economic situation involving Spain and the European Union is, among other factors, the result of intensive construction activity over recent years. The excessive consumption of natural resources, together with the impact caused by the uncontrolled dumping of untreated C&D waste in illegal landfills, has caused environmental pollution and a deterioration of the landscape. The objective of this research was to generate a selective classification and quantification model of C&D waste based on the material resources consumed in the construction of residential buildings, either new or renovated, namely the Conventional Constructive Model (CCM). A practical example carried out on ten residential buildings in Seville, Spain, enabled the identification and quantification of the C&D waste generated in their construction and the origin of the waste, in terms of the building material from which it originated and its impact per m² constructed. This model enables other researchers to establish comparisons between the various improvements proposed for the minimization of the environmental impact produced by building a CCM, new corrective measures to be proposed in future policies that regulate the production and management of C&D waste generated in construction from the design stage to the completion of the construction process, and the establishment of sustainable management for C&D waste and for the selection of materials for the construction of planned or renovated buildings.

  13. A comparison of machine learning algorithms for chemical toxicity classification using a simulated multi-scale data model

    Li Zhen

    2008-05-01

    Full Text Available. Background: Bioactivity profiling using high-throughput in vitro assays can reduce the cost and time required for toxicological screening of environmental chemicals and can also reduce the need for animal testing. Several public efforts are aimed at discovering patterns or classifiers in high-dimensional bioactivity space that predict tissue, organ or whole animal toxicological endpoints. Supervised machine learning is a powerful approach to discover combinatorial relationships in complex in vitro/in vivo datasets. We present a novel model to simulate complex chemical-toxicology data sets and use this model to evaluate the relative performance of different machine learning (ML) methods. Results: The classification performance of Artificial Neural Networks (ANN), K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA), Naïve Bayes (NB), Recursive Partitioning and Regression Trees (RPART), and Support Vector Machines (SVM) in the presence and absence of filter-based feature selection was analyzed using K-way cross-validation testing and independent validation on simulated in vitro assay data sets with varying levels of model complexity, number of irrelevant features and measurement noise. While the prediction accuracy of all ML methods decreased as non-causal (irrelevant) features were added, some ML methods performed better than others. In the limit of using a large number of features, ANN and SVM were always in the top performing set of methods while RPART and KNN (k = 5) were always in the poorest performing set. The addition of measurement noise and irrelevant features decreased the classification accuracy of all ML methods, with LDA suffering the greatest performance degradation. LDA performance is especially sensitive to the use of feature selection. Filter-based feature selection generally improved performance, most strikingly for LDA. Conclusion: We have developed a novel simulation model to evaluate machine learning methods for the
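    The study's protocol, K-way cross-validation of several classifiers on simulated data, can be sketched in miniature. The fragment below runs plain 5-fold cross-validation of a 1-nearest-neighbor classifier on synthetic two-cluster data; it stands in for the full ANN/SVM/LDA comparison, which would normally use a library such as scikit-learn:

    ```python
    import random

    def one_nn_predict(train, x):
        """Label of the nearest training point (squared Euclidean distance, 2-D)."""
        return min(train, key=lambda t: (t[0] - x[0]) ** 2 + (t[1] - x[1]) ** 2)[2]

    def cv_accuracy(data, k=5):
        """Plain k-fold cross-validation accuracy of the 1-NN classifier."""
        folds = [data[i::k] for i in range(k)]
        correct = total = 0
        for i in range(k):
            train = [p for j, f in enumerate(folds) if j != i for p in f]
            for x0, x1, y in folds[i]:
                correct += one_nn_predict(train, (x0, x1)) == y
                total += 1
        return correct / total

    # Two well-separated synthetic clusters, labeled by their center
    rng = random.Random(0)
    data = [(rng.gauss(c, 0.5), rng.gauss(c, 0.5), c) for c in (0, 3) for _ in range(40)]
    rng.shuffle(data)
    acc = cv_accuracy(data)
    ```

    Adding irrelevant noise dimensions to each sample, as the simulation model does, would let the same loop measure how accuracy degrades per classifier.
    
    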

  14. A new in silico classification model for ready biodegradability, based on molecular fragments.

    Lombardo, Anna; Pizzo, Fabiola; Benfenati, Emilio; Manganaro, Alberto; Ferrari, Thomas; Gini, Giuseppina

    2014-08-01

    Regulations such as the European REACH (Registration, Evaluation, Authorization and restriction of Chemicals) often require chemicals to be evaluated for ready biodegradability, to assess the potential risk for environmental and human health. Because not all chemicals can be tested, there is an increasing demand for tools for quick and inexpensive biodegradability screening, such as computer-based (in silico) theoretical models. We developed an in silico model starting from a dataset of 728 chemicals with ready biodegradability data (MITI-test Ministry of International Trade and Industry). We used the novel software SARpy to automatically extract, through a structural fragmentation process, a set of substructures statistically related to ready biodegradability. Then, we analysed these substructures in order to build some general rules. The model consists of a rule-set made up of the combination of the statistically relevant fragments and of the expert-based rules. The model gives good statistical performance with 92%, 82% and 76% accuracy on the training, test and external set respectively. These results are comparable with other in silico models like BIOWIN developed by the United States Environmental Protection Agency (EPA); moreover this new model includes an easily understandable explanation. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Maximum mutual information regularized classification

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
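    The regularizer described above is the mutual information between the classification response and the true label. Given a joint probability table over (response, label) it can be computed directly; note that the paper estimates it from training samples via entropy estimation, so this exact-table version is a simplification:

    ```python
    import math

    def mutual_information(joint):
        """Mutual information (in nats) of a joint probability table whose
        rows index the classification response and columns the true label."""
        p_resp = [sum(row) for row in joint]        # marginal over responses
        p_label = [sum(col) for col in zip(*joint)] # marginal over labels
        mi = 0.0
        for i, row in enumerate(joint):
            for j, p in enumerate(row):
                if p > 0:
                    mi += p * math.log(p / (p_resp[i] * p_label[j]))
        return mi

    # Perfectly informative responses reach the label entropy, log 2 here;
    # responses independent of the label carry zero information.
    mi_perfect = mutual_information([[0.5, 0.0], [0.0, 0.5]])
    mi_none = mutual_information([[0.25, 0.25], [0.25, 0.25]])
    ```

    Maximizing this quantity during training pushes the classifier's responses to remove as much uncertainty about the true label as possible.
    
    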

  16. Maximum mutual information regularized classification

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descend method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  17. Learning Classification Models of Cognitive Conditions from Subtle Behaviors in the Digital Clock Drawing Test.

    Souillard-Mandar, William; Davis, Randall; Rudin, Cynthia; Au, Rhoda; Libon, David J; Swenson, Rodney; Price, Catherine C; Lamar, Melissa; Penney, Dana L

    2016-03-01

    The Clock Drawing Test - a simple pencil and paper test - has been used for more than 50 years as a screening tool to differentiate normal individuals from those with cognitive impairment, and has proven useful in helping to diagnose cognitive dysfunction associated with neurological disorders such as Alzheimer's disease, Parkinson's disease, and other dementias and conditions. We have been administering the test using a digitizing ballpoint pen that reports its position with considerable spatial and temporal precision, making available far more detailed data about the subject's performance. Using pen stroke data from these drawings categorized by our software, we designed and computed a large collection of features, then explored the tradeoffs in performance and interpretability in classifiers built using a number of different subsets of these features and a variety of different machine learning techniques. We used traditional machine learning methods to build prediction models that achieve high accuracy. We operationalized widely used manual scoring systems so that we could use them as benchmarks for our models. We worked with clinicians to define guidelines for model interpretability, and constructed sparse linear models and rule lists designed to be as easy to use as scoring systems currently used by clinicians, but more accurate. While our models will require additional testing for validation, they offer the possibility of substantial improvement in detecting cognitive impairment earlier than currently possible, a development with considerable potential impact in practice.

  18. Beyond dysfunction and threshold-based classification: a multidimensional model of personality disorder diagnosis.

    Bornstein, Robert F; Huprich, Steven K

    2011-06-01

    An alternative dimensional model of personality disorder (PD) diagnosis that addresses several difficulties inherent in the current DSM conceptualization of PDs (excessive PD overlap and comorbidity, use of arbitrary thresholds to distinguish normal from pathological personality functioning, failure to capture variations in the adaptive value of PD symptoms, and inattention to the impact of situational influences on PD-related behaviors) is outlined. The model uses a set of diagnostician-friendly strategies to render PD diagnosis in three steps: (1) the diagnostician assigns every patient a single dimensional rating of overall level of personality dysfunction on a 50-point continuum; (2) the diagnostician assigns separate intensity and impairment ratings for each PD dimension (e.g., narcissism, avoidance, dependency); and (3) the diagnostician lists any personality traits-including PD-related traits-that enhance adaptation and functioning (e.g., histrionic theatricality, obsessive attention to detail). Advantages of the proposed model for clinicians and clinical researchers are discussed.

  19. Core Self-Evaluations as Personal Factors in the World Health Organization's International Classification of Functioning, Disability and Health Model: An Application in Persons with Spinal Cord Injury

    Yaghmanian, Rana; Smedema, Susan Miller; Thompson, Kerry

    2017-01-01

    Purpose: To evaluate Chan, Gelman, Ditchman, Kim, and Chiu's (2009) revised World Health Organization's International Classification of Functioning, Disability and Health (ICF) model using core self-evaluations (CSE) to account for Personal Factors in persons with spinal cord injury (SCI). Method: One hundred eighty-seven adults with SCI were…

  20. Model-based classification of CPT data and automated lithostratigraphic mapping for high-resolution characterization of a heterogeneous sedimentary aquifer

    Rogiers, Bart; Mallants, Dirk; Batelaan, Okke; Gedeon, Matej; Huysmans, Marijke; Dassargues, Alain

    2017-01-01

    Cone penetration testing (CPT) is one of the most efficient and versatile methods currently available for geotechnical, lithostratigraphic and hydrogeological site characterization. Currently available methods for soil behaviour type classification (SBT) of CPT data however have severe limitations, often restricting their application to a local scale. For parameterization of regional groundwater flow or geotechnical models, and delineation of regional hydro- or lithostratigraphy, regional SBT...

  1. Modeling Ecosystem Services for Park Trees: Sensitivity of i-Tree Eco Simulations to Light Exposure and Tree Species Classification

    Rocco Pace

    2018-02-01

    Full Text Available Ecosystem modeling can help decision making regarding planting of urban trees for climate change mitigation and air pollution reduction. Algorithms and models that link the properties of plant functional types, species groups, or single species to their impact on specific ecosystem services have been developed. However, these models require a considerable effort for initialization that is inherently related to uncertainties originating from the high diversity of plant species in urban areas. We therefore suggest a new automated method to be used with the i-Tree Eco model to derive light competition for individual trees and investigate the importance of this property. Since competition depends also on the species, which is difficult to determine from increasingly used remote sensing methodologies, we also investigate the impact of uncertain tree species classification on the ecosystem services by comparing a species-specific inventory determined by field observation with a genus-specific categorization and a model initialization for the dominant deciduous and evergreen species only. Our results show how the simulation of competition affects the determination of carbon sequestration, leaf area, and related ecosystem services and that the proposed method provides a tool for improving estimations. Misclassifications of tree species can lead to large deviations in estimates of ecosystem impacts, particularly concerning biogenic volatile compound emissions. In our test case, monoterpene emissions almost doubled and isoprene emissions decreased to less than 10% when species were estimated to belong only to either two groups instead of being determined by species or genus. It is discussed that this uncertainty of emission estimates propagates further uncertainty in the estimation of potential ozone formation. Overall, we show the importance of using an individual light competition approach and explicitly parameterizing all ecosystem functions at the

  2. Use of topographic and climatological models in a geographical data base to improve Landsat MSS classification for Olympic National Park

    Cibula, William G.; Nyquist, Maurice O.

    1987-01-01

    An unsupervised computer classification of vegetation/landcover of Olympic National Park and surrounding environs was initially carried out using four bands of Landsat MSS data. The primary objective of the project was to derive a level of landcover classifications useful for park management applications while maintaining an acceptably high level of classification accuracy. Initially, nine generalized vegetation/landcover classes were derived. Overall classification accuracy was 91.7 percent. In an attempt to refine the level of classification, a geographic information system (GIS) approach was employed. Topographic data and watershed boundaries (inferred precipitation/temperature) data were registered with the Landsat MSS data. The resultant Boolean operations yielded 21 vegetation/landcover classes while maintaining the same level of classification accuracy. The final classification provided much better identification and location of the major forest types within the park at the same high level of accuracy, thereby meeting the project objective. This classification could now become an input to a GIS and, coupled with other ancillary data, help answer park management questions in programs such as fire management.
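    The GIS refinement step overlays the spectral classes with topographic data using Boolean operations. A toy version of such an overlay, splitting a forest class by an elevation band, might look like this; the grids and the 800 m threshold are invented for illustration:

    ```python
    # Toy co-registered rasters: a spectral landcover grid and an elevation grid
    landcover = [["forest", "forest"],
                 ["water",  "forest"]]
    elevation = [[200,  900],
                 [150, 1200]]
    MONTANE_M = 800  # hypothetical elevation threshold for the overlay

    # Boolean overlay: forest cells above the threshold become a new class
    refined = [
        [
            ("montane forest" if elev >= MONTANE_M else "lowland forest")
            if lc == "forest" else lc
            for lc, elev in zip(lc_row, el_row)
        ]
        for lc_row, el_row in zip(landcover, elevation)
    ]
    ```

    Chaining further masks (e.g. watershed-inferred precipitation zones) in the same way is how nine spectral classes can be expanded to twenty-one without re-running the spectral classifier.
    
    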

  3. Comparing Methodologies for Developing an Early Warning System: Classification and Regression Tree Model versus Logistic Regression. REL 2015-077

    Koon, Sharon; Petscher, Yaacov

    2015-01-01

    The purpose of this report was to explicate the use of logistic regression and classification and regression tree (CART) analysis in the development of early warning systems. It was motivated by state education leaders' interest in maintaining high classification accuracy while simultaneously improving practitioner understanding of the rules by…

  4. A Comparison of Computer-Based Classification Testing Approaches Using Mixed-Format Tests with the Generalized Partial Credit Model

    Kim, Jiseon

    2010-01-01

    Classification testing has been widely used to make categorical decisions by determining whether an examinee has a certain degree of ability required by established standards. As computer technologies have developed, classification testing has become more computerized. Several approaches have been proposed and investigated in the context of…

  5. Inter-labeler and intra-labeler variability of condition severity classification models using active and passive learning methods.

    Nissim, Nir; Shahar, Yuval; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert

    2017-09-01

    Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers' learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler), the classification models that were induced by using the labels provided by seven
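    The SVM-Margin strategy mentioned above queries the unlabeled instance closest to the current decision boundary. Under the simplifying assumption of a fixed linear scorer, the selection step can be sketched as follows; the pool and weights are hypothetical:

    ```python
    def margin_query(unlabeled, weights, bias):
        """Index of the unlabeled sample closest to a linear decision boundary
        (smallest |w.x + b|), as in margin-based uncertainty sampling."""
        def margin(x):
            return abs(sum(w * xi for w, xi in zip(weights, x)) + bias)
        return min(range(len(unlabeled)), key=lambda i: margin(unlabeled[i]))

    # Hypothetical 2-D feature vectors for three unlabeled clinical conditions
    pool = [(2.0, 2.0), (0.1, -0.2), (-3.0, 1.0)]
    idx = margin_query(pool, weights=(1.0, 1.0), bias=0.0)  # sample nearest the boundary
    ```

    In a full AL loop the chosen instance is sent to the labeler(s), added to the training set, and the classifier is retrained before the next query.
    
    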

  6. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    Kersebaum, K C; Boote, K J; Jorgenson, J S

    2015-01-01

    Experimental field data are used at different levels of complexity to calibrate, validate and improve agro-ecosystem models to enhance their reliability for regional impact assessment. A methodological framework and software are presented to evaluate and classify data sets into four classes regar...

  7. Classification of cosmology with arbitrary matter in the Horava-Lifshitz model

    Minamitsuji, Masato

    2010-01-01

    In this work, we discuss the cosmological evolutions in the nonrelativistic and possibly renormalizable gravitational theory, called the Horava-Lifshitz (HL) theory. We consider the original HL model (type I), and the modified version obtained by an analytic continuation of parameters (type II). We classify the possible cosmological evolutions with arbitrary matter, finding a variety of cosmologies.

  8. A Probabilistic Model for Diagnosing Misconceptions by a Pattern Classification Approach.

    Tatsuoka, Kikumi K.

    A probabilistic approach is introduced to classify and diagnose erroneous rules of operation resulting from a variety of misconceptions ("bugs") in a procedural domain of arithmetic. The model is contrasted with the deterministic approach which has commonly been used in the field of artificial intelligence, and the advantage of treating the…

  9. Explicit Foreground and Background Modeling in The Classification of Text Blocks in Scene Images

    Sriman, Bowornrat; Schomaker, Lambertus

    2015-01-01

    Achieving high accuracy for classifying foreground and background is an interesting challenge in the field of scene image analysis because of the wide range of illumination, complex backgrounds, and scale changes. Classifying foreground and background using a bag-of-features model gives good results.

  10. Analysis and classification of data sets for calibration and validation of agro-ecosystem models

    Kersebaum, K. C.; Boote, K. J.; Jorgenson, J. S.; Nendel, C.; Bindi, M.; Frühauf, C.; Gaiser, T.; Hoogenboom, G.; Kollas, C.; Olesen, J. E.; Rötter, R. P.; Ruget, F.; Thorburn, P. J.; Trnka, Miroslav; Wegener, M.

    2015-01-01

    Vol. 72, Oct 15 (2015), p. 402-417, ISSN 1364-8152. Institutional support: RVO:67179843. Keywords: field experiments * data quality * crop modeling * data requirement * minimum data * software. Subject RIV: DG - Atmosphere Sciences, Meteorology. Impact factor: 4.207, year: 2015

  11. Animal models of behavioral dysfunctions: basic concepts and classifications, and an evaluation strategy

    Staay, van der F.J.

    2006-01-01

    In behavioral neurosciences, such as neurobiology and biopsychology, animal models make it possible to investigate brain-behavior relations, with the aim of gaining insight into normal and abnormal human behavior and its underlying neuronal and neuroendocrinological processes. Different types of

  12. Modelling the angular effects on satellite retrieved LST at global scale using a land surface classification

    Ermida, Sofia; DaCamara, Carlos C.; Trigo, Isabel F.; Pires, Ana C.; Ghent, Darren

    2017-04-01

    Land Surface Temperature (LST) is a key climatological variable and a diagnostic parameter of land surface conditions. Remote sensing constitutes the most effective method to observe LST over large areas and on a regular basis. Although LST estimation from remote sensing instruments operating in the Infrared (IR) is widely used and has been performed for nearly 3 decades, there is still a list of open issues. One of these is the LST dependence on viewing and illumination geometry. This effect introduces significant discrepancies among LST estimations from different sensors, overlapping in space and time, that are not related to uncertainties in the methodologies or input data used. Furthermore, these directional effects deviate LST products from an ideally defined LST, which should represent the ensemble of directional radiometric temperatures of all surface elements within the FOV. Angular effects on LST are here conveniently estimated by means of a kernel model of the surface thermal emission, which describes the angular dependence of LST as a function of viewing and illumination geometry. The model is calibrated using LST data as provided by a wide range of sensors to optimize spatial coverage, namely: 1) a LEO sensor - the Moderate Resolution Imaging Spectroradiometer (MODIS) on-board NASA's TERRA and AQUA; and 2) 3 GEO sensors - the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on-board EUMETSAT's Meteosat Second Generation (MSG), the Japanese Meteorological Imager (JAMI) on-board the Japanese Meteorological Association (JMA) Multifunction Transport SATellite (MTSAT-2), and NASA's Geostationary Operational Environmental Satellites (GOES). As shown in our previous feasibility studies, the sampling of illumination and view angles has a high impact on the obtained model parameters. This impact may be mitigated when the sampling size is increased by aggregating pixels with similar surface conditions. Here we propose a methodology where land surface is

  13. Inter-Labeler and Intra-Labeler Variability of Condition Severity Classification Models Using Active and Passive Learning Methods

    Nissim, Nir; Shahar, Yuval; Boland, Mary Regina; Tatonetti, Nicholas P; Elovici, Yuval; Hripcsak, George; Moskovitch, Robert

    2018-01-01

    Background and Objectives Labeling instances by domain experts for classification is often time consuming and expensive. To reduce such labeling efforts, we had proposed the application of active learning (AL) methods, introduced our CAESAR-ALE framework for classifying the severity of clinical conditions, and shown its significant reduction of labeling efforts. The use of any of three AL methods (one well known [SVM-Margin], and two that we introduced [Exploitation and Combination_XA]) significantly reduced (by 48% to 64%) condition labeling efforts, compared to standard passive (random instance-selection) SVM learning. Furthermore, our new AL methods achieved maximal accuracy using 12% fewer labeled cases than the SVM-Margin AL method. However, because labelers have varying levels of expertise, a major issue associated with learning methods, and AL methods in particular, is how best to use the labeling provided by a committee of labelers. First, we wanted to know, based on the labelers’ learning curves, whether using AL methods (versus standard passive learning methods) has an effect on the intra-labeler variability (within the learning curve of each labeler) and inter-labeler variability (among the learning curves of different labelers). Then, we wanted to examine the effect of learning (either passively or actively) from the labels created by the majority consensus of a group of labelers. Methods We used our CAESAR-ALE framework for classifying the severity of clinical conditions, the three AL methods and the passive learning method, as mentioned above, to induce the classification models. We used a dataset of 516 clinical conditions and their severity labeling, represented by features aggregated from the medical records of 1.9 million patients treated at Columbia University Medical Center. We analyzed the variance of the classification performance within (intra-labeler), and especially among (inter-labeler) the classification models that were induced by
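
    The SVM-Margin criterion mentioned above queries the unlabeled instances closest to the current separating hyperplane, i.e. those the classifier is least certain about. A minimal sketch of that selection step (the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def svm_margin_select(w, b, X_unlabeled, batch_size):
    """SVM-Margin style active learning: rank unlabeled instances by their
    distance to the decision boundary |w.x + b| / ||w|| and return the
    indices of the batch_size most uncertain (closest) ones."""
    dist = np.abs(X_unlabeled @ w + b) / np.linalg.norm(w)
    return np.argsort(dist)[:batch_size].tolist()

# toy boundary x0 + x1 = 1; points near it are the most informative to label
w, b = np.array([1.0, 1.0]), -1.0
X = np.array([[0.0, 0.0], [0.5, 0.5], [2.0, 2.0], [0.4, 0.7]])
picked = svm_margin_select(w, b, X, batch_size=2)  # → [1, 3]
```

    The selected instances would then be sent to the labeler committee, and the model retrained on the enlarged labeled set.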

  14. The International Classification of Functioning, Disability and Health: a unifying model for the conceptual description of physical and rehabilitation medicine.

    Stucki, Gerold; Melvin, John

    2007-05-01

    There is a need to develop a contemporary and internationally accepted conceptual description of physical and rehabilitation medicine (PRM). The process of evolving such a definition can now rely on the unifying conceptual model and taxonomy of the International Classification of Functioning, Disability and Health (ICF) and an ICF-based conceptual description of rehabilitation understood as a health strategy. The PRM section of the European Union of Medical Specialists (UEMS) has endorsed the application of the ICF as a unifying conceptual model for PRM and supports the process of moving towards an "ICF-based conceptual description and according definitions of PRM". With this goal in mind, the authors have developed a first tentative conceptual description in co-operation with the professional practice committee of the UEMS-PRM-section. A respective brief definition describes PRM as the medical specialty that, based on the assessment of functioning and including the diagnosis and treatment of health conditions, performs, applies and co-ordinates biomedical and engineering and a wide range of other interventions with the goal of optimizing functioning of people experiencing or likely to experience disability. Readers of the Journal of Rehabilitation Medicine are invited to contribute to the process of achieving an internationally accepted ICF-based conceptual description of PRM by submitting commentaries to the Editor of this journal.

  15. The International Classification of Functioning, Disability and Health (ICF): a unifying model for the conceptual description of the rehabilitation strategy.

    Stucki, Gerold; Cieza, Alarcos; Melvin, John

    2007-05-01

    An important basis for the successful development of rehabilitation practice and research is a conceptually sound description of rehabilitation understood as a health strategy based on a universally accepted conceptual model and taxonomy of human functioning. With the approval of the International Classification of Functioning, Disability and Health (ICF) by the World Health Assembly in 2001 and the reference to the ICF in the World Health Assembly's resolution on "Disability, including prevention, management and rehabilitation" in 2005, we can now rely on a universally accepted conceptual model. It is thus time to initiate the process of evolving an ICF-based conceptual description that can serve as a basis for similar conceptual descriptions and according definitions of the professions applying the rehabilitation strategy and of distinct scientific fields of human functioning and rehabilitation research. In co-operation with the Physical and Rehabilitation Medicine (PRM) section of the European Union of Medical Specialists (UEMS) and its professional practice committee, we present a first tentative version of an ICF-based conceptual description in this paper. A brief definition describes rehabilitation as the health strategy applied by PRM and professionals in the health sector and across other sectors that aims to enable people with health conditions experiencing or likely to experience disability to achieve and maintain optimal functioning in interaction with the environment. Readers of the Journal of Rehabilitation Medicine are invited to contribute towards achieving an internationally accepted ICF-based conceptual description of rehabilitation by submitting commentaries to the Editor of this journal.

  16. The Application of Data Mining Technology to Build a Forecasting Model for Classification of Road Traffic Accidents

    Yau-Ren Shiau

    2015-01-01

    With the ever-increasing number of vehicles on the road, traffic accidents have also increased, resulting in the loss of lives and properties, as well as immeasurable social costs. The environment, time, and region influence the occurrence of traffic accidents. The life and property loss is expected to be reduced by improving traffic engineering, education, and administration of law and advocacy. This study observed 2,471 traffic accidents which occurred in central Taiwan from January to December 2011 and used Recursive Feature Elimination (RFE) feature selection to screen the important factors affecting traffic accidents. It then established models to analyze traffic accidents with various methods, such as Fuzzy Robust Principal Component Analysis (FRPCA), Backpropagation Neural Network (BPNN), and Logistic Regression (LR). The proposed model aims to probe into the environments of traffic accidents, as well as the relationships between the variables of road designs, rule-violation items, and accident types. The results showed that the accuracy rate of the classifiers FRPCA-BPNN (85.89%) and FRPCA-LR (85.14%) combined with FRPCA is higher than that of BPNN (84.37%) and LR (85.06%) by 1.52% and 0.08%, respectively. Moreover, the performance of FRPCA-BPNN and FRPCA-LR combined with FRPCA in classification prediction is better than that of BPNN and LR.
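
    Recursive Feature Elimination, as used above, repeatedly fits a model and discards the least important feature until the desired number remains. A toy sketch with a least-squares linear model standing in for the study's classifiers (the data and function name are ours):

```python
import numpy as np

def rfe_linear(X, y, n_keep):
    """Toy Recursive Feature Elimination: repeatedly fit a least-squares
    linear model and drop the feature with the smallest |coefficient|."""
    keep = list(range(X.shape[1]))
    while len(keep) > n_keep:
        coef, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
        keep.pop(int(np.argmin(np.abs(coef))))
    return keep

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.normal(size=200)
selected = rfe_linear(X, y, 2)  # informative features 1 and 4 survive
```

    In practice the inner model would be the classifier itself (e.g. BPNN or LR), and importance would be measured by its own weights or a validation score.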

  17. Rapid Erosion Modeling in a Western Kenya Watershed using Visible Near Infrared Reflectance, Classification Tree Analysis and 137Cesium.

    deGraffenried, Jeff B; Shepherd, Keith D

    2009-12-15

    Human-induced soil erosion has severe economic and environmental impacts throughout the world. It is more severe in the tropics than elsewhere and results in diminished food production and security. Kenya has limited arable land and 30 percent of the country experiences severe to very severe human-induced soil degradation. The purpose of this research was to test visible near infrared diffuse reflectance spectroscopy (VNIR) as a tool for rapid assessment and benchmarking of soil condition and erosion severity class. The study was conducted in the Saiwa River watershed in the northern Rift Valley Province of western Kenya, a tropical highland area. Soil 137Cs concentration was measured to validate spectrally derived erosion classes and establish the background levels for different land use types. Results indicate VNIR could be used to accurately evaluate a large and diverse soil data set and predict soil erosion characteristics. Soil condition was spectrally assessed and modeled. Analysis of mean raw spectra indicated significant reflectance differences between soil erosion classes. The largest differences occurred between 1,350 and 1,950 nm, with the largest separation occurring at 1,920 nm. Classification and Regression Tree (CART) analysis indicated that the spectral model had practical predictive success (72%) with a Receiver Operating Characteristic (ROC) of 0.74. The change in 137Cs concentrations supported the premise that VNIR is an effective tool for rapid screening of soil erosion condition.
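
    A CART model of the kind used above grows a tree of binary splits, each chosen to minimise class impurity. The root split can be sketched as an exhaustive scan over spectral bands and thresholds (the toy spectra below are invented; band 1 loosely stands in for reflectance near 1,920 nm):

```python
import numpy as np

def best_stump(X, y):
    """One CART split: scan every band/threshold pair and pick the split
    that minimises weighted Gini impurity (the root of a classification tree)."""
    def gini(labels):
        if labels.size == 0:
            return 0.0
        p = np.bincount(labels) / labels.size
        return 1.0 - np.sum(p ** 2)
    best = (None, None, np.inf)
    for band in range(X.shape[1]):
        for t in np.unique(X[:, band]):
            left, right = y[X[:, band] <= t], y[X[:, band] > t]
            score = (left.size * gini(left) + right.size * gini(right)) / y.size
            if score < best[2]:
                best = (band, t, score)
    return best

X = np.array([[0.20, 0.1], [0.30, 0.2], [0.25, 0.8], [0.22, 0.9]])
y = np.array([0, 0, 1, 1])          # two erosion severity classes
band, thresh, _ = best_stump(X, y)  # band 1 separates the classes perfectly
```

    A full tree recurses on each side of the split until a purity or depth criterion is met.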

  18. ICF and casemix models for healthcare funding: use of the WHO family of classifications to improve casemix.

    Madden, Richard; Marshall, Ric; Race, Susan

    2013-06-01

    Casemix models for funding activity in health care and assessing performance depend on data based on uniformity of resource utilisation. It has long been an ideal to relate the measure of value more to patient outcome than output. A problem frequently expressed by clinicians is that measures of activity such as Functional Independence Measure (FIM) and Barthel Index scores may not sufficiently represent the aspirations of patients in many care programs. Firstly, the key features of the International Classification of Functioning, Disability and Health are outlined. Secondly, the use of ICF dimensions in Australia and other countries is reviewed. Thirdly, a broader set of domains with potential for casemix funding models and performance reporting is considered. In recent years, the ICF has provided a more developed set of domains against which outcome goals can be expressed. Additional dimensions could be used to supplement existing data. Instances of developments in this area are identified and their potential discussed. A well-selected set of data items representing the broader dimensions of outcome goals may provide the ability to more meaningfully and systematically measure the goals of both curative and rehabilitation care against which outcome should be measured. More information about patient goals may be needed.

  19. DIFFERENTIAL DIAGNOSTICS MODEL RESEARCH BY MEANS OF THE POTENTIAL FUNCTIONS METHOD FOR NEUROLOGY DISEASES CLASSIFICATION

    V. Z. Stetsyuk

    2016-10-01

    Informatization in medicine offers many opportunities to enhance the quality of medical support and the accuracy of diagnosis, and enables the use of accumulated experience. Modern software systems are now used as additional tools to obtain appropriate advice. This article offers a way to support the neurology department physicians of the NCSH «OKHMATDYT» during diagnosis. A program system was designed for this purpose based on a differential diagnostic model. The key problems in differential diagnosis are the similarity of symptoms within a disease group and the absence of a key symptom; hence a differential diagnostic model is needed. It is constructed using the potential function method in a characteristics space formed by 100-200 points, i.e., patients with their symptoms. The main feature of this method is that the decision function is built during the recognition step, combined with learning, which has become possible with modern powerful computers.

  20. Classification of effective operators for interactions between the Standard Model and dark matter

    Duch, M.; Grzadkowski, B.; Wudka, J.

    2015-01-01

    We construct a basis for effective operators responsible for interactions between the Standard Model and a dark sector composed of particles with spin ≤1. Redundant operators are eliminated using dim-4 equations of motion. We consider simple scenarios where the dark matter components are stabilized against decay by ℤ_2 symmetries. We determine operators which are loop-generated within an underlying theory and those that are potentially tree-level generated.

  1. Organizational Models for Non-Core Processes Management: A Classification Framework

    Alberto F. De Toni

    2012-12-01

    The framework enables the identification and explanation of the main advantages and disadvantages of each strategy, and highlights how a company should coherently choose an organizational model on the basis of: (a) the specialization/complexity of the non-core processes, (b) the focus on core processes, (c) its inclination towards know-how outsourcing, and (d) the desired level of autonomy in the management of non-core processes.

  2. Automated morphological analysis of bone marrow cells in microscopic images for diagnosis of leukemia: nucleus-plasma separation and cell classification using a hierarchical tree model of hematopoiesis

    Krappe, Sebastian; Wittenberg, Thomas; Haferlach, Torsten; Münzenmayer, Christian

    2016-03-01

    The morphological differentiation of bone marrow is fundamental for the diagnosis of leukemia. Currently, the counting and classification of the different types of bone marrow cells is done manually under the use of bright field microscopy. This is a time-consuming, subjective, tedious and error-prone process. Furthermore, repeated examinations of a slide may yield intra- and inter-observer variances. For that reason a computer assisted diagnosis system for bone marrow differentiation is pursued. In this work we focus (a) on a new method for the separation of nucleus and plasma parts and (b) on a knowledge-based hierarchical tree classifier for the differentiation of bone marrow cells in 16 different classes. Classification trees are easily interpretable and understandable and provide a classification together with an explanation. Using classification trees, expert knowledge (i.e. knowledge about similar classes and cell lines in the tree model of hematopoiesis) is integrated in the structure of the tree. The proposed segmentation method is evaluated with more than 10,000 manually segmented cells. For the evaluation of the proposed hierarchical classifier more than 140,000 automatically segmented bone marrow cells are used. Future automated solutions for the morphological analysis of bone marrow smears could potentially apply such an approach for the pre-classification of bone marrow cells and thereby shorten the examination time.

  3. A novel wavelet sequence based on deep bidirectional LSTM network model for ECG signal classification.

    Yildirim, Özal

    2018-05-01

    Long short-term memory networks (LSTMs), which have recently emerged in sequential data analysis, are the most widely used type of recurrent neural network (RNN) architecture. Progress on the topic of deep learning includes successful adaptations of deep versions of these architectures. In this study, a new model for deep bidirectional LSTM network-based wavelet sequences called DBLSTM-WS was proposed for classifying electrocardiogram (ECG) signals. For this purpose, a new wavelet-based layer is implemented to generate ECG signal sequences. The ECG signals were decomposed into frequency sub-bands at different scales in this layer. These sub-bands are used as sequences for the input of LSTM networks. New network models that include unidirectional (ULSTM) and bidirectional (BLSTM) structures are designed for performance comparisons. Experimental studies have been performed for five different types of heartbeats obtained from the MIT-BIH arrhythmia database. These five types are Normal Sinus Rhythm (NSR), Ventricular Premature Contraction (VPC), Paced Beat (PB), Left Bundle Branch Block (LBBB), and Right Bundle Branch Block (RBBB). The results show that the DBLSTM-WS model gives a high recognition performance of 99.39%. It has been observed that the wavelet-based layer proposed in the study significantly improves the recognition performance of conventional networks. This proposed network structure is an important approach that can be applied to similar signal processing problems. Copyright © 2018 Elsevier Ltd. All rights reserved.
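
    The wavelet layer described above decomposes each ECG signal into frequency sub-bands that are then fed to the LSTMs as sequences. A minimal sketch of such a decomposition using a recursive one-level Haar transform (the paper's exact wavelet family and implementation may differ):

```python
import numpy as np

def haar_subbands(signal, levels):
    """Split a 1-D signal into wavelet sub-bands by recursively applying a
    one-level Haar transform: detail coefficients at each scale, plus the
    final low-frequency approximation."""
    subbands, approx = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        pairs = approx[: len(approx) // 2 * 2].reshape(-1, 2)
        detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)  # high-frequency band
        approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)  # low-frequency band
        subbands.append(detail)
    subbands.append(approx)
    return subbands

bands = haar_subbands(np.arange(8, dtype=float), levels=2)  # 2 details + 1 approximation
```

    Each returned band would serve as one input sequence (at its own scale) for the downstream unidirectional or bidirectional LSTM.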

  4. Implementation of the ISTH classification of non-overt DIC in a thromboplastin induced rabbit model

    Berthelsen, Line Olrik; Kristensen, Annemarie Thuri; Wiinberg, Bo

    2009-01-01

    …, but the scoring systems have rarely been applied to animal models of DIC. In this study, we use rabbit brain thromboplastin (thromboplastin) to induce DIC in a rabbit model and test the applicability of the ISTH criteria for standardized diagnosis of DIC. Cardiovascular and haematological parameters from rabbits…, either saline-injected or administered 0.625, 1.25, 2.5 or 5 mg thromboplastin/kg as a single bolus, were collected at four timepoints over a 90 minute period. All groups of rabbits were scored at each time point according to the ISTH diagnostic criteria for non-overt DIC. Injection of 5 mg… and number of thrombi in lung vasculature was seen. The administration of a bolus of 1.25-2.5 mg thromboplastin/kg to rabbits induced a reproducible dose-dependent model of non-overt DIC according to the ISTH diagnostic criteria. We conclude that the non-overt ISTH score can be applied to evaluate severity…

  5. Coupling geophysical investigation with hydrothermal modeling to constrain the enthalpy classification of a potential geothermal resource.

    White, Jeremy T.; Karakhanian, Arkadi; Connor, Chuck; Connor, Laura; Hughes, Joseph D.; Malservisi, Rocco; Wetmore, Paul

    2015-01-01

    An appreciable challenge in volcanology and geothermal resource development is to understand the relationships between volcanic systems and low-enthalpy geothermal resources. The enthalpy of an undeveloped geothermal resource in the Karckar region of Armenia is investigated by coupling geophysical and hydrothermal modeling. The results of 3-dimensional inversion of gravity data provide key inputs into a hydrothermal circulation model of the system and associated hot springs, which is used to evaluate possible geothermal system configurations. Hydraulic and thermal properties are specified using maximum a priori estimates. Limited constraints provided by temperature data collected from an existing down-gradient borehole indicate that the geothermal system can most likely be classified as low-enthalpy and liquid dominated. We find the heat source for the system is likely cooling quartz monzonite intrusions in the shallow subsurface and that meteoric recharge in the pull-apart basin circulates to depth, rises along basin-bounding faults and discharges at the hot springs. While other combinations of subsurface properties and geothermal system configurations may fit the temperature distribution equally well, we demonstrate that the low-enthalpy system is reasonably explained based largely on interpretation of surface geophysical data and relatively simple models.

  6. Simultaneous data pre-processing and SVM classification model selection based on a parallel genetic algorithm applied to spectroscopic data of olive oils.

    Devos, Olivier; Downey, Gerard; Duponchel, Ludovic

    2014-04-01

    Classification is an important task in chemometrics. For several years now, support vector machines (SVMs) have proven to be powerful for infrared spectral data classification. However such methods require optimisation of parameters in order to control the risk of overfitting and the complexity of the boundary. Furthermore, it is established that the prediction ability of classification models can be improved using pre-processing in order to remove unwanted variance in the spectra. In this paper we propose a new methodology based on genetic algorithm (GA) for the simultaneous optimisation of SVM parameters and pre-processing (GENOPT-SVM). The method has been tested for the discrimination of the geographical origin of Italian olive oil (Ligurian and non-Ligurian) on the basis of near infrared (NIR) or mid infrared (FTIR) spectra. Different classification models (PLS-DA, SVM with mean centre data, GENOPT-SVM) have been tested and statistically compared using McNemar's statistical test. For the two datasets, SVM with optimised pre-processing gives models with higher accuracy than the one obtained with PLS-DA on pre-processed data. In the case of the NIR dataset, most of this accuracy improvement (86.3% compared with 82.8% for PLS-DA) occurred using only a single pre-processing step. For the FTIR dataset, three optimised pre-processing steps are required to obtain an SVM model with a significant accuracy improvement (82.2%) compared to the one obtained with PLS-DA (78.6%). Furthermore, this study demonstrates that even SVM models have to be developed on the basis of well-corrected spectral data in order to obtain higher classification rates. Copyright © 2013 Elsevier Ltd. All rights reserved.
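
    The GA search described above can be sketched generically: chromosomes encode the SVM parameters (and pre-processing choices), and fitness would normally be cross-validated classification error. Below, a toy quadratic fitness with a known optimum stands in for that error, so the whole GENOPT-SVM pipeline is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

def genetic_optimise(fitness, bounds, pop=20, gens=40):
    """Minimal GA: tournament selection, blend crossover, Gaussian mutation.
    Each chromosome could hold e.g. (log C, log gamma) for an SVM."""
    lo, hi = np.array(bounds, dtype=float).T
    P = rng.uniform(lo, hi, size=(pop, len(bounds)))
    for _ in range(gens):
        f = np.array([fitness(p) for p in P])
        i, j = rng.integers(pop, size=(2, pop))
        parents = np.where((f[i] < f[j])[:, None], P[i], P[j])  # tournament of 2
        alpha = rng.uniform(size=(pop, 1))
        P = parents * alpha + parents[::-1] * (1 - alpha)       # blend crossover
        P = np.clip(P + rng.normal(0.0, 0.1, size=P.shape), lo, hi)  # mutation
    return P[np.argmin([fitness(p) for p in P])]

# toy fitness with optimum at (1, -2), standing in for cross-validated error
best = genetic_optimise(lambda p: (p[0] - 1) ** 2 + (p[1] + 2) ** 2,
                        bounds=[(-5, 5), (-5, 5)])
```

    Swapping the toy fitness for a cross-validation loop over pre-processed spectra recovers the spirit of the GENOPT-SVM approach.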

  7. Classification of Beta-lactamases and penicillin binding proteins using ligand-centric network models.

    Hakime Öztürk

    β-lactamase mediated antibiotic resistance is an important health issue and the discovery of new β-lactam type antibiotics or β-lactamase inhibitors is an area of intense research. Today, there are about a thousand β-lactamases due to the evolutionary pressure exerted by these ligands. While β-lactamases hydrolyse the β-lactam ring of antibiotics, rendering them ineffective, Penicillin-Binding Proteins (PBPs), which share high structural similarity with β-lactamases, also confer antibiotic resistance to their host organism by acquiring mutations that allow them to continue their participation in cell wall biosynthesis. In this paper, we propose a novel approach to include ligand sharing information for classifying and clustering β-lactamases and PBPs in an effort to elucidate the ligand induced evolution of these β-lactam binding proteins. We first present a detailed summary of the β-lactamase and PBP families in the Protein Data Bank, as well as the compounds they bind to. Then, we build two different types of networks in which the proteins are represented as nodes, and two proteins are connected by an edge with a weight that depends on the number of shared identical or similar ligands. These models are analyzed under three different edge weight settings, namely unweighted, weighted, and normalized weighted. A detailed comparison of these six networks showed that the use of ligand sharing information to cluster proteins resulted in modules comprising proteins with not only sequence similarity but also functional similarity. Consideration of ligand similarity highlighted some interactions that were not detected in the identical ligand network. Analysing the β-lactamases and PBPs using ligand-centric network models enabled the identification of novel relationships, suggesting that these models can be used to examine other protein families to obtain information on their ligand induced evolutionary paths.
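
    The edge construction described above (weight from shared ligands, optionally normalised) can be sketched in a few lines. The protein names and ligand sets below are illustrative only, not taken from the paper's PDB survey:

```python
from itertools import combinations

def ligand_network(binding, normalise=False):
    """Build protein-protein edges weighted by the number of shared ligands;
    the normalised variant divides by the smaller ligand-set size."""
    edges = {}
    for a, b in combinations(sorted(binding), 2):
        shared = len(binding[a] & binding[b])
        if shared:
            w = shared / min(len(binding[a]), len(binding[b])) if normalise else shared
            edges[(a, b)] = w
    return edges

binding = {"TEM-1": {"ampicillin", "clavulanate"},
           "PBP2a": {"ampicillin", "ceftaroline"},
           "OXA-48": {"imipenem"}}
edges = ligand_network(binding)  # → {("PBP2a", "TEM-1"): 1}
```

    Replacing exact set intersection with a chemical-similarity comparison of ligands yields the "similar ligand" variant of the network.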

  8. QSAR Classification Model for Antibacterial Compounds and Its Use in Virtual Screening

    2012-09-26

    …pertussis (290 000), and tetanus (210 000). Considering this staggering number of deaths, there should be a lucrative market for drug therapies…

  9. Mean – Variance parametric Model for the Classification based on Cries of Babies

    Khalid Nazim S. A; Dr. M.B Sanjay Pande

    2010-01-01

    Cry is a feature that prompts an individual to take care of the infant that initiated it; it is equally understood that a cry prompts a person to take certain steps. In the present work, we have implemented a mathematical model that can assign a cry to its cluster or group based on certain parameters, by which a cry is classified as normal or abnormal. To corroborate the methodology, we took 17 distinguishing features of cry. The implemented mathematical m...

  10. A Hidden Markov Models Approach for Crop Classification: Linking Crop Phenology to Time Series of Multi-Sensor Remote Sensing Data

    Sofia Siachalou

    2015-03-01

    Vegetation monitoring and mapping based on multi-temporal imagery has recently received much attention due to the plethora of medium-high spatial resolution satellites and the improved classification accuracies attained compared to uni-temporal approaches. Efficient image processing strategies are needed to exploit the phenological information present in temporal image sequences and to limit data redundancy and computational complexity. Within this framework, we implement the theory of Hidden Markov Models in crop classification, based on the time-series analysis of phenological states, inferred by a sequence of remote sensing observations. More specifically, we model the dynamics of vegetation over an agricultural area of Greece, characterized by spatio-temporal heterogeneity and small-sized fields, using RapidEye and Landsat ETM+ imagery. In addition, the classification performance of image sequences with variable spatial and temporal characteristics is evaluated and compared. The classification model considering one RapidEye and four pan-sharpened Landsat ETM+ images was found superior, resulting in a conditional kappa from 0.77 to 0.94 per class and an overall accuracy of 89.7%. The results highlight the potential of the method for operational crop mapping in Euro-Mediterranean areas and provide some hints for optimal image acquisition windows regarding major crop types in Greece.
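
    In an HMM of this kind, the hidden states are phenological stages and the observations are per-date image-derived values; decoding the most likely state sequence is done with the Viterbi algorithm. A minimal log-space sketch (the state names and all probability values below are invented for illustration):

```python
import numpy as np

def viterbi(obs, start, trans, emit):
    """Log-space Viterbi decoding: the most likely hidden phenological-state
    sequence behind a series of discretised remote-sensing observations."""
    logd = np.log(start) + np.log(emit[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logd[:, None] + np.log(trans)   # rows: previous state
        back.append(np.argmax(scores, axis=0))
        logd = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(np.argmax(logd))]
    for bp in reversed(back):                    # follow back-pointers
        path.append(int(bp[path[-1]]))
    return path[::-1]

# states 0=emergence, 1=growth, 2=maturity; observations: greenness level 0/1/2
trans = np.array([[.60, .30, .10], [.05, .60, .35], [.05, .05, .90]])
emit = np.array([[.70, .20, .10], [.10, .30, .60], [.30, .60, .10]])
path = viterbi([0, 2, 1], np.array([.90, .05, .05]), trans, emit)  # → [0, 1, 2]
```

    In the crop-mapping setting, one HMM per crop type would be trained and each pixel assigned to the crop whose model best explains its observation sequence.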

  11. [Proposing a physiological model for Emergency Department. Operating principles, classification of overcrowding and guidelines for redesign].

    Herrera Carranza, M; Aguado Correa, F; Padilla Garrido, N; López Camacho, F

    2017-04-30

    The operation of Emergency Departments (ED) is determined by demand, their own organizational structures and the connection to other medical care levels. When these elements are not simultaneous, it hinders patient flow and decreases capacity, making it necessary to employ a systemic approach to the chain of emergency care as a single operational entity. With this theoretical orientation, we suggest a conceptual model similar to the physiological cardiac output, in which the preload is the demand, the contractile or flow pump is the organizational structure, the afterload is the hospital, the pre-ED valve is primary care and outpatient emergencies, and the post-ED valve is the diagnostic support services and the specialist consultants. Based on this theoretical approach we classify the different types of ED overcrowding and systematise its causes and the different waiting lists that it generates, which can help to redesign the service and avoid its saturation.

  12. A Self-Adaptive Hidden Markov Model for Emotion Classification in Chinese Microblogs

    Li Liu

    2015-01-01

    We propose a modified version of the hidden Markov model (HMM) classifier, called self-adaptive HMM, whose parameters are optimized by Particle Swarm Optimization algorithms. Since manually labeling a large-scale dataset is difficult, we also employ entropy to decide whether a new unlabeled tweet shall be included in the training dataset after being assigned an emotion by our HMM-based approach. In the experiment, we collected about 200,000 Chinese tweets from Sina Weibo. The results show that the F-score of our approach reaches 76% on happiness and fear and 65% on anger, surprise, and sadness. In addition, the self-adaptive HMM classifier outperforms Naive Bayes and Support Vector Machine on recognition of happiness, anger, and sadness.
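
    The entropy gate described above admits a machine-labelled tweet into the training set only when the classifier's posterior over emotions is confident. A minimal sketch (the threshold of 0.5 bits and the posteriors are illustrative, not from the paper):

```python
import math

def keep_machine_label(posterior, max_entropy_bits=0.5):
    """Self-training gate: accept a machine-assigned emotion label only if
    the Shannon entropy of the posterior (in bits) is below the threshold."""
    h = -sum(p * math.log2(p) for p in posterior if p > 0)
    return h <= max_entropy_bits

confident = keep_machine_label([0.95, 0.05, 0.0, 0.0, 0.0])     # → True
uncertain = keep_machine_label([0.30, 0.25, 0.20, 0.15, 0.10])  # → False
```

    Low entropy means the probability mass is concentrated on one emotion; near-uniform posteriors are rejected so that uncertain self-labels do not pollute the training set.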

  13. Fractal Geometry Enables Classification of Different Lung Morphologies in a Model of Experimental Asthma

    Obert, Martin; Hagner, Stefanie; Krombach, Gabriele A.; Inan, Selcuk; Renz, Harald

    2015-06-01

    Animal models represent the basis of our current understanding of the pathophysiology of asthma and are of central importance in the preclinical development of drug therapies. The characterization of irregular lung shapes is a major issue in radiological imaging of mice in these models. The aim of this study was to find out whether differences in lung morphology can be described by fractal geometry. Healthy and asthmatic mouse groups, before and after an acute asthma attack induced by methacholine, were studied. In vivo flat-panel-based high-resolution computed tomography (CT) was used for thorax imaging of the mice. The digital image data of the mice's lungs were segmented from the surrounding tissue. The lungs were then divided by image gray-level thresholds into two additional subsets: one contained essentially the air-transporting bronchial system, while the other corresponded mainly to the blood vessel system. We estimated the fractal dimension of all sets of the different mouse groups using the mass-radius relation (MRR). We found that the air-transporting subset of the bronchial lung tissue enabled a complete and significant differentiation between all four mouse groups (mean D of control mice before methacholine treatment: 2.64 ± 0.06; after treatment: 2.76 ± 0.03; asthma mice before methacholine treatment: 2.37 ± 0.16; after treatment: 2.71 ± 0.03; p < 0.05). We conclude that the concept of fractal geometry allows a well-defined, quantitative, numerical and objective differentiation of lung shapes, most likely applicable also in human asthma diagnostics.
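The mass-radius estimate of the fractal dimension D can be sketched as follows. This is a generic illustration on a synthetic point set, not the authors' lung data; the counting mass M(r) scales as r^D, so D is the slope of log M against log r:

```python
import numpy as np

def mass_radius_dimension(points, center, radii):
    """Estimate the fractal dimension from the mass-radius relation
    M(r) ~ r^D: count points within radius r of the center, then fit
    the slope of log M(r) against log r."""
    d = np.linalg.norm(points - center, axis=1)
    masses = np.array([(d <= r).sum() for r in radii])
    slope, _ = np.polyfit(np.log(radii), np.log(masses), 1)
    return slope

# Sanity check on a filled square lattice, whose dimension should be ~2.
xs, ys = np.meshgrid(np.arange(-50, 51), np.arange(-50, 51))
points = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
radii = np.array([5.0, 10.0, 20.0, 40.0])
D = mass_radius_dimension(points, np.zeros(2), radii)
```

For a 3-D voxel set such as a segmented bronchial tree, the same fit applied to voxel coordinates yields values between 2 and 3, like those reported above.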

  14. Using nursing intervention classification in an advanced practice registered nurse-led preventive model for adults aging with developmental disabilities.

    Hahn, Joan Earle

    2014-09-01

    To describe the most frequently reported and the most central nursing interventions in an advanced practice registered nurse (APRN)-led in-home preventive intervention model for adults aging with developmental disabilities using the Nursing Intervention Classification (NIC) system. A descriptive data analysis and a market basket analysis were conducted on de-identified nominal nursing intervention data from two home visits conducted by nurse practitioners (NPs) from October 2010 to June 2012 for 80 community-dwelling adults with developmental disabilities, ages 29 to 68 years. The mean number of NIC interventions was 4.7 in the first visit and 6.0 in the second and last visit. NPs reported 45 different intervention types as classified using a standardized language, with 376 interventions in Visit One and 470 in Visit Two. Approximately 85% of the sample received the Health education intervention. The market basket analysis revealed common pairs, triples, and quadruple sets of interventions in this preventive model. The NIC nursing interventions that occurred together repeatedly were: Health education, Weight management, Nutrition management, Health screening, and Behavior management. Five NIC interventions form the basis of an APRN-led preventive intervention model for individuals aging with lifelong disability, with health education as the most common intervention, combined with interventions to manage weight and nutrition, promote healthy behaviors, and encourage routine health screening. Less frequently reported NIC interventions suggest the need to tailor prevention to individual needs, whether acute or chronic. APRNs employing prevention among adults aging with developmental disabilities must anticipate the need to focus on health education strategies for health promotion and prevention as well as tailor and target a patient-centered approach to support self-management of health to promote healthy aging in place. These NIC interventions serve not only as a guide for

  15. Classification of brain tumors using texture based analysis of T1-post contrast MR scans in a preclinical model

    Tang, Tien T.; Zawaski, Janice A.; Francis, Kathleen N.; Qutub, Amina A.; Gaber, M. Waleed

    2018-02-01

    Accurate diagnosis of tumor type is vital for effective treatment planning. Diagnosis relies heavily on tumor biopsies and other clinical factors. However, biopsies do not fully capture the tumor's heterogeneity due to sampling bias and are only performed if the tumor is accessible. An alternative approach is to use features derived from routine diagnostic imaging such as magnetic resonance (MR) imaging. In this study we aim to establish the use of quantitative image features to classify brain tumors and extend the use of MR images beyond tumor detection and localization. To control for interscanner, acquisition and reconstruction protocol variations, the established workflow was performed in a preclinical model. Using glioma (U87 and GL261) and medulloblastoma (Daoy) models, T1-weighted post contrast scans were acquired at different time points post-implant. The tumor regions at the center, middle, and periphery were analyzed using in-house software to extract 32 different image features consisting of first and second order features. The extracted features were used to construct a decision tree, which could predict tumor type with 10-fold cross-validation. Results from the final classification model demonstrated that the middle tumor region had the highest overall accuracy at 79%, while the AUC accuracy was over 90% for GL261 and U87 tumors. Our analysis further identified image features that were unique to certain tumor regions, although GL261 tumors were more homogeneous, with no significant differences between the central and peripheral tumor regions. In conclusion, our study shows that texture features derived from MR scans can be used to classify tumor type with high success rates. Furthermore, the algorithm we have developed can be implemented with any imaging datasets and may be applicable to multiple tumor types to determine diagnosis.
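A decision-tree classifier evaluated with 10-fold cross-validation, as described above, can be sketched with scikit-learn. The features below are synthetic stand-ins; the paper's in-house 32-feature texture extraction is not reproduced:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for first/second-order texture features of three
# tumor models (e.g. U87, GL261, Daoy): 32 features, class-shifted means.
n_per_class, n_features = 30, 32
X = np.vstack([rng.normal(loc=shift, size=(n_per_class, n_features))
               for shift in (0.0, 1.0, 2.0)])
y = np.repeat([0, 1, 2], n_per_class)

clf = DecisionTreeClassifier(max_depth=5, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)  # 10-fold cross-validation
mean_acc = scores.mean()
```

With real texture features the per-region accuracies would of course differ; the point is only the tree-plus-cross-validation workflow.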

  16. Automated and simultaneous fovea center localization and macula segmentation using the new dynamic identification and classification of edges model

    Onal, Sinan; Chen, Xin; Satamraju, Veeresh; Balasooriya, Maduka; Dabil-Karacal, Humeyra

    2016-01-01

    Detecting the position of retinal structures, including the fovea center and macula, in retinal images plays a key role in diagnosing eye diseases such as optic nerve hypoplasia, amblyopia, diabetic retinopathy, and macular edema. However, current detection methods are unreliable for infants or certain ethnic populations. Thus, a methodology is proposed here that may be useful for infants and across ethnicities that automatically localizes the fovea center and segments the macula on digital fundus images. First, dark structures and bright artifacts are removed from the input image using preprocessing operations, and the resulting image is transformed to polar space. Second, the fovea center is identified, and the macula region is segmented using the proposed dynamic identification and classification of edges (DICE) model. The performance of the method was evaluated using 1200 fundus images obtained from the relatively large, diverse, and publicly available Messidor database. In 96.1% of these 1200 cases, the distance between the fovea center identified manually by ophthalmologists and automatically using the proposed method remained within 0 to 8 pixels. The Dice similarity index comparing the manually obtained results with those of the model for macula segmentation was 96.12% for these 1200 cases. Thus, the proposed method displayed a high degree of accuracy. The methodology using the DICE model is unique and advantageous over previously reported methods because it simultaneously determines the fovea center and segments the macula region without using any structural information, such as optic disc or blood vessel location, and it may prove useful for all populations, including infants. PMID:27660803

  17. Classification of edible oils and modeling of their physico-chemical properties by chemometric methods using mid-IR spectroscopy

    Luna, Aderval S.; da Silva, Arnaldo P.; Ferré, Joan; Boqué, Ricard

    This research work describes two studies on the classification and characterization of edible oils and their quality parameters through Fourier transform mid-infrared spectroscopy (FT-mid-IR) together with chemometric methods. The discrimination of canola, sunflower, corn and soybean oils was investigated using SVM-DA, SIMCA and PLS-DA. Using FT-mid-IR, PLS-DA was able to classify 100% of the samples from the validation set, but SIMCA and SVM-DA were not. The quality parameters, refractive index and relative density of the edible oils, were obtained from reference methods. Prediction models for FT-mid-IR spectra were calculated for these quality parameters using partial least squares (PLS) and support vector machines (SVM). Several preprocessing alternatives (first derivative, multiplicative scatter correction, mean centering, and standard normal variate) were investigated. The best result for the refractive index was achieved with SVM, as was the case for the relative density, except when the preprocessing combination of mean centering and first derivative was used. For both quality parameters, the best results for the figures of merit, expressed by the root mean square error of cross-validation (RMSECV) and prediction (RMSEP), were equal to 0.0001.
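Two of the preprocessing steps named above, standard normal variate (SNV) and the first derivative, can be sketched with their generic formulas applied to toy spectra (not the FT-mid-IR data):

```python
import numpy as np

def snv(spectra):
    """Standard normal variate: center and scale each spectrum (row)
    by its own mean and standard deviation, removing baseline offsets."""
    spectra = np.asarray(spectra, dtype=float)
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

def first_derivative(spectra):
    """Simple first-difference approximation of the spectral derivative."""
    return np.diff(np.asarray(spectra, dtype=float), axis=1)

# Two toy "spectra" with different additive baselines; SNV removes the offset,
# leaving both rows identical.
raw = np.array([[1.0, 2.0, 3.0, 4.0],
                [11.0, 12.0, 13.0, 14.0]])
corrected = snv(raw)
```

The corrected spectra would then feed a PLS or SVM regression against the reference quality parameters.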

  18. Identification, Classification, Mapping of Model and Secondary Steppe Ecosystems Within the Orenburg-Kazakhstan Cross-Border Region

    Yakovlev Ilya Gennadyevich

    2014-09-01

    Full Text Available The article deals with current issues of modern steppe management in the Orenburg-Kazakhstan cross-border region. The authors use data from their own field research over the period 2009-2014, aimed at the detection and classification of model and secondary steppe ecosystems in the region. Over the last 6 years it has been revealed that the steppe and fallow lands involved cover different areas. The detected lands are of multiple ages and differ in qualitative composition depending on age-specific (time since the completion of agricultural activity), soil-lithogenous and floristic features. The authors identified sites of anthropogenic influence on steppe ecosystems as well as the factors that favorably affect the restoration of natural ecosystems. The article also reveals the centers of restoration of traditional steppe fauna within the Orenburg-Kazakhstan region and the distribution areas of the marmot, little bustard, bustard and saiga antelope. The authors carried out a comparative analysis of the agro-ecological situation in the region, both over the last few years and over a longer period, according to archival and polling data.

  19. Modification of Gaussian mixture models for data classification in high energy physics

    Štěpánek, Michal; Franc, Jiří; Kůs, Václav

    2015-01-01

    In high energy physics, we deal with the demanding task of separating signal from background. The Model Based Clustering method involves the estimation of distribution mixture parameters via the Expectation-Maximization algorithm in the training phase and the application of Bayes' rule in the testing phase. Modifications of the algorithm such as weighting, missing-data processing, and overtraining avoidance will be discussed. Due to the strong dependence of the algorithm on initialization, genetic optimization techniques such as mutation, elitism, parasitism, and the rank selection of individuals will be mentioned. Data pre-processing plays a significant role in the subsequent combination of final discriminants in order to improve signal separation efficiency. Moreover, the results of the top quark separation from the Tevatron collider will be compared with those of standard multivariate techniques in high energy physics. Results from this study have been used in the measurement of the inclusive top pair production cross section employing the full DØ Tevatron Run II dataset (9.7 fb-1).
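The train-with-EM, classify-with-Bayes'-rule scheme can be sketched with one Gaussian mixture per class. This is a toy two-dimensional example, with scikit-learn's standard EM implementation standing in for the modified algorithm described above:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy 2-D "signal" and "background" samples for the training phase.
signal = rng.normal(loc=[2.0, 2.0], scale=0.7, size=(300, 2))
background = rng.normal(loc=[-2.0, -2.0], scale=0.7, size=(300, 2))

# Training phase: estimate a distribution mixture per class via EM.
gm_sig = GaussianMixture(n_components=2, random_state=0).fit(signal)
gm_bkg = GaussianMixture(n_components=2, random_state=0).fit(background)

def classify(x, prior_sig=0.5):
    """Testing phase: Bayes' rule on class-conditional log-likelihoods
    plus log priors; pick the class with the larger posterior."""
    x = np.atleast_2d(x)
    log_sig = gm_sig.score_samples(x) + np.log(prior_sig)
    log_bkg = gm_bkg.score_samples(x) + np.log(1.0 - prior_sig)
    return np.where(log_sig > log_bkg, "signal", "background")

labels = classify(np.array([[2.1, 1.9], [-2.2, -1.8]]))
```

In practice the class priors would come from expected signal and background yields rather than being fixed at 0.5.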

  20. Symmetry chains for the atomic shell model. I. Classification of symmetry chains for atomic configurations

    Gruber, B.; Thomas, M.S.

    1980-01-01

    In this article the symmetry chains for the atomic shell model are classified in such a way that they lead from the group SU(4l+2) to its subgroup SO_J(3). The atomic configurations (nl)^N transform like irreducible representations of the group SU(4l+2), while SO_J(3) corresponds to total angular momentum in SU(4l+2). The defining matrices for the various embeddings are given for each symmetry chain that is obtained. These matrices also define the projection onto the weight subspaces for the corresponding subsymmetries and thus relate the various quantum numbers and determine the branching of representations. It is shown in this article that three (interrelated) symmetry chains are obtained which correspond to L-S coupling, j-j coupling, and a seniority-dependent coupling. Moreover, for l ≤ 6 these chains are complete, i.e., there are no other chains but these. In articles to follow, the symmetry chains that lead from the group SO(8l+5) to SO_J(3) will be discussed, with the entire atomic shell transforming like an irreducible representation of SO(8l+5). The transformation properties of the states of the atomic shell will be determined according to the various symmetry chains obtained. The symmetry lattice discussed in this article forms a sublattice of the larger symmetry lattice with SO(8l+5) as supergroup. Thus the transformation properties of the states of the atomic configurations, according to the various symmetry chains discussed in this article, will be obtained too. (author)

  1. Transporter Classification Database (TCDB)

    U.S. Department of Health & Human Services — The Transporter Classification Database details a comprehensive classification system for membrane transport proteins known as the Transporter Classification (TC)...

  2. AERIAL IMAGES FROM AN UAV SYSTEM: 3D MODELING AND TREE SPECIES CLASSIFICATION IN A PARK AREA

    R. Gini

    2012-07-01

    Full Text Available The use of aerial imagery acquired by Unmanned Aerial Vehicles (UAVs) is scheduled within the FoGLIE project (Fruition of Goods Landscape in Interactive Environment): it starts from the need to enhance the natural, artistic and cultural heritage, to make it more accessible by employing audiovisual movable systems of 3D reconstruction, and to improve monitoring procedures by using new media to integrate the fruition phase with preservation. The pilot project focuses on a test area, Parco Adda Nord, which encloses various types of goods (small buildings, agricultural fields, and different tree species and bushes). Multispectral high-resolution images were taken by two digital compact cameras: a Pentax Optio A40 for RGB photos and a Sigma DP1 modified to acquire the NIR band. Tests were then performed to analyze the quality of the UAV images for both photogrammetric and photo-interpretation purposes, to validate the vector-sensor system and the image block geometry, and to study the feasibility of tree species classification. Many pre-signalized control points were surveyed through GPS to allow accuracy analysis. Aerial triangulations (ATs) were carried out with photogrammetric commercial software, Leica Photogrammetry Suite (LPS) and PhotoModeler, with manual or automatic selection of tie points, to pick out the pros and cons of each package in managing non-conventional aerial imagery as well as the differences in the modeling approach. Further analyses were done on the differences between the EO parameters and the corresponding data coming from the on-board UAV navigation system.

  3. A discriminative model-constrained EM approach to 3D MRI brain tissue classification and intensity non-uniformity correction

    Wels, Michael; Hornegger, Joachim; Zheng Yefeng; Comaniciu, Dorin; Huber, Martin

    2011-01-01

    We describe a fully automated method for tissue classification, which is the segmentation into cerebral gray matter (GM), cerebral white matter (WM), and cerebral spinal fluid (CSF), and intensity non-uniformity (INU) correction in brain magnetic resonance imaging (MRI) volumes. It combines supervised MRI modality-specific discriminative modeling and unsupervised statistical expectation maximization (EM) segmentation into an integrated Bayesian framework. While both the parametric observation models and the non-parametrically modeled INUs are estimated via EM during segmentation itself, a Markov random field (MRF) prior model regularizes segmentation and parameter estimation. Firstly, the regularization takes into account knowledge about spatial and appearance-related homogeneity of segments in terms of pairwise clique potentials of adjacent voxels. Secondly and more importantly, patient-specific knowledge about the global spatial distribution of brain tissue is incorporated into the segmentation process via unary clique potentials. They are based on a strong discriminative model provided by a probabilistic boosting tree (PBT) for classifying image voxels. It relies on the surrounding context and alignment-based features derived from a probabilistic anatomical atlas. The context considered is encoded by 3D Haar-like features of reduced INU sensitivity. Alignment is carried out fully automatically by means of an affine registration algorithm minimizing cross-correlation. Both types of features do not immediately use the observed intensities provided by the MRI modality but instead rely on specifically transformed features, which are less sensitive to MRI artifacts. Detailed quantitative evaluations on standard phantom scans and standard real-world data show the accuracy and robustness of the proposed method. They also demonstrate relative superiority in comparison to other state-of-the-art approaches to this kind of computational task: our method achieves average

  4. Comparative analysis of tree classification models for detecting fusarium oxysporum f. sp cubense (TR4) based on multi soil sensor parameters

    Estuar, Maria Regina Justina; Victorino, John Noel; Coronel, Andrei; Co, Jerelyn; Tiausas, Francis; Señires, Chiara Veronica

    2017-09-01

    Use of wireless sensor networks and smartphone integration to monitor environmental parameters surrounding plantations is made possible by readily available and affordable sensors. Providing low-cost monitoring devices would be beneficial, especially to small farm owners, in a developing country like the Philippines, where agriculture covers a significant share of the labor market. This study discusses the integration of wireless soil sensor devices and smartphones to create an application that uses multidimensional analysis to detect the presence or absence of plant disease. Specifically, soil sensors are designed to collect soil quality parameters in a sink node, from which the smartphone collects data via Bluetooth. Given these, there is a need to develop a classification model on the mobile phone that will report the infection status of a soil. Though tree classification is the most appropriate approach for continuous parameter-based datasets, there is a need to determine whether tree models will yield consistent results. Soil sensor data residing on the phone are modeled using several variations of decision tree, namely: decision tree (DT), best-fit (BF) decision tree, functional tree (FT), Naive Bayes (NB) decision tree, J48, J48graft and LAD tree, where the decision tree approaches the problem by considering all sensor nodes as one. Results show that there are significant differences among soil sensor parameters, indicating variance in scores between the infected and uninfected sites. Furthermore, analysis of variance of the accuracy, recall, precision and F1-measure scores shows homogeneity among the NBTree, J48graft and J48 classification models.

  5. Biased visualization of hypoperfused tissue by computed tomography due to short imaging duration: improved classification by image down-sampling and vascular models

    Mikkelsen, Irene Klaerke; Ribe, Lars Riisgaard; Bekke, Susanne Lise; Tietze, Anna; Oestergaard, Leif; Mouridsen, Kim [Aarhus University Hospital, Center of Functionally Integrative Neuroscience, Aarhus C (Denmark); Jones, P.S.; Alawneh, Josef [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Puig, Josep; Pedraza, Salva [Dr. Josep Trueta Girona University Hospitals, Department of Radiology, Girona Biomedical Research Institute, Girona (Spain); Gillard, Jonathan H. [University of Cambridge, Department of Radiology, Cambridge (United Kingdom); Warburton, Elisabeth A. [Cambrigde University Hospitals, Addenbrooke, Stroke Unit, Cambridge (United Kingdom); Baron, Jean-Claude [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Centre Hospitalier Sainte Anne, INSERM U894, Paris (France)

    2015-07-15

    Lesion detection in acute stroke by computed tomography perfusion (CTP) can be affected by incomplete bolus coverage in veins and hypoperfused tissue, so-called bolus truncation (BT), and by low contrast-to-noise ratio (CNR). We examined the BT frequency and hypothesized that image down-sampling and a vascular model (VM) for perfusion calculation would improve the classification of normo- and hypoperfused tissue. CTP datasets from 40 acute stroke patients were retrospectively analysed for BT. In 16 patients with hypoperfused tissue but no BT, repeated 2-by-2 image down-sampling and uniform filtering were performed, comparing CNR to perfusion-MRI levels and tissue classification to that of unprocessed data. By simulating reduced scan duration, the minimum scan duration at which estimated lesion volumes came within 10 % of their true volume was compared for VM and state-of-the-art algorithms. BT in veins and hypoperfused tissue was observed in 9/40 (22.5 %) and 17/40 patients (42.5 %), respectively. Down-sampling to 128 x 128 resolution yielded CNR comparable to MR data and improved tissue classification (p = 0.0069). VM reduced the minimum scan duration required for reliable maps of cerebral blood flow and mean transit time: 5 s (p = 0.03) and 7 s (p < 0.0001), respectively. BT is not uncommon in stroke CTP with 40-s scan duration. Applying image down-sampling and VM improves tissue classification. (orig.)
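The 2-by-2 down-sampling step and its effect on CNR can be sketched on a synthetic perfusion-like image. The region coordinates and noise levels below are illustrative, not from the study:

```python
import numpy as np

def downsample2x2(img):
    """Average non-overlapping 2-by-2 blocks, halving each dimension.
    Averaging four roughly uncorrelated noisy pixels raises CNR by ~2x."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def cnr(image, lesion_slices, bg_slices):
    """Contrast-to-noise ratio between a lesion region and a background region."""
    lesion, bg = image[lesion_slices], image[bg_slices]
    return abs(bg.mean() - lesion.mean()) / bg.std()

rng = np.random.default_rng(0)
# Toy perfusion map: a hypoperfused square (lower signal) in a noisy background.
img = rng.normal(100.0, 10.0, size=(256, 256))
img[96:160, 96:160] -= 30.0

cnr_full = cnr(img, (slice(96, 160), slice(96, 160)), (slice(0, 64), slice(0, 64)))
small = downsample2x2(img)  # 128 x 128, as in the study
cnr_small = cnr(small, (slice(48, 80), slice(48, 80)), (slice(0, 32), slice(0, 32)))
```

The contrast is unchanged by block averaging while the background noise drops, so `cnr_small` exceeds `cnr_full`.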

  7. Waste Classification based on Waste Form Heat Generation in Advanced Nuclear Fuel Cycles Using the Fuel-Cycle Integration and Tradeoffs (FIT) Model

    Denia Djokic; Steven J. Piet; Layne F. Pincock; Nick R. Soelberg

    2013-02-01

    This study explores the impact of wastes generated from potential future fuel cycles and the issues presented by classifying these wastes under current classification criteria, and discusses the possibility of a comprehensive and consistent characteristics-based classification framework for the new waste streams created by advanced fuel cycles. A static mass flow model, Fuel-Cycle Integration and Tradeoffs (FIT), was used to calculate the composition of waste streams resulting from different nuclear fuel cycle choices. This analysis focuses on the impact of waste form heat load on waste classification practices, although classifying by metrics of radiotoxicity, mass, and volume is also possible. The value of separating heat-generating fission products and actinides in different fuel cycles is discussed. It was shown that the benefits of reducing the short-term fission-product heat load of waste destined for geologic disposal are neglected under the current source-based radioactive waste classification system, and that it is useful to classify waste streams based on how favorable the impact of interim storage is in increasing repository capacity.

  8. Classification of tumor based on magnetic resonance (MR) brain images using wavelet energy feature and neuro-fuzzy model

    Damayanti, A.; Werdiningsih, I.

    2018-03-01

    The brain is the organ that coordinates all the activities that occur in our bodies, and even small abnormalities in the brain affect body activity. A brain tumor is a mass formed as a result of abnormal and uncontrolled cell growth in the brain. MRI is a non-invasive medical test that is useful for doctors in diagnosing and treating medical conditions. Classification of brain tumors can support correct decisions and appropriate treatment in the management of brain tumors. In this study, classification was performed to determine the type of brain disease, namely Alzheimer's, glioma, carcinoma or normal, using energy coefficients and ANFIS. The stages in the classification of MR brain images are feature extraction, feature reduction, and classification. The result of feature extraction is an approximation vector for each wavelet decomposition level. Feature reduction shrinks the feature set using the energy coefficients of the approximation vector; with an energy coefficient of 100 per feature, the reduced feature vector is 1 x 52 pixels. This vector is the input to classification using ANFIS with Fuzzy C-Means and FLVQ clustering and LM back-propagation. The recognition rate for MR brain images using ANFIS-FLVQ, ANFIS, and LM back-propagation was 100%.
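The wavelet-energy feature extraction can be sketched with a hand-rolled one-level Haar transform repeated over several levels. This is a simplified stand-in for the decomposition used in the paper; the subband energies form the feature vector:

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet decomposition: approximation (LL)
    and detail (LH, HL, HH) subbands, computed with plain NumPy."""
    a = (img[0::2] + img[1::2]) / 2.0   # row averages
    d = (img[0::2] - img[1::2]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def wavelet_energy_features(img, levels=2):
    """Energy (mean squared coefficient) of each subband at every level,
    used as a compact feature vector for a downstream classifier."""
    feats = []
    ll = np.asarray(img, dtype=float)
    for _ in range(levels):
        ll, lh, hl, hh = haar2d(ll)
        feats += [np.mean(b ** 2) for b in (lh, hl, hh)]
    feats.append(np.mean(ll ** 2))
    return np.array(feats)

# Toy 8x8 "image": two levels give 3 detail energies per level plus the
# final approximation energy, i.e. a 7-element feature vector.
features = wavelet_energy_features(np.eye(8) * 4.0, levels=2)
```

The resulting feature vector would then be passed to a classifier such as ANFIS in the workflow described above.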

  9. Classification of bones from MR images in torso PET-MR imaging using a statistical shape model

    Reza Ay, Mohammad; Akbarzadeh, Afshin; Ahmadian, Alireza; Zaidi, Habib

    2014-01-01

    Hybrid PET/MRI systems offer exclusive features in comparison with their PET/CT counterparts in terms of reduced radiation exposure, improved soft-tissue contrast and truly simultaneous, multi-parametric imaging capabilities. However, quantitative imaging on PET/MR is challenged by the attenuation of annihilation photons along their path. The correction for photon attenuation requires a patient-specific attenuation map, which accounts for the spatial distribution of the attenuation coefficients of biological tissues. However, the lack of information on electron density in the MR signal poses an inherent difficulty for deriving the attenuation map from MR images: the MR signal correlates with proton densities and tissue relaxation properties, rather than with electron density, and as such is not directly related to attenuation coefficients. In order to derive the attenuation map from MR images at 511 keV, various strategies have been proposed and implemented on prototype and commercial PET/MR systems. Segmentation-based methods generate an attenuation map by classification of T1-weighted or high-resolution Dixon MR sequences followed by the assignment of predefined attenuation coefficients to the various tissue types. Intensity-based segmentation approaches fail to include bones in the attenuation map, since the segmentation of bones from conventional MR sequences is a difficult task. Most MR-guided attenuation correction techniques ignore bones owing to the inherent difficulties associated with bone segmentation, unless specialized MR sequences such as ultra-short echo time (UTE) sequences are utilized. In this work, we introduce a new technique based on statistical shape modeling to segment bones and generate a four-class attenuation map. Our segmentation approach requires a torso bone shape model based on principal component analysis (PCA). 
A CT-based training set including clearly segmented bones of the torso region

  10. North American vegetation model for land-use planning in a changing climate: A solution to large classification problems

    Gerald E. Rehfeldt; Nicholas L. Crookston; Cuauhtemoc Saenz-Romero; Elizabeth M. Campbell

    2012-01-01

    Data points intensively sampling 46 North American biomes were used to predict the geographic distribution of biomes from climate variables using the Random Forests classification tree. Techniques were incorporated to accommodate a large number of classes and to predict the future occurrence of climates beyond the contemporary climatic range of the biomes. Errors of...

  11. MODELLING THE RELATIONSHIP BETWEEN LAND SURFACE TEMPERATURE AND LANDSCAPE PATTERNS OF LAND USE LAND COVER CLASSIFICATION USING MULTI LINEAR REGRESSION MODELS

    A. M. Bernales

    2016-06-01

    Full Text Available Health threats related to urbanization, such as heat stress, are very prevalent. Much can be done to lessen the effect of urbanization on the surface temperature of an area, such as installing green roofs or planting trees, so land use matters in both increasing and decreasing surface temperature. It is known that there is a relationship between land use land cover (LULC) and land surface temperature (LST). Quantifying this relationship in terms of a mathematical model is important because it provides a way to predict LST from LULC alone. This study aims to examine the relationship between LST and LULC and to create a model that can predict LST using class-level spatial metrics from LULC. LST was derived from a Landsat 8 image and the LULC classification was derived from LiDAR and orthophoto datasets. Class-level spatial metrics were created in FRAGSTATS with the LULC and LST as inputs, and these metrics were analysed within a statistical framework. Multiple linear regression was performed to create models that predict LST for each class, and it was found that the spatial metric "Effective mesh size" was a top predictor of LST in 6 out of 7 classes. The model created can still be refined by adding a temporal aspect, analysing the LST of another farming period (for rural areas) and looking for common predictors between the LSTs of these two different farming periods.
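Fitting a multiple linear regression of LST on class-level spatial metrics can be sketched with ordinary least squares. The data below are synthetic, and `mesh` and `patch_density` are hypothetical stand-ins for metrics such as the effective mesh size named in the abstract:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical class-level spatial metrics per sample and a synthetic LST
# generated from a known linear relationship plus noise.
n = 50
mesh = rng.uniform(0.0, 10.0, n)
patch_density = rng.uniform(0.0, 5.0, n)
lst = 25.0 + 1.5 * mesh - 0.8 * patch_density + rng.normal(0.0, 0.1, n)

# Multiple linear regression via least squares: LST ~ intercept + metrics.
X = np.column_stack([np.ones(n), mesh, patch_density])
coef, *_ = np.linalg.lstsq(X, lst, rcond=None)
predicted = X @ coef
```

With real FRAGSTATS output, one regression per LULC class would be fit in the same way, and the coefficient magnitudes would indicate which metrics drive LST.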

  12. A Method for Application of Classification Tree Models to Map Aquatic Vegetation Using Remotely Sensed Images from Different Sensors and Dates

    Ying Cai

    2012-09-01

    Full Text Available In previous attempts to identify aquatic vegetation from remotely-sensed images using classification trees (CT), the images used to apply CT models at different times or locations necessarily originated from the same satellite sensor as the images used in model development, greatly limiting the application of CT. We have developed an effective normalization method to improve the robustness of CT models when applied to images originating from different sensors and dates. A total of 965 ground-truth samples of aquatic vegetation types were obtained in 2009 and 2010 in Taihu Lake, China. Using relevant spectral indices (SI) as classifiers, we manually developed a stable CT model structure and then applied a standard CT algorithm to obtain quantitative (optimal) thresholds from 2009 ground-truth data and images from Landsat7-ETM+, HJ-1B-CCD, Landsat5-TM and ALOS-AVNIR-2 sensors. Optimal CT thresholds produced average classification accuracies of 78.1%, 84.7% and 74.0% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. However, the optimal CT thresholds for different sensor images differed from each other, with an average relative variation (RV) of 6.40%. We developed and evaluated three new approaches to normalizing the images. The best-performing method (Method of 0.1% index scaling) normalized the SI images using tailored percentages of extreme pixel values. Using the images normalized by Method of 0.1% index scaling, CT models for a particular sensor in which thresholds were replaced by those from the models developed for images originating from other sensors provided average classification accuracies of 76.0%, 82.8% and 68.9% for emergent vegetation, floating-leaf vegetation and submerged vegetation, respectively. Applying the CT models developed for normalized 2009 images to 2010 images resulted in high classification (78.0%–93.3%) and overall (92.0%–93.1%) accuracies. Our
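A minimal sketch of the index-scaling idea, rescaling each spectral-index image by tailored percentages of extreme pixel values so that thresholds transfer across sensors; the 0.1%/99.9% cut-offs and the synthetic gain/offset values are assumptions for illustration:

```python
import numpy as np

def normalize_index(si, lower_pct=0.1, upper_pct=99.9):
    """Rescale a spectral-index image to [0, 1] using tailored percentages
    of extreme pixel values (cut-offs assumed, not the paper's exact ones)."""
    lo = np.percentile(si, lower_pct)
    hi = np.percentile(si, upper_pct)
    return np.clip((si - lo) / (hi - lo), 0.0, 1.0)

# Two images of the same scene from different sensors, simulated as
# affine distortions (different gain and offset) of one index field:
scene = np.random.default_rng(0).normal(0.4, 0.1, (100, 100))
sensor_a = 1.8 * scene + 0.05
sensor_b = 0.9 * scene - 0.20

# After normalization the two images become directly comparable,
# so CT thresholds learned on one can be applied to the other:
diff = np.abs(normalize_index(sensor_a) - normalize_index(sensor_b)).mean()
print(diff)
```

Because the percentile rescaling cancels any per-sensor gain and offset, the mean absolute difference between the two normalized images is essentially zero.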

  13. Plaque Burden Influences Accurate Classification of Fibrous Cap Atheroma by In-Vivo Optical Coherence Tomography in a Porcine Model of Advanced Coronary Atherosclerosis

    Poulsen, Christian B; Pedrigi, Ryan M; Pareek, Nilesh

    2018-01-01

    AIMS: In-vivo validation of coronary optical coherence tomography (OCT) against histology and the effects of plaque burden (PB) on plaque classification remain unreported. We investigated this in a porcine model with human-like coronary atherosclerosis. METHODS AND RESULTS: Five female Yucatan D374...... a validated algorithm. Lesions were adjudicated using the Virmani classification and PB assessed from histology. OCT had a high sensitivity, but modest specificity (92.9% and 74.6%), for identifying fibrous cap atheroma (FCA). The reduced specificity for OCT was due to misclassification of plaques...... with histologically defined pathological intimal thickening (PIT) as FCA (46.1% of the frames with histological PIT were misclassified). PIT lesions misclassified as FCA by OCT had a statistically higher PB than in other OCT frames (median 32.0% versus 13.4%; p

  14. A reaction limited in vivo dissolution model for the study of drug absorption: Towards a new paradigm for the biopharmaceutic classification of drugs.

    Macheras, Panos; Iliadis, Athanassios; Melagraki, Georgia

    2018-05-30

    The aim of this work is to develop a gastrointestinal (GI) drug absorption model based on a reaction limited model of dissolution and consider its impact on the biopharmaceutic classification of drugs. Estimates for the fraction of dose absorbed as a function of dose, solubility, reaction/dissolution rate constant and the stoichiometry of drug-GI fluids reaction/dissolution were derived by numerical solution of the model equations. The undissolved drug dose and the reaction/dissolution rate constant drive the dissolution rate and determine the extent of absorption when high-constant drug permeability throughout the gastrointestinal tract is assumed. Dose is an important element of drug-GI fluids reaction/dissolution while solubility exclusively acts as an upper limit for drug concentrations in the lumen. The 3D plots of fraction of dose absorbed as a function of dose and reaction/dissolution rate constant for highly soluble and low soluble drugs for different "stoichiometries" (0.7, 1.0, 2.0) of the drug-reaction/dissolution with the GI fluids revealed that a high extent of absorption requires a high drug-reaction/dissolution rate constant and high drug solubility. The model equations were used to simulate in vivo supersaturation and precipitation phenomena. The model developed provides the theoretical basis for the interpretation of the extent of drug's absorption on the basis of the parameters associated with the drug-GI fluids reaction/dissolution. A new paradigm emerges for the biopharmaceutic classification of drugs, namely, a model independent biopharmaceutic classification scheme of four drug categories based on either the fulfillment or not of the current dissolution criteria and the high or low % drug metabolism. Copyright © 2018. Published by Elsevier B.V.
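A toy Euler integration of a reaction-limited dissolution/absorption scheme can illustrate the qualitative behaviour described above; the model form, parameter names and all numerical values here are illustrative assumptions, not the paper's equations:

```python
def fraction_absorbed(dose, solubility, k_diss, stoich, ka=0.05,
                      volume=250.0, t_end=240.0, dt=0.01):
    """Euler sketch of a reaction-limited dissolution/absorption model.
    Undissolved drug X dissolves at rate k_diss * X**stoich (reaction
    limited); the lumen concentration is capped at the solubility, which
    acts only as an upper limit; dissolved drug is absorbed with a
    first-order constant ka (constant high permeability assumed)."""
    X = dose          # undissolved amount (mg)
    a_lumen = 0.0     # dissolved amount in the lumen (mg)
    absorbed = 0.0
    for _ in range(int(t_end / dt)):
        # reaction-limited dissolution, shut off once the lumen is saturated
        diss = k_diss * X**stoich if a_lumen / volume < solubility else 0.0
        diss = min(diss, X / dt)          # cannot dissolve more than remains
        absorb = ka * a_lumen
        X -= diss * dt
        a_lumen += (diss - absorb) * dt
        absorbed += absorb * dt
    return absorbed / dose

# A high rate constant with high solubility gives a high fraction absorbed:
print(fraction_absorbed(dose=100, solubility=1.0, k_diss=0.5, stoich=1.0))
```

Sweeping dose, `k_diss` and `stoich` over grids reproduces the kind of 3D fraction-absorbed surfaces the abstract describes.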

  15. Terrain Classification on Venus from Maximum-Likelihood Inversion of Parameterized Models of Topography, Gravity, and their Relation

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.

    2013-12-01

    Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matérn model, and perform maximum-likelihood based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of the wavenumber). With the reduction the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the JOINT spectral variance of
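The three-parameter isotropic Matérn covariance mentioned above can be written down compactly; this sketch uses one common (variance, smoothness, range) parameterization, which may differ in detail from the authors' spectral formulation:

```python
import numpy as np
from scipy.special import gamma, kv

def matern_cov(r, sigma2, nu, rho):
    """Three-parameter isotropic Matérn covariance C(r).
    sigma2: variance, nu: smoothness, rho: range. Uses the convention
    C(r) = sigma2 * 2^(1-nu)/Gamma(nu) * s^nu * K_nu(s), s = sqrt(2 nu) r / rho."""
    r = np.asarray(r, dtype=float)
    c = np.full_like(r, sigma2)          # C(0) = sigma2 exactly
    nz = r > 0
    s = np.sqrt(2.0 * nu) * r[nz] / rho
    c[nz] = sigma2 * (2.0**(1.0 - nu) / gamma(nu)) * s**nu * kv(nu, s)
    return c

r = np.array([0.0, 0.5, 1.0, 2.0])
print(matern_cov(r, sigma2=2.0, nu=1.5, rho=1.0))
```

Maximum-likelihood inversion then amounts to maximizing the Gaussian-process likelihood of the regional topography over (sigma2, nu, rho); for nu = 1/2 the expression reduces to the exponential covariance sigma2 * exp(-r/rho), a useful sanity check.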

  16. QSTR with extended topochemical atom (ETA) indices. 16. Development of predictive classification and regression models for toxicity of ionic liquids towards Daphnia magna

    Roy, Kunal; Das, Rudra Narayan

    2013-01-01

    Highlights: • Ionic liquids are not intrinsically ‘green chemicals’ and require toxicological assessment. • Predictive QSTR models have been developed for toxicity of ILs to Daphnia magna. • Simple two dimensional descriptors were used to reduce the computational burden. • Discriminant and regression based models showed appreciable predictivity and reproducibility. • The extracted features can be explored in designing novel environmentally-friendly agents. -- Abstract: Ionic liquids have been judged more by their wide applicability than by their considerable harmful effects on the living ecosystem, which have been observed in many instances. Hence, toxicological introspection of these chemicals through the development of predictive mathematical models can be of great help. This study presents an attempt to develop predictive classification and regression models correlating the structurally derived chemical information of a group of 62 diverse ionic liquids with their toxicity towards Daphnia magna, together with their interpretation. We principally used the extended topochemical atom (ETA) indices along with various topological non-ETA and thermodynamic parameters as independent variables. The developed quantitative models were subjected to extensive statistical tests employing multiple validation strategies, from which acceptable results were obtained. The best models obtained from classification and regression studies captured necessary structural information on lipophilicity, branching pattern, electronegativity and chain length of the cationic substituents for explaining the ecotoxicity of ionic liquids towards D. magna. The derived information can be used to design better ionic liquid analogues with the qualities of a truly eco-friendly green chemical

  17. A tool for enhancing strategic health planning: a modeled use of the International Classification of Functioning, Disability and Health.

    Sinclair, Lisa Bundara; Fox, Michael H; Betts, Donald R

    2013-01-01

    This article describes use of the International Classification of Functioning, Disability and Health (ICF) as a tool for strategic planning. The ICF is the international classification system for factors that influence health, including Body Structures, Body Functions, Activities and Participation and Environmental Factors. An overview of strategic planning and the ICF are provided. Selected ICF concepts and nomenclature are used to demonstrate its utility in helping develop a classic planning framework, objectives, measures and actions. Some issues and resolutions for applying the ICF are described. Applying the ICF for strategic health planning is an innovative approach that fosters the inclusion of social ecological health determinants and broad populations. If employed from the onset of planning, the ICF can help public health organizations systematically conceptualize, organize and communicate a strategic health plan. Published 2012. This article is a US Government work and is in the public domain in the USA.

  18. Classification in context

    Mai, Jens Erik

    2004-01-01

    This paper surveys classification research literature, discusses various classification theories, and shows that the focus has traditionally been on establishing a scientific foundation for classification research. This paper argues that a shift has taken place, and suggests that contemporary...... classification research focus on contextual information as the guide for the design and construction of classification schemes....

  19. Classification of the web

    Mai, Jens Erik

    2004-01-01

    This paper discusses the challenges faced by investigations into the classification of the Web and outlines inquiries that are needed to use principles for bibliographic classification to construct classifications of the Web. This paper suggests that the classification of the Web meets challenges...... that call for inquiries into the theoretical foundation of bibliographic classification theory....

  20. Knowledge discovery from patients' behavior via clustering-classification algorithms based on weighted eRFM and CLV model: An empirical study in public health care services.

    Zare Hosseini, Zeinab; Mohammadzadeh, Mahdi

    2016-01-01

    The rapid growth of information technology (IT) creates competitive advantages in the health care industry. Nowadays, many hospitals try to build a successful customer relationship management (CRM) system to recognize target and potential patients, increase patient loyalty and satisfaction, and finally maximize their profitability. Many hospitals have large data warehouses containing customer demographic and transaction information. Data mining techniques can be used to analyze these data and discover hidden knowledge about customers. This research develops an extended RFM model, namely RFML (added parameter: Length), based on health care services for a public-sector hospital in Iran, with the idea that patient loyalty contrasts with customer loyalty, to estimate customer lifetime value (CLV) for each patient. We used Two-step and K-means algorithms as clustering methods and a Decision tree (CHAID) as the classification technique to segment the patients and find target, potential and loyal customers in order to strengthen CRM. Two approaches are used for classification: first, the result of clustering is considered as the Decision attribute in the classification process; second, the result of segmentation based on the CLV value of patients (estimated by RFML) is considered as the Decision attribute. Finally the results of the CHAID algorithm show the significant hidden rules and identify existing patterns of hospital consumers.
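The weighted RFML scoring idea can be sketched in a few lines; the weights and patient values below are invented for illustration and are not the study's fitted parameters:

```python
# Illustrative weighted RFML scoring (R: recency, F: frequency, M: monetary,
# L: length of relationship). Weights are hypothetical, not the study's.
weights = {"R": 0.15, "F": 0.25, "M": 0.35, "L": 0.25}

def clv_score(r, f, m, l):
    """Weighted RFML proxy for customer lifetime value.
    Recency (months since last visit) is inverted so that more
    recent visits score higher; F, M, L are normalized to [0, 1]."""
    return (weights["R"] * (1.0 / (1.0 + r)) + weights["F"] * f
            + weights["M"] * m + weights["L"] * l)

# Patients as (recency in months, normalized frequency, monetary, length):
patients = {"P1": (1, 0.9, 0.8, 0.7), "P2": (10, 0.2, 0.1, 0.3)}
scores = {pid: clv_score(*vals) for pid, vals in patients.items()}
print(scores)  # P1 (recent, frequent, high-spend) outranks P2
```

In the study's second approach, these CLV scores (rather than cluster labels) would serve as the Decision attribute fed to the CHAID tree.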

  2. The need for a characteristics-based approach to radioactive waste classification as informed by advanced nuclear fuel cycles using the fuel-cycle integration and tradeoffs (FIT) model

    Djokic, D.; Piet, S.; Pincock, L.; Soelberg, N.

    2013-01-01

    This study explores the impact of wastes generated from potential future fuel cycles and the issues presented by classifying these under current classification criteria, and discusses the possibility of a comprehensive and consistent characteristics-based classification framework based on new waste streams created from advanced fuel cycles. A static mass flow model, Fuel-Cycle Integration and Tradeoffs (FIT), was used to calculate the composition of waste streams resulting from different nuclear fuel cycle choices. Because heat generation is generally the most important factor limiting geological repository areal loading, this analysis focuses on the impact of waste form heat load on waste classification practices, although classifying by metrics of radiotoxicity, mass, and volume is also possible. Waste streams generated in different fuel cycles and their possible classification based on the current U.S. framework and international standards are discussed. It is shown that the effects of separating waste streams are neglected under a source-based radioactive waste classification system. (authors)

  3. SU-G-BRC-13: Model Based Classification for Optimal Position Selection for Left-Sided Breast Radiotherapy: Free Breathing, DIBH, Or Prone

    Lin, H; Liu, T; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Petillion, S; Kindts, I [University Hospitals Leuven, Leuven, Vlaams-Brabant (Belgium); Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States)

    2016-06-15

    Purpose: There are clinical decision challenges in selecting the optimal treatment position for left-sided breast cancer patients—supine free breathing (FB), supine Deep Inspiration Breath Hold (DIBH) and prone free breathing (prone). Physicians often make the decision based on experience and trials, which might not always result in optimal OAR doses. We herein propose a mathematical model to predict which of these three positions yields the lowest OAR doses, providing a quantitative tool for the corresponding clinical decision. Methods: Patients were scanned in FB, DIBH, and prone positions under an IRB approved protocol. Tangential beam plans were generated for each position, and OAR doses were calculated. The position with the least OAR doses is defined as the optimal position. The following features were extracted from each scan to build the model: heart, ipsilateral lung, and breast volume; in-field heart and ipsilateral lung volume; distance between heart and target; laterality of heart; and dose to heart and ipsilateral lung. Principal Components Analysis (PCA) was applied to remove the co-linearity of the input data and also to lower the data dimensionality. Feature selection, another method to reduce dimensionality, was applied as a comparison. A Support Vector Machine (SVM) was then used for classification. Thirty-seven patients' data were acquired; up to now, five patient plans were available. K-fold cross validation was used to validate the accuracy of the classifier model with a small training size. Results: The classification results and K-fold cross validation demonstrated that the model is capable of predicting the optimal position for patients. The accuracy under K-fold cross validation reached 80%. Compared to PCA, feature selection allows causal features of dose to be determined, which provides more clinical insight. Conclusion: The proposed classification system appeared to be feasible. We are generating plans for the rest of the 37 patient images, and more statistically significant
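The PCA step used to remove co-linearity before the SVM can be sketched with a plain SVD; the 37-by-5 feature matrix below is synthetic (with two features made deliberately collinear), not patient data:

```python
import numpy as np

def pca(X, n_components):
    """Project features onto principal components via SVD, removing
    co-linearity and lowering dimensionality before classification."""
    Xc = X - X.mean(axis=0)                       # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)               # variance ratios
    return Xc @ Vt[:n_components].T, explained[:n_components]

rng = np.random.default_rng(1)
# 37 patients x 5 anatomical/dosimetric features; feature 4 is nearly
# a multiple of feature 0, mimicking the co-linearity PCA removes.
X = rng.normal(size=(37, 5))
X[:, 4] = 2.0 * X[:, 0] + 0.01 * rng.normal(size=37)
Z, var_ratio = pca(X, n_components=3)
print(Z.shape, var_ratio.sum())
```

The decorrelated scores `Z` would then be fed to the SVM classifier under K-fold cross validation.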

  4. CCM: A Text Classification Method by Clustering

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    In this paper, a new Cluster based Classification Model (CCM) for suspicious email detection and other text classification tasks, is presented. Comparative experiments of the proposed model against traditional classification models and the boosting algorithm are also discussed. Experimental results...... show that the CCM outperforms traditional classification models as well as the boosting algorithm for the task of suspicious email detection on terrorism domain email dataset and topic categorization on the Reuters-21578 and 20 Newsgroups datasets. The overall finding is that applying a cluster based...

  5. Hazard classification methodology

    Brereton, S.J.

    1996-01-01

    This document outlines the hazard classification methodology used to determine the hazard classification of the NIF LTAB, OAB, and the support facilities on the basis of radionuclides and chemicals. The hazard classification determines the safety analysis requirements for a facility

  6. Classification of Building Object Types

    Jørgensen, Kaj Asbjørn

    2011-01-01

    made. This is certainly the case in the Danish development. Based on the theories about these abstraction mechanisms, the basic principles for classification systems are presented and the observed misconceptions are analyses and explained. Furthermore, it is argued that the purpose of classification...... systems has changed and that new opportunities should be explored. Some proposals for new applications are presented and carefully aligned with IT opportunities. Especially, the use of building modelling will give new benefits and many of the traditional uses of classification systems will instead...... be managed by software applications and on the basis of building models. Classification systems with taxonomies of building object types have many application opportunities but can still be beneficial in data exchange between building construction partners. However, this will be performed by new methods...

  7. Molecular determinants of interactions between the N-terminal domain and the transmembrane core that modulate hERG K+ channel gating.

    Jorge Fernández-Trillo

    Full Text Available A conserved eag domain in the cytoplasmic amino terminus of the human ether-a-go-go-related gene (hERG) potassium channel is critical for its slow deactivation gating. Introduced gene fragments encoding the eag domain are able to restore normal deactivation properties of channels from which most of the amino terminus has been deleted, as well as of those lacking exclusively the eag domain or carrying a single point mutation in the initial residues of the N-terminus. Deactivation slowing in the presence of the recombinant domain is not observed with channels carrying a specific Y542C point mutation in the S4-S5 linker. On the other hand, mutations in some initial positions of the recombinant fragment also impair its ability to restore normal deactivation. Fluorescence resonance energy transfer (FRET) analysis of fluorophore-tagged proteins under total internal reflection fluorescence (TIRF) conditions revealed a substantial level of FRET between the introduced N-terminal eag fragments and the eag domain-deleted channels expressed at the membrane, but not between the recombinant eag domain and full-length channels with an intact amino terminus. The FRET signals were also minimized when the recombinant eag fragments carried single point mutations in the initial portion of their amino end, and when Y542C mutated channels were used. These data suggest that the restoration of normal deactivation gating by the N-terminal recombinant eag fragment is an intrinsic effect of this domain, directed by the interaction of its N-terminal segment with the gating machinery, likely at the level of the S4-S5 linker.

  8. Gating mechanism of Kv11.1 (hERG) K+ channels without covalent connection between voltage sensor and pore domains.

    de la Peña, Pilar; Domínguez, Pedro; Barros, Francisco

    2018-03-01

    Kv11.1 (hERG, KCNH2) is a voltage-gated potassium channel crucial in setting the cardiac rhythm and the electrical behaviour of several non-cardiac cell types. Voltage-dependent gating of Kv11.1 can be reconstructed from non-covalently linked voltage sensing and pore modules (split channels), challenging classical views of voltage-dependent channel activation based on a S4-S5 linker acting as a rigid mechanical lever to open the gate. Progressive displacement of the split position from the end to the beginning of the S4-S5 linker induces an increasing negative shift in activation voltage dependence, a reduced zg value and a more negative ΔG0 for current activation, an almost complete abolition of the activation time course sigmoid shape and a slowing of the voltage-dependent deactivation. Channels disconnected at the S4-S5 linker near the S4 helix show a destabilization of the closed state(s). Furthermore, the isochronal ion current mode shift magnitude is clearly reduced in the different splits. Interestingly, the progressive modifications of voltage dependence activation gating by changing the split position are accompanied by a shift in the voltage-dependent availability to a methanethiosulfonate reagent of a Cys introduced at the upper S4 helix. Our data demonstrate for the first time that alterations in the covalent connection between the voltage sensor and the pore domains impact on the structural reorganizations of the voltage sensor domain. Also, they support the hypothesis that the S4-S5 linker integrates signals coming from other cytoplasmic domains that constitute either an important component or a crucial regulator of the gating machinery in Kv11.1 and other KCNH channels.

  9. An automated cirrus classification

    Gryspeerdt, Edward; Quaas, Johannes; Goren, Tom; Klocke, Daniel; Brueck, Matthias

    2018-05-01

    Cirrus clouds play an important role in determining the radiation budget of the earth, but many of their properties remain uncertain, particularly their response to aerosol variations and to warming. Part of the reason for this uncertainty is the dependence of cirrus cloud properties on the cloud formation mechanism, which itself is strongly dependent on the local meteorological conditions. In this work, a classification system (Identification and Classification of Cirrus or IC-CIR) is introduced to identify cirrus clouds by the cloud formation mechanism. Using reanalysis and satellite data, cirrus clouds are separated into four main types: orographic, frontal, convective and synoptic. Through a comparison to convection-permitting model simulations and back-trajectory-based analysis, it is shown that these observation-based regimes can provide extra information on the cloud-scale updraughts and the frequency of occurrence of liquid-origin ice, with the convective regime having higher updraughts and a greater occurrence of liquid-origin ice compared to the synoptic regimes. Despite having different cloud formation mechanisms, the radiative properties of the regimes are not distinct, indicating that retrieved cloud properties alone are insufficient to completely describe them. This classification is designed to be easily implemented in GCMs, helping improve future model-observation comparisons and leading to improved parametrisations of cirrus cloud processes.

  10. Using DRG to analyze hospital production: a re-classification model based on a linear tree-network topology

    Achille Lanzarini

    2014-09-01

    Full Text Available Background: Hospital discharge records are widely classified through the Diagnosis Related Group (DRG) system; the version currently used in Italy counts 538 different codes, covering thousands of diagnoses and procedures. These numbers reflect a considerable effort of simplification, yet the current classification system is of little use for evaluating hospital production and performance. Methods: As the case-mix of a given Hospital Unit (HU) is driven by its physicians’ specializations, a grouping of DRGs into a specialization-driven classification system was conceived through the analysis of HU discharges and ICD-9-CM codes. We propose a three-fold classification, based on the analysis of 1,670,755 Hospital Discharge Cards (HDCs) produced by Lombardy Hospitals in 2010; it consists of 32 specializations (e.g. Neurosurgery), 124 sub-specializations (e.g. skull surgery) and 337 sub-sub-specializations (e.g. craniotomy). Results: We give a practical application of the three-layered approach, based on the production of a Neurosurgical HU: we observe a synthetic profile of production (1,305 hospital discharges for 79 different DRG codes of 16 different MDCs, grouped into a few groups of homogeneous DRG codes), a more informative production comparison (through process-specific comparisons, rather than crude or case-mix standardized comparisons) and a potentially more adequate production planning (considering the Neurosurgical HUs of the same city, they produce only a limited share of the whole neurosurgical production, because the same activity can be carried out by non-Neurosurgical HUs). Conclusion: Our work may help to evaluate hospital production for a rational planning of available resources, blunting information asymmetries between physicians and managers.
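The three-layered grouping lends itself to a simple nested mapping; the codes and labels below are illustrative placeholders, not the actual DRG assignments from the study:

```python
# Sketch of the specialization-driven, three-layered classification:
# DRG codes roll up into sub-sub-specialization, sub-specialization
# and specialization. All codes/labels here are hypothetical examples.
hierarchy = {
    "Neurosurgery": {
        "skull surgery": {
            "craniotomy": ["DRG-001", "DRG-002"],
            "cranioplasty": ["DRG-003"],
        },
        "spine surgery": {
            "laminectomy": ["DRG-004"],
        },
    },
}

def classify(drg_code):
    """Map a DRG code to its (specialization, sub, sub-sub) triple,
    or None if the code is not in the hierarchy."""
    for spec, subs in hierarchy.items():
        for sub, subsubs in subs.items():
            for subsub, codes in subsubs.items():
                if drg_code in codes:
                    return spec, sub, subsub
    return None

print(classify("DRG-002"))  # ('Neurosurgery', 'skull surgery', 'craniotomy')
```

Aggregating a HU's discharges through `classify` yields the synthetic, homogeneous production profile the abstract describes.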

  11. A Model of Classification of Phonemic and Phonetic Negative Transfer: The case of Turkish –English Interlanguage with Pedagogical Applications

    Sinan Bayraktaroğlu

    2011-04-01

    Full Text Available This article introduces a model of classification of phonemic and phonetic negative- transfer based on an empirical study of Turkish-English Interlanguage. The model sets out a hierarchy of difficulties, starting from the most crucial phonemic features affecting “intelligibility”, down to other distributional, phonetic, and allophonic features which need to be acquired if a “near-native” level of phonological competence is aimed at. Unlike previous theoretical studies of predictions of classification of phonemic and phonetic L1 interference (Moulton 1962a 1962b; Wiik 1965, this model is based on an empirical study of the recorded materials of Turkish-English IL speakers transcribed allophonically using the IPA Alphabet and diacritics. For different categories of observed systematic negative- transfer and their avoidance of getting “fossilized” in the IL process, remedial exercises are recommended for the teaching and learning BBC Pronunciation. In conclusıon, few methodological phonetic techniques, approaches, and specifications are put forward for their use in designing the curriculum and syllabus content of teaching L2 pronunciation.

  12. Algebraic QFT as a framework for classification and model-building. A heretic view of the new kinematics

    Schroer, B.

    1990-01-01

    We show that the algebraic structure first encountered in conformal QFT2 corresponds to the multiplicity-free parastatistics description of H. Green of 1953. We give arguments in favour of the dictum that QFT, in contradistinction to Quantum Mechanics, does not have to rely on quantization but rather allows for a formulation and classification in terms of intrinsic quantum principles. We interpret the Karowski-Weisz-Smirnov form-factor program in this light and comment on some properties of anyons and plektons which may be relevant for condensed matter physics (the enigma of non-Fermi-liquid states)

  13. Self-organization comprehensive real-time state evaluation model for oil pump unit on the basis of operating condition classification and recognition

    Liang, Wei; Yu, Xuchao; Zhang, Laibin; Lu, Wenqing

    2018-05-01

    In an oil transmission station, the operating condition (OC) of an oil pump unit sometimes switches, which leads to changes in operating parameters. If the switching of OCs is not taken into consideration when performing a state evaluation on the pump unit, the accuracy of the evaluation is largely compromised. Hence, in this paper, a self-organization Comprehensive Real-Time State Evaluation Model (self-organization CRTSEM) is proposed based on OC classification and recognition. The underlying CRTSEM is built by incorporating the advantages of the Gaussian Mixture Model (GMM) and the Fuzzy Comprehensive Evaluation Model (FCEM): independent state models are established for every state characteristic parameter according to their distribution types (i.e. the Gaussian distribution and the logistic regression distribution), while the Analytic Hierarchy Process (AHP) is utilized to calculate the weights of the state characteristic parameters. OC classification is then determined by the types of oil delivery tasks, and CRTSEMs of different standard OCs are built to constitute the CRTSEM matrix. On the other hand, OC recognition is realized by a self-organization model established on the basis of a Back Propagation (BP) network. After the self-organization CRTSEM is derived through integration, real-time monitoring data can be input for OC recognition. In the end, the current state of the pump unit can be evaluated using the right CRTSEM. The case study shows that the proposed self-organization CRTSEM provides reasonable and accurate state evaluation results for the pump unit, and the assumption that the switching of OCs influences the results of state evaluation is also verified.
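The AHP weighting step can be sketched as the normalized principal eigenvector of a pairwise-comparison matrix; the parameter names and Saaty-scale entries below are assumptions, not the paper's elicited judgements:

```python
import numpy as np

def ahp_weights(pairwise):
    """Derive AHP weights for state characteristic parameters as the
    normalized principal eigenvector of the pairwise-comparison matrix."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w.sum()          # normalization also fixes the sign

# Hypothetical comparisons for three pump-unit parameters
# (vibration vs temperature vs pressure) on the 1-9 Saaty scale:
pairwise = [[1.0, 3.0, 5.0],
            [1/3, 1.0, 2.0],
            [1/5, 1/2, 1.0]]
w = ahp_weights(pairwise)
print(w)  # vibration gets the largest weight
```

These weights would then combine the per-parameter state scores (from the Gaussian or logistic state models) into the comprehensive evaluation.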

  14. Do thoraco-lumbar spinal injuries classification systems exhibit lower inter- and intra-observer agreement than other fractures classifications?: A comparison using fractures of the trochanteric area of the proximal femur as contrast model.

    Urrutia, Julio; Zamora, Tomas; Klaber, Ianiv; Carmona, Maximiliano; Palma, Joaquin; Campos, Mauricio; Yurac, Ratko

    2016-04-01

    It has been postulated that the complex patterns of spinal injuries have prevented adequate agreement using thoraco-lumbar spinal injuries (TLSI) classifications; however, limb fracture classifications have also shown variable agreements. This study compared agreement using two TLSI classifications with agreement using two classifications of fractures of the trochanteric area of the proximal femur (FTAPF). Six evaluators classified the radiographs and computed tomography scans of 70 patients with acute TLSI using the Denis and the new AO Spine thoraco-lumbar injury classifications. Additionally, six evaluators classified the radiographs of 70 patients with FTAPF using the Tronzo and the AO schemes. Six weeks later, all cases were presented in a random sequence for repeat assessment. The Kappa coefficient (κ) was used to determine agreement. Inter-observer agreement: For TLSI, using the AOSpine classification, the mean κ was 0.62 (0.57-0.66) considering fracture types, and 0.55 (0.52-0.57) considering sub-types; using the Denis classification, κ was 0.62 (0.59-0.65). For FTAPF, with the AO scheme, the mean κ was 0.58 (0.54-0.63) considering fracture types and 0.31 (0.28-0.33) considering sub-types; for the Tronzo classification, κ was 0.54 (0.50-0.57). Intra-observer agreement: For TLSI, using the AOSpine scheme, the mean κ was 0.77 (0.72-0.83) considering fracture types, and 0.71 (0.67-0.76) considering sub-types; for the Denis classification, κ was 0.76 (0.71-0.81). For FTAPF, with the AO scheme, the mean κ was 0.75 (0.69-0.81) considering fracture types and 0.45 (0.39-0.51) considering sub-types; for the Tronzo classification, κ was 0.64 (0.58-0.70). Using the main types of AO classifications, inter- and intra-observer agreement of TLSI were comparable to agreement evaluating FTAPF; including sub-types, inter- and intra-observer agreement evaluating TLSI were significantly better than assessing FTAPF. Inter- and intra-observer agreements using the Denis
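The kappa coefficient used throughout this comparison can be computed directly from two raters' labels. A minimal sketch with made-up fracture-type labels (not the study's data):

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(r1) == len(r2)
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n   # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    cats = set(c1) | set(c2)
    pe = sum((c1[c] / n) * (c2[c] / n) for c in cats)  # expected by chance
    return (po - pe) / (1 - pe)

# Two raters classifying 10 fractures into hypothetical types A/B/C
rater1 = list("AABBCCABCA")
rater2 = list("AABBCCABCB")  # disagree on the last case only
print(round(cohens_kappa(rater1, rater2), 3))  # 0.851
```

Perfect agreement gives kappa = 1; agreement no better than chance gives kappa near 0, which is why the paper reports kappa rather than raw percent agreement.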

  15. Risk adjustment models for interhospital comparison of CS rates using Robson's ten group classification system and other socio-demographic and clinical variables.

    Colais, Paola; Fantini, Maria P; Fusco, Danilo; Carretta, Elisa; Stivanello, Elisa; Lenzi, Jacopo; Pieri, Giulia; Perucci, Carlo A

    2012-06-21

    Caesarean section (CS) rate is a quality of health care indicator frequently used at national and international level. The aim of this study was to assess whether adjustment for Robson's Ten Group Classification System (TGCS), and clinical and socio-demographic variables of the mother and the fetus, is necessary for inter-hospital comparisons of CS rates. The study population includes 64,423 deliveries in Emilia-Romagna between January 1, 2003 and December 31, 2004, classified according to the TGCS. Poisson regression was used to estimate crude and adjusted hospital relative risks of CS compared to a reference category. Analyses were carried out in the overall population and separately according to the Robson groups (groups I, II, III, IV and V-X combined). Adjusted relative risks (RR) of CS were estimated using two risk-adjustment models: the first (M1) including the TGCS group as the only adjustment factor; the second (M2) including, in addition, demographic and clinical confounders identified using a stepwise selection procedure. Percentage variations between crude and adjusted RRs by hospital were calculated to evaluate the confounding effect of covariates. The percentage variations from crude to adjusted RR proved to be similar in the M1 and M2 models. However, stratified analyses by Robson's classification groups showed that residual confounding for clinical and demographic variables was present in groups I (nulliparous, single, cephalic, ≥37 weeks, spontaneous labour), III (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, spontaneous labour) and IV (multiparous, excluding previous CS, single, cephalic, ≥37 weeks, induced or CS before labour), and to a minor extent in group II (nulliparous, single, cephalic, ≥37 weeks, induced or CS before labour). The TGCS classification is useful for inter-hospital comparison of CS rates, but
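The crude relative risks and the crude-to-adjusted percentage variation used in the confounding check can be illustrated with two small helpers; the counts below are invented, not the Emilia-Romagna data:

```python
def crude_rr(cs_h, n_h, cs_ref, n_ref):
    """Crude relative risk of CS at a hospital vs. the reference category."""
    return (cs_h / n_h) / (cs_ref / n_ref)

def pct_variation(crude, adjusted):
    """Percentage variation between crude and adjusted RR, the quantity
    used to gauge the confounding effect of the covariates."""
    return 100.0 * (adjusted - crude) / crude

rr = crude_rr(cs_h=120, n_h=400, cs_ref=250, n_ref=1000)
print(rr)                       # 0.30 / 0.25 = 1.2
print(pct_variation(rr, 1.08))  # adjustment moves the RR down by about 10%
```

A large percentage variation signals residual confounding, which is what the stratified analyses detected in Robson groups I, III and IV.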

  16. Pharmacological Classification and Activity Evaluation of Furan and Thiophene Amide Derivatives Applying Semi-Empirical ab initio Molecular Modeling Methods

    Leszek Bober

    2012-05-01

    Full Text Available Pharmacological and physicochemical classification of the furan and thiophene amide derivatives by multiple regression analysis and partial least squares (PLS), based on semi-empirical ab initio molecular modeling studies and high-performance liquid chromatography (HPLC) retention data, is proposed. Structural parameters obtained from the PCM (Polarizable Continuum Model) method and literature values of biological activity (antiproliferative activity against A431 cells, expressed as LD50) of the examined furan and thiophene derivatives were used to search for relationships. We tested how variable molecular modeling conditions, considered together with or without HPLC retention data, allow evaluation of the structural recognition of furan and thiophene derivatives with respect to their pharmacological properties.

  17. Inclusion of Neuropsychological Scores in Atrophy Models Improves Diagnostic Classification of Alzheimer’s Disease and Mild Cognitive Impairment

    Mohammed Goryawala

    2015-01-01

    Full Text Available Brain atrophy in mild cognitive impairment (MCI) and Alzheimer's disease (AD) is difficult to demarcate when assessing the progression of AD. This study presents a statistical framework based on MRI volumes and neuropsychological scores. A feature selection technique using backward stepwise linear regression together with linear discriminant analysis is designed to classify cognitively normal (CN) subjects, early MCI (EMCI), late MCI (LMCI), and AD subjects in an exhaustive two-group classification process. Results show a dominance of neuropsychological parameters such as MMSE and RAVLT. Cortical volumetric measures of the temporal, parietal, and cingulate regions are found to be significant classification factors. Moreover, an asymmetrical distribution of the volumetric measures across hemispheres is seen for CN versus EMCI and EMCI versus AD, showing dominance of the right hemisphere, whereas CN versus LMCI and EMCI versus LMCI show dominance of the left hemisphere. A 2-fold cross-validation showed average accuracies of 93.9%, 90.8%, and 94.5% for CN versus AD, CN versus LMCI, and EMCI versus AD, respectively. The accuracy for groups that are difficult to differentiate, such as EMCI versus LMCI, was 73.6%. With the inclusion of the neuropsychological scores, a significant improvement (24.59%) was obtained over using MRI measures alone.
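The discriminant step can be sketched with a from-scratch two-class Fisher LDA on synthetic stand-ins for (volumetric, neuropsychological) feature pairs; none of the numbers come from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, well-separated stand-ins for a "CN-like" and an "AD-like" group.
X0 = rng.normal([0.0, 0.0], 0.5, size=(50, 2))
X1 = rng.normal([2.0, 2.0], 0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

def lda_fit(X, y):
    """Fisher's linear discriminant for two classes."""
    m0, m1 = X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)  # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)                # discriminant direction
    c = w @ (m0 + m1) / 2                           # midpoint threshold
    return w, c

def lda_predict(X, w, c):
    return (X @ w > c).astype(int)

w, c = lda_fit(X, y)
acc = (lda_predict(X, w, c) == y).mean()
print(acc)  # near-perfect on these well-separated synthetic classes
```

In the paper this classifier is wrapped in backward stepwise feature selection and 2-fold cross-validation; the sketch shows only the discriminant itself.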

  18. Using Factor Mixture Models to Evaluate the Type A/B Classification of Alcohol Use Disorders in a Heterogeneous Treatment Sample

    Hildebrandt, Tom; Epstein, Elizabeth E.; Sysko, Robyn; Bux, Donald A.

    2017-01-01

    Background The type A/B classification model for alcohol use disorders (AUDs) has received considerable empirical support. However, few studies examine the underlying latent structure of this subtyping model, which has been challenged as a dichotomization of a single drinking severity dimension. Type B, relative to type A, alcoholics represent those with early age of onset, greater familial risk, and worse outcomes from alcohol use. Method We examined the latent structure of the type A/B model using categorical, dimensional, and factor mixture models in a mixed gender community treatment-seeking sample of adults with an AUD. Results Factor analytic models identified 2-factors (drinking severity/externalizing psychopathology and internalizing psychopathology) underlying the type A/B indicators. A factor mixture model with 2-dimensions and 3-classes emerged as the best overall fitting model. The classes reflected a type A class and two type B classes (B1 and B2) that differed on the respective level of drinking severity/externalizing pathology and internalizing pathology. Type B1 had a greater prevalence of women and more internalizing pathology and B2 had a greater prevalence of men and more drinking severity/externalizing pathology. The 2-factor, 3-class model also exhibited predictive validity by explaining significant variance in 12-month drinking and drug use outcomes. Conclusions The model identified in the current study may provide a basis for examining different sources of heterogeneity in the course and outcome of AUDs. PMID:28247423

  19. Development of Classification Models for Identifying “True” P-glycoprotein (P-gp) Inhibitors Through Inhibition, ATPase Activation and Monolayer Efflux Assays

    Anna Maria Bianucci

    2012-06-01

    Full Text Available P-glycoprotein (P-gp) is an efflux pump involved in the protection of the tissues of several organs by influencing xenobiotic disposition. P-gp plays a key role in multidrug resistance and in the progression of many neurodegenerative diseases. The development of new and more effective therapeutics targeting P-gp thus represents an intriguing challenge in drug discovery. P-gp inhibition may be considered a valid approach to improve drug bioavailability as well as to overcome drug resistance in many kinds of tumours characterized by over-expression of this protein. This study aims to develop classification models from a unique dataset of 59 compounds for which there are homogeneous experimental data on P-gp inhibition, ATPase activation and monolayer efflux. For each experiment, the dataset was split into a training and a test set comprising 39 and 20 molecules, respectively. Rational splitting was accomplished using a sphere-exclusion type algorithm. After a two-step (internal/external) validation, the best-performing classification models were used in a consensus prediction task for the identification of compounds named “true” P-gp inhibitors, i.e., molecules able to inhibit P-gp without being effluxed by P-gp itself and simultaneously unable to activate the ATPase function.
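A sphere-exclusion type split of the kind mentioned can be sketched greedily: each compound either seeds a new exclusion sphere (training set) or falls inside an existing sphere (test set). The 1-D "descriptor" values and the radius below are purely illustrative:

```python
import math

def sphere_exclusion_split(points, radius):
    """Greedy sphere exclusion: a point inside an existing sphere goes to
    the test set; otherwise it joins the training set and seeds a sphere."""
    train, test = [], []
    for i, p in enumerate(points):
        if any(math.dist(p, points[j]) < radius for j in train):
            test.append(i)
        else:
            train.append(i)
    return train, test

# Five hypothetical compounds described by a single descriptor value
pts = [(0.0,), (0.1,), (1.0,), (1.05,), (2.5,)]
train, test = sphere_exclusion_split(pts, radius=0.5)
print(train, test)  # [0, 2, 4] [1, 3]
```

This guarantees the training set covers descriptor space while every test compound lies close to some training compound, which is the rationale for "rational splitting".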

  20. [Landscape classification: research progress and development trend].

    Liang, Fa-Chao; Liu, Li-Ming

    2011-06-01

    Landscape classification is the basis of the researches on landscape structure, process, and function, and also, the prerequisite for landscape evaluation, planning, protection, and management, directly affecting the precision and practicability of landscape research. This paper reviewed the research progress on the landscape classification system, theory, and methodology, and summarized the key problems and deficiencies of current researches. Some major landscape classification systems, e.g., LANMAP and MUFIC, were introduced and discussed. It was suggested that a qualitative and quantitative comprehensive classification based on the ideology of functional structure shape and on the integral consideration of landscape classification utility, landscape function, landscape structure, physiogeographical factors, and human disturbance intensity should be the major research directions in the future. The integration of mapping, 3S technology, quantitative mathematics modeling, computer artificial intelligence, and professional knowledge to enhance the precision of landscape classification would be the key issues and the development trend in the researches of landscape classification.

  1. Automatic white blood cell classification using pre-trained deep learning models: ResNet and Inception

    Habibzadeh, Mehdi; Jannesari, Mahboobeh; Rezaei, Zahra; Baharvand, Hossein; Totonchi, Mehdi

    2018-04-01

    This work gives an account of the evaluation of white blood cell differential counts via a computer-aided diagnosis (CAD) system and hematology rules. Leukocytes, also called white blood cells (WBCs), play the main role in the immune system. Leukocytes are responsible for phagocytosis and immunity, and therefore for defense against infection, with implications for the incidence of and mortality from fatal diseases. Admittedly, microscopic examination of blood samples is a time-consuming, expensive and error-prone task. A manual diagnosis searches for specific leukocytes and count abnormalities in the blood slides while a complete blood count (CBC) examination is performed. Complications may arise from the large number of varying samples, including different types of leukocytes, related sub-types and concentrations in blood, which makes the analysis prone to human error. This process can be automated by computerized techniques, which are more reliable and economical. In essence, we seek a fast, accurate mechanism for classification and for gathering information about the distribution of white blood evidence, which may help to diagnose the degree of any abnormality during a CBC test. In this work, we consider the problem of pre-processing and supervised classification of white blood cells into their four primary types, including neutrophils, eosinophils, lymphocytes, and monocytes, using a proposed consecutive deep learning framework. In the first step, this research proposes three consecutive pre-processing calculations, namely color distortion, bounding box distortion (crop) and image flipping (mirroring). In the second phase, white blood cell recognition is performed with hierarchical topological feature extraction using Inception and ResNet architectures.
Finally, the results obtained from the preliminary analysis of cell classification with 11,200 training samples and a 1,244 white blood cell evaluation data set are presented in confusion matrices and interpreted using accuracy rate, and false
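The three pre-processing operations can be sketched with NumPy; the jitter range, crop ratio, and flip probability below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def augment(img, rng):
    """One random pre-processing pass: brightness jitter (a simple stand-in
    for color distortion), random crop, and horizontal mirroring."""
    # brightness jitter on a [0, 1] float image
    img = np.clip(img * rng.uniform(0.8, 1.2), 0.0, 1.0)
    # random crop to 7/8 of each spatial dimension
    h, w = img.shape[:2]
    ch, cw = (7 * h) // 8, (7 * w) // 8
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    img = img[y0:y0 + ch, x0:x0 + cw]
    # horizontal mirroring with probability 0.5
    if rng.random() < 0.5:
        img = img[:, ::-1]
    return img

rng = np.random.default_rng(42)
cell = rng.random((64, 64, 3))  # stand-in for a stained WBC image
out = augment(cell, rng)
print(out.shape)  # (56, 56, 3)
```

The augmented crops would then be fed to the pretrained Inception/ResNet feature extractors described in the abstract.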

  2. The clinical inadequacy of the DSM-5 classification of somatic symptom and related disorders: an alternative trans-diagnostic model.

    Cosci, Fiammetta; Fava, Giovanni A

    2016-08-01

    The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) somatic symptom and related disorders chapter has limited clinical utility. In addition to the problems that the single diagnostic rubrics and the deletion of the diagnosis of hypochondriasis entail, there are 2 major ambiguities: (1) the use of the term "somatic symptoms" reflects an ill-defined concept of somatization and (2) abnormal illness behavior is included in all diagnostic rubrics, but it is never conceptually defined. In the present review of the literature, we attempt to approach the clinical issue from a different angle, by introducing the trans-diagnostic viewpoint of illness behavior and proposing an alternative clinimetric classification system, based on the Diagnostic Criteria for Psychosomatic Research.

  3. Robust Seismic Normal Modes Computation in Radial Earth Models and A Novel Classification Based on Intersection Points of Waveguides

    Ye, J.; Shi, J.; De Hoop, M. V.

    2017-12-01

    We develop a robust algorithm to compute seismic normal modes in a spherically symmetric, non-rotating Earth. A well-known problem is the cross-contamination of modes near "intersections" of dispersion curves for separate waveguides. Our novel computational approach completely avoids artificial degeneracies by guaranteeing orthonormality among the eigenfunctions. We extend Wiggins' and Buland's work, and reformulate the Sturm-Liouville problem as a generalized eigenvalue problem with the Rayleigh-Ritz Galerkin method. A special projection operator incorporating the gravity terms proposed by de Hoop and a displacement/pressure formulation are utilized in the fluid outer core to project out the essential spectrum. Moreover, the weak variational form enables us to achieve high accuracy across the solid-fluid boundary, especially for Stoneley modes, which have exponentially decaying behavior. We also employ the mixed finite element technique to avoid spurious pressure modes arising from discretization schemes and a numerical inf-sup test is performed following Bathe's work. In addition, the self-gravitation terms are reformulated to avoid computations outside the Earth, thanks to the domain decomposition technique. Our package enables us to study the physical properties of intersection points of waveguides. According to Okal's classification theory, the group velocities should be continuous within a branch of the same mode family. However, we have found that there will be a small "bump" near intersection points, which is consistent with Miropol'sky's observation. In fact, we can loosely regard Earth's surface and the CMB as independent waveguides. For those modes that are far from the intersection points, their eigenfunctions are localized in the corresponding waveguides. However, those that are close to intersection points will have physical features of both waveguides, which means they cannot be classified in either family. Our results improve on Okal

  4. Featureless classification of light curves

    Kügler, S. D.; Gianniotis, N.; Polsterer, K. L.

    2015-08-01

    In the era of rapidly increasing amounts of time series data, classification of variable objects has become the main objective of time-domain astronomy. Classification of irregularly sampled time series is particularly difficult because the data cannot be represented naturally as a vector which can be directly fed into a classifier. In the literature, various statistical features serve as vector representations. In this work, we represent time series by a density model. The density model captures all the information available, including measurement errors. Hence, we view this model as a generalization of the static features which can be derived directly, e.g. as moments, from the density. Similarity between each pair of time series is quantified by the distance between their respective models. Classification is performed on the obtained distance matrix. In the numerical experiments, we use data from the OGLE (Optical Gravitational Lensing Experiment) and ASAS (All Sky Automated Survey) surveys and demonstrate that the proposed representation performs on par with the best currently used feature-based approaches. The density representation preserves all static information present in the observational data, in contrast to a less complete description by features. The density representation is an upper boundary in terms of information made available to the classifier. Consequently, the predictive power of the proposed classification depends only on the choice of similarity measure and classifier. Due to its principled nature, we advocate that this new approach of representing time series has potential in tasks beyond classification, e.g. unsupervised learning.
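The density representation and model-to-model distance can be sketched as follows; the Gaussian kernel, fixed bandwidth, and squared-L2 grid distance are assumptions for illustration, not the paper's exact choices:

```python
import numpy as np

def kde_on_grid(values, grid, bw=0.3):
    """Gaussian kernel density estimate of a sample, evaluated on a grid."""
    z = (grid[:, None] - values[None, :]) / bw
    return np.exp(-0.5 * z**2).sum(axis=1) / (len(values) * bw * np.sqrt(2 * np.pi))

rng = np.random.default_rng(1)
grid = np.linspace(-4.0, 6.0, 200)

# Synthetic stand-ins for irregularly sampled light curves (different
# sample counts): two similar objects near 0, one distinct object near 2.
series = [rng.normal(0, 0.5, 40), rng.normal(0, 0.5, 55), rng.normal(2, 0.5, 30)]
densities = [kde_on_grid(s, grid) for s in series]

# Pairwise distance matrix between the density models; a classifier
# (e.g. nearest neighbour) would operate on this matrix.
D = np.array([[((a - b) ** 2).sum() for b in densities] for a in densities])
print(D[0, 1] < D[0, 2])  # True: the two similar objects are closest
```

Note the representation is indifferent to how many (or how irregularly spaced) the samples are, which is the point of modelling the density rather than the raw vector.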

  5. Exhaustive Classification of the Invariant Solutions for a Specific Nonlinear Model Describing Near Planar and Marginally Long-Wave Unstable Interfaces for Phase Transition

    Ahangari, Fatemeh

    2018-05-01

    Problems of thermodynamic phase transition originate inherently in solidification, combustion and various other significant fields. If the transition region between two locally stable phases is adequately narrow, the dynamics can be modeled by an interface motion. This paper is devoted to an exhaustive analysis of the invariant solutions of a modified Kuramoto-Sivashinsky equation in two spatial and one temporal dimensions. This nonlinear partial differential equation asymptotically characterizes near-planar interfaces which are marginally long-wave unstable. For this purpose, by applying the classical symmetry method to this model, the classical symmetry operators are obtained. Moreover, the structure of the Lie algebra of symmetries is discussed and the optimal system of subalgebras, which yields the preliminary classification of group-invariant solutions, is constructed. Mainly, the Lie invariants corresponding to the infinitesimal symmetry generators as well as the associated similarity-reduced equations are also pointed out. Furthermore, the nonclassical symmetries of this nonlinear PDE are also comprehensively investigated.

  6. Emotions Classification for Arabic Tweets

    pc

    2018-03-05

    Mar 5, 2018 ... learning methods for referring to all areas of detecting, analyzing, and classifying ... In this paper, an adaptive model is proposed for emotions classification of ... WEKA data mining tool is used to implement this model and evaluate the ... defined using vector representation, storing a numerical. "importance" ...

  7. SAW Classification Algorithm for Chinese Text Classification

    Xiaoli Guo; Huiyu Sun; Tiehua Zhou; Ling Wang; Zhaoyang Qu; Jiannan Zang

    2015-01-01

    Considering the explosive growth of data, the increasing amount of text data places higher requirements on the performance of text categorization, requirements that existing classification methods cannot satisfy. Based on the study of existing text classification technology and semantics, this paper puts forward a Chinese-text-classification-oriented SAW (Structural Auxiliary Word) algorithm. The algorithm uses the special space effect of Chinese text where words...

  8. Hierarchical Spatio-Temporal Probabilistic Graphical Model with Multiple Feature Fusion for Binary Facial Attribute Classification in Real-World Face Videos.

    Demirkus, Meltem; Precup, Doina; Clark, James J; Arbel, Tal

    2016-06-01

    Recent literature shows that facial attributes, i.e., contextual facial information, can be beneficial for improving the performance of real-world applications, such as face verification, face recognition, and image search. Examples of face attributes include gender, skin color, facial hair, etc. How to robustly obtain these facial attributes (traits) is still an open problem, especially in the presence of the challenges of real-world environments: non-uniform illumination conditions, arbitrary occlusions, motion blur and background clutter. What makes this problem even more difficult is the enormous variability presented by the same subject, due to arbitrary face scales, head poses, and facial expressions. In this paper, we focus on the problem of facial trait classification in real-world face videos. We have developed a fully automatic hierarchical and probabilistic framework that models the collective set of frame class distributions and feature spatial information over a video sequence. The experiments are conducted on a large real-world face video database that we have collected, labelled and made publicly available. The proposed method is flexible enough to be applied to any facial classification problem. Experiments on a large, real-world video database McGillFaces [1] of 18,000 video frames reveal that the proposed framework outperforms alternative approaches, by up to 16.96 and 10.13%, for the facial attributes of gender and facial hair, respectively.

  9. Geomorphology Classification of Shandong Province Based on Digital Elevation Model in the 1 Arc-second Format of Shuttle Radar Topography Mission Data

    Fu, Jundong; Zhang, Guangcheng; Wang, Lei; Xia, Nuan

    2018-01-01

    Based on the digital elevation model in the 1 arc-second format of Shuttle Radar Topography Mission data, and using window analysis and mean-change-point analysis in geographic information system (GIS) technology, programmed with Python modules, the geomorphic elements of Shandong Province were automatically extracted and calculated. The mean-change-point analysis yields the best statistical window for quantitatively studying the relief amplitude of the area. According to the Chinese landscape classification standard, the landscape types of Shandong Province were divided into 8 types: low altitude plain, medium altitude plain, low altitude platform, medium altitude platform, low altitude hills, medium altitude hills, low relief mountain and medium relief mountain; their percentages of Shandong Province's total area are as follows: 12.72%, 0.01%, 36.38%, 0.24%, 17.26%, 15.64%, 11.1%, 6.65%. The resulting landforms are basically consistent with the overall terrain of Shandong Province, and the study can quantitatively and scientifically provide a reference for the classification of landforms in Shandong Province.
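The window-analysis step that feeds the mean-change-point test can be sketched with a brute-force sliding window over a toy DEM (the elevations are invented):

```python
import numpy as np

def relief_amplitude(dem, win):
    """Relief amplitude (max - min elevation) over every win x win window,
    the quantity whose window-size dependence the change-point test probes."""
    h, w = dem.shape
    out = np.zeros((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            block = dem[i:i + win, j:j + win]
            out[i, j] = block.max() - block.min()
    return out

# Toy 3x4 elevation grid (metres)
dem = np.array([
    [100, 102, 101, 130],
    [101, 103, 125, 140],
    [ 99, 104, 135, 150],
])
print(relief_amplitude(dem, 2))
```

In practice the window size is swept over a range and the mean-change-point analysis picks the size where mean relief amplitude changes regime.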

  10. Sample size and classification error for Bayesian change-point models with unlabelled sub-groups and incomplete follow-up.

    White, Simon R; Muniz-Terrera, Graciela; Matthews, Fiona E

    2018-05-01

    Many medical (and ecological) processes involve a change of shape, whereby one trajectory changes into another trajectory at a specific time point. There has been little investigation into the study design needed to investigate these models. We consider the class of fixed-effect change-point models with an underlying shape comprising two joined linear segments, also known as broken-stick models. We extend this model to include two sub-groups with different trajectories at the change-point, a change class and a no-change class, and also include a missingness model to account for individuals with incomplete follow-up. Through a simulation study, we consider the relationship of sample size to the estimates of the underlying shape, the existence of a change-point, and the classification error of sub-group labels. We use a Bayesian framework to account for the missing labels, and the analysis of each simulation is performed using standard Markov chain Monte Carlo techniques. Our simulation study is inspired by cognitive decline as measured by the Mini-Mental State Examination, where our extended model is appropriate due to the commonly observed mixture of individuals within studies who do or do not exhibit accelerated decline. We find that even for studies of modest size (n = 500, with 50 individuals observed past the change-point) in the fixed-effect setting, a change-point can be detected and reliably estimated across a range of observation errors.
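The broken-stick shape at the core of the model can be written down directly; the intercept, slopes, and change-point below are illustrative MMSE-like values, not estimates from the paper:

```python
import numpy as np

def broken_stick(t, b0, b1, delta, tau):
    """Two joined linear segments: slope b1 before the change-point tau,
    slope b1 + delta after it (delta = 0 recovers the no-change class)."""
    t = np.asarray(t, dtype=float)
    return b0 + b1 * t + delta * np.maximum(t - tau, 0.0)

# Hypothetical "accelerated decline" trajectory over 10 follow-up years
t = np.arange(0, 11)
y = broken_stick(t, b0=28.0, b1=-0.25, delta=-1.5, tau=6.0)
print(y[0], y[6], y[10])  # 28.0 26.5 19.5
```

A simulation study in the paper's spirit would add observation noise to such trajectories, mix change (delta != 0) and no-change (delta = 0) individuals, and drop late observations to mimic incomplete follow-up.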

  11. Effective Exchange Rate Classifications and Growth

    Justin M. Dubas; Byung-Joo Lee; Nelson C. Mark

    2005-01-01

    We propose an econometric procedure for obtaining de facto exchange rate regime classifications which we apply to study the relationship between exchange rate regimes and economic growth. Our classification method models the de jure regimes as outcomes of a multinomial logit choice problem conditional on the volatility of a country's effective exchange rate, a bilateral exchange rate and international reserves. An `effective' de facto exchange rate regime classification is then obtained by as...

  12. Application of Artificial Neural Network Models in Segmentation and Classification of Nodules in Breast Ultrasound Digital Images

    Karem D. Marcomini

    2016-01-01

    Full Text Available This research presents a methodology for the automatic detection and characterization of breast sonographic findings. We performed the tests on ultrasound images obtained from breast phantoms made of tissue-mimicking material. When the results were satisfactory, we applied the same techniques to clinical examinations. The process started with preprocessing (Wiener filter, equalization, and median filter) to minimize noise. Then, five segmentation techniques were investigated to determine the most concise representation of the lesion contour, leading us to consider the SOM neural network the most relevant. After the delimitation of the object, the most expressive features were defined for the morphological description of the finding, generating the input data for the neural Multilayer Perceptron (MLP) classifier. The accuracy achieved during training with simulated images was 94.2%, producing an AUC of 0.92. To evaluate data generalization, the classification was performed with a group of images unknown to the system, from both simulators and clinical trials, resulting in accuracies of 90% and 81%, respectively. The proposed classifier proved to be an important tool for diagnosis in breast ultrasound.

  13. Spatial Numeric Classification Model Suitability with Landuse Change in Sustainable Food Agriculture Zone in Kediri Sub-district, Tabanan Regency, Indonesia

    Trigunasih, N. M.; Lanya, I.; Hutauruk, J.; Arthagama, I. D. M.

    2017-12-01

    Rapid population growth makes the availability and utilization of land resources shrink steadily, and this especially affects rice fields. Over the last five years the amount of farmland has been decreasing due to industry, infrastructure development, tourism development and other services. The agricultural problem currently being faced is the conversion of agricultural land to non-agricultural use. According to the Central Bureau of Statistics (BPS) of the province of Bali (2013), within a period of 14 years (1999-2013) 4,906 hectares of agricultural land/wetland were converted to non-agricultural functions. On average, the conversion of paddy fields in Bali amounts to approximately 350 ha (0.41%) per year. The largest paddy-field conversion during these fourteen years occurred in Tabanan, with an area of 1,230 ha. To maintain the existence of the rice fields, and of the subak in Bali in particular, sustainable agricultural land needs to be protected. To date, the nine regencies/cities in Bali have no regional regulation (Perda) on the protection of sustainable food agricultural land as mandated by Law 41 of 2009. This will have an impact on the food security of the region and on the world cultural heritage, as the subak would lose its existence as the system of irrigation organization in Bali. The purpose of this research was to (1) determine the numerical classification of spatial parameters of sustainable food farmland in Kediri Sub-district, Tabanan Regency, and (2) determine the zoning model of sustainable food agricultural land that fits the years 2020, 2030 and 2040 in Kediri Sub-district, Tabanan Regency. The quantitative method used includes focus group discussions, spatial data development, geoprocessing analysis (spatial analysis and proximity analysis), and statistical analysis

  14. Data Modeling, Feature Extraction, and Classification of Magnetic and EMI Data, ESTCP Discrimination Study, Camp Sibert, AL. Demonstration Report

    2008-09-01

    Report excerpt (fragmentary): Figure 19 shows the misfit-versus-depth curve for the EM63 Pasion-Oldenburg model fit to anomaly 649, considering two cases, one using all the data. The work covers selection of optimal models and fitting of 2- and 3-dipole Pasion-Oldenburg models to the EM63 cued-interrogation data (Hart et al., 2001; Collins et al., 2001; Pasion & Oldenburg, 2001; Zhang et al., 2003a, 2003b; Billings, 2004).

  15. Strong decays of sc̄ mesons in the covariant oscillator quark model with the Ũ(4)_DS × O(3,1)_L classification scheme

    Maeda, Tomohito; Yamada, Kenji; Oda, Masuho; Ishida, Shin

    2010-01-01

    We investigate the strong decays, with one-pseudoscalar emission, of charmed strange mesons in the covariant oscillator quark model. The wave functions of composite sc̄ mesons are constructed as irreducible representations of Ũ(4)_DS × O(3,1)_L. Through the observed masses and the results of the decay study we discuss a novel assignment of the observed charmed strange mesons from the viewpoint of the Ũ(4)_DS × O(3,1)_L classification scheme. It is shown that D*_s0(2317) and D_s1(2460) are consistently explained as ground-state chiralons appearing in the Ũ(4)_DS × O(3,1)_L scheme. Furthermore, it is also found that the recently observed D*_s1(2710) could be described as a first-excited-state chiralon. (author)

  16. An Automatic Segmentation Method Combining an Active Contour Model and a Classification Technique for Detecting Polycomb-group Proteins in High-Throughput Microscopy Images.

    Gregoretti, Francesco; Cesarini, Elisa; Lanzuolo, Chiara; Oliva, Gennaro; Antonelli, Laura

    2016-01-01

    The large amount of data generated in biological experiments that rely on advanced microscopy can be handled only with automated image analysis. Most analyses require reliable cell image segmentation, ideally capable of detecting subcellular structures. We present an automatic segmentation method to detect Polycomb group (PcG) protein areas isolated from nuclei regions in high-resolution fluorescent cell image stacks. It combines two segmentation algorithms, based on an active contour model and a classification technique, and serves as a tool to better understand the subcellular three-dimensional distribution of PcG proteins in live cell image sequences. We obtained accurate results throughout several cell image datasets, coming from different cell types and corresponding to different fluorescent labels, without requiring elaborate adjustments to each dataset.
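The two-region idea behind an active-contour segmentation step of this kind can be sketched in a few lines: pixels are iteratively reassigned to foreground or background according to which region mean they are closer to (the Chan-Vese data term). This is a toy illustration on a small intensity grid, not the paper's actual method, and all names here are illustrative.

```python
# Minimal two-region (Chan-Vese style) segmentation sketch on a toy 2-D grid.
def two_region_segment(image, iterations=20):
    """Iteratively split pixels into fore/background by closeness to region means."""
    # Start from a global-mean threshold as the initial "contour".
    flat = [v for row in image for v in row]
    mask = [[v > sum(flat) / len(flat) for v in row] for row in image]
    h, w = len(image), len(image[0])

    for _ in range(iterations):
        fg = [image[i][j] for i in range(h) for j in range(w) if mask[i][j]]
        bg = [image[i][j] for i in range(h) for j in range(w) if not mask[i][j]]
        if not fg or not bg:
            break
        c1 = sum(fg) / len(fg)   # mean intensity inside the contour
        c2 = sum(bg) / len(bg)   # mean intensity outside
        # Reassign each pixel to the closer region mean (the Chan-Vese data term).
        new_mask = [[abs(v - c1) < abs(v - c2) for v in row] for row in image]
        if new_mask == mask:     # converged
            break
        mask = new_mask
    return mask

toy = [[0.1, 0.1, 0.9],
       [0.1, 0.8, 0.9],
       [0.2, 0.9, 1.0]]
seg = two_region_segment(toy)
print(seg[0][2], seg[0][0])   # bright pixel -> True, dark pixel -> False
```

Real pipelines add a smoothness (contour-length) term and, as in this work, a classification stage on top of the segmented regions.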

  17. Acute alteration of cardiac ECG, action potential, I{sub Kr} and the human ether-a-go-go-related gene (hERG) K{sup +} channel by PCB 126 and PCB 77

    Park, Mi-Hyeong; Park, Won Sun; Jo, Su-Hyun, E-mail: suhyunjo@kangwon.ac.kr

    2012-07-01

    Polychlorinated biphenyls (PCBs) are known as serious persistent organic pollutants (POPs), causing developmental delays and motor dysfunction. We have investigated the effects of two PCB congeners, 3,3′,4,4′-tetrachlorobiphenyl (PCB 77) and 3,3′,4,4′,5-pentachlorobiphenyl (PCB 126), on the ECG, action potential, and rapidly activating delayed rectifier K{sup +} current (I{sub Kr}) of guinea pig hearts, and on the hERG K{sup +} current expressed in Xenopus oocytes. PCB 126 shortened the corrected QT interval (QTc) of the ECG and decreased the action potential duration at 90% (APD{sub 90}) and 50% (APD{sub 50}) of repolarization (P < 0.05) without changing the action potential duration at 20% (APD{sub 20}). PCB 77 decreased APD{sub 20} (P < 0.05) without affecting QTc, APD{sub 90}, or APD{sub 50}. PCB 126 increased I{sub Kr} in guinea-pig ventricular myocytes held at 36 °C and the hERG K{sup +} current amplitude at the end of the voltage steps in voltage-dependent mode (P < 0.05); PCB 77, however, did not change the hERG K{sup +} current amplitude. PCB 77 increased diastolic Ca{sup 2+} and decreased the Ca{sup 2+} transient amplitude (P < 0.05), whereas PCB 126 did not. The results suggest that PCB 126 shortened the QTc and decreased APD{sub 90} possibly by increasing I{sub Kr}, while PCB 77 decreased APD{sub 20} possibly through other mechanisms related to intracellular Ca{sup 2+}. The present data indicate that these environmental toxicants, PCBs, can acutely affect cardiac electrophysiology, including the ECG, action potential, intracellular Ca{sup 2+}, and channel activity, resulting in toxic effects on cardiac function, especially in view of the possible accumulation of PCBs in the human body. -- Highlights: ► PCBs are known as serious environmental pollutants and developmental disruptors. ► PCB 126 shortened the QT interval of the ECG and the action potential duration. ► PCB 126 increased human ether-a-go-go-related K{sup +} current and I{sub Kr}.
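The corrected QT interval (QTc) mentioned above removes the dependence of QT on heart rate. A common correction is Bazett's formula, QTc = QT / sqrt(RR in seconds); the abstract does not state which correction the authors used, so this is only a standard illustration with made-up interval values.

```python
import math

def bazett_qtc(qt_ms, rr_ms):
    """Heart-rate-corrected QT interval via Bazett's formula.

    qt_ms: measured QT interval in milliseconds.
    rr_ms: interval between successive R waves in milliseconds.
    """
    return qt_ms / math.sqrt(rr_ms / 1000.0)

# Hypothetical example: QT of 300 ms at an RR interval of 400 ms
print(round(bazett_qtc(300, 400)))  # ~474 ms
```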

  18. Efficient computational model for classification of protein localization images using Extended Threshold Adjacency Statistics and Support Vector Machines.

    Tahir, Muhammad; Jan, Bismillah; Hayat, Maqsood; Shah, Shakir Ullah; Amin, Muhammad

    2018-04-01

    Discriminative and informative feature extraction is the core requirement for accurate and efficient classification of protein subcellular localization images so that drug development can be more effective. The objective of this paper is to propose a novel modification of the Threshold Adjacency Statistics technique that enhances its discriminative power. In this work, we utilized Threshold Adjacency Statistics from a novel perspective to enhance its discriminative power and efficiency. Specifically, we utilized seven threshold ranges to produce seven distinct feature spaces, which are then used to train seven SVMs. The final prediction is obtained through a majority voting scheme. The proposed ETAS-SubLoc system is tested on two benchmark datasets using the 5-fold cross-validation technique. We observed that our novel utilization of the TAS technique improved the discriminative power of the classifier. The ETAS-SubLoc system achieved 99.2% accuracy, 99.3% sensitivity and 99.1% specificity on the Endogenous dataset, outperforming the classical Threshold Adjacency Statistics technique. Similarly, 91.8% accuracy, 96.3% sensitivity and 91.6% specificity are achieved on the Transfected dataset. Simulation results validated the effectiveness of ETAS-SubLoc, which provides superior prediction performance compared to the existing technique. The proposed methodology aims at providing support to the pharmaceutical industry as well as the research community toward better drug design and innovation in the fields of bioinformatics and computational biology. The implementation code for replicating the experiments presented in this paper is available at: https://drive.google.com/file/d/0B7IyGPObWbSqRTRMcXI2bG5CZWs/view?usp=sharing. Copyright © 2018 Elsevier B.V. All rights reserved.
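The two ingredients the abstract combines can be sketched compactly: threshold-adjacency-style features computed over several threshold ranges, and a majority vote across the resulting per-threshold classifiers. The real ETAS-SubLoc system trains an SVM per feature space; the trivial rule below only stands in for that step, and the threshold ranges, image, and class labels are invented for illustration.

```python
# Sketch: multi-threshold adjacency features + majority voting (not the real ETAS-SubLoc).
from collections import Counter

def tas_like_features(image, lo, hi):
    """Binarize with a threshold range [lo, hi], then count in-range pixels and
    how many of them have an in-range 4-neighbour (a crude adjacency statistic)."""
    h, w = len(image), len(image[0])
    mask = [[lo <= image[i][j] <= hi for j in range(w)] for i in range(h)]
    on = sum(row.count(True) for row in mask)
    adjacent = sum(
        1 for i in range(h) for j in range(w)
        if mask[i][j] and any(
            0 <= i + di < h and 0 <= j + dj < w and mask[i + di][j + dj]
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))))
    return on, adjacent

def majority_vote(predictions):
    """Final label = most common label across the per-threshold classifiers."""
    return Counter(predictions).most_common(1)[0][0]

# Seven hypothetical threshold ranges -> seven feature vectors -> seven votes.
ranges = [(t, t + 60) for t in range(0, 140, 20)]
image = [[10, 50, 200], [40, 220, 210], [30, 60, 230]]
# Stand-in classifier: call the pattern "punctate" if the adjacency count is high.
votes = ["punctate" if tas_like_features(image, lo, hi)[1] >= 2 else "diffuse"
         for lo, hi in ranges]
print(majority_vote(votes))  # "diffuse" wins 5 votes to 2 on this toy image
```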

  19. Asteroid taxonomic classifications

    Tholen, D.J.

    1989-01-01

    This paper reports on three taxonomic classification schemes developed and applied to the body of available color and albedo data. Asteroid taxonomic classifications according to two of these schemes are reproduced

  20. 4D-Fingerprint Categorical QSAR Models for Skin Sensitization Based on Classification Local Lymph Node Assay Measures

    Li, Yi; Tseng, Yufeng J.; Pan, Dahua; Liu, Jianzhong; Kern, Petra S.; Gerberick, G. Frank; Hopfinger, Anton J.

    2008-01-01

    Currently, the only validated methods to identify skin sensitization effects are in vivo models, such as the Local Lymph Node Assay (LLNA) and guinea pig studies. There is a tremendous need, in particular due to novel legislation, to develop animal alternatives, e.g., Quantitative Structure-Activity Relationship (QSAR) models. Here, QSAR models for skin sensitization using LLNA data have been constructed. The descriptors used to generate these models are derived from the 4D-molecular similarity paradigm and are referred to as universal 4D-fingerprints. A training set of 132 structurally diverse compounds and a test set of 15 structurally diverse compounds were used in this study. The statistical methodologies used to build the models are logistic regression (LR) and partial least squares coupled logistic regression (PLS-LR), which prove to be effective tools for studying skin sensitization measures expressed in the two categorical terms of sensitizer and non-sensitizer. QSAR models with low values of the Hosmer-Lemeshow goodness-of-fit statistic, χ{sup 2}{sub HL}, are significant and predictive. For the training set, the cross-validated prediction accuracy of the logistic regression models ranges from 77.3% to 78.0%, while that of the PLS-logistic regression models ranges from 87.1% to 89.4%. For the test set, the prediction accuracy of the logistic regression models ranges from 80.0% to 86.7%, while that of the PLS-logistic regression models ranges from 73.3% to 80.0%. The QSAR models are made up of 4D-fingerprints related to aromatic atoms, hydrogen bond acceptors and negatively partially charged atoms. PMID:17226934
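The logistic-regression step used above reduces, for a binary sensitizer/non-sensitizer label, to fitting a sigmoid over a weighted sum of descriptors. This is a toy illustration trained by gradient descent on made-up two-feature data; the real models use 4D-fingerprint descriptors, not these invented values.

```python
# Toy logistic-regression classifier for a binary sensitizer (1) / non-sensitizer (0) label.
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit weights and bias by per-sample gradient descent on the log-loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))         # sigmoid probability
            err = p - yi                           # gradient of the log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Hypothetical descriptor values; 1 = sensitizer, 0 = non-sensitizer.
X = [[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]]
y = [1, 1, 0, 0]
w, b = train_logistic(X, y)
print([predict(w, b, xi) for xi in X])  # -> [1, 1, 0, 0]
```

The PLS-LR variant in the paper first projects the 4D-fingerprints onto partial-least-squares components and then fits the same kind of logistic model on those components.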